Stop comparing yourself to AI
PLUS: A preview of Untangled’s very first tiny book, and a workshop on managing conflict.
Hi, and welcome back to Untangled, a newsletter and podcast about technology, people, and power. This issue has a long preamble, but it's full of meaningful information, so grab your coffee, sit back, and drink in each word.
📖 Untangled’s first-ever tiny book
In August, I will launch the first-ever Untangled ‘tiny book,’ AI Untangled. If you want a sneak peek, you can download a snippet here:
In short, I updated my favorite essays on artificial intelligence and remixed them into common themes — frames, metaphors, complex systems, and alternative futures. Each chapter includes a nice li’l introduction, followed by the essays themselves. I am also including brand-new bonus content — an unreleased piece, and a surprise special offer. The PDF of the book will be free for paid subscribers. Sign up now so that you get an electronic copy as soon as it launches.
🤯 A Workshop on Managing Conflict
The Facilitation Leadership Lab’s workshop on Managing Conflict is live! Let’s just say, it’s more than a li’l ironic that I’m co-teaching a course on managing conflict. Ten years ago, I avoided conflict like the proverbial plague. I became skilled at maneuvering around it, but it never felt natural to address conflict directly. Over time, though, I learned the skills and frameworks to diagram conflict and navigate group dynamics to help teams and organizations realize their purpose.
In our 3-hour workshop, you’ll learn key frameworks to help you identify unhelpful patterns, separate positions from underlying interests, develop shared action agendas to address underlying challenges, challenge unproductive behaviors, and ultimately create processes and containers for productive conflict.
Now, I don’t seek out conflict these days, but I don’t mind it, and I don’t avoid it. If that change sounds like one you’d like to make, sign up for our workshop. If you’d like a sneak peek into one such framework we’ll draw on in the workshop, download this snippet of the Facilitation Leadership Lab Workbook.
💰WorldCoin Launched
WorldCoin, a crypto project created by Sam Altman that purports to solve the problem of identity authentication and provide universal basic income, launched earlier this week. Here’s what I wrote a year ago in “Tech for …what, exactly?” about the pilot project.
The apparent mission of WorldCoin is to distribute cryptocurrency to everyone in the world. To achieve this, the company intends to use an orb loaded with cameras and sensors that scan our eyes, faces, and bodies — and then turn that information into a unique identifier, or ‘IrisHash.’ They’ve already done this with 450,000 people across 24 countries.
You might be wondering what biometrics have to do with UBI. Well, WorldCoin would argue that proving ‘human uniqueness’ while preserving privacy is an unresolved technical problem and that it’s critical to ensuring that every person has “not claimed their free share of WorldCoin already.” So the data capture part might be less about UBI, and more about fraud prevention. As WorldCoin CEO Alex Blania put it, “our goal is to use the data for the sole purpose of developing our algorithms to minimize fraud and enhance user privacy.”
WorldCoin’s focus on fraud and fairness might sound reasonable, as if unlocking UBI is impossible without first confronting fraud — but it’s not. Rather, it’s a turbocharged example of a ‘tech for good’ mindset and approach that frames the problem in such a way that launders the interests, expertise, and beliefs of technologists.
While I think WorldCoin is problematic for a number of reasons, it’s the ‘tech for good’ mindset that we must abolish from our brains. Here are a few questions I included in the piece to start us down that path:
What problem does the technology purport to solve and who defined that problem?
How does the way the problem is being framed shape our understanding of it?
What might the one framing the problem gain from solving it?
📻 Okay, last thing: I’ve included the audio edition of ‘Does generative AI have ‘emergent’ properties?’ at the end of this essay. Enjoy!
Now on to the show!
Have you come across a headline claiming that ChatGPT earned an MBA or passed the bar exam? Yeah, me too. Pick your favorite benchmark test — there is likely a headline suggesting ChatGPT passed it too. In this issue, I synthesize the work of scholars like Arvind Narayanan, Sayash Kapoor, Melanie Mitchell, Deborah Raji, Alex Hanna, Emily Bender, and others to…
Explain why a lot (!) more evidence is required before we might say that ChatGPT passed these tests.
Dig into the weeds on the problems with benchmarks more broadly and the important idea of ‘construct validity.’
Offer an alternative means to measure the performance and progress of generative AI systems.