Hi, it’s Charley, and this is Untangled, a newsletter about technology and power.
👇ICYMI
Last week, I explained why Google’s Gemini isn’t ‘woke.’ Rather, Google papered over a systemic problem and it backfired. For those cut off by the paywall, here are a few things you missed:
I argued that the Web as we know it is in real bad shape. Are we awash in synthetic content? Yep, but also: the business of search is being replaced by the business of a chatbot answer machine. This largely removes links from the picture, which undercuts the ad revenue of media companies that have become dependent on search. Not great.
I synthesized my favorite paper of the week, wherein researchers find that large language models embed dialect prejudice. This might not surprise loyal Untangled readers, but here’s the kicker: the prejudice is covert! The authors find “raciolinguistic stereotypes about speakers of African American English,” but when asked directly about their attitudes toward African Americans, the models respond with positive sentiments.
🙏 A favor
Before we get into it, I need a favor. When I started this in November 2021, I had roughly 50 subscribers. Now there are 6,100+ of you, which is exciting, humbling, and…a li’l weird. There are so many of you I don’t know! So please fill out this survey — it’s short, I promise — to tell me who you are and how I can ensure Untangled meets your needs.
This week I explain what would happen if we codified our current misunderstandings of AI into the scientific research process — and then I offer a take on the potential TikTok ban.
🪄Illusions of understanding
Imagine, for a moment, if all of the ways we misunderstand AI — that it’s objective, that it’s ‘hallucinating,’ that it can ‘understand’ complex concepts, etc. — became embedded in the scientific process. Turns out, we don’t need to imagine anything. This is already taking place, and it’s the subject of the great new paper, “Artificial intelligence and illusions of understanding in scientific research” by Lisa Messeri and Molly Crockett. The authors outline four ‘visions’ many scientists seem to hold:
AI as Oracle or the belief that because AI will exceed human capabilities, we should use them when designing a new study or generating new hypotheses.
AI as Surrogate or the idea that because data collection is complex and costly, we should pose interview questions to AI instead of human participants and use the synthetic data it generates.
AI as Quant or the vision that ‘Hey, data are too big these days to ask a human to curate or analyze them, so let’s ask AI to do that.’ Some scientists even want to use AI to annotate and “ascribe meaning to text, images, and qualitative data,” according to the authors.
AI as Arbiter or the belief that peer review — the time-consuming and subjective research review process — can be turned into a speedy objective meritocracy with AI.
The authors reject the presumption of objectivity built into these visions wholesale:
Understanding the inherently social nature of these epistemic risks underscores the inadequacy of purely technical solutions for addressing them. Illusions of understanding that arise from an overreliance on AI in science cannot be overcome by using more sophisticated AI models or by preventing errors such as hallucinations. Rather, they require sociotechnical approaches that account for the inseparability of social and technical dynamics. In other words, because scientific research is a fundamentally social process, evaluating the epistemic risks of AI for science requires not only technical assessments, but also an understanding of the social and cognitive processes through which scientists extend epistemic trust, decide what research questions to pursue and interpret the results of experiments.
Right, science is a social process whose problems can’t be solved by AI tools, which are themselves embedded in social phenomena. If we fail to grasp this and instead subscribe — like many scientists already do! — to the visions above, we’re in a lot of trouble. Messeri and Crockett explain how this setup is likely to encourage two problematic monocultures:
Monocultures of knowing or “prioritizing the kinds of questions and methods that are best suited for AI assistance.” That would 100% reduce the diversity of the questions researchers ask.
Monocultures of knowers or “prioritizing the types of standpoint that AI is able to express.” That would reduce the diversity of perspectives included in the research process.
In short, a reliance on the visions above risks homogenizing the questions and viewpoints perceived to be scientifically valid. As Messeri and Crockett conclude:
Visions of AI for science invite an illusion of objectivity, in which scientists falsely believe that AI tools either eliminate all standpoints (in the case of Oracles and Arbiters) or are capable of representing everyone (as desired for Surrogates). By reasserting the fantasy of a single kind of knower masked as neutral and universal (but actually reflecting the standpoints of the AI tool builders), visions of ‘objective’ AI tools for science retreat from recent progress in recognizing the necessity of diverse standpoints for the scientific project.
👉 If you want to dig into how science is a social process, read “The problem of Elon Musk’s ‘first principles thinking.’”
🚫 My take on the potential TikTok ban