Power lies with those who frame the problem.
Read to the end for a sneak peek of the Untangled Workbook
Hi, it’s Charley, and this is Untangled, a newsletter about technology and power.
👇ICYMI
I explained how our misunderstanding of AI — the belief that it’s ‘objective,’ that it can ‘understand’ complex concepts and engage in reasoning, and so on — is becoming embedded in the scientific research process.
I explained why Google’s Gemini isn’t ‘woke.’ Rather, Google papered over a systemic problem and it backfired.
I published an essay about a new category of crypto project — Decentralized Physical Infrastructure Networks, or DePIN — and how everything is a Ponzi in crypto, until sometimes, it’s not.
🙋‍♂️New Release
One of my goals for Untangled this year is to make my writing as practically useful as possible. Which is challenging! The path from conceptual frameworks to individual actions isn’t exactly a straight line. So I was thrilled to give this a go by updating some of the ideas and concepts from Untangled Special Issues like “The Primer” and “Technically Social” and contributing them to the “Sociotech Impact” section of the “Inclusive Innovation Playbook” by Diversily. Please read it and check out the important work Diversily is doing to embed inclusion and equity in the practices and products of technologists.
On to the show!
Automation is a political problem
Does this narrative arc sound familiar?
A new technology pops up.
Nearly everyone warns of technological displacement, especially those with something to gain, like technology companies and consulting firms. They argue that jobs will be automated away, replaced by new jobs requiring new skills.
All that’s needed, then, is to train people for tomorrow’s jobs so they don’t get left behind.
Here’s the problem with this story: it individualizes a structural and political problem. A great new essay by Jason Ludwig, “Politics — Not Tech — Can Save Black Jobs from AI,” situates this important distinction in the context of Black workers and the history of labor policy in the U.S. Ludwig argues that, historically, attempts to train Black workers in tech skills are “inadequate and trap them in a race to the bottom to sell their labor.” He offers a case study of the Manpower Development and Training Act (MDTA), signed into law by President John F. Kennedy in 1962.
The MDTA trained workers whose jobs were at risk of being automated away by new technologies. As the civil rights movement gained momentum, the push to “upgrade” the skills of Black workers became tied to racial equity in an automated society. But as Ludwig notes, these training programs never imagined anything more for Black workers than “second-class or subordinate roles.” As Willard Wirtz, the Secretary of Labor at the time, said, workers trained in data processing and basic computer skills could become “skilled subprofessionals who can relieve highly trained engineers and computer specialists from routine duties.” There was a ceiling on the progress of Black workers from the jump, resulting in a “racialized hierarchy of technological work.”
It certainly didn’t help that the training programs often “reinforced pejorative notions of Black cultural pathology that located the cause of racial inequality in individual behavior.” Right: the programs didn’t just teach specific technical skills; they also preached “self-reliance, temperance, thrift, and moral purity,” according to Ludwig. But even if these training programs hadn’t been rooted in paternalism and prejudice, and had indeed built new technical capacities, Ludwig’s point would still stand: focusing on individual improvement might be necessary, but it’s woefully insufficient to address a structural problem. As he writes:
Retraining workers can be an important temporary measure. But it will only ever be a stop-gap solution to the broader political-economic problems underlying technological change. Presenting it as the only path forward distracts from finding more effective ways of securing a better future for those at the margins of American society, such as experimenting with universal basic income or with reimagining the welfare state for the 21st century. Anxiety over automation can pit working people against one another in a race to learn new skills. Instead, we should demand that technological innovation be put toward the business of human flourishing.
In short, we’ve got it all backward. We shouldn’t orient people and their jobs to a new technological development. We should orient new technologies to the goal of human flourishing.
💼 If you want to read more on the relationship between technology and labor, read “Why ‘Will AI take your job?’ is the wrong question.”
🗳️If you want to read about another way technology is political, read “Technologies encode politics.”
📒The Untangled Workbook
Power lies with those who frame the problem. Take the example above: an individualized problem statement will generate very different solutions than one that takes seriously the system the problem sits within. A problem seen through a purely technological lens will generate very different solutions than one that takes seriously the ways technical systems are entangled in social systems. And a technical problem (i.e., one where known solutions can be implemented with existing know-how) will generate very different solutions than an adaptive problem (i.e., one that requires behavioral or cultural change).
This is the starting point of my new workbook, creatively titled “Untangled: A Workbook.” The workbook will help you interrogate and identify the ‘sociotechnical challenge’ you’re puzzling through, and offer a range of mindset shifts, tools, and resources to align your actions and/or your organization’s actions to collective systems change.