The world is presently a strange mix of the old - building things and destroying them - and the new - technologies that replicate human thought and intelligence. A year ago I wrote a series of notes on ‘reckoning with AI’. To recap some conclusions:
Take the AI arms race dynamics, add in intensifying geopolitical rivalry, selection mechanisms and incentive structures that work against safety, the rise of digital oligopolies, the woeful track record of big tech, and the limited desire or capacity for serious regulation, sprinkle in some weirdo Silicon Valley millenarianism, and it seems there is little likelihood of AI-related technologies developing in ways that are in the best interests of people and societies.
It is unclear whether the current wave of AI will mostly offer increased capacity for aimless action, or whether it portends something more significant. Regardless, it appears highly likely that, without significant countervailing forces, these technological developments will reinforce and deepen the political, economic and societal distortions of surveillance capitalism.
Practices of extraction are central to mounting environmental and societal problems. The AI models now being rolled out replicate and reinforce this unsustainable approach to our world. At a time when we desperately need to find more sustainable ways of living together, it is difficult to see how further embedding extractive systems into our modes of living is going to help.
Nothing I have seen in the intervening period has led me to revise these judgments. Put most simply, given the track record of big tech, it is difficult to have much faith that AI will be rolled out in ways that are responsible and respectful towards people and societies. At the end of The Chaos Machine: The Inside Story of How Social Media Rewired Our Minds and Our World, Max Fisher observes:
Coercing the companies into regulating themselves is also an uncertain path. The social media giants, as currently constituted, may be simply unable to roll back their systems’ worst tendencies. Technically, it would be easy. But the cultural, ideological, and economic forces that led executives to create and supercharge those systems in the first place still apply.
Given that LLMs (large language models) are coming from the same people and places, and are motivated by the same incentives, why should we expect anything different in terms of their behaviour and levels of responsibility?
Recall Jacques Ellul from 1962:
‘Technical Progress is Always Ambiguous’
Let us consider these elements under the following four rubrics:
1.) All technical progress exacts a price;
2.) Technique raises more problems than it solves;
3.) Pernicious effects are inseparable from favorable effects; and
4.) Every technique implies unforeseeable effects.
When it comes to AI: is the price too high? Will the problems be too great? Will the pernicious effects be too severe? Will the unforeseeable effects be too consequential? Are these questions being seriously and honestly asked by enough people?
Consider Ezra Klein’s recent podcasts on AI (with Ethan Mollick, Nilay Patel, and Dario Amodei), in which Anthropic’s CEO channels Derek Zoolander:
But companies can try to make beneficial applications themselves, right? Like, this is why we’re working with cancer institutes. We’re hoping to partner with ministries of education in Africa, to see if we can use the models in kind of a positive way for education, rather than the way they may be used by default.
Partnering with ‘ministries of education in Africa’, excellent, a brave new world awaits.
Turning to Ethan Mollick’s Co-Intelligence: Living and Working With AI:
In study after study, the people who get the biggest boost from AI are those with the lowest initial ability—it turns poor performers into good performers. In writing tasks, bad writers become solid. In creativity tests, it boosts the least creative the most.
Levelling and middling: one need only look at the proliferation of AI-generated images on Substacks to get an inkling of what this portends. Steve Bannon’s ‘flood the zone with shit’ at scale.
In ‘The Life, Death—And Afterlife—of Literary Fiction’, Will Blythe reflects:
At times, the digital universe feels to me like the technological equivalent of a black hole, swallowing everything around it, including the un-digital idiosyncrasy of humans, to the point that we are unable to re-emerge from that hole into a freer, more open constellation.
Returning to Mollick, who cites this paper by Fabrizio Dell’Acqua, ‘Falling Asleep at the Wheel: Human/AI Collaboration in a Field Experiment on HR Recruiters’:
As AI performance improves, human overseers face greater incentives to delegate. If the AI appears too high quality, workers are at risk of “falling asleep at the wheel” and mindlessly following its recommendations without deliberation.
The experiment presented in this paper tests human/AI collaboration in a controlled environment and shows that AI assistance that is too precise leads humans to “fall asleep at the wheel”; becoming more reliant on AI and less engaged in their work efforts.
This extends beyond HR practices, however, and can be found on the battlefield:
“A human being had to [verify the target] for just a few seconds,” B. said, explaining that this became the protocol after realizing the Lavender system was “getting it right” most of the time. “At first, we did checks to ensure that the machine didn’t get confused. But at some point we relied on the automatic system, and we only checked that [the target] was a man — that was enough. It doesn’t take a long time to tell if someone has a male or a female voice.”
To conduct the male/female check, B. claimed that in the current war, “I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added value as a human, apart from being a stamp of approval. It saved a lot of time. If [the operative] came up in the automated mechanism, and I checked that he was a man, there would be permission to bomb him, subject to an examination of collateral damage.”
A world that cares less, in all senses.
High-res digital technologies that offer a low-res reality. Cory Doctorow:
The legitimate world looks so much like a scam that it’s much easier to make a scam look like the legit world.
I think again of Ted Chiang’s insightful New Yorker piece, ‘ChatGPT Is a Blurry JPEG of the Web’, in which he emphasises that as knowledge is continuously copied, compressed, and compiled, its quality is further degraded:
Repeatedly resaving a JPEG creates more compression artifacts, because more information is lost every time. It’s the digital equivalent of repeatedly making photocopies of photocopies in the old days. The image quality only gets worse.
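Chiang’s analogy is easy to make concrete. Below is a minimal sketch (my own illustration, not anything from his piece), assuming the Pillow imaging library and a local file named photo.jpg; re-encoding the same image a few dozen times makes the generational loss plain.

```python
# Illustrative only: repeatedly re-encoding a JPEG compounds the loss,
# because each save discards detail the previous save had already
# approximated -- the "photocopy of a photocopy" effect Chiang describes.
# Assumes Pillow is installed and a file named "photo.jpg" exists locally.
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")
for generation in range(50):
    img.save("copy.jpg", format="JPEG", quality=75)  # lossy re-encode
    img = Image.open("copy.jpg").convert("RGB")      # reload the degraded copy

# After fifty generations the artifacts are unmistakable, even though
# each individual save looked like a faithful copy of the one before it.
```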
There is a certain perversity in the way these developments, which suggest a thinning of skill and a shrinking of thought, are presented as trends to be celebrated. But that is where we collectively find ourselves: in a condition of ‘technological somnambulism’, to adopt Langdon Winner’s phrasing. His judgement from 1986 is one that holds today:
…the interesting puzzle in our times is that we so willingly sleepwalk through the process of reconstituting the conditions of human existence.