Prior to the pandemic, I had begun work on a project considering Artificial Intelligence (AI) safety practices with reference to the development of nuclear power. At the time I was struck by how comparatively little work had been done on AI safety, given the potential downside risks. As Seth Baum observed in 2017, ‘the AI field is focused mainly on building AIs that are more capable, with little regard to social impacts.’ The discourse and thinking around AI safety still feel restricted, and in some ways quite thin. In a series of notes I plan to draw on this prior work to think through where we find ourselves with the arrival of ChatGPT and ‘the AI arms race’ it has ignited.
Ezra Klein has expressed deep unease at the speed at which AI-related technologies are moving. He writes:
Humanity needs to accelerate its adaptation to these technologies or a collective, enforceable decision must be made to slow the development of these technologies. Even doing both may not be enough.
What we cannot do is put these systems out of our mind, mistaking the feeling of normalcy for the fact of it.
In the same article, Klein references a post by Paul Christiano, who founded the Alignment Research Center after working at OpenAI. Christiano emphasises the pace of developments:
Catastrophically risky AI systems could plausibly exist soon, and there likely won’t be a strong consensus about this fact until such systems pose a meaningful existential risk per year. There is not necessarily any “fire alarm.”
The broader intellectual world seems to wildly overestimate how long it will take AI systems to go from “large impact on the world” to “unrecognizably transformed world.” This is more likely to be years than decades, and there’s a real chance that it’s months.
Extrapolating the rate of existing AI progress suggests you don’t get too much time between weak AI systems and very strong AI systems.
In another post, Christiano writes:
There is one decision I do strongly want to push for: AI developers should not develop and deploy systems with a significant risk of killing everyone.
It is remarkable that he feels the need to state this explicitly; it makes one wonder about the general state of AI discourse.
Much of the discussion around AI safety tends to focus on longer-term risks, including the possibility of general AI and superintelligence. This is understandable, given the considerable downside risks that come from getting that wrong. For more on this, Stuart Russell’s book, Human Compatible, is a good place to start. Yet AI doesn’t need to take over the world to pose a threat to it.
Without getting lost in speculation about possible dystopias, we can still consider which direction of travel from here is more likely. Shoshana Zuboff’s work on surveillance capitalism offers vital clues. She depicts a world radically different from the original utopian promise of the internet:
From the dawn of the public internet and the world wide web in the mid-1990s, the liberal democracies failed to construct a coherent political vision of a digital century that advances democratic values, principles, and government. This failure left a void where democracy should be, a void that was quickly filled and tenaciously defended by surveillance capitalism. A handful of companies evolved from tiny startups into trillion-dollar vertically integrated global surveillance empires thriving on an economic construct so novel and improbable, as to have escaped critical analysis for many years: the commodification of human behavior. These corporations and their ecosystems now constitute a sweeping political-economic institutional order that migrates across sectors and economies. The institutional order of surveillance capitalism is an information oligopoly upon which democratic and illiberal governments alike depend for population-scale extraction of human-generated data, computation and prediction.
Remember the early Facebook motto - one that became a mantra for Silicon Valley - ‘move fast and break things’. How has that worked out? Look at social media today, and at the documented behaviour of big tech: does this give you confidence? There is extensive evidence that social media is having profoundly negative consequences for politics and society, and it is easy enough to think of some simple fixes - identity verification, reducing post virality, changing algorithms - yet these problems are instead presented as unsolvable. Despite the non-trivial possibility that TikTok poses a security risk, there is limited appetite to ban it. Given these experiences, what is the likelihood of a robust regulatory regime for AI being developed? And if strong regulations are not in place, what is the likelihood of these technologies being rolled out at a speed that gives humanity a chance to adjust?
Returning to Zuboff:
surveillance capitalism now intermediates nearly all human engagement with digital architectures, information flows, products, and services, and nearly all roads to economic, political, and social participation lead through its institutional terrain. These conditions of practical and psychological “no exit” conjure the aura of inevitability that is both a key pillar of surveillance capitalism’s rhetorical structure and critical to all institutional reproduction.
This ‘aura of inevitability’ is again present in the way AI is being talked about. There is also a strong likelihood of a similar dynamic in which a handful of companies come to dominate. A new study in Science suggests that ‘industry increasingly dominates the three key ingredients of modern AI research: computing power, large datasets, and highly skilled researchers.’ Given the track record of big tech, and our collective experience of social media over the last decade, what is the likelihood that similar companies will roll out AI in a responsible and careful manner? Add the kitsch ‘Cold War 2.0’ frame now solidifying, and there is yet another rationale for ignoring caution and barrelling ahead.
Central to Zuboff’s account is that the model of surveillance capitalism which emerged was historically contingent; there were other paths that could have been taken. This is worth remembering now. She is also clear as to the stakes, describing our conditions as ‘a death match over the politics of knowledge in our information civilization.’ And this was written before ChatGPT had arrived on the scene. In this context, the 1980s movie usually referenced is Terminator, but perhaps it should be Highlander: ‘there can be only one’. Zuboff continues:
Surveillance capitalism’s intrinsically antidemocratic economic imperatives produce a zero-sum dynamic in which the deepening order of surveillance capitalism propagates democratic disorder and deinstitutionalization. Only one of these contesting orders will emerge with the authority and power to rule, while the other will drift into deinstitutionalization, its functions absorbed by the victor. Will these contradictions ultimately defeat surveillance capitalism, or will democracy suffer the greater injury? At stake is the social order of our information civilization: the many or the few?
Undoubtedly there is much that is genuinely unknowable about our present moment: perhaps the current wave of AI progress will crest, or perhaps it will advance at a speed we cannot comprehend. Regardless, development appears to have reached a stage where these technologies will have a major impact on societies already locked in a death match with surveillance capitalism.
Consider the AI arms race dynamics, add in heightening geopolitical rivalry, selection mechanisms and incentive structures that work against safety, the rise of digital oligopolies, the woeful track record of big tech, and the limited desire or capacity for serious regulation, sprinkle in some weirdo Silicon Valley millenarianism, and there seems little likelihood of AI-related technologies developing in ways that serve the best interests of people and societies. Best we buckle up for what looks like a bumpy ride.