
Dispatches from the present

Apocalypse Soon?

I have a friend who studied physics and mathematics at university, and then decided to train in medicine, on the grounds that this was a better way to help people and make best use of his life. Last year, though, he was persuaded to drop out of his medical degree to carry out research on pandemic prevention and artificial intelligence. A few weeks ago, I called him up to ask where he was and what exactly he was doing. He told me he was working in Berkeley, California and had decided there was a roughly ten percent chance of humanity going extinct in the next decade as a result of the emergence of artificial intelligence.

I paced in the kitchen as he described various scenarios of gradually escalating severity. AI could automate us out of jobs, causing widespread unemployment. It could be used to predict stock-market fluctuations better than any existing software, concentrating power in the hands of corporations with access to it. An authoritarian state could use it to control its citizens to an unprecedented degree. An AI programmed to translate texts might decide to replace every word in every document with the word “and” to make its job easier. It might decide that eliminating humans is the best way to maximize the production of paper clips.

I’m familiar with these scenarios from books like Superintelligence by Nick Bostrom, The Precipice by Toby Ord and What We Owe the Future by William MacAskill, which all warn of the risks of artificial intelligence. But it is easy to treat these works as intellectual exercises, to keep them separate from one’s wider skein of beliefs. It hits differently to be told by a friend that, in their considered judgment, the world is on a knife’s edge and that they are considering not saving for retirement; one’s stomach curls and constricts with the sudden somatic realization that probabilities describe possible worlds, worlds that may become concrete in our lifetimes.

When I asked how he had arrived at the tendentiously precise figure of ten percent, my friend said the number was a rough estimate based on conversations with AI researchers and his knowledge of what is going on in research labs and companies like OpenAI. Any estimate, he emphasized, is inevitably riddled with uncertainty. Personally, I find it hard to believe that AI poses a serious threat to the existence of humanity, or even that it’s a problem on the level of the climate crisis. I imagine this is partly due to wishful thinking, partly because I probably don’t fully grasp the situation, and partly due to a reflexive distaste for the zealous, apolitical blitheness of writers who focus on AI to the exclusion of virtually anything else. I am of a very different disposition from many of the people who worry about artificial intelligence. Eliezer Yudkowsky, an influential if outré figure who believes it’s likely AI will destroy the world before the average person can buy a self-driving car, has written a frankly awful retelling of Harry Potter entitled “Harry Potter and the Methods of Rationality.” I am disinclined to trust bad novelists.

I am also disinclined to trust those too easily enamored of the glamour of wealth. One of the most generous funders of research in AI risk was Sam Bankman-Fried, the tech billionaire whose cryptocurrency exchange FTX spectacularly imploded late last year. Although Bankman-Fried seems to have genuinely believed in the dangers of AI, it’s hard to avoid the sense that his concern was also a way to convince himself that he could do good and get rich—indeed, do good through getting rich. Influential figures like MacAskill happily sat on the boards of his charitable foundations and touted his ethical credentials. Such people come across as unbearably naïve not just about Bankman-Fried but also about the economic systems in which we are embedded. One wonders whether they are engaged in the project of persuading themselves of the dangers of artificial intelligence so as to submerge the urgency of more thoroughly political problems.

The recent revelation that Nick Bostrom, one of the earliest thinkers to warn about the risks of artificial intelligence, has expressed wildly and gormlessly racist views in the past only deepens the suspicion. Bostrom declared in an email written when he was a graduate student in the 1990s that he “liked” the sentence “Blacks are more stupid than whites” and thought it was “true” before quoting a racial slur. Although he has disavowed these views, that he expressed them in the first place indicates a worrying lack of awareness of the fact that words have meanings, and makes one wonder about the whole domain of Bostrom-adjacent thoughtspace. But the broader problem is with a style of thinking that prioritizes clobbering, coercive argument over subtlety, relegates the political to a separate sphere, and forgets certain very important facts about being human.

It would be easy, given all these psychic discomforts and tainted associations, to remain comfortably assured in my beliefs. But I don’t think this is right. Both common sense and philosophical research agree that you ought to adjust your beliefs in response to the views of others who are equally likely or more likely to be right on a subject. My friend knows what is going on in companies such as OpenAI and Google. I don’t. He knows what AI systems can do, how they work, how they might evolve in the near term. I don’t. Beyond this, I trust him. I trust him because I know him, because he has been proven right so many times in the past when I have been wrong, and because I have seen him grapple with uncertainty over a period of several years. I cannot easily attribute to him the biases I detect or suspect in others. So I think I ought to believe that there’s a higher chance of disaster or even extinction than I previously did.

Suppose, then, I believe that there’s a non-negligible chance of extinction and a fairly high chance of disaster. What do I do? What do we do? What does this entail for life here, now, as things are, for me as a single human being alive in the present? We already go about our ordinary lives shadowed by disaster. The climate crisis at least has the virtue of clarity: we know it’s happening and why. We also know how to do our bit, in however small a way. By contrast, artificial intelligence is deeply opaque. Confronted with the prospect of an AI apocalypse, it is immensely hard to know what you or I could do about it. One feels blank in the face of a flat, enormous possibility.

When overcome by skeptical doubts regarding the existence of the external world, David Hume went off to play backgammon. I went for a run.