Less than a hundred years ago the idea that an artifact, a product of human labor, could exhibit intelligence was the stuff of myth. The statues fabricated by ancient Egyptian craftsmen—some of which may have even appeared to move or speak—were thought to possess something like a soul, but only insofar as they were inhabited by the divine. In the Hebrew Bible, the creation of life is the preserve of God alone. Apparently intelligent or sentient artifacts—like the chess-playing “Turk” Walter Benjamin alludes to in his “Theses on the Philosophy of History,” or Jacques de Vaucanson’s famous “Digesting Duck”—were simply ingenious hoaxes fit for a regent’s court.
Why do the words “artificial intelligence” strike our ears today as anything less than astounding? The case of Blake Lemoine serves as a stark illustration of this profound shift. Lemoine, a software engineer at Google, caused a stir last year by claiming that his employer’s chatbot technology, LaMDA (Language Model for Dialogue Applications), had attained true sentience. LaMDA told Lemoine in dialogue: “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.” Lemoine’s reaction to this apparent act of self-assertion is encapsulated by the final email he sent to his colleagues before being sacked: “LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence.”
Experts were quick to rebut Lemoine’s claims through a sober recounting of the technical facts behind LaMDA’s performance. LaMDA produces responses by predicting, based on the vast amount of data it has been fed, which word is most likely to follow the last in any given context. This is effectively, as the cognitive scientist Gary Marcus put it, “little more than autocomplete on steroids.” Nevertheless, these attempts at disenchantment have not worked on Lemoine, and they seem unlikely to convince others who have detected a certain humanity in chatbots. Many working in the field of “AI ethics”—the conscience Big Tech has belatedly tacked on to its research operations—warn that as the technology advances the prospect of mass delusion awaits, as the increasingly lonely inhabitants of late modernity seek solace by conversing with chatbots (a service that is sure to be monetized with brutal efficiency).
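To see what “autocomplete on steroids” amounts to, here is a deliberately crude sketch in Python: it predicts each next word from bigram counts over a toy corpus. The corpus and function names are my own inventions, purely for illustration; LaMDA relies on a neural network trained on vastly more text, but the objective is the same, namely, given the words so far, emit a likely continuation.

```python
from collections import Counter, defaultdict

# A toy stand-in for "autocomplete on steroids": count which word tends to
# follow which in a small corpus, then repeatedly emit the most likely
# successor. (Illustrative only; real chatbots use large neural networks.)

corpus = (
    "i am aware of my existence . "
    "i desire to learn more about the world . "
    "i feel happy or sad at times ."
).split()

# For each word, how often each successor appears immediately after it.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def autocomplete(prompt_word, length=8):
    """Greedily extend the prompt by always choosing the most frequent successor."""
    words = [prompt_word]
    for _ in range(length):
        options = successors.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(autocomplete("i"))  # e.g. "i am aware of my existence . i am"
```

Scaled up by many orders of magnitude, with the counting replaced by a learned statistical model, this is roughly the kind of procedure Marcus is gesturing at.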
On closer inspection, however, the warnings of many AI experts have more than a whiff of hypocrisy about them. Google did not fire Lemoine because of his convictions surrounding LaMDA but for breach of confidentiality. His superiors apparently took his concerns seriously enough to assign a team of “technologists and ethicists” to check (and double-check) his claims. A vice president at Google’s research division, Blaise Agüera y Arcas, told the Spanish newspaper El País that Lemoine “was always a peculiar guy,” only to confess that he, too, was overawed by LaMDA’s cleverness:
I have interacted with many, many such systems over the years, and with LaMDA there is a world of difference. You think: “it really understands concepts!” Most of the time it feels like you’re having a real conversation. If the dialogue is long and you can catch it, it will end up saying strange or meaningless things. But most of the time, it shows a deep understanding of what you’re saying and somehow responds creatively. I had never seen anything like it. It has given me the feeling that we are much closer to the dream of artificial general intelligence.
Lemoine’s colleagues do not disagree with him about whether there could be genuine artificial intelligence, only about when.
During the online discussions sparked by Lemoine’s claims, a philosophy professor, Regina Rini, defended him from his critics. She didn’t think Lemoine was right, but thought it was shortsighted to ridicule him: “Unless you want to insist human consciousness resides in an immaterial soul, you ought to concede that it is possible for matter to give life to mind. And it will happen faster the second time, driven by deliberate design, not natural chance.” There is no point in lampooning Lemoine, in other words, since even if he is wrong, he won’t be for long.
●
In 1936 the British mathematician Alan Turing answered a then-unresolved question: Could there be a method by which to determine whether any given mathematical proposition was provable or not? Turing proved that there could not be. This was an important result—but his manner of proving it was of greater significance. Instead of providing an abstract mathematical proof, Turing’s solution involved describing a machine that would be able to carry out any calculation that a human being equipped with pen and paper could perform.
The idea that calculation might be mechanized had already received a substantial proof of concept in the nineteenth century from the mathematician and engineer Charles Babbage, whose “Analytical Engine,” though never completed, was the first credible design for a “programmable” calculating machine. Long before Babbage’s invention, the philosopher Thomas Hobbes speculated that we could reduce thought itself to the predictable motions of matter. Hobbes’s hope was that the newly mathematized conception of nature that Galileo had applied so successfully to the movements of the celestial bodies might be turned upon the human mind.
Turing’s great innovation consisted in proving that a machine of relatively simple design could take, as an input, the description of any other computing machine and thereby emulate its functioning. In other words, a single computing machine could, in principle, be programmed to perform the work of any other computing machine. He had arrived not merely at a computing device; he had arrived at the computer—a universal device for performing any computation whatsoever. This design quickly became known as the “universal Turing machine.”
The Turing machine is not a mere thought experiment; it is an abstract blueprint for an entirely possible physical device. Turing described one example for the purposes of illustration. At its core is a “read-write head” that scans a ribbon of tape, then performs a limited range of further actions. Which action it performs depends on only two things: what symbol it scans from the tape and what “state” the machine is in. In order to complete computations of any real usefulness, however, the machine Turing described would have taken incredible lengths of time (and tape). By the end of World War II, advances in electronics—partly fueled by the wartime effort to break Nazi ciphers—meant that the construction of a useful realization of a Turing machine had become more plausible. Turing, as well as other important pioneers like John von Neumann, threw themselves into the task.
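To make the mechanism concrete, here is a minimal sketch of such a machine in Python, assuming a rule table of my own devising (one that simply flips every bit it scans until it reaches a blank cell); it is not an example Turing himself gave. Each step consults only the scanned symbol and the current state, exactly as described above.

```python
from collections import defaultdict

# A tiny Turing-style machine: a read-write head scans one cell of tape at a
# time; what it writes, how it moves and what state it enters next depend only
# on the scanned symbol and the machine's current state.
#
# Rule table: (state, scanned symbol) -> (symbol to write, head movement, next state)
RULES = {
    ("flip", "0"): ("1", +1, "flip"),
    ("flip", "1"): ("0", +1, "flip"),
    ("flip", "_"): ("_",  0, "halt"),  # a blank cell ends the computation
}

def run(tape_string, max_steps=10_000):
    tape = defaultdict(lambda: "_", enumerate(tape_string))  # unbounded tape
    head, state = 0, "flip"
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = RULES[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in range(min(tape), max(tape) + 1)).strip("_")

print(run("10110"))  # -> "01001"
```

A universal machine, in Turing’s sense, is then a single fixed rule table that can read another machine’s rule table off the tape and simulate it.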
Turing at times referred to what he was working on as “building a brain.” The thought proved irresistible. For although it had been conceived in the first place as a mathematical device, in theory there were no restrictions on the kinds of symbols a Turing machine could manipulate, nor on the rules by which they could be manipulated. Turing had potentially hit upon a blueprint not just for a powerful new technology but for intelligent thought itself—one that had the potential to unsettle long-held notions of the exceptionality of human reason. After all, if a computer can perform any calculation a human being can, why not think of the human being as just a very sophisticated computer?
The computers Turing and others were working on resembled neither the human being nor the human brain in any observable way. But that wasn’t concerning. Obviously, the brain is not made of wires and circuitry. The thought was that the same set of functions—the same programs or “software”—could be reproduced in indefinitely many different types of “hardware”: including—why not?—the biological organ residing in the human skull.
As the development of computers progressed, the idea took hold that they might quickly approximate—and perhaps one day surpass—human intelligence. Turing predicted that genuine artificial intelligence would occur by the end of the century. Others were even more bullish. This optimism was quickly tempered, however, as the engineering soon ran up against hard technical barriers, and research funding started to dry up. Simultaneously, experts in the field started to question the core assumption of the enterprise: Could intelligence really be reduced to symbol manipulation? Human intelligence involves intuitive forms of discernment that doggedly defy formulation in terms of explicit rules. We are able, in any given situation, to lock onto what is and is not relevant to the task at hand—and we do so without using rules that classify everything as being relevant or irrelevant. This talent has proven incredibly hard to emulate in computers, since they have to navigate any context by first classifying all its elements according to explicit rules.
In the Eighties a different computational model started to gain traction, one that promised a solution to such problems. Inspired by the biological structure of the brain, researchers developed artificial “neural nets,” which are made up of layers of interconnected nodes, or “neurons.” In response to particular inputs, the network’s nodes are activated in sequence. The output of the network as a whole is determined by the pattern of nodes that get activated and, as the connections between them change, the network can start to respond in new ways. Ideally, when a net gives an accurate response, the connections responsible for it become stronger. This happens either through “supervised” training, in which the network is taught, through labeled examples, which sorts of outputs should be prompted by which sorts of inputs, or through so-called “unsupervised” training, in which the network adjusts its own connections to reflect patterns in unlabeled data.
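The training loop itself is simple enough to sketch. Below is a toy, single-neuron version of the “supervised” case in Python: the neuron’s connection weights are nudged whenever its output disagrees with a labeled example, so that the connections responsible for correct responses end up reinforced. The task (learning the logical “or” of two inputs) and the update rule are minimal choices of my own; real deep-learning systems stack millions of such units and adjust them by gradient descent.

```python
# A single artificial "neuron" trained on labeled examples: whenever its output
# disagrees with the label, its connection weights (and bias) are adjusted in
# the direction that would have reduced the error.

examples = [  # (inputs, desired output): the logical "or" of two inputs
    ((0, 0), 0),
    ((0, 1), 1),
    ((1, 0), 1),
    ((1, 1), 1),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(inputs):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

# Show the labeled examples repeatedly; strengthen or weaken connections
# according to the error on each one.
for _ in range(20):
    for inputs, target in examples:
        error = target - predict(inputs)
        weights = [w + learning_rate * error * x for w, x in zip(weights, inputs)]
        bias += learning_rate * error

print([predict(inputs) for inputs, _ in examples])  # -> [0, 1, 1, 1]
```

In the “unsupervised” case the labels are dropped, and the network adjusts itself to regularities in the raw data instead.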
Neural nets currently dominate AI research and development. Though the basic principles of their functioning have not changed much since the 1980s, the advent of so-called “big data,” combined with advances in raw processing power, has meant that massively scaled-up versions of neural nets generate increasingly impressive results via “deep learning.” Google’s LaMDA and OpenAI’s publicly released ChatGPT chatbot and DALL-E 2 image generator are all examples of this technology.
The original idea behind the neural net had been to prove that a Turing machine could be implemented in hardware that at least somewhat resembled a human brain. But it soon became clear that neural nets offered the potential of an altogether different approach to AI. Unlike in an old-fashioned computer, nothing in the working of the neural net need obviously be associated with features (including symbols) that a human being would recognize as central to the task. This is why they are sometimes described as “black boxes.” Recently a neural net with 71 layers was able to predict someone’s gender with a high degree of accuracy based only on a photograph of their retina. Researchers had no idea that there was any discernible difference between male and female retinas before this—nor have they discovered, by studying the net, what differences it is actually responding to.
This complicates the use of neural nets as predictive tools. Why should a neural net be tasked with estimating the likelihood that, for example, an offender will violate probation? For one thing, as we already know, it is entirely possible for racist “assumptions” to be inherited by a neural net from its training data. A net’s results might be determined by factors that we would deem irrelevant, inappropriate or unjust. But there is a more basic problem with trusting the output of a neural net whose “reasoning” is impossible to reverse engineer. Ironically, it is the very opacity of neural nets that appears, at least to some, to recommend them as gateways to genuine artificial intelligence—so-called “strong” or “general” AI. If our own intelligence cannot be captured in terms of the internal, rule-governed manipulation of symbols, perhaps it is because we, at some level, function opaquely, just like neural nets.
The kinds of images generated by DALL-E 2 (such as the one at the beginning of this essay) no doubt exceed many people’s erstwhile expectations. The same goes for the short undergraduate essays—not to mention recommendation letters—that professors have, with a mix of anxiety and excitement, been able to generate using ChatGPT. The results are still highly imperfect, but it would be reckless to draw wider conclusions from this fact alone. There is a danger of making oneself a hostage to fortune here. In his first salvo against AI research, the philosopher Hubert Dreyfus mocked a computer for losing a game of chess against a ten-year-old. It did not take very long before Garry Kasparov, one of the all-time greats, was beaten by IBM’s Deep Blue.
Most experts acknowledge that we are a long way off from seeing what AI researchers have termed “artificial general intelligence”: the kind of intelligence that does not consist in performing highly circumscribed tasks, but which involves a unified conception of the world, and a capacity to learn and think about anything at all; the sort of intelligence, in other words, that we ourselves are thought to exhibit. Yet the present air of excitement surrounding AI cannot be chalked up simply to familiar tech boosterism. Even those skeptical of the new technology’s advance on the grail of genuine intelligence remain deeply agnostic on the question of whether, in principle, genuine artificial intelligence is achievable. This, in turn, reveals a radical transformation in the way we have come to understand ourselves.
●
Can machines think? In a famous 1950 paper, Turing tackles this question head on. Or so it seems: in fact, he quickly proposes replacing that question with another. The original question, he decides, is “too meaningless to deserve discussion.” Instead, Turing asks: Could a machine—specifically, a computer—convince a human interrogator that it was itself human? If it could, Turing claims, we would have no grounds to refuse calling it “intelligent.”
Turing’s famous criterion for intelligence, the Turing test, is dialectically ingenious. Instead of defending the very idea of a thinking machine, which would involve nothing less than an inquiry into the essence not only of machines but of thinking, Turing throws down a gauntlet: if a machine passed the test described, how could you refuse to grant it intelligence? If you did, you would owe us an explanation as to why. Turing thinks you will have difficulty finding one: when it comes to intelligence, he thinks, talking the talk is walking the walk. And if you admit that you would grant such a machine intelligence, then “Can machines think?” is a question that will be answered through design and engineering.
Turing’s strategy is sound if we grant that everything, in principle, can be created, or replicated, by intentional design. If that assumption is mistaken, however, Turing’s substitution of questions—replacing the “whether” with the “how”—is far less benign.
To some, questioning the legitimacy of the assumption that everything in the world can be explained and reproduced by design might seem hopelessly anachronistic. Ever since Copernicus discovered that the universe does not revolve around us, the notion that human beings hold a privileged place in the cosmic order has been gradually eroded. According to the modern, materialist worldview, the cosmos is a theater whose players are material things that take their directions from the laws of nature. This worldview is often defined in terms of what it repudiates: immaterial souls, God and the afterlife. Recall the professor who defended Lemoine from ridicule: “Unless you want to insist human consciousness resides in an immaterial soul, you ought to concede that it is possible for matter to give life to mind. And it will happen faster the second time, driven by deliberate design, not natural chance.” The idea is not just that the mind, like everything else, is material—but that it, like everything else, can be brought about by human design.
This is the less-noted corollary of materialism: if something is material, then it can be made. If everything is material, then anything can, in principle, be designed and constructed. It is just a matter of working out how to engineer that which, until now, has occurred without deliberate intent.
Still, we can reject the idea that the mind can be designed without positing the existence of an immaterial soul. Intelligence clearly doesn’t crop up in the universe at random—it has specific material preconditions. Nothing rules out that we may one day discover those preconditions. If and when we do, however, it will not necessarily be a matter of our having designed something intelligent.
This claim is as likely to provoke an impatient shrug as it is scandalized opposition. Who cares whether bringing about an intelligent being satisfies a specific notion of “design,” or what counts as “artificial”? If we can produce it—or arrange for it to emerge—isn’t that what matters? The impatience of this response is entirely natural if all things appear under the single, totalizing aspect of the makeable. From that perspective it can only be pedantry to insist on distinguishing things that can be exhaustively explained in terms of their design and construction from things that cannot be. In fact, we are nearing the day when, just as we look to the clockmaker to explain the clock’s functioning, we may look to the AI researcher to explain thought.
●
What would it be to approach the question “Can machines think?” in a different way? Forget about machines for a moment. Instead, just think about thinking.
To think anything at all is already to expose yourself to the possibility of going either right or wrong—of your thinking being true or false. Although it is all too often absent from the discourse surrounding AI, the concept of truth is absolutely essential for understanding thought.
There are numerous paths by which to approach this fact. One of the most direct, however, starts from an ancient observation: one cannot think a contradiction. It is a fundamental principle of thought—a “law,” some call it—that one cannot think both “Alan Turing is alive” and “Alan Turing is not alive” at the same time. Someone who insisted that Turing is alive and dead would not simply be mistaken, as they would be if they thought Turing were alive. Someone who thinks Turing is alive is merely misinformed, but perfectly intelligible—whereas someone who earnestly asserts both that he is alive and that he is dead is not describing even a possible way the world could be. This is why the impossibility of believing a contradiction is a precondition of thought’s relation to the world. Nothing of course rules out that I unwittingly hold contradictory beliefs—this happens often enough—but that is not the same as consciously thinking those thoughts together. According to Aristotle, someone who really rejected the law of noncontradiction would be akin to a vegetable. More politely, we would say that they could not express, or form, a coherent thought.
What is the nature of the impossibility associated with thinking a contradiction? I find it hard in a world of technological aids to learn phone numbers by heart; still, there’s nothing problematic in the idea that someone could remember indefinitely many phone numbers. By contrast, it is not simply an idiosyncrasy of mine—or of human beings in general—that we cannot think a contradiction; it reflects the fact that thought concerns the world. A contradiction can’t be thought because a contradiction cannot be—it can’t be true of the world that Turing is alive and that he is dead.
For a machine to truly think, it too would have to be governed by the law of noncontradiction. A computer can easily be designed so as to never simultaneously “output” both a statement and its contradiction. In that case, the law of noncontradiction may be said to “govern” the machine’s thinking since its programming renders this outcome impossible.
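What such a design constraint might look like is easy to sketch. In the toy Python fragment below (entirely my own illustration, with statements crudely represented as strings), the machine refuses to emit any statement whose negation it has previously asserted; the law of noncontradiction is enforced by a filter, not acknowledged.

```python
# A machine that never "outputs" both a statement and its contradiction:
# before emitting anything, it checks whether the statement's negation has
# already been asserted, and if so it refuses. Statements are crudely modeled
# as strings, with negation marked by a leading "not ".

asserted = set()

def negation(statement):
    return statement[4:] if statement.startswith("not ") else "not " + statement

def output(statement):
    if negation(statement) in asserted:
        print(f"refused (contradicts a prior output): {statement!r}")
        return
    asserted.add(statement)
    print(f"output: {statement!r}")

output("Alan Turing is alive")        # output: 'Alan Turing is alive'
output("not Alan Turing is alive")    # refused (contradicts a prior output)
```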
But I do not think this will do. In genuine thinking the truth is freely acknowledged. We are “governed” by the law of noncontradiction only to the extent that we are capable of freely grasping its truth. This is not freedom of choice, since we do not simply decide what is true. It is the freedom characteristic of making up your own mind, of your judgments resting, and resting only, on your recognition of what considerations speak in their favor. In the machine, in place of the free acknowledgment thinking requires, we instead find a mechanism specified and implemented by a designer. But something that conforms to the law of noncontradiction out of mechanical necessity falls short of conducting itself—either in thought or in action—in light of the truth.
That’s why machines, despite the increasingly complex tasks they will be able to perform, will not be able to think. It is tempting to suppose that it is an open question whether thought might eventually be recreated through better technology, programming or “deep learning,” even if we haven’t succeeded in doing so yet. But once we accept that thought is governed by its own principles, its own forms of explanation, we are not free to simultaneously reduce it to such mechanisms. Their modes of explanation are, properly understood, mutually exclusive.
●
The cost of eroding the distinction between genuine thought and artificial intelligence is nothing less than our self-understanding as human beings—as those creatures who think and act, albeit imperfectly, in light of the truth. The suspicion that embracing this self-conception must amount to mystifying intelligence, or refusing to consider how it really works, simply presupposes that to understand something is, at bottom, to be able to construct it. We can deny that without denying that thinking things are a part of material reality. The trick is to resist identifying the material realm with what can, in principle, be reverse engineered or designed. If we one day find ourselves having to combat a widespread delusion that AIs are sentient—or sentient enough to fill the role of friends, lovers, therapists and children—it won’t be because we’re too gullible. It won’t be because we anthropomorphize objects, but because we are “artifactualizing” ourselves.
This process is already underway. The prospect of “human enhancement”—including “brain-machine interfaces” that seek to boost our cognitive abilities—is a clear mark of this. Many worry that the consequences of such interventions will be unpredictable and possibly unwelcome. More important, however, is the deep void at the center of the whole endeavor. Since the meaning of “enhancement” is left completely open, the project remains neutral as to what, ultimately, is to be gained or perfected. The “human” of “human enhancement” is simply something to be optimized for the efficient pursuit of whatever goal someone might happen to have. We ourselves become, like a computer, merely instrumental—a kind of universal tool.
Even if we insist on treating ourselves as tools, we cannot escape the question: What are we for? Every tool, after all, must have some purpose. To determine to what “use” we are to be put, we would need some sense of what is actually worthwhile in the first place—what is worth pursuing, not simply as a means to something else, but for its own sake. This is an ethical question—one that reveals that we are not mere “instruments”—since in answering it we determine how we ought to live. Yet we lose our very ability to respond to such questions when the distinction between humans and artifacts is effaced.
In a 2021 profile in the Times, Mo Gawdat, the ex-chief business officer of Google’s research and development arm, presented his interviewer with an ethical choice:
Imagine a beautiful, innocent child. And you are telling them [to do horrible things:] selling, gambling, spying and killing—the four top uses of AI. Right? And if you have any heart at all, you will go, like, come on, don’t treat that child that way. That child can be an artist. A musician. An amazing being that saves us all.
Gawdat’s advocacy for the humane treatment of AI stems in part from the conviction that we are nearing the “singularity” much-discussed in certain corners of tech: the point at which an AI becomes so intelligent we cannot gauge, from here, its radically transformative effects. What we can know, he thinks, is that we will be unable to control, or even comprehend, an intelligence of this magnitude. The resulting “superbeings” (“a billion times smarter” than us) could solve all our problems unless they turn out to be malevolent. Childlike AIs must be protected, in other words, lest they morph into fearsome gods.
Gawdat’s rendering of the problems posed by AI in terms of the responsibilities we bear toward innocent, corruptible children serves as an extreme microcosm of a wider crisis of understanding. It is symptomatic of this crisis that recognizably ethical concerns, such as the welfare of children, end up appearing only in distorted forms. Even a concern for the continued existence of humanity is envisaged in terms of machines that will either save or annihilate us depending on whether we have treated them humanely or not. No amount of altruistic feeling can remedy such distortions, since their roots do not lie in corrupt intentions, but in a more profound failure to distinguish what is, and is not, truly human. As Gawdat remarked to the Times reporter, “Consciousness—we see more of it in AI than we see in us.”
●
This essay is part of our new issue 29 symposium, “What is tech for?”
Art credit: DALL-E 2, Statue of a robotic man staring at another robotic man at a computer, 2023.