In lines he composed for a play in the mid-1930s, T. S. Eliot wrote of those who
constantly try to escape
From the darkness outside and within
By dreaming of systems so perfect that no one will need to be good.
This has always struck me as a rather apt characterization of a certain technocratic impulse, an impulse that presumes that techno-bureaucratic structures and processes can eliminate the necessity for virtue, or maybe even human involvement altogether.1 We might just as easily speak of systems so perfect that no one will need to be wise or temperate or just. Simply adhere to the code or to the technique with unbending consistency, and all will be well.
This “dream,” as Eliot put it, remains compelling in many quarters. It is also tacitly embedded in the practices fostered by many of our devices, tools and institutions. In fact, a case could be made that the imperative to automate as much of human experience as possible operates as the unacknowledged purpose of contemporary technology. So it’s worth thinking about how this dream manifests itself today and why it can so easily take on a nightmarish quality.
In Eliot’s age, increasingly elaborate and byzantine bureaucracies automated human decision-making in the pursuit of efficiency, speed and scale, thus outsourcing human judgment and, consequently, responsibility. One did not require virtue or good judgment, only a sufficiently well-articulated system of rules. Of course, under these circumstances, a bureaucratic functionary might become a “papier-mâché Mephistopheles,” in Conrad’s memorable phrase, and thus abet forms of what Arendt later called banal evil. But the scale and scope of modern societies also seemed to require such structures to operate reasonably well. Whether strictly necessary or not, these systems introduced a paradox: in order to serve human society, they have to eliminate or displace key elements of the human experience. Of course, what becomes evident eventually is that these systems are not, in fact, serving human ends, at least not necessarily so.
To take a different class of example, we might think of the preoccupation with technological fixes to what may turn out to be irreducibly social and political problems. In a prescient essay from 2020 about the pandemic response, the science writer Ed Yong observed that “instead of solving social problems, the U.S. uses techno-fixes to bypass them, plastering the wounds instead of removing the source of injury—and that’s if people even accept the solution on offer.” There’s no need for good judgment, responsible governance, self-sacrifice or mutual care if there’s an easy technological fix to ostensibly solve the problem. No need, in other words, to be good, so long as the right technological solution can be found.
Likewise, there’s no shortage of examples involving algorithmic tools intended to outsource human judgment. Consider the case of NarxCare, a predictive program developed by Appriss Health, as reported in Wired in 2021. NarxCare is “an ‘analytics tool and care management platform’ that purports to instantly and automatically identify a patient’s risk of misusing opioids.” The article details the case of a 32-year-old woman suffering from endometriosis whose pain medications were cut off, without explanation or recourse, because she triggered a high-risk score from the proprietary algorithm. The details of the story are both fascinating and disturbing, but here’s the pertinent part for my purposes:
Appriss is adamant that a NarxCare score is not meant to supplant a doctor’s diagnosis. But physicians ignore these numbers at their peril. Nearly every state now uses Appriss software to manage its prescription drug monitoring programs, and most legally require physicians and pharmacists to consult them when prescribing controlled substances, on penalty of losing their license.
This is an obviously complex and sensitive issue, but it is hard to escape the conclusion that the use of these algorithmic systems exacerbates the same demoralizing opaqueness, evasion of responsibility and cover-your-ass dynamics that have long characterized analog bureaucracies. It becomes difficult to assume responsibility for a particular decision made in a particular case. Or, to put it otherwise, it becomes too easy to claim “the algorithm made me do it,” and it becomes so, in part, because the existing bureaucratic dynamics all but require it.
This technocratic impulse is alive and well, but we might also profitably invert Eliot’s claim and apply it to our digital-media environment, in which we experience systems so imperfect that it turns out everyone will need to be extraordinarily good. I think about this every time I hear someone advocating for the cultivation of digital-media literacy. This may be helpful under certain circumstances, but it also underestimates or altogether ignores the scope of the problem and its nonintellectual elements. It seems unrealistic, for example, to expect that someone who is likely already swamped by the demands of living in a complex, fast-paced and precarious social milieu will have the leisure and resources necessary to thoroughly “do their own research” about every dubious or contested claim they encounter online, or to adjudicate the competing claims made by those who are supposed to know what they are talking about. This situation raises questions about truth, certainty, trust, authority and expertise, but here I simply want to highlight the moral demands. Searching for the truth, or a sufficient approximation of it, is more than a merely intellectual activity. It involves humility, courage and patience. It presumes a willingness to break with one’s tribe or social network with all the risks that may entail. In short, you need to be not just clever but virtuous, and, depending on the degree to which you live online, you would need to do this persistently over time.
This is but one case, the one that initially led me to invert Eliot’s line. It doesn’t take a great deal of imagination to conjure up other similar examples of the kind of virtue our digital tools and networks tacitly demand of us. Consider the discipline required to responsibly direct one’s attention from moment to moment rather than responding with Pavlovian alacrity when our devices beckon us. Or the degree of restraint necessary to avoid the casual voyeurism that powers so much of our social media feeds. Or how those same platforms can be justly described as machines for the inducement of petty vindictiveness and less-than-righteous indignation. Or, alternatively, as carefully calibrated engines of sloth, greed, envy, despair and self-loathing. The point is not that our digital-media environment necessarily generates vice; rather it’s that it constitutes an ever-present field of temptation, which can require, in turn, monastic degrees of self-discipline to manage. I’m reminded, for example, of how years ago the technology writer Evgeny Morozov described buying a timed safe in which to lock his smartphone, and how, when he discovered he could unscrew the timing mechanism, he locked the screwdriver in there too. Under certain circumstances and for certain people, maintaining a level of basic human decency or even psychic well-being may feel like an exercise in moral sainthood. Perhaps this explains the recent interest in Stoicism, although we would do well to remember Pascal’s pointed criticism of the Stoics: “They conclude that we can always do what we can sometimes do.”
We alternate, then, between environments that seek to render virtue superfluous and environments that tacitly demand a high degree of virtue in order to operate benignly. Both engender their own set of problems, and unsurprisingly, there’s a reciprocal relationship between these two dynamics. Failure to exhibit the requisite virtue creates a demand for the enhancement of rules-based systems to regulate human behavior. Speech on social media platforms is a case in point. The scale and speed of communication on social media generates infamously vexing issues related to speech and expression—issues that are especially evident during a volatile election season or a global pandemic. These issues do not, in my view, admit of obvious solutions beyond shutting down the platforms altogether. That not being a presently viable option, tech companies and lawmakers are increasingly pressured to apply ever more vigilant and stringent forms of moderation, often with counterproductive results. This is yet another complex problem, but it also illustrates the challenge of governing by codes that seek to manage human behavior by generating rules of conduct with attendant consequences for their violation—which, again, may be the only viable way of governing human behavior at the numeric, spatial and temporal scale of digital information environments. Whatever the case, the impulse is to conceive of moral and political challenges as technical problems admitting of engineered solutions.
To be clear, it is not that codes and systems are useless. They have their place, but they require sound judgment in their application, precisely to the degree that they fail to account for the multiplicity of meaningful variables and goods at play in human relations. Trouble arises when we are tempted to make the code, whether analog or digital, and its application coterminous, which would require a rule to cover every possible situation and extenuating circumstance. This is the temptation that animates the impulse to apply a code with blind consistency as if this would be equivalent to justice itself. The philosopher Charles Taylor has called this tendency “code fetishism,” and it ought to be judiciously resisted. In his essay “Perils of Moralism,” Taylor observed that code fetishism “tends to forget the background which makes sense of any code—the variety of goods which rules and norms are meant to realize—as well as the vertical dimension which arises above all these.” Code fetishism in this sense is not unrelated to what the French philosopher and social theorist Jacques Ellul called “technique”: a relentless drive toward efficiency that eventually became an end in itself, having lost sight of the goods for the sake of which efficiency was pursued in the first place. In both cases, the aim is to achieve a machinelike calibration of human relations, one in which actually existing humans, flawed and fickle as we tend to be, appear as glitches and disruptions.
It is worth noting that code fetishism may be something like a default setting for modern liberal-democratic societies, which have a tendency to tilt toward technocracy (while also naturally harboring potent countertendencies). The tilting follows from a preference for proceduralism, or the conviction that an ostensibly neutral set of rules and procedures is an adequate foundation for a just society, particularly in the absence of substantive agreement about the nature of a good society. In this way, there is a long-standing symbiosis between modern politics and modern technology. They both traffic in the ideal of neutrality—neutral tools, neutral processes and neutral institutions. It should not be surprising, then, that contemporary institutions turn toward technological tools to shore up the ideal of neutrality. The presumably neutral algorithm, for example, will solve the problem of bias in criminal sentencing or loan applications or hiring. And neither should it be surprising to discover that what we think of as modern society, built upon this tacit pact between ostensibly neutral political and technological structures, begins to fray and lose its legitimacy as the supposed neutrality of both becomes increasingly implausible.
“We think we have to find the right system of rules, of norms, and then follow them through unfailingly,” Taylor wrote in his foreword to The Rivers North of the Future, a book of conversations on modernity, technology and religion between the Austrian Catholic priest and social critic Ivan Illich and his friend David Cayley. “We cannot see any more,” Taylor continued, “the awkward way these rules fit enfleshed human beings.” These codes often spring from decent motives and good intentions, but they may be all the worse for it. He observed that “ours is a civilization concerned to relieve suffering and enhance human well-being, on a universal scale unprecedented in history, and which at the same time threatens to imprison us in forms that can turn alien and dehumanizing.”
“Codes, even the best codes,” Taylor concludes, “can become idolatrous traps that tempt us to complicity in violence.” Or, as Illich argued, if you forget the particular, bodily, situated context of the other, then the freedom to do good by them exemplified in the story of the Good Samaritan can become the imperative to impose the good as you imagine it on them. “You have,” as Illich bluntly put it, “the basis on which one might feel responsible for bombing the neighbour for his own good.”
In Taylor’s reading, Illich reminds us that “we should find the centre of our spiritual lives beyond the code, deeper than the code, in networks of living concern, which are not to be sacrificed to the code, which must even from time to time subvert it.” “This message,” Taylor acknowledges, “comes out of a certain theology, but it should be heard by everybody.”
Humans have long tried to deal with their imperfections through systems, tools and techniques. These technical solutions have resulted in many genuine improvements to our quality of life, but we must also be mindful of their limitations. Recall the paradoxical tendency of certain technological systems and environments to require an inordinate amount of virtue from the user in order to operate without profoundly malignant consequences. These tendencies, as I’ve suggested, produce a puzzling dialectic: when we’re unable to act with as much virtue as the system requires, there’s an ever greater demand for systems that render virtue superfluous.
The force that propels this dialectic is the desire to operate at ever greater scales, whether of quantity, space, time or speed. There are thresholds that, when crossed, demand ever more sophisticated codes of behavior or systems that eliminate human judgment and involvement. Under these conditions, networks of consumption arise that render people increasingly passive and dependent, while foreclosing opportunities for action, community and engagement with the world. In these networks, human activity will increasingly be judged in terms of risk management. From this perspective, technology appears chiefly as a force aimed at rendering humans obsolete.
The alternative is to recognize that the contingency that appears as an obstacle and a threat to the system operating at scale may be the very condition of human flourishing. It is in the face of such contingency that a person is free to exercise a measure of agency, to make judgments and assume responsibility, to adapt creatively, to work meaningfully, to experience the consolation of knowing and being known and, yes, of cultivating skill and virtue. If we may posit the existence, however difficult to define, of a human scale appropriate to various spheres of life and practice, then it is at that scale that we will find Taylor’s “networks of living concern,” networks that call forth the full range of human capacities and capabilities.
●
This essay is part of our new issue 29 symposium, “What is tech for?”
Art credit: Andrew Tavukciyan, Abstraction 063, 2022. Acrylic on wood panel, 30 × 40 in. Courtesy of the artist.
1. A version of this essay originally appeared on the author’s blog, The Convivial Society, under the same title. The text has been edited and expanded.