Last October, the Food and Drug Administration announced a shortage of the attention-deficit/hyperactivity disorder medication Adderall, leaving millions who are prescribed the drug scrambling to fill their prescriptions. The shortage, which has persisted into this year, appears to have been caused, at least in part, by the misadventures of a handful of shady telemedicine startups, which grew rapidly during the COVID-19 pandemic by making it easy to access a wide range of drugs with only brief online evaluations. Last year one of these companies, Ahead, closed down, and major pharmacies stopped accepting prescriptions from two others (Cerebral and Done) due to possible violations of the Controlled Substances Act. After a pandemic glut, Americans have been left in stimulant withdrawal.
As with many crises of the present, the amphetamine shortage was immediately politicized, with right-wing pundits like Matt Walsh jumping into the fray to cast doubt on the legitimacy of ADHD diagnosis. Many liberals and leftists instinctively came to the diagnosis’s defense, placing themselves once again in the awkward position of implicitly affirming the practices of the pharmaceutical industry. Much social media heat was generated in the back-and-forth, and the polarized positions on the matter, as usual, only hardened: either people need to grow up and learn to deal with reality without drugs, or there is great medical need going unmet and causing suffering. On both sides, a fuming moralism attends a rigid perspective.
Prompted by this crisis, the reporter Wilfred Chan asked a series of provocative questions in the Guardian: “Who should decide who deserves Adderall? … Is there a hard limit on the number of people who deserve care? How many patients is the DEA willing to refuse treatment before it considers revising its quotas?” Chan criticized the telemed startups that “botched their attempt to expand access to ADHD medication,” but said that federal enforcement agencies had made a bad situation worse by refusing to expand the supply to meet the demand. Chan’s answer to his own “fundamental question” about who deserves Adderall was straightforward: ADHD medications are “one of the most effective treatments in all of medicine,” he approvingly quotes one psychiatrist as saying. They offer “dramatic benefits to ADHD patients,” as opposed to those without ADHD, who sometimes desire them instead for their mood-elevating effects (a “high”). So who deserves treatment? ADHD patients. And who decides? Their doctors.
Would that it were so simple. Diagnosing and treating conditions like ADHD is not as cut-and-dried as it might seem. Pharmaceutical companies have gradually undermined physician authority and remolded patients’ self-conceptions in order to boost prescription writing. Their methods range from direct-to-consumer advertising (legal only in the United States and New Zealand) and the formation of astroturfed advocacy organizations to physician wining-and-dining and the promotion of their own in-house authorities, or “thought leaders.” Over time, medical authority has shifted, as the anthropologist Joseph Dumit has noted, “from doctors to clinical researchers to pharmaceutical company researchers to pharmaceutical company marketers.”
The basic issue with drug makers dictating the markets for their own medications is obvious enough—yet another case of health care being about profit rather than people. But the implications for ADHD diagnosis are particularly discomfiting. If doctors have been disempowered by a pharmaceutical industry that has rigged the game from all sides, including by priming patients through marketing to advocate for drug treatment themselves (“Talk to your doctor about…”) and by coaxing doctors into keeping their prescription pads in hand at all times, then it is not physicians but corporations that are ultimately deciding who deserves ADHD medications. By simple business logic, those companies have every reason to grow the category of patients who “deserve” their often expensive, brand-name drugs. But what does this extension of the ambit of medical need mean for how we understand and treat that need? Where does legitimate need end and pharmaceutical overreach begin?
These questions touch on existential issues for the field of psychiatry. The pharmaceutical industry enjoys handsome profit margins for alleviating everyday suffering, and with the help of its well-funded advertising departments, it has also done a great deal to fashion our understanding of the need for its products. But just out of frame of the imposing mirror that Big Pharma wants us to hold up to ourselves are the social factors that shape mental health. Any serious consideration of these determinants naturally raises questions about whether and how psychiatric conditions are constructed, and how we understand ourselves as constructed beings. Following those questions to their end might lead one to conclude that such conditions are illusory, but it could just as well result in a more honest conversation about why we take drugs, and what else we might do instead.
●
Anyone who starts down this road is greeted quickly with a cautionary interjection: “Wait, are you implying that ADHD isn’t real?” It is a question often bearing moral implications for the addressee, who risks stigmatizing a mental health condition in doubting its existence. To my mind, it is best simply to be a nominalist in these matters: ADHD is real because physicians and patients alike say it is. Every once in a while, one finds an account of how “ADHD does not exist,” but particularly in the realm of mental health, a rigid realism is not a very productive point of departure. ADHD exists: let’s start there. But ADHD the term did not always exist, nor has the condition it names always demarcated the group of people that deserves amphetamines—two developments that need to be reckoned with if we are to understand just how ADHD exists today.
The treatment of something akin to ADHD with amphetamines goes back to Dr. Charles Bradley, who in 1937 gave Benzedrine—the first marketed amphetamine—to children with behavioral problems. Bradley found that the drug made them less raucous and more focused. Benzedrine itself had come to market only a few years earlier. While it was advertised as a mood elevator, its maker, Smith, Kline & French (SKF, today part of GSK), had debated the end to which its new miracle drug would be marketed, the other candidates being weight loss and mental performance enhancement.
Already by the late 1930s, stories began to come out about Benzedrine’s addictiveness—“Pep Pill Poisoning,” read a headline in Time—but any initial reservations Americans bore toward this new miracle drug were quickly muted by the war. It is fairly well known that the Nazi war machine was fueled by methamphetamine under the name Pervitin, which gave German soldiers an edge in energy and focus. According to historian Peter Andreas, “the speed of Blitzkrieg literally came from speed.” By 1942, however, the Germans were beginning to understand the pitfalls of prolonged amphetamine use and dialed back their Pervitin consumption, just as the Allies were discovering the perks of amphetamines themselves. Between 1942 and 1945, over 250 million “energy tablets” were supplied to American and British troops, and amphetamines have played a prominent role in every American war since then. (To be clear, though methamphetamine and amphetamine have come to have very different reputations, they are quite closely related in chemical structure and have been used interchangeably in medications. Tempting as it might be to see the Nazis as using the evil stimulant and the Allies the good one, the psychopharmacological practices of the two sides did not significantly differ except in timing.)
The war was a godsend to the amphetamine business. According to psychiatrists Lester Grinspoon and Peter Hedblom, “World War II probably gave the greatest impetus to date to legal medically authorized as well as illicit black-market abuse of these pills on a worldwide scale.” When soldiers came home, they helped normalize the regular use of amphetamines, which were easy to access by prescription in some forms and over the counter in others. Speed was adult candy in the postwar period—readily available, socially sanctioned and much desired. With the three uses in mind that SKF had originally mulled for Benzedrine—being thin, smart and peppy—it’s not hard to see the appeal. In various forms (Benzedrine, Dexedrine, Methedrine) and combined with other drugs (Dexamyl, Ambar, Desbutal), amphetamines were marketed during the postwar period to all three ends. Until the end of the 1960s, amphetamines bore no real social stigma, and they remained the mood lifter of choice for physicians through the early 1970s.
One remarkable feature of postwar psychopharmaceutical advertising is that it very often makes direct reference to social imperatives. An ad for Serax pictures a housewife behind bars of mops and brooms with the tagline, “You can’t set her free. But you can help her feel less anxious.” One for Librium talks about the stress associated with work (“This man thinks he may never work again”), another with world events (“Cuba and Vietnam, assassination and devaluation, Biafra and Czechoslovakia”). Before the biological revolution in psychiatry, after which all mental illnesses became theoretically reducible to brain or neurotransmitter disorders, pharmaceutical copy could be pretty direct about why amphetamines, benzodiazepines and other staples of the postwar period were appealing: Our society is a stressful one to live in. These drugs are here to help you keep up and get by.
●
While adult amphetamine consumption blossomed, researchers began looking at ways to expand the market. In the late Fifties, Bradley’s research piqued the interest of the psychiatrist Leon Eisenberg and his protégé Keith Conners, just as the Swiss pharmaceutical firm CIBA was releasing a new drug, Ritalin. Today methylphenidate (in Ritalin, Concerta, Methylin and other brand names) and amphetamine (in Adderall, Vyvanse, etc.), closely related in molecular structure, are the two most common prescriptions for ADHD. At first, Ritalin was marketed for a wide range of adult symptoms, but after a landmark study by Eisenberg and Conners in 1963, it became known as a kind of miracle cure for children (predominantly boys) whose issues fell short of severe psychopathology but were nonetheless felt to be uncontrollable in their impulsivity. Physicians had inherited the term “minimal brain damage” for this disorder, but, there being no evidence of actual damage, they increasingly referred to “minimal brain dysfunction” or “hyperkinesis,” the two most common diagnoses mentioned in ads for Ritalin. CIBA’s first tagline in its ad copy strikes a somber chord: “Ritalin helps ‘the problem child’ become lovable again.”
Ritalin’s rise coincided with the decline of what the historian Nicolas Rasmussen has called “America’s first amphetamine epidemic.” In 1967 Good Housekeeping published a sad account of a woman who became addicted to the amphetamines she took to lose weight and keep up with her domestic duties. Soon the Controlled Substances Act of 1970 would target not only the countercultural excesses of the 1960s but also the dangers of “white market” medications like Benzedrine. Production of amphetamines quickly cratered, and illegal stimulants like methamphetamine and cocaine began to fill the void. But there were still a few conditions for which amphetamines could be prescribed, minimal brain dysfunction being one of them.
In 1980, with criticisms of the concept of “minimal brain dysfunction” in mind, the third edition of the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM-III) introduced the term “Attention Deficit Disorder,” defined in terms more or less derived from the work of Conners. Dr. Judith Rapoport of the National Institute of Mental Health immediately sounded a warning bell out of concern that it could become a catchall diagnosis for a wide range of pediatric behavioral issues: “ADD could replace oedipal anxiety as a new universal explanation; I urge restraint.”
With the help of advocacy groups like Children and Adults with Attention Deficit/Hyperactivity Disorder (CHADD), Rapoport’s warning went unheeded. Already anxious about their children’s future in a rapidly changing world, American parents were primed by groups like CHADD to see ADD as a welcome diagnosis, and one that bore no stigma. A 1995 PBS documentary revealed that CHADD was a kind of front for Ritalin’s manufacturer Ciba-Geigy; a Ciba representative even admitted that “CHADD is essentially a conduit.” But the 1990s, announced at their outset by George H. W. Bush as “the decade of the brain,” were no time for critical reflection. In 1994, two years before getting FDA approval and the same year that the DSM-IV was published (the manual had renamed ADD as ADHD in its 1987 revision), Adderall hit the market as a “unique alternative” to Ritalin. Its name was meant to evoke its potential market: ADD for All.
Extending Ritalin’s initial promise to make children lovable again, Adderall XR ads promised to teach parents “a whole new language” for talking to their kids—phrases like “Good job on your homework!” and “I’m proud of you.”
With sales through the roof, it was only a matter of time before pharmaceutical companies began pushing for an expansion of the boundaries of ADHD. Those boundaries were at first extended forward to include young adults, then adults as such, then the elderly; they were also pushed backward to include toddlers. The CDC estimates that almost four hundred thousand children between the ages of two and five have been diagnosed with ADHD, despite the obvious difficulty of discerning the disorder’s characteristics in children that young.
●
In his 2016 book ADHD Nation, the journalist Alan Schwarz claimed that a “thriving ADHD industrial complex” had resulted in millions of false cases of ADHD. According to Schwarz, roughly two thirds of ADHD diagnoses were in fact misdiagnoses. The DSM-5 (2013) had settled on the idea that 5 percent of children had ADHD, based on hundreds of studies (up from an estimate of 3 to 5 percent in the DSM-IV, published nearly two decades earlier). Meanwhile, CDC data showed that roughly 15 percent of American children would eventually receive an ADHD diagnosis, a number Schwarz extrapolated from the agency’s reported 11 percent ADHD prevalence rate in 2011, on the assumption that some children who had not yet received the diagnosis one day would. The arithmetic is simple: if 15 percent are diagnosed but only 5 percent actually have the disorder, then ten of every fifteen diagnoses, or two thirds, are misdiagnoses. While Schwarz categorically asserted that ADHD was real, he also believed that most ADHD diagnoses were not.
Today the CDC’s number is down to 9.8 percent, and the American Psychiatric Association (APA) website says that 8.4 percent of children have ADHD. By Schwarz’s calculations, this would technically mean that ADHD diagnoses have become more accurate, but the difficulty of assessing existing ADHD practice is apparent enough. Is the closing gap a sign that the APA is better recognizing the prevalence of an existing disorder, or that the association’s own criteria are slowly shifting to meet the reality of existing practice? Were Schwarz’s concerns for the future of “ADHD Nation” misplaced, or is legitimate need now going unmet because of concerns that he and others raised in the mid-2010s?
Brain science is often touted as the way to get to the truth here. It’s not uncommon to hear claims like “in ADHD, we don’t have normal levels of dopamine in the brain” or “ADHD is caused by a dysfunctional dopamine system.” The clinician’s task is thus to separate those patients with a dysregulation of dopamine from those without—determine this, and Schwarz’s concerns melt away under the white-hot light of neuroscientific reduction.
Unfortunately, the picture is a bit murkier than we are led to believe. Many will point to dopamine as a determinant of ADHD, but other neurotransmitters like norepinephrine and serotonin seem to be involved, and the DNA Learning Center reports that children with ADHD have “a thinner cortex in the areas of the brain responsible for attention control.” Even on the terms of biological reduction, then, the story is not so simple, but most agree that the case for biological reduction itself has not really been made. The DSM-5 Text Revision notes that “no biological marker is diagnostic for ADHD” and that “meta-analysis of all neuroimaging studies do not show differences between individuals with ADHD and control subjects.” It hints at psychosocial factors and environmental context but offers no full-fledged theory on this front. Schwarz relates the most honest take on our current understanding of the development of ADHD: “No one quite knows what causes it.”
For some, this simply means that the search for a cause is still ongoing, but it’s possible that the neurobiological paradigm has put us on a fool’s errand. Psychiatrists like Ross Baldessarini have indicted the “largely fruitless efforts to support a dopamine excess hypothesis of schizophrenia or mania, a norepinephrine or serotonin deficiency of major depression, a serotonin deficiency hypothesis in obsessive-compulsive disorder, and so on.” But this has not stopped pharmaceutical companies from pushing neuroscientific explanations as reductive but easily graspable reasons for taking their drugs. “How did this public health disaster come about, putting much of the population on pharmacological agents from which they would derive few of the benefits but many of the side effects?” the historian of medicine Edward Shorter has asked. “It happened,” Shorter bluntly concludes, “because the pharmaceutical companies badly needed these neurotransmitter reuptake theories in their communications with the medical profession and with the public.”
The victory of the biological revolution in psychiatry is perhaps most evidenced by the ease with which phrases like “norepinephrine reuptake” or “adenosine receptor” now roll off the tongue. Many patients have so thoroughly absorbed the language of the biological revolution that they now explain their own conditions in terms of dopamine and synapses, as though this vocabulary were adequate to the task of describing human interiority. The convenient result for drug companies and insurers is that the solution to every psychiatric problem is a pill, taken on a regular basis.
In the mid-twentieth century, mental disorder, disorganization and stress were understood not simply as a matter of brain chemistry but as largely socially determined. It was (and is) difficult to be a good worker, partner or student. Some people had difficulty keeping up in the American hustle, and it just so happened that there were pills to help those people do so. Today, by contrast, having ADHD means there is something wrong with one’s brain, and ADHD meds make the brain “normal.” Everyone else, like the college student who takes Adderall to finish a paper or “get shit done,” is simply an undeserving stimulant user.
The dream of one day finding a strictly chemical etiology for the various mental disorders that are determined presently by their outward behavioral symptoms has, so far, borne little fruit. Nevertheless, we still point to justifications like “a dysregulation of dopamine” to separate “legitimate” amphetamine users from abusers. It seems fine to say that people with ADHD have issues with hyperactivity, compulsion and distraction, and that ADHD medications help them with these issues. Why do we need to go further and say that those issues are solely rooted in an underlying brain dysfunction, and that those without it are taking stimulants for unsanctioned purposes?
A reassessment of the neurobiological paradigm that Big Pharma has pushed since the 1980s is long overdue. In addition to rigidly separating deserving from undeserving users, it has also served to occlude the social imperatives behind psychiatric diagnosis and medication. When ADHD is just a brain disorder, absent from its conceptualization are the academic stresses that children are subject to from a very young age. Would ADHD diagnosis be as prevalent if second graders had more outside play time and were not subject to standardized testing? Or if high schoolers and college students did not feel that their career prospects hinged precariously on their GPA? Absent also is any discussion of the various cultural products that constantly divide our attention, the hypercompetitive and demanding work environments that people must endure, or the perverse incentives of Medicaid and Supplemental Security Income, which have made an ADHD diagnosis a financial lifeline for many poor families. Any understanding of ADHD that does not include these social determinants seems both incomplete and dangerous, in framing a form of psychoactive drug use separately from the various things that compel it.
At the most basic level, neurobiological reduction grates against common sense. Anyone who has taken stimulants, licit or otherwise, knows what they do. They give us pep, endurance and focus to complete our tasks, but they also make us jittery and paranoid; sometimes they can burn us out, leaving us frayed and in need of recovery. We know this, and yet somehow at the same time, when it comes to the official justifications, we forget it. When children are prescribed ADHD medications that are said to make their brains “normal,” and then they demonstrate the characteristic anxiety that goes along with prolonged and intense stimulant use, they are sometimes then diagnosed with a separate disorder and prescribed another medication. Cascading drug treatments thus easily follow from this blinkered neurobiological perspective, and once again, it’s the pharmaceutical companies that benefit.
●
In 2015, the ADHD medication Vyvanse was approved for use in the treatment of moderate or severe binge-eating disorder (BED) in adults. Some tie BED to dopamine dysregulation, but the literature on the subject is even more vague with respect to causation than that on ADHD. Some might view this development as extending the realm of legitimate needs that amphetamines and other stimulants serve, while others would decry this as an illegitimate use of medications that should be reserved for ADHD patients. Still others would see it as simply making official what people have used amphetamines for since the release of Benzedrine in the 1930s: weight loss.
One could imagine a society where the many purposes for which people have used amphetamines—weight loss, endurance, control, mental focus and even simply feeling good—were all socially acceptable, and where the discourse around deserving and undeserving amphetamine users was simply illegible. Indeed, this was essentially how amphetamine use was conceptualized in the postwar era, though without a true reckoning with the negative consequences that intensive stimulant use inevitably brings. If we were to revive that conception but with greater attention to amphetamines’ side effects, addictiveness and generally negative health consequences, perhaps they could be marketed generically and treated like any potentially dangerous psychoactive drug: fine to take from time to time, having pretty clear effects on our bodies and minds that can be beneficial, and a lot of fun under the right circumstances, but also highly addictive substances that you probably should not take like vitamin C.
I cannot say whether that would be a better society. All signs point in the direction of drug liberalization continuing apace, and while that trend could involve an undoing of some of the tragedies of the War on Drugs, it’s not clear that moving powerful psychoactive substances more openly into the realm of profit extraction is necessarily a good thing.
What I can say is that it would paradoxically be a better society for people with ADHD. The reason amphetamine production has not ramped up to meet the current shortage is that the official justification for amphetamine use is quite narrow, cleanly dividing deserving from undeserving users. In such a world, where ADHD patients are the legitimate users and everyone else is just trying to get high or get ahead, it is reasonable to expect that government authorities would limit the available supply to prevent illegitimate use. In other words, it is the very paradigm within which ADHD is conceptualized today that is preventing its pharmacological treatment. If the boundaries of that paradigm were not so rigorously guarded, we could not only have a more honest conversation about how amphetamines are used; there would also be a more abundant supply for ADHD patients to access.
But again, I say that in simple pursuit of greater transparency about the amphetamine predicament today. It does not seem accidental that the steady decline of the American empire and the evisceration of anything resembling “the American dream” have coincided with legalization and decriminalization movements that promise to put a society of world-historic drug use over the edge. The opiates of the masses today are undoubtedly actual opiates (some of Marx’s metaphors are becoming frighteningly literal). Opening the amphetamine floodgates by dropping the neurobiological and moralistic justifications around ADHD would help a lot of people, but so too would changing the social imperatives and circumstances (cutthroat competition at work and in schools, a fraying social safety net, intensive smartphone and social media use, etc.) that spur amphetamine use.
Unfortunately, ever since the debunking of the Federal Bureau of Narcotics’ “reefer madness” campaign of the 1930s, drug liberalization advocates have been too willing to keep those determinants out of sight in their efforts to destigmatize substance use and draw attention to the harms of criminalization. But in doing so, they have ceded the social critique to the right. Dr. Gabriel Nahas, darling of the anti-marijuana parent movement in the Eighties, wondered in 1973 “how long a political system can endure when drug taking becomes one of the prerequisites of happiness.” In the coming decades, Americans seem fated to find out.