In May, Facebook made a widely reported change to its default privacy settings for the site’s new users, setting the audience for posts to “friends”—the people a user has chosen to connect with—rather than “public,” meaning anyone who uses the internet at all. It also announced plans to have a blue dinosaur pop up in a window and walk users through a “privacy checkup.” At least Facebook has a sense of humor about having tried to make privacy extinct.
Back in January 2010, when Facebook made the opposite move, setting its defaults to public, CEO Mark Zuckerberg declared that “people have really gotten comfortable not only sharing more information and different kinds, but more openly and with more people … We decided that these would be the social norms now and we just went for it.” The company sounded a different note in “Making It Easier to Share With Who You Want,” the post announcing its recent change: “We recognize that it is much worse for someone to accidentally share with everyone when they actually meant to share just with friends, compared with the reverse.”
Such rhetoric suggests that online privacy is just a matter of deciding who can see what we put online—but the stakes extend far beyond our visibility. In the introduction to The Offensive Internet, a collection of essays about online privacy issues, editors Martha Nussbaum and Saul Levmore list four distinct ways to conceive of the stakes of privacy:
There is the value of seclusion, which is the right to be beyond the gaze of others. There is intimacy, in which one chooses with whom to share certain information and experiences. There is also the interest in secrecy, which is to information as seclusion is to the physical person. And then there is autonomy, which is the set of private choices each person makes.
Facebook has shown an increasing willingness to cooperate with users’ demands for control over the first three of these, as “Making It Easier to Share With Who You Want” makes plain. But the company respects your wish for “seclusion,” “intimacy” and “secrecy” only because it does not respect your autonomy. Its privacy controls encourage users to conceive of privacy as being mainly about one’s visibility to other users—but this creates a smokescreen with respect to the more significant privacy threat: Facebook itself. Having established itself as a kind of all-purpose connective tissue in the social lives of over a billion people, the company is now using its monopoly position to exercise opaque control over the shape of social interaction, manipulating the flow of information in ways users don’t understand and can’t interrupt.
This became evident a month after its announcement concerning privacy, when journalists began to take notice of a paper on “emotional contagion” published by data scientists working with Facebook. This study revealed the company’s eagerness to covertly shape the feelings of its users by playing with what shows up on their News Feed. To Facebook’s apparent surprise, many users were outraged at its having made them subjects in a vast mood-manipulation experiment. But the experiment demonstrates concretely what has been true ever since Facebook introduced its black-box methods for managing users’ News Feeds: privacy settings don’t determine who sees what information; algorithms do—and these algorithms are optimized in Facebook’s own interest.
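The mechanics of such an experiment are simple to outline. The following is a hypothetical toy sketch, not Facebook’s actual code: the study’s design amounts to filtering each treatment group’s feed according to the sentiment of candidate posts, while a control group sees everything.

```python
import random

def build_feed(posts, condition):
    """Toy sketch of a mood-manipulation A/B test on a feed.

    posts: list of dicts like {"text": ..., "sentiment": float in [-1, 1]}
    condition: "reduce_positive", "reduce_negative", or "control"
    Hypothetical illustration only; field names and the 50% withholding
    rate are invented for this example.
    """
    feed = []
    for post in posts:
        if condition == "reduce_positive" and post["sentiment"] > 0:
            # Randomly withhold a share of positive posts from this group.
            if random.random() < 0.5:
                continue
        elif condition == "reduce_negative" and post["sentiment"] < 0:
            # Symmetrically withhold negative posts for the other group.
            if random.random() < 0.5:
                continue
        feed.append(post)
    return feed
```

The point of the sketch is how little it takes: a user never sees what was withheld, so the intervention is invisible from inside the feed.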
That sort of intervention might not seem particularly problematic. After all, in its active management of what we see, Facebook is employing proven strategies from traditional commercial media, using ratings-like feedback (in the form of “likes” and other captured user behavior) to design programming that will hold viewers’ attention and make it available to advertisers. But Facebook is not an ordinary product. It’s not something we consume more or less of based strictly on our individual preferences and our ability to afford it. It has become the platform on which many of us stage our social life. And if our friends are using Facebook, we have to use it to continue to belong. Not having a Facebook page can raise as many flags about one’s reputation and conviviality as anything that might be discovered on the site; one simply cannot unilaterally disassociate from it without disqualifying oneself from a wide range of opportunities.
Unfortunately, this puts Facebook in a position to control which opportunities are revealed to whom. Now that its network is firmly entrenched, the company can work to impose the sort of social life it would prefer its users to have, using its control over what users see to influence whom they associate with and under what conditions. And the company is happy to auction that control off to whoever is willing to pay for it. Maybe a political party wants you to see only the updates of fellow party members, or only those that express doubt about opposing candidates. Maybe a car company wants you to see every update that mentions the pleasures of driving and none that complain of the drudgery of commuting. Can Facebook help? In principle, yes.
Perhaps we should think of Facebook users more as laborers for the company than as its clients. Unlike conventional entertainment companies, Facebook depends on its users to supply the raw materials (various instances of “sharing”) to make its product (the News Feed calibrated for maximum “stickiness”). This requires a steady flow of what we might call “surplus sharing” from its workers, who are paid in the currency of attention from other workers, which Facebook’s total control over News Feeds allows it to mint. Seen in this light, the company’s attempts to address “privacy” show up as just one part of its strategy for managing its assembly line and mitigating workplace hazards that might slow production. From Facebook’s perspective, privacy is a kind of transaction cost weighing on “peer productivity” within its vertically integrated social factory. Giving users apparent control over privacy settings pacifies their concerns and elicits more voluntary labor from them.
Yet Facebook treats its users not only as workers but also as products in themselves. The company collects data on its users in order to segment them into demographics that can be sold to advertisers. (Individuals may also purchase “reach,” which increases the visibility of their posts, essentially turning them into ads.) When Facebook offers users more “control” over privacy settings, then, it is not only inducing them to perform more labor, but also collecting more data about their preferences, including which friends fit into which categories. Using the privacy controls serves Facebook’s goal of better commodifying you.
Facebook purports to offer us the seemingly magical possibility of a social life that is at the same time more individualized and autonomous, a world in which we decide who and what appears by means of our decisions as to whom to befriend and which posts to like. In fact, Facebook structures the feed to suit advertisers’ needs, putting users in spontaneous, provisional demographics to permit the perfect match between audiences and ad strategies. It can then track the effectiveness of those matches by correlating purchasing behavior with the many streams of data it compiles on users, including information about their activities outside Facebook’s apps. As Derek Thompson of the Atlantic argues in describing Facebook’s unique effectiveness as a phone-ad server,
only Facebook combines (a) the simplicity of a single-column cascading product, (b) explicit information about who we are, what we like, and who we know, (c) controlled experiments that can connect online and offline identities to discover exactly how many people are seeing ads on Facebook and buying that product later, and (d) just ridiculous scale.
Facebook’s desire to conduct these “controlled experiments” is why it refuses its users full access to how the site represents and structures their social lives—the sort of control the site seems to promise, particularly with its privacy settings. The sheer volume of content that Facebook serves to users and the opacity with which it serves that content render its manipulations subtle and unobtrusive. 
By keeping the mechanics of its algorithms private, Facebook refuses its users the autonomy its platform might otherwise seem to afford. Facebook defends its algorithmic processing as a way of filtering content for its users’ benefit, based on their own revealed preferences. But without access to the algorithms, users have no choice but to take the company at its word on that. There is no way of knowing that Facebook is not experimenting with methods to reshape our preferences in accordance with advertisers’ demands. (And indeed its mood-manipulation experiment makes that prospect seem highly likely, since it suggests that Facebook would calibrate its algorithms to display items that might intensify feelings of insecurity as long as that led to greater engagement with the site.) The same must be true for algorithmic filter bubbles that curtail users’ sense of the breadth of opinion even within the social networks they themselves have built: if showing you only those updates that agree with your prejudices keeps you logged in and scrolling, that is what Facebook will do. Pushed to its logical conclusion, this would turn social interaction into a form of solitary consumption, augmented by the sort of Pavlovian reward schedules that gambling companies use to maximize players’ time on gaming machines. It would transform eagerness for communal participation into a mode of isolation.
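The logic of such a filter bubble can be made concrete with a toy ranking function. This is a hypothetical sketch, assuming a scalar “stance” score per post and an invented engagement model; nothing here reflects Facebook’s actual algorithm.

```python
def rank_feed(posts, user_leaning, top_k=10):
    """Toy engagement-optimized ranker illustrating a filter bubble.

    posts: list of dicts like {"id": ..., "stance": float in [-1, 1]}
    user_leaning: float in [-1, 1], the user's inferred views.
    The hypothetical model assumes agreement drives clicks and dwell
    time, so the top of the feed converges on posts that match the
    user's prior views.
    """
    def predicted_engagement(post):
        # Engagement peaks when a post's stance matches the user's.
        return 1 - abs(post["stance"] - user_leaning) / 2

    ranked = sorted(posts, key=predicted_engagement, reverse=True)
    return ranked[:top_k]
```

No line of this code mentions opinion diversity; homogeneity simply falls out of optimizing the one metric that pays.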
The idea that individuals can use privacy settings or hack social networks to unilaterally secure their own privacy is a myth propagated by tech companies. It reflects a fantasy about how social life works, as an opt-in situation you can manage from a device with a variety of settings and filters. With its vast leverage over users’ informational ecosystem, Facebook can shape the kinds of personal choices individuals make—not through a show of force or a threat of punishment, not by exposing their private information to others, but through its behind-the-scenes reshaping of what its users experience as real, as likely, as possible, as trending. By selling its information about users to third parties (data brokers, predictive-analytics companies and the like), it enables these firms to cross-reference the information, unearth behavioral patterns and correlations among large data sets, and use these findings to discriminate among (and potentially against) users invisibly. The ubiquitous networks crowding everyday life not only permit easy surveillance but also allow companies to vary their treatment of customer segments without those segments ever cohering into a class that could sue over the discrimination. With the help of Facebook and other Big Data companies, lenders, insurers and real-estate brokers can come up with de facto proxies for categories that they are officially forbidden from using in their decision-making processes.
Better control over the visibility of one’s posts does nothing to address these kinds of threats and may actually make them worse. Users may feel it’s safe to use Facebook more, to never shut it off, letting it passively accumulate more and more information, all of which remains putatively private. Yet this data allows social control to become highly individualized, as specific to us as we are expressive on social media. The more we share, the more finely attuned the control can be, to the point where it may simply seem like convenience: apps recommending what we should and should not read, who we should or should not listen to, where we should go and the route we should take, the food we should eat and the places we should sleep. The level of effort it would take to break out of these recommendation bubbles has become increasingly prohibitive.
Even if one waged a personal disinformation campaign against the data collectors, it would have little effect. From the perspective of the Big Data companies, such inaccuracies don’t matter. They aren’t interested in targeting specific individuals, just types—and the “privacy” harms they are responsible for are at the level of populations, not persons. As Woodrow Hartzog and Evan Selinger point out in their chapter on “Obscurity and Privacy” for the forthcoming Routledge Companion to Philosophy of Technology, “Even if one keeps a relatively obscure digital trail, third parties can develop models of your interests, beliefs, and behavior based upon perceived similarities with others who share common demographics.” Regardless of what you have chosen to share, you can always be modeled more broadly. Companies like Facebook can ascribe simulated, probable data points to you, which will become factors in the way other institutions treat you, regardless of whether those probabilities are realities.
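Hartzog and Selinger’s point—that you can be modeled from others who resemble you demographically—can be sketched as a simple nearest-neighbor imputation. The fields and scores below are made up for illustration; real brokers use far richer feature sets, but the principle is the same.

```python
def impute_attribute(target, population, field, k=3):
    """Estimate an undisclosed attribute from demographic lookalikes.

    target: dict of demographics, e.g. {"age": 34, "zip": "11211"}
    population: list of dicts with those demographics plus `field`
    Returns the average value of `field` among the k most demographically
    similar people, regardless of whether the target ever disclosed it.
    """
    def similarity(person):
        # Count how many demographic fields match the target exactly.
        shared = [key for key in target if key in person]
        return sum(person[key] == target[key] for key in shared)

    neighbors = sorted(population, key=similarity, reverse=True)[:k]
    return sum(p[field] for p in neighbors) / k
```

Nothing the target shared appears in the output: the “data point” ascribed to them is manufactured entirely from other people’s disclosures.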
In a situation like this, conformity might be our best means of resistance. Instead of presenting a rich and complex personality on social media, the freest among us will offer only a contrived, superficial one. Then we will have come full circle. Privacy that protects our intimate, revealing secrets will no longer matter, because we won’t have anything truly personal to post.