Ben Grosser, University of Illinois at Urbana-Champaign
Facebook’s “reactions” let users express how they feel about a link, photo, or status. While such data might be helpful for one’s friends, these recorded feelings also enable increased surveillance, government profiling, more targeted advertising, and emotional manipulation. Go Rando is a web browser extension that obfuscates one’s feelings on Facebook. Every time a user clicks “Like,” Go Rando randomly chooses one of the six “reactions” for them. Over time, the user appears to Facebook’s algorithms as someone emotionally “balanced”: someone who feels Angry as much as Haha, and Sad as much as Love. Users can still choose a specific reaction if they want to, but even that choice will be obscured by an emotion profile increasingly filled with noise. In other words, Facebook won’t know whether any given reaction was genuine.
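How might this work under the hood? The TypeScript below is a minimal sketch of the idea as a browser content script. It is illustrative only, not Go Rando’s actual source: the [aria-label="Like"] selector and the applyReaction helper are hypothetical stand-ins for Facebook’s ever-changing markup and the extension’s real DOM manipulation.

```typescript
// Hypothetical content-script sketch of Go Rando's core idea (not the
// extension's actual source): when the user clicks "Like," substitute a
// reaction chosen uniformly at random.

// The six reactions available when Go Rando launched.
const REACTIONS = ["Like", "Love", "Haha", "Wow", "Sad", "Angry"] as const;

// Pick one of the six with equal probability, so the recorded profile
// drifts toward a uniform ("balanced") distribution over time.
function randomReaction(): string {
  return REACTIONS[Math.floor(Math.random() * REACTIONS.length)];
}

// Stand-in for the DOM work of opening Facebook's reaction picker and
// selecting the chosen reaction on the user's behalf.
function applyReaction(button: HTMLElement, reaction: string): void {
  console.log(`Go Rando (sketch): substituting "${reaction}"`);
}

// Listen in the capture phase so this handler runs before Facebook's own
// click handlers can record the user's actual choice.
document.addEventListener(
  "click",
  (event) => {
    const target = event.target as HTMLElement;
    // Hypothetical selector; Facebook's real markup differs and changes often.
    const likeButton = target.closest<HTMLElement>('[aria-label="Like"]');
    if (!likeButton) return;

    event.preventDefault();
    event.stopPropagation();
    applyReaction(likeButton, randomReaction());
  },
  true
);
```

The capture-phase listener is the key design choice in this sketch: it lets the extension intercept the click before Facebook’s own handlers see it, so only the randomized reaction is ever recorded.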
What’s Wrong with Facebook Reactions?
We’ve known for years that “Likes” on Facebook not only tell one’s friends what a user has seen, but also change what that user sees on Facebook in the future. For example, Facebook uses “Like” activity to target ads, to decide which posts appear in the News Feed, and to manipulate user emotions as part of its own studies of human behavior. At the same time, Facebook shares its data with other corporations and government agencies, fueling increased surveillance and algorithmic decision making.
So if “Likes” were already shared widely, what’s the harm in a user selecting “Angry,” “Sad,” or “Love”? When “Like” was the only option, it was a multi-purpose signifier that could mean many things, and was thus harder to interpret algorithmically. Facebook’s “reactions” are still reductive of human emotion, but they suggest just enough nuance to encourage algorithmic analysis of a user’s state of mind. While these analyses will be of questionable accuracy at best, they’ll still be used to generate an emotion profile for every Facebook user. When combined with other data available to state agencies and corporations, the potential abuses and misuses are significant.
For example, emotion profiles could affect a user’s economic future. Amazon could use reactions to feed dynamic pricing. Banks might see “Sad” or “Angry” customers as a higher credit risk for a loan. Or a future employer could treat a “Sad” profile as a sign to negotiate a lower salary or to skip that candidate altogether.
Civilian police use analytics software that draws on social media data for intelligence gathering, crowd management, and threat analysis. From “Likes” to hashtags to emojis, recent articles have revealed how this data gets used to track activists’ locations in real time during protests, or to analyze the threat an individual poses based on how they “feel.” The addition of Facebook’s reactions to these systems will lead to further (questionable) analyses of state of mind, possibly using how one feels as partial justification for surveillance, arrest, or other action.
The US government and other state actors have long tracked everyone’s digital activities in an attempt to predict future security threats. As they integrate every “Angry,” “Sad,” or “Wow” users post into their prediction algorithms, that data could lead to increased surveillance, placement on watch lists, or rejection at the border.
Go Rando fills your Facebook emotion profile with noise
Finally, this should all be considered within the context of the recent US presidential election and the Brexit vote. While there is still some dispute about just how extensive these efforts were, it is clear that the Trump campaign hoped to use social media data to influence voters. For example, it engaged the predictive analytics company Cambridge Analytica to use such data to ascertain the personalities of individuals and groups. This allowed the campaign to glean people’s “needs and fears, and how they are likely to behave.” The analyses were then used to craft custom messages for voters based on a division of “the US population into 32 personality types.” (The same company played a role in the “leave” side of the Brexit vote in Great Britain.) Given the policy intentions of the Trump administration on issues like immigration and terrorism, it is likely that these campaign techniques will now become government surveillance techniques.
Why Go Rando?
All of the varied parties interested in social media data—whether it’s Facebook itself, other corporations, or governments—are engaged in a type of “emotional surveillance.” They seek to codify each user’s political and consumer identity, to figure out how users feel and how they see themselves. In reaction, Go Rando adopts the strategy of obfuscation to disrupt the increasingly fine-grained data collection practices that enable this emotional surveillance. If everyone started using Go Rando tomorrow (however unlikely), it could have broad collective effects against state and corporate emotion profiling. But even for a single user it provides individual benefits: disrupting Facebook’s News Feed algorithm (and thus blunting the “filter bubble” effect), resisting the site’s attempts at emotional manipulation, and confusing corporate and governmental surveillance.
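The obfuscation claim can be made concrete with a rough back-of-the-envelope simulation. The sketch below is hypothetical and not drawn from Go Rando’s code: the simulateProfile function, the 10% “genuine” rate, and the choice of “Sad” as the user’s real feeling are all assumptions for illustration.

```typescript
// Hypothetical simulation: suppose a user overrides Go Rando 10% of the
// time to post a genuine "Sad," while the remaining clicks are randomized
// uniformly across the six reactions.

const REACTIONS = ["Like", "Love", "Haha", "Wow", "Sad", "Angry"];

function simulateProfile(clicks: number, genuineRate: number): Map<string, number> {
  const counts = new Map<string, number>(
    REACTIONS.map((r): [string, number] => [r, 0])
  );
  for (let i = 0; i < clicks; i++) {
    const reaction =
      Math.random() < genuineRate
        ? "Sad" // the user's genuine feeling, for this example
        : REACTIONS[Math.floor(Math.random() * REACTIONS.length)];
    counts.set(reaction, (counts.get(reaction) ?? 0) + 1);
  }
  return counts;
}

// After 1,000 clicks, "Sad" appears roughly 250 times and every other
// reaction roughly 150 times: a mildly skewed but thoroughly noisy
// profile in which any individual "Sad" is more likely to be random
// than genuine (probability of genuine given "Sad" is 0.1/0.25 = 40%).
console.log(simulateProfile(1000, 0.1));
```

Even with this deliberate 10% leak, an observer cannot trust any single reaction, and a user who never overrides the extension leaves nothing but uniform noise behind.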
Go Rando randomly selects a Facebook “reaction” for you
Go Rando provokes users to question how Facebook’s “reactions” are used. Who benefits when a user marks themself as “Angry” or “Sad” in response to a particular post? Which groups have the most to lose? And how might the uses of this data change the nature of privacy and democracy over the coming months or years?
Finally, when a user sees an incongruous or “inappropriate” reaction from a friend in the future, perhaps it is a sign of a potential ally in the battle between individual freedom and the big-data state-corporate machine that seeks to use our data against us.
Go Rando.