By J. Nathan Matias
People often say that online behavior would improve if every comment system forced people to use their real names. It sounds like it should be true – surely nobody would say mean things if they faced consequences for their actions?
Yet the balance of experimental evidence over the past thirty years suggests that this is not the case. Not only would removing anonymity fail to consistently improve online community behavior – forcing real names in online communities could also increase discrimination and worsen harassment.
We need to change our entire approach to the question. Our concerns about anonymity are overly simplistic; system design can’t solve social problems without actual social change.
Why Did We Think That Anonymity Was The Problem?
The idea that anonymity is the real problem with the internet is based in part on misreadings of theories formed more than thirty years ago.
In the early 1980s, many executives were unsure if they should allow employees to use computers and email. Managers worried that allowing employees to communicate across the company would enable labor organizing, conflict, and inefficiency by replacing formal corporate communication with informal digital conversations.
As companies debated email, a group of social psychologists led by Sara Kiesler published experiments and speculations on the effects of “computer-mediated communication” in teams. Their articles inspired decades of valuable research and offered an early popular argument that anonymity might be a source of social problems online.
In one experiment, the researchers asked computer science students who were complete strangers to make group decisions about career advice. They hosted deliberations around a table, through anonymous text chat, or through chat messages that displayed names. They also compared real-time chat to email. They found that while online decisions were more equitable, the decisions also took longer. Students also used more swear words and insults in chat conversations on average. But the researchers did not find a difference between the anonymous and non-anonymous groups.
Writing about unanswered questions for future research, Kiesler speculated in 1984 that since computer-mediated messages carried fewer social context cues, online communication might increase social conflict and disputes with employers. As Kiesler’s speculations became cited thousands of times, her call for more research was often taken as scientific fact. Her later, correlational findings were also misinterpreted as true causal effects. Along the way, Kiesler’s nuanced appeal for changes in social norms was lost, and two misconceptions became common:
(a) social problems could be attributed to the design of computer systems
(b) anonymity is to blame.
These ideas aren’t reflected in the research. In 2016, a systematic review of 16 lab studies by Guanxiong Huang and Kang Li of Michigan State University found that, on average, people are actually more sensitive to group norms when they are less identifiable to others.
While some non-causal studies have found associations between anonymity and disinhibited behavior, this correlation probably results from the technology choices of people who are already intending conflict or harm. Under lab conditions, people do behave somewhat differently in conversations under different kinds of social identifiability, something psychologists call a “deindividuation” effect.
Despite the experimental evidence, the misconception of online anonymity as a primary cause of social problems has stuck. Since the 1980s, anonymity has become an easy villain to blame for whatever fear people hold about social technology, even though lab experiments now point in a different direction.
Nine Key Facts on Anonymity and Social Problems Online
Beyond the lab, what else does research tell us about information disclosure and online behavior?
Roughly half of US adult victims of online harassment already know who their attacker is, according to a nationally representative study by Pew’s Maeve Duggan in 2014. The study covered a range of behaviors, from name-calling to threats and domestic abuse. Even if harassment related to protected identities could be “solved” in one effort to move to ‘real names’, more than half of US harassment victims, over 16 million adults, would be unaffected.
Conflict, harassment, and discrimination are social and cultural problems, not just online community problems. In societies including the US, where violence and mistreatment of women, people of color, and other marginalized people are common, we can expect similar problems in people’s digital interactions. Lab and field experiments continue to show the role that social norms play in shaping individual behavior; if the norms favor harassment and conflict, people will be more likely to follow them. While most research and design focuses on changing the behavior of individuals, we may achieve better results by focusing on changing climates of conflict and prejudice [17,16].
Revealing personal information exposes people to greater levels of harassment and discrimination. While there is no conclusive evidence that displaying names and identities will reliably reduce social problems, many studies have documented the problems it creates. When people’s names and photos are shown on a platform, people who provide a service to them – drivers, hosts, buyers – reject transactions from people of color and charge them more [9,5,8]. Revealing marital status on DonorsChoose caused donors to give less to students with women teachers in fields where women were a minority. Gender- and race-based harassment are only possible if people know a person’s gender and/or race, and real names often give strong indications of both. Requiring people to disclose that information forces those risks upon them.
Companies that store personal information for business purposes also expose people to potentially serious risks, especially when that information is leaked. In the early 2010s, poorly researched narratives about the effects of anonymity fueled conflicts over real-name policies known as the “Nymwars.” Those narratives also provided justification for advertising-based business models that collect ever more personal information in the name of reducing online harm. Several high-profile breaches have since revealed the risks of trusting companies with your personal information.
We also have to better understand if there is a trade-off between privacy and resources for public safety. Since platforms that collect more personal information have high advertising revenues, they can hire hundreds of staff to work on online safety. Paradoxically, platforms that protect people’s identities have fewer resources for protecting users. Since it’s not yet possible to compare rates of harassment between platforms, we cannot know which approach works best on balance.
It’s not just for trolls: identity protections are often the first line of defense for people who face serious risks online. According to a US nationally representative report by the Data & Society Institute, 43% of online harassment victims have changed their contact information and 26% disconnected from online networks or devices to protect themselves. When people do withdraw, they are often disconnected from the networks of support they need to survive harassment. Pseudonymity is a common protective measure. One study on the reddit platform found that women, who are more likely to receive harassment, also use multiple pseudonymous identities at greater rates than men.
Requirements of so-called “real names” misunderstand how people manage identity across multiple social contexts, exposing vulnerable people to risks. In the book It’s Complicated, danah boyd shares what she learned by spending time with American teenagers, who commonly manage multiple nickname-based Facebook accounts for different social contexts. Requiring a single online identity can collapse those contexts in embarrassing or damaging ways. In one story, boyd describes a college admissions officer who considered rejecting a black applicant after seeing gang symbols on the student’s social media page. The admissions officer hadn’t considered that the symbols might not have revealed the student’s intrinsic character; posting them might have been a way to survive in a risky situation. People who are exploring LGBTQ identities often manage multiple accounts to prevent disastrous collapses of context, safety practices that some platforms disallow.
Clear social norms can reduce problems even when people’s names and other identifying information aren’t visible. Social norms are our beliefs about what other people think is acceptable, and norms aren’t deactivated by anonymity. We learn them by observing other people’s behavior and being told what’s expected. Earlier this year, I supported a 14-million-subscriber pseudonymous community in testing the effect of posting its rules on newcomer behavior. In preliminary results, we found that posting the rules at the top of a discussion caused first-time commenters to follow the rules 7 percentage points more often on average, from 75% to 82%.
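For readers curious how an effect like this is checked, here is a minimal sketch of a two-proportion z-test comparing compliance rates between control and treatment groups. The sample sizes and the function name are hypothetical illustrations; only the 75% and 82% rates come from the result above, and the actual study used a randomized field experiment with more careful modeling.

```python
from math import sqrt

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Test whether two observed proportions differ.

    Returns the difference in proportions (b minus a) and the
    pooled z statistic; |z| > 1.96 is significant at the 5% level.
    """
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return p_b - p_a, (p_b - p_a) / se

# Hypothetical sample sizes of 1,000 newcomers per condition;
# 75% compliance without the rules posted, 82% with them posted.
diff, z = two_proportion_ztest(750, 1000, 820, 1000)
```

With samples of this hypothetical size, a 7-point difference would be well past the conventional significance threshold, which is why even modest-looking effects can be detected in large communities.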
People sometimes reveal their identities during conflicts in order to increase their influence and gain approval from others on their side. News comments, algorithmic trends, and other popular conversations often become networked battlegrounds, connected to existing conflicts and discussions in other places online. Rather than fresh discussions whose norms you can establish, these conversations attract people who already strongly identify with a position and a pattern of behavior elsewhere, which means that these large-scale struggles are very different from the small decision-making meetings tested in anonymity lab experiments. Networks of “counterpublics” are common in democracies, where contention is a basic part of the political process [25,26,27]. This means that when people with specific goals try to reframe the politics of a conversation, they may gain more influence by revealing their pre-existing social status [28,29]. For example, in high-stakes discussions like government petitions, one case study from Germany found that aggressive commenters were more likely to reveal their identity than to stay anonymous, perhaps in hopes that their comments would be more influential.
Abusive communities and hate groups do sometimes attempt to protect their identities, especially in cultures that legally protect groups while socially sanctioning them. But many hate groups operate openly in an attempt to seek legitimacy. Even in pseudonymous settings, illegal activity can often be traced back to the actors involved, and in the few jurisdictions with responsive law enforcement, companies can be compelled by courts to share user information.
Yet law is reactive and cannot respond to escalating risks until something happens. In pseudonymous communities that organize to harm others, social norms are no help because they encourage prejudice and conflict. Until people in those groups break the law, the only people capable of intervening are courageous dissenters and platform operators.
Four Lessons For Designers and Communities
Advocates of real-name policies understand the profound value of working on preventing problems, even if the balance of research does not support their beliefs. Designers can become seduced by the technology challenges of detecting and responding to problems; we need to stop playing defense.
Designers need to see beyond cultural assumptions. Many of the lab experiments on “flaming,” “aggression,” and anonymity were conducted among privileged, well-educated people in institutions with formal policies and norms. Such people often believe that problem behaviors are non-normative. But prejudice and conflict are common challenges that many people face every day, problems that are socially reinforced by community and societal norms. Any designer who fails to recognize these challenges could unleash more problems than they solve.
Designers need to acknowledge that design cannot solve harassment and other social problems on its own. Preventing problems and protecting victims is much harder without the help of platforms, designers, and their data science teams. Yes, some design features do expose people to greater risks, and some kinds of nudges can work when social norms line up. But social change at any scale takes people, and we need to apply a similar depth of thought and resources to social norms as we do to design.
Finally, designers need to commit to testing the outcomes of efforts at preventing and responding to social problems. These are big problems, and addressing them is extremely important. The history of social technology is littered with good ideas that failed for years before anyone noticed.
Removing anonymity seemed on the surface like a good idea, but published research from the field and the lab has shown its ineffectiveness. By systematically evaluating your design and social interventions, you too can add to public knowledge about what works, and increase the likelihood that we can learn from our mistakes and build better systems.
NB: The sociologist Harry T. Dyer has written a thoughtful response to this piece discussing how anonymity can contribute to creating an environment ripe for abuse. It’s definitely worth reading.
J. Nathan Matias is a PhD candidate at the MIT Media Lab Center for Civic Media and an affiliate at the Berkman-Klein Center at Harvard. He conducts independent, public interest research on flourishing, fair, and safe participation online.
Beyond anonymity, if you are interested in learning more about what to do about social problems online, check out the online harassment resource guide to academic research, the list of resources at the FemTechNet Center for Solutions to Online Violence, and a report I facilitated on high-impact questions and opportunities for online harassment research and action. See also my recent article on the role of field experiments to monitor, understand, and establish social justice online.
Sarah Banet-Weiser and Kate M. Miltner. #MasculinitySoFragile: culture, structure, and networked misogyny. Feminist Media Studies, 16(1):171-174, January 2016.
Robert B. Cialdini, Carl A. Kallgren, and Raymond R. Reno. A focus theory of normative conduct: A theoretical refinement and reevaluation of the role of norms in human behavior. Advances in experimental social psychology, 24(20):1-243, 1991.
Danielle Keats Citron and Helen L. Norton. Intermediaries and hate speech: Fostering digital citizenship for our information age. Boston University Law Review, 91:1435, 2011.
Jessie Daniels. Cyber racism: White supremacy online and the new attack on civil rights. Rowman & Littlefield Publishers, 2009.
Jennifer L. Doleac and Luke CD Stein. The visible hand: Race and online market outcomes. The Economic Journal, 123(572):F469-F492, 2013.
Maeve Duggan. Online Harassment, Pew Research, October 2014.
Stefanie Duguay. “He has a way gayer Facebook than I do”: investigating sexual identity disclosure and context collapse on a social networking site. New Media and Society, September 2014.
Benjamin G. Edelman, Michael Luca, and Dan Svirsky. Racial Discrimination in the Sharing Economy: Evidence from a Field Experiment. SSRN Scholarly Paper ID 2701902, Social Science Research Network, Rochester, NY, January 2016.
Yanbo Ge, Christopher R. Knittel, Don MacKenzie, and Stephen Zoepf. Racial and gender discrimination in transportation network companies. Technical report, National Bureau of Economic Research, 2016.
Arlie Russell Hochschild. The Managed Heart: Commercialization of Human Feeling. University of California Press, Berkeley, third edition, updated with a new preface, 1983.
Guanxiong Huang and Kang Li. The Effect of Anonymity on Conformity to Group Norms in Online Contexts: A Meta-Analysis. International Journal of Communication, 10(0):18, January 2016.
Sara Kiesler, Jane Siegel, and Timothy W. McGuire. Social psychological aspects of computer-mediated communication. American Psychologist, 39(10):1123-1134, 1984.
Alex Leavitt. This is a Throwaway Account: Temporary Technical Identities and Perceptions of Anonymity in a Massive Online Community. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, pages 317-327. ACM, 2015.
Amanda Lenhart, Michelle Ybarra, Kathryn Zickuhr, and Myeshia Price-Feeney. Online Harassment, Digital Abuse, and Cyberstalking in America. Report, Data & Society Institute, November 2016.
Elizabeth Levy Paluck. The dominance of the individual in intergroup relations research: Understanding social change requires psychological theories of collective and structural phenomena. Behavioral and Brain Sciences, 35(06):443-444, 2012.
Elizabeth Levy Paluck and Donald P. Green. Prejudice reduction: What works? A review and assessment of research and practice. Annual review of psychology, 60:339-367, 2009.
Jason Radford. Architectures of Virtual Decision-Making: The Emergence of Gender Discrimination on a Crowdfunding Website. arXiv preprint arXiv:1406.7550, 2014.
Jane Siegel, Vitaly Dubrovsky, Sara Kiesler, and Timothy W. McGuire. Group processes in computer-mediated communication. Organizational behavior and human decision processes, 37(2):157-187, 1986.
Lee Sproull and Sara Kiesler. Reducing Social Context Cues: Electronic Mail in Organizational Communication. Management Science, 32(11):1492-1512, November 1986.
Tiziana Terranova. Free labor: Producing culture for the digital economy. Social text, 18(2):33-58, 2000.
Kathi Weeks. Life within and against work: Affective labor, feminist critique, and post-Fordist politics. Ephemera, 7(1):233-249, 2007.
JoAnne Yates. Control through communication: The rise of system in American management, volume 6. JHU Press, 1993.
danah boyd. It’s complicated: The social lives of networked teens. Yale University Press, 2014.
Nancy Fraser. Rethinking the public sphere: A contribution to the critique of actually existing democracy. Social text, (25/26):56-80, 1990.
Catherine R. Squires. Rethinking the black public sphere: An alternative vocabulary for multiple public spheres. Communication theory, 12(4):446-468, 2002.
Michael Warner. Publics and counterpublics. Public culture, 14(1):49-90, 2002.
Christian von Sikorski. The Effects of Reader Comments on the Perception of Personalized Scandals: Exploring the Roles of Comment Valence and Commenters’ Social Status. International Journal of Communication, 10:22, 2016.
Robert D. Benford and David A. Snow. Framing processes and social movements: An overview and assessment. Annual review of sociology, pages 611-639, 2000.
Katja Rost, Lea Stahel, and Bruno S. Frey. Digital social norm enforcement: Online firestorms in social media. PLoS one, 11(6):e0155923, 2016.
Photo by werner22brigitte, CC0-Public Domain