

A poster’s toxic message provides a thrill from the attention of like-minded people.

By Dr. Joseph B. Walther
Visiting Scholar at Harvard University, Distinguished Professor of Communication
University of California, Santa Barbara
Introduction
The rampant increase in hate messages on social media is a scourge in today's technology-infused society. Racism, homophobia, xenophobia and even personal attacks on people who have the audacity to disagree with someone else's political opinion: these and other forms of online hate present an ugly side of humanity.
The derision on social media appears in vile and profane terms for all to see. Obviously, the sole purpose of posting online hate is to harass and harm one's victims, right?
Not necessarily, according to recent studies about hate messaging in social media. Although seeing hate comments is unquestionably upsetting, new research suggests there's a different reason people post hate: to get attention and garner social approval from like-minded social media users. It's a social activity. It's exhilarating to be the nastiest or snarkiest and to get lots of thumbs-ups or hearts. Anecdotal evidence makes a good case for the social basis of online hate, and new empirical research backs it up.
In over 30 years of research about online interaction, I've documented how people make friends and form relationships online. It now appears that the same dynamics that can make some online relationships intensely positive can also fuel friendly feelings among those who join together online in expressing enmity toward identity groups and individual targets. It's a "hate party," more or less.
Online Hate Is a Social Phenomenon
When you look at online hate messages, you start to notice clues that suggest, more often than not, that hatemongers are posting messages to each other, not to those their messages implicate and denigrate.
For instance, white supremacists and neo-Nazis often include codes and symbols that have shared meaning for the in-group but are opaque to outsiders, including the very people that their messages vilify. Including "88" in one's message, hashtag or handle is one such code; the Anti-Defamation League's lexicon of hate symbols explains that the 8th letter of the alphabet is H. And 88, therefore, is HH, or Heil Hitler.
Another clue that hate is for haters is the way it has shifted somewhat from mainstream social media to fringe sites that have gotten so hateful and disturbing that it's hard to imagine any member of a targeted group wanting to peruse those spaces. The fringe sites say they promote unfettered free speech online. But in doing so, they attract users who write posts that are widely unacceptable and wouldn't last a minute on mainstream sites with community standards and content moderation.
The kinds of messages that would quickly be flagged as hate speech in any offline setting come to dominate the threads and discussions in some of these spaces. Users curate meme repositories — for instance, the anti-Jewish, anti-LGBTQ and "new (n-word)" collections — that are hideous to most people but funny to those who partake in these secluded virtual backrooms. They're not spaces where the targets of these epithets are likely to wander.
Ganging Up Builds Community
Further research lends credence to the hypothesis that haters are in it for social approval from one another. Internet researchers Gianluca Stringhini, Jeremy Blackburn and their colleagues have been tracking what they call cross-platform "raids" for a decade.
Here's how it works. A user on one platform recruits other users to target and harass someone on another platform — the creator of a specific video over on YouTube, for instance. The originator's post contains a link to the YouTube video and a description of some race or gender issue to prey on, instilling the urge to act among prospective accomplices. Followers head to YouTube and pile on, filling the comments section with hate messages.
The attack looks as if its purpose is to antagonize a victim rather than to build ties among the antagonists. And, of course, the effects on the targeted person can be devastating.

But backstage, the attackers circle back to the platform where the plot was organized. They boast to one another about what they did. They post screen grabs from the YouTube page to show off their denigrating deeds. They congratulate each other. It was for getting attention and approval after all, consistent with the social approval theory of online hate.
Social Approval Eggs Users on to Greater Extremes
More direct evidence of the effect of social approval on hate messaging is also emerging. Online behavior researcher Yotam Shmargad and his collaborators have studied newspapers' online discussion websites. When people get "upvotes" on antisocial comments they've posted, they become more likely to post additional antisocial comments.
A recent study by my colleagues Julie Jiang, Luca Luceri and Emilio Ferrara looked at users of X, the platform formerly known as Twitter, and what happened when they received signs of social approval for their xenophobic tweets. When posters' toxic tweets got an unusually high number of "likes" from other users, their subsequent messages were even more toxic. The more their messages were retweeted by others, the more posters doubled down with more extreme hate.
These findings do nothing to diminish the real hurt and anger that justifiably arise when people see themselves or their identity groups disparaged online.
The social approval theory of online hate doesn't explain how people come to hate others or become bigoted in the first place. It does provide a new account for the expression of hate on social media, though, and how social gratifications encourage the ebb and flow of this problematic practice.
Originally published by The Conversation, 12.04.2023, under the terms of a Creative Commons Attribution/No derivatives license.


