Can Soccer's Abuse Problem on Social Media Be Fixed?
Why yes, it appears that it can.
In late April, the Premier League and the rest of English soccer announced that they would be boycotting social media for an entire weekend. Those involved in the sport had had enough. Something had to be done about the runaway abuse on social media—the racism, the hate, the vitriol. They wanted the major social media companies to act, to address this ugly problem.
The issue had festered for long enough. Again and again, players and other figures in soccer were viciously attacked online, sometimes for doing nothing more than existing. Arsenal striker Eddie Nketiah was racially abused after posting a picture of himself on Twitter, captioned “Working with a smile!”
A few months earlier, Karen Carney, a pundit and former England player, suggested that Leeds United might run out of steam at the end of the season because of its up-tempo playing style. After the club itself tweeted out the video of her comments, she was abused so mercilessly that she shut down her Twitter account altogether.
In February, a group of leading figures in English soccer wrote a letter to the CEOs of Facebook and Twitter, Mark Zuckerberg and Jack Dorsey, in hopes that they might crack down on abuse.
The boycott, while impressive in its scope and discipline, didn’t work. Nothing has worked. The valiant efforts of organizations like the Fare network and the Kick It Out campaign for equality in soccer haven’t produced a game shorn of its toxic tribalism and bigotry.
Marcus Rashford, Manchester United’s Black striker, was confronted with “at least 70 racial slurs” on social media after a loss to Villarreal in the Europa League final, including “a mountain of monkey emojis.”
After Manchester City lost the Champions League final to Chelsea, two of its prominent Black players, Raheem Sterling and Kyle Walker, were inundated with their own helping of racist abuse on Instagram—more monkey emojis.
The abuse doesn’t just target players of color and women. Sweden striker Marcus Berg, who is white, missed a promising scoring chance at Euro 2020 and received so much social media abuse that the Swedish federation filed a police report on his behalf.
And it isn’t just a European problem. United States men’s national team defender Mark McKenzie was racially abused on social media after he made several errors during a game in June. It happens even in Major League Soccer, where the LA Galaxy’s Derrick Williams has been targeted as well.
But if a boycott of social media isn’t a solution to soccer’s problem of abuse on social media, what is?
First, it helps to understand the nature of the issue better, to consider a theory on why people resort to abuse on social media. Dr. William Watkin, a professor at Brunel University in the UK who has studied online abuse and trolling, argues that a combination of intimacy and distance makes for a combustible mix in soccer’s online discourse.
Now that every player is on social media, fans, trolls and bots alike are able to contact them. Whether the players run their own accounts or not, users at least get the impression that they are communicating with them directly. Yet they do so from a safe remove, where a world-class athlete has no opportunity to punch back.
“There is a carefully calibrated balance online between direct access—here to footballers, that is the intimacy part—and actual distance—you are not talking to them face to face—as well as the anonymizing power of social media, the mask,” Watkin said via email. “Which also switches off behavioral controls, because not being face to face removes a huge amount of information from body language and facial expression, which allows you to see if you have gone too far, how much you are affecting the other person, and what those around you feel as well.”
To Watkin, the abuse of soccer players on social media mimics the dynamics of the mob violence that has sometimes surrounded games in person. Soccer fans inclined to hooliganism feel safe in their numbers, and the anonymity of the crowd lets them act without a filter on their behavior. Online, that same untouchability is replicated by the distance from the victims of their abuse.
Both in the stadium and online, Watkin said, “You can put on a mask, or avatar, and indulge in violent, threatening behavior without repercussions. You can also leave the persona behind when you leave the stadium or, extending the analogy, switch off your Twitter feed.”
As such, the abuse—whether virtual or in person—merely reflects the toxicity the sport has allowed to fester for decades. “The ‘fault’ lies in the sporting culture where banter becomes insults, becomes racism, becomes death threats,” Watkin said. “Football, like most sports and games, is a kind of symbolic battle, or a way of having wars where no one dies.”
But it’s important to note here that if the online abuse springs from the tribalism at the heart of soccer, the racist stuff is also very much the residue of actual racism.
Yes. Sure. We know all of this. We know racism and abuse in soccer are vast and endemic problems, as they are everywhere. And we know that they are exacerbated by a sport that casts everything as an existential battle between different ways of life.
But what could a solution look like?
How might we begin to do the hard and necessary work of cleaning up the sport’s discourse?
Dr. Andy Phippen, a professor at Bournemouth University who studies online abuse and how to change digital behavior, suggests that if social media is a different kind of society, it should, like all societies, police itself. That is to say, it shouldn’t be left to the social media companies to set the rules, but to the users themselves.
Social media’s short-form nature and sense of anonymity strips away whatever varnish might be applied to a controversial statement in a social setting. There is no opportunity to dress up a hateful statement in a broader argument that might soften it. Conversely, on the receiving end, there’s no body language or other social cues that you might get in a bar—an eye-roll, a glare, a shrug, a grimace—to indicate that a line has been crossed and that what was said is unacceptable.
“You get a few notifications that people are liking it and you go, ‘Right, that’s my view legitimized,’” Phippen explained. “Communicating digitally removes empathy. You can say things with complete impunity. There have been suggestions that social media have made society worse. But there’s an alternative perspective that it’s effectively just a mirror of society.”
Except in this mirrored image, there is no social cost to saying nasty, racist, bigoted things. There’s no larger group to set boundaries. That’s because people largely isolate their social feeds from the views and people they find abhorrent. Rather than telling a vocal racist to shut up, as you might in a bar, the online user is simply muted, blocked or ignored. Most people don’t bother with confronting the fellow users they deem unworthy of their time and mental energy.
That’s why English soccer’s boycott of social media didn’t achieve anything.
“The sorts of people who want to abuse Premier League footballers aren’t going to go, ‘Oh shit, my favorite player hasn’t been tweeting for three days, I’d better stop being racist,’” Phippen said. “It was done with the best of intentions but it was an extremely naïve view to put all the blame on the platforms. It doesn’t tackle the fundamental problem. The reason there is racist abuse of footballers on Twitter is there are racists in society who don’t like successful black players.”
Phippen argues the solution isn’t to threaten or shun the medium with a boycott but to address the underlying societal issues, or to police abusers better. He argues social media platforms are getting better at responding to reports of racist or other problematic behavior. And that, as in real society, this offers an opportunity to create a cost for airing vile views in public, through potential suspensions and bans. But for that to work effectively, the community of users has to buy in and police itself by reporting the bad actors.
If enough people report abuse on social media, the perpetrators of it will eventually be run off.
“The tools are there,” said Phippen. “If communities use those tools, then things can happen. It’s not perfect, but you can’t change society by demanding that platforms develop algorithms to do that. Most people don’t have those racist views. So if you can challenge those views and empower people to do something about that, even if it is just reporting an account because it’s being abusive, then that’s a positive step forward. But if people don’t believe it, or don’t see any point in reporting, we’re still in the same position.”
So mute and block less; report more.
Education, travel and civil conversation may see racism slowly erode.