Is it Possible to Prevent Online Hatred from Turning into Offline Tragedy?


By Daniel Simmons

The Internet is the most comprehensive communication tool ever contrived, so broad in scope that it defies categorization. It has grown to encompass nearly every niche, culture, and idea that humanity has to offer. Therefore, it is inevitable that some corners of the Internet encourage ideas ranging from mildly offensive to legitimately dangerous. In the wake of two recent mass shootings, each involving suspects who frequented extremist websites, many are revisiting the idea of increased transparency and censorship online.
Popular websites have long grappled with how to elevate discourse without alienating their user base. For example, for much of its early history, YouTube’s comments section was frequently flooded with “internet trolls” (commenters who stir controversy for their own amusement), who attempted to drown out all meaningful conversation. In 2013, Google sought to remedy the problem by filtering comments by relevancy and giving channel owners tools to block or bury offensive posts. More recently, Reddit banned certain subreddits (areas of the site dedicated to specific topics) that were known to be hotbeds of hate, racism, and misogyny. In both cases, the efforts were met with fierce opposition from a vocal subset of users. In Reddit’s case, the change inspired a mass exodus to competing website Voat.co and ended with the dismissal of Reddit CEO Ellen Pao.
Whether or not these changes have been effective is a matter of debate. When delving into the comments on a YouTube video, what you’ll find is still a crapshoot, and Reddit still has its fair share of hateful subreddits to sift through. One thing that is certain is that these sites’ users do not enjoy being censored. The sites have to maintain a precarious balance between allowing free speech and keeping a lid on offensive content. While removing anonymity would no doubt improve the conduct of many of the sites’ worst actors, it would come at a great cost for users living under oppressive regimes that seek retribution against those espousing unpopular beliefs.
Despite the tradeoffs, this seems to be the approach that the UK, under the direction of Prime Minister David Cameron, has chosen to take. Cameron has a long history of running afoul of internet users’ rights, and true to form, he has recently proposed a ban on “hard encryption,” a policy that, if enacted and enforced, would end online privacy for the United Kingdom’s millions of Internet users.
The banning of encryption is not only nearly impossible to institute, but also smacks of the sort of action you’d see from an authoritarian regime. Proponents of such policies both in the U.S. and abroad often tout their (largely unproven) power to unmask potential terrorists. In the U.S., a commonly proposed solution would require tech companies like Google, Apple, and Microsoft to create “backdoors” in their encryption software that would extend law enforcement’s already expansive reach into our online communications. For now, many of the companies are refusing to comply, but with each subsequent mass shooting or national-security scare comes a new justification for why it is necessary.
Recently, the Illinois Supreme Court ordered Comcast to release the identity of an Internet commenter calling himself “Fuboy” after he made defamatory comments about local political candidate Bill Hadley. Hadley reportedly spent three years and a significant portion of his wealth in the legal effort to unmask the commenter. Comcast has yet to comply with the court’s order, but if Hadley ultimately prevails, the case could set an important precedent for what, if any, legal consequences there are for being offensive online.
The argument against this legal approach is that users should just ignore offensive comments, or stop visiting the site in question. Unfortunately, the consequences of online speech can go far beyond a few hurt feelings. Both Dylann Roof, the alleged killer of 9 people in Charleston, SC, and John Houser, who shot 11 people at a Louisiana movie theater, were known to be frequent users of right-wing extremist websites that encouraged racism and hatred. While the argument that these sites spawn mass killers is impossible to prove, having an echo chamber for hateful ideologies may have influenced these men’s eventual actions. The question is, are incidents like these just an unfortunate cost of having free speech in a digital era, or should we attempt to limit what ideas we allow to spread online? Do we leave it up to the sites themselves to police users, or is government oversight warranted? As of yet, there are no clear answers to any of these questions, but in the wake of recent events, they are certainly worth asking.