LONDON — When it comes to tackling fake news and hate speech, politicians from Brussels to Washington are relying on the “have your cake and eat it” style of rulemaking.
As the world gears up for a new round of elections (from the U.S. midterms to the Swedish and European Parliament votes), lawmakers want the likes of Facebook and Google to take greater ownership of policing what’s posted on their social networks, while also warning that these tech giants are gaining too much sway over every aspect of people’s online lives.
Here’s the thing: You can’t have it both ways.
Policymakers can certainly hand over the regulatory keys to Big Tech, outsourcing what can and can’t be shared online to companies with the financial and technical resources to get the job done.
But such efforts (and we’re already seeing them take hold with a spate of tech firms banning Alex Jones, the far-right U.S. media personality, and the upcoming anniversary of Germany’s hate speech rules) will likely cement these firms as digital gatekeepers at a time when even the most pro-tech of politicians now openly question whether Silicon Valley has too much power.
It comes down to an uncomfortable choice over whom we want making the tough calls on where online freedom of speech ends and misinformation and hate speech begin: democratically elected officials, many of whom don’t know their way around the technical complexities of these issues, or private companies whose responsibility lies with their shareholders, not voters.
* * *
Don’t get me wrong: Big Tech must be held more accountable for the reams of photos, posts and — increasingly — misinformation and extremist speech that have come to define social media in 2018.
Gone are the days when Facebook could call itself a neutral platform and Twitter could hide behind the First Amendment. These companies are loath to admit it, but they have morphed from mere digital platforms into the 21st century equivalent of media barons.
And with such power, inevitably, comes great responsibility, particularly when two out of every five Europeans visit some form of social media site each day, according to EU statistics. (The figure is even higher in the United States, according to Pew Research Center.)
So far, though, officials have been more than willing to let companies take the lead in how to respond to online misinformation and extremist posts.
The European Commission’s voluntary code — a series of measures aimed at tackling the web’s worst forms of hate speech and at promoting digital media literacy — is just that: voluntary.
And despite threats by Věra Jourová, the EU’s justice commissioner, to push through more onerous regulation if Facebook and others don’t pull their socks up, Brussels is unlikely to follow through on such warnings, according to several people familiar with the Commission’s thinking.
Officials remain divided over what role governments should play in deciding what constitutes hate speech and disinformation in the age of Twitter. Some say that because Big Tech makes so much money, these companies, not regulators, should pay for the legions of content moderators that would be required to keep ahead of the fake news peddlers and social media bots.
The U.S. is no better.
Even as Facebook suspends a raft of “inauthentic accounts” that have been trying to fan existing social and cultural divisions, politicians can’t get new rules over the line to increase transparency over online political advertising, let alone decide how to clamp down on hate speech in ways that would comply with the First Amendment.
The result? Officials are overly reliant on tech firms to supply even the most basic of information, particularly about who’s buying political advertising in the run-up to November’s midterms. That is like a farmer asking turkeys to keep tabs on Christmas.
* * *
By making tech companies the first port of call in tackling misinformation and extremist speech, politicians are setting themselves up for a nasty fall.
Yes, no regulator anywhere in the world has the technical expertise or deep pockets to match Silicon Valley, which believes the likes of artificial intelligence and machine learning can solve the underlying problem of people (or machines) writing and spreading harmful or misleading posts online.
But by placing their faith in Big Tech — many of the same companies, it should go without saying, that created these platforms in the first place — politicians are making two crucial mistakes.
For one, they’re empowering tech firms as pseudo-regulators with little, if any, oversight by government agencies, something that has already happened with Google under Europe’s strict privacy rules. If a vocal minority are uncomfortable about Facebook, for instance, collecting their online data, how will they feel when the social networking giant routinely makes quasi-judicial decisions?
And in a world where many now question the dominance of a small number of the West Coast’s biggest names, officials are also doubling down on that supremacy by giving these firms central roles in how governments respond to digital misinformation and hate speech.
Sure, it would be unreasonable to expect policymakers to tackle disinformation and hate speech on their own. And the likes of Google, Facebook and Twitter have (reluctantly) made changes to weed out the worst forms of online content, particularly when it comes to alleged election meddling and online trolling.
But officials must draw a firmer line between themselves and the tech firms whose social media platforms have given them almost unprecedented sway over people’s daily online lives.
There are no easy answers in the tussle between protecting free speech and policing harmful content online. Yet those tough decisions should be made (for good or bad) by elected officials, not by tech moguls.
Otherwise, it won’t just be hate speech and misinformation that undermine countries’ democratic institutions.
Mark Scott is chief technology correspondent at POLITICO.