When they woke up and glanced at their phones Monday morning, Americans may have been shocked to learn that the man behind the mass shooting in Las Vegas late on Sunday was an anti-Trump liberal who liked Rachel Maddow and MoveOn.org, that the FBI had already linked him to the Islamic State, and that mainstream news organizations were suppressing that he had recently converted to Islam.
They were shocking, gruesome revelations. They were also entirely false — and widely spread by Google and Facebook.
In Google's case, trolls from 4Chan, a notoriously toxic online message board with a vocal far-right contingent, had spent the night scheming about how to pin the shooting on liberals. One of their discussion threads, in which they wrongly identified the gunman, was picked up by Google's "top stories" module, and spent hours at the top of the site's search results for that man's name.
In Facebook's case, an official "safety check" page for the Las Vegas shooting prominently displayed a post from a site called "Alt-Right News." The post incorrectly identified the shooter and described him as a Trump-hating liberal. In addition, some users saw a story on a "trending topic" page on Facebook for the shooting that was published by Sputnik, a news agency controlled by the Russian government. The story's headline claimed, incorrectly, that the FBI had linked the shooter with the "Daesh terror group."
Google and Facebook blamed algorithm errors for these mistakes. But this was no one-off incident. Over the past few years, extremists, conspiracy theorists and government-backed propagandists have made a habit of swarming major news events, using search-optimized "keyword bombs" and algorithm-friendly headlines. These organizations are skilled at reverse-engineering the ways that tech platforms parse information, and they benefit from a vast real-time amplification network that includes 4Chan and Reddit as well as Facebook, Twitter and Google.
Even when these campaigns are thwarted, they often last hours or days — long enough to spread misleading information to millions of people.
Part of the problem is that these companies have largely abdicated the responsibility of moderating the content that appears on their platforms, instead relying on rule-based algorithms to determine who sees what. Facebook, for instance, previously had a team of trained news editors who chose which stories appeared in its trending topics section, a huge driver of traffic to news stories. But it disbanded the group and instituted an automated process last year, after reports surfaced that the editors were suppressing conservative news sites. The change seems to have made the problem worse — this year, Facebook redesigned the trending topics section again, after complaints that hoaxes and fake news stories were showing up in users' feeds.
There is also a labeling issue. A Facebook user looking for news about the Las Vegas shooting on Monday morning, or a Google user searching for information about the shooter, would have found posts from 4Chan and Sputnik alongside articles by established news organizations like CNN and NBC News, with no obvious cues to indicate which ones came from reliable sources.
More thoughtful design could help solve this problem, and Facebook has begun to label some disputed stories with the help of professional fact checkers, including the Tampa Bay Times' PolitiFact. But fixes that require identifying "reputable" news organizations are inherently risky because they open companies up to accusations of favoritism.
The automation of editorial judgment, combined with tech companies' reluctance to appear partisan, has created a lopsided battle between those who want to spread misinformation and those tasked with policing it. Posting a malicious rumor on Facebook, or writing a false news story that is indexed by Google, is a nearly instantaneous process; removing such posts often requires human intervention. This imbalance gives an advantage to rule-breakers, and makes it impossible for even an army of well-trained referees to keep up.
But just because the war against misinformation may be unwinnable doesn't mean it should be avoided. Roughly two-thirds of American adults get news from social media, which makes the methods these platforms use to vet and present information a matter of national importance.
Facebook, Twitter and Google are some of the world's richest and most ambitious companies, but they still have not shown that they're willing to bear the costs — or the political risks — of fixing the way misinformation spreads on their platforms.
The investigations into Russia's exploitation of social media during the 2016 presidential election will almost certainly continue for months. But dozens of less splashy online misinformation campaigns are happening every day, and they deserve attention, too.
Tech companies should act decisively to prevent hoaxes and misinformation from spreading on their platforms, even if it means hiring thousands more moderators or angering some partisan organizations.
Facebook and Google have spent billions of dollars developing virtual reality systems. They can spare a billion or two to protect actual reality.
© 2017 New York Times