In December 2013, a man set up a fake Facebook profile in the name of Meryem Ali, a woman in Texas. When the woman's family and friends clicked on the man's friend requests, they saw doctored photographs of the woman, she says, cut and pasted to look like porn. One photo featured her head atop a nude body. Another showed her having sex. The woman had never consented to the images being posted, and she reported them to Facebook. But she heard nothing back from the site for three months, she claims, until the police opened a criminal investigation. Only after the police asked Facebook for the poster's identifying information did the fake profile come down.
Ali is now suing the poster (who was a former friend) for intentional infliction of emotional distress, as well she should. She is also suing Facebook, to the tune of $123 million, for failing to respond to her request to remove the fake profile more quickly. That suit, I think, is less deserved — and certainly not the way to get Facebook and other social media sites to protect their users from similar abuse.
In 1996, Congress passed a law called the Communications Decency Act, which immunizes online providers from being held responsible for most of what their users do and say. In a few cases, individuals have been allowed to sue online providers for breaking promises, and Ali is trying to frame her claim as a broken promise based on Facebook's terms-of-service agreement, which bans nudity, harassment, and bullying. But having a policy against those things is not the same as making a promise to each and every user to remove content that contains nudity or amounts to harassment or bullying.
So, Ali's claims are unlikely to stick. But Facebook and other content providers should heed her lawsuit's message. Ali's claims express dissatisfaction with the enormous, unchecked power that digital gatekeepers wield. Her suit essentially says: Hey Facebook, I thought that you had a "no nudity" and "no harassment" policy. Other people reporting abuse got results; why not me? Why would you take down photos of women breastfeeding but not doctored photos portraying me, without my permission, as engaged in porn?
Facebook could have alleviated a lot of Ali's frustration by actually responding to her when she first made contact. With great power comes great responsibility, and Facebook needs to improve its terms-of-service enforcement process by creating an official means of review that includes notifying users about the outcome of their complaints. (Right now, Facebook sends an automated message to policy transgressors notifying them that their content has been removed because it "violates Facebook's Statement of Rights and Responsibilities" without saying more, and, as Ali's case shows, those reporting abuse do not necessarily hear back from Facebook about its decisions.)
Facebook can also improve the enforcement process by ensuring that reports of certain abuse — like harassment, nude images, and bullying — get priority review over others, such as spam.
Of course, Facebook is not the only company reviewing user violations. Small startups that cannot afford to hire review staff can instead recruit users to help them enforce community norms. The multiplayer online game League of Legends has enlisted its players to help address abusive behavior, notably harassment and bigoted epithets, with much success. With a little incentive and some oversight, trusted users can be effective enforcers of a site's community norms.
Bottom line: Facebook needs to start explaining its decisions when users file complaints, no matter the result. Ali should have been told whether Facebook viewed what happened to her as a violation, whether it would take the content down, and what the next step would be. And to ensure the fairness of the process, Facebook should not only notify users of decisions but also permit them to appeal. When people perceive a process to be fair, they are more inclined to accept its results.
The key takeaway of Ali's case is that Facebook and its peers need to be more transparent and accountable to users to engender public support. They might not care about doing the right thing for the right reasons (indeed, they may enforce safety policies to keep advertisers or shareholders happy), but clear policies, a means of review, and transparent enforcement decisions will help protect users from destructive abuse, no matter the inspiration.
Danielle Citron is the Lois K. Macht Research Professor of Law at the University of Maryland Francis King Carey School of Law and author of the forthcoming book Hate Crimes in Cyberspace. This article is part of Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate.