Can Facebook actually keep up with the hate speech and misinformation that pours through the portal? Facebook seems to be working on the misinformation side, but even that effort is coming in fits and starts. After discovering that red flags signaling an article as fake news actually spurred people to click on or share it, Facebook retreated from the flags and now instead lists links below the article to related articles offering a more balanced view.
Now a new investigation from ProPublica shows that Facebook’s policy on hate speech is also having issues. In an analysis of some 900 crowd-sourced posts, reporters at ProPublica found that Facebook fails to enforce its own community standards evenly. A sample of 49 of those posts was sent directly to Facebook, which admitted that in 22 cases its human reviewers had erred, mistakenly flagging frank conversations about sexism or racism as hate speech. Compounding the problem, the company offers no formal appeal process for decisions its users disagree with, so seemingly innocent posts may also get caught in the reviewers’ net.
It is definitely a tough issue, and this year Germany will begin enforcing a law requiring social-media sites (including Facebook, YouTube, and Twitter, but also, more broadly, sites like Tumblr, Reddit, and Vimeo) to remove illegal material, hate speech, and misinformation within 24 hours. Failure to do so could lead to fines of up to 50 million euros, or about $60 million. Is this what should be done here in the US, or is that too strict? Perhaps the topic of policing content is a good dinner-table discussion to have with your teens?