Having worked as one of Facebook's multitude of content moderators, I was not surprised by the recent buzz over large social media platforms and political interference. While contracting for Facebook's Spam team, we saw a plethora of grisly content, from child pornography to graphic decapitations.
Understandably, given the myriad users exploiting Facebook, Twitter and other platforms to share obviously illicit material, skimming for 'fake news' sat last on the companies' priority list. Imagine sifting through thousands of tickets daily – what stands out more? A photo of a woman doing unspeakable things with a horse, or a legitimate-enough-looking news article that may not even be in your language? Unsurprisingly, many content reviewers focus on the lower-hanging fruit: porn.
However, given the billions of users on these platforms, any false political material that slipped under the moderation radar could have contributed to election interference in the US, France or any of the other nations affected by the fake-news fiasco. As a former moderator myself, I believe everyone who has worked, or still works, to control the amount of filth on these platforms has learned an important lesson about what needs to be moderated beyond sex and blood.
What are your thoughts? If you had the chance to secure social media platforms against fake news, what steps would you take to protect users and safeguard your company's reputation? And do you take any measures of your own to filter out illegitimate information when perusing current events?
For 2019 reading, check out sci-fi debut Apex Five!