Facebook’s CTO: AI is already screening out bad stuff, with more to come
Software handles a significant part of the world’s biggest content-moderation job, says Mike Schroepfer. And it’s poised to take on more heavy lifting.
In 2017, as Facebook was roiled by an array of controversies relating to content on its platforms—from fake news to hate speech—it became clear that the company believed part of the solution involved the oldest information-processing device of them all: the human eyeball. It announced that it would hire thousands of additional moderators to scan users’ posts for material that was offensive, illegal, or otherwise questionable. That was an acknowledgement that technology alone couldn’t tamp down on social networking’s bad actors. And many pundits have declared that there’s no sign AI will ever be up to the task of identifying and eliminating problematic material without human intervention.