Facebook might be taking its bullying problem more seriously than you think. Rather than hire more humans to flag troubling content, it plans to develop AI with human-level intelligence to do the job. An official company blog post today, on the subject of content moderation, laid out a road map for machine learning solutions to bullying on Facebook:

One potential answer is an approach that Facebook Chief AI Scientist Yann LeCun has been discussing for years: self-supervision. Instead of relying solely on data that's been labeled for training purposes by humans, or even on weakly supervised…
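To make the self-supervision idea concrete: the training labels are derived from the data itself rather than written by human annotators. A minimal sketch, assuming a masked-word prediction setup; the helper function and example sentence below are illustrative, not Facebook's actual pipeline:

```python
def make_masked_examples(sentence, mask_token="[MASK]"):
    """Turn one unlabeled sentence into (input, target) training pairs.

    No human labeling is involved: each word in turn is hidden, and the
    model's job is to predict it from the surrounding context. The mask
    token name is a common convention, chosen here for illustration.
    """
    words = sentence.split()
    examples = []
    for i, target in enumerate(words):
        masked = words[:i] + [mask_token] + words[i + 1:]
        examples.append((" ".join(masked), target))
    return examples

# One unlabeled sentence yields one training pair per word.
pairs = make_masked_examples("report abusive posts quickly")
for masked_input, target in pairs:
    print(f"{masked_input!r} -> {target!r}")
```

Running this on the four-word sentence produces four (input, target) pairs, such as `'[MASK] abusive posts quickly' -> 'report'`. Scaled up to billions of posts, this is how a model can learn language structure without anyone labeling the data.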
This story continues at The Next Web