Facebook says it removed 8.7M child exploitation posts with new machine learning tech

Facebook announced today that it removed 8.7 million pieces of content last quarter that violated its rules against child exploitation, thanks to new technology. The new AI and machine learning tools, which the company developed and implemented over the past year, removed 99 percent of those posts before anyone reported them, said Antigone Davis, Facebook’s global head of safety, in a blog post.

The new technology examines posts for child nudity and other exploitative content as they are uploaded and, if necessary, reports photos and accounts to the National Center for Missing and Exploited Children. Facebook had already been using photo-matching technology to compare newly uploaded photos with known images of child exploitation and revenge porn, but the new tools are meant to prevent previously unidentified content from being disseminated through its platform.
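The photo-matching step described above is commonly implemented with perceptual hashing, where near-duplicate images produce similar fingerprints that can be compared against a database of known material. The sketch below illustrates that general idea in Python; it is not Facebook’s actual system, and the choice of the imagehash library, the sample hash value, the distance threshold, and the function names are assumptions made purely for illustration.

```python
# Illustrative sketch of hash-based photo matching at upload time.
# NOT Facebook's implementation: library choice (imagehash), threshold,
# sample hash, and function names are assumptions for demonstration only.
from PIL import Image
import imagehash

# Hypothetical database of perceptual hashes of previously identified images.
KNOWN_HASHES = {
    imagehash.hex_to_hash("f0e4c2d1a3b59687"),
}

HAMMING_THRESHOLD = 5  # small distances indicate near-duplicate images


def matches_known_image(path: str) -> bool:
    """Return True if the uploaded image is a near-duplicate of a known image."""
    uploaded_hash = imagehash.phash(Image.open(path))
    return any(uploaded_hash - known <= HAMMING_THRESHOLD for known in KNOWN_HASHES)


def review_upload(path: str) -> str:
    """Route an upload: block known matches, otherwise queue for classification."""
    if matches_known_image(path):
        return "blocked_and_reported"  # e.g., escalated for mandatory reporting
    # Previously unseen content would go to an ML classifier or human review here.
    return "queued_for_classification"
```

Hash matching only catches images that are already known; the classification step sketched in `review_upload` stands in for the kind of machine-learning model that would be needed to flag new, previously unidentified content.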

The technology isn’t perfect, with many parents complaining that innocuous photos of their kids have been removed. Davis addressed this in her post, writing that in order to “avoid even the potential for abuse, we take action on nonsexual content as well, like seemingly benign photos of children in the bath” and that this “comprehensive approach” is one reason Facebook removed as much content as it did last quarter.

Critics argue that Facebook’s moderation technology is neither comprehensive nor accurate enough. In addition to family snapshots, it has been criticized for removing content like the iconic 1972 photo of Phan Thi Kim Phuc, known as “Napalm Girl,” fleeing naked after suffering third-degree burns in a South Vietnamese napalm attack on her village, a decision COO Sheryl Sandberg apologized for.

Last year, the company’s moderation policies were also criticized by the United Kingdom’s National Society for the Prevention of Cruelty to Children, which called for social media companies to be subject to independent moderation and fines for non-compliance. The launch of Facebook Live has also at times overwhelmed the platform and its moderators (software and human), with videos of sexual assaults, suicides, and murder—including that of an 11-month-old baby by her father—being broadcast.

Moderating social media content, however, is one noteworthy area where AI-based automation can benefit human workers by reducing their exposure to disturbing material. Last month, Selena Scola, a former Facebook content moderator, sued the company, claiming that screening thousands of violent images caused her to develop post-traumatic stress disorder. Other moderators, many of whom are contractors, have also spoken of the job’s psychological toll and said Facebook does not offer enough training, support, or financial compensation.
