Facebook’s Child Grooming Machine Learning system helped remove 8.7 million abusive images of children

  • 3 min read
  • 26 Oct 2018


On Wednesday, Facebook announced that over the last quarter its content moderators had removed 8.7 million user images of child nudity from the platform, aided by previously undisclosed software that automatically flags such photos.

In 2016, Joaquin Candela, the company’s director of applied machine learning, told Reuters that Facebook was using artificial intelligence to find offensive material: “an algorithm that detects nudity, violence, or any of the things that are not according to our policies.”

A machine learning tool rolled out last year identifies images that contain both nudity and a child, enabling stricter enforcement of Facebook’s ban on photos that show minors in a sexualized context. The newly disclosed software is similar, but it additionally catches users engaged in “grooming,” or befriending minors for sexual exploitation.
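
Facebook has not described the classifier’s internals, but the behavior it describes, flagging only images that score high for both nudity and the presence of a child, can be pictured as two model outputs combined by a threshold rule. The sketch below is purely illustrative: the score fields, thresholds, and flagging rule are assumptions, not Facebook’s actual system.

from dataclasses import dataclass

@dataclass
class ImageScores:
    nudity: float  # hypothetical model confidence that the image contains nudity, in [0, 1]
    minor: float   # hypothetical model confidence that the image contains a child, in [0, 1]

def flag_for_review(scores: ImageScores,
                    nudity_threshold: float = 0.8,
                    minor_threshold: float = 0.8) -> bool:
    # Flag only when BOTH signals are strong; the thresholds are invented
    # for illustration, since Facebook has not published its values.
    return scores.nudity >= nudity_threshold and scores.minor >= minor_threshold

print(flag_for_review(ImageScores(nudity=0.95, minor=0.10)))  # False: adult nudity only
print(flag_for_review(ImageScores(nudity=0.92, minor=0.88)))  # True: queued for human review

Requiring both signals together is what distinguishes this tool from the plain nudity detector Candela described in 2016.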

Facebook’s Global Head of Safety, Antigone Davis, told Reuters in an interview that the “machine helps us prioritize” and “more efficiently queue” problematic content for the company’s trained team of reviewers.
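
Davis’s wording, a machine that helps “prioritize” and “more efficiently queue” content, matches a familiar pattern: flagged items are ordered by a model’s risk score so reviewers see the most urgent cases first. A minimal sketch using Python’s standard library follows; the class, field names, and scores are hypothetical.

import heapq
import itertools

class ReviewQueue:
    # Max-priority queue of flagged content keyed by risk score.
    # heapq is a min-heap, so scores are negated; the counter breaks
    # ties in first-in-first-out order. All details are illustrative.
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def push(self, content_id: str, risk_score: float) -> None:
        heapq.heappush(self._heap, (-risk_score, next(self._counter), content_id))

    def pop_most_urgent(self) -> str:
        _, _, content_id = heapq.heappop(self._heap)
        return content_id

queue = ReviewQueue()
queue.push("post-123", risk_score=0.41)
queue.push("post-456", risk_score=0.97)
print(queue.pop_most_urgent())  # post-456: the highest-risk item is reviewed first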

Under pressure from regulators and lawmakers, Facebook has promised to speed up the removal of extremist and illicit material. Machine learning programs that sift through the billions of pieces of content its users post each day play a crucial role in this plan, though Facebook had not previously disclosed data on child nudity removals. This year, news agencies and advertisers have been among those complaining that Facebook’s automated systems wrongly blocked their posts. Facebook’s rules ban even family photos of lightly clothed children uploaded with “good intentions,” given how others might abuse such images.

Davis said the child safety systems would make mistakes but that users could appeal. “We’d rather err on the side of caution with children,” she said.

Previously, Facebook relied on users or its adult nudity filters to catch abusive posts. Facebook said the new program, which learned from its collection of nude adult photos and photos of clothed children, has led to more removals. It makes exceptions for art and history, such as the Pulitzer Prize-winning photo of a naked girl fleeing a Vietnam War napalm attack.

According to Davis, the child grooming system evaluates factors such as how many people have blocked a particular user and whether that user quickly attempts to contact many children.
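
The article names two behavioral signals: how many people have blocked an account, and whether it quickly contacts many children. One way to picture such a system is a weighted score over those signals that escalates an account past a threshold. The weights, caps, field names, and threshold below are invented for illustration; a production system would use a trained model over far more signals.

from dataclasses import dataclass

@dataclass
class AccountActivity:
    blocks_received: int       # users who have blocked this account
    minors_contacted_24h: int  # distinct minor accounts messaged in a day

def grooming_risk(activity: AccountActivity) -> float:
    # Toy weighted score with capped contributions; every number here
    # is an illustrative assumption, not Facebook's.
    score = 0.03 * min(activity.blocks_received, 20)
    score += 0.02 * min(activity.minors_contacted_24h, 30)
    return min(score, 1.0)

suspect = AccountActivity(blocks_received=15, minors_contacted_24h=25)
print(grooming_risk(suspect) > 0.5)  # True: escalated for human review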

Michelle DeLaune, Chief Operating Officer at the National Center for Missing and Exploited Children (NCMEC), said the organization expects to receive about 16 million child porn tips worldwide this year from Facebook and other tech companies, up from 10 million last year.

DeLaune acknowledged that a crucial blind spot is encrypted chat apps and secretive “dark web” sites where much new child pornography originates.

For example, end-to-end encryption on Facebook-owned WhatsApp prevents machine learning systems from analyzing message content.


