AI tool recognizes child abuse images with 99% accuracy

Developers of a new artificial-intelligence-based tool claim that it detects images of child abuse with almost 99 percent accuracy.

Safer is a tool developed by Thorn, a non-profit organization, to help businesses that do not have their own filtering systems detect and remove these images.

According to the UK's Internet Watch Foundation, reports of child abuse imagery rose by 50 percent during the COVID-19 lockdown. In the 11 weeks from 23 March, its hotline received 44,809 reports of images, up from 29,698 over the same period last year. Many of these images came from children who were spending more time online and were coerced into sharing images of themselves.

Andy Burroughs, Head of Child Online Safety at the NSPCC, recently told the BBC: “The harm could have been reduced if social media platforms had invested smarter in technology, investing in safer design features heading into a crisis.”

Safer is one such tool: it helps platforms flag child abuse content quickly so that the harm done can be reduced.

Safer's detection services include:

  • Image Hash Matching: The flagship service, which generates cryptographic and perceptual hashes for images and compares them against known CSAM hashes (see the Python sketch after this list). At the time of publication, the database includes 5.9 million hashes. Hashing takes place within the client's infrastructure to preserve user privacy.
  • CSAM Image Classifier: A machine learning classification model, developed by Thorn and used within Safer, that returns a prediction of whether a file contains CSAM. The classifier was trained on datasets totaling hundreds of thousands of images, including adult pornography, CSAM, and benign imagery, and can help identify potentially new and unknown CSAM.
  • Video Hash Matching: A service that generates cryptographic and perceptual hashes for video scenes and compares them against hashes of suspected CSAM scenes. At the time of publication, the database includes more than 650,000 hashes of suspected CSAM scenes.
  • SaferList for Detection: A service that lets Safer customers leverage the Safer community's data by matching against hash sets contributed by other Safer customers to broaden their detection efforts. Customers can customize which hash sets they wish to include.
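
Conceptually, the hash-matching step can be illustrated with a short Python sketch. The code below is not Thorn's implementation: it assumes the Pillow and ImageHash libraries, and the `KNOWN_SHA256` and `KNOWN_PHASHES` values are placeholders standing in for a real database of known hashes.

```python
# Minimal sketch of cryptographic + perceptual hash matching.
# Not Thorn's implementation; hash values below are placeholders.
import hashlib

from PIL import Image
import imagehash

KNOWN_SHA256 = {"0" * 64}                                 # placeholder exact-match hashes
KNOWN_PHASHES = [imagehash.hex_to_hash("8f373714acfcf4d0")]  # placeholder perceptual hashes


def match_image(path: str, max_distance: int = 8) -> bool:
    """Return True if the image matches a known hash exactly (cryptographic)
    or approximately (perceptual, within a Hamming-distance threshold)."""
    with open(path, "rb") as f:
        sha256 = hashlib.sha256(f.read()).hexdigest()
    if sha256 in KNOWN_SHA256:
        return True  # byte-identical file is already known

    # Perceptual hashes survive re-encoding and resizing; compare by Hamming distance.
    phash = imagehash.phash(Image.open(path))
    return any(phash - known <= max_distance for known in KNOWN_PHASHES)


if __name__ == "__main__":
    print(match_image("example.jpg"))
```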

However, the problem is not limited to flagging content. It is well documented that moderators on social media platforms, exposed day in and day out to the most disturbing content posted online, often require therapy or even suicide-prevention support.

Thorn says Safer is designed with moderators' well-being in mind: to that end, flagged content is automatically blurred (the company says this currently works only for images).
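
Thorn does not describe how the blurring is implemented; as a rough sketch of the general idea, a flagged image could be heavily blurred before a reviewer ever sees it, for example with Pillow's Gaussian blur (the function name and radius below are assumptions for illustration).

```python
# Illustrative only: blur a flagged image for moderator review using Pillow.
from PIL import Image, ImageFilter


def blur_for_review(src_path: str, dst_path: str, radius: int = 25) -> None:
    """Save a heavily blurred copy of a flagged image for moderator review."""
    img = Image.open(src_path)
    img.filter(ImageFilter.GaussianBlur(radius=radius)).save(dst_path)


blur_for_review("flagged.jpg", "flagged_blurred.jpg")
```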

Safer also offers developer APIs that “are designed to expand the shared knowledge of child abuse content by adding hashes, scanning against hashes from other industries, and submitting false-positive feedback.”
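
That description suggests a straightforward REST-style workflow. The snippet below is purely hypothetical: the endpoint URLs, JSON fields, and authentication header are invented for illustration and do not document Safer's actual API.

```python
# Purely hypothetical sketch of contributing a hash and reporting a false positive.
# Endpoint URLs, payload fields, and the auth header are invented for illustration;
# consult Safer's own documentation for the real API.
import requests

API_BASE = "https://api.example-safer-service.com/v1"  # placeholder URL
HEADERS = {"Authorization": "Bearer <token>"}           # placeholder credential

# Contribute a newly verified hash to the shared knowledge base.
requests.post(f"{API_BASE}/hashes", headers=HEADERS, json={
    "phash": "8f373714acfcf4d0",   # placeholder perceptual hash
    "label": "csam",
})

# Report a false positive so the match can be reviewed and removed.
requests.post(f"{API_BASE}/feedback", headers=HEADERS, json={
    "hash": "8f373714acfcf4d0",
    "verdict": "false_positive",
})
```
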
Flickr is currently one of Thorn’s best-known customers. Using Safer, Flickr discovered an image of child abuse on its platform; the subsequent law enforcement investigation led to the recovery of 21 children, aged between 18 months and 14 years, and the arrest of the perpetrator.

Safer is currently available to any company operating in the US. Thorn plans to expand to other countries next year, after adapting the tool to each country's national reporting requirements.

You can read more about the tool and how to get started with it here.
