AI tool recognizes child abuse images with 99% accuracy

The translation of this article was prepared ahead of the start of the "Computer vision" course.


The developers of a new artificial intelligence-based tool claim that it detects images of child abuse with almost 99 percent accuracy.



Safer is a tool developed by Thorn, a non-profit organization, to help businesses that do not have their own filtering systems detect and remove these images.



According to the UK's Internet Watch Foundation, reports of child abuse increased by 50 percent during the COVID-19 quarantine. In the 11 weeks starting 23 March, its hotline received 44,809 reports of images, up from 29,698 over the same period last year. Many of these images came from children who were spending a lot of time on the Internet and were pressured into sharing images of themselves.



Andy Burrows, head of child safety online at the NSPCC, recently told the BBC: "Harm could be reduced if social media companies invested smarter in technology, investing in safer design features going into this crisis."



Safer is one such tool: it helps platforms flag child abuse content quickly to reduce the harm done.



Safer's detection services include:



  • Image Hash Matching: the flagship service, which generates cryptographic and perceptual hashes for images and compares them to known CSAM hashes. At the time of publication, the database included 5.9 million hashes. Hashing happens in the client's infrastructure to preserve user privacy (a code sketch of this approach follows the list).
  • CSAM Image Classifier: a machine-learning classification model, developed by Thorn and used within Safer, that returns a prediction of whether a file is CSAM. The classifier was trained on hundreds of thousands of images, including adult pornography, CSAM, and benign imagery, and can help identify potentially new and previously unknown CSAM.
  • Video Hash Matching: a service that generates cryptographic and perceptual hashes for video scenes and compares them to hashes of suspected CSAM scenes. At the time of publication, the database included more than 650,000 hashes of suspected CSAM scenes.
  • SaferList for Detection: a service that lets Safer customers broaden their detection efforts by matching against hash sets contributed by other Safer customers; each customer chooses which hash sets to include.
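
To make the hash-matching idea concrete, here is a minimal Python sketch. It is only an illustration, assuming the Pillow and imagehash libraries: the hash values, the Hamming-distance threshold, and the in-memory "databases" are invented for the example, and Safer's actual pipeline and hash database are not public.

```python
# Minimal sketch of hash-based image matching. NOT Thorn's implementation;
# the known-hash sets and threshold below are hypothetical stand-ins.
import hashlib

from PIL import Image
import imagehash

# Hypothetical database of perceptual hashes of known flagged images.
KNOWN_PERCEPTUAL_HASHES = {imagehash.hex_to_hash("d1d1d1d1d1d1d1d1")}
# Hypothetical database of exact cryptographic hashes.
KNOWN_SHA256_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
}

# Max bit distance to treat two perceptual hashes as a match (illustrative).
HAMMING_THRESHOLD = 8


def sha256_of_file(path: str) -> str:
    """Cryptographic hash: catches exact byte-for-byte copies of a file."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def perceptual_hash(path: str) -> imagehash.ImageHash:
    """Perceptual hash: robust to re-encoding, resizing, and small edits."""
    with Image.open(path) as img:
        return imagehash.phash(img)


def is_flagged(path: str) -> bool:
    # Exact match on the cryptographic hash first (cheap and unambiguous).
    if sha256_of_file(path) in KNOWN_SHA256_HASHES:
        return True
    # Fuzzy match: small Hamming distance between perceptual hashes.
    candidate = perceptual_hash(path)
    return any(candidate - known <= HAMMING_THRESHOLD
               for known in KNOWN_PERCEPTUAL_HASHES)
```

The cryptographic hash only catches exact copies of a known file, while the perceptual hash survives re-encoding, resizing, and small edits; this is why the service computes both kinds.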


However, the problem is not limited to tagging content. It is well documented that moderators at social media platforms often need therapy, or even suicide-prevention support, after being exposed day in and day out to the most disturbing content posted on the Internet.



Thorn claims Safer is designed with moderators' well-being in mind. To this end, flagged content is automatically blurred (the company says this currently works only for images).
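
The article does not say how the blurring is implemented; a minimal sketch of the general technique, assuming Pillow, might look like this:

```python
# Moderator-protective blurring: show a heavily blurred preview instead of
# the raw image. A generic illustration, not Thorn's implementation.
from PIL import Image, ImageFilter


def blurred_preview(path: str, radius: int = 25) -> Image.Image:
    """Return a heavily blurred copy so moderators never see the raw image."""
    with Image.open(path) as img:
        return img.filter(ImageFilter.GaussianBlur(radius=radius))


# Usage: save a blurred copy for the moderation queue.
# blurred_preview("flagged.jpg").save("flagged_preview.jpg")
```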



Safer also offers APIs to developers that "are designed to broaden the shared knowledge of child abuse content by adding hashes, scanning against other industry hashes, and submitting feedback on false positives."
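
The article names only the API's capabilities, not its endpoints, so the snippet below is purely hypothetical: the base URL, paths, and payload fields are invented placeholders meant to show what such an integration could look like with the requests library.

```python
# Hypothetical client for a Safer-like hash-sharing API. Endpoint paths,
# payloads, and the base URL are placeholders, not Thorn's actual API.
import requests

BASE_URL = "https://api.example-safer-deployment.com"  # placeholder URL
API_KEY = "..."  # credential issued by the service


def add_hash(sha256: str, perceptual: str) -> None:
    """Contribute hashes of confirmed abusive content to the shared set."""
    requests.post(
        f"{BASE_URL}/hashes",
        json={"sha256": sha256, "phash": perceptual},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    ).raise_for_status()


def report_false_positive(match_id: str) -> None:
    """Flag an incorrect match so future detection results improve."""
    requests.post(
        f"{BASE_URL}/matches/{match_id}/false-positive",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    ).raise_for_status()
```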



Flickr is currently one of Thorn's best-known clients. Using Safer, Flickr found an image of child abuse on its platform; the subsequent law enforcement investigation led to the recovery of 21 children, ranging in age from 18 months to 14 years, and the arrest of the perpetrator.



Safer is currently available to any company operating in the US. Thorn plans to expand to other countries next year, after adapting the tool to each country's national reporting requirements.



You can read more about the tool and how to get started here.


