Reports indicate a massive uptick in AI-generated CSAM throughout the internet

AI-generated child sexual abuse material (CSAM) has been flooding the internet. Researchers at organizations like the Internet Watch Foundation and the National Center for Missing & Exploited Children warn that this new AI-created CSAM is nearly indistinguishable from the real thing.
Let’s go over some numbers. The Internet Watch Foundation, a nonprofit that works to identify and remove child sexual abuse imagery online, has identified 1,286 AI-generated videos so far this year, compared with just two videos identified in the first half of 2024. That is a more than 600-fold increase.
Developments in artificial intelligence (AI) come with a range of benefits, including supporting learning and innovation. There is, however, growing concern for how AI can also be misused to create and share child sexual abuse material (CSAM), referred to as AI-CSAM. In… pic.twitter.com/lgfRQNBk8N
— Internet Watch Foundation (IWF) (@IWFhotline) July 8, 2025
The National Center for Missing & Exploited Children reaffirms those statistics. It told The New York Times that it has received 485,000 reports of AI-generated CSAM, including still images and videos, in the first half of 2025, compared with 67,000 for all of 2024. That’s another massive uptick.
“It’s a canary in the coal mine,” said Derek Ray-Hill, interim chief executive of the Internet Watch Foundation. “There is an absolute tsunami we are seeing.”
This technology is constantly improving, so the videos and images have become more realistic. The Internet Watch Foundation found an internet forum in which users were praising how realistic the new videos were. Reporting suggests that this content is distributed through the dark web, making it harder for law enforcement agencies to identify the offenders.
It’s worth remembering how AI image generators work: they are trained on real images and videos. The Times reports that much of this new glut of AI-generated content incorporates real CSAM that has been repurposed by the models. Some of the material even uses real photos of children scraped from school websites and social media.
The issue dates back to the early days of this technology. In 2023, researchers at the Stanford Internet Observatory found hundreds of examples of CSAM in a public dataset used to train the image generator Stable Diffusion. Stability AI says it has since introduced safeguards to improve safety standards and “is deeply committed to preventing the misuse of our technology, particularly in the creation and dissemination of harmful content, including CSAM.”
That scrutiny did lead other companies to report AI-generated CSAM to the National Center for Missing & Exploited Children. Amazon reported 380,000 instances of AI-generated CSAM in the first half of this year, all of which it took down. OpenAI has made similar reports.
NCMEC Applauds the California State Legislature for Passing AB 1831 and looks forward to it being signed into law.
NCMEC supports AB 1831 because it addresses gaps in California’s legal remedies for child victims of Generative AI CSAM. We are heartened to see states move… pic.twitter.com/qZt1mgD7Eo
— National Center for Missing & Exploited Children (@NCMEC) September 4, 2024
Courts have been slow to catch up with this tech. The DOJ made its first known arrest last year of a man suspected of creating AI-generated CSAM. A UK man was recently sentenced to prison for using AI to generate the foul images, which he sold.
“The Department of Justice views all forms of AI-generated CSAM as a serious and emerging threat,” Matt Galeotti, head of the Justice Department’s criminal division, told NYT.
It’s worth noting that despite the alarming uptick, AI-generated content still represents a mere fraction of all CSAM identified by authorities and watchdog organizations. For instance, of all the material the Internet Watch Foundation identified in the first half of 2024, just two videos, as previously noted, were AI-generated.