The Internet Watch Foundation (IWF) is concerned that the prevalence of AI-generated child sexual abuse material (CSAM) on the internet may continue to rise, spurred by advances in AI tools.
The safety watchdog has indicated that as generative AI tools become more sophisticated, it has become harder to distinguish real images and videos from AI-generated fakes. According to the organization, AI-generated images seen in 2024 are more “realistic” than the material first encountered in 2023.
Internet Watch Foundation sees an increase in AI-generated CSAM
The IWF is worried that more AI-generated CSAM will find its way onto the internet as perverts take advantage of advances in AI tech, which is making the tools easier to use and more widely available. The safety watchdog has already noted an increase in AI-generated CSAM on the internet.
The organization has already noted cases where existing CSAM was manipulated, while other cases involved adult pornography with a child’s face superimposed on the footage. According to the IWF, some of the videos were about 20 seconds long.
Dan Sexton, chief technology officer at the IWF, said more AI-generated CSAM is likely to emerge if AI video tools follow the same trajectory as AI image generators, given the fast pace at which the technology is advancing.
“I would tentatively say that if it follows the same trends, then we will see more videos,” he said.
Sexton added that future AI videos are also likely to be of “higher quality and realism.”
According to IWF analysts, most of the videos the organization reviewed on a dark web forum used by pedophiles were partial deepfakes: freely available AI models had been used to superimpose images, including those of known CSAM victims, onto existing CSAM videos and adult pornography.
The organization found nine such videos, while a small number of fully AI-generated videos were of more basic quality.
The watchdog wants to criminalize the production of CSAM
The IWF is now pushing for laws that criminalize the production of such material, including the creation of guides for making it, as well as the development of tools and platforms that enable the creation of CSAM.
This comes after a study of a single dark web forum, carried out this year, found 12,000 new AI-generated images posted over a one-month period.
According to the IWF, nine out of 10 of those images were realistic enough that they could be prosecuted under the same UK laws covering real CSAM.
Last year, the National Center for Missing and Exploited Children (NCMEC) said it had received numerous reports of perverts using AI in various ways, such as entering text prompts to create child abuse imagery. In some instances, predators altered previously uploaded pictures to make them sexually explicit and abusive.
IWF CEO Susie Hargreaves highlighted the need to govern generative AI tools to limit the proliferation of CSAM.
“Without proper controls, generative AI tools provide a playground for online predators to realize their most perverse and sickening fantasies,” said Hargreaves.
Hargreaves added that the IWF was beginning to see more AI-generated CSAM being shared and sold on commercial child sexual abuse sites.
According to The Guardian, another organization, which operates a hotline for reporting abuse, said it had found offenders selling AI-generated CSAM on online platforms.