Deepfake child abuse material now features “extreme realism,” experts warn
This article summarizes a new Internet Watch Foundation (IWF) report highlighting a rapid rise in AI-generated child sexual abuse material (CSAM), driven largely by deepfake technology.
The report finds that AI is significantly increasing both the scale and severity of abusive content. In 2025 alone, more than 8,000 AI-generated images and videos depicting child sexual abuse were identified, showing how quickly the problem is growing.
A key concern is how realistic this material has become. Many AI-generated images and videos are now highly convincing and difficult to distinguish from recordings of real abuse, with a large share classified in the most extreme category.
The report also highlights that such content is no longer limited to hidden areas of the internet. It is spreading across both dark web communities and mainstream online platforms, making it more accessible and harder to control.
Another major issue is the accessibility of AI tools: offenders can now create harmful material with relatively little technical skill, enabling faster production and wider distribution of abusive content.
Experts warn that this trend could overwhelm existing moderation and law enforcement systems, and that it causes serious harm to victims, especially when real children's likenesses are used in deepfake content.
Overall, the report calls for urgent action, including stronger regulation, improved safeguards in AI systems, and greater international cooperation to combat the growing threat.