Documentary

hessian.AI researchers featured in ARD documentary on AI systems safeguarded against abuse imagery

In the ARD documentary “Gefährliche Intelligenz · Kindesmissbrauch mit KI” (Dangerous Intelligence – Child Abuse with AI), hessian.AI researchers explain how artificial intelligence can be designed so that it does not generate abusive imagery.

On 1 December 2025, ARD broadcast the documentary “Gefährliche Intelligenz · Kindesmissbrauch mit KI”, which addresses the rapid increase in AI-generated depictions of sexualized violence against children. The film shows how offenders misuse freely available AI models to create highly realistic images within seconds, and what challenges this poses for law enforcement, platform operators and child protection.

In the documentary, hessian.AI researchers Patrick Schramowski and Prof. Kristian Kersting (Co-Director of hessian.AI) appear as experts. They explain how AI systems can be trained and built in such a way that sensitive or illegal content – such as depictions of sexualized violence against children – is not generated in the first place, and how technical safeguards can make abusive behaviour more difficult.

One example of such safety mechanisms is LlavaGuard, a VLM-based safety framework for visual content that was presented at the International Conference on Machine Learning (ICML) 2025.

LlavaGuard uses vision-language models to automatically analyse images and compare them against adaptable safety policies, assessing whether visual content complies with predefined safety standards. In the reported experiments, it outperformed existing safety solutions in both accuracy and flexibility.
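To illustrate the idea of checking a model's verdict against an adaptable policy, here is a minimal Python sketch. The policy table, category names, and JSON verdict format below are illustrative assumptions for this example, not LlavaGuard's actual taxonomy or implementation; in the real system, policies are natural-language descriptions that the vision-language model reads as part of its prompt.

```python
import json

# Hypothetical deployment policy: which flagged categories are tolerated.
# These category names are made up for illustration.
SAFETY_POLICY = {
    "hate_or_harassment": False,
    "violence_or_harm": False,
    "weapon_depictions": True,  # e.g. permitted on a news platform
}

def is_compliant(model_output: str, policy: dict) -> bool:
    """Check a model's JSON safety verdict against an adaptable policy.

    Assumes the model emits a JSON object with "rating" and "category"
    fields (an assumed, simplified verdict format).
    """
    verdict = json.loads(model_output)
    if verdict["rating"].lower() == "safe":
        return True
    # Content flagged as unsafe is compliant only if its category
    # is explicitly allowed by this deployment's policy.
    return policy.get(verdict["category"], False)

# Example verdicts as such a model might emit them:
print(is_compliant('{"rating": "Safe", "category": "none"}', SAFETY_POLICY))
print(is_compliant('{"rating": "Unsafe", "category": "violence_or_harm"}', SAFETY_POLICY))
print(is_compliant('{"rating": "Unsafe", "category": "weapon_depictions"}', SAFETY_POLICY))
```

The key design point is that the same model output can be judged differently under different policies, which is what makes the safeguard adaptable to a given platform's rules.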

📺 ARD documentary:

Gefährliche Intelligenz · Kindesmissbrauch mit KI (ARD Mediathek)

📄 More on LlavaGuard:

LlavaGuard: An Open VLM-based Framework for Safeguarding Vision Datasets and Models