Online sexual offences reported in 2024 at three times the previous year's level.
Jugendschutz.net registered 17,630 violations of the protection of children and adolescents on the Internet in 2024. Sexualised violence accounted for 90 per cent of all cases.
Last year, Jugendschutz.net recorded almost 15,700 cases of sexualised violence against children and adolescents on the Internet. This corresponds to an “enormous increase” of more than 10,000 cases compared to the previous year, representatives explained at a press conference in Berlin. In 2023, just over 5,000 cases had been registered in this area; the number has thus more than tripled.
The joint competence center of the federal and state governments attributes the increase primarily to a greater number of reports, for example via its hotline or from international partners, it said in response to a dpa inquiry.
In total, Jugendschutz.net registered 17,630 cases in which the protection of children and adolescents on the Internet was violated; in previous years, violations had averaged around 7,000 cases. According to the annual report for 2024, sexualised violence accounted for 90 per cent of all cases last year, up from 67 per cent of all violations in 2023.
This includes cases of AI-generated nude images. Stefan Glaser, head of Jugendschutz.net, said: “Unfortunately, it is now child’s play to turn everyday photos into nude pictures. Deepnudes are then used to bully or blackmail – a perfidious dimension of digital violence.”
The federal abuse commissioner, Kerstin Claus, called the results “alarming and at the same time not surprising”. Becoming a victim of sexualised violence in the digital space is a “very real danger – not only for young people, but also for children,” Claus told dpa. The federal government must act urgently to implement digital child protection both nationally and internationally.
However, the new Federal Minister for Family Affairs, Karin Prien (CDU), did not hold out the prospect of any new legislative initiatives: “The platforms must finally assume their responsibility,” Prien said at the presentation of the annual report. “The legal framework must be consistently enforced. But technical protective measures are not everything. Children need informed parents, teachers and professionals. They are made strong by media literacy at school and by parents who do not look away and act as role models. Parents are their children's first digital companions.”
In order to better protect young people, Jugendschutz.net is once again calling for reliable age verification when registering for digital services. The competence center still sees major gaps here: often only a date of birth is requested, with no real check of whether the stated age is accurate. “A system that enables reliable and data-minimising age checks would be a central building block for effective youth media protection,” says the report for 2024.
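The gap the report describes can be illustrated with a minimal sketch (all names here are hypothetical, not part of any real registration system): a date-of-birth prompt computes an age from whatever the user types, but has no way to verify the claim.

```python
from datetime import date

def age_from_dob(dob: date, today: date) -> int:
    """Compute age in whole years from a self-reported date of birth."""
    years = today.year - dob.year
    # Subtract one year if the birthday has not yet occurred this year.
    if (today.month, today.day) < (dob.month, dob.day):
        years -= 1
    return years

def dob_gate(dob: date, minimum_age: int = 16,
             today: date = date(2024, 6, 1)) -> bool:
    """A date-of-birth-only gate: accepts any claimed date, verifies nothing."""
    return age_from_dob(dob, today) >= minimum_age

# A 12-year-old who simply types an earlier birth year passes the gate:
print(dob_gate(date(2012, 1, 1)))  # real age 12 -> False
print(dob_gate(date(2000, 1, 1)))  # claimed age 24 -> True
```

Any false date passes, which is exactly why the report calls for verification that goes beyond querying a birth date.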
This is where the airis:protect service can help: it detects policy violations and unauthorized content on pages and ensures that such content is not visible.
Our image, video and text moderation service using artificial intelligence (AI) provides an automated solution for reviewing and evaluating content on platforms, websites or social networks. This service helps to ensure compliance with guidelines, legal requirements and ethical standards.
Features of an AI-based moderation service to prevent sexual offences
Image moderation
- Inappropriate content detection: Identification of nudity, violence, hate symbols, drugs, and other problematic elements.
- Analyze text in images (OCR): Automatically analyze text within images to detect offensive or forbidden language.
- Content categorization: Automatically assign images to predefined categories, such as advertisements, memes, user photos.
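The decision layer behind such image checks can be sketched as follows. The classifier is stubbed out here; in production a computer-vision model would supply the label confidences, and the function and threshold names are illustrative only.

```python
# Per-label confidence thresholds above which an image is blocked
# (illustrative values, not a real policy).
BLOCK_THRESHOLDS = {"nudity": 0.80, "violence": 0.85, "hate_symbol": 0.70}

def classify_image(image_id: str) -> dict[str, float]:
    """Stub standing in for a computer-vision model (hypothetical)."""
    fake_scores = {
        "img-001": {"nudity": 0.93, "violence": 0.02},
        "img-002": {"meme": 0.88, "violence": 0.10},
    }
    return fake_scores.get(image_id, {})

def moderate_image(image_id: str) -> tuple[str, list[str]]:
    """Return ('blocked'|'allowed', triggered labels) for one image."""
    scores = classify_image(image_id)
    # Labels without a configured threshold can never trigger a block.
    hits = [label for label, conf in scores.items()
            if conf >= BLOCK_THRESHOLDS.get(label, 1.1)]
    return ("blocked" if hits else "allowed", hits)

print(moderate_image("img-001"))  # ('blocked', ['nudity'])
print(moderate_image("img-002"))  # ('allowed', [])
```

Keeping thresholds per label makes it easy to tune sensitivity for each category independently.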
Video moderation
- Frame-by-frame analysis: Checking video content for problematic scenes, such as explicit violence or sensitive topics.
- Audio analysis: Transcription and analysis of spoken text or background noise to detect hate speech, threats, or copyright infringement.
- Motion detection: Detection of specific actions, such as aggressive gestures or dangerous behavior.
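Frame-by-frame analysis typically means sampling frames at a fixed interval and flagging the video if any sampled frame crosses a threshold. A minimal sketch (the per-frame scores would come from a vision model; the values below are made up):

```python
def sample_timestamps(duration_s: float, interval_s: float = 1.0) -> list[float]:
    """Timestamps (seconds) at which frames are extracted for analysis."""
    t, stamps = 0.0, []
    while t < duration_s:
        stamps.append(round(t, 3))
        t += interval_s
    return stamps

def screen_video(frame_scores: dict[float, float],
                 threshold: float = 0.9) -> list[float]:
    """Return timestamps of frames whose score crosses the threshold."""
    return [t for t, score in sorted(frame_scores.items()) if score >= threshold]

stamps = sample_timestamps(5.0, interval_s=2.0)  # [0.0, 2.0, 4.0]
scores = {0.0: 0.05, 2.0: 0.95, 4.0: 0.10}      # illustrative model output
print(screen_video(scores))                      # [2.0]
```

Returning the offending timestamps rather than a bare yes/no lets a human reviewer jump straight to the flagged scenes.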
Text moderation
- Language analysis: Analysis of wording, tone, and semantics to identify offensive, discriminatory, or prohibited content.
- Spam and phishing detection: Automatically block text that contains links to fraudulent websites or offensive content.
- Tone and Context Analysis: Understanding irony, sarcasm, and cultural nuances to accurately detect abuse.
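The simplest layer of text screening is rule-based: a blocklist plus a link check, as in the sketch below. Production systems would put an ML toxicity model on top; the word list, domains, and function names here are purely illustrative.

```python
import re

BLOCKED_WORDS = {"scam", "idiot"}            # illustrative blocklist
URL_PATTERN = re.compile(r"https?://(\S+)", re.IGNORECASE)
TRUSTED_DOMAINS = {"example.org"}            # illustrative allowlist

def screen_text(message: str) -> list[str]:
    """Return the reasons a message would be held for review."""
    reasons = []
    words = set(re.findall(r"[a-z']+", message.lower()))
    if words & BLOCKED_WORDS:
        reasons.append("blocked_word")
    for link in URL_PATTERN.findall(message):
        domain = link.split("/")[0].lower()
        if domain not in TRUSTED_DOMAINS:
            reasons.append("untrusted_link")
    return reasons

print(screen_text("You idiot, click http://evil.test/win"))
# ['blocked_word', 'untrusted_link']
print(screen_text("See https://example.org/docs"))  # []
```

Collecting reasons instead of a single boolean keeps the decision auditable for moderators.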
Benefits of AI moderation
- Speed: Scanning millions of pieces of content in real time.
- Accuracy: Minimizing human error and consistently applying moderation policies.
- Scalability: Can handle large amounts of data with ease, regardless of platform size.
- Personalization: Customizable moderation rules based on the specific needs of the platform or region.
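Customizable moderation rules can be sketched as per-platform policy overrides merged onto a default policy; the platform names and values below are invented for illustration.

```python
DEFAULT_POLICY = {"nudity_threshold": 0.8, "allow_links": True}

# Tenant- or region-specific overrides (illustrative).
PLATFORM_POLICIES = {
    "kids-forum": {"nudity_threshold": 0.3, "allow_links": False},
    "art-site":   {"nudity_threshold": 0.95},
}

def policy_for(platform: str) -> dict:
    """Merge a platform's overrides onto the default policy."""
    return {**DEFAULT_POLICY, **PLATFORM_POLICIES.get(platform, {})}

print(policy_for("kids-forum"))  # {'nudity_threshold': 0.3, 'allow_links': False}
print(policy_for("art-site"))    # {'nudity_threshold': 0.95, 'allow_links': True}
```

One moderation engine can then serve very different communities simply by loading a different policy.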
Applications
- Social networks: Moderation of user content to create safe communities.
- E-commerce platforms: Review reviews, images, and descriptions for rule violations.
- Gaming platforms: Protection against insults and harassment in chats and forums.
- Education and streaming: Control of content in online courses and live broadcasts.
Technology
- Machine learning: Models are trained with huge data sets to understand different types of content.
- Natural Language Processing (NLP): Understanding and processing of texts and language.
- Computer Vision: Visual AI recognizes content, colors, shapes, and symbols in images and videos.
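How these components fit together can be sketched as a simple dispatcher: content is routed by type to the matching analyzer, and the score is turned into one decision. The analyzers here are stubs standing in for real computer-vision and NLP models.

```python
def analyze_image(item: dict) -> float:
    """Stub for a computer-vision model's risk score (hypothetical)."""
    return item.get("nudity_score", 0.0)

def analyze_text(item: dict) -> float:
    """Stub for an NLP model's risk score (hypothetical)."""
    return 1.0 if "scam" in item.get("text", "").lower() else 0.0

ANALYZERS = {"image": analyze_image, "text": analyze_text}

def moderate(item: dict, threshold: float = 0.8) -> str:
    """Route one content item to its analyzer and decide."""
    score = ANALYZERS[item["type"]](item)
    return "blocked" if score >= threshold else "allowed"

print(moderate({"type": "image", "nudity_score": 0.9}))    # blocked
print(moderate({"type": "text", "text": "great video!"}))  # allowed
```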
With our AI-based moderation service, platforms can create a safe environment, protect their users, and avoid legal consequences.
Want to know more? Feel free to contact us via [email protected] – we will create a concept or adapt the tools individually to your needs!