<aside> 💡 These are some of the challenges identified during the project, commonly faced by actors developing and researching AI-related initiatives, and by experts on child-rights issues.

</aside>

Where is useful data, and what is required to access it?

What data is available in society that can provide valuable insight into the issue of child sexual abuse? What can we learn from, for example, court cases, chat logs and health data? What partnerships are needed to access the data? How do we ensure that it is of sufficiently high quality? What is the risk of bias in the data?

There is a lack of timely and easily accessible statistics on child sexual abuse. Can AI facilitate the search of public documents such as judgments and decisions, and reveal extent, trends and patterns? For example: do courts differ in when an act is judged as sexual posing versus rape of a child? How many files are required for a crime to be considered aggravated child pornography? How often do offenders use Snapchat to contact children? What is their modus operandi? How much does child sexual abuse cost society in legal costs and damages alone? It is not only AI that lacks data; all of us do. Policy decisions, legislation and funding are all based on things we don't actually know.

Privacy and confidentiality issues are often a barrier to effective collaboration between different actors. Could AI be used to pseudonymize, for example, court cases and preliminary investigations, making the data available so that AI systems and researchers can follow conversations and discover patterns? Do the legal conditions for this exist?
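As a minimal sketch of what such pseudonymization might look like: the rule-based function below replaces Swedish personal identity numbers and a supplied name list with stable placeholders, so the same person remains traceable across a document. The patterns and names are purely illustrative; a real system would combine this with a trained named-entity-recognition model, legal review, and manual quality control.

```python
import re

def pseudonymize(text: str, known_names: list[str]) -> str:
    """Replace personal identifiers in a document with stable placeholders.

    A rule-based sketch only; real pipelines would add NER models and review.
    """
    # Swedish personal identity numbers, e.g. 850709-1234 or 19850709-1234
    text = re.sub(r"\b(?:19|20)?\d{6}[-+]\d{4}\b", "[PERSONNUMMER]", text)
    # Number each known name so the same person stays traceable across the
    # document, which is what lets researchers still follow conversations.
    for i, name in enumerate(known_names, start=1):
        text = re.sub(re.escape(name), f"[PERSON_{i}]", text)
    return text

doc = "NN, 850709-1234, contacted Anna Andersson via Snapchat."
print(pseudonymize(doc, ["Anna Andersson"]))
# -> NN, [PERSONNUMMER], contacted [PERSON_1] via Snapchat.
```

Keeping placeholders stable (rather than deleting identifiers outright) is the design choice that preserves the conversational patterns the paragraph above asks about.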

How can AI streamline time-consuming routine tasks to free up time for those working to prevent child sexual abuse? One example is the translation tool DeepL, which uses AI to produce high-quality translations and is user- and integration-friendly; the research assistant Elicit has also shown promising results. Could AI be used to systematize the analysis of grooming chat logs and more quickly identify information and patterns relevant to an investigation?
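A toy sketch of what systematizing chat-log triage could mean in its simplest form: scanning timestamped (sender, message) records for flag terms and returning the hits for an analyst to review. The keyword list and log format here are hypothetical; a real investigative tool would use trained classifiers and vetted lexicons, not a hard-coded list.

```python
# Illustrative flag terms only; real tools would use trained classifiers.
FLAG_TERMS = {"secret", "don't tell", "how old", "send a picture"}

def flag_messages(log: list[tuple[str, str]]) -> list[tuple[int, str, str]]:
    """Return (line number, sender, message) for messages containing a flag term."""
    hits = []
    for i, (sender, message) in enumerate(log, start=1):
        if any(term in message.lower() for term in FLAG_TERMS):
            hits.append((i, sender, message))
    return hits

log = [
    ("A", "Hi! How old are you?"),
    ("B", "13, you?"),
    ("A", "This is our secret, ok?"),
]
for line_no, sender, msg in flag_messages(log):
    print(line_no, sender, msg)
```

Even this crude filtering illustrates the time saving the paragraph asks about: an analyst reads the flagged lines first instead of the whole log.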

How can we work more preventively with offenders?

Can AI improve online treatment for perpetrators and potential perpetrators who want help to stop viewing abuse material? Can AI be used to evaluate treatments, for example by analyzing the number of hours a user is online and their usage patterns over time, or by predicting when patients need more support? Are there solutions in the health sector that can be applied here? Can AI identify which users in darknet forums respond best to which messages of support and treatment?
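As an illustration of the "predicting when patients need more support" idea: the heuristic below flags a patient whose latest week of online hours deviates sharply from their own baseline. The function name, data shape, and threshold are all assumptions for the sketch; real clinical decision support would need validation against outcomes and clinician review.

```python
from statistics import mean, stdev

def needs_extra_support(weekly_hours: list[float], z_threshold: float = 2.0) -> bool:
    """Flag a patient whose latest week deviates sharply from their own baseline.

    A toy heuristic: compare the latest week against the mean and spread of
    the preceding weeks. Not a validated clinical tool.
    """
    baseline, latest = weekly_hours[:-1], weekly_hours[-1]
    if len(baseline) < 2:
        return False  # too little history to judge
    spread = stdev(baseline) or 1e-9  # avoid division by zero on flat history
    return abs(latest - mean(baseline)) / spread > z_threshold

print(needs_extra_support([4.0, 5.0, 4.5, 5.5, 12.0]))  # sharp jump -> True
```

The point of comparing each patient against their own history, rather than a population average, is that usage patterns vary widely between individuals.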

How can stakeholders in the Stella Polaris network identify and address legal and ethical challenges in using AI? Issues of privacy and sensitive data often arise in relation to AI and its application. At the same time, child sexual abuse that is documented and disseminated is itself a massive privacy violation. How do we balance exceptions and potential risks in order to make progress in the fight against sexual abuse?