UK police have discovered that pedophiles are using AI technology to generate and distribute lifelike child sexual abuse material. A BBC investigation revealed that individuals are creating these disturbing images with Stable Diffusion, AI image-generation software originally intended for art and graphic design. The images are then sold via subscriptions on mainstream content-sharing platforms such as Patreon and promoted on the Japanese site Pixiv.
The National Police Chiefs’ Council (NPCC) has described the situation as “outrageous,” condemning platforms that profit without taking moral responsibility. The NPCC’s child safeguarding lead, Ian Critchley, stressed the dangers of synthetic child abuse imagery, warning that it could move an offender from thought to actual harm against a real child.
Octavia Sheepshanks, a freelance researcher, has been investigating the issue for several months. According to her, some creators are producing child abuse images on an industrial scale, following a three-stage process: generating the images with AI software, promoting them on platforms such as Pixiv, and then using links to direct customers to more explicit images on Patreon.
Critics say that Pixiv, a social media platform popular among manga and anime artists, provides a loophole because Japan does not outlaw sharing sexualized cartoons and drawings of children. Pixiv responded that it had banned all photo-realistic depictions of sexual content involving minors from 31 May and was investing substantial resources to counter AI-related issues.
Patreon, valued at $4bn and boasting over 250,000 creators, has also come under fire. Investigations revealed accounts offering AI-generated obscene images of children for sale, with tiered pricing based on material types. Patreon maintained that it had a “zero tolerance” policy towards content involving minors and was making efforts to counteract the surge in AI-generated harmful content.
Stability AI, the UK company behind Stable Diffusion, stated it strongly supports law enforcement efforts and prohibits misuse of its platforms for illegal or nefarious purposes. GCHQ, the UK government’s intelligence agency, has voiced its support for such law enforcement, noting the importance of staying ahead of threats such as AI-generated content.
However, concerns remain, particularly that realistic AI images could hamper efforts to identify real victims of abuse. The NSPCC echoed this concern and called on tech companies to act, arguing they cannot feign ignorance about the misuse of their products. The UK government responded by pointing to the upcoming Online Safety Bill, which will impose stringent obligations on companies to combat all forms of online child sexual abuse.