Deepfake images app risks ‘harm on steroids’

The release of a new AI platform that will make it easier for Australians to generate sexualised deepfake images will put “potential online harms on steroids”, says E-Safety Commissioner Julie Inman Grant.

Words by Mackenzie Scott and Natasha Bita for The Australian

Speaking on the Gold Coast, Ms Inman Grant said the release of OpenAI’s Sora 2 platform, which can create hyper-realistic 20-second videos in seconds, would likely see the number of complaints about generated images rise.

“We have seen a doubling of deepfake image-based abuse reports to us over the past 18 months,” she said.

“We are seeing deepfake abuse incidents at least once a week in Australian schools.

“It is still a small proportion of our image-based abuse reports, but it is just the tip of the iceberg.

“This is a real cause for concern, this is putting real potential online harms on steroids.”

Julie Inman Grant, Australia’s E-Safety Commissioner, says the release of OpenAI’s Sora 2 platform will likely see complaints about generated images rise. Picture: NewsWire / Martin Ollman

The platform won’t be captured by the Albanese government’s world-first social media ban for teens, which comes into effect on December 10.

The E-Safety Commission is taking action against a UK-based company responsible for creating some of the most popular undressing apps.

A NSW parliamentary inquiry into the impacts of harmful pornography also warned on Friday of the dangers of AI-generated image-based abuse. It found children as young as 10 were seeing pornography online.

This was causing “an increase in sexual aggression”, including children sexually attacking siblings or classmates.

“Children are accessing pornography while still in primary school,” the report states. “The committee is concerned about the prevalence of pornography in the lives of many young people and … themes of violence and misogyny; real or implied lack of consent; racism, homophobia and transphobia; and illegal themes such as child abuse and incest.

“While some children have accessed pornography from as young as eight or nine, the evidence is clear that the majority of young boys will have some experience of pornography by 13, and young girls soon after.”

The report says AI “nudify” apps are causing trauma and distress to children whose photos are taken from social media accounts or school portraits and turned into deepfake images of sexual abuse.

This article first appeared in The Australian as Deepfake images app risks ‘harm on steroids’