‘Horrifying’ explicit AI deepfakes targeting children

Artificial intelligence used for ill poses a terrifying threat to children and young people. Photo: AAP

The rapid development of generative AI is making it easier for predators targeting children to create explicit deepfake videos and images.

Noelle Martin was reverse-searching an image of herself taken at the age of 17 when she came across several sexually explicit deepfakes bearing her face.

Years later, the activist is fighting for a world where people have ownership over their lives, images and bodies, as the risk of perpetrators fabricating intimate images and deepfakes soars.

“I don’t want to fight for, or live in, a world where younger generations have to be careful over what they share,” she says.

Professionals concerned

Toby Walsh is one of more than 1200 concerned professionals working in artificial intelligence and tech-related industries who have signed an open letter pushing for more government-based action to disrupt deepfake output.

“People are used to believing in what they see,” the University of NSW professor told AAP.

“We are fearful that this technology is going to be misused.

“It is already starting to have a negative impact upon our lives, especially in areas like child pornography.”

Deepfake pornography makes up roughly 98 per cent of all deepfake videos online, according to research by identity theft protection group Home Security Heroes cited in the letter.

The same research counted 95,820 deepfake videos online in 2023, a 550 per cent increase on 2019.

The popular uptake of generative AI concerns eSafety Commissioner Julie Inman Grant, who says rapid deployment means it no longer takes copious amounts of computing power or content to create convincing synthetic explicit images of children.

“The rise of synthetic child sexual abuse material poses horrifying issues,” she told AAP.

“The high-speed, large-scale sexualisation of children delays law enforcement’s efforts to help real children since they’re forced to sift through ever-growing amounts of abusive content to differentiate between real and synthetic.”

A team of victim identification specialists working with the Australian Federal Police-led Australian Centre to Counter Child Exploitation reviews reports submitted to police.

Authorities are concerned by the potential for AI to be “used to exploit our most vulnerable”, an AFP spokesperson said.

“By having to devote time to ascertaining whether child abuse material depicts real human victims or AI-generated material, specialists have less time available to identify and help real victims of child sexual exploitation.”

The centre’s research shows just 52 per cent of parents and carers talk to their children about online safety, a figure the centre says is not high enough.

Only three per cent of that group list online grooming as a concern. The centre also found children experiencing online sexual abuse may be reluctant to speak up for fear of being punished.

“It is important (for guardians) to create an environment where their child feels like they can come to them if they ever need to,” the spokesperson said.

The AFP charged more than 180 people with child exploitation-related offences in 2022-23.

Some 40,232 reports about explicit images and videos of children and young people were made to the centre’s Child Protection Triage Unit over the same period.

Must do more

Noelle Martin says that governments, regulators and police bear the greatest responsibility in tackling the issue and must do more to hold perpetrators and digital platforms accountable.

“The eSafety Commissioner (historically) has an abysmal track record of failing to use their powers in any meaningful way,” she said.

“Unless we have a regulator who is willing and prepared, we are going nowhere fast.”

Thirteen companies covering 27 different services have been issued notices by the eSafety Commissioner over harms including child sexual abuse material, sexual extortion and the safety of recommender systems on social platforms.

But regulation can only take us so far, Ms Inman Grant says, arguing that children need to feel reassured they can reach out to an appropriate adult for guidance and support if someone they don’t know contacts them online.

Most of the online grooming that eSafety investigators uncover is happening behind closed doors and in homes.

“Our advice to parents and carers is to be regular, active participants in their children’s online and offline lives,” Ms Inman Grant said.

“Make time to co-view and play the online games they love to play, like you’d make time to kick a ball or play a board game.

“Just as we discuss school, friends and sport with our children, have regular open conversations about what they’re doing online.”

While it is often assumed that children are targeted by strangers on the internet, the AFP’s ThinkUKnow program warns potential offenders can also be someone familiar to the child or their family.

The education program addresses online child sexual exploitation and encourages potential victims to seek help.

“By opening discussion, we reduce the stigma and build awareness so survivors are more likely to seek support and offenders will find it harder to hide,” the AFP spokesperson said.

Lifeline 13 11 14

Kids Helpline 1800 55 1800 (for people aged 5 to 25)

—AAP
