
AI-generated images of Giorgia Meloni wearing lingerie were shocking, but according to one of the world's leading experts on deepfakes, they were also entirely predictable.
Dr Henry Ajder has spent nearly eight years studying the rise of AI-generated sexual content, and his verdict on where we are right now is bleak: it has never been easier to make a deepfake of someone.
The laws aren't stopping it, and the number of people being prosecuted is a 'drop in the ocean' compared to what's actually happening every day.
When asked whether Trump's TAKE IT DOWN Act would have any real effect on people creating these images, Ajder told UNILAD: "Nothing has really changed. If anything, it has got worse."
When Meloni posted the AI-generated images of herself to X this week, she was doing something most victims never get to do - fight back publicly, with the resources and the platform to do so.
But Ajder wants people to understand that what's happened to her is far from an isolated incident targeting the powerful and famous.
The tools that created those images are available on people's phones right now.

"It has never been easier to create deepfakes that non-consensually sexualise people," he said.
There are now dozens of successful 'nudification' tools freely available, many of which run directly in a browser or on a smartphone and need as little as a single photograph to work.
Ajder attributes the explosion in this content to four factors that have converged to create 'the perfect storm'.
Realism has improved dramatically. Efficiency means the tools need far less data than before. Accessibility has transformed the landscape entirely; these tools, Ajder says, are now 'gamified' and stripped of any technical barrier that might once have slowed someone down.
And functionality has expanded far beyond crude face-swapping to include full AI-generated videos, animated images and, in a development Ajder believes has been underplayed, voice cloning.
"People can create synthetic, phone-sex style content using someone's voices," Ajder warned.

When Donald Trump signed the TAKE IT DOWN Act last year, and Italy became the first EU country to criminalise deepfakes, it felt like a turning point.
The TAKE IT DOWN Act is a U.S. federal law signed on May 19, 2025, that criminalises the non-consensual publication of intimate images - including AI-generated deepfakes and 'revenge porn'. It mandates that online platforms implement notice-and-removal procedures to delete such content within 48 hours.
Ajder is careful not to dismiss the legislation - he's equally careful not to overstate it.
The first thing he wants to clear up is what the laws actually cover.
They don't ban the tools - Grok, for instance, hasn't been banned, and the open-source software anyone can download and adapt hasn't been banned either. What's been criminalised is the act of using those tools to generate this kind of content.

What legislation has done, he argues, is send a message. "It has helped signal more clearly to the general public that this is a form of sexual offending - you are a sex offender if you do this."
For example, in the UK, convictions can result in custodial sentences and a place on the sex offenders' register. That matters, he says, because communities creating this content have long treated it with a disturbing flippancy, almost a 'memification' of it, as if the fact that it's digital means it's less real.
Clearly, it doesn't.
"Just because this is AI-generated doesn't mean it's less harmful or that it's not on the same spectrum as physical assault, harassment, or abuse."
But enforcement is another matter. Perpetrators can be anywhere in the world, and they can be anonymous.
And the number of people actually being arrested and tried, Ajder says, 'is a drop in the ocean compared to the number of cases happening on a daily basis'.
"It's naive to expect we'll ever be able to truly eradicate this problem; it's endemic.

Detection tech isn't the silver bullet people hope for, Ajder explains. This is partly because, on platforms that already host legal adult content, identifying whether something is AI-generated still requires a human to judge whether it's consensual and depicts a real person. That's an enormous task.
But there is one emerging technology that has real promise for everyday people, even if it can't help public figures because so much media of them already exists in the public domain.
It's called data poisoning.
According to IBM, data poisoning is an adversarial cyberattack in which malicious actors deliberately inject, manipulate, or delete training data to corrupt a machine learning model's integrity. This 'poison' causes AI to learn incorrect patterns, introducing backdoors or bias, resulting in unreliable performance or targeted malfunctions.
Applied defensively, the idea is that signals or artefacts are added to your existing photos, often invisibly, which corrupt any AI tool that attempts to train on them. If someone fed those protected images into a nudification model, the model would break down.
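To make the mechanics concrete, here is a minimal sketch in Python of what 'invisible artefacts added to pixels' could look like, assuming the numpy and Pillow libraries and hypothetical file names. Real protection tools compute the perturbation against a specific model's feature extractor; random noise stands in here purely to show the bounded, imperceptible pixel changes involved.

```python
# Toy sketch of the mechanics behind image poisoning - NOT an effective
# defence on its own. Real tools craft the perturbation to target a
# model; random noise here only illustrates the bounded, invisible edits.
import numpy as np
from PIL import Image

def add_perturbation(in_path: str, out_path: str, epsilon: int = 4) -> None:
    """Shift every pixel by at most +/- epsilon, invisible to the eye."""
    pixels = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-epsilon, epsilon + 1, size=pixels.shape)
    poisoned = np.clip(pixels + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(out_path)

# Hypothetical file names, for illustration only.
add_perturbation("photo.jpg", "photo_protected.jpg")
```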
"It's almost like a shield or defensive layer; if social media platforms or dating apps built this into their tech, it could have a huge impact and protect ordinary people."
For the young woman posting on her Instagram, it could make a real difference.

Ajder concluded with a sobering reminder of how we behave online. In an age where public figures are treated as objects for entertainment, he says, there's a tendency to assume their fame makes them somehow more resilient to this kind of attack. It doesn't.
"Just because you're famous doesn't mean that these kinds of attacks and this kind of content doesn't hurt and traumatise in the same way as they do for a private person," he added.
But as Meloni put it this week, "I can defend myself. Many others cannot."