Updated 17:30 8 May 2026 GMT+1 | Published 16:31 8 May 2026 GMT+1

    AI expert issues chilling warnings about deepfakes after Italian Prime Minister shares AI lingerie photo

    Even laws around deepfakes won't fix the problem, the expert claims

    Thomas Bamford

    Featured Image Credit: Pier Marco Tacca/Getty Images

    Topics: Artificial Intelligence, Technology


    AI-generated images of Giorgia Meloni wearing lingerie were shocking, but according to one of the world's leading experts on deepfakes, they were also entirely predictable.

    Dr Henry Ajder has spent nearly eight years studying the rise of AI-generated sexual content, and his verdict on where we are right now is bleak: it has never been easier to make a deepfake of someone.

    The laws aren't stopping it, and the number of people being prosecuted is a 'drop in the ocean' compared to what's actually happening every day.

When asked whether Trump's TAKE IT DOWN Act would have any big effect on people creating these images, Ajder told UNILAD: "Nothing has really changed. If anything, it has got worse."


When Meloni posted the AI-generated images of herself to X this week, she was doing something most victims never get to do - fight back publicly, with the resources and the platform to do so.

    But Ajder wants people to understand that what's happened to her is far from an isolated incident targeting the powerful and famous.

    The tools that created those images are available on people's phones right now.

Henry Ajder has warned about the ease of making deepfakes (henryajder.com)

    "It has never been easier to create deepfakes that non-consensually sexualise people," he said.

There are now dozens of 'nudification' tools freely available, many of which run directly in a browser or on a smartphone and need as little as a single photograph to work.

The expert attributes the explosion of this content to four factors that have converged to create 'the perfect storm'.

Realism has improved dramatically. Efficiency means the tools need far less data than before. Accessibility has transformed the landscape entirely; these tools, Ajder says, are now 'gamified' and stripped of any technical barrier that might once have slowed someone down.

And functionality has expanded far beyond crude face-swapping to include full AI-generated videos, animated images and, in a development Ajder believes has been underplayed, voice cloning.

    "People can create synthetic, phone-sex style content using someone's voices," Ajder warned.

    Donald Trump and Melania sign the Take It Down legislation into law (Photo by Chip Somodevilla/Getty Images)

What is the Take It Down Act?

When Donald Trump signed the Take It Down Act last year, and Italy became the first EU country to criminalise deepfakes, it felt like a turning point.

The TAKE IT DOWN Act is a US federal law, signed on May 19, 2025, that criminalises the non-consensual publication of intimate images - including AI-generated deepfakes and 'revenge porn'. It mandates that online platforms implement notice-and-removal procedures to delete such content within 48 hours.

    Ajder is careful not to dismiss the legislation - he's equally careful not to overstate it.

    The first thing he wants to clear up is what the laws actually cover.

They don't ban the tools; Grok, for instance, hasn't been banned. The open-source software anyone can download and adapt hasn't been banned either - what's been criminalised is the act of using those tools to generate this kind of content.

Grok was under fire earlier this year for its deepfake technology (Photo Illustration by Jonathan Raa/NurPhoto via Getty Images)

    "You are a sex offender if you do this"

    What legislation has done, he argues, is send a message. "It has helped signal more clearly to the general public that this is a form of sexual offending - you are a sex offender if you do this."

For example, in the UK, convictions can result in custodial sentences and a place on the sex offenders' register. That matters, he says, because communities creating this content have long treated it with a disturbing flippancy - almost a 'memification' of it - as if the fact that it's digital means it's less real.

    Clearly, it doesn't.

    "Just because this is AI-generated doesn't mean it's less harmful or that it's not on the same spectrum as physical assault, harassment, or abuse."

    But enforcement is another matter. Perpetrators can be anywhere in the world, and they can be anonymous.

And the number of people actually being arrested and tried, Ajder says, 'is a drop in the ocean compared to the number of cases happening on a daily basis'.

    "It's naive to expect we'll ever be able to truly eradicate this problem; it's endemic.

    Are deepfake detectors the answer? (Getty stock image)

    Should there be more deepfake detection technology?

Detection tech isn't the silver bullet people hope for, Ajder explains. This is partly because, on platforms that already host legal adult content, identifying whether something is AI-generated still requires a human to judge whether it's consensual and whether it depicts a real person. That's an enormous task.

    But there is one emerging technology that has real promise for everyday people, even if it can't help public figures because of the huge amounts of media of them already in the public domain.

    It's called data poisoning.

According to IBM, data poisoning is 'an adversarial cyberattack where malicious actors deliberately inject, manipulate, or delete training data to corrupt a machine learning model's integrity'. This 'poison' causes AI to learn incorrect patterns, introducing backdoors or bias, resulting in unreliable performance or targeted malfunctions.

The idea is that signals or artefacts are added to your existing photos, often invisibly, which corrupt any AI tool that attempts to train on them. If someone then tried to use those photos to generate a deepfake, the model would break down.
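The core idea - a change too small for a viewer to see but present in the data a model ingests - can be sketched in a few lines. This is a toy illustration only: real data-poisoning tools (such as the research systems Glaze and Nightshade) compute their perturbations adversarially against specific models, whereas the seeded random noise here simply shows how tiny the per-pixel change can be.

```python
import random

def poison_pixels(pixels, epsilon=2, seed=0):
    """Toy sketch of an 'invisible' image perturbation.

    Nudges each pixel value (0-255) by at most `epsilon` levels - far
    below what a human viewer would notice. Real poisoning tools choose
    the perturbation adversarially, not randomly.
    """
    rng = random.Random(seed)
    return [max(0, min(255, p + rng.randint(-epsilon, epsilon)))
            for p in pixels]

photo = [128] * 16            # stand-in grayscale pixel values
protected = poison_pixels(photo)

# The per-pixel difference is imperceptibly small to a viewer,
# yet every value a model trains on has been shifted.
assert all(abs(a - b) <= 2 for a, b in zip(photo, protected))
```

In practice the perturbation is crafted so that a model training on the 'shielded' photos learns distorted features, which is why Ajder describes it as a defensive layer rather than a detector.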

    "It's almost like a shield or defensive layer; if social media platforms or dating apps built this into their tech, it could have a huge impact and protect ordinary people."

    For the young woman posting on her Instagram, it could make a real difference.

    Italian Prime Minister Meloni has hit out at people who create the images (Photo by Massimo Valicchia/NurPhoto via Getty Images)

    "Just because you're famous doesn't mean these kind of attacks don't hurt"

Ajder concluded with a sobering reminder of how we behave online. In an age where public figures are treated as objects for entertainment, he says, there's a tendency to assume their fame makes them somehow more resilient to these attacks. It doesn't.

    "Just because you're famous doesn't mean that these kinds of attacks and this kind of content doesn't hurt and traumatise in the same way as they do for a private person," he added.

    But as Meloni put it this week, "I can defend myself. Many others cannot."
