
Ashley St Clair has made a chilling prediction about the future of AI as Elon Musk's Grok is investigated for generating indecent images of children and women.
The author first made headlines last year after claiming she'd given birth to Musk's thirteenth child, Romulus.
St Clair and the tech billionaire allegedly met for the first time in the spring of 2023, with their relationship turning romantic after Musk invited St Clair to visit the San Francisco office of X (formerly Twitter). During a New Year's trip to St. Barts, she claims, they conceived a baby boy together, The Wall Street Journal reports. A public feud over child support has since raged on, but St Clair is now voicing her concerns over Musk's xAI chatbot, Grok.
The LLM launched in November 2023 and later raised eyebrows with its NSFW 'spicy' mode, which essentially sexted with users when prompted to do so.
But users have more recently been using Grok to generate explicit images of women without their consent.
Last month, Grok's X account even apologized after it generated and shared an AI image of two young girls - estimated to be between 12 and 16 years old - in sexualized attire based on a user's prompt.
Other social media users have reported their own photos being sexualized by X users via Grok.
Since then, the UK communications regulator, Ofcom, has launched its own 'urgent' probe over 'serious concerns' that Grok is producing 'undressed images of people and sexualized images of children'.
"We have made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK," the regulator's statement, issued on Monday (January 5), said.

"Based on their response we will undertake a swift assessment to determine whether there are potential compliance issues that warrant investigation."
St Clair - who claims someone on X made a photo of her 'removing her clothes' as a 14-year-old - has since sounded the alarm over the worrying trend.
"When Grok went full MechaHitler, the chatbot was paused to stop the content," she wrote on X.
"When Grok is producing explicit images of children and women, xAI has decided to keep the content up + overwhelm law enforcement with cases they will never solve with foreign bots and take resources from other victims.
"This issue could be solved very quickly. It is not, and the burden is being placed on victims."

She then went on to make a worrying prediction about AI's future, writing: "They are going to use this to push for Section 230 like protections for AI. And it should not be allowed to happen."
In the US, Section 230 of the Communications Decency Act of 1996 protects online platforms from legal responsibility for content created by others, as the University of Chicago Business Law Review explains.
It essentially ensures that websites, social media companies and other online services are not treated as 'publishers' or 'speakers' of user-generated content.
Platforms are also protected when they moderate or remove content in 'good faith'.
And while the law might shield platforms from many civil liability claims, it does not protect platforms from federal criminal laws, copyright claims and certain privacy and child protection statutes.
Still, if Section 230 or a similar law provides X and xAI with protection, it would set a concerning precedent for victims of generative AI going forward.
Of course, that specific law only applies in the US, and Ofcom's investigation is in the UK. But the country has its own law, the Online Safety Act, which imposes duties of care on platforms and can fine them for failing to remove harmful content. The outcome of Ofcom's investigation and any consequences for xAI remain to be seen.
UNILAD has contacted xAI and X for comment.
Topics: Artificial Intelligence, Elon Musk, Social Media, Twitter