
People are increasingly turning to artificial intelligence for day-to-day tasks, including creating passwords, but what feels like a clever shortcut could actually put your security at risk.
Most of us know the frustration. You’re signing up for a new service or updating an old login, staring at a blank password field while your mind goes blank.
The requirements soon pile up: a mix of upper and lowercase letters, a minimum character count, numbers, special symbols... and, of course, it has to be completely unique.
With dozens of accounts spanning banking, shopping, streaming, and social media, along with constant warnings from cybersecurity experts about the dangers of reusing credentials, coming up with a fresh, complex password every time can feel almost impossible.
So it’s perhaps no surprise that some users are outsourcing the task to AI.
Indeed, new research confirms that people are turning to artificial intelligence chatbots, including OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini, to generate ‘strong’ passwords for them.

AI systems are trained on vast datasets of public, openly accessible data, and from those patterns they generate what appears to be a complex sequence of characters for your password. Security experts warn that this approach is misguided and could be putting your personal information at risk.
The research, from AI cybersecurity firm Irregular and verified by Sky News, found that all three major models - ChatGPT, Claude, and Gemini - generated ‘highly predictable passwords’.
“You should definitely not do that,” Irregular co-founder Dan Lahav told Sky News.
“And if you’ve done that, you should change your password immediately. And we don’t think it’s known enough that this is a problem.”
One reason behind the warning is that predictable patterns undermine good cybersecurity: cybercriminals can use automated tools to guess passwords that follow known templates.

Because large language models (LLMs) generate output based on patterns in their training data, rather than drawing on genuine randomness, they do not create strong passwords. The result merely looks like a strong password while remaining highly predictable.
While AI chatbots can generate passwords that look complicated, they should not be used as a substitute for a password manager.
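For contrast, a genuinely unpredictable password is drawn from a cryptographically secure random source rather than from learned patterns. A minimal sketch in Python (not from the research; the character set and length here are illustrative choices):

```python
import secrets
import string

def random_password(length: int = 16) -> str:
    """Draw each character independently from a cryptographically secure source."""
    # Illustrative character set: letters, digits, and a few common symbols.
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())
```

Unlike an LLM, the `secrets` module draws on the operating system's randomness, so no two runs are expected to follow a recognisable template.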
Shockingly, in many cases the repetition in AI-generated passwords is visible to the naked eye, while in others only mathematical analysis reveals just how unsafe they are.
When Irregular used Claude AI to generate a sample of 50 passwords, only 23 unique ones were produced.
One password - K9#mPx$vL2nQ8wR - was used 10 times.
Others included K9#mP2$vL5nQ8@xR, K9$mP2vL#nX5qR@j and K9$mPx2vL#nQ8wFs.
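The kind of repetition Irregular found can be tallied in a few lines. A sketch using a hypothetical sample that echoes the article's figures (10 copies of one password among the output):

```python
from collections import Counter

# Hypothetical sample mirroring the article's finding: heavy repetition.
sample = ["K9#mPx$vL2nQ8wR"] * 10 + ["K9#mP2$vL5nQ8@xR", "K9$mPx2vL#nQ8wFs"]

counts = Counter(sample)
print(len(counts))            # number of unique passwords in the sample
print(counts.most_common(1))  # the most frequently repeated password
```

A truly random generator would almost never produce the same 15-character password twice, let alone ten times in a batch of fifty.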

When Sky News used Claude to check the research, the first password it produced was K9#mPx@4vLp2Qn8R. ChatGPT and Gemini were ‘slightly less regular’ in their output, but still produced ‘repeated passwords’.
These passwords also passed online password-checking tools, which rated them ‘extremely strong’.
"Our best assessment is that currently, if you're using LLMs to generate your passwords, even old computers can crack them in a relatively short amount of time," Lahav warned.
The experts said you should pick a long phrase you’ll remember, and avoid using AI to find one.
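That advice can also be followed programmatically: picking several words at random from a large list, diceware-style, produces a long phrase that is both memorable and unpredictable. A sketch (the tiny word list is illustrative; real diceware lists contain thousands of words):

```python
import secrets

# Illustrative word list; a real diceware list has around 7,776 words.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "lantern",
         "velvet", "cactus", "mango", "tunnel", "quartz", "ripple"]

def passphrase(n_words: int = 4, sep: str = "-") -> str:
    """Join n words chosen uniformly at random with a secure generator."""
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())
```

The strength comes from the size of the word list and the number of words, not from the words looking complicated.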
A Google spokesperson told Sky: "LLMs are not built for the purpose of generating new passwords, unlike tools like Google Password Manager, which creates and stores passwords safely.
"We also continue to encourage users to move away from passwords and adopt passkeys, which are easier and safer to use."
UNILAD has contacted OpenAI and Anthropic for comment.