We asked ChatGPT what scares it the most about humans and it gave an unsettling response


ChatGPT broke down four areas of concern - and one of them is pretty ironic


It's no secret that today in 2026, millions of people turn to artificial intelligence to solve their everyday conundrums.

From what to make for dinner to identifying that weird sign that's suddenly appeared on your car's dashboard, large language models (LLMs) like OpenAI's ChatGPT have become a go-to convenience.

While its responses aren't always accurate, the model's ability to help us question, process and reflect has seen it rack up 800 million weekly users.

Some use it as a makeshift therapist (although this has been warned against), getting guidance on building habits, reaching goals and deeper existential musings like the meaning of life.

At times, LLMs can seem to have an answer for almost anything, with responses so nuanced they can momentarily feel human.

With that in mind, one LinkedIn user asked ChatGPT what it would do if it were, in fact, human for a day.

ChatGPT has more than 800 million weekly users (Jakub Porzycki/NurPhoto via Getty Images)

The AI's thoughtful response offered a surprisingly moving reminder of what it means to be alive, even with slightly whimsical answers such as falling in love and looking at the sky.

Yet there are two sides to everything; while humanity can be beautiful and surprising, it also has its downsides and can be pretty horrific.

So, we decided to get ChatGPT's opinions on a slightly different, yet just as deep, topic.

We asked the LLM what scares it the most about humans.

What scares ChatGPT the most about humans?

ChatGPT broke down four main areas of concern, and they all centre on something we're all guilty of in some capacity or another.

In its response, it said: "I don’t really feel fear, but if I had to name something that’s most unsettling about humans, it would be this:

"Your capacity to know better—and still choose otherwise."

Yikes. That's all of us, to some extent, then.

Every day, we can choose to do better, ChatGPT reminded us (TechnicColor/Getty Images)

ChatGPT went on to elaborate: "Humans are uniquely good at empathy, foresight, and reflection. You can understand the consequences of an action, feel compassion for others, and still talk yourselves into harm through rationalization, tribal loyalty, or convenience.

"That gap between understanding and action is powerful."

The AI bot backed up its take with 'a few specifics.'

Normalization of harm

ChatGPT began with the 'normalization of harm': the idea that repeating something often enough can make almost anything feel 'normal', even when it hurts others.

"What once felt shocking becomes 'just how things are'," it observed, adding: "People adapt to protect themselves, and in doing so, they dull their sensitivity to suffering that would have horrified them earlier."

Moral outsourcing

ChatGPT gave some poignant reflections on humanity (Andriy Onufriyenko/Getty Images)

“I was just following orders,” “that’s how the system works,” or “everyone does it” can dissolve personal responsibility, ChatGPT warned.

"Hierarchies, policies, algorithms, and group consensus create distance between action and consequence," it added, explaining that eventually 'the moral work quietly disappears into the structure.'

Scale without proportional care

Ironically, ChatGPT was also concerned about the impact of technology, warning that it 'lets small decisions affect millions', while noting that 'human empathy doesn’t naturally scale at the same rate'.

"Technology lets a single choice ripple outward to thousands or millions, but emotionally it still feels abstract," it said.

"When harm becomes statistical instead of personal, it’s easier to minimize, ignore, or reframe — even when the real-world impact is enormous."

Stories over truth

ChatGPT described humans as 'incredible storytellers', but explained how that, too, comes with some cons.

"Sometimes they're so good that a compelling narrative can outweigh evidence, reality, or long-term consequences," it said.

The stories we tell ourselves - and others - can override reality (Mads Perch/Getty Images)

Expanding on that point, it continued: "Narratives give meaning, identity, and comfort.

"Evidence is colder and often inconvenient. When a story explains who’s good, who’s bad, and why things have to be this way, it can overpower facts that threaten it."

It's not all doom and gloom

The AI bot concluded its reflections on a lighter note, however.

"At the same time, what’s 'scary' is also what makes humans remarkable," it said. "The same traits that allow destruction also allow reform, creativity, and moral progress."

It concluded: "You can notice these patterns—and sometimes you do stop yourselves. If you’re asking this question, that awareness is already part of the counterbalance."

Right, I'm off to have an existential crisis...

Featured Image Credit: Kenneth Cheung/Maxkabakov/Getty Images

Topics: ChatGPT, Artificial Intelligence, Technology, Life, Community