
Here's what the disturbing 'dead internet theory' actually means, after the boss of ChatGPT maker OpenAI suggested there may be some validity to it.
The internet is full of conspiracy theories, most of which are debunked pretty quickly.
Some, however, linger for years, and recent comments from Sam Altman, CEO of ChatGPT creator OpenAI, will only add to the noise around this one.
The basic premise of the 'dead internet theory' is that the majority of activity online is automatically generated by bots, rather than coming from real people sharing their opinions - hence the 'dead internet' name.
I mean, you only have to open X, formerly known as Twitter, to see the number of bots living on the app - something that becomes obvious from the rather robotic language they use.

The theory has seemingly been debunked multiple times, while Altman has previously dismissed it himself. However, it now seems as though he may have changed his tune.
"I never took the dead internet theory that seriously but it seems like there are really a lot of LLM-run twitter accounts now," the ChatGPT boss wrote on X.
It all centres around large language models (LLMs), the type of technology that underpins programs such as ChatGPT.
An LLM can interpret human language or other types of complex data because it has 'been fed enough examples to be able to recognize' it, according to Cloudflare.
The website adds: "Many LLMs are trained on data that has been gathered from the Internet — thousands or millions of gigabytes' worth of text. Some LLMs continue to crawl the web for more content after they are initially trained."

Of course, LLMs are best known for their use in generative AI tools such as ChatGPT, where the software produces a response when asked a question or given a prompt.
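For anyone curious what that prompt-and-response loop looks like in practice, here's a rough sketch of a single call using OpenAI's Python library - the model name and prompt are purely illustrative, and you'd need your own API key:

```python
# Illustrative only: one prompt-and-response call to an LLM
# using OpenAI's Python library (requires an OPENAI_API_KEY in the environment).
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name, swap in whichever model you have access to
    messages=[
        {"role": "user", "content": "Explain the dead internet theory in one sentence."}
    ],
)

print(response.choices[0].message.content)
```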
LLMs can also be used in sentiment analysis, DNA research, customer service, chatbots and online search, Cloudflare states.
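Sentiment analysis, for example, can be as simple as asking the model to label a piece of text. The sketch below assumes the same OpenAI library and API key as above; the prompt wording and model name are just for illustration:

```python
# Illustrative only: using an LLM as a simple sentiment classifier
# via OpenAI's Python library (requires an OPENAI_API_KEY in the environment).
from openai import OpenAI

client = OpenAI()

def classify_sentiment(text: str) -> str:
    """Ask the model to label text as positive, negative or neutral."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "system",
             "content": "Reply with exactly one word: positive, negative or neutral."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip().lower()

print(classify_sentiment("The new update is fantastic"))  # expected: positive
```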
While AI seems like the future, a recent study suggested that its use may negatively impact long-term brain development.
Lead author Nataliya Kosmyna said: "What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, ‘let’s do GPT kindergarten.’ I think that would be absolutely bad and detrimental. Developing brains are at the highest risk."
Topics: Conspiracy Theories, Technology, Artificial Intelligence, Twitter