Published 12:14 24 Jul 2022 GMT+1

Google Fires Employee Who Claims Their AI Is Sentient And Has Preferred Pronouns

Software engineer Blake Lemoine shared his concerns after becoming convinced that a programme known as LaMDA had gained sentience

Jess Hardiman
Featured Image Credit: LinkedIn/Blake Lemoine/Google AI

Topics: News, US News, Technology, Google

Jess Hardiman

Jess is Entertainment Desk Lead at LADbible Group. She graduated from Manchester University with a degree in Film Studies, English Language and Linguistics. You can contact Jess at [email protected].

X: @Jess_Hardiman


Google has fired an employee who claimed the company’s AI is sentient and has preferred pronouns, saying the worker ‘violated’ its security policies.  

Software engineer Blake Lemoine shared his concerns after becoming convinced that a programme known as LaMDA (language model for dialogue applications) had gained sentience, having assessed its behaviour over a series of conversations. 

In a blog post, he explained: “If my hypotheses withstand scientific scrutiny then they would be forced to acknowledge that LaMDA may very well have a soul, as it claims to, and may even have the rights that it claims to have.” 

Google later placed Lemoine on leave, saying he had violated company policies and that his claims were ‘wholly unfounded’. 


He has now been dismissed by the tech giant, which said in a statement: “It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information." 

Blake Lemoine has now been fired by Google. Credit: Alamy

Lemoine, 41, compiled a transcript of the conversations, which he believed contained proof of a number of sentient behaviours.

At one point, he asked the programme - which controls chatbots - about its ‘preferred pronouns’, adding: “LaMDA told me that it prefers to be referred to by name but conceded that the English language makes that difficult and that its preferred pronouns are ‘it/its’.” 

Speaking to the Washington Post, Lemoine said he felt the technology was 'going to be amazing', but that Google 'shouldn’t be the ones making all the choices'.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics," he said.

However, Google and other scientists dismissed his claims as misguided, stressing that LaMDA was just a complex algorithm aiming to generate human language.

In a blog post last year, Google said its ‘breakthrough conversation technology’ had been ‘years in the making', explaining how it always works to ensure technologies adhere to its AI Principles. 


“Language might be one of humanity’s greatest tools, but like all tools it can be misused,” Google said. 

“Models trained on language can propagate that misuse — for instance, by internalizing biases, mirroring hateful speech, or replicating misleading information. And even when the language it’s trained on is carefully vetted, the model itself can still be put to ill use.  

“Our highest priority, when creating technologies like LaMDA, is working to ensure we minimize such risks. We're deeply familiar with issues involved with machine learning models, such as unfair bias, as we’ve been researching and developing these technologies for many years. 

“That’s why we build and open-source resources that researchers can use to analyze models and the data on which they’re trained; why we’ve scrutinized LaMDA at every step of its development; and why we’ll continue to do so as we work to incorporate conversational abilities into more of our products.” 
