Apple says all iPhones will soon be able to speak in your voice after 15 minutes of training
Featured Image Credit: Alamy/Aleksei Poprotskii/Olga Yastremska

New accessibility features from tech giant Apple will allow users to create a synthesised version of their voice using AI

In a cool new update, Apple has announced that by the end of the year you'll be able to hear your iPhone or iPad speak in your own voice.

'Personal Voice' is one of a set of new software features for cognitive, speech, vision and mobility accessibility that the tech giant previewed on Tuesday (16 May).

The upcoming feature will display randomised text prompts for you to read aloud, capturing 15 minutes of audio that it uses to recreate your voice.

The 'Live Speech' tool will allow users to save commonly used phrases for the device to speak on their behalf during phone and FaceTime calls as well as in-person conversations.

It may seem like a weird addition to the iPhone's list of features. After all, what sort of narcissist likes the sound of their own voice?

But the tools are actually part of Apple's latest drive towards accessibility.

By the end of the year, you'll be able to create a synthesised version of your voice.
Pixabay

While they'll be fun features to play around with for many of us, the software will provide a lifeline to those who otherwise struggle to communicate.

'Live Speech' allows nonspeaking individuals to type to speak during calls and conversations, while those at risk of losing their ability to speak can use 'Personal Voice' to create a synthesised voice that sounds like them for connecting with friends and family.
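For developers, the nearest public building block is Apple's existing speech synthesiser in AVFoundation. As a rough, hedged illustration of the type-to-speak idea behind Live Speech (not the system feature itself), a few lines of Swift can read a typed phrase aloud:

    import AVFoundation

    // Rough illustration of the type-to-speak idea: the device reads a typed
    // phrase aloud using the standard system speech synthesiser.
    let synthesizer = AVSpeechSynthesizer()

    func speak(_ phrase: String) {
        let utterance = AVSpeechUtterance(string: phrase)
        utterance.rate = AVSpeechUtteranceDefaultSpeechRate  // default speaking rate
        synthesizer.speak(utterance)                          // plays through the device's speakers
    }

    speak("On my way, see you in ten minutes.")

Live Speech itself goes further by routing that synthesised audio into phone and FaceTime calls, which is system-level plumbing ordinary apps don't get.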

The company pointed to conditions like ALS — where people are at risk of losing their ability to speak — to exemplify the need for the technology.

Just imagine how incredible it would be to be able to preserve your voice before an incurable health condition takes it away forever.

Apple CEO Tim Cook explained: "At Apple, we've always believed that the best technology is technology built for everyone."

"At the end of the day, the most important thing is being able to communicate with friends and family," added Philip Green, a board member at the Team Gleason nonprofit whose speech has changed significantly since being diagnosed with ALS.

The update will also be available on iPads.
Pixabay

"If you can tell them you love them, in a voice that sounds like you, it makes all the difference in the world."

Apple says the features will use machine learning, a type of AI, to create the voice on the device rather than in the cloud, keeping users' data secure and private.
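Apple hasn't published developer documentation alongside the preview, so the sketch below is only an assumption about how Personal Voice might surface through the same AVFoundation speech APIs: the authorisation call and the personal-voice trait are guesses at the eventual interface, not confirmed details.

    import AVFoundation

    // Hedged sketch: ask the user for access to their Personal Voice, then
    // speak with it if one has been created on the device. The authorisation
    // request and the .isPersonalVoice trait are assumptions about how the
    // feature will be exposed to third-party apps.
    let synthesizer = AVSpeechSynthesizer()

    AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
        guard status == .authorized else { return }  // the user must opt in for each app

        // Personal voices would be listed alongside the built-in system voices.
        let personalVoice = AVSpeechSynthesisVoice.speechVoices()
            .first { $0.voiceTraits.contains(.isPersonalVoice) }

        let utterance = AVSpeechUtterance(string: "Hello, this is my own voice.")
        utterance.voice = personalVoice  // falls back to the default system voice if nil
        synthesizer.speak(utterance)
    }

Because the voice model lives on the device, nothing in that flow needs to upload a recording of the user's voice to a server, which is the privacy point Apple is making.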

Of course, this isn't the first time Apple has used AI audio in its products. Anyone with an iPhone will have had some experience using — and arguing with — Siri, which uses machine learning to understand what people are saying and answer questions, make recommendations and generally be passive aggressive and snarky.

And at the Apple Macintosh 128K's unveiling back in 1984, Steve Jobs got the machine to say 'Hello' in a voice demo that was a technological leap for the time.

An official launch date for the new accessibility features is yet to be announced, but Apple has confirmed they will arrive by the end of 2023.

Topics: Technology, Apple, iPhone, Artificial Intelligence