Listen as ChatGPT copies users’ voices ‘without permission’ in new clip that sounds like ‘Black Mirror plot’

Infamous chatbot ChatGPT unexpectedly cloned a user’s voice after an unsettling outburst during testing, its creators have revealed.

OpenAI, the company behind the artificial intelligence (AI) chatbot, has been working on an Advanced Voice Mode for ChatGPT’s new model GPT-4o.

The clip has been widely branded as “creepy” by onlookers on social media. Credit: Alamy

This will allow the chatbot to provide intricate spoken answers, beyond just plain text, to questions asked aloud.

However, during testing, OpenAI revealed a flaw that allowed the chatbot to engage in “unauthorised voice generation”.

In the audio, as shown in the clip above, the chatbot is having a conversation with a female user through an AI-generated male voice.

The chatbot suddenly shouts “No!” in the middle of its response, before finishing its sentence while mimicking the voice of the female user.

As OpenAI explains: “Voice generation is the capability to create audio with a human-sounding synthetic voice, and includes generating voices based on a short input clip.

“In adversarial situations, this capability could facilitate harms such as an increase in fraud due to impersonation and may be harnessed to spread false information.”

Artificial Intelligence explained

Here’s what you need to know

  • Artificial intelligence, also known as AI, is a type of computer software
  • Typically, a computer will do what you tell it to do
  • But artificial intelligence simulates the human mind, and can make its own deductions, inferences or decisions
  • A simple computer might let you set an alarm to wake you up
  • But an AI system might scan your emails, work out that you’ve got a meeting tomorrow, and then set an alarm and plan a journey for you
  • AI tech is often “trained” – which means it observes something (potentially even a human) then learns about a task over time
  • For instance, an AI system can be fed thousands of photos of human faces, then generate photos of human faces all on its own
  • Some experts have raised concerns that humans will eventually lose control of super-intelligent AI
  • But the tech world is still divided over whether or not AI tech will eventually kill us all in a Terminator-style apocalypse

The clip has been widely branded as “creepy” by onlookers on social media.

“This is more than just kinda creepy scary,” one user on X (formerly Twitter) wrote.

A second user noted: “When ChatGPT starts speaking in tongues, it’s time to shut off [the] computer.”


Another added: “I’m just amazed that it was able to synthesize so quickly, assuming it was using the voice prompt from that test session alone.”

BuzzFeed data scientist Max Woolf also tweeted: “OpenAI just leaked the plot of Black Mirror’s next season.”

OpenAI said instances of unauthorised voice generation were “rare”, and that the risk of it being harnessed to commit fraud was “minimal”.

“Our system currently catches 100 per cent of meaningful deviations from the system voice based on our internal evaluations,” the company explained.

This includes when ChatGPT copies other system voices, audio clips from a prompt, and human voice samples.

OpenAI added that it shuts conversations down once the chatbot starts parroting other voices.

“While unintentional voice generation still exists as a weakness of the model, we use the secondary classifiers to ensure the conversation is discontinued if this occurs making the risk of unintentional voice generation minimal,” the company said.
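OpenAI has not published how its secondary classifiers work, but the safeguard it describes can be sketched as a speaker-verification check: compare an embedding of the generated audio against the approved system voice, and cut the conversation off when the similarity drops too far. Everything below — the mock embeddings, the threshold, and the function names — is hypothetical, for illustration only.

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.85  # hypothetical cutoff, not OpenAI's actual value

def cosine_similarity(a, b):
    """Cosine similarity between two speaker-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_authorised_voice(output_embedding, system_voice_embedding):
    """True if the generated audio still sounds like the approved system voice."""
    return cosine_similarity(output_embedding, system_voice_embedding) >= SIMILARITY_THRESHOLD

# Toy 4-dimensional embeddings; real ones come from a speaker-verification model.
system_voice = np.array([0.9, 0.1, 0.0, 0.4])
normal_reply = np.array([0.88, 0.12, 0.02, 0.41])  # close to the system voice
cloned_user  = np.array([0.1, 0.9, 0.5, 0.0])      # deviates: mimics another speaker

for name, emb in [("normal_reply", normal_reply), ("cloned_user", cloned_user)]:
    if is_authorised_voice(emb, system_voice):
        print(f"{name}: OK, continue conversation")
    else:
        print(f"{name}: deviation detected, discontinue conversation")
```

In this toy run the mimicking output scores far below the threshold and the conversation is flagged for shutdown, mirroring the behaviour OpenAI describes.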

ChatGPT’s creators have previously warned that GPT-4o’s human voice mode is so realistic that people may become “emotionally reliant” on it.

The GPT-4o model is currently being reviewed for safety before being rolled out.

While being able to hold a somewhat natural conversation with an AI assistant is convenient, it comes with the risk of emotional reliance and “increasingly miscalibrated trust”.

A safety review of the tech, published last week, highlighted concerns about language that reflected a sense of shared bonds between the human and the AI.

The review said: “While these instances appear benign, they signal a need for continued investigation into how these effects might manifest over longer periods of time.”
