In a post on the social media platform X, OpenAI said it is “working to pause” Sky - the name of one of five voices that ChatGPT users can choose to speak with. The company said it had “heard questions” about how it selects the lifelike audio options available for its flagship artificial intelligence chatbot, particularly Sky, and wanted to address them.
OpenAI was also quick to debunk the internet’s theories about Johansson in an accompanying blog post detailing how ChatGPT’s voices were chosen.
“We believe that AI voices should not deliberately mimic a celebrity’s distinctive voice - Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,” the company wrote. It said it could not share the names of its voice actors for privacy reasons.
San Francisco-based OpenAI did not comment further on why it still decided to pause Sky’s use.
OpenAI first rolled out voice capabilities for ChatGPT, which included the five different voices, in September, allowing users to engage in back-and-forth conversation with the AI assistant. “Voice Mode” was originally available only to paid subscribers, but in November, OpenAI announced that the feature would become free for all users of the mobile app.
And ChatGPT’s interactions are becoming more and more sophisticated. Last week, OpenAI said the latest update to its generative AI model can mimic human cadences in its verbal responses and can even try to detect people’s moods.
OpenAI says the newest model, dubbed GPT-4o, works faster than previous versions and can reason across text, audio and video in real time. In a demonstration during OpenAI’s May 13 announcement, the AI bot chatted in real time, adding emotion - specifically “more drama” - to its voice as requested. It also took a stab at extrapolating a person’s emotional state from a selfie video of their face, and assisted with language translations, step-by-step math problems and more.
GPT-4o, short for “omni”, isn’t widely available yet. It will progressively make its way to select users in the coming weeks and months. The model’s text and image capabilities have already begun rolling out and are set to reach even some users of ChatGPT’s free tier - but the new voice mode will be available only to paid ChatGPT Plus subscribers.
‘Programmed to feed dudes’ egos’
While most have yet to get their hands on these newly announced features, the capabilities have conjured up even more comparisons to Spike Jonze’s dystopian romance Her, which follows an introverted man (Joaquin Phoenix) who falls in love with an AI operating system (Johansson), leading to many complications.
OpenAI’s Altman appeared to tap into this, too - simply posting the word “her” on the social media platform X the day of GPT-4o’s unveiling.
Many reacting to the model’s demos last week also found some of the interactions struck a strangely flirtatious tone. In one video posted by OpenAI, a female-voiced ChatGPT compliments a company employee on “rocking an OpenAI hoodie,” for example, and in another the chatbot says “oh stop it, you’re making me blush” after being told that it’s amazing.
That’s sparked conversation about what critics say are the gendered ways tech companies have long developed and marketed voice assistants, dating back far before the latest wave of generative AI advanced the capabilities of AI chatbots. In 2019, the United Nations’ culture and science organization pointed to the “hardwired subservience” built into default female-voiced assistants (from Apple’s Siri to Amazon’s Alexa), even when confronted with sexist insults and harassment.
“This is clearly programmed to feed dudes’ egos,” The Daily Show senior correspondent Desi Lydic said of GPT-4o in a segment last week. “You can really tell that a man built this tech.”