OpenAI Set to Launch Advanced Voice Mode on ChatGPT Soon 



OpenAI is set to launch ‘Advanced Voice Mode’ on ChatGPT this Tuesday, September 24, 2024, according to a screenshot posted by a user on X.

“As of now, access to Advanced Voice mode is being rolled out in a limited alpha to a select group of users. While being a long-time Plus user and having been selected for SearchGPT are both indicators of your active engagement with our platform, access to the Advanced Voice mode alpha on September 24, 2024, will depend on a variety of factors including but not limited to participation invitations and the specific criteria set for the alpha testing phase,” read the blog post attached in the screenshot.

OpenAI released GPT-4o at its latest Spring Update event earlier this year, which won hearts with its ‘omni’ capabilities across text, vision, and audio. OpenAI’s demos, which included a real-time translator, a coding assistant, an AI tutor, a friendly companion, a poet, and a singer, soon became the talk of the town. However, its Advanced Voice Mode wasn’t released. 

When OpenAI recently released o1, a user asked whether voice features would be launching soon. “How about a couple of weeks of gratitude for magic intelligence in the sky, and then you can have more toys soon?” replied Sam Altman, with a tinge of sarcasm. 

Meanwhile, Kyutai, a French non-profit AI research laboratory, has launched Moshi, a native multimodal foundation model that can converse with humans in real time, much like what OpenAI’s Advanced Voice Mode was intended to do. 

Hume AI recently introduced EVI 2, a new foundational voice-to-voice AI model that promises to enhance human-like interactions. Available in beta, EVI 2 can engage in rapid, fluent conversations with users, interpreting tone and adapting its responses accordingly. The model supports a variety of personalities, accents, and speaking styles and includes multilingual capabilities. 

Meanwhile, Amazon Alexa is partnering with Anthropic to improve its conversational abilities, making interactions more natural and human-like. Earlier this year, Google launched Astra, a ‘universal AI agent’ built on the Gemini family of AI models. Astra features multimodal processing, enabling it to understand and respond to text, audio, video, and visual inputs simultaneously.

The post OpenAI Set to Launch Advanced Voice Mode on ChatGPT Soon  appeared first on AIM.
