ChatGPT, OpenAI’s popular AI chatbot, now supports Advanced Voice Mode. The new mode, available to ChatGPT Plus and Team users, offers more natural, real-time interactions, emotional awareness, and five new voices.

Advanced Voice Mode is powered by GPT-4o, a multimodal model spanning text, vision, and audio, which enables faster replies and more sophisticated interactions. The chatbot detects and responds to users’ emotional cues, and users can interrupt it at any time.


Unlocking the potential of Advanced Voice Mode

Users can apply Custom Instructions and Memory in Advanced Voice Mode so the chatbot responds in ways more relevant to their preferences and past conversations. Five new voices have been added: Arbor, Maple, Sol, Spruce, and Vale, joining the four existing voices (Breeze, Juniper, Cove, and Ember).

OpenAI is gradually rolling out Advanced Voice Mode to all ChatGPT Plus and Team users. Enterprise and Edu subscribers will gain access starting next week. Users will be notified via a pop-up message when the feature is available.

Other features, such as video and screen sharing, are not part of the initial rollout and will be added later. Users in the EU, the UK, Switzerland, Iceland, Norway, and Liechtenstein cannot currently access Advanced Voice Mode.


Insights from OpenAI’s safety testing initiatives 

OpenAI says it tested Advanced Voice Mode extensively with external red teamers speaking a range of languages and based in different regions. The company has also published a GPT-4o system card, which gives an overview of the measures taken to keep the feature safe and usable.

With Advanced Voice Mode, OpenAI aims to make conversations with its AI chatbot more human-like and engaging.