OpenAI’s ChatGPT Will Soon Gain the Ability to See, Hear, and Speak


OpenAI’s ChatGPT is getting a big update that will let the famous chatbot talk to users by voice and use images to communicate. This will make it more like Siri and other popular artificial intelligence (AI) assistants.

In a blog post on Monday, OpenAI said that the voice tool opens the door to many creative and accessible apps.

Siri, Google’s voice assistant, and Amazon’s Alexa are all similar artificial intelligence services that are built into the devices they run on. These services are often used to set alarms and reminders and to fetch information from the internet.

Since ChatGPT’s release last year, companies have used it for a wide range of tasks, from summarizing papers to writing computer code. This has started a race among Big Tech companies to launch their own products based on generative AI.

The new voice feature in ChatGPT can also read bedtime stories, settle arguments at the dinner table, and read out text that users type.

OpenAI said that Spotify uses the technology behind the voice feature to help podcasters translate their material into different languages.

With image support, users can take pictures of things around them and ask the chatbot to figure out why their grill won’t start, look in their fridge to plan a meal, or analyze a complex graph of work-related data.

Alphabet’s Google Lens is currently the most popular tool for finding out more about a picture.

Over the next two weeks, users of the Plus and Enterprise plans will be able to use the new ChatGPT tools.