At this week’s Spring Updates event, OpenAI unveiled its newest flagship model, GPT-4o, marking yet another step forward in artificial intelligence. The model is an enhanced version of GPT-4: according to OpenAI, GPT-4o offers GPT-4-level intelligence while being faster and better at handling text, voice, and vision. OpenAI describes it as a step towards far more natural human-computer interaction, because it can accept any combination of text, audio, image, and video as input and generate any combination of text, audio, and image as output. The headline change is that GPT-4o is free for all users, whereas GPT-4 has been available only to paying subscribers. Here is how to start using the new model.
How to start using GPT-4o
Since GPT-4o is available to everyone, you can use it on both the free and paid tiers. To try the free version, simply sign in to your account at https://chatgpt.com.
Note that some accounts may still default to GPT-3.5. There is no need for concern: if the model drop-down menu does not yet offer GPT-4o, the rollout should reach your account within a few hours or days.
As of now, GPT-4o is being released progressively in the browser as well as in the desktop and mobile apps. Reports suggest GPT-4o may not yet be available to all users on iOS or Android, and the new desktop app for Mac is still being rolled out. OpenAI intends to make the desktop app more widely available in the coming weeks, with a Windows version expected later this year.
Users who can already select GPT-4o will notice the differences between the two models. For example, even on the free plan you can now send files to GPT-4o for analysis, including images, videos, and even PDFs, and then ask questions about their content.
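The same kind of file analysis is also possible programmatically. The snippet below is a minimal sketch of asking GPT-4o a question about an image through OpenAI's Chat Completions API; it assumes the official openai Python package is installed, an OPENAI_API_KEY environment variable is set, and the image URL is a placeholder you would replace with your own.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask GPT-4o a question about an image (the URL below is a placeholder).
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What does this chart show?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)

In the ChatGPT apps the upload button does the same job without any code; the sketch is only meant to show what the multimodal input looks like under the hood.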
Troubles with GPT-4o
OpenAI proudly demonstrated GPT-4o’s speech and vision capabilities at the Spring Updates event, but these features have not yet rolled out worldwide. According to reports, they are currently available only to developers via the API.
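For readers curious what that developer access looks like, here is a minimal sketch of a plain text request to the gpt-4o model using OpenAI's Python SDK with streaming output. It assumes the openai package and an OPENAI_API_KEY environment variable; a vision request was sketched above, and this example sticks to text rather than audio, whose API interface is not covered here.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stream a simple text reply from the gpt-4o model.
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "In two sentences, what is new in GPT-4o?"}],
    stream=True,
)
for chunk in stream:
    # Each chunk carries an incremental piece of the reply.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()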
Although OpenAI clearly plans to bring the new voice features to ChatGPT Plus subscribers soon, there is no word on when, or whether, the voice assistant mode will be made available to free users.
Another drawback of GPT-4o is the usage cap: after roughly ten to fifteen prompts, a free account is switched back to GPT-3.5. In other words, you can only send a limited number of queries at a time. As soon as you hit the limit, you are downgraded to GPT-3.5 and cannot access GPT-4o again for a few hours, until the limit resets and the window to use GPT-4o opens again.
GPT-4o: More for you
Mira Murati, OpenAI’s chief technology officer, said during the event that GPT-4o is 50% cheaper and twice as fast as the previous model. GPT-4o users will also effectively get free access to the GPT Store and its more than a million GPTs, including personalised ChatGPTs, which, according to Murati, will let GPT builders reach a much wider audience.
GPT-4o users will be able to chat with the model in text and upload documents containing text and images, as well as screenshots. Until now, only paying customers could work with documents and images, while free users could communicate with the chatbot through text alone. GPT-4o users will also get access to Memory, which lets the chatbot recall previous conversations with you and provides a sense of continuity.
Currently, OpenAI’s ChatGPT platform has over 100 million users.