The new ChatGPT AI model now operates like a human and understands images

The new multimodal model is now accessible to ChatGPT Plus users. It reduces hallucinations and improves creativity, as well as reliability.

The full impact of the revolution brought about by AI-based language models such as ChatGPT, powered by OpenAI's GPT-3 technology, is yet to be determined. Not only has it shaken the foundations of Google, but it also threatens to bring real change to all manner of workplaces and industries around the world. Now, OpenAI has presented its latest innovation on this front: GPT-4.

GPT-4 is the successor to OpenAI's generative artificial intelligence model for producing written text. More specifically, it is an autoregressive model that aims to generate text similar to what a human would write. GPT-3, to give just one example, belongs to natural language processing, a branch of generative AI that seeks to improve how people and machines communicate.

GPT-4 is therefore a more polished version of GPT-3 and 3.5, the models that until now have powered several OpenAI projects, including ChatGPT. Among the headline claims for this new version are greater creativity and, above all, fewer of the off-the-rails moments seen in the AI-powered Bing that Microsoft recently launched. One of its biggest advantages, however, is that it now understands both text and image inputs.

GPT-4 arrives

GPT-4 is OpenAI's latest large multimodal model, able to accept images as well as text as input. In its announcement, OpenAI speaks of "human-level" performance on certain key academic and professional benchmarks. GPT-3.5, GPT-4's predecessor, was trained a year ago as a first test run of a supercomputer co-designed from the ground up after the company rebuilt its entire deep learning stack.

OpenAI says the GPT-4 training run was remarkably stable, making it the first large model from the firm "whose performance we were able to predict in advance." The company is releasing GPT-4 via ChatGPT and the model API to encourage adoption among users and developers.

OpenAI says the differences between 3.5 and 4 may be subtle, but they are palpable: GPT-4 is more reliable and, above all, much better at handling nuanced instructions than GPT-3.5, one of users' main points of contention. Recall, again, the erratic responses from Bing AI.

GPT-4 was trained to predict the next word in a document, using both publicly available data and data licensed by the company. The web-scale dataset includes correct and incorrect solutions to math problems, weak and strong reasoning, self-contradictory and consistent statements, "and a wide variety of ideologies." In short, on some tasks it operates at a level comparable to a human.
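
To make the idea of "predicting the next word" concrete, here is a toy Python sketch with an invented probability table standing in for what the real model learns from its web-scale data; it illustrates the autoregressive idea only and is not OpenAI's training code.

# Toy illustration of autoregressive next-word prediction.
# The probabilities are invented for demonstration; a real model learns
# them from web-scale text rather than a hand-written table.
next_word_probs = {
    "the cat sat on the": {"mat": 0.62, "sofa": 0.25, "moon": 0.13},
}

def predict_next_word(context: str) -> str:
    # Score each candidate continuation and return the most probable one,
    # which is what "predicting the next word in a document" boils down to.
    candidates = next_word_probs[context]
    return max(candidates, key=candidates.get)

print(predict_next_word("the cat sat on the"))  # prints "mat"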

OpenAI used a fairly wide variety of benchmarks, including simulated exams originally designed for humans, to measure the progress from one model to the next. It used the most recent publicly available tests and also purchased the newest editions of practice exams, all without any exam-specific training of the model.

A minority of the test problems were seen by GPT-4 during training, something that may have influenced the results. OpenAI also evaluated GPT-4 on traditional benchmarks designed for machine learning models, where it comfortably outperforms existing systems, including most state-of-the-art models.

So much so that OpenAI explains GPT-4 has been used internally, with great impact, in functions such as support, sales, content moderation, and programming, and even to help human reviewers evaluate the AI's own output.

GPT-4 can accept prompts made up of text and images and generate text output, whether natural language or code, letting the user specify any language or vision task. "Across a variety of domains, including documents with text and photographs, diagrams, or screenshots, GPT-4 exhibits capabilities similar to those it shows on text-only inputs."

OpenAI has also been working on the behavior of the AI, including its "steerability." Rather than ChatGPT's classic fixed personality (a set verbosity, tone, and style), developers (and soon ChatGPT users) will be able to prescribe the AI's style and task by describing those instructions in the system message. These messages allow API users to "significantly" customize the experience, within limits.
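
As a purely illustrative sketch (not code from OpenAI's announcement), a system message can be passed to the chat API roughly as follows in Python; the prompt text and API key are placeholders, and the call uses the openai package's ChatCompletion interface from around the time of GPT-4's launch (newer versions of the library expose a different client object).

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply a real key

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        # The system message prescribes the AI's style and task.
        {"role": "system",
         "content": "You are a concise tutor who always answers in three short bullet points."},
        # The user message carries the actual request.
        {"role": "user", "content": "Explain what a multimodal model is."},
    ],
)

print(response["choices"][0]["message"]["content"])

Changing only the system message, while leaving the rest of the request untouched, is enough to switch the assistant's persona or task.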

Aspects to consider

The company was keen to make clear that GPT-4, like many other models (including its predecessors), has limitations. For example, it still goes somewhat astray: it "hallucinates" facts and "makes reasoning errors." The firm cautions that care must be taken when using the language model's output, particularly in "high-risk" contexts.

Even with those issues, GPT-4 "significantly reduces hallucinations relative to previous models," scoring up to 40% higher than GPT-3.5 in OpenAI's internal adversarial factuality evaluations. That does not change the fact that the model can still return biased results; according to OpenAI, "we've made progress on these [biases], but there's still more to do."

There is an explanation for that. GPT-4 lacks knowledge of events after September 2021, when the bulk of its training data is cut off. It can also make "simple reasoning errors" that seem at odds with its competence in so many domains, and, according to OpenAI, the model "can be overly gullible in accepting obvious false statements from a user."

Additionally, OpenAI has worked to make the model safer, with efforts "including selection and filtering of the pre-training data, evaluations and expert engagement, model safety improvements, and monitoring and enforcement."

The safety OpenAI refers to concerns the potential harm the model could cause, such as generating harmful advice, buggy code, or misinformation. To mitigate these issues, OpenAI engaged more than 50 experts in domains such as cybersecurity, biorisk, and international security. The data collected through this work was used, for example, to improve GPT-4's ability to refuse requests on how to synthesize dangerous chemicals.

According to OpenAI, these mitigations have significantly improved the model's safety properties compared to GPT-3.5, decreasing "the model's tendency to respond to requests for disallowed content" by 82%. GPT-4 also responds to sensitive requests (medical advice, self-harm…) in accordance with OpenAI's policies 29% more often. Even so, it is still possible to generate problematic content using certain techniques.

ChatGPT Plus

Subscribers to OpenAI's paid tier, ChatGPT Plus, can access GPT-4 directly, with a usage cap, at this link. The exact usage cap will be adjusted "according to demand and system performance in practice." OpenAI warns that capacity will be limited at first and will expand over the coming months.

To access the GPT-4 API, developers must sign up on a waiting list. Invitations to some developers will start going out today, scaling gradually to balance capacity against demand. Once granted access, developers can make text-only requests to the model for now.
