OpenAI is unveiling GPT-4, the successor to an artificial intelligence tool that spawned viral services ChatGPT and the Dall-E image-creation program. The company said the new version of the technology is more accurate, creative and collaborative.
GPT-4, which stands for Generative Pre-trained Transformer 4, will be available to OpenAI’s paid ChatGPT Plus subscribers, and developers can sign up to build applications with it. OpenAI said Tuesday the tool is “40% more likely to produce factual responses than GPT-3.5 on our internal evaluations.”
GPT-3 was released in 2020, and along with the 3.5 version, was used to create Dall-E and the chatbot ChatGPT — two products that caught the public imagination and spurred other tech companies to pursue AI more aggressively. Since then, buzz has grown over whether the next model will be more proficient and possibly able to take on additional tasks.
OpenAI said Morgan Stanley is using GPT-4 to organize data, while Stripe Inc., an electronic payments company, is testing whether it will help combat fraud. Other customers include language learning company Duolingo Inc., the Khan Academy and the Icelandic government.
In a January interview, OpenAI Chief Executive Sam Altman tried to keep expectations in check.
“The GPT-4 rumor mill is a ridiculous thing,” he said. “I don’t know where it all comes from. People are begging to be disappointed and they will be.” The company’s chief technology officer, Mira Murati, told Fast Company earlier this month that “less hype would be good.”
GPT-4 is what’s called a large language model, a type of AI system that analyzes vast quantities of writing from across the internet in order to determine how to generate human-sounding text. The technology has spurred excitement as well as controversy in recent months. In addition to fears that text-generation systems will be used to cheat on schoolwork, such systems can perpetuate biases and misinformation.
When OpenAI initially released GPT-2 in 2019, it opted to make only part of the model public because of concerns about malicious use. Researchers have noted that large language models can sometimes meander off topic or wade into inappropriate or racist speech. They’ve also raised concerns about the carbon emissions associated with all the computing power needed to train and run these AI models.
OpenAI said it spent six months making the artificial intelligence software safer.
“GPT-4 still has many known limitations that we are working to address, such as social biases, hallucinations, and adversarial prompts,” the company said Tuesday in a blog post, the last term referring to prompts or questions designed to provoke an unfavorable action or damage the system. “We encourage and facilitate transparency, user education, and wider AI literacy as society adopts these models. We also aim to expand the avenues of input people have in shaping our models.”
The release is part of a flood of AI announcements coming from OpenAI and backer Microsoft Corp., as well as rivals in the nascent industry. Companies have released new chatbots, AI-powered search and novel ways to embed the technology in corporate software meant for salespeople and office workers.
Google-backed Anthropic, a startup founded by former OpenAI executives, announced the release of its Claude chatbot to business customers earlier Tuesday.
Google, meanwhile, said it is giving customers access to some of its language models, and Microsoft is scheduled to talk Thursday about how it plans to offer AI features for Office software.
(Except for the headline, this story has not been edited by PostX Digital and is published from a syndicated feed.)