November 25, 2024

Anthropic launches more powerful AI model: How it’s different from predecessor

Google is working on Gemini, a more capable AI model than PaLM 2; OpenAI recently launched GPT-4 Turbo, a more powerful large language model (LLM) than GPT-4; and now Anthropic has announced Claude 2.1, an advanced version of its AI model.
The Google-backed startup has launched Claude 2.1, which now powers the claude.ai chat experience. It is also available over the API in the Console and is claimed to deliver advancements in key capabilities for enterprises, including a 200K token context window, reduced rates of model hallucination, and support for system prompts.
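For developers, a system prompt pins persistent instructions that apply across a whole conversation. Below is a minimal sketch of how this might look with Anthropic's Python SDK; the prompt text and user message are illustrative placeholders, and the call assumes an ANTHROPIC_API_KEY is configured in the environment.

```python
# Minimal sketch (assumptions noted): sending a system prompt to
# Claude 2.1 through Anthropic's Python SDK. The prompt and user
# message here are illustrative, not from the article.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-2.1",
    max_tokens=1024,
    # The system prompt sets standing instructions for the model
    system="You are a careful assistant that answers only from the supplied document.",
    messages=[
        {"role": "user", "content": "Summarise the key obligations in this contract: ..."}
    ],
)

print(response.content[0].text)
```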
“We are also updating our pricing to improve cost efficiency for our customers across models,” Anthropic said. Notably, OpenAI also announced GPT-4 Turbo’s availability at lower prices.
Why the context window is important
In comparison to Claude’s 200K context window, OpenAI’s GPT-4 Turbo comes with a 128K context window and Meta’s LLaMA 2 model has a 4K context window.
Every LLM has a maximum number of tokens, or fragments of a word, that it can process at once. This is called a context window. Anthropic said that 200,000 tokens roughly translates to 150,000 words, or over 500 pages of material.
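To make those numbers concrete, here is one rough way to estimate a token count in Python. The sketch uses OpenAI's open-source tiktoken tokenizer as a stand-in, since Anthropic's own tokenizer is not public, so the result is an approximation rather than an exact Claude count; the input file name is hypothetical.

```python
# Rough token-count estimate. Uses OpenAI's tiktoken tokenizer as a
# stand-in (an assumption): Claude's tokenizer differs, so treat the
# result as approximate, not an exact Claude 2.1 token count.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

with open("report.txt") as f:  # "report.txt" is a hypothetical input file
    text = f.read()

tokens = enc.encode(text)
print(f"~{len(tokens)} tokens")

# A 200K context window means the prompt plus documents must fit in
# roughly 200,000 such tokens -- about 150,000 words, per Anthropic.
if len(tokens) > 200_000:
    print("Document exceeds Claude 2.1's 200K token context window")
```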
“Processing a 200K length message is a complex feat and an industry first. While we’re excited to get this powerful new capability into the hands of our users, tasks that would typically require hours of human effort to complete may take Claude a few minutes. We expect the latency to decrease substantially as the technology progresses,” the company said.
Decrease in hallucination rates
In the context of LLMs, hallucination is the tendency of a model to produce incorrect information while presenting it as though it were true. Anthropic says Claude 2.1 has made significant gains in honesty, with a 2x decrease in false statements compared to its previous Claude 2.0 model.
Additionally, Claude 2.1 is said to have made improvements in comprehension and summarisation.


