
Anthropic launches a new AI model that ‘thinks’ as long as you want


Anthropic has released Claude 3.7 Sonnet, a new frontier AI model that lets users choose how long it “thinks” about questions.

Anthropic calls Claude 3.7 Sonnet the first “hybrid AI reasoning model,” because it is a single model that can give both real-time answers and more considered, “thought-out” answers to questions. Users can choose whether to activate the model’s reasoning abilities, prompting it to “think” for a short or long period of time.

The model represents Anthropic’s broader effort to simplify the user experience around its AI products. Most AI chatbots today have a daunting model picker that forces users to choose from several options that vary in price and capability. Labs like Anthropic would rather users not have to think about this at all; ideally, one model does all the work.

Claude 3.7 Sonnet is rolling out on Monday, but only users who pay for Anthropic’s premium Claude chatbot plans will get access to the model’s reasoning features. Free Claude users will get the standard, non-reasoning version of Claude 3.7 Sonnet, which Anthropic claims outperforms its previous frontier AI model, Claude 3.5 Sonnet. (Yes, the company skipped a number.)

Claude 3.7 Sonnet costs $3 per million input tokens (meaning you could feed roughly 750,000 words into the model for $3) and $15 per million output tokens. That’s pricier than OpenAI’s o3-mini ($1.10 per million input tokens / $4.40 per million output tokens) and DeepSeek’s R1 ($0.55 per million input tokens / $2.19 per million output tokens), but bear in mind that o3-mini and R1 are pure reasoning models, not hybrids like Claude 3.7 Sonnet.

Anthropic’s new thinking mode. Image credits: Anthropic

Claude 3.7 Sonnet is Anthropic’s first AI model that can “reason,” a technique many AI labs have turned to as traditional methods of improving AI performance taper off.

Reasoning models like o3-mini, R1, Google’s Gemini 2.0 Flash Thinking, and xAI’s Grok 3 (Think) spend more time and computing power before answering questions. The models break problems down into smaller steps, which tends to improve the accuracy of the final answer. Reasoning models don’t think or reason quite the way a human would, but their process is modeled after deduction.

Eventually, Anthropic would like Claude to figure out on its own how long it should “think” about questions, without requiring users to select controls in advance, Dianne Penn, Anthropic’s product and research lead, told TechCrunch.

“Similar to how humans don’t have two separate brains for questions that can be answered immediately versus those that require thought,” Anthropic wrote in a blog post shared with TechCrunch, “we regard reasoning as simply one of the capabilities a frontier model should have, to be smoothly integrated with its other capabilities, rather than something to be provided in a separate model.”

Anthropic says it is allowing Claude 3.7 Sonnet to show its internal planning phase through a “visible scratch pad.” Penn told TechCrunch that users will see Claude’s full thinking process for most prompts, but that some portions may be redacted for trust and safety purposes.

Claude’s thinking process in the Claude app. Image credits: Anthropic

Anthropic says it optimized Claude’s thinking modes for real-world assignments, such as difficult coding problems or agentic tasks. Developers tapping Anthropic’s API can control the “budget” for thinking, trading off speed and cost against answer quality.
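As a rough illustration of what that thinking “budget” looks like in practice, the sketch below builds a Messages API request with extended thinking enabled. The parameter names (`thinking`, `budget_tokens`) and the model ID are assumptions based on Anthropic’s extended-thinking documentation at launch, not details from this article; check the current API reference before relying on them.

```python
# Minimal sketch of controlling Claude 3.7 Sonnet's reasoning budget.
# Field names and model ID are assumed from Anthropic's extended-thinking
# docs at launch; verify against the current API reference.

def build_request(prompt: str, thinking_budget: int = 16_000) -> dict:
    """Build Messages API keyword arguments with extended thinking enabled.

    `budget_tokens` caps how many tokens the model may spend reasoning
    before it answers; it must be smaller than `max_tokens`. A larger
    budget trades speed and cost for answer quality.
    """
    max_tokens = 20_000
    if thinking_budget >= max_tokens:
        raise ValueError("budget_tokens must be smaller than max_tokens")
    return {
        "model": "claude-3-7-sonnet-20250219",  # assumed launch-era model ID
        "max_tokens": max_tokens,
        "thinking": {"type": "enabled", "budget_tokens": thinking_budget},
        "messages": [{"role": "user", "content": prompt}],
    }

# With an API key configured, the request would be sent roughly like this:
# import anthropic
# client = anthropic.Anthropic()
# response = client.messages.create(**build_request("Solve this coding bug."))
```

Omitting the `thinking` field (or lowering `budget_tokens`) would correspond to the faster, real-time mode the article describes.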

On one test measuring real-world coding tasks, SWE-bench, Claude 3.7 Sonnet was 62.3% accurate, compared to OpenAI’s o3-mini model, which scored 49.3%. On another test measuring an AI model’s ability to interact with simulated users and external APIs, TAU-bench, Claude 3.7 Sonnet hit 81.2%, compared to OpenAI’s o1 model, which scored 73.5%.

Anthropic also says Claude 3.7 Sonnet will refuse to answer fewer questions than its previous models, claiming the model is capable of making more nuanced distinctions between harmful and benign prompts. Anthropic says it reduced unnecessary refusals by 45% compared to Claude 3.5 Sonnet. This comes at a time when some other AI labs are rethinking their approach to restricting their chatbots’ answers.

In addition to Claude 3.7 Sonnet, Anthropic is also releasing an agentic coding tool called Claude Code. Launching as a research preview, the tool lets developers run specific tasks through Claude directly from their terminal.

In a demo, Anthropic employees showed how Claude Code can analyze a coding project with a simple command such as, “Explain this project structure.” Using plain English in the command line, a developer can modify a codebase. Claude Code describes its edits as it makes changes, and can even test a project for errors or push it to a GitHub repository.

Claude Code will initially be available to a limited number of users on a “first come, first serve” basis, an Anthropic spokesperson told TechCrunch.

Anthropic is releasing Claude 3.7 Sonnet at a time when AI labs are shipping new models at a breakneck pace. Anthropic has historically taken a more methodical, safety-focused approach. But this time, the company is looking to lead the pack.

For how long is the question, though. OpenAI may be close to releasing a hybrid AI model of its own; the company’s CEO, Sam Altman, has said it will arrive in a matter of months.



