Nvidia CEO Jensen Huang arrives to attend the opening ceremony of Siliconware Precision Industries Co.'s (SPIL) Tan Ke plant site in Taichung, Taiwan, on January 16, 2025.
Ann Wang | Reuters
Nvidia announced new chips for building and deploying artificial intelligence models at its annual GTC conference on Tuesday.
CEO Jensen Huang revealed Blackwell Ultra, a family of chips shipping in the second half of this year, as well as Vera Rubin, the company's next-generation graphics processing unit, or GPU, which is expected to ship in 2026.
Nvidia's sales have risen more than sixfold since its business was transformed by the launch of OpenAI's ChatGPT at the end of 2022. That's because its "big GPUs" hold most of the market for developing advanced AI, a process called training.
Software developers and investors are closely watching the company's new chips to see whether they offer enough additional performance and efficiency to convince the company's biggest end customers, cloud companies including Microsoft, Google and Amazon, to keep spending billions of dollars to build data centers based on Nvidia chips.
"This last year is where almost the entire world got involved. The computational requirement, the scaling law of AI, is more resilient and, in fact, is hyper-accelerated," Huang said.
Tuesday's announcements are also a test of Nvidia's new annual release cadence. The company is striving to announce new chip families every year. Before the AI boom, Nvidia released new chip architectures every two years.
The GTC conference in San Jose, California, is also a show of force for Nvidia.
The event, Nvidia's second in-person conference since the pandemic, is expected to draw 25,000 attendees and hundreds of companies discussing the ways they use the company's hardware for AI, including Waymo, Microsoft and Ford, among others. General Motors also announced that it will use Nvidia's service for its next-generation vehicles.
The chip architecture after Rubin will be named after physicist Richard Feynman, Nvidia said Tuesday, continuing its tradition of naming chip families after scientists. Nvidia's Feynman chips are expected to be available in 2028, according to a slide Huang showed.
Nvidia also showed off its other products and services at the event.
For example, Nvidia announced new laptops and desktop computers using its chips, including two AI-focused PCs called DGX Spark and DGX Station that can run large AI models such as Llama or DeepSeek. The company also announced updates to its networking parts for tying hundreds or thousands of GPUs together so they work as one, as well as a software package called Dynamo that helps users get the most out of their chips.
Jensen Huang, co-founder and chief executive officer of Nvidia Corp., speaks during the Nvidia GPU Technology Conference (GTC) in San Jose, California, US, on Tuesday, March 18, 2025.
David Paul Morris | Bloomberg | Getty Images
Nvidia expects to start shipping systems in its next-generation GPU family, Vera Rubin, in the second half of 2026.
The system has two main components: a CPU, called Vera, and a new GPU design, called Rubin. It's named after the astronomer Vera Rubin.
Vera is Nvidia's first custom CPU design, the company said, and it's based on a core design it calls Olympus.
Previously, when it needed CPUs, Nvidia used an off-the-shelf Arm design. Companies that have developed custom Arm core designs, such as Qualcomm and Apple, say they can be more tailored and unlock better performance.
The custom Vera design will be twice as fast as the CPU used in last year's Grace Blackwell chips, the company said.
When paired with Vera, Rubin can manage 50 petaflops while doing inference, more than double the 20 petaflops of the company's current Blackwell chips. Rubin can also support up to 288 gigabytes of fast memory, which is one of the core specifications AI developers watch.
Nvidia is also making a change to what it calls a GPU. Rubin is actually two GPUs, Nvidia said.
The Blackwell GPU, which is currently on the market, is actually two separate chips that were assembled together and made to work as one chip.
Starting with Rubin, when Nvidia combines two or more dies to make a single chip, it will refer to them as separate GPUs. In the second half of 2027, Nvidia plans to release a "Rubin Next" chip that combines four dies to make a single chip, doubling Rubin's speed, and it will refer to that as four GPUs.
Nvidia said that will come in a rack called Vera Rubin NVL144. Previous versions of Nvidia's rack were called NVL72.
Jensen Huang, co-founder and chief executive officer of Nvidia Corp., speaks during the Nvidia GPU Technology Conference (GTC) in San Jose, California, US, on Tuesday, March 18, 2025.
David Paul Morris | Bloomberg | Getty Images
Nvidia also announced new versions of its Blackwell family of chips that it calls Blackwell Ultra.
That chip can produce more tokens per second, which means it can generate more content in the same amount of time as its predecessor, the company said in a briefing.
Nvidia says that means cloud providers can use Blackwell Ultra to offer a premium AI service for time-sensitive applications, allowing them to make as much as 50 times the revenue from the new chips as from the Hopper generation, which shipped in 2023.
Blackwell Ultra will come in a version paired with an Nvidia Arm CPU, called GB300, and a version with just the GPU, called B300. It will also come in versions with eight GPUs in a single server blade and a rack version with 72 Blackwell chips.
The top four cloud companies have deployed three times the number of Blackwell chips as Hopper chips, Nvidia said.
China's DeepSeek R1 model may have scared Nvidia investors when it was released in January, but Nvidia has embraced the software. The chipmaker will use the model to benchmark several of its new products.
Many AI observers said that DeepSeek's model, which reportedly required fewer chips than models made in the United States, threatened Nvidia's business.
But Huang said earlier this year that DeepSeek was actually a good sign for Nvidia. That's because DeepSeek's model uses a process called "reasoning," which requires more computing power to give users better answers.
The new Blackwell Ultra chips are better for reasoning models, Nvidia said.
It has developed its chips to do inference more efficiently, so when new reasoning models require more computing power at deployment time, Nvidia's chips can handle it.
"In the last 2 to 3 years, a major breakthrough occurred, a fundamental advance in artificial intelligence occurred. We call it agentic AI," Huang said. "It can reason about how to respond or how to solve a problem."
WATCH: Nvidia kicks off its GTC conference; the panel debates how to trade it.