DeepSeek has shaken up the U.S.-led AI ecosystem with its latest model, wiping hundreds of billions of dollars off the market cap of chip leader Nvidia. While the sector's giants grapple with the fallout, smaller companies see an opportunity to scale alongside the Chinese startup.
Several companies told CNBC that DeepSeek's emergence is a "massive" opportunity for them rather than a threat.
"Developers are very keen to replace OpenAI's expensive and closed models with open-source models like DeepSeek R1 …", said Andrew Feldman, CEO of AI chip startup Cerebras Systems.
The company competes with Nvidia's graphics processing units (GPUs) and offers cloud-based services through its own computing clusters. Feldman said the release of the R1 model generated one of Cerebras' largest-ever surges in demand for its services.
"R1 shows that (AI market) growth will not be dominated by a single company: hardware and software moats do not exist for open-source models," Feldman added.
Open source refers to software whose source code is made freely available on the web for possible modification and redistribution. DeepSeek's models are open source, unlike those of competitors such as OpenAI.
DeepSeek also claims that its R1 reasoning model rivals the best American technology, despite running at lower costs and being trained without cutting-edge graphics processing units, though industry observers and competitors have questioned these claims.
"As in the PC and internet markets, falling prices help fuel global adoption. The AI market is on a similar secular growth path," Feldman said.
DeepSeek could boost the adoption of new chip technologies by accelerating the AI cycle from the training phase to the "inference" phase, chip startups and industry experts said.
Inference refers to the act of using an AI model to make predictions or decisions based on new data, as opposed to building or training the model in the first place.
"In a nutshell, AI training is about building a tool or algorithm, while inference is about deploying that tool for use in real applications," said Phelix Lee, an equity analyst at Morningstar with a focus on semiconductors.
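The distinction Lee describes can be sketched in a few lines of Python. This is a toy illustration only, with invented function names and a one-parameter model; production AI systems involve billions of parameters and specialized hardware:

```python
# Toy illustration of training vs. inference (hypothetical example,
# not drawn from any system mentioned in the article).

def train(data):
    """Training: the compute-intensive phase that builds the model.
    Here we fit a one-parameter model y = w * x by least squares."""
    num = sum(x * y for x, y in data)
    den = sum(x * x for x, _ in data)
    return num / den  # the learned parameter w

def infer(w, x):
    """Inference: applying the already-trained model to new input.
    Far cheaper per call; here it is a single multiplication."""
    return w * x

# Training runs once over many examples...
w = train([(1, 2), (2, 4), (3, 6)])
# ...while inference runs repeatedly on fresh inputs.
print(infer(w, 10))  # 20.0
```

The asymmetry is the point: training is a large one-off computation, while inference is a small computation repeated for every user request, which is why the two workloads can run on different classes of chips.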
While Nvidia holds a dominant position in GPUs used for AI training, many competitors see room to expand in the "inference" segment, where they promise greater efficiency at lower cost.
AI training is highly compute-intensive, but inference can be performed with less powerful chips handling a narrower range of tasks, Lee added.
Several AI chip startups told CNBC they are seeing more demand for inference chips and computing as customers adopt and build on DeepSeek's open-source model.
"(DeepSeek) has demonstrated that smaller open models can be trained to be as capable or more capable than larger proprietary models, and this can be done at a fraction of the cost," said Sid Sheth, CEO of AI chip startup d-Matrix.
"With the broad availability of small, capable models, they have catalyzed the era of inference," he told CNBC, adding that the company has recently seen a surge in interest from global customers looking to speed up their inference plans.
Robert Wachen, co-founder and COO of AI chipmaker Etched, said dozens of companies have reached out to the startup since DeepSeek released its reasoning models.
"Companies are (now) shifting their spending from training clusters to inference clusters," he said.
"DeepSeek-R1 showed that inference compute is now the (state-of-the-art) approach for every major model provider, and thinking isn't cheap: we'll only need more and more computing capacity to scale these models to millions of users."
Analysts and industry experts broadly agree that DeepSeek's achievements are a boost for AI inference and the wider AI chip industry.
"DeepSeek's performance appears to be based on a series of engineering innovations that significantly reduce inference costs while also improving training cost," according to a report from Bain & Company.
"In a bullish scenario, ongoing efficiency improvements would lead to cheaper inference, spurring greater AI adoption," it added.
This pattern reflects Jevons paradox, a theory holding that cost reductions in a new technology drive increased demand for it.
Financial services and investment firm Wedbush said in a research note last week that it continues to expect AI use across enterprise and retail consumers globally to drive demand.
Speaking on CNBC's "Fast Money" last week, Sunny Madra, COO at Groq, which develops chips for AI inference, suggested that as overall demand for AI grows, smaller players will have more room to grow.
"As the world is going to need more tokens (a unit of data that an AI model processes), Nvidia can't supply enough chips to everyone, so it gives us opportunities to sell into the market even more aggressively," Madra said.