Hey folks, and welcome to TechCrunch's regular AI newsletter. Sign up here if you want it in your inbox every Wednesday.
OpenAI is gaining ground at the expense of its main competitors.
On Tuesday, the company announced Stargate, a new joint venture between Japanese conglomerate SoftBank, Oracle, and others to build AI infrastructure for OpenAI in the US. Under the plan, Stargate could raise up to $500 billion in funding for AI data centers over the next four years.
The news surely dismayed OpenAI rivals like Anthropic and Elon Musk's xAI, which won't see comparably massive infrastructure investments.
xAI intends to expand its Memphis data center to 1 million GPUs, and Anthropic recently signed a deal with Amazon Web Services (AWS), Amazon's cloud computing division, to use and help improve the company's custom AI chips. But it's hard to imagine any AI company, even Anthropic with Amazon's vast resources behind it, outdoing Stargate.
True, Stargate may not deliver on its promises; other technology infrastructure projects in the US haven't. Recall that in 2017, the Taiwanese manufacturer Foxconn promised to spend $10 billion on a factory near Milwaukee, then failed to follow through.
But Stargate has more backers behind it, and, at least at this point, more momentum. The first data center funded by the effort has already broken ground in Abilene, Texas. And the companies involved in Stargate have pledged an initial $100 billion investment.
Indeed, Stargate seems poised to cement OpenAI's place in the exploding AI sector. OpenAI has more active users (300 million per week) than any other AI venture, and it has more customers: more than 1 million businesses pay for OpenAI's services.
OpenAI had a first-mover advantage. Now it could have an infrastructural advantage. Competitors will have to be smart if they hope to compete. Brute force will not be a viable option.
Microsoft's exclusivity ends: Microsoft was once the exclusive provider of data center infrastructure for OpenAI to train and run its AI models. Not anymore. Now the company only has a "right of first refusal."
Perplexity launches an API: AI-powered search engine Perplexity has launched an API service called Sonar that lets businesses and developers build the startup's generative AI search tools into their apps (a rough sketch of calling it follows these news items).
Artificial intelligence accelerates the “kill chain”: My colleague Max interviewed Radha Plumb, the Pentagon’s chief digital and artificial intelligence officer. The Defense Department is using artificial intelligence to gain a “significant advantage” in identifying, tracking and assessing threats, Plumb said.
A benchmark org under scrutiny: An organization developing math benchmarks for AI didn't disclose funding from OpenAI until relatively recently, drawing allegations of impropriety from some in the AI community.
DeepSeek's new model: Chinese AI lab DeepSeek has released an open version of DeepSeek-R1, its so-called reasoning model, which it claims outperforms OpenAI's o1 on certain AI benchmarks.
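Returning to the Perplexity item above: here is a minimal sketch of what calling Sonar might look like, assuming it follows the OpenAI-compatible chat-completions convention Perplexity documents. The exact endpoint, model identifier, and response fields shown here are assumptions, not details confirmed in the announcement.

```python
# Hedged sketch of a Sonar API call. Endpoint, "sonar" model name, and the
# "citations" field are assumptions based on Perplexity's OpenAI-style docs.
import os
import requests

API_URL = "https://api.perplexity.ai/chat/completions"  # assumed endpoint

headers = {
    "Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}",
    "Content-Type": "application/json",
}
payload = {
    "model": "sonar",  # assumed model identifier
    "messages": [
        {"role": "user", "content": "What did OpenAI announce this week?"}
    ],
}

resp = requests.post(API_URL, headers=headers, json=payload, timeout=30)
resp.raise_for_status()
data = resp.json()

# OpenAI-style responses put generated text under choices[0].message.content;
# Sonar is a search product, so sources are assumed to come back as "citations".
print(data["choices"][0]["message"]["content"])
print(data.get("citations", []))
```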
Last week, Microsoft spotlighted a pair of AI-powered tools, MatterGen and MatterSim, that it claims can help design advanced materials.
MatterGen predicts potential materials with unique properties based on scientific principles. As described in an article published in the journal Nature, MatterGen generates thousands of candidates with “user-defined constraints” — suggesting new materials that meet highly specific needs.
As for MatterSim, it predicts which of the materials offered by MatterGen are stable and viable.
Microsoft reports that a team at the Shenzhen Institute of Advanced Technology has been able to use MatterGen to synthesize a new material. The material was not flawless. But Microsoft has released MatterGen’s source code, and the company says it plans to work with other outside collaborators to further develop the technology.
Google has released Gemini 2.0 Flash Thinking Experimental, a new version of its experimental “reasoning” model. The company claims it performs better than the original on math, science and multimodal reasoning benchmarks.
Reasoning models like Gemini 2.0 Flash Thinking Experimental effectively fact-check themselves, which helps them avoid some of the pitfalls that normally trip up models. The trade-off is that reasoning models take longer to arrive at solutions than a typical "non-reasoning" model, usually seconds to minutes longer.
The new Gemini 2.0 Flash Thinking also has a 1 million token context window, meaning it can analyze long documents such as research papers and policy papers. One million tokens is equivalent to about 750,000 words or 10 medium-length books.
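For a sense of how that long context might be exercised in practice, here is a minimal sketch using Google's google-generativeai Python SDK. The model identifier and the input file are assumptions for illustration, not details Google confirmed in this release.

```python
# Hedged sketch: feed a long document to the experimental reasoning model.
# The model name "gemini-2.0-flash-thinking-exp" and the input file are
# assumptions; check Google's current model list before running.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")  # assumed name

# Rough sanity check against the 1-million-token window: at roughly 0.75 words
# per token, a full window is on the order of 750,000 words.
paper_text = open("long_policy_paper.txt").read()  # hypothetical input file
print(model.count_tokens(paper_text).total_tokens)

response = model.generate_content(
    ["Summarize the key findings and flag any unsupported claims.", paper_text]
)
print(response.text)
```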
An AI project called GameFactory shows that it's possible to "create" interactive simulations by training a model on Minecraft videos and then extending that model to different domains.
The researchers behind GameFactory, who hail mostly from the University of Hong Kong along with the Chinese company Kuaishou, have posted several example simulations on the project's website. They leave something to be desired, but the concept is still interesting: a single model that can create worlds in a virtually infinite range of styles and themes.