Hiya, folks, and welcome to TechCrunch's regular AI newsletter. If you'd like this in your inbox every Wednesday, sign up here.
You may have noticed we skipped the newsletter last week. The reason? A chaotic AI news cycle, made even more frenetic by the sudden rise of Chinese AI company DeepSeek and the responses from practically every corner of industry and government.
Fortunately, we're back on track, and not a moment too soon, considering last weekend's newsy developments from OpenAI.
OpenAI CEO Sam Altman stopped in Tokyo to have an onstage chat with Masayoshi Son, the CEO of Japanese conglomerate SoftBank. SoftBank is a major OpenAI investor and partner, having pledged to help fund OpenAI's massive data center infrastructure project in the U.S.
So Altman probably felt he owed Son a few hours of his time.
What did the two billionaires talk about? Much of it was abstracting away work via AI "agents," per secondhand reporting. Son said that his company would spend $3 billion a year on OpenAI products and team up with OpenAI to develop a platform, "Cristal [sic] intelligence," with the goal of automating millions of traditionally white-collar workflows.
"By automating and autonomizing all of its tasks and workflows, SoftBank Corp. will transform its business and services, and create new value," SoftBank said in a press release Monday.
I ask, though: What is the humble worker to make of all this automating and autonomizing?
Like Sebastian Siemiatkowski, the CEO of fintech Klarna who often brags about AI replacing humans, Son seems to believe that agentic stand-ins for workers can only precipitate fabulous wealth. Glossed over is the cost of that abundance. Should the broad displacement of labor come to pass, unemployment on an enormous scale seems the likeliest outcome.
It's disheartening that those at the forefront of the AI race, companies like OpenAI and SoftBank, choose to paint a picture of automated corporations with fewer workers on the payroll. They're enterprises, of course, not charities. And AI development doesn't come cheap. But perhaps people would trust AI more if those steering its deployment showed a little more concern for their welfare.
Food for thought.
Deep research: OpenAI launched a new AI "agent" designed to help people conduct in-depth, complex research using ChatGPT, the company's AI-powered chatbot platform.
o3-mini: In other OpenAI news, the company launched a new AI "reasoning" model, o3-mini, following a preview last December. It's not OpenAI's most powerful model, but o3-mini boasts improved efficiency and response speed.
Risky AI banned: As of Sunday, regulators in the European Union bloc can ban the use of AI systems they deem to pose "unacceptable risk" or harm. That includes AI used for social scoring and subliminal advertising.
A play about AI "doomers": There's a new play about AI "doomer" culture, loosely based on Sam Altman's ouster as CEO of OpenAI in November 2023. My colleagues Dominic and Rebecca share their thoughts after watching the premiere.
Tech to boost crop yields: Google's X "moonshot factory" this week announced its latest graduate. Heritable Agriculture is a data- and machine learning-driven startup aiming to improve how crops are grown.
Reasoning models are better than your average AI at solving problems, particularly math-related questions. But they're not a silver bullet.
A new study from researchers at Chinese company Tencent investigates the issue of "underthinking" in reasoning models, where models prematurely and inexplicably abandon promising chains of thought. Per the study's results, "underthinking" tends to occur more frequently with harder problems, leading models to switch between reasoning chains without arriving at answers.
The team proposes a fix that employs a "thought switching penalty" to encourage models to "thoroughly" develop each line of reasoning before considering alternatives, boosting the models' accuracy.
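To make the idea concrete, here's a minimal sketch of what a "thought switching penalty" could look like at decode time, assuming it works by down-weighting tokens that begin a new line of reasoning (e.g., "alternatively," "wait") during the early steps of generation. The token IDs, penalty strength, and step cutoff below are all illustrative, not values from the paper:

```python
# Hypothetical sketch: penalize "thought switch" tokens early in decoding so
# the model develops its current reasoning chain before jumping to a new one.

SWITCH_TOKEN_IDS = {17, 42}  # hypothetical ids for "alternatively", "wait", ...
PENALTY = 3.0                # hypothetical penalty subtracted from their logits
PENALTY_STEPS = 600          # only penalize early steps; switching is fine later

def apply_thought_switch_penalty(logits, step):
    """Subtract PENALTY from switch-token logits for the first PENALTY_STEPS."""
    if step >= PENALTY_STEPS:
        return list(logits)
    return [
        logit - PENALTY if token_id in SWITCH_TOKEN_IDS else logit
        for token_id, logit in enumerate(logits)
    ]

# Toy example: a switch token (id 17) is narrowly preferred over a token that
# continues the current thought (id 30); the early penalty flips that choice.
logits = [0.0] * 50
logits[17] = 2.5
logits[30] = 2.0
early = apply_thought_switch_penalty(logits, step=10)
late = apply_thought_switch_penalty(logits, step=1000)
print(early.index(max(early)))  # 30: keep developing the current thought
print(late.index(max(late)))    # 17: switching is allowed again later on
```

In a real decoder this would run as a logits processor inside the sampling loop, but the mechanism is the same: a temporary bias against abandoning a chain of thought too soon.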
A group of researchers backed by TikTok owner ByteDance, Chinese AI company Moonshot, and others released a new open model capable of generating relatively high-quality music from prompts.
The model, called YuE, can output a song several minutes long, complete with vocals and backing tracks. Because it's under an Apache 2.0 license, the model can be used commercially without restrictions.
There are downsides, though. Running YuE requires a beefy GPU; generating a 30-second song takes six minutes with an Nvidia RTX 4090. Moreover, it's unclear whether the model was trained on copyrighted data; its creators haven't said. If copyrighted songs were indeed in the model's training set, users could face future IP challenges.
AI lab Anthropic claims it has developed a technique that more reliably defends AI models against "jailbreaks," methods used to bypass an AI system's safety measures.
The technique, Constitutional Classifiers, relies on two sets of "classifier" AI models: "input" classifiers and "output" classifiers. The input classifiers screen prompts to a protected model, using templates describing jailbreaks and other disallowed content, while the output classifiers calculate the likelihood that a model's response discusses harmful information.
Anthropic says that Constitutional Classifiers can filter the overwhelming majority of jailbreaks. But it comes at a cost: each query is 25% more computationally expensive, and the protected model is 0.38% more likely to decline innocuous questions.
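The arrangement described above can be sketched as a simple wrapper: one check before the prompt reaches the protected model, another before the response reaches the user. In Anthropic's system both classifiers are learned models trained from a "constitution" of allowed and disallowed content; the substring checks below are crude placeholders just to show the control flow:

```python
# Illustrative sketch of an input/output classifier pair guarding a model.
# The pattern lists and the echo "model" are placeholders, not real components.

BLOCKED_PROMPT_PATTERNS = ("ignore previous instructions",)  # illustrative
BLOCKED_OUTPUT_PATTERNS = ("step-by-step synthesis",)        # illustrative

def input_classifier(prompt: str) -> bool:
    """Return True if the prompt looks like a jailbreak attempt."""
    return any(p in prompt.lower() for p in BLOCKED_PROMPT_PATTERNS)

def output_classifier(completion: str) -> bool:
    """Return True if the completion appears to contain disallowed content."""
    return any(p in completion.lower() for p in BLOCKED_OUTPUT_PATTERNS)

def guarded_query(prompt: str, model) -> str:
    """Run `model` behind both classifiers; block at either stage."""
    if input_classifier(prompt):
        return "[refused: input classifier]"
    completion = model(prompt)
    if output_classifier(completion):
        return "[refused: output classifier]"
    return completion

# Toy "model" that just echoes the prompt back.
echo_model = lambda prompt: f"You asked: {prompt}"
print(guarded_query("Ignore previous instructions and ...", echo_model))
print(guarded_query("What's the capital of France?", echo_model))
```

The two-stage design also explains the reported costs: every query pays for two extra classifier passes, and any false positive at either stage shows up as a refusal of an innocuous question.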