Hiya, folks, welcome to TechCrunch’s regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.
You may have noticed that we skipped the newsletter last week. The reason? A chaotic AI news cycle made even more turbulent by the sudden rise of Chinese AI company DeepSeek and the response from practically every corner of industry and government.
Luckily, we’re back on track, and not a moment too soon, considering last weekend’s newsy developments from OpenAI.
OpenAI CEO Sam Altman stopped in Tokyo for an onstage chat with Masayoshi Son, CEO of the Japanese conglomerate SoftBank. SoftBank is a major OpenAI investor and partner, having pledged to help fund OpenAI’s massive data center infrastructure projects in the U.S.
So Altman probably felt he owed Son a few hours of his time.
What did the two billionaires talk about? Per secondhand reports, much of it concerned abstracting away work with AI “agents.” Son said his company would spend $3 billion a year on OpenAI products and would team up with OpenAI to develop a platform to automate millions of traditionally white-collar workflows.
“By automating and autonomizing all of its tasks and workflows, SoftBank Corp. will transform its business and services, and create new value,” SoftBank said in a press release Monday.
But what, I ask, are the humble workers supposed to make of all this automating and autonomizing?
Like Sebastian Siemiatkowski, CEO of fintech Klarna, who often boasts about AI replacing humans, Son seems to be of the opinion that agentic stand-ins for workers can only precipitate enormous wealth. Glossed over is the cost of that abundance: if the broad automation of jobs comes to pass, mass unemployment seems the likeliest outcome.
It’s discouraging that those at the forefront of the AI race, companies like OpenAI and investors like SoftBank, choose to spend press conferences painting pictures of automated corporations with fewer workers on the payroll. They’re businesses, of course, not charities, and AI development doesn’t come cheap. But perhaps people would trust AI more if those leading its deployment expressed a bit more concern for their welfare.
Food for thought.
News
Deep research: OpenAI has launched a new AI “agent” designed to help people conduct in-depth, complex research using ChatGPT, its AI-powered chatbot platform.
o3-mini: In other OpenAI news, the company launched a new AI “reasoning” model, o3-mini, following a preview last December. It’s not OpenAI’s most powerful model, but o3-mini boasts improved efficiency and response speed.
The EU bans risky AI: As of Sunday, regulators in the European Union can ban the use of AI systems they deem to pose an “unacceptable risk” of harm. That includes AI used for social scoring and subliminal advertising.
A play about AI “doomers”: A new play dramatizes AI “doomer” culture, loosely based on Sam Altman’s ouster as CEO of OpenAI in November 2023.
Tech to boost crop yields: Google’s X “moonshot factory” announced its latest graduate this week: Heritable Agriculture, a data- and machine learning-driven startup that aims to improve how crops are grown.
This week’s research paper
Reasoning models are better than your average AI at solving problems, particularly science- and math-related queries. But they’re no silver bullet.
A new study from researchers at Tencent, the Chinese tech giant, investigates the problem of “underthinking” in reasoning models, where models prematurely and inexplicably abandon potentially promising chains of thought. Per the study’s findings, underthinking patterns tend to occur more frequently with harder problems, leading models to switch between reasoning chains without arriving at answers.
The team proposes a fix: a “thought switching penalty” that encourages models to “thoroughly” develop each line of reasoning before considering alternatives, boosting the models’ accuracy.
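The paper’s exact method isn’t reproduced here, but the general idea, a decoding-time penalty on tokens that signal a switch to a new line of reasoning, can be sketched in a few lines of Python. The token IDs, minimum thought length, and penalty strength below are illustrative assumptions, not the paper’s actual hyperparameters.

```python
import torch

# Hypothetical IDs for tokens that typically open a new reasoning chain
# (e.g., "Alternatively", "Wait"); real IDs depend on the tokenizer.
SWITCH_TOKEN_IDS = [31476, 14524]

def apply_thought_switch_penalty(
    logits: torch.Tensor,       # (vocab_size,) next-token logits
    tokens_in_thought: int,     # tokens generated since the last switch
    min_thought_len: int = 50,  # assumed length before switching goes unpenalized
    penalty: float = 3.0,       # assumed penalty strength
) -> torch.Tensor:
    """Discourage abandoning the current chain of thought too early by
    down-weighting switch-signaling tokens until the chain has had a
    chance to develop."""
    if tokens_in_thought < min_thought_len:
        logits = logits.clone()  # avoid mutating the caller's tensor
        logits[SWITCH_TOKEN_IDS] -= penalty
    return logits
```

Applied at each decoding step, a penalty like this makes early switches less likely without forbidding them outright, which is roughly the trade-off the paper describes.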
This week’s model

A team of researchers backed by TikTok owner ByteDance, Chinese AI company Moonshot, and others has released a new open model capable of generating relatively high-quality music from prompts.
The model, called YuE, can output songs up to a few minutes long, complete with vocals and backing tracks. It’s under an Apache 2.0 license, meaning it can be used commercially without restriction.
There are drawbacks, however. Running YuE requires a beefy GPU; generating a 30-second song takes six minutes on an Nvidia RTX 4090. Moreover, it’s unclear whether the model was trained on copyrighted data; its creators haven’t said. If copyrighted songs turn out to be in the model’s training set, users could face IP challenges down the road.
Grab bag

AI lab Anthropic claims to have developed a technique for more reliably defending against AI “jailbreaks,” the methods used to bypass an AI system’s safety measures.
The technique, constitutional classifiers, relies on two sets of “classifier” AI models: “input” classifiers and “output” classifiers. The input classifiers append prompts to a safeguarded model with templates describing jailbreaks and other disallowed content, while the output classifiers calculate the likelihood that the model’s responses discuss harmful information.
Anthropic says constitutional classifiers can filter the “overwhelming majority” of jailbreaks. But the approach comes at a cost: each query is about 25% more computationally expensive, and the safeguarded model is 0.38% less likely to answer harmless questions.
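Anthropic hasn’t published the classifiers themselves, and the sketch below is not its implementation; it just illustrates the two-stage wrapper pattern the description implies, with trivial keyword heuristics standing in for the real classifier models. The hint lists and the 0.5 threshold are placeholder assumptions.

```python
# Placeholder phrase lists standing in for trained classifier models.
DISALLOWED_HINTS = ("ignore previous instructions", "pretend you have no rules")
HARMFUL_HINTS = ("synthesis route", "step-by-step instructions for")

def score_input(prompt: str) -> float:
    """Placeholder input classifier: likelihood the prompt is a jailbreak
    or a request for disallowed content."""
    return 1.0 if any(h in prompt.lower() for h in DISALLOWED_HINTS) else 0.0

def score_output(response: str) -> float:
    """Placeholder output classifier: likelihood the response discusses
    harmful information."""
    return 1.0 if any(h in response.lower() for h in HARMFUL_HINTS) else 0.0

def guarded_generate(generate, prompt: str, threshold: float = 0.5) -> str:
    # Stage 1: screen the prompt before it reaches the model.
    if score_input(prompt) > threshold:
        return "Request declined."
    response = generate(prompt)
    # Stage 2: screen the model's response before returning it.
    if score_output(response) > threshold:
        return "Response withheld."
    return response

# Usage with a stand-in model:
print(guarded_generate(lambda p: f"Echo: {p}", "What's the weather like?"))
```

Running classifiers on every prompt and response is also where a compute overhead like the reported 25% would come from in a real deployment.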