Chinese AI lab DeepSeek has released an open version of its reasoning model, DeepSeek-R1, which it claims performs on par with OpenAI’s o1 on certain AI benchmarks.
R1 is available from the AI development platform Hugging Face under an MIT license, meaning it can be used commercially without restrictions. According to DeepSeek, R1 outperforms o1 on the AIME, MATH-500, and SWE-bench Verified benchmarks. AIME uses other models to evaluate a model’s performance, MATH-500 is a collection of word problems, and SWE-bench Verified focuses on programming tasks.
Because R1 is a reasoning model, it effectively fact-checks itself, which helps it avoid some of the pitfalls that typically trip up models. Reasoning models take a little longer to arrive at a solution (typically seconds to minutes longer) than non-reasoning models, but the advantage is that they tend to be more reliable in domains such as physics, science, and mathematics.
DeepSeek’s technical report revealed that R1 contains 671 billion parameters. Parameters roughly correspond to the model’s problem-solving skills, and models with more parameters generally perform better than models with fewer parameters.
While 671 billion parameters is a huge number, DeepSeek has also released “distilled” versions of R1 ranging in size from 1.5 billion parameters to 70 billion parameters. The smallest can be run on a laptop. Full R1 requires more powerful hardware, but is available through DeepSeek’s API for 90% to 95% less than OpenAI’s o1.
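For developers, a hosted API is the practical way to use the full model. The sketch below is a minimal, hedged example of calling an OpenAI-style chat-completions endpoint; the endpoint URL, the model name `deepseek-reasoner`, and the `DEEPSEEK_API_KEY` environment variable are assumptions, not details confirmed by this article — check DeepSeek’s API documentation for the actual values.

```python
# Hedged sketch: assumes DeepSeek exposes an OpenAI-compatible
# chat-completions endpoint. The URL, model name, and env var
# below are illustrative assumptions, not confirmed specifics.
import json
import os
import urllib.request

API_URL = "https://api.deepseek.com/chat/completions"  # assumed endpoint


def build_request(prompt: str, model: str = "deepseek-reasoner") -> dict:
    """Build an OpenAI-style chat-completion request payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def ask(prompt: str) -> str:
    """Send the payload and return the assistant's reply text."""
    payload = build_request(prompt)
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['DEEPSEEK_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Requires a valid API key; reasoning models may take
    # noticeably longer to respond than standard models.
    print(ask("What is 27 * 43?"))
```

The distilled checkpoints, by contrast, can be downloaded from Hugging Face and run locally on consumer hardware rather than called over an API.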
R1 has its drawbacks, however. As a Chinese model, it is subject to benchmarking by China’s internet regulator to ensure its responses “embody core socialist values.” R1 will not answer questions about Tiananmen Square or Taiwan’s autonomy, for example.

Many Chinese AI systems, including other reasoning models, decline to respond to topics that could anger domestic regulators, such as speculation about Xi Jinping’s government.
R1 arrives days after the outgoing Biden administration proposed tougher export controls and restrictions on AI technology for Chinese ventures. Chinese companies are already barred from purchasing advanced AI chips, but if the new rules take effect as written, they will face tighter restrictions on both the semiconductor technology and the models needed to build advanced AI systems.
In a policy paper last week, OpenAI called on the U.S. government to support the development of American AI to ensure that Chinese models do not match or surpass it in capability. In an interview with The Information, Chris Lehane, vice president of policy at OpenAI, singled out High-Flyer Capital Management, DeepSeek’s parent company, as an organization of particular concern.
So far, at least three Chinese labs have developed models they claim are comparable to o1: DeepSeek, Alibaba, and Moonshot AI, the Chinese unicorn behind Kimi. (Notably, DeepSeek was the first, announcing a preview of R1 in late November.) In a post on X, Dean Ball, an AI researcher at George Mason University, said the trend suggests that Chinese AI labs will continue to be “fast followers.”
“The impressive performance of DeepSeek’s distilled models (…) means that highly capable reasoners will continue to be widely available and run on local hardware,” Ball wrote, “far from the eyes of any top-down control regime.”