AI labs like OpenAI claim that their so-called “reasoning” AI models, which can “think” through problems, are more capable than their non-reasoning counterparts in specific domains, such as physics. But while this appears to be generally true, the claims are hard to verify independently, because reasoning models are far more expensive to benchmark.
Evaluating OpenAI’s o1 reasoning model across a suite of seven popular AI benchmarks cost $2,767.05, according to data from Artificial Analysis, a third-party AI testing outfit.
Per Artificial Analysis, benchmarking Anthropic’s recent Claude 3.7 Sonnet, a “hybrid” reasoning model, cost $1,485.35, while testing OpenAI’s o3-mini-high cost $344.59.
Some reasoning models are cheaper to benchmark than others. Artificial Analysis spent $141.22 evaluating OpenAI’s o1-mini, for example. But on average, they tend to be pricey. All told, Artificial Analysis has spent around $5,200 evaluating reasoning models, nearly twice the amount it spent analyzing more than 80 non-reasoning models ($2,400).
OpenAI’s non-reasoning GPT-4o model, released in May 2024, cost Artificial Analysis just $108.85 to evaluate, while Claude 3.5 Sonnet, Claude 3.7 Sonnet’s non-reasoning predecessor, cost $81.41.
George Cameron, co-founder of Artificial Analysis, told TechCrunch that the organization plans to increase its benchmarking spend as more AI labs develop reasoning models.
“At Artificial Analysis, we run hundreds of evaluations monthly and devote a significant budget to these,” Cameron said. “We’re planning for this spend to increase as models are released more frequently.”
Artificial Analysis isn’t the only outfit of its kind dealing with rising AI benchmarking costs.
Ross Taylor, CEO of the AI startup General Reasoning, said he recently spent $580 evaluating Claude 3.7 Sonnet on around 3,700 unique prompts. A single run-through of MMLU-Pro, a question set designed to benchmark a model’s language comprehension skills, would cost even more, Taylor estimates.
“We’re moving to a world where labs report x% on a benchmark where they spend y amount of compute, but where the resources available to academics are far less than y,” Taylor wrote in a recent post on X. “[N]o one is going to be able to replicate the results.”
Why are reasoning models so expensive to benchmark? Mainly because they generate a lot of tokens. Tokens represent bits of raw text, such as the word “fantastic” split into the syllables “fan,” “tas,” and “tic.” According to Artificial Analysis, OpenAI’s o1 generated more than 44 million tokens during the firm’s benchmarking tests, around eight times the amount GPT-4o produced.
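To see how text maps to tokens in practice, here is a minimal sketch using OpenAI’s open-source tiktoken library. The choice of the cl100k_base encoding is an assumption for illustration; exact splits vary by tokenizer, so the “fan/tas/tic” example above is illustrative rather than a literal output.

```python
# Count and inspect tokens with OpenAI's tiktoken library
# (pip install tiktoken). Exact splits depend on the tokenizer;
# per-token billing is based on totals like these.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by many recent OpenAI models

text = "Reasoning models generate a lot of tokens."
token_ids = enc.encode(text)

print(f"{len(token_ids)} tokens")
for tid in token_ids:
    # Show the raw bytes each token ID maps back to
    print(tid, enc.decode_single_token_bytes(tid))
```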
Most AI companies charge for model usage by the token, so you can see how these costs add up.
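To make the billing arithmetic concrete, here is a minimal sketch of how a token count turns into an evaluation bill. The 44-million-token figure comes from the Artificial Analysis data above; the $60-per-million rate is an assumption based on o1’s published output-token API pricing, and input-token charges are ignored, so the result is a lower bound.

```python
# Rough per-token billing estimate for a benchmark run.
# Assumes ~$60 per million output tokens (o1's published API rate)
# and ignores input tokens, so this is a lower bound.

O1_OUTPUT_PRICE_PER_MILLION = 60.00  # USD, assumed rate

def output_cost(tokens_generated: int, price_per_million: float) -> float:
    """Cost of the tokens a model generates during evaluation."""
    return tokens_generated / 1_000_000 * price_per_million

# Artificial Analysis reports o1 generated over 44 million tokens
# across its seven-benchmark suite.
o1_tokens = 44_000_000
print(f"o1 output cost: ${output_cost(o1_tokens, O1_OUTPUT_PRICE_PER_MILLION):,.2f}")
# -> o1 output cost: $2,640.00, in the neighborhood of the reported
#    $2,767.05 total, which also covers input tokens
```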
Modern benchmarks also tend to elicit lots of tokens from models because they include questions involving complex, multi-step tasks, says Jean-Stanislas Denain, a senior researcher at Epoch AI, which develops its own model benchmarks.
“Benchmarks [today] are more complex, even though the number of questions per benchmark has overall decreased,” Denain told TechCrunch. “They often attempt to evaluate models’ ability to perform real-world tasks, such as writing and executing code, browsing the internet, and using a computer.”
Denain added that the most expensive models have gotten more expensive per token over time. Anthropic’s Claude 3 Opus, for example, was the most expensive model when it was released in May 2024. OpenAI’s GPT-4.5 and o1-pro, both launched earlier this year, cost $150 per million output tokens and $600 per million output tokens, respectively.
“[S]ince models have gotten better over time, it remains true that the cost to reach a given level of performance has greatly decreased over time,” Denain said. “But if you want to evaluate the biggest models at any point in time, you’re still paying a lot.”
Many AI labs, including OpenAI, give benchmarking organizations free or subsidized access to their models for testing purposes. But some experts say this colors the results; even if there’s no evidence of manipulation, the mere suggestion of an AI lab’s involvement threatens to harm the integrity of the evaluation scores.
“[F]rom a scientific point of view, if you publish a result that no one can replicate with the same model, is it even science anymore?” Taylor wrote in a follow-up post on X. “(Was it ever science? lol)”