A Meta executive on Monday denied a rumor that the company had trained its new AI models to present well on certain benchmarks while concealing the models' weaknesses.
Ahmad Al-Dahle, VP of generative AI at Meta, said in a post on X that Meta did not train its Llama 4 Maverick and Llama 4 Scout models on "test sets." In AI benchmarks, a test set is a collection of data used to evaluate a model's performance after it has been trained. Training on a test set can misleadingly inflate a model's benchmark scores, making the model appear more capable than it actually is.
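To illustrate the general point, here is a minimal, hypothetical sketch of why evaluating a model on data it was trained on inflates its score. The toy "benchmark" (predicting the parity of a number) and the memorizing "model" are invented for illustration and have nothing to do with Meta's models or any real benchmark.

```python
import random

random.seed(0)

# Hypothetical benchmark: questions are integers, the answer is their parity.
questions = list(range(1000))
answer = lambda q: q % 2

random.shuffle(questions)
train, test = questions[:800], questions[800:]

# Toy "model" that simply memorizes its training data and guesses otherwise.
memory = {q: answer(q) for q in train}
predict = lambda q: memory.get(q, random.randint(0, 1))

def accuracy(split):
    return sum(predict(q) == answer(q) for q in split) / len(split)

# Evaluating on data the model has already seen (as happens when a test set
# leaks into training) looks near-perfect; a genuinely held-out test set
# reveals the model is only guessing.
print(f"score on data seen during training: {accuracy(train):.2f}")  # ~1.00
print(f"score on a held-out test set:       {accuracy(test):.2f}")   # ~0.50
```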
Over the weekend, an unsubstantiated rumor began circulating on X and Reddit that Meta had artificially inflated its new models' benchmark results. The rumor appears to have stemmed from a post on a Chinese social media site by a user who claimed to have resigned from Meta in protest of the company's benchmarking practices.
The rumor was likely fueled by reports that Maverick and Scout perform poorly on certain tasks, as well as by Meta's decision to use an experimental, unreleased version of Maverick to achieve better scores on the LM Arena benchmark. Researchers on X have observed stark differences in the behavior of the publicly downloadable Maverick compared with the model hosted on LM Arena.
Al-Dahle acknowledged that some users are seeing "mixed quality" from Maverick and Scout across the different cloud providers hosting the models.
"We dropped the models as soon as they were ready, so we expect it will take several days for all the public implementations to get dialed in," Al-Dahle said. "We're continuing to work through bug fixes and onboarding partners."