On Tuesday, Meta will hold its first LlamaCon AI developer conference at its Menlo Park headquarters, where it will try to pitch developers on building applications with its open Llama AI models. Just a year ago, that wasn't a hard sell.
But in recent months, Meta has struggled to keep pace with both "open" AI labs like DeepSeek and closed competitors like OpenAI in the rapidly evolving AI race. LlamaCon comes at a critical moment in Meta's quest to build a sprawling Llama ecosystem.
Winning over developers may be as simple as shipping better open models. But that may be harder to achieve than it sounds.
A promising early start
Earlier this month, Meta launched Llama 4, whose benchmark scores came in below those of models like DeepSeek's R1 and V3, leaving developers underwhelmed. It was a far cry from the old Llama: a lineup of boundary-pushing models.
When Meta launched its Llama 3.1 405B model last summer, CEO Mark Zuckerberg touted it as a big win. In a blog post, Meta called Llama 3.1 405B "the most capable openly available foundation model."
It was certainly an impressive model, and so were the other models in Meta's Llama 3 family. Jeremy Nixon, who has hosted hackathons at San Francisco's AGI House for the past several years, called the Llama 3 launches "historic moments."
Llama 3 arguably made Meta a darling among AI developers, delivering cutting-edge performance along with the freedom to host the models wherever they chose. Today, Meta's Llama 3.3 model is downloaded more often than Llama 4, Jeff Boudier, head of product and growth at Hugging Face, said in an interview.
Contrast that with the reception of Meta's Llama 4 family, and the difference is stark. Llama 4 has been controversial from the start.
Benchmark shenanigans
Meta optimized a version of one of its Llama 4 models, Llama 4 Maverick, which helped it clinch a top spot on the crowdsourced benchmark LM Arena. However, Meta never released this model; the widely deployed version of Maverick performed much worse on LM Arena.
The group behind LM Arena said Meta should have been "clearer" about the discrepancy. Ion Stoica, an LM Arena co-founder and UC Berkeley professor who has also co-founded companies including Anyscale and Databricks, said the incident harmed the developer community's trust in Meta.
"(Meta) should have made it clearer that the Maverick model that was on (LM Arena) was different from the model that was released," Stoica told TechCrunch in an interview. "When this happens, there's a bit of a loss of trust with the community. Of course, they can recover that by releasing a better model."
No reasoning model
A glaring omission from the Llama 4 family was an AI reasoning model. Reasoning models can work carefully through questions before answering them. Over the past year, much of the AI industry has released reasoning models, which tend to perform better on certain benchmarks.
Meta is teasing a Llama 4 reasoning model, but the company hasn't indicated when to expect it.
Nathan Lambert, a researcher at AI2, says the fact that Meta didn't release a reasoning model with Llama 4 suggests the company may have rushed the launch.
"Everyone releases a reasoning model, and it makes their models look really good," Lambert said. "Why couldn't (Meta) wait to do that? I don't have the answer to that question. It seems like normal company weirdness."
Lambert said that rival open models are closer to the frontier than ever before and now come in more shapes and sizes, greatly increasing the pressure on Meta. For example, on Monday, Alibaba released a collection of models, Qwen3, which reportedly outperform some of OpenAI's and Google's best coding models on the programming benchmark Codeforces.
To regain the lead in open models, Meta simply needs to deliver superior models, Ravid Shwartz-Ziv, an AI researcher at NYU's Center for Data Science, told TechCrunch.
It's unclear whether Meta is in a position to take big risks right now. Current and former employees previously told Fortune that Meta's AI research lab is "dying a slow death." Joelle Pineau, the company's VP of AI Research, announced this month that she was leaving.
LlamaCon is Meta's chance to show what it has been cooking up to beat upcoming releases from AI labs like OpenAI, Google, xAI, and others. If it fails to deliver, the company could fall even further behind in the ultra-competitive space.