Meta dropped three new models over the weekend: Scout, Maverick, and a still-training giant, billed as the next evolution of its "open-ish" AI. But instead of excitement, the reaction was mostly a shrug. Critics called the release underwhelming and said it lacks the edge expected in today's breakneck AI race. What was clearly an attempt to pull attention back to Meta quickly got messy: accusations began circulating on X and Reddit, centering on alleged benchmark tampering, a mysterious ex-employee, and the gap between the models' public and private performance.
On today's episode of TechCrunch's Equity podcast, Kirsten Korosec, Max Zeff, and Anthony Ha unpack Meta's rocky rollout, the AI industry's obsession with looking smart on paper, and why, as Kirsten put it, doing well on tests is "not always translated into a good business."
Listen to the full episode here.
Equity will be back next week, so stay tuned!
Equity is TechCrunch's flagship podcast, produced by Theresa Loconsolo, and posted every Wednesday and Friday. Subscribe to us on Apple Podcasts, Overcast, Spotify, and all the casts. You can also follow Equity on X and Threads, at @EquityPod. For full episode transcripts, for those who prefer reading over listening, check out our full archive of episodes.