
Meta’s benchmarks for its new AI models are a bit misleading | TechCrunch

Meta sign
Image Credits: Kelly Sullivan / Getty Images

One of the new flagship AI models Meta released on Saturday, Maverick, ranks second on LM Arena, a test that has human raters compare the outputs of models and choose which they prefer. But it seems the version of Maverick that Meta deployed to LM Arena differs from the version that’s widely available to developers.

As several AI researchers pointed out on X, Meta noted in its announcement that the Maverick on LM Arena is an “experimental chat version.” A chart on the official Llama website, meanwhile, discloses that Meta’s LM Arena testing was conducted using “Llama 4 Maverick optimized for conversationality.”

As we’ve written about before, for various reasons, LM Arena has never been the most reliable measure of an AI model’s performance. But AI companies generally haven’t customized or otherwise fine-tuned their models to score better on LM Arena — or haven’t admitted to doing so, at least.

The problem with tailoring a model to a benchmark, withholding it, and then releasing a “vanilla” variant of that same model is that it makes it challenging for developers to predict exactly how well the model will perform in particular contexts. It’s also misleading. Ideally, benchmarks — woefully inadequate as they are — provide a snapshot of a single model’s strengths and weaknesses across a range of tasks.

Indeed, researchers on X have observed stark differences in the behavior of the publicly downloadable Maverick compared with the model hosted on LM Arena. The LM Arena version seems to use a lot of emojis and to give incredibly long-winded answers.

Okay Llama 4 is def a littled cooked lol, what is this yap city pic.twitter.com/y3GvhbVz65

— Nathan Lambert (@natolambert) April 6, 2025

for some reason, the Llama 4 model in Arena uses a lot more Emojis

on together . ai, it seems better: pic.twitter.com/f74ODX4zTt

— Tech Dev Notes (@techdevnotes) April 6, 2025

We’ve reached out to Meta and Chatbot Arena, the organization that maintains LM Arena, for comment.

Kyle Wiggers is TechCrunch’s AI Editor. His writing has appeared in VentureBeat and Digital Trends, as well as a range of gadget blogs including Android Police, Android Authority, Droid-Life, and XDA-Developers. He lives in Manhattan with his partner, a music therapist.

