Mistral AI dropped Mistral Medium 3.5 on April 29. The Paris-based lab announced a dense 128-billion-parameter model, a set of agentic features—and walked straight into a wall of online “meh” reactions.
The release came in three parts. First, the model itself. Second, remote coding agents via Mistral Vibe CLI—cloud-based coding sessions that can push pull requests to GitHub and run in parallel without you sitting at a terminal. Third, Work Mode in Le Chat, Mistral's ChatGPT-style consumer interface, which now handles multi-step autonomous tasks like email triage, research synthesis, and cross-tool workflows.
Big ambitions, but a messy benchmark reality.
Medium 3.5 scores 77.6% on SWE-Bench Verified—a coding benchmark that tests whether a model can fix real GitHub issues by generating working patches. It also hits 91.4% on τ³-Telecom, which measures agentic tool use in specialized environments. Mistral also merged three previously separate models (Medium 3.1, Magistral, and Devstral 2) into one set of weights with configurable reasoning effort per request.
A unified model replacing three is a real engineering win. The problem is what it costs and who it's up against.
Mistral charges $1.50 per million input tokens and $7.50 per million output tokens. Alibaba's Qwen 3.6 at 27 billion parameters—less than a quarter of Medium 3.5's parameter count—scores 72.4% on the same SWE-Bench Verified benchmark and ships under Apache 2.0, meaning you can download and run it for free.
Scroll through the open-source leaderboards and the picture is stark. The top spots belong to Alibaba's Qwen, GLM from China's Zhipu AI, and MiMo-V2 from Xiaomi, all of them cheaper than, and at least competitive with, Mistral's new release. Medium 3.5 hasn't even ranked on major independent leaderboards yet; third-party evaluations are still pending.
The one bright spot, some argue, is that Mistral is at this point the lone non-Chinese lab with any serious presence in the open-source conversation.
Pedro Domingos, a machine learning professor at the University of Washington, wasn't gentle:
"Regular AI companies brag about how much better their model is on benchmarks. Only Mistral brags about how much worse its one is."
He followed up with a sharper question: "I don't know what's worse, for Europe to not be in the AI race or for it to be represented by a laughingstock like Mistral."
Youssof Altoukhi, founder of Yoyo Studios, did the math: Qwen 3.6, at 27 billion parameters, is 4.7 times smaller than Medium 3.5 and scores comparably on coding. Medium 3.5's output pricing puts it alongside closed models that score significantly higher on every major benchmark.
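The size math is easy to sanity-check. A minimal sketch using only the figures cited above (parameter counts and SWE-Bench Verified scores; the ratio matches Altoukhi's ~4.7x claim):

```python
# Parameter counts in billions, as cited in the coverage above.
mistral_params = 128  # Mistral Medium 3.5 (dense)
qwen_params = 27      # Alibaba's Qwen 3.6

size_ratio = mistral_params / qwen_params
print(f"Qwen 3.6 is {size_ratio:.1f}x smaller")  # ~4.7x

# SWE-Bench Verified scores for the two models.
mistral_swe = 77.6
qwen_swe = 72.4
gap = mistral_swe - qwen_swe
print(f"{gap:.1f}-point score gap for {size_ratio:.1f}x the parameters")
```

In other words, Medium 3.5 buys roughly a five-point coding-benchmark edge with nearly five times the parameters, which is the core of the online criticism.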
“If it wasn’t for their political skill they would have been bankrupt by now,” he said.
Not everyone was purely dismissive. AI developer Michal Langmajer captured the ambivalence:
"I'm genuinely glad there's still a non-US, non-Chinese lab trying to build frontier LLMs but boy we have to level up the game in Europe. Their new flagship model is basically 'not the best' on any benchmark, yet costs multiple times more than most competitors."
Some developers argued open weights are a durability play, not a leaderboard play. A model anyone can download, fine-tune, and self-host doesn't need to win rankings today to stay relevant. Others pointed to Mistral's real enterprise deployments across Europe as evidence the moat isn't purely technical.
This is where Mistral's actual pitch lives.
European enterprises under GDPR, banks handling sensitive customer data, and governments that won't route AI workloads through Chinese infrastructure have limited options. As Decrypt reported last December, HSBC signed a multi-year deal with Mistral specifically to self-host models on its own infrastructure. The appeal of an EU-headquartered open-weight lab with a $14 billion valuation doesn't show up in benchmark tables—but it shows up in procurement decisions.
Not the best at coding, and not the cheapest. But it is: not American, not Chinese, auditable, self-hostable, and legally safe for European enterprises.