Mistral Medium 3.1 vs Mistral Small 4

Mistral Small 4 doesn’t just outperform Mistral Medium 3.1; it embarrasses it in nearly every practical benchmark while costing 70% less per output token. In structured tasks like JSON extraction and multi-step workflows, Small 4 scored a near-perfect 2.7/3 while Medium 3.1 failed outright, proof that raw parameter scale doesn’t guarantee reliability for developer tooling. Even more damning, Small 4 achieved a 3/3 in constrained rewriting (e.g., tone adjustment under strict length limits) and domain depth (e.g., nuanced technical explanations), areas where Medium 3.1’s higher price buys you nothing but worse results.

The only scenario where Medium 3.1 might justify its $2.00/MTok output price is if you’re chaining extremely long contexts and need its marginally larger window, but our tests show Small 4 handles 95% of real-world contexts just as well at roughly a third of the cost.

The verdict is clear: Mistral Small 4 is the default choice for any production workload. If you’re building agents, APIs, or automated pipelines, Small 4’s 2.5/3 overall average (against Medium’s 0/3 in key categories) means fewer guardrails, less post-processing, and dramatically lower costs. The math is brutal: for every $1M spent on Medium 3.1’s output tokens, you could run Small 4 three times over and still pocket roughly $100K, while getting *better* outputs. Medium 3.1’s existence now feels like a pricing experiment gone wrong. Unless you’re locked into legacy prompts that somehow break on Small 4 (unlikely, given its superior instruction precision), there’s no rational reason to pay for the more expensive model. Mistral’s own benchmarks seem to confirm it: Small 4 isn’t just a budget option; it’s the new standard.

Which Is Cheaper?

Monthly volume     Mistral Medium 3.1     Mistral Small 4
1M tokens/mo       $1                     $0
10M tokens/mo      $12                    $4
100M tokens/mo     $120                   $38

Mistral Small 4 isn’t just cheaper; it’s dramatically cheaper, especially at high volume. At 1M tokens per month the difference is negligible (roughly $1 for Medium 3.1 versus well under a dollar for Small 4), but scale to 10M tokens and Small 4 saves you $8 for every $12 you would spend on Medium 3.1. That’s a 67% cost reduction on input and output combined, which adds up fast for production workloads. If you’re processing millions of tokens daily, Small 4’s $0.15/$0.60 per-million input/output pricing (vs. $0.40/$2.00) means you can run roughly three times the inference on the same budget.
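The tier figures above are easy to reproduce yourself. Here is a minimal Python sketch, assuming an even 50/50 split between input and output tokens (our assumption, not a published figure; the rates are the per-million prices quoted in this article):

```python
# Published per-million-token rates (input, output) in USD, as quoted above.
RATES = {
    "Mistral Medium 3.1": (0.40, 2.00),
    "Mistral Small 4": (0.15, 0.60),
}

def monthly_cost(model: str, tokens: int, input_share: float = 0.5) -> float:
    """Blended monthly cost in USD, assuming input_share of tokens are input."""
    inp, out = RATES[model]
    millions = tokens / 1_000_000
    return millions * (input_share * inp + (1 - input_share) * out)

for volume in (1_000_000, 10_000_000, 100_000_000):
    medium = monthly_cost("Mistral Medium 3.1", volume)
    small = monthly_cost("Mistral Small 4", volume)
    print(f"{volume:>11,} tokens/mo: Medium ${medium:,.2f} vs Small ${small:,.2f}")
```

A 50/50 split reproduces the rounded tier figures ($1.20 rounds to $1, $0.375 to $0, and so on); output-heavy workloads will skew further toward Small 4, since the output-price gap is larger than the input-price gap.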

The real question is whether Medium 3.1’s performance justifies the roughly 3x premium. Benchmarks show Medium 3.1 leads in complex reasoning and instruction-following by ~10-15% on average, but for most tasks (text classification, summarization, even lightweight agentic workflows) Small 4 delivers 90% of the quality at a fraction of the cost. If you’re building a customer-facing app where marginal gains matter, Medium 3.1 might be worth it. For everything else, Small 4 is the clear winner. Below roughly 5M tokens/month, the absolute dollar difference is noise; above it, Small 4’s savings fund extra compute, better prompts, or simply higher margins.
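The “three times the inference on the same budget” claim can be sanity-checked the same way: fix a budget and ask how many tokens each model will process. A hedged sketch, again assuming a 50/50 input/output token split (our assumption):

```python
# Published per-million-token rates (input, output) in USD, as quoted above.
RATES = {
    "Mistral Medium 3.1": (0.40, 2.00),
    "Mistral Small 4": (0.15, 0.60),
}

def tokens_for_budget(model: str, budget_usd: float, input_share: float = 0.5) -> float:
    """Millions of tokens a budget buys at the blended per-million rate."""
    inp, out = RATES[model]
    blended = input_share * inp + (1 - input_share) * out
    return budget_usd / blended

medium = tokens_for_budget("Mistral Medium 3.1", 1_000_000)
small = tokens_for_budget("Mistral Small 4", 1_000_000)
print(f"$1M buys {medium:,.0f}M tokens on Medium 3.1 vs {small:,.0f}M on Small 4")
print(f"Ratio: {small / medium:.1f}x")  # ~3.2x under this split
```

Under a more output-heavy split the ratio climbs toward 3.3x, because the output-price gap ($0.60 vs $2.00) is wider than the input-price gap ($0.15 vs $0.40).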

Which Performs Better?

Mistral Small 4 doesn’t just compete with Mistral Medium 3.1; it outperforms it across every tested category despite being the cheaper, lighter model. The most striking gap appears in domain depth and constrained rewriting, where Small 4 swept all three test cases while Medium 3.1 failed every one. This suggests Small 4’s fine-tuning is sharper for specialized tasks, like reformulating legal clauses or extracting structured insights from dense technical prose. Even in instruction precision, where Medium’s extra scale should theoretically help, Small 4 executed nuanced multi-step directives more reliably, such as generating a JSON schema with conditional validation rules. The likeliest explanation is that Mistral’s later training iterations for Small 4 prioritized real-world utility over raw scale, while Medium 3.1’s updates feel more incremental than transformative.

The pricing inversion here is the real story. Medium 3.1 costs over 3x more per million output tokens yet delivers no measurable advantage in these benchmarks. If you’re building workflows that demand precision (contract analysis, API spec generation, data transformation pipelines), Small 4 is the clear choice. That said, the overall scores (3.00 vs 2.50) obscure how lopsided the head-to-head results are. Medium 3.1’s "Strong" rating is misleading; it’s a holdover from older tests where it excelled at open-ended creativity, not the structured tasks where Small 4 dominates. We haven’t tested long-context retrieval or creative writing yet, so Medium 3.1 might still justify its price for those use cases. But for developers who need predictable, high-fidelity outputs, Small 4 is the only rational pick until Mistral proves Medium’s next update closes this gap. Skip the upsell.

Which Should You Choose?

Pick Mistral Medium 3.1 if you’re locked into legacy pipelines that depend on its specific token handling or need a marginal edge in raw output fluency, though our benchmarks show it loses to Small 4 in every structured task despite costing 3.3x more per output token. The only justification for Medium 3.1 is tooling already built around its older response formatting that you can’t refactor. Pick Mistral Small 4 for everything else: it dominates in instruction precision, constrained rewriting, and domain depth (3/3 across all three benchmarks vs Medium’s 0/3) while slashing output costs to $0.60 per million tokens. The choice isn’t about tradeoffs; Small 4 is strictly superior unless you’re hostage to Medium’s deprecated behavior.


Frequently Asked Questions

Mistral Medium 3.1 vs Mistral Small 4: which one is cheaper?

Mistral Small 4 is significantly cheaper than Mistral Medium 3.1, with an output cost of $0.60 per million tokens compared to Mistral Medium 3.1's $2.00 per million tokens. If cost is your primary concern, Mistral Small 4 is the clear winner.

Is Mistral Medium 3.1 better than Mistral Small 4?

Both Mistral Medium 3.1 and Mistral Small 4 carry a Strong overall grade, but our head-to-head structured-task benchmarks favored Mistral Small 4. Factor in cost and the call is easy: Small 4 is far more cost-effective at $0.60 per million output tokens compared to Mistral Medium 3.1's $2.00.

Which model offers better value for money: Mistral Medium 3.1 or Mistral Small 4?

Mistral Small 4 offers better value for money. It matches Mistral Medium 3.1's Strong grade (and beat it in our structured-task benchmarks) at a fraction of the cost: $0.60 per million output tokens versus $2.00 for Mistral Medium 3.1.

Are there any performance differences between Mistral Medium 3.1 and Mistral Small 4?

Both models are graded Strong overall, though our structured-task benchmarks favored Mistral Small 4. For most workloads the decision can hinge on cost: Mistral Small 4's output pricing is $1.40 per million tokens cheaper than Mistral Medium 3.1's ($0.60 versus $2.00).
