AI Primer

MiniMax M2.7 reportedly opens weights in about 2 weeks

Skyler Miao said MiniMax M2.7 open weights are due in roughly two weeks, with updates tuned for agent tasks. Separate replies also confirm a multimodal M3, so local-stack builders should watch both the drop and the benchmark setup.


TL;DR

  • MiniMax appears set to release M2.7 open weights in roughly two weeks: Skyler Miao said "open weights coming in ~2 weeks" in a screenshot shared in the open-weights post, and multiple community posts, including the home-run reaction, repeat the same timing.
  • The company is still tuning M2.7 for agent-style workloads: the same open-weights post says the team "updated a new version" that is "noticeably better on OpenClaw," while MiniMax's benchmark repost says it did "extensive optimizations" and built a dedicated benchmark for that work.
  • MiniMax also confirmed that vision is planned for the next generation, with Skyler Miao replying "Sure, in M3" in screenshots captured in the M3 vision reply and echoed by the MiniMax repost.
  • For local-stack builders, the near-term story is M2.7 weights plus early ecosystem signals: the home-run reaction is already framing it as a model you could "realistically run at home," and MiniMax now has a visible presence at its Hugging Face org page.

What MiniMax actually confirmed

The clearest hard signal is the M2.7 weights timeline. In the screenshot embedded in open-weights post, Skyler Miao says "M2.7 open weights coming in ~2 weeks" and adds that the team is "still actively iterating." That is stronger than rumor, but it is still a social-post commitment rather than a formal launch note with license, parameter count, or deployment requirements.

MiniMax separately confirmed a multimodal follow-on. The M3 vision reply shows a user asking whether MiniMax models will have vision, and Miao answers: "Sure, in M3." MiniMax's own account later amplified that same exchange in the MiniMax repost, which makes the M3 multimodal plan look intentional rather than a stray reply. One speculation thread also claims M3 will be a "big 1T model," but that size figure is not confirmed by MiniMax in the provided evidence.

Why engineers care about M2.7

The interesting engineering detail is that MiniMax is positioning M2.7 as more than a generic open release. In benchmark repost, the company says it performed "extensive optimizations" and "established a dedicated benchmark" for this workload. The attached MM-ClawBench chart shows M2.7 at 62.7 versus M2.5 at 57.6, slightly above Gemini 3.1 Pro at 61.8, below Claude Sonnet 4.6 at 64.2, and behind Opus 4.6 and GPT-5.4.

The OpenClaw mention matters because it points to agentic, tool-using evaluation rather than pure chat vibes. The screenshot in the open-weights post says the newest revision is "noticeably better on OpenClaw," suggesting the team is still optimizing around autonomous coding or browser-style task completion before release. Community reaction is already translating that into deployment terms: the home-run reaction calls M2.7 "the best model you can realistically run at home" and says the author already runs M2.5 locally. Meanwhile, the HF org page shows MiniMax has a public Hugging Face presence with prior model and research activity, a practical signpost for where engineers may want to watch for the actual drop.
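For builders who want to catch the drop early, a minimal sketch of polling the public Hugging Face model listing is below. Note the assumptions: the org name "MiniMaxAI" and the "M2.7" naming are guesses, since MiniMax has not announced the actual repo id, and the final name may differ.

```python
# Hypothetical sketch: watch a Hugging Face org listing for a new model drop.
# Assumptions (not confirmed by MiniMax): org name "MiniMaxAI" and a repo id
# containing "M2.7". Uses only the public HF REST API and the stdlib.
import json
import urllib.request


def find_drop(model_ids, needle="M2.7"):
    """Return model ids whose name mentions the target release."""
    return [m for m in model_ids if needle.lower() in m.lower()]


def list_org_models(org="MiniMaxAI"):
    """Fetch public model ids for an org via the HF REST API."""
    url = f"https://huggingface.co/api/models?author={org}"
    with urllib.request.urlopen(url) as resp:
        return [m["id"] for m in json.load(resp)]


if __name__ == "__main__":
    # Network call; run this in a cron job or loop to get notified of the drop.
    print(find_drop(list_org_models()))
```

The same filter works offline against any cached listing, so the match logic can be tested without hitting the API.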
