MiMo-V2-Flash

Provider: Xiaomi
Category: Text Generation
Endpoint: POST /v1/chat/completions
Context window: 256K
Served from:

Lightweight, high-speed reasoning model with hybrid attention and multi-token prediction for low-cost inference and strong benchmark scores.

At a glance

| Field | Value |
|---|---|
| Model id | mimo-v2-flash |
| Input modalities | text, image |
| Output modalities | text |
| Context window | 256K |
| Region | |
| Features | vision |
| New | No |
| Native inference | No |

Pricing

| Charge | Spec | Rate |
|---|---|---|
| Input | per 1M tokens | $0.10 |
| Output | per 1M tokens | $0.30 |
| Web Search | per call | $0.015 |
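As a rough illustration of the rates above, a per-request cost estimate can be computed client-side. This is a minimal sketch; the `estimate_cost` helper is a hypothetical convenience, not part of the API:

```python
# Rates taken from the pricing table above (USD).
INPUT_PER_M = 0.10          # per 1M input tokens
OUTPUT_PER_M = 0.30         # per 1M output tokens
WEB_SEARCH_PER_CALL = 0.015 # per web search call

def estimate_cost(input_tokens: int, output_tokens: int, web_searches: int = 0) -> float:
    """Estimate the cost of one request in USD from token counts and search calls."""
    return (
        input_tokens / 1_000_000 * INPUT_PER_M
        + output_tokens / 1_000_000 * OUTPUT_PER_M
        + web_searches * WEB_SEARCH_PER_CALL
    )

# Example: 100K input tokens, 10K output tokens, one web search.
cost = estimate_cost(100_000, 10_000, web_searches=1)
```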

Example request

```shell
curl https://api.empiriolabs.ai/v1/chat/completions \
  -H "Authorization: Bearer $EMPIRIOLABS_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{"model": "mimo-v2-flash", "messages": [{"role": "user", "content": "Hello"}]}'
```

Note the double quotes around the `Authorization` header: with single quotes the shell would not expand `$EMPIRIOLABS_API_KEY`.
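The same request can be built from Python with only the standard library. This is a minimal sketch of the call shown above; the endpoint, model id, and header names come from this page, while the `build_request` helper is hypothetical:

```python
import json
import os
import urllib.request

API_URL = "https://api.empiriolabs.ai/v1/chat/completions"

def build_request(prompt: str) -> urllib.request.Request:
    """Build a POST request for mimo-v2-flash chat completions."""
    payload = {
        "model": "mimo-v2-flash",
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ.get('EMPIRIOLABS_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("Hello")
# With a valid API key set, send it with:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```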

Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| deep_thinking | boolean | no | true | Enable extended chain-of-thought reasoning |
| web_search_enabled | boolean | no | false | Allow real-time web search |
| web_search_force | boolean | no | false | Force the model to issue at least one web search |
| web_search_max_keyword | number | no | 3 | Max keywords per search query. Range: 1 – 5 |
| web_search_limit | number | no | 5 | Max search results per query. Range: 1 – 10 |
| temperature | number | no | 0.7 | Sampling temperature. Range: 0 – 1 |
| top_p | number | no | 1 | Nucleus sampling cutoff. Range: 0 – 1 |
| max_tokens | number | no | 4096 | Maximum output tokens. Range: 1 – 32768 |
| disable_formatting | boolean | no | false | Return the raw upstream response with no formatting wrappers |
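Several of these parameters carry documented numeric ranges, so a client can reject out-of-range values before spending a request. A minimal sketch, assuming the ranges in the table above; the `validate_params` helper and `PARAM_RANGES` mapping are hypothetical client-side conveniences, not part of the API:

```python
# Numeric ranges copied from the parameters table above.
PARAM_RANGES = {
    "web_search_max_keyword": (1, 5),
    "web_search_limit": (1, 10),
    "temperature": (0.0, 1.0),
    "top_p": (0.0, 1.0),
    "max_tokens": (1, 32768),
}

def validate_params(params: dict) -> dict:
    """Raise ValueError for any numeric parameter outside its documented range."""
    for name, (lo, hi) in PARAM_RANGES.items():
        if name in params and not (lo <= params[name] <= hi):
            raise ValueError(f"{name}={params[name]} outside [{lo}, {hi}]")
    return params

body = validate_params({
    "model": "mimo-v2-flash",
    "deep_thinking": True,
    "web_search_enabled": True,
    "temperature": 0.7,
    "max_tokens": 4096,
})
```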

A live, machine-readable schema is also available at `GET https://api.empiriolabs.ai/v1/models/mimo-v2-flash`.