MiniMax-M2.7-highspeed

Provider: MiniMax
Category: Text Generation
Endpoint: POST /v1/chat/completions
Context window: 256K
Served from:

High-speed M2.7 variant tuned for fast, large-batch inference with strong general-purpose performance at a discounted price point.

At a glance

Field              Value
Model id           minimax-m2-7-highspeed
Input modalities   text
Output modalities  text
Context window     256K
Region
Features           reasoning
New                No
Native inference   No

Pricing

Charge   Spec            Rate
Input    per 1M tokens   0.075 (was 0.10)
Output   per 1M tokens   0.225 (was 0.30)
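Since billing is per 1M tokens, the cost of a request follows directly from the table. A minimal sketch (the `estimate_cost` helper is illustrative, not part of the API; rates are the discounted values above):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_rate: float = 0.075,
                  output_rate: float = 0.225) -> float:
    """Estimate request cost using the per-1M-token rates from the pricing table."""
    return (input_tokens / 1_000_000 * input_rate
            + output_tokens / 1_000_000 * output_rate)

# 10,000 input tokens and 2,000 output tokens:
# 0.01 * 0.075 + 0.002 * 0.225 = 0.00075 + 0.00045 = 0.0012
```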

Example request

curl https://api.empiriolabs.ai/v1/chat/completions \
  -H "Authorization: Bearer $EMPIRIOLABS_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{"model": "minimax-m2-7-highspeed", "messages": [{"role":"user","content":"Hello"}]}'

Parameters

This model accepts the standard chat completion parameters (see the API reference).
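The same request can be issued from Python with only the standard library. A sketch assuming the endpoint shown above; the extra `temperature` and `max_tokens` fields are common chat-completion parameters, but consult the API reference for the exact supported set:

```python
import json
import os
import urllib.request

def build_chat_request(prompt: str,
                       model: str = "minimax-m2-7-highspeed",
                       base_url: str = "https://api.empiriolabs.ai"):
    """Build the POST /v1/chat/completions request shown in the curl example."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # Assumed standard parameters; see the API reference for the full list.
        "temperature": 0.7,
        "max_tokens": 256,
    }).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ.get('EMPIRIOLABS_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending it (requires a valid API key in EMPIRIOLABS_API_KEY):
# with urllib.request.urlopen(build_chat_request("Hello")) as resp:
#     reply = json.load(resp)
```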

A live, machine-readable schema is also available via GET https://api.empiriolabs.ai/v1/models/minimax-m2-7-highspeed.
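Fetching that schema from Python is a plain GET. A sketch assuming the endpoint returns a JSON body (its field names are not documented here):

```python
import json
import urllib.request

def schema_request(model_id: str = "minimax-m2-7-highspeed"):
    """Build the GET request for the model's live schema endpoint."""
    return urllib.request.Request(
        f"https://api.empiriolabs.ai/v1/models/{model_id}"
    )

def fetch_model_schema(model_id: str = "minimax-m2-7-highspeed"):
    """Fetch and parse the schema; the JSON shape is assumed, not specified."""
    with urllib.request.urlopen(schema_request(model_id)) as resp:
        return json.load(resp)
```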