Mistral-Medium-3.1

Provider: Mistral AI
Category: Text Generation
Endpoint: POST /v1/chat/completions
Context window: 131K
Served from:

Enterprise-grade model with strong reasoning, coding, and STEM performance, supporting hybrid, on-prem, and in-VPC deployments.

At a glance

| Field | Value |
| --- | --- |
| Model id | `mistral-medium-3-1` |
| Input modalities | text, image |
| Output modalities | text |
| Context window | 131K |
| Region | |
| Features | vision |
| New | No |
| Native inference | No |

Pricing

| Charge | Spec | Rate |
| --- | --- | --- |
| Input (text) | per 1M tokens | $0.52 |
| Output (text) | per 1M tokens | $2.60 |

Example request

```shell
curl https://api.empiriolabs.ai/v1/chat/completions \
  -H "Authorization: Bearer $EMPIRIOLABS_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{"model": "mistral-medium-3-1", "messages": [{"role":"user","content":"Hello"}]}'
```

Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| `temperature` | number | no | 0.7 | Sampling temperature (range 0 to 2) |
| `top_p` | number | no | 1 | Nucleus sampling (range 0 to 1) |
| `max_tokens` | number | no | 4096 | Max output tokens (range 1 to 65536) |
| `frequency_penalty` | number | no | 0 | Range -2 to 2 |
| `presence_penalty` | number | no | 0 | Range -2 to 2 |
| `stream` | boolean | no | false | Server-Sent Events streaming |
| `stop` | string | no | | Comma-separated stop sequences |
| `disable_formatting` | boolean | no | false | Return raw upstream response with no formatting wrappers |
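With `stream` set to `true`, the response arrives as Server-Sent Events. A sketch of a client-side parser for the common `data: {...}` / `data: [DONE]` framing; this page only states that SSE is used, so the exact event shape (OpenAI-style `choices[0].delta` chunks) is an assumption:

```python
import json
from typing import Iterable, Iterator

def iter_stream_text(lines: Iterable[str]) -> Iterator[str]:
    """Yield text deltas from SSE lines of a streamed chat completion.

    Assumes OpenAI-style framing: each event is a line 'data: <json>'
    and the stream terminates with 'data: [DONE]'.
    """
    for raw in lines:
        line = raw.strip()
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines between events
        data = line[len("data: "):]
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]

# Example over a canned stream (hypothetical chunk contents):
sample = [
    'data: {"choices":[{"delta":{"role":"assistant"}}]}',
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
print("".join(iter_stream_text(sample)))  # → Hello
```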

A live machine-readable schema is also available at `GET https://api.empiriolabs.ai/v1/models/mistral-medium-3-1`.
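Retrieving that schema can be sketched the same way. Only the URL is taken from this page; whether the endpoint requires authentication is not stated, so no header is assumed:

```python
import json
import urllib.request

SCHEMA_URL = "https://api.empiriolabs.ai/v1/models/mistral-medium-3-1"

def schema_request() -> urllib.request.Request:
    # Plain GET against the documented models endpoint.
    return urllib.request.Request(SCHEMA_URL, method="GET")

# To actually fetch: json.load(urllib.request.urlopen(schema_request()))
print(schema_request().full_url)  # → https://api.empiriolabs.ai/v1/models/mistral-medium-3-1
```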