DeepSeek-V4-Flash

Provider: DeepSeek
Category: Text Generation
Endpoint: POST /v1/chat/completions
Context window: 1M
Served from: Germany (Frankfurt)

A lightweight mixture-of-experts (MoE) model with 284B total / 13B active parameters and a native 1M-token context window, tuned for low-latency, cost-effective, high-concurrency use.

At a glance

Model id: deepseek-v4-flash
Input modalities: text, image
Output modalities: text
Context window: 1M tokens
Region: Germany (Frankfurt)
Features: vision, reasoning
New: Yes
Native inference: No

Pricing

Input: $0.14 per 1M tokens
Output: $0.56 per 1M tokens
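The per-request cost follows directly from these per-1M-token rates. A minimal sketch (the helper function and token counts are illustrative, not part of the API):

```python
# Estimate request cost from the published rates ($ per 1M tokens).
INPUT_RATE = 0.14   # $ per 1M input tokens
OUTPUT_RATE = 0.56  # $ per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens / 1_000_000 * INPUT_RATE
            + output_tokens / 1_000_000 * OUTPUT_RATE)

# e.g. a 200k-token prompt with a 2k-token completion:
cost = estimate_cost(200_000, 2_000)
print(f"${cost:.4f}")  # → $0.0291
```

Note that long-context prompts dominate the bill: even at the higher output rate, the 2k completion above costs a fraction of the 200k prompt.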

Example request

curl https://api.empiriolabs.ai/v1/chat/completions \
  -H "Authorization: Bearer $EMPIRIOLABS_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{"model": "deepseek-v4-flash", "messages": [{"role": "user", "content": "Hello"}]}'
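The same request can be issued from Python with only the standard library. This is a sketch of the request shown above; the `build_request` helper is illustrative, and the key is read from the same `EMPIRIOLABS_API_KEY` environment variable:

```python
import json
import os
import urllib.request

API_URL = "https://api.empiriolabs.ai/v1/chat/completions"

def build_request(prompt: str, model: str = "deepseek-v4-flash") -> urllib.request.Request:
    """Build the POST request; actually sending it requires a valid API key."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ.get('EMPIRIOLABS_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("Hello")
# with urllib.request.urlopen(req) as resp:  # network call; needs a real key
#     reply = json.load(resp)
```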

Parameters

temperature (number, optional, default 0.7): Sampling temperature. Range: 0 – 2.
top_p (number, optional, default 1): Nucleus sampling. Range: 0 – 1.
max_tokens (number, optional, default 4096): Max output tokens. Range: 1 – 65536.
frequency_penalty (number, optional, default 0): Frequency penalty. Range: -2 – 2.
presence_penalty (number, optional, default 0): Presence penalty. Range: -2 – 2.
stream (boolean, optional, default false): Server-Sent Events streaming.
stop (string, optional): Comma-separated stop sequences.
disable_formatting (boolean, optional, default false): Return the raw upstream response with no formatting wrappers.
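When `stream` is true, responses arrive as Server-Sent Events. The sketch below assumes the OpenAI-compatible SSE wire format (`data: {...}` lines terminated by `data: [DONE]`), which the `/v1/chat/completions` shape suggests but this page does not explicitly guarantee:

```python
import json

def extract_deltas(sse_lines):
    """Yield text fragments from a stream of SSE lines.

    Assumes OpenAI-style chunks: {"choices": [{"delta": {"content": ...}}]}.
    """
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blank lines and comments between events
        data = line[len("data: "):]
        if data == "[DONE]":
            return  # end-of-stream sentinel
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            yield delta

# Hypothetical stream fragments for illustration:
sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
print("".join(extract_deltas(sample)))  # → Hello
```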

A live, machine-readable schema is also available via GET https://api.empiriolabs.ai/v1/models/deepseek-v4-flash.