Integrations

Connect EmpirioLabs to coding agents, IDEs, CLIs, chat frontends, and OpenAI-compatible tools

Most tools only need three values: an API key, a base URL, and a model ID. EmpirioLabs exposes OpenAI-compatible chat completions plus an Anthropic-style Messages endpoint, so setup is usually a provider dropdown and one URL change.

Fastest setup

Run one setup command to create the local config files you select. Flags add user-level tools and a smoke test.

OpenAI-compatible tools

Use https://api.empiriolabs.ai/v1 as the base URL and your EmpirioLabs key as the bearer token.

Claude Code

Claude Code expects the Anthropic Messages shape. Use https://api.empiriolabs.ai without /v1 and set the custom model option.

Live model catalog

Fetch GET /v1/models?available=true before hard-coding model IDs into team templates or shared scripts.

Fastest setup

Use this when you want a working setup without hand-editing config files. The command fetches the helper script from the docs site, runs it with Python, and writes only the scopes you choose. For tools that support local persisted config, the helper stores the key in gitignored project files so reopened app sessions do not depend on a shell export.

1. Run the setup command

This default writes project-local files for OpenCode, Aider, Qwen Code, and OpenHands, including gitignored persistent credentials for tools that can read them locally.

$export EMPIRIOLABS_API_KEY="sk-empiriolabs-your_key_here"
$script="${TMPDIR:-/tmp}/empirio-integrations-setup.py"
$curl -fsSL "https://docs.empiriolabs.ai/integrations/setup.py" -o "$script"
$python3 "$script" \
> --scope project \
> --tools opencode,aider,qwen-code,openhands \
> --model qwen3-max

2. Choose scope and tools

Change the last flags when you want a different setup:

| Goal | Flags |
| --- | --- |
| Project files only | --scope project --tools opencode,aider,qwen-code,openhands |
| User-level tools too | --scope all --tools all |
| One tool only | --tools opencode, --tools claude-code, or any tool from the table below |
| Pick the default model | --model <model-id> |
| Only register the default model (skip auto-populate) | --no-populate-models |
| Verify key and credits | Add --smoke-test |
| Print supported tool names | --list-tools |

The --tools flag takes exact, comma-separated values. Do not include spaces unless your shell keeps the whole value quoted.

| --tools value | Scope | Writes |
| --- | --- | --- |
| opencode | Project | opencode.json plus .empiriolabs-api-key |
| aider | Project | .aider.empiriolabs.yml |
| qwen-code | Project or user | .qwen/settings.json or ~/.qwen/settings.json |
| openhands | Project | openhands.empiriolabs.toml |
| continue | User | ~/.continue/config.yaml or sidecar config |
| claude-code | User | ~/.claude/settings.json env values |
| codex | User | Marked block in ~/.codex/config.toml |
| hermes | User | ~/.hermes/empiriolabs.config.yaml sidecar |
| goose | User | goose custom provider JSON |
| openclaw | User | ~/.openclaw/empiriolabs.example.json5 sidecar |
| all | Chosen by --scope | Every helper-supported tool for that run |
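
The spacing rule for --tools comes from ordinary shell word-splitting: unquoted, a space ends the argument and the helper sees a truncated list. A quick demonstration using set -- in place of the helper:

```shell
# Unquoted, the shell splits on the space: the program receives 3 arguments,
# and the tool list is cut off at "opencode,".
set -- --tools opencode, aider
printf 'unquoted argc: %s\n' "$#"   # prints: unquoted argc: 3

# Quoted, the whole list arrives as one value: 2 arguments.
set -- --tools "opencode, aider"
printf 'quoted argc: %s\n' "$#"     # prints: quoted argc: 2
```
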

3. Manual download fallback

Use this only if your shell blocks remote fetches.

Download empirio-integrations-setup.py

Then run the helper wherever you want project-local config files:

$export EMPIRIOLABS_API_KEY="sk-empiriolabs-your_key_here"
$python3 empirio-integrations-setup.py --scope project --tools opencode,aider,qwen-code,openhands --model qwen3-max

The helper creates timestamped backups before changing existing files, but it can write API keys into local .env, .empiriolabs-api-key, .qwen/settings.json, openhands.empiriolabs.toml, and some user config files. Review generated files before committing anything. The helper does not install the tools themselves.
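
Before committing, you can confirm in a scratch repository that the secret-bearing filenames named above are actually ignored. The filenames come from this page; the throwaway repo is only for the check:

```shell
# Create a throwaway repo and verify the helper's secret files are gitignored.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
printf '%s\n' '.env' '.empiriolabs-api-key' > .gitignore
touch .env .empiriolabs-api-key
# check-ignore prints only the paths that ARE ignored; both should appear.
git check-ignore .env .empiriolabs-api-key
```

Run the same git check-ignore against the real files the helper wrote before your first commit.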

What the helper writes

By default the helper fetches the live /v1/models?available=true catalog and writes every chat-capable model (text, multimodal, code, reasoning) into tools whose configs natively support a multi-model picker (OpenCode, Continue, Qwen Code, goose). The --model flag selects the default within that populated set. Pass --no-populate-models if you want only the default model registered.

Tool or file--tools valueScopeWhat gets created
Shared envAlways for project scopesProject.env, .empiriolabs-api-key, empirio-env.sh, empirio-env.ps1, and .gitignore entries for local secrets
OpenCodeopencodeProjectopencode.json provider named empiriolabs, populated with every chat-capable model, reasoning-capable models marked with reasoning: true, and pointed at .empiriolabs-api-key for persistence
AideraiderProject.aider.empiriolabs.yml (single default model, switch via --model)
Qwen Codeqwen-codeProject or user.qwen/settings.json or ~/.qwen/settings.json with one provider entry per chat model, selected OpenAI auth, and fallback env values
OpenHandsopenhandsProjectopenhands.empiriolabs.toml plus LLM_* values in the generated env files
ContinuecontinueUser~/.continue/config.yaml or ~/.continue/empiriolabs.config.yaml models: array fully populated, plus ~/.continue/.env
Claude Codeclaude-codeUser~/.claude/settings.json env values
Codex CLIcodexUserMarked block in ~/.codex/config.toml
Hermes AgenthermesUser~/.hermes/empiriolabs.config.yaml sidecar and ~/.hermes/.env
goosegooseUserCustom provider JSON with every chat-capable model in the models[] array
OpenClawopenclawUser~/.openclaw/empiriolabs.example.json5 sidecar

The helper validates tool names and exits with an error for unknown values. If a selected tool does not match the chosen scope, the helper prints a note: for example, --scope project --tools codex does not write Codex config, because Codex config is user-level.

Integrations not listed in this table are manual UI or app-level setups. Use the connection values below for Cline, Zed, Kilo Code, Roo Code, Cursor-style fields, chat frontends, and hosted web UIs.

Connection values

| Setting | Use this value |
| --- | --- |
| OpenAI-compatible base URL | https://api.empiriolabs.ai/v1 |
| Anthropic / Claude Code base URL | https://api.empiriolabs.ai |
| API key | Your dashboard key, usually sk-empiriolabs-... |
| Authorization header | Authorization: Bearer $EMPIRIOLABS_API_KEY |
| First model to test | qwen3-max |
| Live model catalog | GET https://api.empiriolabs.ai/v1/models?available=true |

For OpenAI-compatible tools, the base URL should usually end at /v1. Do not paste the full /v1/chat/completions path into a base URL field unless the tool explicitly asks for a full endpoint URL.
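
The doubled-path failure is easy to see if you trace what a client does with the base URL. A minimal sketch (the helper function is illustrative, not part of any SDK):

```python
def chat_completions_url(base_url: str) -> str:
    """Mimic what an OpenAI-compatible client does: append the chat path itself."""
    return base_url.rstrip("/") + "/chat/completions"

# Correct: the base URL field stops at /v1.
print(chat_completions_url("https://api.empiriolabs.ai/v1"))
# → https://api.empiriolabs.ai/v1/chat/completions

# Wrong: pasting the full endpoint produces a doubled path the API will 404 on.
print(chat_completions_url("https://api.empiriolabs.ai/v1/chat/completions"))
# → https://api.empiriolabs.ai/v1/chat/completions/chat/completions
```
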

Thinking and reasoning controls

EmpirioLabs exposes reasoning controls only on models that list them in their model page or machine-readable schema. Do not send these fields to every model by default.

For OpenAI-compatible Chat Completions and Responses, supported controls can include enable_thinking, thinking_budget, or reasoning_effort, depending on the model:

{
  "model": "qwen3-max-thinking",
  "messages": [
    { "role": "user", "content": "Answer briefly." }
  ],
  "enable_thinking": false
}

For the Anthropic-style Messages endpoint, use Anthropic-style thinking when the model supports thinking:

{
  "model": "qwen3-max-thinking",
  "messages": [
    { "role": "user", "content": "Work through this carefully." }
  ],
  "thinking": {
    "type": "enabled",
    "budget_tokens": 1024
  }
}

Tool support varies:

| Tool | How to manage reasoning |
| --- | --- |
| OpenCode | The helper marks reasoning-capable models with reasoning: true in opencode.json. When OpenCode sends a variant such as low, medium, high, or max, EmpirioLabs normalizes it to the selected model’s supported reasoning fields. none means no override, so the model default still applies. Some model families may show reasoning capability while OpenCode sends no variant field. |
| Aider | Use --reasoning-effort low, --reasoning-effort medium, --reasoning-effort high, /reasoning-effort low, --thinking-tokens 0, or /thinking-tokens 0 when the selected model supports that control. |
| Qwen Code | Provider entries can carry model metadata and generation settings. Keep the helper defaults unless you want to pin a team-wide reasoning mode for one model. |
| Codex CLI | Use model_reasoning_effort or plan_mode_reasoning_effort in ~/.codex/config.toml for reasoning-capable models. The helper only wires the EmpirioLabs provider and leaves the effort unset. |
| Chat frontends | Use custom parameters or advanced model settings only when the app exposes them. If it does not, choose a model whose default thinking behavior matches the workflow. |
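
If you script requests yourself, the same guard applies: attach reasoning fields only when the chosen model supports them. A sketch, with a hypothetical capability set standing in for what GET /v1/models/{model_id} would report:

```python
# Hypothetical capability data; in practice derive it from GET /v1/models/{model_id}.
REASONING_MODELS = {"qwen3-max-thinking"}

def build_payload(model, prompt, enable_thinking=None):
    """Build a chat completions payload, adding enable_thinking only when valid."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    # Non-reasoning models keep a clean payload, per the rule above.
    if enable_thinking is not None and model in REASONING_MODELS:
        payload["enable_thinking"] = enable_thinking
    return payload

print(build_payload("qwen3-max", "hi", enable_thinking=False))          # no reasoning field
print(build_payload("qwen3-max-thinking", "hi", enable_thinking=False))  # field included
```
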

Smoke test

Run this before configuring a larger tool. If this works, your key, credits, network, and model ID are good.

$export EMPIRIOLABS_API_KEY="sk-empiriolabs-your_key_here"
$
$curl "https://api.empiriolabs.ai/v1/chat/completions" \
> -H "Authorization: Bearer $EMPIRIOLABS_API_KEY" \
> -H "Content-Type: application/json" \
> -d '{
> "model": "qwen3-max",
> "messages": [
> { "role": "user", "content": "Reply with one sentence." }
> ]
> }'

To list current model IDs:

$curl "https://api.empiriolabs.ai/v1/models?available=true" \
> -H "Authorization: Bearer $EMPIRIOLABS_API_KEY"
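
Before baking a model ID into a shared script, check it against the catalog that command returns. A sketch using a stand-in response body; the top-level data list of id objects follows the OpenAI models convention, and the concrete IDs are just examples from this page:

```python
# Stand-in for the JSON body of GET /v1/models?available=true.
catalog = {
    "data": [
        {"id": "qwen3-max"},
        {"id": "qwen3-max-thinking"},
        {"id": "glm-5-1"},
    ]
}

model_ids = [m["id"] for m in catalog["data"]]

# Fail fast if a model hard-coded in a team template has left the catalog.
template_model = "qwen3-max"
if template_model not in model_ids:
    raise SystemExit(f"{template_model} not in live catalog; update the template")

print(model_ids)
# → ['qwen3-max', 'qwen3-max-thinking', 'glm-5-1']
```
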

Chat and roleplay frontends

Use this section for BYOK chat apps, roleplay tools, and shared web UIs. These tools normally do not need the helper script. Use your EmpirioLabs API key, pick a chat model such as qwen3-max, and keep secrets in the app’s local settings or environment variables.

For roleplay chats, we generally recommend starting with EmpirioLabs Native Inference models, then falling back to models or variants listed in the China region when native coverage does not fit your use case. Check the Models page or Pricing page before choosing a model; each model lists its served location there. For models with variants, check the variant entries too, since a variant can be served from a different region.

| Tool | Endpoint field | Value |
| --- | --- | --- |
| SillyTavern | Custom Endpoint / Base URL | https://api.empiriolabs.ai/v1 |
| PersonaLLM | Custom text engine base URL | https://api.empiriolabs.ai/v1 |
| Janitor AI | Proxy URL | https://api.empiriolabs.ai/v1/chat/completions |
| TypingMind custom model | Endpoint API | https://api.empiriolabs.ai/v1/chat/completions |
| Open WebUI | OpenAI connection URL | https://api.empiriolabs.ai/v1 |
| LibreChat | baseURL | https://api.empiriolabs.ai/v1 |
| LobeChat self-host | OPENAI_PROXY_URL | https://api.empiriolabs.ai/v1 |

SillyTavern

SillyTavern is a local roleplay and character chat frontend. EmpirioLabs works through its custom OpenAI-compatible Chat Completion source.

  1. Open SillyTavern and click the plug icon to open API Connections.
  2. Set API type to Chat Completion.
  3. Set Chat Completion Source to Custom (OpenAI-compatible).
  4. Set Custom Endpoint / Base URL to https://api.empiriolabs.ai/v1.
  5. Paste your EmpirioLabs API key into the custom API key field.
  6. Click Connect, then choose a model from the dropdown or type a model ID such as qwen3-max.

Do not paste https://api.empiriolabs.ai/v1/chat/completions into SillyTavern’s base URL field. SillyTavern appends the chat completions path itself.

If the model dropdown is empty but your smoke test works, type the model ID manually. If a roleplay sampler causes a request error, remove non-standard extra parameters and retry with standard chat settings first.

PersonaLLM

PersonaLLM is an iOS roleplay and character chat app with bring-your-own-key provider settings. EmpirioLabs works through PersonaLLM’s custom text engine.

  1. From the home screen, tap the three-dot menu in the top left.
  2. Open Settings.
  3. Open Text Engine.
  4. Choose Custom.
  5. Set the base URL to https://api.empiriolabs.ai/v1.
  6. Paste your EmpirioLabs API key.
  7. In the models field, tap the button on the right to fetch the live model list.
  8. Choose a chat model such as qwen3-max or glm-5-1, then save the text engine settings.

PersonaLLM’s thinking toggle sends a reasoning setting when enabled and omits reasoning controls when disabled. EmpirioLabs treats the omitted PersonaLLM field as thinking off only for models whose default is thinking on. This compatibility behavior is scoped to PersonaLLM requests; other tools should send explicit reasoning parameters when they need to override a model default.

Janitor AI

Janitor AI can call EmpirioLabs through its Proxy configuration. Use this path when you want to keep using Janitor’s chat UI while bringing your own EmpirioLabs key.

  1. Open a Janitor AI chat.
  2. Click using janitor or the menu button near the top of the chat.
  3. Open API Settings.
  4. Select the Proxy tab.
  5. In Proxy Configurations, click + New.
  6. Set Name to EmpirioLabs.
  7. Set Model to qwen3-max, or another model ID from GET /v1/models?available=true.
  8. Set Proxy URL to https://api.empiriolabs.ai/v1/chat/completions.
  9. Paste your EmpirioLabs API key into API Key.
  10. Leave Custom Prompt blank unless you already use one for that character or chat.
  11. Click Add, save the settings, then refresh the Janitor AI page before sending the next message.

If Janitor AI offers a + /chat/completions helper next to the Proxy URL field, start with https://api.empiriolabs.ai/v1 and let the helper append the path. The saved URL should end in /v1/chat/completions.

TypingMind

TypingMind supports custom chat models where you provide an endpoint, model ID, and optional headers.

  1. Open Models from the left sidebar.
  2. Open Model Settings, then click Add Custom Models.
  3. Use API type OpenAI Chat Completions API if the form asks.
  4. Set Endpoint API to https://api.empiriolabs.ai/v1/chat/completions.
  5. Set Model ID to qwen3-max or another available model.
  6. Add header Authorization: Bearer sk-empiriolabs-your_key_here, or paste the key into TypingMind’s API key field if the form provides one.
  7. Click Test, then Add Model.

TypingMind custom model setup is the main exception on this page: it usually asks for the full chat completions endpoint, not just the /v1 base URL.

Open WebUI

Open WebUI can connect to OpenAI-compatible providers from the admin connection screen.

  1. Open Admin Settings.
  2. Go to Connections and add a new OpenAI connection.
  3. Set URL to https://api.empiriolabs.ai/v1.
  4. Paste your EmpirioLabs API key.
  5. If model discovery is slow or too broad, add model IDs such as qwen3-max to the Model IDs filter.
  6. Save, then choose the EmpirioLabs model in chat.

For server launches, set:

$OPENAI_API_BASE_URL=https://api.empiriolabs.ai/v1
$OPENAI_API_KEY=sk-empiriolabs-your_key_here

LibreChat

LibreChat supports custom OpenAI-compatible endpoints through librechat.yaml. Use an environment variable for one shared deployment key, or user_provided if each user should bring their own key in the UI.

librechat.yaml
version: 1.3.5
cache: true
endpoints:
  custom:
    - name: "EmpirioLabs"
      apiKey: "${EMPIRIOLABS_API_KEY}"
      baseURL: "https://api.empiriolabs.ai/v1"
      models:
        default: ["qwen3-max"]
        fetch: true
      titleConvo: true
      titleModel: "qwen3-max"
      modelDisplayLabel: "EmpirioLabs"
.env
$EMPIRIOLABS_API_KEY=sk-empiriolabs-your_key_here

For BYOK multi-user deployments, change apiKey to:

apiKey: "user_provided"

Restart LibreChat after changing librechat.yaml.

LobeChat

For self-hosted LobeChat, use the OpenAI provider with an EmpirioLabs proxy URL:

.env
$OPENAI_API_KEY=sk-empiriolabs-your_key_here
$OPENAI_PROXY_URL=https://api.empiriolabs.ai/v1
$OPENAI_MODEL_LIST=+qwen3-max,+glm-5-1,+deepseek-v4-pro:variant2

Then restart LobeChat and choose an enabled EmpirioLabs model in the model selector.

OpenCode

The helper can write this automatically:

$python3 empirio-integrations-setup.py --tools opencode --model qwen3-max

Manual setup:

opencode.json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "empiriolabs": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "EmpirioLabs",
      "options": {
        "baseURL": "https://api.empiriolabs.ai/v1",
        "apiKey": "{file:.empiriolabs-api-key}"
      },
      "models": {
        "qwen3-max": {
          "name": "EmpirioLabs Qwen3-Max"
        },
        "qwen3-max-thinking": {
          "name": "EmpirioLabs Qwen3-Max-Thinking",
          "reasoning": true
        }
      }
    }
  }
}
$printf '%s' 'sk-empiriolabs-your_key_here' > .empiriolabs-api-key
$printf '\n.empiriolabs-api-key\n' >> .gitignore
$opencode

In OpenCode, run /models and choose the EmpirioLabs provider. The file-backed key keeps working after you close and reopen OpenCode.

Claude Code

The helper can write the user-level settings automatically:

$python3 empirio-integrations-setup.py --scope user --tools claude-code --model qwen3-max

Claude Code is not an OpenAI-chat-completions client. It talks to LLM gateways through the Anthropic Messages shape, which EmpirioLabs exposes at /v1/messages.

$export EMPIRIOLABS_API_KEY="sk-empiriolabs-your_key_here"
$
$# Claude Code sends ANTHROPIC_AUTH_TOKEN as a Bearer token.
$export ANTHROPIC_AUTH_TOKEN="$EMPIRIOLABS_API_KEY"
$
$# Claude Code appends /v1/messages itself, so do not include /v1 here.
$export ANTHROPIC_BASE_URL="https://api.empiriolabs.ai"
$
$export ANTHROPIC_CUSTOM_MODEL_OPTION="qwen3-max"
$export ANTHROPIC_CUSTOM_MODEL_OPTION_NAME="EmpirioLabs Qwen3-Max"
$export ANTHROPIC_MODEL="qwen3-max"
$
$claude

Persistent user-level setup:

~/.claude/settings.json
{
  "env": {
    "ANTHROPIC_AUTH_TOKEN": "sk-empiriolabs-your_key_here",
    "ANTHROPIC_BASE_URL": "https://api.empiriolabs.ai",
    "ANTHROPIC_CUSTOM_MODEL_OPTION": "qwen3-max",
    "ANTHROPIC_CUSTOM_MODEL_OPTION_NAME": "EmpirioLabs Qwen3-Max",
    "ANTHROPIC_MODEL": "qwen3-max"
  }
}

Use a model whose page lists POST /v1/messages under supported endpoints. If Claude Code reports a gateway-specific token counting or model discovery error, run it through an Anthropic-format gateway or adapter that implements Claude Code’s full gateway contract, then point that gateway at EmpirioLabs.
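
The most common misconfiguration here is leaving /v1 in ANTHROPIC_BASE_URL. A quick sketch of the URL Claude Code ends up calling (the join logic is illustrative, not Claude Code's actual code):

```python
def messages_url(anthropic_base_url: str) -> str:
    """Claude Code appends /v1/messages to whatever base URL it is given."""
    return anthropic_base_url.rstrip("/") + "/v1/messages"

print(messages_url("https://api.empiriolabs.ai"))     # correct
# → https://api.empiriolabs.ai/v1/messages
print(messages_url("https://api.empiriolabs.ai/v1"))  # wrong: doubled /v1
# → https://api.empiriolabs.ai/v1/v1/messages
```
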

Cline

In the Cline extension UI:

  1. Open Cline settings.
  2. Set API Provider to OpenAI Compatible.
  3. Set Base URL to https://api.empiriolabs.ai/v1.
  4. Paste your EmpirioLabs API key.
  5. Enter a model ID such as qwen3-max.
  6. Click Verify, then start a new task.

For Cline CLI:

$npm install -g cline
$
$export EMPIRIOLABS_API_KEY="sk-empiriolabs-your_key_here"
$
$cline auth \
> -p openai \
> -k "$EMPIRIOLABS_API_KEY" \
> -b "https://api.empiriolabs.ai/v1" \
> -m "qwen3-max"
$
$cline "Inspect this repository and suggest the safest next refactor."

Qwen Code

The helper can write project or user settings automatically:

$python3 empirio-integrations-setup.py --scope project --tools qwen-code --model qwen3-max

Launch Qwen Code directly with EmpirioLabs as the OpenAI-compatible provider:

$export EMPIRIOLABS_API_KEY="sk-empiriolabs-your_key_here"
$
$qwen \
> --auth-type openai \
> --openaiApiKey "$EMPIRIOLABS_API_KEY" \
> --openaiBaseUrl "https://api.empiriolabs.ai/v1" \
> --model "qwen3-max"

For persistent project setup:

.qwen/settings.json
{
  "model": {
    "name": "qwen3-max"
  },
  "security": {
    "auth": {
      "selectedType": "openai"
    }
  },
  "env": {
    "EMPIRIOLABS_API_KEY": "sk-empiriolabs-your_key_here"
  },
  "modelProviders": {
    "openai": [
      {
        "id": "qwen3-max",
        "name": "EmpirioLabs Qwen3-Max",
        "envKey": "EMPIRIOLABS_API_KEY",
        "baseUrl": "https://api.empiriolabs.ai/v1"
      }
    ]
  }
}

Add .qwen/settings.json to .gitignore if you store the key there.

Codex CLI

The helper can write the user-level provider block automatically:

$python3 empirio-integrations-setup.py --scope user --tools codex --model qwen3-max

Add EmpirioLabs as a custom model provider in ~/.codex/config.toml:

~/.codex/config.toml
model = "qwen3-max"
model_provider = "empiriolabs"

[model_providers.empiriolabs]
name = "EmpirioLabs"
base_url = "https://api.empiriolabs.ai/v1"
env_key = "EMPIRIOLABS_API_KEY"
wire_api = "responses"

Then launch Codex with your key in the environment:

$export EMPIRIOLABS_API_KEY="sk-empiriolabs-your_key_here"
$codex

Use this path with EmpirioLabs models that support POST /v1/responses.

Aider

The helper can write a project-local Aider config automatically:

$python3 empirio-integrations-setup.py --tools aider --model qwen3-max

Aider uses the OpenAI-compatible environment variables. Prefix the model with openai/.

$export OPENAI_API_BASE="https://api.empiriolabs.ai/v1"
$export OPENAI_API_KEY="sk-empiriolabs-your_key_here"
$
$aider --model openai/qwen3-max
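
If you prefer a file over exports, the helper's .aider.empiriolabs.yml can be reproduced by hand. One plausible shape, assuming aider's YAML config keys mirror its command-line flags (verify the exact keys against your aider version's --help output):

```yaml
# .aider.empiriolabs.yml (hypothetical contents; key names are assumptions)
model: openai/qwen3-max
openai-api-base: https://api.empiriolabs.ai/v1
openai-api-key: sk-empiriolabs-your_key_here
```

Launch with aider --config .aider.empiriolabs.yml, and gitignore the file if the key lives in it.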

Continue

The helper can write the user-level Continue config automatically:

$python3 empirio-integrations-setup.py --scope user --tools continue --model qwen3-max

Continue’s OpenAI provider can target any OpenAI-compatible endpoint by setting apiBase. Put secrets in .env or Continue’s secret store rather than committing them into config.yaml.

~/.continue/config.yaml
name: EmpirioLabs
version: 0.0.1
schema: v1

models:
  - name: EmpirioLabs Qwen3-Max
    provider: openai
    model: qwen3-max
    apiBase: https://api.empiriolabs.ai/v1
    apiKey: ${{ secrets.EMPIRIOLABS_API_KEY }}
    capabilities:
      - tool_use

Add the secret in one of Continue’s supported .env locations:

~/.continue/.env
$EMPIRIOLABS_API_KEY=sk-empiriolabs-your_key_here

OpenHands

The helper can write a project-local OpenHands config automatically:

$python3 empirio-integrations-setup.py --tools openhands --model qwen3-max

OpenHands exposes provider settings in the UI and passes them through to its LLM layer.

| Field | Value |
| --- | --- |
| LLM Provider | OpenAI |
| LLM Model | openai/qwen3-max |
| API Key | Your EmpirioLabs key |
| Base URL | https://api.empiriolabs.ai/v1 |

For environment-based launches:

$export LLM_MODEL="openai/qwen3-max"
$export LLM_BASE_URL="https://api.empiriolabs.ai/v1"
$export LLM_API_KEY="sk-empiriolabs-your_key_here"

For persistent project setup:

openhands.empiriolabs.toml
[llm]
model = "openai/qwen3-max"
api_key = "sk-empiriolabs-your_key_here"
base_url = "https://api.empiriolabs.ai/v1"

Run OpenHands with:

$openhands --config-file openhands.empiriolabs.toml

Add openhands.empiriolabs.toml to .gitignore if you store the key there.

Hermes Agent

The helper can write the user-level Hermes sidecar automatically:

$python3 empirio-integrations-setup.py --scope user --tools hermes --model qwen3-max

Hermes has an interactive model wizard. Choose Custom endpoint, then enter:

| Prompt | Value |
| --- | --- |
| API base URL | https://api.empiriolabs.ai/v1 |
| API key | Your EmpirioLabs key |
| Model name | qwen3-max |

Manual config:

~/.hermes/config.yaml
custom_providers:
  - name: empiriolabs
    base_url: https://api.empiriolabs.ai/v1
    key_env: EMPIRIOLABS_API_KEY

model:
  provider: custom:empiriolabs
  default: qwen3-max
~/.hermes/.env
$EMPIRIOLABS_API_KEY=sk-empiriolabs-your_key_here

OpenClaw

The helper can write a user-level OpenClaw sidecar automatically:

$python3 empirio-integrations-setup.py --scope user --tools openclaw --model qwen3-max

The safest setup is the OpenClaw wizard:

$openclaw configure --section model

Choose a custom or OpenAI-compatible provider and use:

| Field | Value |
| --- | --- |
| Provider ID | empiriolabs |
| API adapter | openai-completions |
| Base URL | https://api.empiriolabs.ai/v1 |
| API key | SecretRef to EMPIRIOLABS_API_KEY, or your key for a local-only test |
| Model | qwen3-max |

For manual JSON5 config, use this as a sidecar or merge it into OpenClaw’s config:

~/.openclaw/empiriolabs.example.json5
{
  secrets: {
    providers: {
      default: { source: "env" }
    },
    defaults: {
      env: "default"
    }
  },
  models: {
    mode: "merge",
    providers: {
      empiriolabs: {
        baseUrl: "https://api.empiriolabs.ai/v1",
        apiKey: { source: "env", provider: "default", id: "EMPIRIOLABS_API_KEY" },
        authHeader: true,
        api: "openai-completions",
        models: [
          {
            id: "qwen3-max",
            name: "EmpirioLabs Qwen3-Max",
            input: ["text"],
            contextWindow: 256000
          }
        ]
      }
    }
  },
  agents: {
    defaults: {
      model: {
        primary: "empiriolabs/qwen3-max"
      }
    }
  }
}

Run openclaw config validate after manual edits.

goose

The helper can write the user-level goose custom provider automatically:

$python3 empirio-integrations-setup.py --scope user --tools goose --model qwen3-max

goose supports custom OpenAI-compatible providers. The helper writes this as empiriolabs.json in the goose custom provider directory.

empiriolabs.json
{
  "name": "empiriolabs",
  "engine": "openai",
  "display_name": "EmpirioLabs",
  "description": "EmpirioLabs OpenAI-compatible API",
  "api_key_env": "EMPIRIOLABS_API_KEY",
  "base_url": "https://api.empiriolabs.ai/v1/chat/completions",
  "models": [
    {
      "name": "qwen3-max",
      "context_limit": 256000
    }
  ],
  "supports_streaming": true,
  "requires_auth": true
}
$export EMPIRIOLABS_API_KEY="sk-empiriolabs-your_key_here"
$goose session start --provider empiriolabs

Zed

Zed supports OpenAI-compatible providers in the Agent Panel. Use the UI’s Add Provider flow, or edit settings:

Zed settings.json
{
  "language_models": {
    "openai_compatible": {
      "EmpirioLabs": {
        "api_url": "https://api.empiriolabs.ai/v1",
        "available_models": [
          {
            "name": "qwen3-max",
            "display_name": "EmpirioLabs Qwen3-Max",
            "max_tokens": 256000,
            "capabilities": {
              "tools": true,
              "images": false,
              "parallel_tool_calls": false,
              "prompt_cache_key": false
            }
          }
        ]
      }
    }
  }
}

Add the API key through the Agent Panel so Zed stores it in the OS credential store.

Kilo Code, Roo Code, Cursor, and similar IDEs

Use this table anywhere a tool exposes OpenAI Compatible, Custom OpenAI, or Override OpenAI Base URL.

| Field | Value |
| --- | --- |
| Provider | OpenAI Compatible |
| Base URL | https://api.empiriolabs.ai/v1 |
| API key | Your EmpirioLabs key |
| Model | qwen3-max or another available model ID |

Kilo Code and Roo-style VS Code extensions normally expose this as an API configuration profile. Roo Code’s public docs and product notices indicate a shutdown/archive path on May 15, 2026, so prefer Cline or Kilo Code for new team-wide templates unless your team already depends on Roo.

Cursor’s custom API key behavior depends on the version and feature surface. If your Cursor build only accepts provider API keys and does not expose a custom base URL for the feature you want, it cannot be pointed directly at EmpirioLabs for that feature.

Troubleshooting

| Symptom | Fix |
| --- | --- |
| 401 Unauthorized | Check the key, make sure it starts with sk-empiriolabs-, and verify the tool is sending it as a bearer token or x-api-key. |
| 402 Payment Required | Add credits in the dashboard Billing page. |
| 404 or model_not_found | Use GET /v1/models?available=true and copy the exact id. |
| Tool says the endpoint is invalid | Use https://api.empiriolabs.ai/v1 as the base URL, not the full /chat/completions URL. |
| Agent tool calls are weak or ignored | Pick a model with tool/function-calling support and check GET /v1/models/{model_id} for supported parameters. |
| Claude Code does not show the model | Set ANTHROPIC_CUSTOM_MODEL_OPTION and ANTHROPIC_MODEL to the EmpirioLabs model ID. |
| Streaming fails in a client | Retry with streaming disabled, then check the model page for streaming support. |

Keep agents grounded

When an AI coding assistant is implementing an EmpirioLabs integration for you, give it the machine-readable docs bundle first:

Tell the agent to use https://docs.empiriolabs.ai/ai-agent-api-reference-context.md as the API reference, https://docs.empiriolabs.ai/ai-agent-docs-context.md for model and pricing details, and GET https://api.empiriolabs.ai/v1/models/{model_id} for live model metadata.

That prevents the agent from guessing endpoint shapes, stale model IDs, or parameter names.