Integrations
Connect EmpirioLabs to coding agents, IDEs, CLIs, chat frontends, and OpenAI-compatible tools
Most tools only need three values: an API key, a base URL, and a model ID. EmpirioLabs exposes OpenAI-compatible chat completions plus an Anthropic-style Messages endpoint, so setup is usually a provider dropdown and one URL change.
One setup command creates the local config files you select; flags add user-level tools and an optional smoke test.
Use https://api.empiriolabs.ai/v1 as the base URL and your EmpirioLabs key as the bearer token.
Claude Code expects the Anthropic Messages shape. Use https://api.empiriolabs.ai without /v1 and set the custom model option.
Fetch GET /v1/models?available=true before hard-coding model IDs into team templates or shared scripts.
Codex CLI
Aider
Continue
OpenHands
SillyTavern
LobeChat
Kilo, Roo, Cursor
Fastest setup
Use this when you want a working setup without hand-editing config files. The command fetches the helper script from the docs site, runs it with Python, and writes only the scopes you choose. For tools that support local persisted config, the helper stores the key in gitignored project files so reopened app sessions do not depend on a shell export.
Run the setup command
This default writes project-local files for OpenCode, Aider, Qwen Code, and OpenHands, including gitignored persistent credentials for tools that can read them locally.
The helper creates timestamped backups before changing existing files, but it can write API keys into local .env, .empiriolabs-api-key, .qwen/settings.json, openhands.empiriolabs.toml, and some user config files. Review generated files before committing anything. The helper does not install the tools themselves.
What the helper writes
By default the helper fetches the live /v1/models?available=true catalog and writes every chat-capable model (text, multimodal, code, reasoning) into tools whose configs natively support a multi-model picker (OpenCode, Continue, Qwen Code, goose). The --model flag selects the default within that populated set. Pass --no-populate-models if you want only the default model registered.
The helper validates tool names and exits with an error for unknown values. If a selected tool does not match the chosen scope, the helper prints a note. For example, --scope project --tools codex does not write Codex config, because Codex's config lives at user level.
Integrations not listed in this table are manual UI or app-level setups. Use the connection values below for Cline, Zed, Kilo Code, Roo Code, Cursor-style fields, chat frontends, and hosted web UIs.
Connection values
For OpenAI-compatible tools, the base URL should usually end at /v1. Do not paste the full /v1/chat/completions path into a base URL field unless the tool explicitly asks for a full endpoint URL.
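Most OpenAI-compatible SDKs and tools build the request URL by appending the endpoint path to whatever base URL you supply. A minimal sketch of that joining behavior (plain string handling, not any specific SDK) shows why a base URL that already contains the full path fails:

```python
def chat_completions_url(base_url: str) -> str:
    # Typical OpenAI-compatible clients join the base URL and the
    # endpoint path like this, without checking for duplicates.
    return base_url.rstrip("/") + "/chat/completions"

# Correct: the base URL ends at /v1.
assert chat_completions_url("https://api.empiriolabs.ai/v1") == \
    "https://api.empiriolabs.ai/v1/chat/completions"

# Wrong: pasting the full endpoint doubles the path.
print(chat_completions_url("https://api.empiriolabs.ai/v1/chat/completions"))
# -> https://api.empiriolabs.ai/v1/chat/completions/chat/completions
```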
Thinking and reasoning controls
EmpirioLabs exposes reasoning controls only on models that list them in their model page or machine-readable schema. Do not send these fields to every model by default.
For OpenAI-compatible Chat Completions and Responses, supported controls can include enable_thinking, thinking_budget, or reasoning_effort, depending on the model:
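As a sketch, a Chat Completions body carrying these controls could look like the following. The field names come from the list above, but which one a given model accepts (and its valid values) is model-specific, so check the model's schema first; qwen3-max is used here as a placeholder model ID:

```python
import json

payload = {
    "model": "qwen3-max",
    "messages": [{"role": "user", "content": "Walk through 17 * 23 step by step."}],
    # Send only the control your target model documents:
    "enable_thinking": True,
    "thinking_budget": 4096,        # max thinking tokens, if supported
    # "reasoning_effort": "high",   # alternative control on some models
}
body = json.dumps(payload)
```

Models that list none of these fields should receive a plain body with no reasoning keys at all.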
For the Anthropic-style Messages endpoint, use Anthropic-style thinking when the model supports thinking:
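For the Messages endpoint, the thinking block follows the standard Anthropic request shape, sketched below; the model ID is a placeholder, and max_tokens must exceed the thinking budget:

```python
import json

payload = {
    "model": "your-thinking-model-id",  # placeholder; use a Messages-capable model
    "max_tokens": 4096,                 # must be larger than budget_tokens
    "thinking": {"type": "enabled", "budget_tokens": 1024},
    "messages": [{"role": "user", "content": "Plan a three-step refactor."}],
}
body = json.dumps(payload)
```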
Tool support varies:
Smoke test
Run this before configuring a larger tool. If this works, your key, credits, network, and model ID are good.
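This stdlib-only Python sketch builds a smoke-test request against the chat completions endpoint; swap in a real key and uncomment the last two lines to actually send it:

```python
import json
import urllib.request

BASE_URL = "https://api.empiriolabs.ai/v1"
API_KEY = "sk-empiriolabs-your_key_here"  # placeholder

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps({
        "model": "qwen3-max",
        "messages": [{"role": "user", "content": "Say hello in one word."}],
    }).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```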
To list current model IDs:
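The same stdlib approach works for the catalog. The data/id shape in the commented loop assumes the standard OpenAI models-list response, which this page says EmpirioLabs mirrors:

```python
import json
import urllib.request

req = urllib.request.Request(
    "https://api.empiriolabs.ai/v1/models?available=true",
    headers={"Authorization": "Bearer sk-empiriolabs-your_key_here"},  # placeholder
)
# with urllib.request.urlopen(req) as resp:
#     for model in json.load(resp)["data"]:
#         print(model["id"])
```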
Chat and roleplay frontends
Use this section for BYOK chat apps, roleplay tools, and shared web UIs. These tools normally do not need the helper script. Use your EmpirioLabs API key, pick a chat model such as qwen3-max, and keep secrets in the app’s local settings or environment variables.
For roleplay chats, we generally recommend starting with EmpirioLabs Native Inference models first, then models or variants listed in the China region when native coverage does not fit your use case. Check the Models page or Pricing page before choosing a model. Each model lists its served location there. For models with variants, check the variant entries too, since a variant can be served from a different region.
SillyTavern
SillyTavern is a local roleplay and character chat frontend. EmpirioLabs works through its custom OpenAI-compatible Chat Completion source.
- Open SillyTavern and click the plug icon to open API Connections.
- Set API type to Chat Completion.
- Set Chat Completion Source to Custom (OpenAI-compatible).
- Set Custom Endpoint / Base URL to https://api.empiriolabs.ai/v1.
- Paste your EmpirioLabs API key into the custom API key field.
- Click Connect, then choose a model from the dropdown or type a model ID such as qwen3-max.
Do not paste https://api.empiriolabs.ai/v1/chat/completions into SillyTavern’s base URL field. SillyTavern appends the chat completions path itself.
If the model dropdown is empty but your smoke test works, type the model ID manually. If a roleplay sampler causes a request error, remove non-standard extra parameters and retry with standard chat settings first.
PersonaLLM
PersonaLLM is an iOS roleplay and character chat app with bring-your-own-key provider settings. EmpirioLabs works through PersonaLLM’s custom text engine.
- From the home screen, tap the three-dot menu in the top left.
- Open Settings.
- Open Text Engine.
- Choose Custom.
- Set the base URL to https://api.empiriolabs.ai/v1.
- Paste your EmpirioLabs API key.
- In the models field, tap the button on the right to fetch the live model list.
- Choose a chat model such as qwen3-max or glm-5-1, then save the text engine settings.
PersonaLLM’s thinking toggle sends a reasoning setting when enabled and omits reasoning controls when disabled. EmpirioLabs treats the omitted PersonaLLM field as thinking off only for models whose default is thinking on. This compatibility behavior is scoped to PersonaLLM requests; other tools should send explicit reasoning parameters when they need to override a model default.
Janitor AI
Janitor AI can call EmpirioLabs through its Proxy configuration. Use this path when you want to keep using Janitor’s chat UI while bringing your own EmpirioLabs key.
- Open a Janitor AI chat.
- Click using janitor or the menu button near the top of the chat.
- Open API Settings.
- Select the Proxy tab.
- In Proxy Configurations, click + New.
- Set Name to EmpirioLabs.
- Set Model to qwen3-max, or another model ID from GET /v1/models?available=true.
- Set Proxy URL to https://api.empiriolabs.ai/v1/chat/completions.
- Paste your EmpirioLabs API key into API Key.
- Leave Custom Prompt blank unless you already use one for that character or chat.
- Click Add, save the settings, then refresh the Janitor AI page before sending the next message.
If Janitor AI offers a + /chat/completions helper next to the Proxy URL field, start with https://api.empiriolabs.ai/v1 and let the helper append the path. The saved URL should end in /v1/chat/completions.
TypingMind
TypingMind supports custom chat models where you provide an endpoint, model ID, and optional headers.
- Open Models from the left sidebar.
- Open Model Settings, then click Add Custom Models.
- Use API type OpenAI Chat Completions API if the form asks.
- Set Endpoint API to https://api.empiriolabs.ai/v1/chat/completions.
- Set Model ID to qwen3-max or another available model.
- Add header Authorization: Bearer sk-empiriolabs-your_key_here, or paste the key into TypingMind's API key field if the form provides one.
- Click Test, then Add Model.
TypingMind custom model setup is the main exception on this page: it usually asks for the full chat completions endpoint, not just the /v1 base URL.
Open WebUI
Open WebUI can connect to OpenAI-compatible providers from the admin connection screen.
- Open Admin Settings.
- Go to Connections and add a new OpenAI connection.
- Set URL to https://api.empiriolabs.ai/v1.
- Paste your EmpirioLabs API key.
- If model discovery is slow or too broad, add model IDs such as qwen3-max to the Model IDs filter.
- Save, then choose the EmpirioLabs model in chat.
For server launches, set:
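A sketch of the launch environment, using Open WebUI's standard OpenAI-connection variables (verify the names against your Open WebUI version; the key is a placeholder):

```shell
export OPENAI_API_BASE_URL="https://api.empiriolabs.ai/v1"
export OPENAI_API_KEY="sk-empiriolabs-your_key_here"
```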
LibreChat
LibreChat supports custom OpenAI-compatible endpoints through librechat.yaml. Use an environment variable for one shared deployment key, or user_provided if each user should bring their own key in the UI.
For BYOK multi-user deployments, change apiKey to:
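A sketch of the resulting custom endpoint block (key names follow LibreChat's librechat.yaml custom-endpoint schema; adjust to your deployment):

```yaml
endpoints:
  custom:
    - name: "EmpirioLabs"
      baseURL: "https://api.empiriolabs.ai/v1"
      apiKey: "user_provided"   # each user pastes their own key in the UI
      models:
        default: ["qwen3-max"]
        fetch: true
```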
Restart LibreChat after changing librechat.yaml.
LobeChat
For self-hosted LobeChat, use the OpenAI provider with an EmpirioLabs proxy URL:
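A sketch of the environment for a self-hosted deployment; OPENAI_PROXY_URL is LobeChat's override for the OpenAI provider endpoint, but verify the variable names against your LobeChat version:

```shell
export OPENAI_PROXY_URL="https://api.empiriolabs.ai/v1"
export OPENAI_API_KEY="sk-empiriolabs-your_key_here"  # placeholder
```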
Then restart LobeChat and choose an enabled EmpirioLabs model in the model selector.
OpenCode
The helper can write this automatically:
Manual setup:
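A sketch of an opencode.json provider block; the npm package name and the {file:...} substitution follow OpenCode's custom-provider conventions, so check them against your OpenCode version:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "empiriolabs": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "EmpirioLabs",
      "options": {
        "baseURL": "https://api.empiriolabs.ai/v1",
        "apiKey": "{file:./.empiriolabs-api-key}"
      },
      "models": {
        "qwen3-max": {}
      }
    }
  }
}
```

The {file:...} reference keeps the key out of the committed config; the helper stores it in the gitignored .empiriolabs-api-key file.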
In OpenCode, run /models and choose the EmpirioLabs provider. The file-backed key keeps working after you close and reopen OpenCode.
Claude Code
The helper can write the user-level settings automatically:
Claude Code is not an OpenAI-chat-completions client. It talks to LLM gateways through the Anthropic Messages shape, which EmpirioLabs exposes at /v1/messages.
Persistent user-level setup:
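A sketch of the user-level ~/.claude/settings.json env block, using the no-/v1 base URL described above; the variable names follow Claude Code's documented env settings, and the model ID is a placeholder:

```json
{
  "env": {
    "ANTHROPIC_BASE_URL": "https://api.empiriolabs.ai",
    "ANTHROPIC_AUTH_TOKEN": "sk-empiriolabs-your_key_here",
    "ANTHROPIC_MODEL": "your-messages-capable-model-id"
  }
}
```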
Use a model whose page lists POST /v1/messages under supported endpoints. If Claude Code reports a gateway-specific token counting or model discovery error, run it through an Anthropic-format gateway or adapter that implements Claude Code’s full gateway contract, then point that gateway at EmpirioLabs.
Cline
In the Cline extension UI:
- Open Cline settings.
- Set API Provider to OpenAI Compatible.
- Set Base URL to https://api.empiriolabs.ai/v1.
- Paste your EmpirioLabs API key.
- Enter a model ID such as qwen3-max.
- Click Verify, then start a new task.
For Cline CLI:
Qwen Code
The helper can write project or user settings automatically:
Launch Qwen Code directly with EmpirioLabs as the OpenAI-compatible provider:
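Qwen Code reads the standard OpenAI-compatible variables, so a direct launch can look like this sketch (verify the variable names against your Qwen Code version; the key is a placeholder):

```shell
export OPENAI_API_KEY="sk-empiriolabs-your_key_here"
export OPENAI_BASE_URL="https://api.empiriolabs.ai/v1"
export OPENAI_MODEL="qwen3-max"
# then launch:
# qwen
```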
For persistent project setup:
Add .qwen/settings.json to .gitignore if you store the key there.
Codex CLI
The helper can write the user-level provider block automatically:
Add EmpirioLabs as a custom model provider in ~/.codex/config.toml:
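A sketch of the provider block; the key names follow Codex CLI's config.toml model_providers schema, and wire_api = "responses" matches the /v1/responses requirement noted for this path:

```toml
model = "qwen3-max"
model_provider = "empiriolabs"

[model_providers.empiriolabs]
name = "EmpirioLabs"
base_url = "https://api.empiriolabs.ai/v1"
env_key = "EMPIRIOLABS_API_KEY"
wire_api = "responses"
```

With env_key set, Codex reads the key from the EMPIRIOLABS_API_KEY environment variable at launch.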
Then launch Codex with your key in the environment:
Use this path with EmpirioLabs models that support POST /v1/responses.
Aider
The helper can write a project-local Aider config automatically:
Aider uses the OpenAI-compatible environment variables. Prefix the model with openai/.
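A sketch of the environment plus launch command; Aider picks up the standard OpenAI-compatible variables, and the openai/ prefix routes the model through its OpenAI-compatible adapter:

```shell
export OPENAI_API_BASE="https://api.empiriolabs.ai/v1"
export OPENAI_API_KEY="sk-empiriolabs-your_key_here"  # placeholder
# then launch:
# aider --model openai/qwen3-max
```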
Continue
The helper can write the user-level Continue config automatically:
Continue’s OpenAI provider can target any OpenAI-compatible endpoint by setting apiBase. Put secrets in .env or Continue’s secret store rather than committing them into config.yaml.
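A sketch of the model entry in Continue's config.yaml; the secrets reference uses Continue's own substitution syntax, and the model ID is a placeholder:

```yaml
models:
  - name: EmpirioLabs qwen3-max
    provider: openai
    model: qwen3-max
    apiBase: https://api.empiriolabs.ai/v1
    apiKey: ${{ secrets.EMPIRIOLABS_API_KEY }}
```

Then define EMPIRIOLABS_API_KEY in one of Continue's supported .env locations rather than inlining the key.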
Add the secret in one of Continue’s supported .env locations:
OpenHands
The helper can write a project-local OpenHands config automatically:
OpenHands exposes provider settings in the UI and passes them through to its LLM layer.
For environment-based launches:
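A sketch of the launch environment; OpenHands passes these through to its LiteLLM-backed LLM layer, where the openai/ prefix selects the generic OpenAI-compatible route (confirm the variable names for your OpenHands version):

```shell
export LLM_MODEL="openai/qwen3-max"
export LLM_BASE_URL="https://api.empiriolabs.ai/v1"
export LLM_API_KEY="sk-empiriolabs-your_key_here"  # placeholder
```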
For persistent project setup:
Run OpenHands with:
Add openhands.empiriolabs.toml to .gitignore if you store the key there.
Hermes Agent
The helper can write the user-level Hermes sidecar automatically:
Hermes has an interactive model wizard. Choose Custom endpoint, then enter:
Manual config:
OpenClaw
The helper can write a user-level OpenClaw sidecar automatically:
The safest setup is the OpenClaw wizard:
Choose a custom or OpenAI-compatible provider and use:
For manual JSON5 config, use this as a sidecar or merge it into OpenClaw’s config:
Run openclaw config validate after manual edits.
goose
The helper can write the user-level goose custom provider automatically:
goose supports custom OpenAI-compatible providers. The helper writes this as empiriolabs.json in the goose custom provider directory.
Zed
Zed supports OpenAI-compatible providers in the Agent Panel. Use the UI’s Add Provider flow, or edit settings:
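A sketch of the settings.json entry; the field names follow Zed's OpenAI-compatible provider settings, and the token limit is a placeholder you should set from the model page:

```json
{
  "language_models": {
    "openai_compatible": {
      "EmpirioLabs": {
        "api_url": "https://api.empiriolabs.ai/v1",
        "available_models": [
          {
            "name": "qwen3-max",
            "max_tokens": 131072
          }
        ]
      }
    }
  }
}
```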
Add the API key through the Agent Panel so Zed stores it in the OS credential store.
Kilo Code, Roo Code, Cursor, and similar IDEs
Use this table anywhere a tool exposes OpenAI Compatible, Custom OpenAI, or Override OpenAI Base URL.
Kilo Code and Roo-style VS Code extensions normally expose this as an API configuration profile. Roo Code’s public docs and product notices indicate a shutdown/archive path on May 15, 2026, so prefer Cline or Kilo Code for new team-wide templates unless your team already depends on Roo.
Cursor’s custom API key behavior depends on the version and feature surface. If your Cursor build only accepts provider API keys and does not expose a custom base URL for the feature you want, it cannot be pointed directly at EmpirioLabs for that feature.
Troubleshooting
Keep agents grounded
When an AI coding assistant is implementing an EmpirioLabs integration for you, give it the machine-readable docs bundle first:
- Endpoint shapes, request bodies, responses, examples, errors, jobs, usage, and saved Playground conversation APIs.
- Combined context links for the Documentation tab, API Reference tab, and both together.
Tell the agent to use https://docs.empiriolabs.ai/ai-agent-api-reference-context.md as the API reference, https://docs.empiriolabs.ai/ai-agent-docs-context.md for model and pricing details, and GET https://api.empiriolabs.ai/v1/models/{model_id} for live model metadata.
That keeps the agent from guessing endpoint shapes or parameter names, or relying on stale model IDs.
