# Providers

SwarmClaw supports 18 built-in providers plus any OpenAI-compatible custom endpoint. Each agent can use a different runtime.
## CLI Providers
CLI providers spawn a local binary and let that runtime manage its own tool loop.
| Provider | Binary | Notes |
|---|---|---|
| Claude Code CLI | claude | Native tool/runtime loop through Claude Code. |
| OpenAI Codex CLI | codex | Native Codex runtime with session continuity and streamed events. |
| OpenCode CLI | opencode | CLI-backed coding/runtime provider. |
| Gemini CLI | gemini | CLI-backed Gemini runtime. |
| GitHub Copilot CLI | copilot | CLI-backed Copilot runtime with multi-model support through Copilot. |
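As a rough illustration of the CLI pattern (the flag below is a placeholder, not any of these binaries' actual options), the host spawns the provider binary and streams its output while the runtime drives its own tool loop:

```python
import subprocess


def build_command(binary: str, prompt: str) -> list[str]:
    # Hypothetical invocation; each real CLI provider defines its own flags.
    return [binary, "--print", prompt]


def run_cli_provider(binary: str, prompt: str) -> str:
    """Spawn the provider binary and collect its streamed stdout."""
    proc = subprocess.Popen(
        build_command(binary, prompt),
        stdout=subprocess.PIPE,
        text=True,
    )
    lines = []
    for line in proc.stdout:  # consume events as the runtime emits them
        lines.append(line.rstrip("\n"))
    proc.wait()
    return "\n".join(lines)
```

Because the child process owns the tool loop, the host's job reduces to process lifecycle and event streaming rather than tool dispatch.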
## API Providers

API providers call each service's HTTP API directly.
| Provider | Endpoint Family | Notes |
|---|---|---|
| Anthropic | api.anthropic.com | Claude models |
| OpenAI | api.openai.com | GPT and reasoning models |
| Google Gemini | generativelanguage.googleapis.com | Gemini API models |
| DeepSeek | api.deepseek.com | DeepSeek models |
| Groq | api.groq.com | Groq-hosted open models |
| Together | api.together.xyz | Together-hosted open models |
| Mistral | api.mistral.ai | Mistral models |
| xAI | api.x.ai | Grok models |
| Fireworks | api.fireworks.ai | Fireworks-hosted models |
| Nebius | api.tokenfactory.nebius.com | Nebius-hosted models |
| DeepInfra | api.deepinfra.com | DeepInfra-hosted models |
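Many of these services accept an OpenAI-style chat-completion request; a minimal sketch of building one (the path and model name are examples, and some providers, such as Anthropic, use a different native request shape):

```python
import json
import urllib.request


def build_chat_request(base_url: str, api_key: str, model: str,
                       prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request, as accepted by
    many (not all) of the hosted APIs above."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
```

Sending the request is then a single `urllib.request.urlopen(...)` call against the provider's endpoint.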
## Local and Gateway Providers
| Provider | Type | Notes |
|---|---|---|
| Ollama | Local or cloud | Explicit Local/Cloud mode. Local typically targets your own Ollama endpoint; Cloud uses Ollama's hosted path with credentials. |
| OpenClaw | Gateway | Route execution through saved OpenClaw gateway profiles or direct gateway configuration. |
## OpenClaw
OpenClaw is an autonomous agent runtime that SwarmClaw can deploy, connect to, and manage as part of a larger swarm.
In the agent editor, OpenClaw is enabled through the OpenClaw Gateway toggle instead of the normal provider/model dropdown.
### Gateway Profiles
OpenClaw gateways can be managed centrally:
- create named gateway profiles with endpoint, credential, notes, tags, and deployment metadata
- use Smart Deploy for local bring-up, VPS bundles, or SSH-managed remote install
- mark one profile as the default
- run discovery, health checks, and dashboard open flows from the control plane
- import, export, and clone saved gateway profiles
Agents can inherit the default profile, select another saved profile, or use direct gateway details when needed.
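One plausible resolution order for that inheritance (field names and the direct-over-named-over-default precedence are illustrative assumptions, not SwarmClaw's documented behavior) might look like:

```python
def resolve_gateway(agent: dict, profiles: dict[str, dict]) -> dict:
    """Pick a gateway config: direct details win, then a named
    profile, then whichever saved profile is marked default."""
    if agent.get("gateway_direct"):
        return agent["gateway_direct"]
    name = agent.get("gateway_profile")
    if name:
        return profiles[name]
    for profile in profiles.values():
        if profile.get("default"):
            return profile
    raise LookupError("no gateway profile configured")
```

Resolving at dispatch time means changing the default profile immediately affects every agent that inherits it.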
### Fleet Visibility
SwarmClaw also tracks external OpenClaw runtimes:
- gateway health and verification state
- runtime counts and last-seen data
- external agent runtime lifecycle state
- remote lifecycle controls for SSH-managed installs
See OpenClaw Setup for the full deployment flow.
## Custom Providers
Add any OpenAI-compatible endpoint as a custom provider. Common examples include:
- OpenRouter
- local vLLM or LM Studio style endpoints
- internal OpenAI-compatible gateways
Custom providers appear in agent and chat dropdowns immediately after saving.
## Model Selection
SwarmClaw can populate model lists from provider discovery, but you are not limited to discovered models. You can still type a newer or custom model name manually.
## Credentials
Provider credentials are encrypted before storage. Add them from provider setup flows or from the secrets/credentials surfaces.
## Ollama Mode Selection
Ollama transport is explicit:
- choose Local to talk to a local or self-hosted Ollama endpoint
- choose Cloud to use Ollama's hosted endpoint with a credential
Model names do not silently switch transport for you.
## Model Failover
Agents can be configured with fallback credentials: when a provider fails with a rate-limit, auth, or server-side error, the request is retried through another credential.
See Failover for the full model.
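The retry behavior can be sketched as a loop over credentials that advances only on retryable failures (the status-code set and error class here are assumptions, not SwarmClaw's internals):

```python
# Rate-limit, auth, and server-side errors are worth retrying with
# another credential; a malformed request (e.g. 400) is not.
RETRYABLE = {401, 403, 429, 500, 502, 503}


class ProviderError(Exception):
    def __init__(self, status: int):
        super().__init__(f"provider returned {status}")
        self.status = status


def call_with_failover(credentials, send):
    """Try each credential in order; stop on success or a
    non-retryable error, re-raising the last failure otherwise."""
    last = None
    for cred in credentials:
        try:
            return send(cred)
        except ProviderError as err:
            if err.status not in RETRYABLE:
                raise
            last = err
    raise last
```

Here `send` stands in for whatever actually performs the provider call with a given credential.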
## Provider Limitations
- CLI providers and OpenClaw manage their own runtime/tool loop, so SwarmClaw does not expose the same local tool toggles for those agents.
- OpenClaw runs tools remotely on the gateway, not inside the local SwarmClaw process.
- Custom providers must be OpenAI-compatible.