Providers

SwarmClaw supports 18 built-in providers plus OpenAI-compatible custom endpoints. Each agent can use a different runtime.

CLI Providers

CLI providers spawn a local binary and let that runtime manage its own tool loop.

| Provider | Binary | Notes |
| --- | --- | --- |
| Claude Code CLI | claude | Native tool/runtime loop through Claude Code. |
| OpenAI Codex CLI | codex | Native Codex runtime with session continuity and streamed events. |
| OpenCode CLI | opencode | CLI-backed coding/runtime provider. |
| Gemini CLI | gemini | CLI-backed Gemini runtime. |
| GitHub Copilot CLI | copilot | CLI-backed Copilot runtime with multi-model support. |

API Providers

API providers make direct HTTP calls.

| Provider | Endpoint Family | Notes |
| --- | --- | --- |
| Anthropic | api.anthropic.com | Claude models |
| OpenAI | api.openai.com | GPT and reasoning models |
| Google Gemini | generativelanguage.googleapis.com | Gemini API models |
| DeepSeek | api.deepseek.com | DeepSeek models |
| Groq | api.groq.com | Groq-hosted open models |
| Together | api.together.xyz | Together-hosted open models |
| Mistral | api.mistral.ai | Mistral models |
| xAI | api.x.ai | Grok models |
| Fireworks | api.fireworks.ai | Fireworks-hosted models |
| Nebius | api.tokenfactory.nebius.com | Nebius-hosted models |
| DeepInfra | api.deepinfra.com | DeepInfra-hosted models |

Local and Gateway Providers

| Provider | Type | Notes |
| --- | --- | --- |
| Ollama | Local or cloud | Explicit Local/Cloud mode. Local typically targets your own Ollama endpoint; Cloud uses Ollama's hosted path with credentials. |
| OpenClaw | Gateway | Route execution through saved OpenClaw gateway profiles or direct gateway configuration. |

OpenClaw

OpenClaw is an autonomous agent runtime that SwarmClaw can deploy, connect to, and manage as part of a larger swarm.

In the agent editor, OpenClaw is enabled through the OpenClaw Gateway toggle instead of the normal provider/model dropdown.

Gateway Profiles

OpenClaw gateways can be managed centrally:

  • create named gateway profiles with endpoint, credential, notes, tags, and deployment metadata
  • use Smart Deploy for local bring-up, VPS bundles, or SSH-managed remote install
  • mark one profile as the default
  • run discovery and health checks, and open gateway dashboards, from the control plane
  • import, export, and clone saved gateway profiles

Agents can inherit the default profile, select another saved profile, or use direct gateway details when needed.
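As a rough illustration, a saved gateway profile can be thought of as a named record carrying the fields listed above. The shape below is a hypothetical sketch, not SwarmClaw's actual schema:

```python
# Hypothetical sketch of a gateway profile record; field names are
# illustrative, not SwarmClaw's actual schema.
from dataclasses import dataclass, field

@dataclass
class GatewayProfile:
    name: str
    endpoint: str
    credential_ref: str           # reference to an encrypted credential, not the secret itself
    notes: str = ""
    tags: list = field(default_factory=list)
    is_default: bool = False

profiles = [
    GatewayProfile("prod-vps", "https://gateway.example.com", "cred-1",
                   tags=["vps"], is_default=True),
    GatewayProfile("local-dev", "http://127.0.0.1:8080", "cred-2"),
]

# An agent that does not pin a profile inherits the default one.
default_profile = next(p for p in profiles if p.is_default)
```

An agent that pins a specific profile would instead look it up by name, and an agent using direct gateway details would bypass the saved profiles entirely.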

Fleet Visibility

SwarmClaw also tracks external OpenClaw runtimes:

  • gateway health and verification state
  • runtime counts and last-seen data
  • external agent runtime lifecycle state
  • remote lifecycle controls for SSH-managed installs

See OpenClaw Setup for the full deployment flow.

Custom Providers

Add any OpenAI-compatible endpoint as a custom provider. Common examples include:

  • OpenRouter
  • local vLLM or LM Studio style endpoints
  • internal OpenAI-compatible gateways

Custom providers appear in agent and chat dropdowns immediately after saving.
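For reference, "OpenAI-compatible" means the endpoint accepts the standard chat-completions request shape. A minimal sketch, assuming a hypothetical local vLLM or LM Studio style server at `127.0.0.1:8000`:

```python
import json
import urllib.request

# Hypothetical base URL for a local OpenAI-compatible server;
# substitute your custom provider's actual endpoint.
BASE_URL = "http://127.0.0.1:8000/v1"

# Standard OpenAI-style chat-completions payload.
payload = {
    "model": "my-local-model",  # any model name the endpoint serves
    "messages": [{"role": "user", "content": "Hello"}],
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer sk-placeholder",  # placeholder credential
    },
)
# urllib.request.urlopen(req) would send the request; omitted here.
```

Any endpoint that answers this request shape can be registered as a custom provider.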

Model Selection

SwarmClaw can populate model lists from provider discovery, but you are not limited to discovered models. You can still type a newer or custom model name manually.

Credentials

Provider credentials are encrypted before storage. Add them from provider setup flows or from the secrets/credentials surfaces.

Ollama Mode Selection

Ollama transport is explicit:

  • choose Local to talk to a local or self-hosted Ollama endpoint
  • choose Cloud to use Ollama's hosted endpoint with a credential

Model names do not silently switch transport for you.
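That explicit choice can be pictured as a simple transport resolver. In this sketch, `11434` is Ollama's default local port, and the cloud URL is a placeholder rather than the real hosted endpoint:

```python
OLLAMA_LOCAL_DEFAULT = "http://127.0.0.1:11434"        # Ollama's default local port
OLLAMA_CLOUD = "https://ollama-cloud.example.invalid"  # placeholder for the hosted endpoint

def resolve_transport(mode, credential=None):
    """Return the base URL for the chosen mode; never inferred from the model name."""
    if mode == "local":
        return OLLAMA_LOCAL_DEFAULT
    if mode == "cloud":
        if not credential:
            raise ValueError("Cloud mode requires a credential")
        return OLLAMA_CLOUD
    raise ValueError(f"Unknown Ollama mode: {mode}")
```

Note that the model name never appears in the resolver: transport is decided entirely by the explicit mode setting.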

Model Failover

Agents can be configured with fallback credentials to retry through another credential when a provider fails with a rate-limit, auth, or server-side issue.
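A rough sketch of that retry behavior, using a hypothetical error type to stand in for rate-limit, auth, and server-side failures:

```python
class RetryableProviderError(Exception):
    """Stand-in for rate-limit, auth, or server-side failures."""

def complete_with_failover(credentials, call):
    """Try each credential in order; fall through only on retryable errors."""
    last_error = None
    for cred in credentials:
        try:
            return call(cred)
        except RetryableProviderError as err:
            last_error = err  # this credential failed; try the next one
    raise last_error

# Example: the primary credential is rate-limited, the fallback succeeds.
def call(cred):
    if cred == "primary":
        raise RetryableProviderError("429 rate limited")
    return f"ok via {cred}"
```

Non-retryable errors (for example, a malformed request) would propagate immediately instead of triggering failover.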

See Failover for the full model.

Provider Limitations

  • CLI providers and OpenClaw manage their own runtime/tool loop, so SwarmClaw does not expose the same local tool toggles for those agents.
  • OpenClaw runs tools remotely on the gateway, not inside the local SwarmClaw process.
  • Custom providers must be OpenAI-compatible.