Config Reference
A fully annotated openclaw.json with every section explained: models, memory, channels, custom providers, file structure, common mistakes, and cost tracking.
Quick start
1. Copy the sanitized config: `cp sanitized-config.json ~/.openclaw/openclaw.json`
2. Replace all placeholders: `YOUR_*` → real API keys and tokens
3. Validate: `openclaw doctor --fix`
4. Security audit: `openclaw security audit --deep`

Model configuration (`agents.defaults.model`)
The coordinator vs worker pattern:
- Keep expensive models (Opus, Sonnet) out of the `primary` slot
- Use capable but cheap models as your default
- Strong models go in `fallbacks` or get pinned to specific agents
"agents": {
"defaults": {
"model": {
"primary": "anthropic/claude-sonnet-4-5",
"fallbacks": [
"openai/gpt-5-mini",
"kimi-coding/k2p5",
"openrouter/google/gemini-3-flash-preview"
]
},
"models": {
"anthropic/claude-haiku-4-5": { "alias": "haiku" },
"anthropic/claude-sonnet-4-5": { "alias": "sonnet" },
"anthropic/claude-opus-4-6": { "alias": "opus" },
"kimi-coding/k2p5": { "alias": "kimi" }
}
}
}Why this matters: Expensive defaults = burned quotas on routine work. Cheap defaults with scoped fallbacks = predictable costs.
Named agents
"named": {
"monitor": {
"model": {
"primary": "openai/gpt-5-nano",
"fallbacks": [
"openrouter/google/gemini-2.5-flash-lite",
"anthropic/claude-haiku-4-5"
]
},
"systemPromptFile": "workspace/agents/monitor.md"
},
"researcher": {
"model": {
"primary": "kimi-coding/k2p5",
"fallbacks": [
"synthetic/hf:zai-org/GLM-4.7",
"openai/gpt-5-mini"
]
},
"systemPromptFile": "workspace/agents/researcher.md"
}
}Concurrency limits
"maxConcurrent": 4,
"subagents": {
"maxConcurrent": 8
}Prevents one bad task from spawning 50 retries and burning your quota in minutes.
Custom model providers
You can add custom providers like NVIDIA NIM to access additional models:
"models": {
"mode": "merge",
"providers": {
"nvidia-nim": {
"baseUrl": "https://integrate.api.nvidia.com/v1",
"api": "openai",
"models": [
{
"id": "nvidia/moonshotai/kimi-k2.5",
"name": "Kimi K2.5 (NVIDIA NIM)",
"reasoning": false,
"input": ["text"],
"cost": { "input": 0, "output": 0 },
"contextWindow": 256000,
"maxTokens": 8192
}
]
}
}
}Rate limits: NVIDIA NIM free tier has 40 RPM limit. Use sparingly or as fallback.
Auth: set `NVIDIA_API_KEY` in your environment or in the credentials directory.
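Either route works. A minimal sketch of both, assuming the credentials file is simply named after the provider and contains the bare key (the exact filename OpenClaw expects is an assumption here; mirror whatever your other provider files look like):

```bash
# Option 1: environment variable (shell profile, systemd unit, etc.)
export NVIDIA_API_KEY="<YOUR_NVIDIA_API_KEY>"

# Option 2: credentials directory (filename assumed)
umask 077
printf '%s\n' "<YOUR_NVIDIA_API_KEY>" > ~/.openclaw/credentials/nvidia-nim
chmod 600 ~/.openclaw/credentials/nvidia-nim
```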
Channels (Telegram, Discord, Slack)
"channels": {
"telegram": {
"enabled": true,
"dmPolicy": "pairing",
"botToken": "<YOUR_TELEGRAM_BOT_TOKEN>",
"groupPolicy": "allowlist",
"streamMode": "partial"
},
"discord": {
"enabled": true,
"token": "<YOUR_DISCORD_BOT_TOKEN>",
"groupPolicy": "allowlist",
"dm": {
"enabled": true,
"policy": "allowlist",
"allowFrom": ["<YOUR_DISCORD_USER_ID>"]
},
"guilds": {
"<YOUR_DISCORD_GUILD_ID>": {
"requireMention": false,
"users": ["<YOUR_DISCORD_USER_ID>"],
"channels": { "*": { "allow": true } }
}
}
},
"slack": {
"mode": "socket",
"enabled": true,
"botToken": "<YOUR_SLACK_BOT_TOKEN>",
"appToken": "<YOUR_SLACK_APP_TOKEN>",
"userTokenReadOnly": true,
"groupPolicy": "allowlist",
"channels": {}
}
}Tools, media & messages
"tools": {
"profile": "full",
"web": {
"search": { "enabled": true, "apiKey": "<YOUR_BRAVE_API_KEY>" },
"fetch": { "enabled": true }
},
"media": {
"audio": {
"enabled": true,
"models": [{
"type": "cli",
"command": "/path/to/whisper-wrapper",
"args": ["{input}"]
}]
}
}
},
"messages": {
"ackReactionScope": "group-mentions",
"tts": {
"auto": "inbound",
"provider": "edge",
"edge": { "enabled": true, "voice": "en-GB-RyanNeural" }
}
}Hooks
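The audio entry shells out to whatever command you point it at, substituting the audio file path for `{input}`. Here is a sketch of such a wrapper, assuming the openai-whisper CLI is installed and that the transcript is expected on stdout (the stdout contract is an assumption; check the media tooling docs for the exact interface):

```bash
#!/usr/bin/env bash
# Hypothetical whisper-wrapper: transcribe the audio file passed as {input}
# and print the plain-text transcript to stdout.
set -euo pipefail

input="$1"
tmpdir="$(mktemp -d)"
trap 'rm -rf "$tmpdir"' EXIT

# openai-whisper writes <basename>.txt into --output_dir
whisper "$input" --model small --output_format txt --output_dir "$tmpdir" >/dev/null
cat "$tmpdir/$(basename "${input%.*}").txt"
```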
"hooks": {
"internal": {
"enabled": true,
"entries": {
"command-logger": { "enabled": true },
"boot-md": { "enabled": true },
"session-memory": { "enabled": true }
}
}
}Gateway
"gateway": {
"port": 18789,
"mode": "local",
"bind": "loopback",
"auth": {
"mode": "token",
"token": "<GENERATE_RANDOM_TOKEN>",
"allowTailscale": true
},
"tailscale": {
"mode": "serve",
"resetOnExit": true
}
}Critical: bind: "loopback" ensures the gateway only listens on 127.0.0.1. See the VPS Deployment chapter for verification steps.
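Two quick commands worth running here, with standard tooling (nothing OpenClaw-specific): generate a value for the token placeholder, and confirm the gateway really is listening on loopback only.

```bash
# Generate a random token for "auth.token"
openssl rand -hex 32

# After the gateway is running: expect 127.0.0.1:18789, not 0.0.0.0:18789
ss -tlnp | grep 18789
```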
File structure
```
~/.openclaw/
├── openclaw.json        # Main config (this file, sanitized)
├── credentials/         # API keys (chmod 600)
│   ├── openrouter
│   ├── anthropic
│   └── synthetic
└── workspace/           # Your working directory
    ├── AGENTS.md
    ├── SOUL.md
    ├── USER.md
    ├── TOOLS.md
    ├── HEARTBEAT.md
    ├── memory/
    │   ├── 2026-02-07.md
    │   └── ...
    └── skills/
        └── your-skills/
```
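The tree notes chmod 600 on the key files; to apply that (keeping the directory itself at 700 is a sensible extra, though not required by the note above):

```bash
chmod 700 ~/.openclaw/credentials
chmod 600 ~/.openclaw/credentials/*
```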
Common mistakes

1. Leaving expensive models as default
Opus/Sonnet in `primary` = quota burnout. Move them to `fallbacks` or agent-specific configs.
2. No context pruning
Token usage climbs and costs spiral. Add `contextPruning` with `cache-ttl` (see the sketch after this list).
3. Gateway exposed to the network
`bind: "0.0.0.0"` = anyone can access your agent. Always use `bind: "loopback"`.
4. No concurrency limits
One stuck task spawns 50 retries. Set `maxConcurrent` to something sane (4–8).
5. Skipping the security audit
Run `openclaw security audit --deep` after every config change.
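The pruning schema isn't shown in this config, so treat the following as a shape sketch only: `contextPruning` and `cache-ttl` come from the note above, but the field names around them and where the block lives (alongside `agents.defaults` here) are assumptions to verify against the official docs.

```json
"contextPruning": {
  "mode": "cache-ttl",
  "ttl": "1h"
}
```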
Cost tracking
```bash
# Check quotas (optional script)
check-quotas

# Monitor costs in provider dashboards
# - OpenRouter: https://openrouter.ai/activity
# - Anthropic: https://console.anthropic.com/settings/usage
# - OpenAI: https://platform.openai.com/usage
```

Target: $45–50/month for moderate usage (main session + occasional subagents). The usual budget killers:
- Expensive model in default config
- Runaway agent retries (no concurrency limits)
- Memory flush running too often
- Heartbeat using premium model
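`check-quotas` above refers to an optional script of your own. A minimal stand-in for the OpenRouter portion, assuming your key is exported as `OPENROUTER_API_KEY` (the variable name is an assumption about your setup; the endpoint is OpenRouter's key-info route and returns JSON with usage and limits):

```bash
#!/usr/bin/env bash
# Print current usage/limits for the OpenRouter key
curl -s https://openrouter.ai/api/v1/auth/key \
  -H "Authorization: Bearer $OPENROUTER_API_KEY"
```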
Next steps
- Set up your channels (Telegram, Discord, etc.)
- Configure role-specific agents (monitor, researcher, communicator)
- Add skills to workspace/skills/
- Set up heartbeat checks in HEARTBEAT.md
- Test in a local session before enabling 24/7 mode
Final thoughts
You don't need expensive hardware or expensive subscriptions to make OpenClaw useful. What you need is to be deliberate about configuration, keep visibility into what's happening, and resist the urge to over-engineer before you understand the basics.
If this saves you some time or frustration, it did its job.
Resources
- Official Docs: docs.openclaw.ai
- GitHub Issues: report bugs & feature requests
- Discord Community: get help from the community
- Full JSON Config Guide: our detailed breakdown
On Anthropic bans
From what I've seen, bans come down to how aggressively Claude is being hit through the API, not to OpenClaw itself. That's not to say bans never happen; the cases I know of were tied to heavy automated usage patterns, not to simply running OpenClaw. If you're not hammering the API beyond normal use, there's no obvious reason to worry.