📋 Chapter 8

Config Reference

Full annotated openclaw.json with every section explained. Models, memory, channels, custom providers, file structure, common mistakes, and cost tracking.


Quick start

1. Copy the sanitized config: cp sanitized-config.json ~/.openclaw/openclaw.json
2. Replace all placeholders: YOUR_* → real API keys and tokens
3. Validate: openclaw doctor --fix
4. Run a security audit: openclaw security audit --deep

Model configuration (agents.defaults.model)

The coordinator vs worker pattern:

  • Keep expensive models (Opus, Sonnet) out of the primary slot
  • Use capable but cheap models as your default
  • Strong models go in fallbacks or pinned to specific agents
openclaw.json — Model defaults
"agents": {
  "defaults": {
    "model": {
      "primary": "anthropic/claude-sonnet-4-5",
      "fallbacks": [
        "openai/gpt-5-mini",
        "kimi-coding/k2p5",
        "openrouter/google/gemini-3-flash-preview"
      ]
    },
    "models": {
      "anthropic/claude-haiku-4-5": { "alias": "haiku" },
      "anthropic/claude-sonnet-4-5": { "alias": "sonnet" },
      "anthropic/claude-opus-4-6": { "alias": "opus" },
      "kimi-coding/k2p5": { "alias": "kimi" }
    }
  }
}
💡

Why this matters: Expensive defaults = burned quotas on routine work. Cheap defaults with scoped fallbacks = predictable costs.

Named agents

openclaw.json — Named agents
"named": {
  "monitor": {
    "model": {
      "primary": "openai/gpt-5-nano",
      "fallbacks": [
        "openrouter/google/gemini-2.5-flash-lite",
        "anthropic/claude-haiku-4-5"
      ]
    },
    "systemPromptFile": "workspace/agents/monitor.md"
  },
  "researcher": {
    "model": {
      "primary": "kimi-coding/k2p5",
      "fallbacks": [
        "synthetic/hf:zai-org/GLM-4.7",
        "openai/gpt-5-mini"
      ]
    },
    "systemPromptFile": "workspace/agents/researcher.md"
  }
}

Concurrency limits

openclaw.json
"maxConcurrent": 4,
"subagents": {
  "maxConcurrent": 8
}

Prevents one bad task from spawning 50 retries and burning your quota in minutes.

Custom model providers

You can add custom providers like NVIDIA NIM to access additional models:

openclaw.json — NVIDIA NIM example
"models": {
  "mode": "merge",
  "providers": {
    "nvidia-nim": {
      "baseUrl": "https://integrate.api.nvidia.com/v1",
      "api": "openai",
      "models": [
        {
          "id": "nvidia/moonshotai/kimi-k2.5",
          "name": "Kimi K2.5 (NVIDIA NIM)",
          "reasoning": false,
          "input": ["text"],
          "cost": { "input": 0, "output": 0 },
          "contextWindow": 256000,
          "maxTokens": 8192
        }
      ]
    }
  }
}

Rate limits: NVIDIA NIM's free tier is capped at 40 requests per minute. Use it sparingly, or only as a fallback.

Auth: Set NVIDIA_API_KEY in environment or credentials directory.
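Two ways to provide the key, as a sketch. The NVIDIA_API_KEY variable name comes from the note above; the credentials filename is a guess, so check what your install actually expects:

```shell
# Option A: export for the current shell (add to your shell profile to persist)
export NVIDIA_API_KEY="<YOUR_NVIDIA_API_KEY>"

# Option B: drop it in the credentials directory (filename "nvidia" is an assumption)
mkdir -p ~/.openclaw/credentials
printf '%s\n' "<YOUR_NVIDIA_API_KEY>" > ~/.openclaw/credentials/nvidia
chmod 600 ~/.openclaw/credentials/nvidia
```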

Channels (Telegram, Discord, Slack)

openclaw.json — Channels
"channels": {
  "telegram": {
    "enabled": true,
    "dmPolicy": "pairing",
    "botToken": "<YOUR_TELEGRAM_BOT_TOKEN>",
    "groupPolicy": "allowlist",
    "streamMode": "partial"
  },
  "discord": {
    "enabled": true,
    "token": "<YOUR_DISCORD_BOT_TOKEN>",
    "groupPolicy": "allowlist",
    "dm": {
      "enabled": true,
      "policy": "allowlist",
      "allowFrom": ["<YOUR_DISCORD_USER_ID>"]
    },
    "guilds": {
      "<YOUR_DISCORD_GUILD_ID>": {
        "requireMention": false,
        "users": ["<YOUR_DISCORD_USER_ID>"],
        "channels": { "*": { "allow": true } }
      }
    }
  },
  "slack": {
    "mode": "socket",
    "enabled": true,
    "botToken": "<YOUR_SLACK_BOT_TOKEN>",
    "appToken": "<YOUR_SLACK_APP_TOKEN>",
    "userTokenReadOnly": true,
    "groupPolicy": "allowlist",
    "channels": {}
  }
}

Tools, media & messages

openclaw.json — Tools, media, messages
"tools": {
  "profile": "full",
  "web": {
    "search": { "enabled": true, "apiKey": "<YOUR_BRAVE_API_KEY>" },
    "fetch": { "enabled": true }
  },
  "media": {
    "audio": {
      "enabled": true,
      "models": [{
        "type": "cli",
        "command": "/path/to/whisper-wrapper",
        "args": ["{input}"]
      }]
    }
  }
},
"messages": {
  "ackReactionScope": "group-mentions",
  "tts": {
    "auto": "inbound",
    "provider": "edge",
    "edge": { "enabled": true, "voice": "en-GB-RyanNeural" }
  }
}

Hooks

openclaw.json
"hooks": {
  "internal": {
    "enabled": true,
    "entries": {
      "command-logger": { "enabled": true },
      "boot-md": { "enabled": true },
      "session-memory": { "enabled": true }
    }
  }
}

Gateway

openclaw.json
"gateway": {
  "port": 18789,
  "mode": "local",
  "bind": "loopback",
  "auth": {
    "mode": "token",
    "token": "<GENERATE_RANDOM_TOKEN>",
    "allowTailscale": true
  },
  "tailscale": {
    "mode": "serve",
    "resetOnExit": true
  }
}

Critical: bind: "loopback" ensures the gateway only listens on 127.0.0.1. See the VPS Deployment chapter for verification steps.
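For <GENERATE_RANDOM_TOKEN>, any high-entropy random string works. One common way to generate one, assuming openssl is installed:

```shell
# Generate a 64-character hex token for gateway.auth.token
openssl rand -hex 32
```

Paste the output into the token field and keep it out of version control.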

File structure

Expected workspace layout
~/.openclaw/
├── openclaw.json          # Main config (this file, sanitized)
├── credentials/           # API keys (chmod 600)
│   ├── openrouter
│   ├── anthropic
│   └── synthetic
└── workspace/             # Your working directory
    ├── AGENTS.md
    ├── SOUL.md
    ├── USER.md
    ├── TOOLS.md
    ├── HEARTBEAT.md
    ├── memory/
    │   ├── 2026-02-07.md
    │   └── ...
    └── skills/
        └── your-skills/
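The credentials directory is the most sensitive part of this layout. A quick way to lock it down with standard POSIX permissions (paths match the tree above):

```shell
# Restrict the config and credentials directories to your user only
mkdir -p ~/.openclaw/credentials
chmod 700 ~/.openclaw
chmod 700 ~/.openclaw/credentials

# Key files readable/writable by owner only (the chmod 600 noted above);
# "|| true" keeps this from failing when the directory is still empty
chmod 600 ~/.openclaw/credentials/* 2>/dev/null || true
```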

Common mistakes

1. Leaving expensive models as default

Opus/Sonnet in primary = quota burnout. Move them to fallbacks or agent-specific configs.
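For illustration, pinning an expensive model to a single agent while the default stays cheap might look like this. The "architect" agent name is hypothetical; the model IDs match the earlier examples:

openclaw.json — Pinning Opus to one agent (illustrative sketch)
```json
"named": {
  "architect": {
    "model": {
      "primary": "anthropic/claude-opus-4-6",
      "fallbacks": ["anthropic/claude-sonnet-4-5"]
    }
  }
}
```

Everything else keeps running on the cheap default; only work routed to this agent touches Opus.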

2. No context pruning

Token usage climbs, costs spiral. Add contextPruning with cache-ttl.
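As a sketch only — the exact schema may differ in your OpenClaw version, and the ttl value is illustrative — a pruning block might look like this, assuming contextPruning sits under agents.defaults:

openclaw.json — Context pruning (illustrative sketch)
```json
"agents": {
  "defaults": {
    "contextPruning": {
      "mode": "cache-ttl",
      "ttl": "1h"
    }
  }
}
```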

3. Gateway exposed to network

bind: "0.0.0.0" = anyone can access your agent. Always use bind: "loopback".

4. No concurrency limits

One stuck task spawns 50 retries. Set maxConcurrent to something sane (4–8).

5. Skipping security audit

Run openclaw security audit --deep after every config change.

Cost tracking

bash
# Check quotas (optional script)
check-quotas

# Monitor costs in provider dashboards
# - OpenRouter: https://openrouter.ai/activity
# - Anthropic: https://console.anthropic.com/settings/usage
# - OpenAI: https://platform.openai.com/usage

Target: $45–50/month for moderate usage (main session + occasional subagents).

💡
If costs climb above $100/month, check for:
  • Expensive model in default config
  • Runaway agent retries (no concurrency limits)
  • Memory flush running too often
  • Heartbeat using premium model

Next steps

1. Set up your channels (Telegram, Discord, etc.)
2. Configure role-specific agents (monitor, researcher, communicator)
3. Add skills to workspace/skills/
4. Set up heartbeat checks in HEARTBEAT.md
5. Test in a local session before enabling 24/7 mode

Final thoughts

You don't need expensive hardware or expensive subscriptions to make OpenClaw useful. What you need is to be deliberate about configuration, keep visibility into what's happening, and resist the urge to over-engineer before you understand the basics.

If this saves you some time or frustration, it did its job.

Resources

On Anthropic bans

From what I've seen, bans come down to how aggressively Claude is being hit through the API, not to OpenClaw itself. Bans do happen, but every case I've seen was tied to aggressive automated usage patterns, not to simply running OpenClaw. If you're not hammering the API beyond normal usage, there's no obvious reason to worry.