Configuration

ContextBay uses TOML configuration files with environment variable overrides. This document covers every option for both master and worker nodes.

Config File Location

The master and worker load configuration in this order (first match wins; the master's paths are shown below):

  1. Path specified by CONTEXTBAY_CONFIG environment variable
  2. $XDG_CONFIG_HOME/contextbay/master.toml
  3. ~/.config/contextbay/master.toml
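For example, to pin the master to an explicit config path, set the highest-priority override before starting the service (the /etc path here is illustrative; any readable path works):

```shell
# CONTEXTBAY_CONFIG beats both XDG lookup paths.
export CONTEXTBAY_CONFIG=/etc/contextbay/master.toml
echo "$CONTEXTBAY_CONFIG"
```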

Data Paths

| Path              | Default             | Purpose                                                                      |
|-------------------|---------------------|------------------------------------------------------------------------------|
| Database          | /data/contextbay.db | SQLite database (WAL mode)                                                   |
| Knowledge vault   | /data/vault         | Markdown pages + version history                                             |
| Generated configs | /data/generated     | prometheus.yml, alertmanager.yml, recording rules; written by CB on changes  |
| Mesh state        | /data/tsnet         | Headscale tsnet client state                                                 |

Master Configuration

A fresh deploy works with no config file at all — every default is tuned for a working single-machine install. The example below shows the most common knobs.

# Server
[server]
host                  = "0.0.0.0"  # Listen address for HTTP and WebSocket
port                  = 7480       # HTTP/WebSocket port
grpc_port             = 7481       # gRPC port for worker connections (bound to mesh)
advertise_addr        = ""         # LAN IP shown to workers; auto-detected if empty
stale_threshold_secs  = 90         # Mark a node degraded after this many seconds without heartbeat

# Database
[database]
driver = "sqlite"      # "sqlite" or "postgres"
path   = ""            # SQLite file path (defaults to /data/contextbay.db)
# url  = ""            # PostgreSQL connection URL

# Authentication
[auth]
jwt_secret          = ""    # Auto-generated and persisted on first startup if empty
jwt_secret_previous = ""    # Previous secret for dual-secret rotation
setup_complete      = false # Flipped automatically once /api/auth/setup runs

# Mesh networking
[mesh]
mode                  = "headscale"          # "headscale" (default) or "direct" (single-node dev only)
hostname              = "contextbay-master"  # Mesh hostname
control_url           = ""                   # Master-side Headscale URL (in-cluster)
headscale_public_url  = ""                   # Worker-facing Headscale URL; derived from /api/enroll Host header if empty
master_public_url     = ""                   # Worker-facing master HTTP URL; derived from /api/hosts Host header if empty
auth_key              = ""                   # Pre-auth key for the master's own tsnet identity
api_key               = ""                   # Headscale management API key (for nodes/preauth-key APIs)
state_dir             = "/data/tsnet"        # tsnet client state
ephemeral             = false                # Auto-cleanup ephemeral nodes
tags                  = ""                   # ACL tags applied to enrolled nodes (comma-separated)
auto_enroll           = false                # Auto-approve enrollments (off by default)
shared_secret         = ""                   # Shared secret for /api/enroll
network               = "100.64.0.0/10"      # Mesh network CIDR
cert_file             = ""                   # mTLS server cert for gRPC
key_file              = ""                   # mTLS private key for gRPC
ca_file               = ""                   # CA cert used to verify worker mTLS clients
auto_tls              = false                # Auto-generate self-signed certs on first boot

# Modules (most are on by default; disable any to remove its routes + services)
[modules]
monitoring   = true
alerting     = true
workflows    = true   # n8n integration
knowledge    = true   # Brain / RAG
ai           = true   # Tiered router (Ollama + Claude)
security     = true   # Wazuh + scanners
discord      = false  # Requires bot token
backups      = true
catalog      = true
terminal     = true
topology     = true
planner      = true
proxy_routes = true
webhooks     = true

# AI: tiered router
[ai]
ollama_endpoints           = ["http://cb-ollama:11434"]
default_model              = "llama3:8b"          # Default Ollama chat model
embed_model                = "nomic-embed-text"   # Auto-pulled by cb-ollama on boot for RAG
claude_enabled             = true                  # Enable Claude Code sessions (needs claude_api_key)
claude_api_key             = ""                    # Anthropic API key, passed to the agent service
claude_model               = ""                    # Empty = use Anthropic default
claude_max_turns           = 10
claude_max_budget          = 0                     # Per-task USD cap; 0 = unlimited
claude_permission          = "default"             # "default", "acceptEdits", "bypass"
claude_effort              = "high"                # "low", "medium", "high", "max"
claude_mcp_config          = ""                    # Path to MCP config JSON (optional)
claude_work_dir            = ""                    # Working directory for Claude Code sessions
claude_allowed_tools       = []                    # Pre-approved tool names (skips per-call approval)
background_agents_enabled  = false                 # Seed scheduled background AI agents on startup

# Metrics: Prometheus + Alertmanager (cb-prometheus + cb-alertmanager)
[metrics]
prometheus_url    = "http://cb-prometheus:9090"
alertmanager_url  = "http://cb-alertmanager:9093"
discovery_path    = "/data/prometheus_sd.json"  # file_sd_configs target file written by CB
retention         = "30d"
scrape_interval   = "15s"

# Knowledge / Brain
[knowledge]
vault_path = "/data/vault"

# n8n integration
[n8n]
url     = "http://cb-n8n:5678"
api_key = ""  # Auto-bootstrapped on first deploy and persisted in CB settings

# Wazuh
[wazuh]
url                      = "https://cb-wazuh:55000"
user                     = ""
password                 = ""
auto_deploy_wazuh_agent  = true   # Push wazuh-agent stack to every fused host
wazuh_agent_image        = ""     # Optional override for the agent image

# Discord (off by default)
[discord]
enabled              = false
token                = ""    # Bot token from Developer Portal
guild_id             = ""    # Discord server ID
alerts_channel       = ""    # Channel ID for bot-routed alert posts (optional)
status_channel       = ""    # Channel ID for the auto-updating status dashboard (optional)
allowed_roles        = []    # Roles whitelisted to invoke any slash command
allowed_users        = []    # Specific user IDs whitelisted (in addition to roles)
admin_role_id        = ""    # Role required for destructive commands (deploy, exec, etc.)
sanitize_outbound    = true  # Scrub IPs/secrets/internal URLs before sending to Discord
allowed_ip_ranges    = []    # CIDRs allowed to appear in outbound messages (rest get scrubbed)
rate_limit_per_min   = 10    # Per-user slash-command rate limit

# Logging
[logging]
level  = "info"  # "debug", "info", "warn", "error"
format = "text"  # "text" or "json"

# Sub-container fleet (CB owns lifecycle via Portainer API)
[subcontainers]
enabled        = true
network        = "contextbay-internal"
check_interval = "30s"

# Per-service tuning. Each entry shares the same shape:
#   enabled: deploy this stack at all
#   image:   container image (without tag)
#   version: tag pinned by CB (override for air-gapped or custom builds)
#   memory_mb: memory limit applied to the stack's main container
#   env:     extra env vars merged into the stack's compose
[subcontainers.portainer]   # version pinned to portainer/portainer-ce 2.25.x
[subcontainers.headscale]   # version pinned to headscale/headscale 0.23.x
[subcontainers.prometheus]  # version pinned to prom/prometheus v3.x
[subcontainers.alertmanager] # version pinned to prom/alertmanager v0.28.x
[subcontainers.grafana]     # version pinned to grafana/grafana 11.x
[subcontainers.n8n]         # version pinned to n8nio/n8n 1.82.x
[subcontainers.wazuh]       # version pinned to wazuh/wazuh-manager 4.12.x
[subcontainers.loki]        # version pinned to grafana/loki 3.x
[subcontainers.tempo]       # version pinned to grafana/tempo 2.x
[subcontainers.pyroscope]   # version pinned to grafana/pyroscope 1.x
[subcontainers.ollama]      # version pinned to ollama/ollama 0.21.x

# Logs / traces / profiles (push from CB master + workers into the cb-* stacks)
[loki]
url            = "http://cb-loki:3100"
batch_size     = 100
flush_interval = "1s"

[tracing]
enabled     = true
tempo_url   = "http://cb-tempo:4318"
sample_rate = 1.0

[profiling]
enabled         = true
pyroscope_url   = "http://cb-pyroscope:4040"
mutex_profiling = true
block_profiling = false

Worker Configuration

Workers are normally configured by the install snippet copied from the Hosts page in the master UI — you don't hand-write a worker TOML. The fields below are what that snippet sets behind the scenes.

master_addr   = "<MASTER_MESH_IP>:7481"  # Master gRPC over the Headscale mesh
node_name     = ""                       # Defaults to the system hostname
shared_secret = ""                       # Must match master's [mesh].shared_secret
docker_host   = "unix:///var/run/docker.sock"
metrics_port  = 9100                     # Local /metrics, scraped by cb-prometheus

[mesh]
mode = "headscale"                       # Always headscale for multi-node

[logging]
level  = "info"
format = "text"

[loki]
url            = "http://cb-loki:3100"
batch_size     = 100
flush_interval = "1s"

[tracing]
enabled     = true
tempo_url   = "http://cb-tempo:4318"
sample_rate = 1.0

[profiling]
enabled         = true
pyroscope_url   = "http://cb-pyroscope:4040"
mutex_profiling = true
block_profiling = false

<MASTER_MESH_IP> is the master's address on the Headscale mesh, in the 100.64.0.0/10 CGNAT range — the master is always assigned 100.64.0.1, so the shape is the same on every install.

Environment Variables

Every TOML field can be overridden with an environment variable using the pattern CONTEXTBAY_<SECTION>_<FIELD>.
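The mapping is mechanical: section name, underscore, field name, all upper-cased. A few illustrative overrides (the values here are examples, not recommendations):

```shell
# [server].port        -> CONTEXTBAY_SERVER_PORT
export CONTEXTBAY_SERVER_PORT=7480
# [mesh].shared_secret -> CONTEXTBAY_MESH_SHARED_SECRET
export CONTEXTBAY_MESH_SHARED_SECRET="change-me"
# [logging].level      -> CONTEXTBAY_LOGGING_LEVEL
export CONTEXTBAY_LOGGING_LEVEL=debug
echo "$CONTEXTBAY_SERVER_PORT $CONTEXTBAY_LOGGING_LEVEL"
```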

Master Variables

| Variable                             | Default                | Description                                              |
|--------------------------------------|------------------------|----------------------------------------------------------|
| CONTEXTBAY_SERVER_HOST               | 0.0.0.0                | HTTP listen address                                      |
| CONTEXTBAY_SERVER_PORT               | 7480                   | HTTP/WebSocket port                                      |
| CONTEXTBAY_SERVER_GRPC_PORT          | 7481                   | gRPC port (bound to mesh)                                |
| CONTEXTBAY_SERVER_ADVERTISE_ADDR     | (auto)                 | LAN IP shown to workers                                  |
| CONTEXTBAY_DATABASE_DRIVER           | sqlite                 | Database driver                                          |
| CONTEXTBAY_DATABASE_PATH             | /data/contextbay.db    | SQLite file path                                         |
| CONTEXTBAY_DATABASE_URL              | (empty)                | PostgreSQL URL                                           |
| CONTEXTBAY_AUTH_JWT_SECRET           | (auto)                 | JWT signing secret                                       |
| CONTEXTBAY_MESH_MODE                 | headscale              | Mesh networking mode                                     |
| CONTEXTBAY_MESH_SHARED_SECRET        | (empty)                | Worker enroll secret                                     |
| CONTEXTBAY_MESH_HEADSCALE_PUBLIC_URL | (derived)              | Worker-facing Headscale URL                              |
| CONTEXTBAY_MESH_MASTER_PUBLIC_URL    | (derived)              | Worker-facing master URL (set explicitly behind a reverse proxy) |
| CONTEXTBAY_AI_OLLAMA_ENDPOINTS       | http://cb-ollama:11434 | Ollama endpoints (comma-separated)                       |
| CONTEXTBAY_AI_DEFAULT_MODEL          | llama3:8b              | Default Ollama chat model                                |
| CONTEXTBAY_AI_EMBED_MODEL            | nomic-embed-text       | Embedding model for RAG (auto-pulled)                    |
| CONTEXTBAY_AI_CLAUDE_API_KEY         | (empty)                | Anthropic API key for Claude sessions                    |
| CONTEXTBAY_N8N_API_KEY               | (auto)                 | Auto-bootstrapped on first deploy                        |
| CONTEXTBAY_DISCORD_TOKEN             | (empty)                | Discord bot token                                        |
| CONTEXTBAY_DISCORD_GUILD_ID          | (empty)                | Discord server ID                                        |
| CONTEXTBAY_LOGGING_LEVEL             | info                   | Log level                                                |
| CONTEXTBAY_LOGGING_FORMAT            | text                   | Log format                                               |

Worker Variables

Every worker env var uses the CONTEXTBAY_* prefix. The first five below are read by the install snippet on first boot — the remaining ones override TOML fields once a worker config exists.

| Variable                   | Default                     | Description                                                                              |
|----------------------------|-----------------------------|------------------------------------------------------------------------------------------|
| CONTEXTBAY_MASTER_URL      | (required)                  | LAN HTTP URL to the master for the one-time /api/enroll call (e.g. http://<MASTER_LAN_IP>:7480) |
| CONTEXTBAY_MASTER_ADDR     | (required)                  | Mesh gRPC address used after enroll (e.g. <MASTER_MESH_IP>:7481)                         |
| CONTEXTBAY_NODE_NAME       | (required)                  | Host record ID; must match the host you registered in the UI                             |
| CONTEXTBAY_ENROLL_TOKEN    | (required)                  | One-time pairing token minted by the Hosts page (~24h TTL, single-use)                   |
| CONTEXTBAY_SHARED_SECRET   | (required)                  | Cluster-wide shared secret matching the master's [mesh].shared_secret                    |
| CONTEXTBAY_TSNET_STATE_DIR | /var/lib/contextbay/tsnet   | Where the worker's mesh identity is persisted; non-empty means already enrolled          |
| CONTEXTBAY_DOCKER_HOST     | unix:///var/run/docker.sock | Docker socket path                                                                       |
| CONTEXTBAY_METRICS_PORT    | 9100                        | Local /metrics port for cb-prometheus to scrape                                          |
| CONTEXTBAY_MESH_MODE       | headscale                   | Always headscale for multi-node                                                          |
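Concretely, a first-boot worker environment might look like the following sketch. All values are illustrative; the real ones come from the Hosts-page install snippet (the 100.64.0.1 master address follows from the fixed mesh assignment described below):

```shell
export CONTEXTBAY_MASTER_URL="http://192.168.1.10:7480"   # hypothetical LAN IP of the master
export CONTEXTBAY_MASTER_ADDR="100.64.0.1:7481"           # master's mesh gRPC address
export CONTEXTBAY_NODE_NAME="$(hostname)"                 # must match the host registered in the UI
export CONTEXTBAY_ENROLL_TOKEN="REPLACE-WITH-TOKEN"       # single-use token from the Hosts page
export CONTEXTBAY_SHARED_SECRET="change-me"               # must match the master's [mesh].shared_secret
echo "$CONTEXTBAY_MASTER_ADDR"
```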

Docker secret loading via _FILE

Every CONTEXTBAY_* env var has a sibling CONTEXTBAY_*_FILE variant. If the _FILE variant is set, CB reads the file contents (trimmed) as the value; the file-based value takes precedence over the inline env var. This matches the Docker secret idiom: mount a Docker secret at /run/secrets/contextbay_shared_secret and set CONTEXTBAY_MESH_SHARED_SECRET_FILE=/run/secrets/contextbay_shared_secret instead of injecting the value into the environment.
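A minimal sketch of the idiom, assuming the secret lives in a local file (the /tmp path here is illustrative; in production you would mount a Docker secret under /run/secrets instead):

```shell
# Keep the secret out of the environment: write it to a file and point CB at it.
mkdir -p /tmp/secrets
printf 'change-me' > /tmp/secrets/contextbay_shared_secret
export CONTEXTBAY_MESH_SHARED_SECRET_FILE=/tmp/secrets/contextbay_shared_secret

# CB reads the trimmed file contents as the value of CONTEXTBAY_MESH_SHARED_SECRET:
cat "$CONTEXTBAY_MESH_SHARED_SECRET_FILE"
```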

Type Conversion

  • Booleans: true, 1, yes are truthy
  • Integers: Parsed as decimal
  • String lists: Comma-separated
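Applied to the override pattern above, value parsing looks like this (variable names are derived from the pattern; values are examples only):

```shell
# [tracing].enabled -> boolean: "true", "1", and "yes" all parse as true
export CONTEXTBAY_TRACING_ENABLED=yes
# [metrics_port] on a worker -> integer, parsed as decimal
export CONTEXTBAY_METRICS_PORT=9100
# [ai].ollama_endpoints -> string list, comma-separated
export CONTEXTBAY_AI_OLLAMA_ENDPOINTS="http://cb-ollama:11434,http://gpu-box:11434"
echo "$CONTEXTBAY_AI_OLLAMA_ENDPOINTS"
```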

Database

SQLite (default)

SQLite with WAL mode is the default. No external dependencies required. Migrations run automatically on startup.

[database]
driver = "sqlite"
path   = "/data/contextbay.db"

PostgreSQL

For larger deployments or multi-instance setups:

[database]
driver = "postgres"
url    = "postgres://contextbay:CHANGE-ME@localhost:5432/contextbay?sslmode=disable"
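The same URL can also be supplied via the CONTEXTBAY_DATABASE_URL override; assembling it from parts keeps the credential handling explicit (host, user, and password below are placeholders, and sslmode=disable is only sensible for local development):

```shell
PG_USER=contextbay
PG_PASS='CHANGE-ME'
PG_HOST=localhost
PG_DB=contextbay
export CONTEXTBAY_DATABASE_URL="postgres://${PG_USER}:${PG_PASS}@${PG_HOST}:5432/${PG_DB}?sslmode=disable"
echo "$CONTEXTBAY_DATABASE_URL"
```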

Volume Dependencies

Three volumes form a chain: wiping any one of them in isolation leaves the others holding stale credentials and crash-loops the master. To reset to a fresh install, wipe all three together.

| Volume                     | Holds                                                              |
|----------------------------|--------------------------------------------------------------------|
| contextbay-data            | CB database, Portainer JWT, n8n encryption key, generated configs  |
| contextbay-portainer-data  | Portainer BoltDB and admin password                                |
| cb-n8n_contextbay-n8n-data | n8n SQLite + credentials encrypted with the persisted key          |

Validation

The master validates configuration on startup. Invalid configurations cause an immediate exit with a descriptive error:

  • Ports must be in range 1-65535 and must differ
  • database.driver must be sqlite or postgres
  • database.url is required when driver is postgres
  • mesh.mode must be headscale or direct
  • Worker requires master_addr