[SELF-HOSTED]

One binary to manage them all

A self-hosted cluster platform for monitoring, automation, security, and AI. One binary per node. Works on a single machine — scales when you add more.

monitoring · workflows · security · AI development · multi-node mesh

See what's inside ↓

Four dashboards. Four logins. Four too many.

Portainer (containers) · Grafana (monitoring) · n8n (workflows) · Wazuh (security)

vs. one contextbay master: containers · monitoring · workflows · security · AI · knowledge

One process. One config file. No separate databases, reverse proxies, or message queues.

Everything you need, nothing you don't

Modular by design. Enable what you need, disable the rest.

Container Management

Full Docker lifecycle. Create, start, stop, remove. Real-time stats.

Monitoring Dashboards

Embedded Prometheus and Grafana as managed sub-containers. Custom ECharts dashboards. Real-time WebSocket metrics. PromQL query proxy.

Workflow Automation

Embedded n8n as a sub-container with 500+ built-in integrations. ContextBay auto-bootstraps the n8n admin and API key on first deploy, seeds 36 workflows, and exposes the typed event bus to n8n via webhooks.
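The event-bus-to-webhook bridge boils down to a small payload contract: each typed event is serialized to JSON and POSTed to an n8n webhook URL. A hypothetical sketch of what such a payload might look like (the event names and fields are illustrative assumptions, not ContextBay's actual schema):

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class Event:
    """A typed event as it might cross the bus to an n8n webhook (illustrative)."""
    kind: str      # e.g. "container.oom_killed"
    node: str      # originating node name
    payload: dict  # event-specific fields
    ts: str = ""

    def to_webhook_body(self) -> str:
        # n8n webhook nodes accept arbitrary JSON bodies; stamping the
        # timestamp at serialization time keeps workers clock-agnostic.
        if not self.ts:
            self.ts = datetime.now(timezone.utc).isoformat()
        return json.dumps(asdict(self))

evt = Event(kind="container.oom_killed", node="worker-1",
            payload={"container": "crawl4ai", "exit_code": 137})
body = evt.to_webhook_body()
```

A workflow subscribed to that webhook could then branch on `kind` to restart the container, page someone, or file a Planner issue.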

Security Monitor

Embedded Wazuh manager as a sub-container. Command Center dashboard, vulnerability scanning, compliance checks, secret scanning, file integrity monitoring, and auto-remediation via Claude Code.

Tiered AI: Claude + Ollama

Claude Code for reasoning, planning, and tool-use sessions. Local cb-ollama for cheap classification and RAG embeddings (auto-pulls nomic-embed-text on boot). One router picks the right tier per request.
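Tier selection comes down to routing each request class to the cheapest model that can handle it. A minimal sketch of the idea (the tier names and routing rules here are illustrative assumptions, not the actual router):

```python
# Hypothetical two-tier router: local model for bulk work,
# Claude for anything that needs reasoning or tool use.
LOCAL = "cb-ollama"      # cheap: classification, tagging, RAG embeddings
CLAUDE = "claude-code"   # reasoning, planning, tool-use sessions

def pick_tier(task: str) -> str:
    """Route a request class to an AI tier (illustrative rules only)."""
    cheap = {"classify", "embed", "tag"}
    return LOCAL if task in cheap else CLAUDE
```

The payoff is that high-volume, low-stakes calls (embedding every wiki page, tagging log lines) never touch the expensive tier.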

Headscale Mesh

Self-hosted WireGuard control plane (cb-headscale) deployed automatically. Workers enroll once over LAN with a one-time token, then every heartbeat and gRPC call rides the encrypted mesh.

Knowledge Base

Markdown wiki with bidirectional [[links]], tag system, and interactive graph visualization powered by Cytoscape.js.
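Bidirectional links reduce to two steps: extract every [[target]] from each page, then invert the edge map so each page also knows its backlinks. A minimal sketch of the idea (not ContextBay's implementation):

```python
import re
from collections import defaultdict

# Matches [[target]] and the common [[target|label]] variant.
WIKILINK = re.compile(r"\[\[([^\]|]+)(?:\|[^\]]*)?\]\]")

def link_graph(pages: dict[str, str]) -> dict[str, set[str]]:
    """Map each page name to the set of pages it links to."""
    return {name: set(WIKILINK.findall(body)) for name, body in pages.items()}

def backlinks(graph: dict[str, set[str]]) -> dict[str, set[str]]:
    """Invert the graph so each page knows who links to it."""
    back: dict[str, set[str]] = defaultdict(set)
    for src, targets in graph.items():
        for dst in targets:
            back[dst].add(src)
    return back

pages = {
    "caddy": "Reverse proxy in front of [[grafana]] and [[n8n]].",
    "grafana": "Dashboards; datasource is [[prometheus]].",
}
fwd = link_graph(pages)
back = backlinks(fwd)
```

The forward and backward maps together are exactly the node/edge lists a graph library like Cytoscape.js renders.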

Project Planner

Built-in Kanban, Gantt, sprints, velocity, and burndown — plus a Claude-context bundle so AI sessions inherit the project's full state. Issues live in CB, not GitHub.

Discord Bot

Slash commands for container management, node monitoring, alerts. Interactive buttons, confirmation dialogs, and voice channel notifications.

App Catalog

One-click Docker Compose templates. Deploy Grafana, Prometheus, n8n, Ollama, Nextcloud, and more to any worker node.

Discovery Engine

Auto-detect services, containers, and devices on your network. Zero manual configuration.

Full Observability Stack

Loki for logs, Tempo for traces, Pyroscope for continuous profiling, deployed as cb-* sub-containers, with the master and every worker shipping to them out of the box.

MCP Tools for Claude

First-class MCP server registry so Claude Code can inspect nodes, query metrics, manage containers via Portainer, search the Brain, file Planner issues, and trigger n8n workflows — all from within an AI session.

Web Terminal

Browser-based shell access to any node. No SSH client needed.

Modular, sub-container-driven, fully scriptable. Toggle modules in the config to shape the platform around exactly what you run.
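Module toggling might look something like this in the single config file (a hypothetical sketch; the file name and keys are assumptions, not the real schema):

```yaml
# contextbay.yaml — illustrative only
modules:
  containers: true
  monitoring: true
  workflows: true     # embedded n8n sub-container
  security: false     # skip the Wazuh manager on small nodes
  ai: true
  knowledge: true
  discord_bot: false
```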

One dashboard for everything

Real-time metrics, container status, and alerts across all your nodes. Built-in dashboards with time-series charts and host overview.

[Dashboard preview: fleet-wide container counts (running / stopped / errors) and average CPU; per-node CPU and memory gauges for master-node and workers 1-3; CPU and memory time-series charts; a filterable container table (name, host, status, CPU, memory, uptime) listing portainer, prometheus, n8n, ollama, crawl4ai, and caddy; and an activity feed surfacing events such as OOM kills (exit code 137), memory-threshold alerts, image updates, model pulls, and nodes going offline.]

AI sits in the middle — Claude Code is the default brain. Pluggable architecture means any AI can sit here.

Master + Worker

One Go binary on the master, monitoring-only Sentinel agents on workers. All node ↔ master traffic rides an encrypted Headscale mesh — no exposed gRPC ports on the LAN.

[Architecture diagram: the ContextBay master (Go API + Next.js UI) hosts the API server, Portainer control, sub-container fleet, Knowledge/Brain, tiered AI router, security monitor, project planner, n8n bridge, event bus, and MCP registry. Workers connect over gRPC via Headscale and run the Sentinel agent, metrics collector, census + Docker events, Portainer Edge, Wazuh agent, and log/trace shippers. Stack: Go, gRPC, SQLite, Next.js, Portainer, Prometheus, Grafana, Headscale, Ollama.]

Auto-bootstraps the cb-* fleet on first boot: Portainer, Headscale, Prometheus, Grafana, Alertmanager, n8n, Wazuh, Loki, Tempo, Pyroscope, and Ollama.

So what do you actually get?

Containers, dashboards, workflows, security, AI — six tools replaced by one platform where everything talks to everything else.

Works on a single machine — no cluster required. Scale later with worker nodes.
Powered by Claude Code

The Kraken — AI that actually operates

Claude Code is the brain. Every service, metric, and container is exposed via APIs and MCPs — it sees your entire stack live. Tell it what you want and it builds the workflow, wires the triggers, deploys it. When something breaks at 3am, it fixes it before you wake up. Sub-agents run in the background keeping docs, configs, and plans in sync so the AI always knows what's where.

Full-stack monitoring

Zero-config observability

Deploy the binary, see everything. CPU, memory, disk, network, containers, services — real-time dashboards appear automatically. No Grafana setup, no Prometheus YAML, no scrape configs. Your entire stack, monitored in seconds.

Docker management built in

Start, stop, deploy, inspect containers and compose stacks from your browser. View logs, check resource stats, manage images. Everything Portainer does — without running Portainer.

Visual workflow automation

Automate everything

Your Plex server goes down at 3am — ContextBay auto-restarts it and sends you a notification. Build health checks, scheduled backups, deployment pipelines, and alert routing with a visual DAG editor. If-this-then-that for your infrastructure.

Security that's always on

Someone SSHs into your server from a new IP — you know about it immediately. Intrusion detection, file integrity monitoring, and log analysis run from day one. Get alerts before things break. Know when something changes that shouldn't.

Your homelab, documented

Every service, every config, every connection — searchable in a built-in markdown wiki with [[wikilinks]] and an interactive graph showing how your services connect. Documentation lives where your infrastructure lives.

Run the master on your server, NAS, or Raspberry Pi. Add workers when you're ready to go multi-node.

Get started in 30 seconds


Install the master, then add workers to scale across your homelab.

docker run -d \
  --name contextbay \
  -p 7480:7480 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v contextbay-data:/data \
  ghcr.io/contextbay/contextbay-master:latest

# On first boot the master starts cb-portainer and then deploys
# the cb-* sub-container fleet (Headscale, Prometheus, Grafana,
# Alertmanager, n8n, Wazuh, Loki, Tempo, Pyroscope, Ollama)
# through the Portainer API.
Then add workers:
# 1. Add the host from the master UI (Hosts -> Add Host).
#    The master mints a one-time enroll token and renders an
#    install snippet pre-filled with the CONTEXTBAY_* env vars
#    below. Paste it on the new node:
docker run -d --name contextbay-worker \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v contextbay-worker-data:/var/lib/contextbay \
  -e CONTEXTBAY_MASTER_URL=http://<MASTER_LAN_IP>:7480 \
  -e CONTEXTBAY_MASTER_ADDR=<MASTER_MESH_IP>:7481 \
  -e CONTEXTBAY_NODE_NAME=<worker-1> \
  -e CONTEXTBAY_ENROLL_TOKEN=<one-time-token-from-ui> \
  -e CONTEXTBAY_SHARED_SECRET=<shared-secret-from-ui> \
  ghcr.io/contextbay/contextbay-worker:latest

# 2. The worker calls MASTER_URL/api/enroll once over LAN, joins
#    the Headscale mesh, then switches to MASTER_ADDR (the mesh
#    gRPC address). From then on all traffic flows over the
#    encrypted mesh (<MESH_IP>:7481) — the LAN URL is never
#    used again.

Self-hosted homelab management. Free and open source. Licensed under MIT.