# Backups
ContextBay's backup module schedules and runs volume-level backups using a temporary container with the Docker volumes mounted in. Records are encrypted (when enabled), checksummed, and retrievable through the API.
## What gets backed up
A backup job specifies a list of Docker volumes on a target node. For the master, the typical job covers:
- `contextbay-data` — the SQLite DB (CB metadata, planner, knowledge index), the generated configs (prometheus.yml, alertmanager.yml, rules), and the JWT secret.
- `contextbay-portainer-data` — Portainer's BoltDB.
- `cb-n8n_contextbay-n8n-data` — n8n SQLite + encrypted credentials.
- `cb-wazuh_*` — Wazuh agent state, rule sets, decoders.
- Knowledge vault — physically lives inside `contextbay-data`, so it's captured automatically.
Each job has an opt-in encryption flag. When set, the produced tarball is AES-256-GCM encrypted with a key persisted in CB settings (`backup_encryption_key`). SHA-256 checksums are recorded for every archive.
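To illustrate the checksum half of this, here is a minimal sketch of streaming an archive through SHA-256 (the `sha256_of` helper name is mine, not part of CB):

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 in 1 MiB chunks, so even
    multi-GB archives never need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

The resulting hex digest is what a record's `checksum` field would hold, and what a later verify pass recomputes.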
## Creating a backup job
`POST /api/backup-jobs` (operator role). Fields:
| Field | Type | Notes |
|---|---|---|
| `name` | string | Display name (e.g. "Daily master"). |
| `node_id` | string | Node where the backup runs. The temporary backup container is started on this node. |
| `volumes` | string[] | Docker volume names to capture (e.g. `["contextbay-data","contextbay-portainer-data"]`). |
| `schedule` | string | Cron expression (e.g. `0 3 * * *` for nightly 03:00). |
| `retention_count` | integer | Keep at most N records per job; older records are pruned after a successful run. |
| `dest_path` | string | Path inside the master's data volume where archives land. Defaults to `/data/backups/<job_id>/`. |
| `encrypted` | bool | Encrypt archives with AES-256-GCM using the persisted key. |
| `enabled` | bool | Disabled jobs don't fire on schedule but can still be run manually. |
```bash
curl -sS -X POST http://localhost:7480/api/backup-jobs \
  -H "X-API-Key: cb_..." \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Daily master core volumes",
    "node_id": "<master-node-id>",
    "volumes": [
      "contextbay-data",
      "contextbay-portainer-data",
      "cb-n8n_contextbay-n8n-data"
    ],
    "schedule": "0 3 * * *",
    "retention_count": 14,
    "encrypted": true,
    "enabled": true
  }'
```

Trigger an immediate run with `POST /api/backup-jobs/{id}/run` — useful before risky maintenance.
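The `retention_count` pruning could look roughly like the sketch below, assuming one archive file per run inside the job's `dest_path` (the `prune` helper is illustrative, not CB's actual implementation):

```python
from pathlib import Path


def prune(dest: Path, retention_count: int) -> list:
    """Keep the newest `retention_count` archives in a job's dest_path
    and delete the rest, oldest first. Returns the deleted paths."""
    # Newest first by modification time; the glob also catches
    # encrypted variants like *.tar.gz.enc.
    archives = sorted(
        dest.glob("*.tar.gz*"),
        key=lambda p: p.stat().st_mtime,
        reverse=True,
    )
    pruned = archives[retention_count:]
    for p in pruned:
        p.unlink()
    return pruned
```

Pruning only after a successful run (as the table above specifies) matters: if the latest run failed, the oldest good archive is still within the retention window.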
## Listing + history
Every job's execution history lives under `/api/backup-jobs/{id}/records`. Each record has:
- `status`: `running`, `success`, or `failed`.
- `size_bytes`: archive size on disk.
- `checksum`: SHA-256 hex of the archive (always recorded, regardless of encryption).
- `file_path`: location inside the master's data volume.
- `encrypted`: whether the archive was AES-encrypted.
- `started_at` / `finished_at`, plus an `error` field if status is `failed`.
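When scripting against this endpoint, a common need is the most recent successful record, e.g. as a restore candidate. A small sketch (the `latest_success` helper is hypothetical, and it assumes `started_at` values are ISO-8601 strings, which sort correctly as plain text):

```python
def latest_success(records):
    """From a job's record list, pick the most recent successful run,
    or None if the job has never succeeded."""
    ok = [r for r in records if r.get("status") == "success"]
    return max(ok, key=lambda r: r["started_at"], default=None)
```

Filtering on `status` first matters: a `running` or `failed` record may be the newest entry but is never safe to restore from.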
```bash
curl -sS http://localhost:7480/api/backup-jobs/<job_id>/records \
  -H "X-API-Key: cb_..." | jq '.[] | {id, status, size_bytes, started_at}'
```

## Restoring from a record
Restore is destructive — it overwrites the volumes captured by the original job. The recommended sequence:
- Stop the master (`docker stop contextbay`) so nothing is writing to the target volumes.
- Wipe the volumes you're about to restore to (so the tarball lands on a clean slate).
- Start the master again — restore runs from inside the master container, since it needs Docker access to mount the target volumes.
- Call `POST /api/backup-jobs/{id}/records/{rid}/restore`.
- `make deploy` to recreate the master cleanly with the restored volumes.
```bash
curl -sS -X POST \
  http://localhost:7480/api/backup-jobs/<job_id>/records/<rid>/restore \
  -H "X-API-Key: cb_..."
```

For partial restores (only one volume out of several), the simplest approach today is to download the archive (next section), un-tar it manually inside a temporary container, and rsync the subset you want into the live volume.
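The un-tar-a-subset step can be sketched with Python's `tarfile` (the `extract_subset` helper and the member paths are illustrative assumptions, not CB tooling):

```python
import tarfile


def extract_subset(archive: str, prefix: str, dest: str) -> list:
    """Extract only the members under `prefix` (e.g. one volume's
    directory) from a backup tarball into `dest`. Returns the
    extracted member names."""
    with tarfile.open(archive, "r:*") as tar:
        members = [m for m in tar.getmembers() if m.name.startswith(prefix)]
        tar.extractall(dest, members=members)
        return [m.name for m in members]
```

From there, rsync the extracted directory into the live volume as described above. Note this assumes the tarball stores each volume under its own top-level directory.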
## Verifying a record
Records can rot — disk corruption, partial encryption failure, truncated writes. The verify endpoint re-reads the archive and re-computes the SHA-256 checksum, comparing it against what was stored at create time:
```bash
curl -sS -X POST \
  http://localhost:7480/api/backup-jobs/<job_id>/records/<rid>/verify \
  -H "X-API-Key: cb_..."
# Returns:
# { "ok": true, "checksum": "<sha256>" }        # matches
# { "ok": false, "error": "checksum mismatch" } # corrupt
```

Run a verify pass on a regular cadence (a scheduled n8n workflow against this endpoint is the easiest way) so you find out about a bad archive before you need to restore it.
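Conceptually, the verify endpoint does something like the following sketch (`verify_record` is my name for it; the return shape mirrors the documented response):

```python
import hashlib


def verify_record(file_path: str, stored_checksum: str) -> dict:
    """Re-read the archive, recompute SHA-256 in chunks, and compare
    against the checksum recorded when the backup was created."""
    h = hashlib.sha256()
    with open(file_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    actual = h.hexdigest()
    if actual == stored_checksum:
        return {"ok": True, "checksum": actual}
    return {"ok": False, "error": "checksum mismatch"}
```

Because the comparison is against the checksum stored at create time, verification works the same for encrypted and plaintext archives; no decryption key is needed.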
## Downloading for offsite copies
Backups stored inside the master's data volume protect against most failure modes, but not against losing the host itself. For host-loss disaster recovery, pull the archive offsite:
```bash
curl -sS -OJ \
  http://localhost:7480/api/backup-jobs/<job_id>/records/<rid>/download \
  -H "X-API-Key: cb_..."
```

The response uses `Content-Disposition: attachment` with a filename derived from the record. Pair this with an n8n workflow that posts to S3/Backblaze/whatever — schedule it right after the backup job runs.
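If you script the offsite copy yourself instead of relying on curl's `-OJ`, the attachment filename can be recovered from the header with the stdlib email parser (a sketch; `filename_from_disposition` is my name, and the example filename is made up):

```python
from email.message import Message


def filename_from_disposition(header: str):
    """Extract the filename parameter from a Content-Disposition
    header value, or None if no filename is present."""
    msg = Message()
    msg["Content-Disposition"] = header
    return msg.get_filename()
```

Reusing the email parser avoids hand-rolled regexes and handles quoting in the filename parameter correctly.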
## Backup API summary
| Method | Path | Purpose |
|---|---|---|
| GET | /api/backup-jobs | List all jobs |
| POST | /api/backup-jobs | Create job |
| GET | /api/backup-jobs/{id} | Get job |
| PUT | /api/backup-jobs/{id} | Update job (schedule, volumes, retention) |
| DELETE | /api/backup-jobs/{id} | Delete job |
| POST | /api/backup-jobs/{id}/run | Trigger immediate run |
| GET | /api/backup-jobs/{id}/records | List execution records |
| POST | /api/backup-jobs/{id}/records/{rid}/restore | Restore from a record |
| GET | /api/backup-jobs/{id}/records/{rid}/download | Download the archive |
| POST | /api/backup-jobs/{id}/records/{rid}/verify | Verify the archive's checksum |
## Related
- Operations — volume protocol (why wiping one volume corrupts the others) and auth recovery flows.
- API Reference — full request/response shapes for every backup endpoint.

