
# Docker Deployment

CoreCube ships as a Docker image with three compose variants depending on your requirements.

## Deployment modes

| Mode | Use case | Databases | External API calls |
|------|----------|-----------|--------------------|
| Minimal | Evaluation, small teams | SQLite FTS only | LLM API required |
| Production | Teams and organizations | PostgreSQL + pgvector | LLM API required |
| Fully local | Air-gapped, data sovereignty | PostgreSQL + pgvector | None required |

:::info SQLite vs PostgreSQL
SQLite mode provides basic full-text search for evaluation and small teams (< 10K chunks). For production retrieval quality — including vector similarity search and hybrid ranking — use PostgreSQL with pgvector.
:::

## Production deployment

The recommended setup for teams: CoreCube + PostgreSQL/pgvector.

```yaml
services:
  corecube:
    image: registry.arantic.cloud/corecube/corecube:latest
    ports:
      - '7400:7400'
    environment:
      CUBE_ADMIN_EMAIL: admin@example.com
      CUBE_ADMIN_PASSWORD: changeme123
      PGVECTOR_URL: postgresql://corecube:changeme123@pgvector:5432/corecube
      ENCRYPTION_KEY: your-256-bit-encryption-key
    volumes:
      - corecube-data:/data
    depends_on:
      pgvector:
        condition: service_healthy
    restart: unless-stopped

  pgvector:
    image: pgvector/pgvector:pg16
    environment:
      POSTGRES_USER: corecube
      POSTGRES_PASSWORD: changeme123
      POSTGRES_DB: corecube
    volumes:
      - pgvector-data:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U corecube']
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped

volumes:
  corecube-data:
  pgvector-data:
```

Start:

```bash
docker compose up -d
```

Stop:

```bash
docker compose down
```

## Fully local deployment

Zero external API calls. Adds the CoreCube Inference sidecar for local embedding, reranking, and OCR.

```bash
docker compose --profile inference up -d
```

With a self-hosted LLM (e.g., Ollama), configure it as a custom LLM provider in the Admin Console pointing to `http://ollama:11434/v1`.
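One way to wire this up (a sketch under our assumptions, not an official recipe; the service name, image tag, and volume below are illustrative) is to run Ollama as a sibling compose service so CoreCube can reach it by service name:

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    volumes:
      - ollama-data:/root/.ollama   # persist pulled models across restarts
    restart: unless-stopped

volumes:
  ollama-data:
```

Services in the same compose file share a default network, so `http://ollama:11434/v1` resolves from inside the CoreCube container without publishing the port to the host.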

### NVIDIA GPU (CUDA)

For GPU-accelerated inference on Linux, enable the CUDA device reservation in your compose file:

```yaml
corecube-inference:
  build:
    args:
      DEVICE: cuda
  environment:
    DEVICE: cuda
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            count: 1
            capabilities: [gpu]
```

### Apple Silicon (MPS)

On Apple Silicon Macs, Docker cannot expose Apple Metal/MPS to Linux containers. Run inference natively:

```bash
make deploy-local-mps
```

This starts the inference servers as native macOS Python processes and runs the rest of the stack in Docker.

## Environment variables

| Variable | Default | Description |
|----------|---------|-------------|
| `PORT` | `7400` | Server port |
| `CUBE_ADMIN_EMAIL` | `admin@example.com` | Initial admin email |
| `CUBE_ADMIN_PASSWORD` | `changeme123` | Initial admin password |
| `PGVECTOR_URL` | (none) | PostgreSQL connection string (required for pgvector mode) |
| `ENCRYPTION_KEY` | auto-generated | AES-256-GCM key for credential encryption |
| `DATA_DIR` | `/data` | Persistent data directory |
| `KNOWLEDGE_BACKEND` | `pgvector` | `pgvector` or `sqlite` |
| `SESSION_MAX_AGE_HOURS` | `168` | Session lifetime in hours (default: 7 days) |
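Rather than hard-coding credentials in the compose file, the variables above can live in a `.env` file that `docker compose` reads from the project directory. A minimal sketch (the values are placeholders, not working credentials):

```shell
# Create a .env file next to docker-compose.yml
cat > .env <<'EOF'
CUBE_ADMIN_EMAIL=admin@example.com
CUBE_ADMIN_PASSWORD=changeme123
KNOWLEDGE_BACKEND=pgvector
SESSION_MAX_AGE_HOURS=168
EOF
```

In the compose file, reference them as `${CUBE_ADMIN_EMAIL}` and so on; compose interpolates values from `.env` when it parses the file.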

### Docker secrets

Sensitive variables support the `_FILE` suffix pattern for Docker secrets:

```yaml
services:
  corecube:
    environment:
      ENCRYPTION_KEY_FILE: /run/secrets/encryption_key
      PGVECTOR_URL_FILE: /run/secrets/pgvector_url
    secrets:
      - encryption_key
      - pgvector_url

secrets:
  encryption_key:
    file: ./secrets/encryption_key.txt
  pgvector_url:
    file: ./secrets/pgvector_url.txt
```
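The secret files themselves can be generated however you like. A minimal sketch using `openssl` (the file layout matches the fragment above; the hex encoding is our assumption about the expected key format):

```shell
mkdir -p secrets
# 32 random bytes, hex-encoded: a 256-bit key as a 64-character string
openssl rand -hex 32 > secrets/encryption_key.txt
chmod 600 secrets/encryption_key.txt
```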

## Data volumes

CoreCube uses two persistent volumes:

| Volume | Contents |
|--------|----------|
| `corecube-data` | SQLite database, encryption key, uploaded files |
| `pgvector-data` | PostgreSQL data (chunks, embeddings, entities) |

:::danger Backup both volumes
Knowledge data is split across both volumes. A backup of only one is incomplete. See Backup & Recovery below.
:::

## URLs

| Service | URL |
|---------|-----|
| Admin Console | http://localhost:7400/admin |
| Headless API | http://localhost:7400/v1 |
| OpenAPI spec | http://localhost:7400/v1/openapi.json |
| Interactive API docs | http://localhost:7400/v1/docs |
| Health check | http://localhost:7400/health |

## Backup & recovery

### Three backup domains

| Domain | What | Command |
|--------|------|---------|
| Config | SQLite database | `docker exec corecube sqlite3 /data/corecube.db ".backup /data/corecube-backup.db"` |
| Knowledge | PostgreSQL | `docker exec pgvector pg_dump -U corecube corecube > backup.sql` |
| Storage | Uploaded files | Copy `/data/uploads/` from the `corecube-data` volume |

### Config-only restore

If you restore only the SQLite config (without the PostgreSQL knowledge), knowledge can be rebuilt by re-syncing all connections. This is slower but valid for disaster recovery.

### Encryption key

The encryption key is required to decrypt connector credentials. If the key is lost, all connection credentials become permanently unreadable. Back up `DATA_DIR/encryption.key` separately from the data volume.

## Upgrading

```bash
docker compose pull
docker compose up -d
```

CoreCube applies schema migrations automatically on startup.
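Note that pulling a `latest` tag upgrades you to whatever was most recently published. If you prefer to control when upgrades happen, one option is to pin an explicit version and bump it deliberately (the tag below is hypothetical):

```yaml
services:
  corecube:
    image: registry.arantic.cloud/corecube/corecube:1.2.3  # hypothetical pinned tag
```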

## Security hardening

See Security Hardening for the full guide. Key points for Docker:

- The container runs as a non-root user (UID 1000)
- The filesystem is read-only except for the data volume
- `cap_drop: ALL` removes unnecessary Linux capabilities
- PostgreSQL and CoreCube Inference are on an internal Docker network, not exposed to the host
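A sketch of how those points might map onto compose options (the `tmpfs` mount and the network names are our assumptions; adapt to your stack):

```yaml
services:
  corecube:
    user: "1000:1000"        # non-root UID
    read_only: true          # read-only root filesystem
    tmpfs:
      - /tmp                 # scratch space despite read_only (assumption)
    cap_drop:
      - ALL
    volumes:
      - corecube-data:/data  # the only writable path
    networks:
      - default              # reachable from the host via published ports
      - internal

  pgvector:
    networks:
      - internal             # reachable only by other services

networks:
  internal:
    internal: true           # no route to the host or outside world
```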
