Installation

One-Line Install (Recommended)

The fastest way to get DjinnBot running. A single command installs all prerequisites, launches the setup wizard, and starts the stack:

curl -fsSL https://raw.githubusercontent.com/BaseDatum/djinnbot/main/install.sh | bash

The installer automatically detects your platform and installs everything needed. Then the setup wizard walks you through:

  1. Clone the repo: detects an existing checkout or clones fresh from GitHub.
  2. Generate secrets: creates .env with encryption keys, internal tokens, and an API key for the mcpo proxy.
  3. Enable authentication: recommended for anything beyond localhost; sets up JWT auth with optional 2FA.
  4. Configure networking: detects your server IP and sets up network access, with optional SSL/TLS via Traefik and automatic Let’s Encrypt certificates.
  5. Choose a model provider: enter your API key for OpenRouter, Anthropic, OpenAI, or any supported provider.
  6. Start the stack: launches Docker Compose with all 6 services; your AI team is ready.

Supported platforms: Ubuntu, Debian, Fedora, CentOS/RHEL, Rocky/Alma, Amazon Linux, Arch, macOS (Intel and Apple Silicon).

The setup wizard is idempotent — safe to re-run anytime. It detects existing configuration and skips what’s already done. Run djinn setup again to change settings or add SSL later.

Manual Install

If you prefer to set things up yourself:

Prerequisites

  • Docker (with the Compose v2 plugin)
  • Git

That’s it. No Node.js, no Python, no database setup. Docker handles everything.

Clone & Configure

git clone https://github.com/BaseDatum/djinnbot.git
cd djinnbot
cp .env.example .env

Open .env in your editor and set your API key:

# Required — this is the only thing you must set
OPENROUTER_API_KEY=sk-or-v1-your-key-here

OpenRouter gives you access to Claude, GPT-4, Gemini, Kimi, and dozens of other models through a single API key. It’s the fastest way to get started. You can also use direct provider keys (Anthropic, OpenAI, etc.) — see LLM Providers for details.
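If you script your setup, a small stdlib-only check can fail fast when the key is missing or still set to the placeholder. This is a sketch, not part of DjinnBot: `read_env` and `check_key` are illustrative helpers.

```python
# Sketch: fail fast if OPENROUTER_API_KEY is missing from .env or still
# left at the placeholder value. Pure stdlib, no python-dotenv dependency.
from pathlib import Path


def read_env(path=".env"):
    """Parse simple KEY=value lines, skipping blanks and # comments."""
    env = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env


def check_key(env):
    """True only if the key is set and not the documented placeholder."""
    key = env.get("OPENROUTER_API_KEY", "")
    return bool(key) and not key.endswith("your-key-here")

# Usage, from the repo root:
#   if not check_key(read_env()):
#       raise SystemExit("Set OPENROUTER_API_KEY in .env first")
```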

Generate Secrets

For production deployments, generate all required secrets:

# Encryption key for secrets at rest (AES-256-GCM)
python3 -c "import secrets; print('SECRET_ENCRYPTION_KEY=' + secrets.token_hex(32))" >> .env

# Internal service-to-service auth token
python3 -c "import secrets; print('ENGINE_INTERNAL_TOKEN=' + secrets.token_urlsafe(32))" >> .env

# JWT signing key for user authentication
python3 -c "import secrets; print('AUTH_SECRET_KEY=' + secrets.token_urlsafe(64))" >> .env
When AUTH_ENABLED=true, both ENGINE_INTERNAL_TOKEN and AUTH_SECRET_KEY are required. The server will refuse to start without them. The setup wizard generates these automatically.
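The three one-liners above can also be folded into a single idempotent script. This is a sketch, not part of the repo: it appends only the keys that are not already present in .env, so re-running it never duplicates or rotates an existing secret.

```python
# Sketch: generate all three secrets in one pass, appending only the ones
# missing from .env. Key names and encodings match the one-liners above.
import secrets
from pathlib import Path

GENERATORS = {
    "SECRET_ENCRYPTION_KEY": lambda: secrets.token_hex(32),      # 256-bit key, hex-encoded
    "ENGINE_INTERNAL_TOKEN": lambda: secrets.token_urlsafe(32),
    "AUTH_SECRET_KEY": lambda: secrets.token_urlsafe(64),
}


def ensure_secrets(env_path=".env"):
    path = Path(env_path)
    existing = path.read_text() if path.exists() else ""
    with path.open("a") as f:
        for name, gen in GENERATORS.items():
            if f"{name}=" not in existing:
                f.write(f"{name}={gen()}\n")

# Usage, from the repo root:
#   ensure_secrets()
```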

Enable Authentication

To enable the built-in authentication system, set in .env:

AUTH_ENABLED=true

When enabled, the dashboard will redirect to a setup page on first visit where you create an admin account and optionally enable two-factor authentication. See Security Model for details.

Start Services

docker compose up -d

This starts 6 services:

| Service         | Container          | Port       | Purpose                      |
| --------------- | ------------------ | ---------- | ---------------------------- |
| PostgreSQL      | djinnbot-postgres  | 5432       | State database               |
| Redis           | djinnbot-redis     | 6379       | Event bus (Redis Streams)    |
| API Server      | djinnbot-api       | 8000       | REST API (FastAPI)           |
| Pipeline Engine | djinnbot-engine    | (internal) | Orchestrates agent execution |
| Dashboard       | djinnbot-dashboard | 3000       | React web interface          |
| MCP Proxy       | djinnbot-mcpo      | 8001       | Tool server proxy            |

Check that everything is healthy:

docker compose ps

You should see all services running with healthy status.
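If you want to gate a provisioning script on this, a rough check is to grep the compose output for anything not yet healthy. A sketch only: status wording varies slightly between Compose versions, so treat the pattern as a starting point.

```shell
# Sketch: flag services that are still starting, unhealthy, or exited.
if docker compose ps | grep -Eq 'unhealthy|starting|Restarting|Exit'; then
  echo "some services are still starting or failed" >&2
else
  echo "all services healthy"
fi
```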

SSL/TLS with Traefik

For production deployments exposed to the internet, DjinnBot includes a Traefik reverse proxy with automatic Let’s Encrypt certificates.

The setup wizard configures this automatically when you choose SSL. To set it up manually:

Requirements:

  • A domain name with an A record pointing to your server
  • Ports 80 and 443 accessible from the internet

Configuration:

  1. Set environment variables in .env:

     DOMAIN=djinn.example.com
     BIND_HOST=127.0.0.1          # Only Traefik faces the internet
     TRAEFIK_ENABLED=true
     VITE_API_URL=https://djinn.example.com

  2. Create proxy/.env:

     ACME_EMAIL=you@example.com
     DOMAIN=djinn.example.com

  3. Create the shared Docker network and start the proxy:

     docker network create djinnbot-proxy
     docker compose -f proxy/docker-compose.yml up -d

  4. Start the main stack (it picks up docker-compose.override.yml automatically):

     docker compose up -d

Traefik handles:

  • Automatic certificate issuance and renewal via Let’s Encrypt
  • HTTP to HTTPS redirection
  • Routing /v1/* to the API and everything else to the dashboard
  • SSE streaming support with proper flush intervals
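The /v1 routing split can be sketched with Traefik v2 router rules. This is purely illustrative, not the repository's actual configuration: the service names, label set, and the letsencrypt certresolver name are assumptions. Traefik defaults router priority to rule length, so the longer /v1 rule matches API traffic first.

```yaml
# Hypothetical compose labels; the real ones live in the repo's files.
services:
  api:
    labels:
      - "traefik.http.routers.api.rule=Host(`djinn.example.com`) && PathPrefix(`/v1`)"
      - "traefik.http.routers.api.tls.certresolver=letsencrypt"
  dashboard:
    labels:
      - "traefik.http.routers.dashboard.rule=Host(`djinn.example.com`)"
      - "traefik.http.routers.dashboard.tls.certresolver=letsencrypt"
```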

Verify

Open the dashboard:

http://localhost:3000

If authentication is enabled, you’ll be redirected to the setup page to create your admin account.

Check the API:

curl http://localhost:8000/v1/status

You should see a JSON response with "status": "ok" and connected service counts.
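In CI or provisioning scripts it helps to poll this endpoint until it reports ok. A minimal stdlib sketch, assuming only the "status": "ok" response shape described above; `wait_for_api` and its retry settings are illustrative, not part of DjinnBot.

```python
# Sketch: poll the status endpoint until the API reports "status": "ok".
import json
import time
import urllib.request


def is_ok(payload: bytes) -> bool:
    """True if the payload is JSON with "status": "ok"."""
    try:
        return json.loads(payload).get("status") == "ok"
    except (ValueError, AttributeError):
        return False


def wait_for_api(url="http://localhost:8000/v1/status", attempts=30, delay=2):
    """Retry until the API is ready; returns False if it never comes up."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if is_ok(resp.read()):
                    return True
        except OSError:  # connection refused, timeout, HTTP error, etc.
            pass
        time.sleep(delay)
    return False
```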

What Just Happened

Docker Compose built and started the entire stack:

  1. PostgreSQL stores pipeline runs, steps, agent state, project boards, user accounts, and settings
  2. Redis provides the event bus via Redis Streams — reliable, ordered message delivery between services
  3. API Server (FastAPI/Python) exposes REST endpoints for the dashboard, CLI, and external integrations, with optional JWT authentication
  4. Pipeline Engine (TypeScript/Node) runs the state machine that coordinates agent execution, spawns agent containers, manages memory, and bridges messaging platforms (Slack, Discord, Telegram, Signal, WhatsApp)
  5. Dashboard (React/Vite) serves the web interface with real-time SSE streaming, authentication pages, and project management
  6. mcpo proxies MCP tool servers (GitHub, web fetch, etc.) as OpenAPI endpoints for agents

When a pipeline runs, the engine dynamically spawns agent containers — isolated Docker containers with a full engineering toolbox — for each step. These are separate from the 6 core services and are created/destroyed per step.

Next Steps