
Installation

Prerequisites

You need exactly two things:

  1. Docker with the Compose plugin (docker compose)
  2. An OpenRouter API key

That’s it. No Node.js, no Python, no database setup. Docker handles everything.

Clone & Configure

git clone https://github.com/BaseDatum/djinnbot.git
cd djinnbot
cp .env.example .env

Open .env in your editor and set your API key:

# Required — this is the only thing you must set
OPENROUTER_API_KEY=sk-or-v1-your-key-here
OpenRouter gives you access to Claude, GPT-4, Gemini, Kimi, and dozens of other models through a single API key. It’s the fastest way to get started. You can also use direct provider keys (Anthropic, OpenAI, etc.) — see LLM Providers for details.
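If you'd rather skip OpenRouter, you can set a direct provider key instead. The variable names below are common conventions and may not match this project exactly; check the LLM Providers page for the ones djinnbot actually reads:

```
# Alternative: direct provider keys (names illustrative — see LLM Providers)
ANTHROPIC_API_KEY=sk-ant-your-key-here
OPENAI_API_KEY=sk-your-key-here
```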

Optional: Encryption Key

For production deployments, generate a secrets encryption key:

# Generate and add to .env
python3 -c "import secrets; print('SECRET_ENCRYPTION_KEY=' + secrets.token_hex(32))" >> .env

This encrypts user-defined secrets (API keys, SSH keys, etc.) at rest. Without it, secrets are encrypted with an ephemeral key that resets on restart.
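The one-liner above boils down to generating 32 random bytes and hex-encoding them into a 64-character string. A minimal Python equivalent, if you'd rather generate the key separately and paste it into .env yourself:

```python
import secrets

# 32 random bytes, hex-encoded -> a 64-character key string
key = secrets.token_hex(32)
line = f"SECRET_ENCRYPTION_KEY={key}"
print(line)
```

Any cryptographically random 32-byte value works; `secrets` is just the stdlib way to get one.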

Optional: Internal Token

Setting this token protects the plaintext secrets endpoint from unauthorized access:

python3 -c "import secrets; print('ENGINE_INTERNAL_TOKEN=' + secrets.token_urlsafe(32))" >> .env

Without this, the endpoint that returns decrypted secrets is open to anyone who can reach the API. See Security Model for details.
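Internal callers would then present the token with each request. As a sketch only: the endpoint path and header name below are assumptions for illustration, not djinnbot's actual API; see the Security Model docs for the real ones.

```python
import urllib.request

# Placeholder value; in practice read ENGINE_INTERNAL_TOKEN from your .env
ENGINE_INTERNAL_TOKEN = "example-token"

# Hypothetical path and header name, shown only to illustrate the pattern
req = urllib.request.Request(
    "http://localhost:8000/v1/secrets/plaintext",
    headers={"X-Internal-Token": ENGINE_INTERNAL_TOKEN},
)
```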

Start Services

docker compose up -d

This starts 6 services:

Service           Container            Port   Purpose
PostgreSQL        djinnbot-postgres    5432   State database
Redis             djinnbot-redis       6379   Event bus (Redis Streams)
API Server        djinnbot-api         8000   REST API (FastAPI)
Pipeline Engine   djinnbot-engine      -      Orchestrates agent execution
Dashboard         djinnbot-dashboard   3000   React web interface
MCP Proxy         djinnbot-mcpo        8001   Tool server proxy

Check that everything is healthy:

docker compose ps

You should see all services running with healthy status.
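If you want to script this check, recent Docker Compose versions can emit one JSON object per service via `docker compose ps --format json`. A sketch of parsing that output; the field names (Name/State/Health) match recent Compose releases but may differ in older ones:

```python
import json

def all_healthy(ps_output: str) -> bool:
    """Check `docker compose ps --format json` output (one JSON object per line)."""
    services = [json.loads(line) for line in ps_output.splitlines() if line.strip()]
    # Services without a healthcheck report an empty Health string
    return bool(services) and all(
        s.get("State") == "running" and s.get("Health") in ("healthy", "")
        for s in services
    )

sample = "\n".join([
    '{"Name": "djinnbot-api", "State": "running", "Health": "healthy"}',
    '{"Name": "djinnbot-engine", "State": "running", "Health": ""}',
])
print(all_healthy(sample))
```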

Verify

Open the dashboard:

http://localhost:3000

Check the API:

curl http://localhost:8000/v1/status

You should see a JSON response with "status": "ok" and connected service counts.

What Just Happened

Docker Compose built and started the entire stack:

  1. PostgreSQL stores pipeline runs, steps, agent state, project boards, and settings
  2. Redis provides the event bus via Redis Streams — reliable, ordered message delivery between services
  3. API Server (FastAPI/Python) exposes REST endpoints for the dashboard, CLI, and external integrations
  4. Pipeline Engine (TypeScript/Node) runs the state machine that coordinates agent execution, spawns agent containers, manages memory, and bridges Slack
  5. Dashboard (React/Vite) serves the web interface with real-time SSE streaming
  6. mcpo proxies MCP tool servers (GitHub, web fetch, etc.) as OpenAPI endpoints for agents

When a pipeline runs, the engine dynamically spawns agent containers — isolated Docker containers with a full engineering toolbox — for each step. These are separate from the 6 core services and are created/destroyed per step.

Next Steps