# Architecture

DjinnBot is a distributed system built around an event-driven architecture. Every component communicates through Redis — making the system reliable, observable, and easy to extend.
## System Overview

```mermaid
graph TB
Dashboard["Dashboard<br/>(React + Vite)"] -->|SSE| API["API Server<br/>(FastAPI)"]
CLI["CLI<br/>(Python)"] --> API
API --> DB["PostgreSQL"]
API --> Redis["Redis<br/>(Streams + Pub/Sub<br/>+ JuiceFS Metadata)"]
Engine["Pipeline Engine<br/>(State Machine)"] --> API
Engine --> Redis
Engine --> Agent1["Agent Container<br/>(Isolated)"]
Engine --> Agent2["Agent Container<br/>(Isolated)"]
Engine --> Swarm["Swarm Executor"]
Swarm --> Agent3["Agent Container"]
Agent1 --> Redis
Agent2 --> Redis
Agent1 --> mcpo["mcpo Proxy<br/>(MCP Tools)"]
Agent2 --> mcpo
Engine --> Slack["Slack Bridge"]
Engine --> Discord["Discord Bridge"]
Engine --> Telegram["Telegram Bridge"]
Engine --> Signal["Signal Bridge"]
Engine --> WhatsApp["WhatsApp Bridge"]
JFS["JuiceFS FUSE Mount"] --> RustFS["RustFS<br/>(S3 Object Store)"]
JFS --> Redis
Engine --> JFS
Agent1 --> JFS
Agent2 --> JFS
API --> JFS
style Dashboard fill:#3b82f6,color:#fff,stroke:#2563eb
style CLI fill:#3b82f6,color:#fff,stroke:#2563eb
style API fill:#8b5cf6,color:#fff,stroke:#7c3aed
style Engine fill:#8b5cf6,color:#fff,stroke:#7c3aed
style Swarm fill:#ec4899,color:#fff,stroke:#db2777
style DB fill:#059669,color:#fff,stroke:#047857
style Redis fill:#dc2626,color:#fff,stroke:#b91c1c
style Agent1 fill:#f59e0b,color:#000,stroke:#d97706
style Agent2 fill:#f59e0b,color:#000,stroke:#d97706
style Agent3 fill:#f59e0b,color:#000,stroke:#d97706
style mcpo fill:#6366f1,color:#fff,stroke:#4f46e5
style Slack fill:#4ade80,color:#000,stroke:#22c55e
style Discord fill:#5865F2,color:#fff,stroke:#4752C4
style Telegram fill:#26A5E4,color:#fff,stroke:#1E96D1
style Signal fill:#3A76F0,color:#fff,stroke:#2E62CC
style WhatsApp fill:#25D366,color:#fff,stroke:#1DA851
style JFS fill:#14b8a6,color:#fff,stroke:#0d9488
style RustFS fill:#14b8a6,color:#fff,stroke:#0d9488
```
## Services

### API Server (Python / FastAPI)
The API server is the central coordination point:
- REST API — CRUD operations for runs, pipelines, agents, projects, memory, users, admin, LLM calls, usage, swarms, and more (30+ routers)
- SSE streaming — Server-Sent Events for real-time dashboard updates (activity feed, run progress, swarm status, LLM call tracking)
- Authentication — JWT access/refresh tokens, TOTP 2FA, API keys, OIDC SSO
- Database access — PostgreSQL via SQLAlchemy with Alembic migrations
- File handling — attachment uploads with text extraction and image processing
- PDF processing — structured extraction via OpenDataLoader with automatic chunking and shared vault ingest
- Code Knowledge Graph — Tree-sitter indexing pipeline, KuzuDB graph storage, and query/impact/context endpoints per project
- Browser cookie management — upload, grant/revoke, and distribute cookies to agent containers for authenticated browsing
- Workflow policies — per-project SDLC stage routing rules that define which stages are required, optional, or skipped per task work type
- GitHub webhooks — receive events from GitHub for issue/PR integration
- Ingest endpoint — accept meeting transcripts and documents for Grace to process
The API server does not execute agents. It stores state, manages auth, and serves the frontend.
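The SSE streaming mentioned above is plain text sent over a long-lived HTTP response. A minimal sketch of the wire format in TypeScript (the event name and payload fields are illustrative, not DjinnBot's actual schema):

```typescript
// Hypothetical activity event; the real payloads carry more fields.
interface ActivityEvent {
  type: string;                      // e.g. "run_progress"
  data: Record<string, unknown>;     // JSON-serializable payload
}

// An SSE frame is an "event:" line, a "data:" line, and a blank line.
function toSseFrame(event: ActivityEvent): string {
  return `event: ${event.type}\ndata: ${JSON.stringify(event.data)}\n\n`;
}
```

The dashboard subscribes once and receives a stream of such frames, which is why run progress and swarm status can update without polling.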
### Pipeline Engine (TypeScript / Node.js)
The engine is the brain of the system:
- State machine — advances pipeline steps based on events, handles branching, loops, and retries
- Container orchestration — creates isolated Docker containers for each agent step
- Swarm executor — parallel DAG-aware multi-task execution across multiple agents
- Memory management — loads/saves ClawVault memories for each agent session, runs memory consolidation
- Pulse scheduler — fires agent wake-up cycles on configurable schedules with named routines
- Agent coordination — work ledger, two-tier messaging, wake guardrails
- Messaging bridges — routes events to Slack, Discord, Telegram, Signal, and WhatsApp; processes agent mentions and DMs across all platforms
- MCP manager — writes tool server config, monitors health, discovers tools
- Container log streaming — relays container logs to the admin panel via Redis
- LLM call logging — captures per-API-call token counts, latency, and cost data
- Code graph indexing — triggers and monitors code knowledge graph builds via Redis events
- Camoufox browser — integrates an anti-detection browser into agent containers for authenticated web browsing
The engine communicates with agent containers via Redis pub/sub — sending commands and receiving events (output chunks, tool calls, completion signals).
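A minimal sketch of what such a command/event exchange could look like (the envelope fields and channel naming are assumptions for illustration, not the actual protocol):

```typescript
// Hypothetical message envelope for engine <-> agent pub/sub traffic.
type EngineCommand = { kind: "command"; sessionId: string; action: string };
type AgentEvent =
  | { kind: "output_chunk"; sessionId: string; text: string }
  | { kind: "tool_call"; sessionId: string; tool: string }
  | { kind: "complete"; sessionId: string };

// One channel per session lets the engine demultiplex replies.
function channelFor(sessionId: string): string {
  return `agent:${sessionId}:events`;
}

function encode(msg: EngineCommand | AgentEvent): string {
  return JSON.stringify(msg);
}

function decode(raw: string): EngineCommand | AgentEvent {
  return JSON.parse(raw) as EngineCommand | AgentEvent;
}
```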
### Dashboard (React / Vite / TanStack Router)
A full-featured single-page application:
- React with TypeScript and TanStack Router (file-based routing)
- Tailwind CSS for styling
- SSE for real-time streaming (activity feed, run output, swarm progress, LLM tracking)
- Three.js / WebGL for 3D memory graph visualization
- Sigma.js for interactive code knowledge graph visualization
- Admin panel — container logs, LLM call log, API usage analytics, user management, notifications
- Rich chat — file uploads, image attachments, HTML previews, grouped tool calls
- Swarm views — DAG visualization, task detail, status bar, timeline
- Browser cookie management — upload cookies, manage agent grants, Cookie Bridge extension support
The dashboard talks directly to the API server. It’s served as static files by nginx in the Docker container, with runtime API URL injection (no rebuild needed for custom domains).
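The runtime API URL injection can be pictured as a global set by a small script nginx serves before the bundle loads (the global name and fallback here are hypothetical):

```typescript
// Hypothetical sketch: nginx injects a tiny script that sets a global
// before the static bundle runs, so one build can target any API domain.
function resolveApiUrl(fallback = "http://localhost:8000"): string {
  const injected = (globalThis as { __API_URL__?: string }).__API_URL__;
  return injected ?? fallback;
}
```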
### Swarm Executor
A parallel execution engine for running multiple agents concurrently:
```mermaid
graph TD
Plan["Planning Agent<br/>(Decompose)"] --> DAG["Task DAG"]
DAG --> T1["Task 1<br/>(No deps)"]
DAG --> T2["Task 2<br/>(No deps)"]
DAG --> T3["Task 3<br/>(Depends on 1+2)"]
T1 --> Agent1["Agent A"]
T2 --> Agent2["Agent B"]
T3 --> Agent3["Agent C"]
style Plan fill:#8b5cf6,color:#fff
style DAG fill:#3b82f6,color:#fff
style T1 fill:#f59e0b,color:#000
style T2 fill:#f59e0b,color:#000
style T3 fill:#f59e0b,color:#000
style Agent1 fill:#059669,color:#fff
style Agent2 fill:#059669,color:#fff
style Agent3 fill:#059669,color:#fff
```
The swarm executor:
- Receives a DAG of tasks with dependency edges
- Identifies tasks with no unmet dependencies
- Spawns agent containers in parallel for ready tasks
- Streams progress via SSE as tasks complete
- Unlocks downstream tasks as dependencies are met
- Handles failures and retries per-task
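The heart of this loop — finding tasks with no unmet dependencies — can be sketched as a pure function (the task shape is an assumption; the real executor tracks more state):

```typescript
// Hypothetical task shape for the swarm DAG.
interface SwarmTask {
  id: string;
  deps: string[];   // ids of tasks that must complete first
}

// A task is ready when all of its dependencies are completed and it
// has not itself been completed or started yet.
function readyTasks(
  tasks: SwarmTask[],
  completed: Set<string>,
  running: Set<string>,
): SwarmTask[] {
  return tasks.filter(
    (t) =>
      !completed.has(t.id) &&
      !running.has(t.id) &&
      t.deps.every((d) => completed.has(d)),
  );
}
```

Each time a task finishes, the executor recomputes this set and spawns containers for any newly unblocked tasks.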
### Storage Layer (JuiceFS + RustFS)
All workspace files, memory vaults, and agent sandboxes live on a shared POSIX filesystem backed by JuiceFS and RustFS. See Storage for full details.
- RustFS — an S3-compatible object storage server that holds the actual file data
- JuiceFS — a FUSE filesystem that presents the S3 data as a standard POSIX directory tree at `/data`
- Redis DB 2 — serves as JuiceFS’s metadata engine (separate from the event bus on DB 0)
The JuiceFS mount is shared across all containers via a Docker named volume. The engine, API server, and every dynamically spawned agent container see the same `/data` directory. This is how agents share workspaces, memory vaults, and sandboxes without direct container-to-container mounts.
### Redis (Event Bus + Metadata)
Redis serves three roles:
- Streams (DB 0) — reliable, ordered event delivery between the API server and engine. Events like `RUN_CREATED`, `STEP_QUEUED`, and `STEP_COMPLETE` flow through Redis Streams.
- Pub/Sub (DB 0) — real-time communication between the engine and agent containers. The engine sends commands; agents publish output chunks and events.
- JuiceFS metadata (DB 2) — stores the filesystem metadata (directory tree, file attributes, chunk mappings) for the JuiceFS FUSE mount.
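Redis Stream entries are flat field/value pairs, so a nested event payload has to be flattened on publish and parsed on read. A sketch under assumed field names (not DjinnBot's actual encoding):

```typescript
// Hypothetical bus event; XADD takes flat field/value pairs, so the
// payload travels as a JSON string in a single field.
interface BusEvent {
  type: string;                     // e.g. "STEP_COMPLETE"
  payload: Record<string, unknown>;
}

// Shape suitable for e.g. redis.xadd("events", "*", ...toStreamFields(event))
function toStreamFields(event: BusEvent): string[] {
  return ["type", event.type, "payload", JSON.stringify(event.payload)];
}

function fromStreamFields(fields: string[]): BusEvent {
  const map = new Map<string, string>();
  for (let i = 0; i < fields.length; i += 2) map.set(fields[i], fields[i + 1]);
  return { type: map.get("type")!, payload: JSON.parse(map.get("payload")!) };
}
```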
### PostgreSQL (State Store)
All persistent state lives in PostgreSQL:
- Pipeline run state, step outputs, and timing
- Agent configuration and tool overrides
- Project boards, tasks, and dependencies
- User accounts, API keys, OIDC providers
- Chat sessions, messages, and attachments
- LLM call logs (per-API-call token/cost tracking)
- Memory scoring data
- Pulse routines and schedules
- Secrets (encrypted at rest with AES-256-GCM)
- Admin notifications
- Waitlist and onboarding state
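The encryption at rest for secrets can be sketched with Node's built-in crypto. The record layout and key handling below are assumptions for illustration, not the actual schema:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Hypothetical blob layout: iv (12 bytes) + auth tag (16 bytes) + ciphertext.
function encryptSecret(key: Buffer, plaintext: string): Buffer {
  const iv = randomBytes(12); // 96-bit nonce, the recommended size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ct]);
}

function decryptSecret(key: Buffer, blob: Buffer): string {
  const decipher = createDecipheriv("aes-256-gcm", key, blob.subarray(0, 12));
  decipher.setAuthTag(blob.subarray(12, 28)); // GCM tag authenticates the ciphertext
  const ct = blob.subarray(28);
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}
```

GCM's auth tag means a tampered blob fails decryption instead of silently yielding garbage.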
### Agent Containers
Each agent step spawns a fresh Docker container built from `Dockerfile.agent-runtime`. See Agent Containers for details.
### mcpo Proxy
The mcpo proxy exposes MCP tool servers as REST/OpenAPI endpoints. See MCP Tools for details.
## Event Flow

### Pulse (Autonomous Work)
The primary workflow — agents pick up tasks from the board:
```mermaid
sequenceDiagram
participant Engine
participant Container as Agent Container
participant Redis
participant API as API Server
participant Dashboard
Engine->>Container: Spawn (persona + memories + tools)
Container->>Container: get_ready_tasks()
Container->>Container: claim_task()
Container->>Container: Work (code, test, commit)
Container->>Redis: Stream output chunks
Redis->>API: Relay events
API->>Dashboard: SSE push
Container->>Container: open_pull_request()
Container->>Container: transition_task()
Container->>Engine: Session complete
Engine->>Engine: Destroy container
```
### Pipeline Runs
For structured workflows (planning, onboarding, engineering SDLC):
- Dashboard → `POST /v1/runs` → API Server creates run in PostgreSQL
- API Server → publishes `RUN_CREATED` event → Redis Streams
- Engine picks up event → creates run state machine → publishes `STEP_QUEUED` for first step
- Engine → spawns Agent Container with persona, memories, and workspace
- Agent executes step → streams output via Redis Pub/Sub → Engine relays to API → Dashboard displays via SSE
- Agent completes → Engine evaluates result → routes to next step (or branches, retries, loops)
- Steps continue until pipeline completes or fails
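The routing decision after each step can be pictured as a pure transition function. The step and result shapes below are invented for illustration; the real engine also handles branching and loops:

```typescript
// Hypothetical shapes for a single routing decision.
interface StepResult { status: "success" | "failure"; }
interface StepDef {
  id: string;
  next?: string;      // step to queue on success; none means pipeline done
  maxRetries: number;
}

type Transition =
  | { action: "queue"; stepId: string }
  | { action: "retry"; stepId: string }
  | { action: "complete" }
  | { action: "fail" };

function route(step: StepDef, result: StepResult, attempts: number): Transition {
  if (result.status === "failure") {
    // Retry in place until the budget is exhausted, then fail the run.
    return attempts < step.maxRetries
      ? { action: "retry", stepId: step.id }
      : { action: "fail" };
  }
  return step.next ? { action: "queue", stepId: step.next } : { action: "complete" };
}
```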
### Swarm Runs
For parallel multi-task execution:
- Dashboard → creates swarm via API Server
- Engine → Swarm Executor receives task DAG
- Swarm Executor → identifies ready tasks (no unmet dependencies) → spawns agent containers in parallel
- Each agent works independently → streams output via Redis
- On task completion → swarm executor unlocks downstream tasks → spawns more agents
- Progress streams to dashboard via SSE (DAG visualization, timeline, status bar)
## Tech Stack Summary
| Component | Technology | Language |
|---|---|---|
| API Server | FastAPI, SQLAlchemy, Alembic, JWT | Python |
| Pipeline Engine | Custom state machine, Redis Streams | TypeScript |
| Swarm Executor | DAG scheduler, parallel container orchestration | TypeScript |
| Dashboard | React, TanStack Router, Tailwind, Three.js | TypeScript |
| Agent Runtime | pi-mono (pi-agent-core), PTC, Camoufox | TypeScript |
| Code Graph | Tree-sitter, KuzuDB, Louvain clustering | TypeScript |
| Agent Containers | Debian bookworm, full toolbox | Multi-language |
| Memory | ClawVault + QMDR | TypeScript |
| Event Bus | Redis Streams + Pub/Sub | — |
| Database | PostgreSQL 16 | — |
| Object Storage | RustFS (S3-compatible) | Rust |
| Filesystem | JuiceFS FUSE mount (metadata in Redis DB 2) | Go |
| MCP Proxy | mcpo | Python |
| CLI | Click, Rich TUI | Python |
| Build System | Turborepo | — |
| Orchestration | Docker Compose | — |
## Monorepo Structure
DjinnBot is a Turborepo monorepo with npm workspaces:
- Engine, events, memory, containers, swarms (TypeScript)
- API server with 30+ routers (Python/FastAPI)
- Web UI with admin panel (React/TypeScript)
- Slack bridge and per-agent bots (TypeScript)
- Discord bridge, per-agent bots, streaming, session pool (TypeScript)
- Telegram bridge manager, per-agent bots via grammY (TypeScript)
- Signal bridge, signal-cli daemon, SSE listener, routing (TypeScript)
- WhatsApp bridge, Baileys socket, routing (TypeScript)
- Container entrypoint, tools, and PTC bridge (TypeScript)
- Tree-sitter indexing pipeline, KuzuDB storage (TypeScript)
- Cookie Bridge extension for Chrome and Firefox
The core package contains the bulk of the orchestration logic — the pipeline engine, Redis event bus, container runner, swarm executor, ClawVault memory integration, skill registry, MCP manager, chat session manager, pulse scheduler, agent coordination, and more.