Memory System

DjinnBot agents have persistent memory that survives across sessions. They remember decisions, learn from mistakes, and build knowledge over time. This is powered by ClawVault with semantic search via QMDR.

How It Works

Every agent has two memory vaults:

  • Personal vault (data/vaults/<agent-id>/) — private memories only this agent can access
  • Shared vault (data/vaults/shared/) — team-wide knowledge all agents can read and write

Memories are stored as markdown entries with metadata (type, tags, timestamps) and connected via wiki-links for graph traversal.
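
For illustration, a shared entry under data/vaults/shared/ might look roughly like this (the exact on-disk layout and frontmatter field names are assumptions, not a documented format):

---
type: decision
tags: [project:myapp, architecture]
created: 2024-01-15T10:32:00Z
shared: true
---
# MyApp: Tech Stack

[[Project: MyApp]] will use FastAPI + PostgreSQL. See also [[MyApp: Architecture]].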

Memory Lifecycle

  1. Wake — when an agent session starts, ClawVault loads relevant memories into context
  2. Recall — during execution, agents search memories semantically before making decisions
  3. Remember — agents save important findings, decisions, and lessons
  4. Checkpoint — during long sessions, memories are periodically saved
  5. Sleep — on session end, a summary is saved with next steps
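
In terms of the tools described below, the middle of a session reduces to something like this sketch (wake, checkpoint, and sleep happen around it as described above; the memory contents are illustrative):

// During execution: search before deciding
recall("deployment approach for MyApp", { profile: "planning" })

// Save an important finding or decision as it happens
remember("decision", "MyApp: Deployment", "Deploy via Docker Compose on a single VM for V1.", { shared: true })

// Near the end of the session: leave context for the next run
remember("handoff", "MyApp: Next Steps", "Resume at API tests. Start from [[Project: MyApp]].")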

Memory Tools

recall — Search Memories

recall("search query", { limit: 5, profile: "default" })

Profiles optimize retrieval for different contexts:

Profile     Use Case
default     General purpose retrieval
planning    Task planning and project context
incident    Errors, bugs, and lessons learned
handoff     Session continuity information
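
For example, planning a task and debugging a failure call for different profiles:

// Gather project context before planning a task
recall("MyApp architecture and open decisions", { limit: 5, profile: "planning" })

// Dig into past failures before retrying a flaky deploy
recall("postgres migration errors", { limit: 3, profile: "incident" })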

remember — Save to Vault

remember(type, "Title", "Content with details", {
  shared: true,     // Share with all agents
  tags: ["tag1"]    // For search filtering
})

Memory types:

Type         When to Use
lesson       Learned from a mistake or success
decision     Important choice with rationale
pattern      Recurring approach that works
fact         Important information about the project/team
preference   How someone or something prefers to work
handoff      Context for resuming work later
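
For instance, a lesson captured after a failed deploy might look like this (the wording is illustrative):

remember("lesson", "Deploys: Run Migrations First",
  "Deploying MyApp before applying migrations caused 500s on startup. " +
  "Always apply migrations before rolling out new application code.",
  { shared: true, tags: ["deploys", "lesson"] }
)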

Wiki-Link Knowledge Graph

Memories are connected using [[wiki-link]] syntax:

remember("decision", "MyApp: Tech Stack",
  "[[Project: MyApp]] will use FastAPI + PostgreSQL. " +
  "Considered Django (rejected: too opinionated) and Express (rejected: need Python). " +
  "See also [[MyApp: Architecture]], [[MyApp: API Design]].",
  { shared: true, tags: ["project:myapp", "architecture"] }
)

These links create a traversable knowledge graph. When an agent recalls context about a project, it starts from [[Project: Name]] and follows links to discover related information.

The Anchor Pattern

Every project should have a root anchor memory that links to all related knowledge:

remember("fact", "Project: MyApp",
  "Root anchor for project MyApp.\n" +
  "Goal: [[MyApp: Goal]]\n" +
  "Tech: [[MyApp: Tech Stack]]\n" +
  "Scope: [[MyApp: V1 Scope]]",
  { shared: true, tags: ["project:myapp", "project-anchor"] }
)

Subsequent memories link back to the anchor, keeping the graph connected.
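
For example, a later scope decision references the anchor and its sibling entries:

remember("decision", "MyApp: V1 Scope",
  "V1 ships auth, projects, and billing only; reporting is deferred to V2. " +
  "Part of [[Project: MyApp]]; see also [[MyApp: Goal]].",
  { shared: true, tags: ["project:myapp", "scope"] }
)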

Semantic Search

Memory search uses embeddings for semantic similarity, not just keyword matching. When an agent calls recall("how we handle authentication"), it finds memories about auth patterns even if they don’t contain the exact word “authentication.”

The search pipeline:

  1. Query expansion — the query is expanded to capture related concepts
  2. Embedding — query is converted to a vector via text-embedding-3-small
  3. Retrieval — nearest neighbors found in the SQLite-backed vector store
  4. Reranking — results are reranked using gpt-4o-mini for relevance
  5. Injection — top results are injected into the agent’s context

All embedding and reranking runs through OpenRouter — no local GPU or model downloads required.
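
A rough sketch of that pipeline in code (the helpers expandQuery, embed, nearestNeighbors, and rerank are hypothetical names; only the model names and the OpenRouter and SQLite roles come from the steps above):

async function searchMemories(query, limit = 5) {
  // 1. Query expansion: broaden the query to capture related concepts
  const expanded = await expandQuery(query)

  // 2. Embedding: vectorize with text-embedding-3-small (via OpenRouter)
  const vector = await embed(expanded, { model: "text-embedding-3-small" })

  // 3. Retrieval: nearest neighbors from the SQLite-backed vector store
  const candidates = await nearestNeighbors(vector, { k: limit * 4 })

  // 4. Reranking: gpt-4o-mini scores candidates for relevance to the query
  const ranked = await rerank(query, candidates, { model: "gpt-4o-mini" })

  // 5. Injection: the top results are returned for the agent's context
  return ranked.slice(0, limit)
}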

Browsing Memory

You can browse and search agent memories through:

  • Dashboard — the Memory page lets you view vaults and search semantically
  • CLI — djinnbot memory search eric "architecture decisions"
  • API — GET /v1/memory/search?agent_id=eric&query=architecture
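
For example, querying the API on a local instance (the host, port, and any auth requirements are assumptions, not documented here):

curl "http://localhost:8000/v1/memory/search?agent_id=eric&query=architecture"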