Engram

Persistent knowledge base for AI agents

An open-source MCP server that gives your AI agent a long-term memory. Markdown files as source of truth, Xapian full-text search, typed graph relations between entries.

What is Engram?

AI agents lose their context between sessions. Every conversation starts from scratch, and hard-won knowledge about your infrastructure, your codebase, or your preferences vanishes the moment the session ends.

Engram solves this. It is a Model Context Protocol (MCP) server that gives any compatible AI agent -- Claude, ChatGPT, or others -- a persistent, searchable knowledge base. Entries are plain Markdown files with YAML front matter. The Xapian search index is a rebuildable cache, never the source of truth.

Your agent can remember facts, search across them with full-text queries and tag filters, recall individual entries with their graph relations, and forget what is no longer relevant. All backed by files you can version, grep, and read yourself.

Features

Full-Text Search

Powered by Xapian with French stemming. Find any entry by content, title, or tags with ranked results and optional tag filtering.

Smart Upsert

The remember tool detects duplicate titles before creating a new entry. Update existing knowledge or force a new entry -- your choice.

Graph Relations

Link entries with typed relations using kb://uuid#type URLs. Recall any entry and see both outgoing links and incoming backlinks.

Markdown Source of Truth

Every entry is a Markdown file with YAML front matter. The search index is a rebuildable cache. Your data is always human-readable and version-controllable.

Three Transports

Connect via stdio for local tools like Claude Code, SSE for network clients, or streamable HTTP for web integrations. One binary, three modes.

Docker-Ready

Alpine-based image, single command to run. Mount a volume for your knowledge directory and you are up in seconds.

Quick Start

Docker

# stdio (Claude Code, ChatGPT, etc.)
# Note: docker run bind mounts want an absolute host path, hence $(pwd)
docker run -i -v "$(pwd)/knowledge:/knowledge" cylian/engram

# SSE (network)
docker run -p 8192:8192 -v "$(pwd)/knowledge:/knowledge" cylian/engram --transport sse

# HTTP (streamable)
docker run -p 8192:8192 -v "$(pwd)/knowledge:/knowledge" cylian/engram --transport streamable-http

Claude Code Configuration

Add this to your MCP configuration file:

{
  "mcpServers": {
    "kb": {
      "command": "docker",
      "args": ["run", "-i", "--rm", "-v", "./knowledge:/knowledge", "cylian/engram"]
    }
  }
}

SSE Client Configuration

{
  "mcpServers": {
    "kb": {
      "type": "sse",
      "url": "http://your-host:8192/sse"
    }
  }
}

Tools

Engram exposes 7 MCP tools that your AI agent can call:

| Tool     | Description                                                       | Key Parameters                        |
|----------|-------------------------------------------------------------------|---------------------------------------|
| remember | Create or update an entry (upsert with smart duplicate detection) | title, content, tags, entry_id, force |
| recall   | Read full content of an entry with its graph relations            | entry_id                              |
| search   | Full-text search with French stemming and optional tag filter     | query, tags, limit                    |
| list     | Browse entries sorted by title with optional tag filter           | tags, limit                           |
| tags     | List all tags with entry counts                                   | --                                    |
| forget   | Delete an entry (removes both file and index record)              | entry_id                              |
| rebuild  | Rebuild the Xapian search index from Markdown files               | --                                    |
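As a sketch of what a call looks like from the agent's side, here are hypothetical arguments for the remember tool. The parameter names come from the table above; the title, content, and tag values are invented, and the exact payload shape (e.g., whether tags is an array) is an assumption:

```json
{
  "title": "pmx-0102",
  "content": "Proxmox host in the main rack. Runs the PostgreSQL VM.",
  "tags": ["infrastructure", "proxmox"]
}
```

If an entry titled "pmx-0102" already exists, remember flags the duplicate instead of silently creating a second entry; pass entry_id to update it, or force to create a new one anyway.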

How it Works

  • Markdown Files -- YAML front matter + content in entries/
  • Xapian Index -- full-text search with stemming in index/fr/
  • MCP Protocol -- stdio / SSE / HTTP transport layer
  • AI Agent -- Claude, ChatGPT, or any MCP client

The Markdown files are the single source of truth. Each entry is stored as a .md file with a UUID filename, YAML front matter (id, title, tags), and free-form Markdown content.
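For illustration, a minimal entry stored as entries/&lt;uuid&gt;.md might look like this (the UUID and values are invented; the front-matter keys id, title, and tags are the ones named above):

```markdown
---
id: 3f2c9a1e-0000-0000-0000-000000000000
title: pmx-0102
tags:
  - infrastructure
  - proxmox
---

Proxmox host in the main rack. Runs the PostgreSQL VM.
```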

The Xapian index is a performance cache built from these files. It can be deleted and fully rebuilt at any time using the rebuild tool -- no data is ever lost.

Engram communicates with your AI agent through the Model Context Protocol, which defines a standard way for AI tools to expose capabilities. Your agent discovers the available tools automatically and calls them as needed.
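Concretely, discovery is an ordinary MCP tools/list exchange over JSON-RPC 2.0. A response from the server carries one descriptor per tool, roughly like this (trimmed to a single tool with an abbreviated schema; the field names follow the MCP specification, the description text here is illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "remember",
        "description": "Create or update an entry",
        "inputSchema": { "type": "object" }
      }
    ]
  }
}
```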

Graph Relations

Entries can reference each other using kb://uuid#type links embedded in their Markdown content. The #type fragment defines the relation kind -- runs-on, depends-on, mirrors, or any string you choose. If omitted, the type defaults to related.

Example

This service runs on [pmx-0102](kb://a1b2c3d4-...#runs-on)
and depends on [PostgreSQL](kb://f9e8d7c6-...#depends-on).

What recall returns

When you recall an entry, the relations field includes both directions:

  • out -- outgoing links from this entry (e.g., runs-on, depends-on)
  • in -- incoming backlinks from other entries pointing here

Each relation includes the type, the target id, and the target title. This gives your AI agent a navigable knowledge graph without any dedicated graph database -- just Markdown files and conventions.
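As a sketch of how little machinery this convention needs, the following Python snippet extracts outgoing relations from an entry's Markdown body with a regular expression. This illustrates the link format described above, not Engram's actual parser; the function name and sample UUIDs are made up:

```python
import re

# Matches Markdown links of the form [label](kb://<uuid>#<type>).
# The "#type" fragment is optional and, per the convention above,
# defaults to "related" when omitted.
LINK_RE = re.compile(r"\(kb://([0-9a-fA-F-]+)(?:#([\w-]+))?\)")

def outgoing_relations(markdown: str) -> list[dict]:
    """Extract outgoing typed relations from an entry's Markdown body."""
    return [
        {"target": uuid, "type": rel_type or "related"}
        for uuid, rel_type in LINK_RE.findall(markdown)
    ]

body = (
    "This service runs on [pmx-0102](kb://a1b2c3d4-0000-0000-0000-000000000000#runs-on)\n"
    "and is [documented here](kb://f9e8d7c6-0000-0000-0000-000000000000)."
)
print(outgoing_relations(body))
```

Backlinks (the in direction) fall out of the same idea: scan every other entry's body for links whose target is this entry's id.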