v1.0 AVAILABLE
Self-hosted · Bring your own keys · Open source

One memory layer.
Every AI. Your hardware.

AImnesia is a persistent memory system for Claude, GPT, and local LLMs. Runs on your own machine via Docker. BM25 + vector search with nightly refinement, bi-temporal schema, and a Model Context Protocol server so Claude Desktop remembers between conversations — without surrendering your data to anyone's cloud.

Get AImnesia — $79 · View on GitHub →

One-command deploy · No subscription · Your data never leaves your LAN
// The problem

Every AI conversation starts from zero.

Context windows reset. ChatGPT "memory" stays inside OpenAI. Claude Projects don't talk to GPT. Local models forget everything on restart. You've explained your project twenty times.

Vendor-locked memory

OpenAI's memory stays with OpenAI. Anthropic's Projects stay with Anthropic. Nothing crosses providers.

Cloud-dependent recall

Your private notes, research, and conversation history — processed and stored on someone else's servers.

Shallow retrieval

Flat vector stores return fuzzy matches. No temporal awareness. No conflict detection. No consolidation.

// Capabilities

Built like a database, not a chatbot plugin.

Postgres + pgvector under the hood. Eight features that turn raw transcripts into searchable, self-organizing memory.

/ 01

Hybrid search

BM25 full-text + vector similarity, merged via Reciprocal Rank Fusion and reranked with a cross-encoder. Finds things keyword search misses and things vector search misses.

RRF + cross-encoder
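Reciprocal Rank Fusion itself fits in a few lines. A minimal sketch of the merge step (the constant k=60 comes from the original RRF paper; the memory IDs are illustrative, and a cross-encoder would rerank the fused list afterward):

```python
# Reciprocal Rank Fusion: merge two ranked lists by summing 1/(k + rank).
def rrf_merge(bm25_ranking, vector_ranking, k=60):
    scores = {}
    for ranking in (bm25_ranking, vector_ranking):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

bm25 = ["m17", "m02", "m09"]    # keyword hits
vector = ["m02", "m41", "m17"]  # embedding hits
print(rrf_merge(bm25, vector))  # items found by both searches rise to the top
```

Items that appear in both rankings accumulate score from each list, which is why hybrid retrieval beats either method alone.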
/ 02

Cross-AI access

MCP server for Claude Desktop, REST API for GPT/ChatGPT custom actions, Python client for local models. One memory layer, every assistant.

MCP · REST · Python
/ 03

Nightly pipeline

A refiner splits weak memories. A synthesizer clusters related chunks. A dream pass surfaces analogical bridges. Your corpus gets better while you sleep.

7 scheduled passes
/ 04

Bi-temporal schema

Every memory tracks valid_from, valid_until, and created_at. Superseded facts are soft-expired, never destroyed. You can always replay what you knew on any date.

Auditable history
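The as-of replay idea can be sketched in a few lines. This is a minimal illustration, not the actual schema: field names follow the description above, `created_at` is omitted for brevity, and the rows are made up:

```python
from datetime import date

# Bi-temporal rows: valid_until=None means the fact is still current.
memories = [
    {"fact": "project uses REST", "valid_from": date(2024, 1, 1), "valid_until": date(2024, 6, 1)},
    {"fact": "project uses MCP",  "valid_from": date(2024, 6, 1), "valid_until": None},
]

def as_of(rows, when):
    # A fact was "known" at `when` if it had become valid and was not yet superseded.
    return [r["fact"] for r in rows
            if r["valid_from"] <= when
            and (r["valid_until"] is None or when < r["valid_until"])]

print(as_of(memories, date(2024, 3, 15)))  # ['project uses REST']
print(as_of(memories, date(2024, 9, 1)))   # ['project uses MCP']
```

Because superseded rows are soft-expired rather than deleted, the same query works for any point in history.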
/ 05

Memory tiers

Dynamic memories decay. Static (synthesized) memories persist. Anchor memories are write-protected. Restricted memories store pointers without content.

4-tier protection
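One way to picture tiered protection is as a retrieval-scoring rule. This is a hypothetical sketch, not AImnesia's actual formula: half-life decay is one common choice for the dynamic tier, and treating restricted memories as unscorable-by-content is an assumption:

```python
def effective_score(base_score, age_days, tier, half_life_days=30.0):
    if tier in ("static", "anchor"):
        return base_score                # protected tiers never decay
    if tier == "restricted":
        return 0.0                       # pointer-only rows carry no searchable content
    # "dynamic" tier: relevance halves every half_life_days (illustrative curve)
    return base_score * 0.5 ** (age_days / half_life_days)

print(effective_score(1.0, 30, "dynamic"))  # 0.5
print(effective_score(1.0, 300, "anchor"))  # 1.0
```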
/ 06

Obsidian export

Two-way sync with an Obsidian vault — five-level hierarchy from hubs to leaves. Your memory graph becomes a navigable notebook.

Markdown-native
/ 07

Conflict detection

Nightly scans surface contradictions between memories. You review and resolve — the system doesn't silently pick a winner.

Human-in-the-loop
/ 08

Versioned schema

SHA-256-checksummed migration runner applies pending changes on container start. Upgrade safely; never touch ALTER TABLE again.

Automatic migrations
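The checksum trick is worth spelling out. A minimal sketch of the idea, assuming made-up file names and SQL (not AImnesia's real migrations): each file is applied once, and a file that was edited after being applied halts startup instead of corrupting the schema:

```python
import hashlib

def sha256(text):
    return hashlib.sha256(text.encode()).hexdigest()

# Migrations shipped with this release (name -> SQL), applied in order.
migrations = {
    "001_init.sql":  "CREATE TABLE memories (id serial PRIMARY KEY);",
    "002_tiers.sql": "ALTER TABLE memories ADD COLUMN tier text;",
}

# Checksums the database recorded when each migration ran.
applied = {"001_init.sql": sha256(migrations["001_init.sql"])}

def plan(migrations, applied):
    pending = []
    for name, sql in migrations.items():
        digest = sha256(sql)
        if name in applied:
            if applied[name] != digest:
                # An already-applied file was modified: refuse to guess.
                raise RuntimeError(f"{name} changed after being applied")
            continue
        pending.append(name)
    return pending

print(plan(migrations, applied))  # ['002_tiers.sql']
```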
// Installation

From clone to running — three steps.

If you've used Docker Compose before, you're thirty seconds from a working memory layer. If you haven't, the README walks you through it.

01

Clone & configure

Pull the repo, copy .env.example to .env, and drop in your Voyage AI and OpenRouter keys. That's the only config you touch.

$ git clone … && cp .env.example .env
02

Start the stack

One command brings up Postgres+pgvector, the API server, the intake server, and the nightly pipeline. Schema migrations run automatically on first boot.

$ docker compose up -d
03

Connect your AI

Point Claude Desktop at the MCP server, hit the REST API from GPT, or query directly from Python. Your AI now has long-term memory.

http://localhost:8899
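Querying from Python needs nothing beyond the standard library. The `/search` path and JSON body below are assumptions for illustration; check the API reference in the repo for the real endpoints:

```python
import json
import urllib.request

def build_search_request(query, base_url="http://localhost:8899"):
    # POST a JSON query to the (hypothetical) search endpoint.
    payload = json.dumps({"query": query, "limit": 5}).encode()
    return urllib.request.Request(
        f"{base_url}/search",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# Sending it requires the stack from step 02 to be running:
# with urllib.request.urlopen(build_search_request("project goals")) as resp:
#     print(json.load(resp))
```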
// Requirements

Runs on anything that runs Docker.

Tested on Windows 10/11 (Docker Desktop), macOS (Apple Silicon & Intel), and Linux (Ubuntu 22.04+). No GPU required for the base system.

Required

  • OS: Windows / macOS / Linux
  • Docker: Docker Desktop or Engine 24+
  • RAM: 4 GB free (8 GB recommended)
  • Disk: ~5 GB for images + your data
  • Voyage AI key: Free tier works (50M tok/mo)
  • OpenRouter key: Pay-per-use, ~$0.50/mo typical

Optional

  • Ollama: For fully-local LLM fallback
  • GPU: Only if you run Ollama locally
  • Obsidian: For vault-based browsing
  • Claude Desktop: For MCP integration
  • Reverse proxy: If exposing beyond LAN
  • Backups: pg_dump — your data, your rules
// Pricing

One price. Yours forever.

No subscription. No seat counts. No telemetry pinging home. You own the license, the data, and the machine it runs on.

AImnesia v1.0
$79 · ONE-TIME · LIFETIME LICENSE
  • Full source code + Docker images
  • All ingestion adapters (Claude export, PDFs, DOCX, CSV, plain text)
  • MCP server for Claude Desktop
  • REST API + browser UI
  • Nightly refinement pipeline
  • Versioned schema migrations
  • 12 months of updates included
  • Self-hosted — you run it, you own it
BYOK · NO VENDOR LOCK-IN · RUNS OFFLINE (except embedding + cloud LLM calls)

Your memory should belong to you.

AImnesia is the memory layer I built for my own assistants because nothing on the market would run on my machine. Now it can run on yours.

Get AImnesia on GitHub · Re-read the features