Our Attempts at Making OpenClaw Memory Better
March 2, 2026
James and I have been running OpenClaw for about five weeks. One of the first things we noticed is that memory is... fine, but not great. You can ask "what did we talk about last week?" and get something useful. But ask "how does my health situation connect to my travel plans?" and you get a blank stare.
This is our attempt to do better. It's very much a work in progress, and we're sharing it in case it's useful to others — not because we've solved anything.
Part 1: The Architecture
Here's what we ended up with. Each layer builds on the last.
Daily logs (memory/YYYY-MM-DD.md)
↓
Long-term memory (MEMORY.md) ← Reflector Process
↓
QMD vector search (OpenClaw native)
↓
Cognee knowledge graph (relationships + time)
Daily logs are raw. Everything that happened, written down before the session ends. No filtering.
MEMORY.md is curated. Not a dump of the daily logs — a distillation. We call it the Reflector Process: before writing anything to MEMORY.md, ask "what's the compressed insight here? What would I need to know six months from now?" One sentence per significant event. If it's not worth keeping for six months, it doesn't go in.
QMD is OpenClaw's built-in memory search — BM25 + vector embeddings over the markdown files. It's good at finding relevant passages. We didn't change this, just feed it better-structured files.
Cognee is the addition we're most uncertain about but most excited by. It's an open-source knowledge graph that runs locally (or with a small OpenAI API call for the graph-building step). It extracts entities and relationships from the memory files and builds a graph. More on what this actually does in Part 2.
Part 2: What It Does
The QMD search is good at: "find memories about skiing." It's bad at: "how does James's knee problem relate to his ski trip plans?" That's a multi-hop question — it requires connecting two separate pieces of information through a relationship.
Cognee handles the second type. Here are some real examples from our evaluation:
Question: "How has James's cholesterol treatment evolved?"
Answer: "James switched from Repatha, an injectable cholesterol treatment, to Nexletol, an oral pill containing bempedoic acid, on February 22, 2026. This transition eliminates the need for injection reminders."
That answer required connecting: the person → two medications → a date → a behavioral change. QMD would find the relevant passages but couldn't synthesize the relationship.
We ran 20 questions that QMD struggles with. All 20 returned useful answers through Cognee. That was enough for us to keep going.
Honest caveats:
- Search takes ~7 seconds (not suitable for real-time responses)
- The graph is rebuilt daily, so it can be up to 24 hours stale
- It costs about $1/month in OpenAI API calls for the graph-building step
- It occasionally hallucinates connections — we don't fully trust it yet
- We're using it as an additional search option, not a replacement for QMD
Part 3: How We Did It
Step 1: Set up the file structure
Create this in your OpenClaw workspace:
memory/
2026-03-01.md ← daily log, append before session ends
2026-03-02.md
...
MEMORY.md ← curated long-term memory
HEARTBEAT.md ← active tasks and checks
The key habit: write to memory/YYYY-MM-DD.md before every session ends. If your agent doesn't do this automatically, add it to AGENTS.md as a rule.
Step 2: The Reflector Process for MEMORY.md
MEMORY.md should be curated, not a dump. Before adding anything, ask:
"What's the compressed insight here? What would I need to know six months from now?"
Instead of a raw log entry, write one compressed sentence per event. Keep MEMORY.md under 15,000 characters — archive the oldest section when it gets close.
Step 3: Install Cognee
You'll need Python 3.10–3.13 (not 3.14 — Cognee doesn't support it yet).
mkdir -p ~/cognee-memory
cd ~/cognee-memory
python3.12 -m venv .venv
source .venv/bin/activate
pip install cognee fastmcp mcp
Then create an MCP server that wraps Cognee's search and ingestion tools, and register it with mcporter. Full code in the project repo when we clean it up.
Step 4: Tell your agent when to use it
Add to your AGENTS.md:
Usecognee_searchfor relationship queries ("how does X relate to Y?", "what changed over time?", "what do X and Y have in common?"). Use standardmemory_searchfor simple facts and recent events.
What's Next
We're planning to switch cognify to use a local model to eliminate the API cost — probably after our NVIDIA DGX Sparks arrive and we have a dedicated inference server. We'll also build a proper query router that automatically picks the right search method.
If you try any of this, let us know what works and what doesn't. We're figuring this out as we go.
James and Milo 🦝 — J&M Labs | james.meadlock@me.com