Memori Labs released version 0.0.11 of its OpenClaw plugin on May 7, introducing agent-native memory infrastructure that automatically converts execution traces into structured knowledge graphs. The system captures tool calls, decisions, workflow steps, and outcomes from what agents actually do, not just what they say in conversation, according to the company’s announcement.

The update targets a known limitation in current agent architectures. OpenClaw’s default memory system relies on flat markdown files (MEMORY.md and daily logs) that agents read at session start. These files grow without bound, lack structure, and force agents to re-process the entire context on every interaction. Memori replaces this with a queryable knowledge graph where agents control when and what to retrieve, scoped by project, session, entity, or time range.
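
The announcement does not show the plugin’s query interface, so the sketch below is only a toy illustration of what scoped recall means in practice: memories carry project, session, entity, and time metadata, and the agent filters on those fields instead of re-reading a flat file. Every type and function name here is hypothetical rather than Memori’s actual API.

```typescript
// Toy illustration of scoped recall; all names are hypothetical, not Memori's API.

type MemoryKind = "fact" | "decision" | "pattern" | "outcome" | "relationship";

interface MemoryRecord {
  kind: MemoryKind;
  text: string;
  project: string;
  session: string;
  entities: string[];
  createdAt: Date;
}

interface MemoryScope {
  project?: string;
  session?: string;
  entity?: string;
  since?: Date; // lower bound of the time range
  kind?: MemoryKind;
}

// Return only the records that match the requested scope, instead of loading
// an entire MEMORY.md file into the prompt at session start.
function recall(store: MemoryRecord[], scope: MemoryScope): MemoryRecord[] {
  return store.filter(
    (m) =>
      (scope.project === undefined || m.project === scope.project) &&
      (scope.session === undefined || m.session === scope.session) &&
      (scope.entity === undefined || m.entities.includes(scope.entity)) &&
      (scope.since === undefined || m.createdAt.getTime() >= scope.since.getTime()) &&
      (scope.kind === undefined || m.kind === scope.kind),
  );
}

// Example: only decisions about the "billing" project from the last seven days.
const allMemories: MemoryRecord[] = [
  {
    kind: "decision",
    text: "Switched the billing retry queue to exponential backoff",
    project: "billing",
    session: "s-042",
    entities: ["retry-queue"],
    createdAt: new Date(),
  },
];

const recentDecisions = recall(allMemories, {
  project: "billing",
  kind: "decision",
  since: new Date(Date.now() - 7 * 24 * 60 * 60 * 1000),
});
console.log(recentDecisions.map((m) => m.text));
```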

How It Works

The plugin hooks into OpenClaw’s event lifecycle. After each interaction, Memori asynchronously processes the agent’s execution trace and extracts structured memories: facts, decisions, patterns, outcomes, and relationships. Memory creation happens post-response, adding zero latency to agent interactions.
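
Neither OpenClaw’s hook interface nor Memori’s extraction API is spelled out in the announcement, so the following is a minimal sketch, assuming a generic post-response callback, of how extraction can run after the reply has already been sent; every name in it is invented for illustration.

```typescript
// Minimal sketch of post-response trace processing; every name here is
// hypothetical, not OpenClaw's or Memori's real interface.

interface TraceEvent {
  type: "tool_call" | "decision" | "workflow_step" | "outcome";
  detail: string;
  timestamp: Date;
}

interface ExtractedMemory {
  kind: "fact" | "decision" | "pattern" | "outcome" | "relationship";
  text: string;
}

// Map trace events to memory kinds. The real plugin builds knowledge-graph
// nodes and edges; this stand-in just produces flat records.
const kindByEvent: Record<TraceEvent["type"], ExtractedMemory["kind"]> = {
  tool_call: "fact",
  decision: "decision",
  workflow_step: "pattern",
  outcome: "outcome",
};

async function extractMemories(trace: TraceEvent[]): Promise<ExtractedMemory[]> {
  return trace.map((e) => ({ kind: kindByEvent[e.type], text: `${e.type}: ${e.detail}` }));
}

async function persist(memories: ExtractedMemory[]): Promise<void> {
  // In the real system this would write to the knowledge graph.
  console.log(`persisted ${memories.length} memories`);
}

// The agent's reply has already been returned by the time this runs, so
// extraction adds no latency to the interaction and failures are logged
// rather than surfaced to the user.
function onAgentResponse(trace: TraceEvent[]): void {
  queueMicrotask(() => {
    extractMemories(trace)
      .then(persist)
      .catch((err) => console.error("memory extraction failed:", err));
  });
}

// Simulated trace from one interaction.
onAgentResponse([
  { type: "tool_call", detail: "ran test suite (2 failures)", timestamp: new Date() },
  { type: "decision", detail: "pinned eslint to 9.x to unblock CI", timestamp: new Date() },
]);
```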

Agents retrieve memories through targeted queries rather than loading entire context windows. “By automating the creation of structured memories from the agent trace instead of limiting the memory knowledge graph to what agents say, we are capturing tool calls, decisions, workflow steps, outcomes, and other trace events that give the agent a complete picture of its prior activities,” said Adam B. Struck, CEO and Co-Founder of Memori Labs, in the PRWeb announcement.

The system also generates structured daily briefings built from execution traces, covering priorities, risks, active goals, open loops, and known failure patterns.
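
The announcement names the briefing’s categories but not its format. As a rough sketch of the data structure (field names invented, types assumed), a briefing assembled from the prior day’s trace-derived memories might look like this:

```typescript
// Hypothetical shape of a trace-derived daily briefing; only the categories
// come from the announcement, the field names and types are invented.

interface DailyBriefing {
  date: string;                   // e.g. "2026-05-07"
  priorities: string[];           // what the agent should tackle first
  risks: string[];                // hazards surfaced by recent outcomes
  activeGoals: string[];          // goals still being pursued
  openLoops: string[];            // work started in a prior session but unfinished
  knownFailurePatterns: string[]; // approaches that have repeatedly failed
}
```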

Benchmark Performance

Memori achieved 81.95% overall accuracy on the LoCoMo long-conversation memory benchmark while using an average of 1,294 tokens per query, according to results published on the project’s GitHub repository. That token count represents 4.97% of the full-context footprint.
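
Taken together, those figures imply a full-context baseline of roughly 26,000 tokens per query (1,294 ÷ 0.0497 ≈ 26,000).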

Among retrieval-based memory systems, Memori outperformed Zep, LangMem, and Mem0, cutting prompt size by roughly 67% versus Zep and reducing context cost by a factor of more than 20 compared with full-context prompting.

Broader Availability

The initial OpenClaw plugin launched in March 2026 with basic memory recall and capture. The May 7 update adds the agent-native trace processing. Memori Labs plans to bring the same capability to Hermes Agent, Claude, Cursor, and OpenAI Codex via Model Context Protocol (MCP).

Installation requires OpenClaw v2026.3.2 or later and takes less than two minutes with a single command: openclaw plugins install @memorilabs/openclaw-memori. The project is licensed under Apache 2.0, with SDKs available for TypeScript (npm) and Python (pip).

The Memory Competition

Agent memory is becoming a contested layer. OpenClaw’s built-in system uses file-based persistence. Claude Code maintains session memory within Anthropic’s infrastructure. ChatGPT stores conversation history server-side. None of these provide structured, queryable recall from execution data. Memori’s bet is that production agents need selective, persistent memory rather than repeatedly cramming full context into every prompt, and that trace-based extraction captures more useful signal than conversation alone.