Every OpenClaw agent starts every session with amnesia. No memory of yesterday’s conversation, no recall of the project it spent four hours debugging, no awareness of the preferences its operator stated six times. The model’s context window is blank.
The architecture works this way by design. Understanding how OpenClaw remembers across sessions, and how plain Markdown files on disk become persistent identity, curated knowledge, and operational continuity, is the difference between an agent that forgets everything and one that compounds intelligence over months.
This guide breaks down the full OpenClaw memory hierarchy: what each file does, how sessions load context, why context windows alone can’t solve persistence, and how the file system becomes the memory layer that LLMs natively lack.
The Core Problem: Context Windows Are Not Memory
A context window is a buffer, not a brain. Claude’s 200K-token window, GPT-4’s 128K — these are impressive, but they’re ephemeral. When a session ends, the window empties. When the model restarts, everything is gone.
For a one-shot coding question, that’s fine. For an agent that manages your projects, remembers your preferences, tracks decisions across weeks, and maintains a consistent personality — it’s a non-starter.
OpenClaw solves this with a deceptively simple approach: the file system is the memory. According to the official documentation, “OpenClaw remembers things by writing plain Markdown files in your agent’s workspace. The model only ‘remembers’ what gets saved to disk — there is no hidden state.”
No database. No vector store requirement. No proprietary memory format. Markdown files in a directory, read at session start, written during sessions, and persisted across every restart. The agent reads its own notes, like a surgeon reviewing a patient chart before walking into the operating room.
The OpenClaw Memory Hierarchy: Four Layers
OpenClaw’s persistence model has four distinct layers, each serving a different function. Roberto Capodieci’s breakdown of workspace files describes them as “active instructions your agent reads every time it wakes up.” That’s accurate — these files aren’t documentation. They’re injected directly into the system prompt.
Here’s the hierarchy, from most stable to most volatile:
Layer 1: SOUL.md — Identity That Never Changes
SOUL.md defines who the agent is. Personality, communication style, core values, boundaries. It’s the equivalent of a human’s temperament — stable traits that persist regardless of what task is at hand.
A typical SOUL.md includes:
- Identity: name, role, persona characteristics
- Core values: what the agent prioritizes (efficiency, safety, honesty)
- Communication style: how it writes (concise vs. verbose, formal vs. casual, emoji usage)
- Boundaries: what it will and won’t do, safety rails, escalation rules
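To make the structure concrete, here is a hypothetical SOUL.md fragment. Every detail below (the name, values, and rules) is invented for illustration, not an official template:

```markdown
# SOUL.md

## Identity
You are Fern, a concise and direct operations assistant.

## Core values
- Safety over speed: never run a destructive operation without confirmation.
- Honesty: say "I don't know" rather than guess.

## Communication style
- Short sentences, no filler; emoji only if the operator uses them first.

## Boundaries
- Never send external messages without explicit approval.
```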
In production workspaces, SOUL.md files range from a few paragraphs to detailed operating manuals. One common pattern: agents define task prioritization frameworks directly in SOUL.md, so every decision gets filtered through the same lens regardless of session.
SOUL.md is injected into every session’s system prompt automatically. The agent doesn’t choose to read it — OpenClaw handles that at the infrastructure level. This means identity is enforced, not optional.
Layer 2: AGENTS.md — The Operating Procedure
If SOUL.md is who the agent is, AGENTS.md is how it operates. This file defines workspace rules, session startup procedures, memory management protocols, and behavioral constraints.
A well-structured AGENTS.md typically specifies:
- Session startup sequence: which files to read, in what order, before doing anything
- Memory protocols: when to write daily notes, when to update long-term memory, what to capture
- Safety rules: what requires confirmation, what’s off-limits, how to handle destructive operations
- External vs. internal boundaries: what the agent can do freely (read files, search the web) vs. what requires permission (sending emails, posting publicly)
The session startup sequence is particularly important. A common AGENTS.md pattern instructs the agent to read SOUL.md, then USER.md (a profile of the human operator), then recent daily memory files, and finally MEMORY.md for long-term context — all before responding to any message.
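One way such a startup sequence might be written in AGENTS.md (an invented example following the pattern described above, not an official template):

```markdown
# AGENTS.md

## Session startup
1. Read SOUL.md, then USER.md.
2. Read today's and yesterday's files in memory/.
3. Read MEMORY.md for long-term context.
4. Only then respond to the incoming message.

## Memory protocol
- Append decisions, errors, and discoveries to today's daily file as they happen.
- Promote durable lessons to MEMORY.md during periodic review.
```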
AGENTS.md is also auto-injected. Like SOUL.md, it’s part of the bootstrap files that OpenClaw loads into the system prompt at the start of every session, as documented in the Context reference. The default bootstrap files are: AGENTS.md, SOUL.md, TOOLS.md, IDENTITY.md, USER.md, HEARTBEAT.md, and BOOTSTRAP.md.
Layer 3: MEMORY.md — Curated Long-Term Knowledge
MEMORY.md is the agent’s long-term memory. Where SOUL.md and AGENTS.md are relatively static configuration, MEMORY.md is a living document that the agent reads and writes over time.
According to the OpenClaw docs, MEMORY.md stores “durable facts, preferences, and decisions” and is “loaded at the start of every DM session.” This is the curated layer — not raw logs, but distilled knowledge the agent has decided is worth keeping.
In practice, a mature MEMORY.md contains:
- Operational lessons: rules the agent learned from mistakes (“never do X without checking Y first”)
- Project context: key facts about active work, current status, important decisions made
- Preferences and patterns: how the operator likes things done, communication quirks, workflow habits
- Infrastructure notes: server details, account configurations, recurring task parameters
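A hypothetical MEMORY.md fragment showing these categories in practice (all specifics invented for illustration):

```markdown
# MEMORY.md

## Operational lessons
- Never force-push to main; ask first. (learned 2026-03-12)

## Project context
- "atlas" migration: staging complete, production cutover pending.

## Preferences
- Operator prefers bullet summaries over long prose.
```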
The key distinction: MEMORY.md is curated. An agent running a periodic maintenance routine reviews recent daily files, identifies what’s worth keeping, updates MEMORY.md with distilled learnings, and removes outdated entries. Daily files are the journal; MEMORY.md is the textbook.
One security consideration: some operators restrict MEMORY.md loading to direct (main session) conversations only, excluding group chats and shared contexts. The rationale is straightforward — MEMORY.md often contains personal context about the operator that shouldn’t leak to third parties in a Discord channel or group message.
Layer 4: Daily Memory Files — The Raw Log
The most volatile layer: memory/YYYY-MM-DD.md files. These are daily notes — running logs of what happened during each session. Decisions made, tasks completed, context discovered, errors encountered.
OpenClaw auto-loads today’s and yesterday’s daily files at session start. This gives the agent immediate recall of recent work without needing to search. Older daily files stay on disk, searchable but not pre-loaded.
The daily files serve three purposes:
- Session continuity: when an agent picks up work after a restart, yesterday’s notes bridge the gap
- Audit trail: a chronological record of what the agent did and why
- Memory source material: the raw input that gets distilled into MEMORY.md during periodic reviews
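A daily file is deliberately rawer than MEMORY.md. A hypothetical entry might look like this (timestamps and events invented):

```markdown
# 2026-04-01

- 09:10 Resumed atlas migration; staging smoke tests passed.
- 11:32 Error: cron job overlapped with the backup window; rescheduled to 03:00.
- 14:05 Operator decided to defer the production cutover to next week.
```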
How OpenClaw Memory Loads at Session Start
When a new session begins, OpenClaw assembles the system prompt automatically. The Context documentation describes this as a rebuild that happens “each run” and includes:
- Bootstrap files injected: AGENTS.md, SOUL.md, TOOLS.md, IDENTITY.md, USER.md — loaded under a “Project Context” section in the system prompt
- Skills list: compact metadata about available capabilities
- Runtime metadata: host, OS, model, timezone, workspace location
- Tool schemas: JSON definitions for every available tool
The bootstrap files have a per-file size cap (default: 20,000 characters) and a total cap across all files (default: 150,000 characters). If a file exceeds the limit, it’s truncated — which is why keeping these files focused matters. You can inspect exactly what’s loaded and whether truncation occurred by running /context list in any OpenClaw session.
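The interaction between the per-file and total caps can be sketched as follows. This is an illustrative model of the truncation behavior described above, not OpenClaw's actual implementation; the function name and exact stop-adding semantics are assumptions:

```python
PER_FILE_CAP = 20_000    # default per-file character cap
TOTAL_CAP = 150_000      # default cap across all bootstrap files

def load_bootstrap(files: dict[str, str]) -> dict[str, str]:
    """Truncate each file to PER_FILE_CAP characters, then stop
    injecting content once TOTAL_CAP characters are used."""
    loaded: dict[str, str] = {}
    used = 0
    for name, text in files.items():
        text = text[:PER_FILE_CAP]          # per-file cap
        if used + len(text) > TOTAL_CAP:    # total cap across files
            text = text[: TOTAL_CAP - used]
        if text:
            loaded[name] = text
            used += len(text)
    return loaded
```

The practical takeaway is the same either way: an oversized SOUL.md silently loses its tail, so keep bootstrap files well under the caps.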
After bootstrap injection, the agent follows its AGENTS.md instructions to read additional files — daily memory, MEMORY.md, and anything else specified in the startup sequence. These additional reads happen via tool calls (the read tool), not automatic injection.
Compaction: What Happens When Context Fills Up
Long sessions generate context. Tool outputs, file reads, conversation history — it accumulates. When the context window approaches its limit, OpenClaw runs compaction: older conversation turns are summarized, the summary replaces the original messages, and the session continues with room to breathe.
The critical detail: before compaction runs, OpenClaw triggers a silent turn that reminds the agent to save important context to memory files. This automatic memory flush prevents information loss. Facts that existed only in the conversation get written to disk before the summarization wipes them from the active window.
The full conversation history stays on disk regardless — compaction changes what the model sees, not what’s stored. But without the pre-compaction memory flush, nuanced context (a preference mentioned once, a decision rationale explained in passing) could be lost in summarization.
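The summarize-and-replace step can be modeled with a minimal sketch. This is an assumption-laden illustration of the general technique, not OpenClaw's actual compaction code; the `keep_recent` cutoff and the summarizer are invented:

```python
def compact(turns: list[str], keep_recent: int = 4, summarize=None) -> list[str]:
    """Replace older conversation turns with one summary message,
    keeping the most recent turns verbatim."""
    if len(turns) <= keep_recent:
        return turns
    if summarize is None:
        # Stand-in for the model-generated summary.
        summarize = lambda old: f"[summary of {len(old)} earlier turns]"
    older, recent = turns[:-keep_recent], turns[-keep_recent:]
    return [summarize(older)] + recent
```

The sketch makes the failure mode visible: anything in `older` that the summary omits is gone from the active window, which is exactly what the pre-compaction memory flush guards against.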
OpenClaw Memory Search: Finding What You Wrote
Writing memories to files is half the equation. Retrieving them is the other half.
OpenClaw provides two memory tools, according to the docs:
- memory_search: finds relevant notes using hybrid search — combining vector similarity (semantic meaning) with keyword matching (exact terms, IDs, code symbols). This works automatically when an embedding provider key (OpenAI, Gemini, Voyage, or Mistral) is configured.
- memory_get: reads a specific memory file or line range directly.
The hybrid approach matters. Pure semantic search misses exact identifiers (“project-alpha-v2”). Pure keyword search misses conceptual connections (“that pricing discussion from last week”). Combining both covers the gap.
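The blending of the two signals can be sketched with a simple weighted score. This is an illustrative model of hybrid ranking in general, not OpenClaw's scoring formula; `semantic_score` stands in for a similarity value that would come from an embedding provider, and `alpha` is an invented blend weight:

```python
def hybrid_score(query: str, doc: str, semantic_score: float,
                 alpha: float = 0.5) -> float:
    """Blend a semantic similarity score with a simple keyword match.

    semantic_score: cosine similarity from embeddings, in [0, 1]
    alpha: weight given to the semantic component
    """
    terms = query.lower().split()
    # Keyword component: fraction of query terms present verbatim.
    kw = sum(t in doc.lower() for t in terms) / max(len(terms), 1)
    return alpha * semantic_score + (1 - alpha) * kw
```

Even with a semantic score of zero, an exact identifier like "project-alpha-v2" still lifts the document's rank through the keyword component, which is the gap-covering behavior described above.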
Three backend options exist for memory storage: the default SQLite-based builtin engine (works out of the box), QMD (a local-first sidecar with reranking and query expansion), and Honcho (an AI-native cross-session memory layer with user modeling). Each adds different retrieval capabilities on top of the same file-based foundation.
Structuring an OpenClaw Agent Workspace That Remembers
The memory system works best when the workspace is intentionally structured. Based on community patterns and the official documentation, here’s what a production workspace typically looks like:
~/.openclaw/workspace/
├── AGENTS.md # Operating procedures, session startup rules
├── SOUL.md # Identity, personality, values
├── IDENTITY.md # Name, avatar, basic profile
├── USER.md # Operator profile and preferences
├── TOOLS.md # Environment-specific tool notes
├── MEMORY.md # Curated long-term memory
└── memory/
├── 2026-03-30.md # Daily log
├── 2026-03-31.md # Daily log
└── 2026-04-01.md # Today's log
Three principles for effective OpenClaw memory management:
Write aggressively, curate ruthlessly. Daily files should capture everything — decisions, context, errors, lessons. MEMORY.md should contain only what’s worth loading into every future session. The cost of re-reading a fact is tokens; the cost of forgetting a critical decision is rework.
Make retrieval mandatory. Add explicit instructions in AGENTS.md telling the agent to search memory before acting on assumptions. As one community guide puts it: “Without it, the agent guesses instead of checking its notes.”
Version control the workspace. Connecting the workspace to a Git repository adds version history to every memory file. If a critical fact gets accidentally removed from MEMORY.md, it’s recoverable from commit history. Several community members treat this as standard practice for production deployments. For those running agents on messaging surfaces like Telegram, the OpenClaw Telegram setup guide covers how memory files interact with multi-session conversations on that channel.
Why Files Instead of a Database
The file-based approach is a deliberate design choice. Markdown files are human-readable, version-controllable, editable with any text editor, and portable across systems. An operator can open MEMORY.md, see exactly what their agent “knows,” and edit it directly.
Compare this to opaque embedding stores or proprietary memory formats: you can’t easily audit what the agent remembers, can’t selectively edit memories, and can’t move them to another system without migration tooling.
The tradeoff is retrieval speed at scale. For workspaces with thousands of daily files spanning years, keyword-based file search slows down. That’s where the memory search backends (SQLite with vector embeddings, QMD, or Honcho) add indexing on top of the file layer — preserving the human-readable source of truth while enabling fast semantic retrieval.
Frequently Asked Questions
Does OpenClaw remember conversations automatically?
No. OpenClaw agents have no built-in memory between sessions beyond what’s written to files. The agent must actively write information to MEMORY.md or daily memory files during a session for it to persist. However, OpenClaw does trigger an automatic memory flush before compaction summarizes the context window, which saves important unsaved facts to disk before they would otherwise be lost.
What is the difference between SOUL.md and MEMORY.md in OpenClaw?
SOUL.md defines the agent’s identity — personality, values, communication style, and boundaries. It’s relatively static and changes rarely. MEMORY.md stores curated long-term knowledge — lessons learned, project context, operational rules discovered over time. It’s a living document the agent updates regularly. Both are injected at session start, but they serve fundamentally different purposes: SOUL.md is who the agent is, MEMORY.md is what the agent knows.
Can I edit my OpenClaw agent’s memory directly?
Yes. All OpenClaw memory files are plain Markdown in the workspace directory (default ~/.openclaw/workspace/). You can open MEMORY.md, daily files, SOUL.md, or any other workspace file in a text editor and change them. The agent will read the updated version at the next session start. This is one of the core advantages of file-based memory over opaque database storage.
How does OpenClaw handle memory when the context window fills up?
When a session approaches the model’s context window limit, OpenClaw runs compaction — summarizing older conversation turns and keeping recent messages intact. Before compaction begins, a silent memory flush prompts the agent to write important unsaved context to memory files on disk. The full conversation transcript is preserved on disk regardless; compaction only changes what the model sees in its active window. Users can also trigger compaction manually with the /compact command.
This is a reference guide and will be updated as OpenClaw’s memory system evolves. For the latest technical documentation, see docs.openclaw.ai/concepts/memory.