
Claude Managed Agents Memory: Cross-Session Guide
Summary
Build Claude agents that remember users across sessions with filesystem-based memory stores.
Anthropic just shipped Memory for Managed Agents as a public beta on the Claude Platform. It plugs a real filesystem into your hosted agent so it can write notes during one session and read them back in the next. If you have ever asked yourself “how do I make my Claude agent remember this user tomorrow?” — this is the answer, and it ships without you running a vector database or a Redis instance.
In this guide you will create a memory store, pre-load it with reference docs, attach it to a session, watch the agent write its own memories, then start a fresh session and see those memories survive. End-to-end in under 20 lines of Python.
Prerequisites
- Python 3.10+ and an Anthropic API key with Managed Agents access
- `pip install anthropic` (1.45+ has the `beta.memory_stores` namespace)
- Familiarity with managed agents — you have created an `agent` and an `environment` at least once
- About 10 minutes
Step 1 — Create the client
Memory APIs live under client.beta. The SDK auto-injects the required beta header for you.
```python
import os

from anthropic import Anthropic

client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
```
Step 2 — Create a memory store
A memory store is a versioned, workspace-scoped collection of text files. When a session starts, the store is mounted as a directory and Claude reads/writes it with the same file tools it already knows.
```python
store = client.beta.memory_stores.create(
    name="User Preferences",
    description="Per-user preferences, prior issues, and project context.",
)
print(store.id)
# memstore_01HxZ8K9...
```
The description is appended to the agent’s system prompt at session start, so write it like an instruction — the agent will use it to decide what is worth remembering.
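Since the description lands in the system prompt, a directive phrasing tends to work better than a passive label. A minimal sketch — the wording below is illustrative, not required by the API:

```python
# A directive-style description: tells the agent *when* and *what* to save.
# The exact wording is an example — adapt it to your own product.
MEMORY_DESCRIPTION = (
    "Store durable facts about the current user: name, company, locale, "
    "formatting preferences, and unresolved issues. Update existing files "
    "rather than appending duplicates. Never store secrets or payment data."
)

# Pass it at store creation, e.g.:
# store = client.beta.memory_stores.create(
#     name="User Preferences", description=MEMORY_DESCRIPTION)
```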
Step 3 — Pre-load reference material (optional)
You can seed the store with files the agent should treat as authoritative — brand voice rules, formatting standards, or per-tenant context. Each memory is capped at 100 KB (~25K tokens).
```python
client.beta.memory_stores.memories.create(
    store.id,
    path="/standards/formatting.md",
    content=(
        "# Output formatting\n"
        "- Always reply in Markdown.\n"
        "- Use ISO dates (YYYY-MM-DD).\n"
        "- Prefer SI units.\n"
    ),
)
```
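The 100 KB cap is easy to hit with transcripts, so it can help to check the payload size before uploading. A small sketch — the helper is my own, not part of the SDK:

```python
MAX_MEMORY_BYTES = 100 * 1024  # 100 KB cap per memory, per the limit above


def check_memory_size(content: str) -> int:
    """Return the UTF-8 size of `content`, raising if it exceeds the cap."""
    size = len(content.encode("utf-8"))
    if size > MAX_MEMORY_BYTES:
        raise ValueError(
            f"Memory payload is {size} bytes; cap is {MAX_MEMORY_BYTES}. "
            "Summarize or shard across multiple paths instead."
        )
    return size


# Guard the upload:
# check_memory_size(content)
# client.beta.memory_stores.memories.create(store.id, path=..., content=content)
```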
Step 4 — Attach the store to a session
Memory stores can only be attached at session creation — you cannot hot-swap them on a running session. Pass them in the resources array with read_write access (the default).
```python
# AGENT_ID and ENVIRONMENT_ID come from the agent and environment
# you created earlier (see Prerequisites).
session = client.beta.sessions.create(
    agent=AGENT_ID,
    environment_id=ENVIRONMENT_ID,
    title="Onboarding Acme Corp",
    resources=[
        {
            "type": "memory_store",
            "memory_store_id": store.id,
            "access": "read_write",
        }
    ],
)
```
Step 5 — Let the agent write a memory
Now run a turn that gives the agent something worth remembering. Claude decides on its own when to write_file into the mounted memory directory.
```python
turn = client.beta.sessions.turns.create(
    session_id=session.id,
    input=[{
        "role": "user",
        "content": (
            "Hi! I'm Maya from Acme Corp. We are a B2B fintech, "
            "we always ship in EU English (no Oxford comma), and "
            "our brand color is #0E7A4F. Please remember this for next time."
        ),
    }],
)
print(turn.output[-1].content[0].text)
```
Example output:
```
Got it, Maya — noted for next time:
• Tenant: Acme Corp (B2B fintech)
• Locale: EU English, no Oxford comma
• Brand colour: #0E7A4F
I have written these to /users/maya/profile.md so future
sessions pick them up automatically.
```
Step 6 — Verify in a fresh session
End the session, start a new one with the same store, and ask Claude something that requires the memory.
```python
new_session = client.beta.sessions.create(
    agent=AGENT_ID,
    environment_id=ENVIRONMENT_ID,
    resources=[
        {"type": "memory_store",
         "memory_store_id": store.id,
         "access": "read_write"}
    ],
)
turn = client.beta.sessions.turns.create(
    session_id=new_session.id,
    input=[{"role": "user", "content": "Draft a 1-line release note for Maya."}],
)
print(turn.output[-1].content[0].text)
# > Acme Corp release: shipped invoice CSV export, available today — colour #0E7A4F.
```
Notice the agent recalled Maya’s tenant, EU spelling, and brand colour without you injecting any context. The memory store did the work.
Step 7 — Inspect or edit memories from your app
You will eventually want a “What we know about you” screen, or a way for a human reviewer to fix a wrong memory. List with view="full" to get file contents inline:
```python
mems = client.beta.memory_stores.memories.list(
    store.id, view="full", path_prefix="/users/maya/"
)
for m in mems.data:
    print(m.path, "\n", m.content[:200], "\n---")
```
Common pitfalls
- Attaching memory mid-session. Not supported — you must create a new session.
- One store per tenant, not per user. If two customers share a store, Claude will cross-contaminate. Scope by user_id or tenant_id and create one store per scope.
- read_only vs read_write. A read-only mount silently rejects writes. Use it for shared knowledge bases (style guides, taxonomies) and a separate read_write store for user state.
- 100 KB per memory. Long transcripts get truncated. Have the agent summarize before saving, or shard across paths like `/sessions/2026-05-05.md`.
- Memories are versioned. Each write creates a new version attributed to the session that produced it — useful for rollback and audit, easy to forget when debugging “stale” data.
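The scoping and access pitfalls above combine naturally into one pattern: mount a shared knowledge base read-only plus a per-user store read-write. A sketch, assuming the resource shape shown in Step 4 — the helper itself is not part of the SDK:

```python
def session_resources(shared_store_id: str, user_store_id: str) -> list[dict]:
    """Mount a shared knowledge base read-only and per-user state read-write."""
    return [
        # Style guides, taxonomies — the agent cannot write here.
        {"type": "memory_store",
         "memory_store_id": shared_store_id,
         "access": "read_only"},
        # One store per user/tenant scope, so customers never cross-contaminate.
        {"type": "memory_store",
         "memory_store_id": user_store_id,
         "access": "read_write"},
    ]


# session = client.beta.sessions.create(
#     agent=AGENT_ID, environment_id=ENVIRONMENT_ID,
#     resources=session_resources(SHARED_STORE_ID, user_store_id))
```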
Quick reference
| Action | Method | Notes |
|---|---|---|
| Create store | client.beta.memory_stores.create(name, description) | Returns memstore_... |
| Pre-load file | memory_stores.memories.create(store_id, path, content) | 100 KB max per memory |
| List memories | memory_stores.memories.list(store_id, view='full') | Filter with path_prefix |
| Attach to session | sessions.create(resources=[{type, memory_store_id, access}]) | Only at session creation |
| Read-only mount | access='read_only' | Writes silently rejected |
Next steps
- Wire one store per `tenant_id` in your DB and reuse the IDs.
- Add a nightly cron that summarizes long memories with a quick Haiku call.
- Build a “forget me” flow that calls `memories.delete` for GDPR.
- Compare memory cost vs. RAG on your traffic — memory wins on small per-user state, RAG still wins on million-doc corpora.
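The “forget me” flow amounts to a prefix delete: list everything under the user's path, then remove each memory. A sketch under assumptions — the `memories.delete(store_id, path=...)` signature is inferred from the list/create calls shown above, so verify it against the SDK reference before shipping:

```python
def forget_user(client, store_id: str, user_prefix: str) -> int:
    """Delete every memory under `user_prefix` (e.g. "/users/maya/").

    Returns the number of memories deleted. The delete signature is an
    assumption mirroring memories.list/create — check the SDK docs.
    """
    mems = client.beta.memory_stores.memories.list(
        store_id, path_prefix=user_prefix
    )
    for m in mems.data:
        client.beta.memory_stores.memories.delete(store_id, path=m.path)
    return len(mems.data)
```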
Filed under AI / Agentic AI — ContentBuffer Guides.