The agent-memory category is now five live products with overlapping pitches. The page that should answer "why Hyperspell over the open-source standard?" doesn't exist yet on hyperspell.com. This is a v0 of that page — honest, side-by-side, no overclaiming. Use it, fork it, keep what's true.
**Hyperspell:** Connect tools, get a context graph back as a filesystem any agent can read. Integrations as the unit of truth, not chat history.
**Mem0:** Open-source SDK + hosted SaaS. Memory derived from chat history, compressed and re-injected. Most popular by GitHub stars.
**Letta:** OS for stateful agents, with Berkeley research lineage (MemGPT). Memory and agent runtime are fused: pick Letta and you adopt their agent abstraction.
**Zep:** Temporal knowledge graph + memory. Strong opinions about entity extraction and time-aware retrieval. Cloud + self-hosted.
**LangChain Memory:** Built into the LangChain framework. Lowest friction if you're already on LangChain; otherwise it pulls in the whole framework.
Every memory product's homepage says "persistent context for AI agents". The honest differences are downstream of one decision: what is your agent's source of truth? Chat? Connected tools? A typed knowledge graph? An agent-runtime state machine? Pick your answer and the right product narrows itself.
| Dimension | Hyperspell | Mem0 | Letta | Zep | LangChain Memory |
|---|---|---|---|---|---|
| Source of truth | Connected tools (Slack, Notion, GDrive), indexed continuously | Chat / message history | Agent runtime state (core + recall + archival memory) | Chat history compiled into a temporal knowledge graph | Whatever your chain reads, usually chat history |
| Abstraction | Filesystem: agents `read()` from a virtual FS | SDK: `m.add()` / `m.search()` | Stateful agent: you instantiate a Letta agent, not a memory store | SDK + graph: retrieve nodes/edges or summaries | `Memory` class attached to a Chain or Agent |
| Open source? | No (SaaS only) | Yes (Apache-2.0) + hosted | Yes (Apache-2.0) + hosted | Community edition + cloud | Yes (MIT), part of LangChain |
| Auth / connectors | built-in "one line" account linking — the moat | DIY (you bring the data) | DIY | DIY | DIY (LangChain has separate connectors) |
| What it stores | Summaries / memories. Not raw data. | Compressed memories from chat | Structured agent state (3 memory tiers) | Facts, entities, time-anchored relationships | Whatever your Memory subclass writes |
| Compliance | SOC 2 + GDPR | SOC 2 (cloud), HIPAA enterprise | Self-hosted (you handle it) | SOC 2 (cloud) | Your stack handles it |
| "Hello, memory" code | `hs.connect("notion")` → `fs.read()` | `m.add(messages, user_id="alice")` | `letta.create_agent(memory=ChatMemory())` | `zep.memory.add(session_id, msgs)` | `ConversationBufferMemory()` |
| Best fit team size | Startup & mid-market shipping integrated agents | Indie devs to enterprise | Researchers + production teams who want full control | Teams with knowledge-graph-shaped problems | Teams already standardized on LangChain |
| The unique bet | "Agents shouldn't recreate connector pipelines" — make integrations the memory layer | "Chat compression is the memory primitive" | "Agents need a typed runtime, not a vector store" | "Time-aware knowledge graphs beat embeddings" | "Memory is a chain step, not a product" |
Honesty serves Hyperspell better than overclaiming. The cohorts below are the ones for which each project is genuinely the best fit. A page that says "use Mem0 if you're indexing chat" is the page candidates and customers will trust enough to keep reading.
**Use Hyperspell if:** Your product asks the user to link their Slack / Notion / GDrive / GitHub / whatever, and that data flow IS the agent. You don't want to spend Q1 building OAuth for ten SaaS APIs and an indexing pipeline behind it. Hyperspell collapses that into one connector + one filesystem read.
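The connectors-as-memory pattern fits in a few lines. This is a toy in-memory stand-in, not the real Hyperspell SDK: `connect()` and `read()` are illustrative names echoing the table's pseudocode, and real connectors would do OAuth plus continuous indexing rather than mounting pre-fetched documents.

```python
# Toy sketch of connectors-as-memory (illustrative, NOT the Hyperspell SDK).
class ContextFS:
    def __init__(self):
        self._tree = {}  # virtual path -> indexed content

    def connect(self, source, documents):
        # Real version: OAuth flow + continuous indexing. Here: mount docs.
        for name, text in documents.items():
            self._tree[f"/{source}/{name}"] = text

    def ls(self, prefix="/"):
        return sorted(p for p in self._tree if p.startswith(prefix))

    def read(self, path):
        return self._tree[path]

fs = ContextFS()
fs.connect("notion", {"roadmap": "Q3: ship the memory page"})
fs.connect("slack", {"eng-channel": "deploy frozen until Friday"})

print(fs.ls("/notion"))            # ['/notion/roadmap']
print(fs.read("/slack/eng-channel"))
```

The point of the filesystem shape: the agent's data access is one uniform `read()` call, regardless of how many tools sit behind it.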
**Use Mem0 if:** You're building a chatbot, support agent, or any product where the user's history with the agent is the memory you need to preserve. Mem0's compression-and-recall loop is purpose-built for this. Open source means you can self-host it the moment cloud bills get scary.
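The compress-and-recall loop, in miniature. This is a naive stand-in for the pattern, not Mem0's implementation: Mem0 uses an LLM to extract memories from messages, whereas this sketch just keeps short fact-like statements and matches on keywords.

```python
# Naive illustration of compress-and-recall (the pattern, not Mem0's code).
class ChatMemory:
    def __init__(self):
        self.memories = []  # compressed facts, not raw transcripts

    def add(self, messages, user_id):
        for m in messages:
            text = m["content"]
            # "Compression" stand-in: keep only short, fact-like statements.
            if len(text) < 80:
                self.memories.append({"user_id": user_id, "fact": text})

    def search(self, query, user_id):
        terms = set(query.lower().split())
        return [m["fact"] for m in self.memories
                if m["user_id"] == user_id
                and terms & set(m["fact"].lower().split())]

mem = ChatMemory()
mem.add([{"role": "user", "content": "I prefer dark mode"}], user_id="alice")
print(mem.search("mode preference", user_id="alice"))  # ['I prefer dark mode']
```

Note what never happens here: the raw transcript is never re-injected; only the compressed residue comes back.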
**Use Letta if:** You buy into the MemGPT-paper thesis: agents need an explicit OS-style runtime with tiered memory and structured state, not just a vector store. Pick Letta when you'd rather instantiate agents than glue a memory product to your own loop.
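The three-tier idea from the MemGPT paper, sketched as a data structure (illustrative only, not Letta's API): core memory is always in the prompt, recall is a sliding window of recent turns, and archival holds evicted turns that get searched on demand.

```python
from collections import deque

# Sketch of MemGPT/Letta-style tiered memory (illustrative, not Letta's API).
class TieredMemory:
    def __init__(self, recall_size=4):
        self.core = {}                           # small, always-injected state
        self.recall = deque(maxlen=recall_size)  # sliding window of turns
        self.archival = []                       # overflow, searched on demand

    def remember_turn(self, turn):
        if len(self.recall) == self.recall.maxlen:
            self.archival.append(self.recall[0])  # evict oldest to archival
        self.recall.append(turn)

    def build_context(self, query=None):
        hits = [t for t in self.archival if query and query in t]
        return {"core": self.core,
                "recall": list(self.recall),
                "archival_hits": hits}

tm = TieredMemory(recall_size=2)
tm.core["persona"] = "helpful support agent"
for turn in ["order #123 is late", "refund issued", "asked about warranty"]:
    tm.remember_turn(turn)
print(tm.build_context("order"))
```

The eviction step is the whole thesis: memory management is an explicit, typed operation in the runtime, not an emergent property of a retriever.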
**Use Zep if:** You're modeling entities and relationships between them across time: a CRM-augmenting agent, a longitudinal-care agent, a legal-events agent. Embeddings alone keep returning fuzz; you actually need typed nodes and edges, and your queries look more like `users.find()` than `vec.search()`. That's Zep's bet.
**Use LangChain Memory if:** You've built on chains and agents for a year, your team knows the abstractions, and the memory step is one piece of a larger system. Pulling in a separate memory product just to avoid `ConversationBufferMemory` is yak-shaving. Stay in-framework until you outgrow it.
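Zep's time-aware bet fits in a few lines too. This is the idea in miniature, not Zep's actual schema or API: facts are typed edges valid over an interval, so a query can ask "what was true as of date X?" instead of "what is embedding-similar?".

```python
from datetime import date

# Miniature time-anchored fact store (the idea, not Zep's schema).
# Each fact: (subject, relation, object, valid_from, valid_to); None = still valid.
facts = [
    ("alice", "works_at", "AcmeCo",  date(2022, 1, 1), date(2023, 6, 30)),
    ("alice", "works_at", "BetaInc", date(2023, 7, 1), None),
]

def as_of(subject, relation, when):
    """Return objects for which the edge was valid on the given date."""
    return [obj for s, r, obj, start, end in facts
            if s == subject and r == relation
            and start <= when and (end is None or when <= end)]

print(as_of("alice", "works_at", date(2023, 1, 15)))  # ['AcmeCo']
print(as_of("alice", "works_at", date(2024, 1, 15)))  # ['BetaInc']
```

A similarity search over embedded chat would happily return both employers; the interval check is what makes the answer date-correct.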
**Use none of them if:** Your agent is a one-shot tool that runs and exits; "memory" is just your prompt. Don't add a vendor for a problem you don't have. Revisit when retention or personalization becomes a roadmap line item.
Manu — these are the four questions a candidate evaluating Hyperspell as their next job, or a buyer evaluating Hyperspell against Mem0, will ask. The page that pre-answers them earns time.
**"Why pay for Hyperspell when Mem0 is open source?"** Because you're paying for connectors, not for memory. The integration layer is the moat — getting Slack/Notion/GDrive auth flows production-ready takes a quarter for a small team. Mem0 doesn't ship that. You'd build it yourself, then add Mem0 on top of your indexing pipeline. Hyperspell collapses both into one bill.
**"Why a filesystem and not a vector database?"** Because reads are context-engineered: a `read()` against the virtual FS surfaces a structured, LLM-ready slice, not the raw blob. The filesystem isn't storage — it's a retrieval surface that happens to have a familiar mental model, so agents written today can adopt it without rewriting their data layer.
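A toy version of a context-engineered read (illustrative; the real Hyperspell read path is not public): `read()` scores paragraphs against the agent's query and returns a small ranked slice, never the whole document. The keyword scoring here is a stand-in for whatever retrieval actually runs behind the surface.

```python
# Toy "context-engineered" read: return a relevant slice, not the raw blob.
def read(document, query, k=2):
    terms = set(query.lower().split())
    paras = [p for p in document.split("\n") if p.strip()]
    # Rank paragraphs by keyword overlap with the query (stand-in scorer).
    scored = sorted(paras, key=lambda p: -len(terms & set(p.lower().split())))
    return scored[:k]  # small, LLM-ready slice

doc = ("Roadmap: ship memory page in Q3\n"
       "Lunch menu: tacos\n"
       "Memory page owner: Manu")
print(read(doc, "memory page roadmap"))
```

The contract is the point: the caller uses a filesystem mental model, but what comes back is already shaped for a context window.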
**"What do you actually store?"** Per Hyperspell's site: only summaries and memories are stored, not raw documents. SOC 2 + GDPR. No model training on customer data. This belongs above the fold, not in a security tab.
**"Why now?"** The agent-memory category has gone from "nice-to-have RAG add-on" to its own line item on infra budgets in under 12 months. Five products are now actively pitching the same buyer. Hyperspell's bet on connectors-as-memory is a reasonable answer to "the next layer of differentiation has to be data flow, not retrieval algorithm". (You'd say it more crisply than I just did.)