A v0 artifact for Manu & the Hyperspell team · built in 60 minutes

Hyperspell vs Mem0, Letta, Zep & LangChain Memory — picking an agent memory layer in 2026

The agent-memory category is now five live products with overlapping pitches. The page that should answer "why Hyperspell over the open-source standard?" doesn't exist yet on hyperspell.com. This is a v0 of that page — honest, side-by-side, no overclaiming. Use it, fork it, keep what's true.

Hyperspell (YC F25)

For: integration-first agents

Connect tools, get a context graph back as a filesystem any agent can read. Integrations as the unit of truth, not chat history.

Mem0

For: chat memory

Open-source SDK + hosted SaaS. Memory derived from chat history, compressed and re-injected. Most popular by GitHub stars.

Letta (ex-MemGPT)

For: research-grade agents

OS for stateful agents. Berkeley research lineage. Memory + agent runtime fused: pick Letta and you adopt its agent abstraction.

Zep

For: knowledge graphs

Temporal knowledge graph + memory. Strong opinions about entity extraction and time-aware retrieval. Cloud + self-hosted.

LangChain Memory

For: incumbent users

Built-in to the LangChain framework. Lowest friction if you're already on LangChain; otherwise pulls in the whole framework.

Section 1

Where they actually differ

Every memory product's homepage says "persistent context for AI agents". The honest differences are downstream of one decision: what is your agent's source of truth? Chat? Connected tools? A typed knowledge graph? An agent-runtime state machine? Pick your answer and the right product narrows itself.

| Dimension | Hyperspell | Mem0 | Letta | Zep | LangChain Memory |
| --- | --- | --- | --- | --- | --- |
| Source of truth | Connected tools (Slack, Notion, GDrive), indexed continuously | Chat / message history | Agent runtime state (core + recall + archival memory) | Chat history compiled into a temporal knowledge graph | Whatever your chain reads, usually chat history |
| Abstraction | Filesystem: agents read() from a virtual FS | SDK: m.add() / m.search() | Stateful agent: you instantiate a Letta agent, not a memory store | SDK + graph: retrieve nodes/edges or summaries | Memory class attached to a Chain or Agent |
| Open source? | No, SaaS only | Yes (Apache-2.0) + hosted | Yes (Apache-2.0) + hosted | Community edition + cloud | Yes (MIT, part of LangChain) |
| Auth / connectors | Built-in "one line" account linking (the moat) | DIY (you bring the data) | DIY | DIY | DIY (LangChain has separate connectors) |
| What it stores | Summaries / memories, not raw data | Compressed memories from chat | Structured agent state (3 memory tiers) | Facts, entities, time-anchored relationships | Whatever your Memory subclass writes |
| Compliance | SOC 2 + GDPR | SOC 2 (cloud), HIPAA enterprise | Self-hosted (you handle it) | SOC 2 (cloud) | Your stack handles it |
| "Hello, memory" code | hs.connect("notion") → fs.read() | m.add(messages, user_id="alice") | letta.create_agent(memory=ChatMemory()) | zep.memory.add(session_id, msgs) | ConversationBufferMemory() |
| Best fit | Startups & mid-market shipping integrated agents | Indie devs to enterprise | Researchers + production teams who want full control | Teams with knowledge-graph-shaped problems | Teams already standardized on LangChain |
| The unique bet | "Agents shouldn't recreate connector pipelines": make integrations the memory layer | "Chat compression is the memory primitive" | "Agents need a typed runtime, not a vector store" | "Time-aware knowledge graphs beat embeddings" | "Memory is a chain step, not a product" |
Section 2

Use the right one for the job

Honesty serves Hyperspell better than overclaiming. The cohorts below are the ones for which each project is genuinely the best fit. A page that says "use Mem0 if you're indexing chat" is the page candidates and customers will trust enough to keep reading.

Use Hyperspell

If your agent's value comes from connected tools

Your product asks the user to link their Slack / Notion / GDrive / GitHub / whatever. That data flow IS the agent. You don't want to spend Q1 building OAuth for ten SaaS APIs and an indexing pipeline behind it. Hyperspell collapses that into one connector + one filesystem read.

  • Multi-source context is the differentiator, not chat history.
  • You'd rather pay per-user-month than build a connector team.
  • The filesystem abstraction maps cleanly to how your agent already reads files / docs.
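Roughly what that flow looks like in code, extrapolated from the "Hello, memory" row in the table above (the hs and fs names are illustrative, not Hyperspell's published SDK):

```python
# Illustrative only: follows the table's hs.connect("notion") -> fs.read()
# snippet; the client object and paths are placeholders, not a real SDK.

def build_context(hs, user_id: str) -> str:
    """Link one tool, then read back an indexed, prompt-ready slice."""
    # One-time account linking; the vendor owns OAuth + continuous indexing.
    hs.connect("notion", user_id=user_id)

    # At inference time: read from the virtual filesystem instead of calling
    # the Notion API, chunking, and embedding documents yourself.
    return hs.fs.read(f"/users/{user_id}/notion/meeting-notes")
```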

Use Mem0

If memory is fundamentally chat-derived

You're building a chatbot, support agent, or any product where the user's history with the agent is the memory you need to preserve. Mem0's compression-and-recall loop is purpose-built for this. Open source means you can self-host it the moment cloud bills get scary.

  • You don't need third-party data sources baked in.
  • You want optionality between SDK and SaaS.
  • You like the GitHub-stars community signal (currently the largest in the category).
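For contrast, the chat-derived loop looks roughly like this with Mem0's open-source SDK (signatures per its docs at the time of writing; double-check the current release):

```python
# Mem0's add/search loop, per the table's snippet; verify exact return
# shapes against the current mem0 release before depending on them.
from mem0 import Memory

m = Memory()

# Store memory derived from a chat turn.
m.add(
    [{"role": "user", "content": "I prefer aisle seats on long flights."}],
    user_id="alice",
)

# Later: recall compressed memories relevant to a new query.
print(m.search("What are Alice's travel preferences?", user_id="alice"))
```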

Use Letta

If you're adopting an agent runtime, not just a memory layer

You buy into the MemGPT-paper thesis: agents need an explicit OS-style runtime with tiered memory and structured state, not just a vector store. Pick Letta when you'd rather instantiate Agents than glue a memory product to your own loop.

  • Research lineage matters to your team.
  • You're fine giving up a chunk of your existing agent loop in exchange for tier-managed state.
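A sketch of what that trade looks like, following the table's create_agent snippet (Letta's client API has shifted across releases, so treat these names as placeholders):

```python
# Placeholder names following the table's letta.create_agent(memory=ChatMemory())
# snippet; not pinned to a specific Letta release.

def make_stateful_agent(letta, ChatMemory):
    """You instantiate an agent, not a memory store; the memory tiers live inside it."""
    return letta.create_agent(
        name="support-agent",
        # Core memory blocks the agent manages itself; recall/archival tiers
        # sit behind it and are paged in by the runtime, not by your loop.
        memory=ChatMemory(human="Alice, premium-tier customer",
                          persona="Helpful support rep"),
    )
```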

Use Zep

If your problem is shaped like a knowledge graph

You're modeling entities and relationships between them across time — a CRM-augmenting agent, a longitudinal-care agent, a legal-events agent. Embeddings alone keep returning fuzz; you actually need typed nodes and edges. That's Zep's bet.

  • Temporal reasoning is core to your product (not "remember this" but "remember when").
  • You'd rather query users.find() than vec.search().
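In code, that bet reads roughly like the table's snippet (method names and message shapes vary between Zep's self-hosted and cloud SDKs, so this is only the shape):

```python
# Shape only: follows the table's zep.memory.add(session_id, msgs) snippet.
# The real SDK's method names and message types differ between versions.

def remember_turn(zep, session_id: str, role: str, content: str) -> None:
    """Append one chat turn; Zep extracts entities and facts into its graph."""
    zep.memory.add(session_id, [{"role": role, "content": content}])

def recall(zep, session_id: str):
    """Fetch time-anchored facts for the session instead of the raw transcript."""
    return zep.memory.get(session_id)
```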

Use LangChain Memory

If you're already deep in LangChain and the cost of switching is high

You've built on chains and agents for a year, your team knows the abstractions, and the memory step is one piece of a larger system. Pulling in a separate memory product just to avoid ConversationBufferMemory is yak-shaving. Stay in-framework until you outgrow it.

  • Switching cost > capability gap.
  • You're prototyping, not yet in production.
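The in-framework version is one import away; ConversationBufferMemory ships with LangChain itself (newer releases push other memory APIs, so this is the classic form):

```python
# Classic LangChain memory: nothing new to install beyond LangChain itself.
from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory()

# Record one exchange; the buffer just accumulates the transcript.
memory.save_context({"input": "My order number is 1042."},
                    {"output": "Got it, thanks!"})

# Before the next turn, load the buffer back into your prompt variables.
print(memory.load_memory_variables({}))
```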

Use no memory product at all

If your product is single-session

If your agent is a one-shot tool that runs and exits, "memory" is just your prompt. Don't add a vendor for a problem you don't have. Revisit when retention or personalization becomes a roadmap line item.
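Concretely, "memory" in a single-session agent is just the message list you already pass to the model (OpenAI's client shown as one example; any chat API works the same way):

```python
# Single-session "memory" is the message list you were already building.
# OpenAI's client is used here as an example; any chat API works the same.
from openai import OpenAI

client = OpenAI()
messages = [{"role": "system", "content": "You are a one-shot report generator."}]

def ask(user_text: str) -> str:
    messages.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    text = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": text})
    return text  # when the process exits, the "memory" is gone, and that is fine
```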

Section 3

Honest answers to the questions a candidate / buyer asks

Manu — these are the four questions a candidate evaluating Hyperspell as their next job, or a buyer evaluating Hyperspell against Mem0, will ask. The page that pre-answers them earns time.

"Why pay when Mem0 is open source?"

Because you're paying for connectors, not for memory. The integration layer is the moat — getting Slack/Notion/GDrive auth flows production-ready takes a quarter for a small team. Mem0 doesn't ship that. You'd build it yourself, then add Mem0 on top of your indexing pipeline. Hyperspell collapses both into one bill.

"How is the filesystem abstraction not just S3 with steps?"

Because reads are context-engineered: a read() against the virtual FS surfaces a structured/LLM-ready slice, not the raw blob. The filesystem isn't storage — it's a retrieval surface that happens to have a familiar mental model so agents written today can adopt it without rewriting their data layer.
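One way to see the difference, with made-up names on the Hyperspell side (the hs.fs.read(path, query=...) shape is this page's shorthand, not a confirmed API):

```python
# Contrast sketch: raw object storage vs. a context-engineered read.
# boto3-style s3 client on one side; hypothetical hs client on the other.

def naive_read(s3, bucket: str, key: str) -> bytes:
    # Storage: you get the whole blob back and still have to chunk, embed,
    # and rank it yourself before anything is prompt-ready.
    return s3.get_object(Bucket=bucket, Key=key)["Body"].read()

def context_read(hs, path: str, question: str) -> str:
    # Retrieval surface: the read itself returns a short, ranked slice
    # scoped to the question being asked.
    return hs.fs.read(path, query=question)
```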

"What happens to my customers' raw data?"

Per Hyperspell's site: only summaries and memories are stored, not raw documents. SOC 2 + GDPR. No model training on customer data. This belongs above the fold, not in a security tab.

"Why YC F25 — what changed in the last six months?"

The agent-memory category has gone from "nice-to-have RAG add-on" to its own line item on infra budgets in < 12 months. Five products are now actively pitching the same buyer. Hyperspell's bet on connectors-as-memory is a reasonable answer to "the next layer of differentiation has to be data flow, not retrieval algorithm". (You'd say it more crisply than I just did.)