Agent memory that learns

With Hindsight, your agents remember your users and get better at their jobs over time. Open source, MIT licensed.

Hindsight memory graph showing interconnected memories across users

The fundamentals

Everything you need from agent memory

Per-user memory
Every user gets persistent context. Preferences, history, and decisions stay separate.
Cross-session persistence
Context survives session boundaries. Pick up where you left off, weeks later.
Fast memory recall
Parallel search returns the most relevant memories in under 100ms.
Works with any LLM
Model-agnostic memory layer. Swap LLMs without losing what your agent learned.
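The per-user, cross-session model above can be sketched as a small in-memory store: each user gets an isolated memory space that outlives any single session, and recall ranks that user's memories against a query. This is an illustrative pattern only, not the Hindsight API; the `MemoryStore` class and its methods are hypothetical stand-ins.

```python
from collections import defaultdict

class MemoryStore:
    """Toy per-user memory: each user id maps to its own list of memories."""
    def __init__(self):
        self._memories = defaultdict(list)  # user_id -> list of memory strings

    def remember(self, user_id, text):
        self._memories[user_id].append(text)

    def recall(self, user_id, query):
        # Naive relevance: rank this user's memories by word overlap with the
        # query. A real system would use embeddings and parallel search.
        words = set(query.lower().split())
        return sorted(
            self._memories[user_id],
            key=lambda m: len(words & set(m.lower().split())),
            reverse=True,
        )

store = MemoryStore()
store.remember("alice", "prefers dark mode in every editor")
store.remember("alice", "deploys on Fridays are forbidden")
store.remember("bob", "prefers light mode")

# Alice's context stays separate from Bob's, and it survives across sessions
# because the store outlives any single conversation.
print(store.recall("alice", "which editor mode does the user prefer?")[0])
```

Note the isolation: Bob's recall never sees Alice's memories, which is the "preferences, history, and decisions stay separate" guarantee in miniature.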

A few of the teams building with Vectorize

Nvidia
Groq
Electronic Arts
Bronson
Brightness

Agents make mistakes.
Yours learn from them.

Every other memory system stores facts and retrieves them. That works until your agent encounters the same problem twice and makes the same mistake both times.

Learns from mistakes
When a tool call fails or a user corrects your agent, that becomes an experience. Next time, your agent knows what went wrong.
Detects patterns automatically
The reflection layer synthesizes individual facts into consolidated knowledge. Patterns emerge from data, not from manual tagging.
Builds judgment over time
Your agent doesn't just get more data. It gets better judgment. Curated mental models guide common situations.
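The reflection idea above can be reduced to a toy: keep raw experiences as-is, and only promote an observation to consolidated knowledge once it recurs. This sketch uses simple counting where the real reflection layer would use an LLM; the `reflect` function and the experience schema are hypothetical.

```python
from collections import Counter

def reflect(experiences, min_count=2):
    """Consolidate repeated failure observations into durable lessons.
    Illustrative only: promotion here is by frequency, not by an LLM."""
    counts = Counter(
        exp["lesson"] for exp in experiences if exp["outcome"] == "failure"
    )
    return [lesson for lesson, n in counts.items() if n >= min_count]

experiences = [
    {"outcome": "failure", "lesson": "search tool times out on queries over 500 chars"},
    {"outcome": "success", "lesson": "short queries work"},
    {"outcome": "failure", "lesson": "search tool times out on queries over 500 chars"},
]

# One-off observations stay raw; repeated failures become consolidated
# knowledge the agent can apply the next time the situation comes up.
print(reflect(experiences))
```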

The learning difference, measured

LongMemEval scores across four memory systems. Peer-reviewed benchmark, independently reproduced.

Hindsight: 94.6%
Supermemory: 85.2%
Zep: 71.2%
GPT-4o: 60.2%

Memory that installs itself

Tell Claude Code or Cursor to add Hindsight. It reads the docs, writes the config, and sets up its own memory. One command. No boilerplate.

terminal
$ npx add-skill vectorize-io/hindsight --skill hindsight-docs
Installing hindsight-docs skill...
✓ Connected to Hindsight MCP server
✓ Memory tools registered (remember, recall, reflect)
Ready. Your agent now has memory.

Works with any MCP-capable agent. Your agent gets remember, recall, and reflect tools automatically.
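Once the tools are registered, the agent routes tool calls by name to the memory layer. The tool names (remember, recall, reflect) come from the page; everything else in this sketch, including the argument shapes and handler bodies, is a hypothetical illustration of agent-side dispatch, not the MCP wire protocol or the Hindsight server.

```python
def handle_tool_call(memory, name, args):
    """Route a tool call to the matching memory operation.
    Sketch only: a real MCP client negotiates tools over the protocol."""
    tools = {
        "remember": lambda: memory.setdefault(args["user_id"], []).append(args["text"]),
        "recall": lambda: [
            m for m in memory.get(args["user_id"], [])
            if args["query"].lower() in m.lower()
        ],
        "reflect": lambda: f"consolidated {len(memory.get(args['user_id'], []))} memories",
    }
    return tools[name]()

memory = {}
handle_tool_call(memory, "remember", {"user_id": "u1", "text": "User prefers YAML over JSON"})
print(handle_tool_call(memory, "recall", {"user_id": "u1", "query": "yaml"}))
```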

Agents are evolving. Memory should too.

Most agent memory systems were built for chatbots. Those days are gone.

The next generation of agents works together, attends meetings, learns from each other, and takes on increasingly complex tasks. Those agents need memory that compounds across every agent and every user. Retrieval-only systems won't get you there.

What compound memory looks like

01 Shared context across agents
Agent A learns a user preference. Agent B applies it automatically.
02 Memory that improves itself
The reflection layer consolidates raw observations into reusable knowledge.
03 Per-user memory across every session
Months of context, instantly recalled. No retrieval lag, no cold starts.
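The shared-context idea above is just two agents pointed at the same memory backend, keyed by user. A minimal sketch under that assumption; the `Agent` class and agent names are illustrative, not part of Hindsight.

```python
class Agent:
    """Agents that share one memory backend keyed by user id."""
    def __init__(self, name, shared_memory):
        self.name = name
        self.memory = shared_memory  # the same dict instance for every agent

    def learn(self, user_id, fact):
        self.memory.setdefault(user_id, []).append(fact)

    def context_for(self, user_id):
        return self.memory.get(user_id, [])

shared = {}
agent_a = Agent("scheduler", shared)
agent_b = Agent("email-drafter", shared)

# Agent A learns a preference; Agent B applies it without being told.
agent_a.learn("carol", "never schedule meetings before 10am")
print(agent_b.context_for("carol"))
```

Because both agents hold a reference to the same store, knowledge compounds: anything one agent learns is immediately part of every other agent's context for that user.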

Need help building with agent memory?

We offer hands-on implementation services, team training, and architecture consulting to get your agents learning faster. Whether you're integrating Hindsight into an existing system or starting from scratch, we'll work with you directly.