HippoFabric gives your agents permanent graph memory — one correction sticks forever, every session builds on the last. Works with GPT-4, Claude, Gemini, or any LLM you're already using.
```python
# Before Luthen — resets every session
agent = YourAgent(memory=[])  # lost on restart

# After Luthen — permanent cognitive memory
from luthen import HippoFabric, AgentRunner
brain = HippoFabric(seed="your-domain")
agent = AgentRunner(brain=brain).start()
```
The problem every agent team hits
You've built the agent. It works in demos. But in production — every session resets, every correction evaporates, every preference disappears. That's not a memory problem. It's an architecture problem.
Every conversation starts from scratch. Users repeat themselves. Your agent never learns who they are, what they prefer, or what they've already corrected. You're shipping a goldfish as a colleague.
session.memory = []  # every time

RAG finds similar text — not related concepts. "Budget" and "Q4 forecast" are deeply connected in meaning but distant in embedding space. At 100k+ documents, false positives compound silently.

cosine_similarity ≠ understanding

Your user says "never do X." Next session, it does X again. Every behavioral fix requires a retraining cycle. You're not building an intelligent agent — you're maintaining a bug list.

fine_tune(corrections, epochs=10)

Your vector store was computed once and never changes. It can't strengthen associations from use, adapt from corrections, or improve with experience. Memory that can't grow isn't memory.

embeddings.frozen = True

How HippoFabric works
Inspired by the human hippocampus — concepts and weighted connections, activation that spreads through related ideas, memory that strengthens with use. We stopped using vectors. We built a brain.
brain.ingest()
Store concepts with their relationships and weights. Every piece of knowledge joins the network and forms connections with what's already there — not embeddings, a living graph that grows.
HippoFabric · Layer 1

brain.think()
Activate a concept — spreading activation flows through linked ideas by weight. Not similar text. Related ideas. The right context surfaces because it's genuinely connected, not just textually close.
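The ingest/think pair can be pictured as a weighted graph plus spreading activation. The sketch below is a toy illustration of that idea only — `ConceptGraph`, its weights, and the `decay` parameter are invented here and are not Luthen's actual implementation.

```python
# Toy sketch of spreading activation over a weighted concept graph.
# Illustrative only — not the HippoFabric internals; all names are invented.

class ConceptGraph:
    def __init__(self):
        self.edges = {}  # concept -> {neighbor: weight}

    def ingest(self, concept, related):
        """Store a concept with weighted, bidirectional links to related concepts."""
        self.edges.setdefault(concept, {}).update(related)
        for neighbor, weight in related.items():
            self.edges.setdefault(neighbor, {})[concept] = weight

    def think(self, seed, depth=2, decay=0.5):
        """Activate a concept and let activation spread outward along edge weights."""
        activation = {seed: 1.0}
        frontier = {seed: 1.0}
        for _ in range(depth):
            nxt = {}
            for node, act in frontier.items():
                for neighbor, weight in self.edges.get(node, {}).items():
                    spread = act * weight * decay
                    if spread > activation.get(neighbor, 0.0):
                        activation[neighbor] = spread
                        nxt[neighbor] = spread
            frontier = nxt
        # Most strongly activated concepts first.
        return sorted(activation.items(), key=lambda kv: -kv[1])

g = ConceptGraph()
g.ingest("budget", {"Q4 forecast": 0.9, "headcount": 0.6})
g.ingest("Q4 forecast", {"revenue": 0.8})
related = g.think("budget")  # "revenue" surfaces via "Q4 forecast", though never linked directly
```

Note that "revenue" gets activation even though it was never linked to "budget" — it is reached through the "Q4 forecast" edge, which is the behavior embedding similarity cannot give you.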
HippoFabric · Layer 1

brain.remember(user_id)
Every user's preferences, corrections, and history persist forever across all sessions. One call loads everything. Your agent picks up exactly where it left off — always.
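The shape of per-user persistence can be sketched with a file-backed store — a minimal illustration under invented names (`UserMemory`, `remember`, `update`), not the actual HippoFabric storage layer.

```python
# Minimal sketch of per-user memory that survives restarts.
# Illustrative only — not the HippoFabric storage layer.
import json
import os
import tempfile

class UserMemory:
    def __init__(self, path):
        self.path = path

    def remember(self, user_id):
        """Load everything known about a user; empty dict on first meeting."""
        if not os.path.exists(self.path):
            return {}
        with open(self.path) as f:
            return json.load(f).get(user_id, {})

    def update(self, user_id, **facts):
        """Persist new facts so the next session starts where this one ended."""
        store = {}
        if os.path.exists(self.path):
            with open(self.path) as f:
                store = json.load(f)
        store.setdefault(user_id, {}).update(facts)
        with open(self.path, "w") as f:
            json.dump(store, f)

path = os.path.join(tempfile.mkdtemp(), "memory.json")
mem = UserMemory(path)
mem.update("alice", format="no bullet points", timezone="CET")

# A "new session" constructs a fresh object but recovers everything:
assert UserMemory(path).remember("alice")["format"] == "no bullet points"
```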
Agent SDK · Layer 2

brain.correct()
One correction cascades through memory, rules, and prompt templates simultaneously. No retraining. No engineers required. Permanent from the moment you call it.
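"Cascades through memory, rules, and prompt templates" can be made concrete with a toy object that applies one instruction to all three stores in a single call — a hypothetical sketch with invented names, not the SDK's real `correct()` signature.

```python
# Hypothetical sketch of a correction cascade — one call updates the
# episodic memory, the rule set, and the prompt template together.
# Names are invented for illustration; this is not the Luthen SDK.

class Brain:
    def __init__(self):
        self.memory = []        # episodic record of corrections
        self.rules = set()      # hard behavioral constraints
        self.prompt = "You are a helpful assistant."

    def correct(self, instruction):
        """Apply one user correction everywhere at once — no retraining."""
        self.memory.append(instruction)
        self.rules.add(instruction)
        self.prompt += f"\nHard rule: {instruction}"

brain = Brain()
brain.correct("never use bullet points")
assert "never use bullet points" in brain.rules
assert "never use bullet points" in brain.prompt
```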
Agent SDK · Layer 2

"Neurons that fire together, wire together. Luthen applies Hebbian learning to every agent interaction."
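The Hebbian idea reduces to a one-line weight update: every time two concepts co-fire, nudge the edge between them toward its maximum. A toy version of that rule (the learning rate and update form are assumptions, not Luthen's exact dynamics):

```python
# "Fire together, wire together" as a toy weight update.
# The learning rate and saturation form are illustrative assumptions.

def hebbian_update(weights, co_activated, lr=0.1):
    """Strengthen the edge between every pair of co-activated concepts."""
    for a in co_activated:
        for b in co_activated:
            if a != b:
                old = weights.get((a, b), 0.0)
                # Nudge the weight toward 1.0 each time the pair co-fires.
                weights[(a, b)] = old + lr * (1.0 - old)
    return weights

w = {}
for _ in range(3):  # "budget" and "Q4 forecast" co-fire in three interactions
    hebbian_update(w, {"budget", "Q4 forecast"})
```

After three co-activations the edge has climbed from 0.0 to about 0.27 and keeps saturating toward 1.0 with use — association strength becomes a record of experience rather than a frozen embedding distance.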
Three layers. One platform.
Built as a stack — so you can start with just the memory layer and add the rest when you're ready.
Layer 1 · Memory
For developers
The memory layer that learns from every conversation — permanently. Weighted concept edges, spreading activation, Hebbian learning. Replace your vector store.
Layer 2 · Runtime
For ML engineers
Five lines to a live cognitive agent. Memory, tools, feedback loops, cascading corrections, and sleep consolidation — wired out of the box.
Layer 3 · Governance
For CTOs & ops
See exactly what your agents remember, decide, and do. Full audit trail, brain health monitoring, PII masking, and SafetyGate — all in a no-code console.
LongMemEval · ICLR 2025 · independent benchmark
Tested against ChatGPT, Claude, and Gemini on the gold standard for AI memory evaluation.
Multi-session reasoning
90.6%
50%+ better than ChatGPT
vs 57.7% · ranked #1 overall
Inference speed
0.46s
10× faster than competitors
vs 2–5s · zero API cost
Overall rank
#1
Category of one
no competitor matches this memory architecture
"90.6% accuracy in multi-session reasoning — more than 50% better than ChatGPT, at 10× faster inference and zero API cost."
LongMemEval · ICLR 2025 · independent benchmark · April 2026

Emergent capabilities
These weren't designed into the architecture. They emerged from building a memory layer that actually thinks.
Users teach agents conversationally. "Never use bullet points" becomes permanent behavior instantly — no retraining, no engineers, no redeployment. The correction cascades through memory, rules, and prompt templates in one call.
Co-activated concepts spontaneously cluster into higher-order understanding. Your agent understands that "budget" and "Q4 forecast" are connected — without being told. It doesn't just retrieve. It reasons.
Clone a trained brain into a new agent. It inherits every learned association, every behavioral rule, every cognitive frame — and evolves independently from there. Deploy expertise at scale.
Task-specific context that optimises itself through use. Prompts that stabilise into expert patterns without manual tuning. The agent gets better at the tasks it does most — automatically.
Agents replay interaction traces offline, strengthen high-signal edges, and crystallise schemas — exactly like biological brains during sleep. Memory improves without any new input.
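Consolidation can be sketched as an offline pass over recent interaction traces: edges exercised during replay strengthen, idle edges slowly fade. The function below is a toy model of that loop under assumed names and rates, not the production consolidation algorithm.

```python
# Sketch of offline "sleep" consolidation: replay interaction traces,
# strengthen exercised edges, let idle ones decay. Illustrative only —
# the learning and decay rates here are invented.

def consolidate(weights, traces, lr=0.2, decay=0.05):
    """Replay traces offline; exercised edges strengthen, idle ones fade."""
    exercised = {edge for trace in traces for edge in zip(trace, trace[1:])}
    for edge in list(weights):
        if edge in exercised:
            weights[edge] += lr * (1.0 - weights[edge])       # strengthen
        else:
            weights[edge] = max(0.0, weights[edge] - decay)   # slow forgetting
    return weights

weights = {("budget", "Q4 forecast"): 0.5, ("budget", "stapler"): 0.5}
traces = [("budget", "Q4 forecast"), ("budget", "Q4 forecast")]
consolidate(weights, traces)
# The replayed edge strengthens; the idle one decays — no new input needed.
```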
Enterprise Integration Hub
Agents speak plain language. The Integration Hub inside Cortex handles protocol, authentication, and data transformation automatically — no connector configuration, no custom code.
Independent validation · April 2026
10/10
Memory architecture
Category of one
10/10
Behavioral learning
Nobody else has this
9/10
Sleep consolidation
Unique in production
9.1
Overall score
out of 10
"Architecturally ahead of the entire field in memory & behavioral learning."
Independent validation · April 2026

Resources
Research, guides, and thinking from the Luthen team — on cognitive AI, enterprise agents, and the future of intelligent work.
AI Trends · Featured
Why we stopped using RAG and built a hippocampus instead
Two years building AI systems led us to one conclusion: semantic search was never designed to be a brain. Here's what we found, what we built, and why biological memory architecture changes everything for enterprise AI agents.
Product · HippoFabric
HippoFabric vs vector stores: a benchmark breakdown
90.6% multi-session accuracy vs 57.7% for ChatGPT. How we ran the LongMemEval tests and what the numbers actually mean.
AI Trends · Enterprise
The four generations of AI agents — and why most enterprises are stuck on generation two
From rule-based bots to cognitive agents that evolve. Where the market is, where it's going, and what separates generation four from everything before it.
Guide · Procurement
Agentic procurement: from purchase order to managed asset
The complete guide to deploying a procurement agent network — seven stages, eight agents, one closed loop that compounds with every transaction.
AI Trends · Knowledge Work
The knowledge worker replacement thesis — what AI can actually own
70% of knowledge work is retrievable. The honest breakdown of what agents replace, what they assist with, and what genuinely requires a human.
Book a demo
20 minutes. We'll show you persistent memory, behavioral learning, and the correction cascade — live in a real agent.
No commitment. No sales pitch. We'll show you exactly how HippoFabric solves the memory problem for your use case. Responds within 4 hours · hello@luthen.ai
© 2026 Luthen. All rights reserved.
Luthen is a registered trademark.