Kore Memory

The memory layer that thinks like a human.

Kore Memory is a Python library that gives AI agents a biologically-inspired memory system. It uses Ebbinghaus forgetting curves to decay memories over time, automatically scores importance without an LLM, and provides semantic search across 50+ languages -- all running fully offline on your machine.

pip install kore-memory

Why Kore Memory?

Most AI memory systems treat storage as a flat key-value dump or require expensive LLM calls for every operation. Kore Memory takes a fundamentally different approach:

  • Ebbinghaus Decay -- Memories naturally fade over time, just like human memory. Important memories last longer; trivial ones disappear. Each retrieval strengthens the memory (spaced repetition).
  • No LLM Required -- Importance scoring, compression, and search all run locally. No API keys, no cloud calls, no usage fees.
  • Fully Offline -- SQLite + local sentence-transformers. Works on air-gapped machines, CI pipelines, and edge devices.
  • Multi-Agent Ready -- Namespace isolation via X-Agent-Id headers. Each agent sees only its own memories.

Feature Comparison

| Feature | Kore Memory | Mem0 | Letta | Memori |
| --- | --- | --- | --- | --- |
| Offline operation | Yes | No | No | No |
| No LLM required | Yes | No | No | Yes |
| Memory decay (Ebbinghaus) | Yes | No | No | No |
| Auto-importance scoring (local) | Yes | Via LLM | No | No |
| Memory compression | Yes | No | No | No |
| Semantic search (50+ languages) | Yes (local) | Via API | Yes | Yes |
| Timeline API | Yes | No | No | No |
| Tags and relations graph | Yes | No | Yes | No |
| TTL / auto-expiration | Yes | No | No | No |
| MCP server support | Yes | No | No | No |
| Batch API (up to 100) | Yes | No | No | No |
| Export / Import (JSON) | Yes | No | Yes | No |
| Soft-delete / Archive | Yes | No | No | No |
| Prometheus metrics | Yes | No | No | No |
| Web dashboard | Yes | No | No | No |

Core Concepts

The Memory Pipeline

Every memory flows through five stages:

  1. Save -- Content is received via REST API, Python SDK, or MCP tool
  2. Score -- Importance is auto-scored locally on a 1--5 scale (or explicitly set)
  3. Embed -- A local sentence-transformer generates a semantic embedding
  4. Store -- Memory is persisted to SQLite with an initial decay_score of 1.0
  5. Decay -- Over time, the decay score decreases following the Ebbinghaus curve
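The exact decay function Kore uses internally isn't spelled out here, but the classic Ebbinghaus retention curve it is based on is R = e^(-t/S), where t is elapsed time and S is a stability constant. A minimal sketch (the `stability` parameter and its 24-hour default are illustrative assumptions, not Kore's actual values):

```python
import math

def decay_score(hours_elapsed: float, stability: float = 24.0) -> float:
    """Ebbinghaus retention curve: R = e^(-t/S).

    In a spaced-repetition scheme, S would grow with importance
    and with each retrieval, so strengthened memories fade slower.
    """
    return math.exp(-hours_elapsed / stability)

# A fresh memory starts at 1.0 and fades toward 0 over time.
print(round(decay_score(0.0), 2))   # → 1.0
print(round(decay_score(24.0), 2))  # → 0.37 (one stability period elapsed)
```

Raising `stability` flattens the curve, which is how "important memories last longer" falls out of a single formula.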

When you search, results are ranked by an effective score that combines three signals:

effective_score = similarity * decay * importance

Because all three factors multiply, memories that are recent, important, and semantically relevant rank highest; a memory weak on any one signal drops down the results.
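The ranking formula above can be tried out directly. This sketch uses made-up similarity and decay values purely for illustration:

```python
def effective_score(similarity: float, decay: float, importance: float) -> float:
    """Rank signal as documented: the product of the three factors."""
    return similarity * decay * importance

memories = [
    # (content, similarity, decay, importance on the 1-5 scale)
    ("old but important decision", 0.80, 0.40, 5),
    ("fresh trivial note",         0.85, 0.95, 1),
    ("recent relevant preference", 0.90, 0.90, 4),
]

ranked = sorted(memories, key=lambda m: effective_score(*m[1:]), reverse=True)
print([m[0] for m in ranked])
# → ['recent relevant preference', 'old but important decision', 'fresh trivial note']
```

Note how the high-importance decision outranks the fresher but trivial note even though it has decayed further.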

Memory Categories

Memories can be organized into categories:

| Category | Use Case |
| --- | --- |
| general | Default catch-all |
| project | Project-specific knowledge |
| trading | Financial / trading context |
| finance | Financial records and decisions |
| person | Information about people |
| preference | User preferences and settings |
| task | Tasks, deadlines, schedules |
| decision | Architectural and strategic decisions |

Namespace Isolation

Every API request can include an X-Agent-Id header. Memories are strictly isolated between agents -- agent A cannot read or search agent B's memories. This makes Kore safe for multi-tenant and multi-agent deployments.
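Conceptually, isolation means every stored row is keyed by the agent id and every query is filtered by it. A minimal in-memory sketch of the idea (this is not Kore's actual storage code):

```python
class NamespacedStore:
    """Toy illustration of per-agent namespace isolation."""

    def __init__(self) -> None:
        self._rows: list[tuple[str, str]] = []  # (agent_id, content)

    def save(self, agent_id: str, content: str) -> None:
        self._rows.append((agent_id, content))

    def search(self, agent_id: str, query: str) -> list[str]:
        # Only rows belonging to the requesting agent are ever visible.
        return [c for a, c in self._rows if a == agent_id and query in c]

store = NamespacedStore()
store.save("agent-a", "secret plan")
store.save("agent-b", "shopping list")

print(store.search("agent-b", "plan"))  # → [] (agent B cannot see agent A's memory)
print(store.search("agent-a", "plan"))  # → ['secret plan']
```

In Kore, the `X-Agent-Id` header plays the role of the `agent_id` argument here: it scopes every read and write.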

Quick Example

# Start the server
pip install "kore-memory[semantic]"
kore

# Save a memory
curl -X POST http://localhost:8765/save \
-H "Content-Type: application/json" \
-H "X-Agent-Id: my-agent" \
-d '{"content": "User prefers concise responses in Italian", "category": "preference"}'

# Search
curl "http://localhost:8765/search?q=user+preferences&limit=5" \
-H "X-Agent-Id: my-agent"

Or with the Python SDK:

from kore_memory import KoreClient

with KoreClient("http://localhost:8765", agent_id="my-agent") as kore:
    kore.save("User prefers dark mode", category="preference")
    results = kore.search("dark mode", limit=5)
    for memory in results:
        print(memory.content, memory.decay_score)

Requirements

  • Python 3.11+
  • SQLite with FTS5 (included in most Python installations)
  • No GPU required (CPU-only sentence-transformers)
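FTS5 ships with most CPython builds but not all, so it is worth verifying before deploying. A quick self-contained check (the `has_fts5` helper is just an illustration, not part of Kore's API):

```python
import sqlite3

def has_fts5() -> bool:
    """Return True if the bundled SQLite was compiled with FTS5 support."""
    conn = sqlite3.connect(":memory:")
    try:
        conn.execute("CREATE VIRTUAL TABLE fts_probe USING fts5(content)")
        return True
    except sqlite3.OperationalError:
        # "no such module: fts5" means the extension is missing.
        return False
    finally:
        conn.close()

print(has_fts5())
```

If this prints `False`, full-text search features would be unavailable until you install a Python build whose SQLite includes FTS5.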

What's Next?