
EverMemOS is an open-source, enterprise-grade intelligent memory system. Our mission is to build AI memory that never forgets, so that every conversation builds on previous understanding.



Note

Memory Genesis Hackathon 2026

Join our AI Memory Hackathon! Build innovative applications, plugins, or infrastructure improvements powered by EverMemOS.

Tracks:

  • Agent + Memory - Build intelligent agents with long-term, evolving memories
  • Platform Plugins - Integrate EverMemOS with VSCode, Chrome, Slack, Notion, LangChain, and more
  • OS Infrastructure - Optimize core functionality and performance

Get Started with the Hackathon Starter Kit

Join our Discord to find teammates and brainstorm ideas!




Welcome to EverMemOS

Welcome to EverMemOS! Join our community to help improve the project and collaborate with talented developers worldwide.

| Community | Purpose |
| --- | --- |
| Discord | Join our Discord community |
| Hugging Face Space | Join our Hugging Face community to explore our spaces and models |
| X | Follow updates on X |
| LinkedIn | Connect with us on LinkedIn |
| Reddit | Join the Reddit community |

🌟 Star and stay tuned with us



Introduction

💬 More than memory: it's foresight.

EverMemOS enables AI not only to remember what happened, but also to understand the meaning behind those memories and use them to guide decisions. Achieving 93% reasoning accuracy on the LoCoMo benchmark, EverMemOS provides long-term memory capabilities for conversational AI agents through structured extraction, intelligent retrieval, and progressive profile building.

EverMemOS Architecture Overview

How it works: EverMemOS extracts structured memories from conversations (Encoding), organizes them into episodes and profiles (Consolidation), and intelligently retrieves relevant context when needed (Retrieval).
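As a rough mental model of that pipeline (the class names, fields, and keyword matching below are illustrative assumptions, not the actual EverMemOS schema or retrieval algorithm), the three stages can be sketched in a few lines of Python:

from dataclasses import dataclass, field

@dataclass
class Episode:
    # One structured memory extracted from a conversation (Encoding)
    user_id: str
    content: str

@dataclass
class Profile:
    # Progressively built, consolidated view of a user (Consolidation)
    user_id: str
    episodes: list[Episode] = field(default_factory=list)

    def consolidate(self, episode: Episode) -> None:
        self.episodes.append(episode)

    def retrieve(self, query: str) -> list[Episode]:
        # Naive keyword match standing in for BM25/embedding/agentic search (Retrieval)
        terms = query.lower().split()
        return [e for e in self.episodes if any(t in e.content.lower() for t in terms)]

profile = Profile(user_id="user_001")
profile.consolidate(Episode("user_001", "I love playing soccer on weekends"))
print(profile.retrieve("soccer"))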

📄 Paper • 📚 Vision & Overview • 🏗️ Architecture • 📖 Full Documentation

Latest: v1.2.0 with API enhancements + DB efficiency improvements (Changelog)


Why EverMemOS?

  • 🎯 93% Accuracy - Best-in-class performance on LoCoMo benchmark
  • 🚀 Production Ready - Enterprise-grade with Milvus vector DB, Elasticsearch, MongoDB, and Redis
  • 🔧 Easy Integration - Simple REST API, works with any LLM
  • 📊 Multi-Modal Memory - Episodes, facts, preferences, relations
  • 🔍 Smart Retrieval - BM25, embeddings, or agentic search

EverMemOS Benchmark Results
EverMemOS outperforms existing memory systems across all major benchmarks


Quick Start

Prerequisites

  • Python 3.10+ • Docker 20.10+ • uv package manager • 4GB RAM

Verify Prerequisites:

# Verify you have the required versions
python --version # Should be 3.10+
docker --version # Should be 20.10+

Installation

# 1. Clone and navigate
git clone https://github.com/EverMind-AI/EverMemOS.git
cd EverMemOS
# 2. Start Docker services
docker-compose up -d
# 3. Install uv and dependencies
curl -LsSf https://astral.sh/uv/install.sh | sh
uv sync
# 4. Configure API keys
cp env.template .env
# Edit .env and set:
# - LLM_API_KEY (for memory extraction)
# - VECTORIZE_API_KEY (for embedding/rerank)
# 5. Start server
uv run python src/run.py --port 8001
# 6. Verify installation
curl http://localhost:8001/health
# Expected response: {"status": "healthy", ...}

✅ Server running at http://localhost:8001 • Full Setup Guide
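If you prefer Python to curl, the same health check can be scripted; this sketch only calls the /health endpoint shown above, and the exact response fields beyond "status" may vary:

import requests

# Ping the health endpoint of the server started in step 5
resp = requests.get("http://localhost:8001/health", timeout=5)
resp.raise_for_status()
print(resp.json())  # expected: {"status": "healthy", ...}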


Basic Usage

Store and retrieve memories with simple Python code:

import requests

API_BASE = "http://localhost:8001/api/v1"

# 1. Store a conversation memory
requests.post(f"{API_BASE}/memories", json={
    "message_id": "msg_001",
    "create_time": "2025-02-01T10:00:00+00:00",
    "sender": "user_001",
    "content": "I love playing soccer on weekends"
})

# 2. Search for relevant memories
response = requests.get(f"{API_BASE}/memories/search", json={
    "query": "What sports does the user like?",
    "user_id": "user_001",
    "memory_types": ["episodic_memory"],
    "retrieve_method": "hybrid"
})

result = response.json().get("result", {})
for memory_group in result.get("memories", []):
    print(f"Memory: {memory_group}")

📖 More Examples • 📚 API Reference • 🎯 Interactive Demos


Demo

Run the Demo

# Terminal 1: Start the API server
uv run python src/run.py --port 8001
# Terminal 2: Run the simple demo
uv run python src/bootstrap.py demo/simple_demo.py

Try it now: Follow the Demo Guide for step-by-step instructions.

Full Demo Experience

# Extract memories from sample data
uv run python src/bootstrap.py demo/extract_memory.py
# Start interactive chat with memory
uv run python src/bootstrap.py demo/chat_with_memory.py

See the Demo Guide for details.


Advanced Techniques


Documentation

| Guide | Description |
| --- | --- |
| Quick Start | Installation and configuration |
| Configuration Guide | Environment variables and services |
| API Usage Guide | Endpoints and data formats |
| Development Guide | Architecture and best practices |
| Memory API | Complete API reference |
| Demo Guide | Interactive examples |
| Evaluation Guide | Benchmark testing |

Evaluation & Benchmarking

EverMemOS achieves 93% overall accuracy on the LoCoMo benchmark, outperforming comparable memory systems.

Benchmark Results

EverMemOS Benchmark Results

Supported Benchmarks

  • LoCoMo - Long-context memory benchmark with single/multi-hop reasoning
  • LongMemEval - Multi-session conversation evaluation
  • PersonaMem - Persona-based memory evaluation

Quick Start

# Install evaluation dependencies
uv sync --group evaluation
# Run smoke test (quick verification)
uv run python -m evaluation.cli --dataset locomo --system evermemos --smoke
# Run full evaluation
uv run python -m evaluation.cli --dataset locomo --system evermemos
# View results
cat evaluation/results/locomo-evermemos/report.txt

📊 Full Evaluation Guide • 📈 Complete Results


Questions

EverMemOS is available on these AI-powered Q&A platforms. They can help you find answers quickly and accurately in multiple languages, covering everything from basic setup to advanced implementation details.

| Service | Link |
| --- | --- |
| DeepWiki | Ask DeepWiki |

Contributing

We love open-source energy! Whether you're squashing bugs, shipping features, sharpening docs, or just tossing in wild ideas, every PR moves EverMemOS forward. Browse Issues to find your perfect entry point, then show us what you've got. Let's build the future of memory together.


Tip

We welcome all kinds of contributions 🎉

Join us in making EverMemOS better! Every contribution makes a difference, from code to documentation. Share your projects on social media to inspire others!

Connect with one of the EverMemOS maintainers @elliotchen200 on X or @cyfyifanchen on GitHub for project updates, discussions, and collaboration opportunities.


Code Contributors



Contribution Guidelines

Read our Contribution Guidelines for code standards and Git workflow.


License & Citation & Acknowledgments

Apache 2.0 • Citation • Acknowledgments

