
GibsonAI

memori

Open-Source Memory Engine for LLMs, AI Agents & Multi-Agent Systems

Make LLMs context-aware with human-like memory, dual-mode retrieval, and automatic context injection.

Learn more · Join Discord

PyPI version Downloads License: MIT Python 3.8+


🎯 Philosophy

  • Second-memory for all your LLM work - Never repeat context again
  • Dual-mode memory injection - Conscious short-term memory + Auto intelligent search
  • Flexible database connections - SQLite, PostgreSQL, MySQL support
  • Pydantic-based intelligence - Structured memory processing with validation
  • Simple, reliable architecture - Just works out of the box

⚡ Quick Start

Install Memori:

pip install memorisdk

Example with OpenAI

  1. Install OpenAI:

pip install openai

  2. Set your OpenAI API key:

export OPENAI_API_KEY="sk-your-openai-key-here"

  3. Run this Python script:
from memori import Memori
from openai import OpenAI

# Initialize OpenAI client
openai_client = OpenAI()

# Initialize memory
memori = Memori(conscious_ingest=True)
memori.enable()

print("=== First Conversation - Establishing Context ===")
response1 = openai_client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user", 
        "content": "I'm working on a Python FastAPI project"
    }]
)

print("Assistant:", response1.choices[0].message.content)
print("\n" + "="*50)
print("=== Second Conversation - Memory Provides Context ===")

response2 = openai_client.chat.completions.create(
    model="gpt-4o-mini", 
    messages=[{
        "role": "user",
        "content": "Help me add user authentication"
    }]
)
print("Assistant:", response2.choices[0].message.content)
print("\n💡 Notice: Memori automatically knows about your FastAPI Python project!")

🚀 Ready to explore more?


🧠 How It Works

1. Universal Recording

memori.enable()  # Records ALL LLM conversations automatically

2. Intelligent Processing

  • Entity Extraction: Extracts people, technologies, projects
  • Smart Categorization: Facts, preferences, skills, rules
  • Pydantic Validation: Structured, type-safe memory storage
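
As a rough illustration of the Pydantic-based step, a structured memory record might look like the sketch below. The model and field names here are illustrative assumptions, not Memori's actual internal schema:

```python
from typing import List, Literal
from pydantic import BaseModel, Field

# Illustrative only: ProcessedMemory and its fields are assumptions,
# not Memori's internal models.
class ProcessedMemory(BaseModel):
    # Categories mirror the memory types described in this README
    category: Literal["fact", "preference", "skill", "rule", "context"]
    summary: str
    entities: List[str] = Field(default_factory=list)  # people, technologies, projects

memory = ProcessedMemory(
    category="skill",
    summary="User is experienced with FastAPI",
    entities=["FastAPI", "Python"],
)
print(memory.category, memory.entities)
```

The point of validation here is that a record with an unknown category fails fast at ingestion instead of silently corrupting the store.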

3. Dual Memory Modes

🧠 Conscious Mode - Short-Term Working Memory

conscious_ingest=True  # One-shot short-term memory injection
  • At Startup: Conscious agent analyzes long-term memory patterns
  • Memory Promotion: Moves essential conversations to short-term storage
  • One-Shot Injection: Injects working memory once at conversation start
  • Like Human Short-Term Memory: Names, current projects, preferences readily available

๐Ÿ” Auto Mode - Dynamic Database Search

auto_ingest=True  # Continuous intelligent memory retrieval
  • Every LLM Call: Retrieval agent analyzes user query intelligently
  • Full Database Search: Searches through entire memory database
  • Context-Aware: Injects relevant memories based on current conversation
  • Performance Optimized: Caching, async processing, background threads

🧠 Memory Modes Explained

Conscious Mode - Short-Term Working Memory

# Mimics human conscious memory - essential info readily available
memori = Memori(
    database_connect="sqlite:///my_memory.db",
    conscious_ingest=True,  # 🧠 Short-term working memory
    openai_api_key="sk-..."
)

How Conscious Mode Works:

  1. At Startup: Conscious agent analyzes long-term memory patterns
  2. Essential Selection: Promotes 5-10 most important conversations to short-term
  3. One-Shot Injection: Injects this working memory once at conversation start
  4. No Repeats: Won't inject again during the same session

Auto Mode - Dynamic Intelligent Search

# Searches entire database dynamically based on user queries
memori = Memori(
    database_connect="sqlite:///my_memory.db", 
    auto_ingest=True,  # 🔍 Smart database search
    openai_api_key="sk-..."
)

How Auto Mode Works:

  1. Every LLM Call: Retrieval agent analyzes user input
  2. Query Planning: Uses AI to understand what memories are needed
  3. Smart Search: Searches through entire database (short-term + long-term)
  4. Context Injection: Injects 3-5 most relevant memories per call

Combined Mode - Best of Both Worlds

# Get both working memory AND dynamic search
memori = Memori(
    conscious_ingest=True,  # Working memory once
    auto_ingest=True,       # Dynamic search every call
    openai_api_key="sk-..."
)

Intelligence Layers:

  1. Memory Agent - Processes every conversation with Pydantic structured outputs
  2. Conscious Agent - Analyzes patterns, promotes long-term → short-term memories
  3. Retrieval Agent - Intelligently searches and selects relevant context

What gets prioritized in Conscious Mode:

  • 👤 Personal Identity: Your name, role, location, basic info
  • ❤️ Preferences & Habits: What you like, work patterns, routines
  • 🛠️ Skills & Tools: Technologies you use, expertise areas
  • 📊 Current Projects: Ongoing work, learning goals
  • 🤝 Relationships: Important people, colleagues, connections
  • 🔄 Repeated References: Information you mention frequently

🗄️ Memory Types

| Type | Purpose | Example | Auto-Promoted |
|------|---------|---------|---------------|
| Facts | Objective information | "I use PostgreSQL for databases" | ✅ High frequency |
| Preferences | User choices | "I prefer clean, readable code" | ✅ Personal identity |
| Skills | Abilities & knowledge | "Experienced with FastAPI" | ✅ Expertise areas |
| Rules | Constraints & guidelines | "Always write tests first" | ✅ Work patterns |
| Context | Session information | "Working on e-commerce project" | ✅ Current projects |

🔧 Configuration

Simple Setup

from memori import Memori

# Conscious mode - Short-term working memory
memori = Memori(
    database_connect="sqlite:///my_memory.db",
    template="basic", 
    conscious_ingest=True,  # One-shot context injection
    openai_api_key="sk-..."
)

# Auto mode - Dynamic database search
memori = Memori(
    database_connect="sqlite:///my_memory.db",
    auto_ingest=True,  # Continuous memory retrieval
    openai_api_key="sk-..."
)

# Combined mode - Best of both worlds
memori = Memori(
    conscious_ingest=True,  # Working memory + 
    auto_ingest=True,       # Dynamic search
    openai_api_key="sk-..."
)

Advanced Configuration

from memori import Memori, ConfigManager

# Load from memori.json or environment
config = ConfigManager()
config.auto_load()

memori = Memori()
memori.enable()

Create memori.json:

{
  "database": {
    "connection_string": "postgresql://user:pass@localhost/memori"
  },
  "agents": {
    "openai_api_key": "sk-...",
    "conscious_ingest": true,
    "auto_ingest": false
  },
  "memory": {
    "namespace": "my_project",
    "retention_policy": "30_days"
  }
}
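
As a quick sanity check, memori.json is plain JSON, so it can be parsed with the standard library before Memori loads it. This snippet is purely illustrative and not part of Memori's API; it just confirms the structure shown above is well-formed:

```python
import json

# A trimmed copy of the memori.json structure from this README
raw = """
{
  "database": {"connection_string": "sqlite:///my_memory.db"},
  "agents": {"conscious_ingest": true, "auto_ingest": false},
  "memory": {"namespace": "my_project", "retention_policy": "30_days"}
}
"""
config = json.loads(raw)
# JSON booleans become Python bools, so typos like "True" would fail parsing
print(config["memory"]["namespace"])
```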

🔌 Universal Integration

Works with ANY LLM library:

memori.enable()  # Enable universal recording

# OpenAI
from openai import OpenAI
client = OpenAI()
client.chat.completions.create(...)

# LiteLLM
from litellm import completion
completion(model="gpt-4", messages=[...])

# Anthropic  
import anthropic
client = anthropic.Anthropic()
client.messages.create(...)

# All automatically recorded and contextualized!

๐Ÿ› ๏ธ Memory Management

Automatic Background Analysis

# Automatic analysis every 6 hours (when conscious_ingest=True)
memori.enable()  # Starts background conscious agent

# Manual analysis trigger
memori.trigger_conscious_analysis()

# Get essential conversations
essential = memori.get_essential_conversations(limit=5)

Memory Retrieval Tools

from memori.tools import create_memory_tool

# Create memory search tool for your LLM
memory_tool = create_memory_tool(memori)

# Use in function calling
tools = [memory_tool]
completion(model="gpt-4", messages=[...], tools=tools)
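
For orientation, a memory-search tool in OpenAI-style function calling generally looks like the dictionary below. The exact schema `create_memory_tool` emits may differ, so treat the field values here as assumptions and check Memori's docs for the real interface:

```python
# Hypothetical example of an OpenAI-style tool schema for memory search;
# the name and parameter shape are assumptions, not Memori's actual output.
memory_search_tool = {
    "type": "function",
    "function": {
        "name": "search_memory",
        "description": "Search stored conversation memory for relevant context",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "What to look for"}
            },
            "required": ["query"],
        },
    },
}
print(memory_search_tool["function"]["name"])
```

When the model emits a tool call against this schema, your code runs the actual memory search and feeds the result back as a tool message.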

Context Control

# Get relevant context for a query
context = memori.retrieve_context("Python testing", limit=5)
# Returns: 3 essential + 2 specific memories

# Search by category
skills = memori.search_memories_by_category("skill", limit=10)

# Get memory statistics
stats = memori.get_memory_stats()

📋 Database Schema

-- Core tables created automatically
chat_history        # All conversations
short_term_memory   # Recent context (expires)
long_term_memory    # Permanent insights  
rules_memory        # User preferences
memory_entities     # Extracted entities
memory_relationships # Entity connections
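
Since the default backend is SQLite, you can inspect the tables Memori created with the standard library. This read-only snippet assumes a database file named my_memory.db (as in the examples above) and makes no assumptions about column layouts:

```python
import sqlite3

# Connect to the SQLite file Memori writes to (file name is an example)
conn = sqlite3.connect("my_memory.db")
tables = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
)]
print(tables)  # table names such as chat_history once Memori has run
conn.close()
```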

๐Ÿ“ Project Structure

memori/
├── core/           # Main Memori class, database manager
├── agents/         # Memory processing with Pydantic
├── database/       # SQLite/PostgreSQL/MySQL support
├── integrations/   # LiteLLM, OpenAI, Anthropic
├── config/         # Configuration management
├── utils/          # Helpers, validation, logging
└── tools/          # Memory search tools

Examples

Framework Integrations

Memori works seamlessly with popular AI frameworks:

| Framework | Description | Example | Features |
|-----------|-------------|---------|----------|
| 🤖 Agno | Memory-enhanced agent framework integration with persistent conversations | Simple chat agent with memory search | Memory tools, conversation persistence, contextual responses |
| 👥 CrewAI | Multi-agent system with shared memory across agent interactions | Collaborative agents with memory | Agent coordination, shared memory, task-based workflows |
| 🌊 Digital Ocean AI | Memory-enhanced customer support using Digital Ocean's AI platform | Customer support assistant with conversation history | Context injection, session continuity, support analytics |
| 🔗 LangChain | Enterprise-grade agent framework with advanced memory integration | AI assistant with LangChain tools and memory | Custom tools, agent executors, memory persistence, error handling |
| OpenAI Agent | Memory-enhanced OpenAI Agent with function calling and user preference tracking | Interactive assistant with memory search and user info storage | Function calling tools, memory search, preference tracking, async conversations |
| 🚀 Swarms | Multi-agent system framework with persistent memory capabilities | Memory-enhanced Swarms agents with auto/conscious ingestion | Agent memory persistence, multi-agent coordination, contextual awareness |

Interactive Demos

Explore Memori's capabilities through these interactive demonstrations:

| Title | Description | Tools Used | Live Demo |
|-------|-------------|------------|-----------|
| 🌟 Personal Diary Assistant | A comprehensive diary assistant with mood tracking, pattern analysis, and personalized recommendations. | Streamlit, LiteLLM, OpenAI, SQLite | Run Demo |
| 🌍 Travel Planner Agent | Intelligent travel planning with CrewAI agents, real-time web search, and memory-based personalization. Plans complete itineraries with budget analysis. | CrewAI, Streamlit, OpenAI, SQLite | |
| 🧑‍🔬 Researcher Agent | Advanced AI research assistant with persistent memory, real-time web search, and comprehensive report generation. Builds upon previous research sessions. | Agno, Streamlit, OpenAI, ExaAI, SQLite | Run Demo |

๐Ÿค Contributing

📄 License

MIT License - see LICENSE for details.


Made for developers who want their AI agents to remember and learn
