Memento: Fine-tuning LLM Agents without Fine-tuning LLMs

A memory-based, continual-learning framework that helps LLM agents improve from experience without updating model weights.

Planner–Executor Architecture · Case-Based Reasoning · MCP Tooling · Memory-Augmented Learning



Figures:

  • Memento vs. baselines on the GAIA validation and test sets.
  • Ablation study of Memento across benchmarks.
  • Continual-learning curves across memory designs.
  • Memento's accuracy improvements on OOD datasets.

📰 News

  • [2025.09.03] We've set up a WeChat group to make collaboration and discussion around this project easier. Join to share your thoughts, ask questions, or contribute ideas! 🔥 🔥 🔥 Join our WeChat Group now!
  • [2025.08.30] We're excited to announce that our non-parametric Case-Based Reasoning inference code is now officially open-sourced! 🎉
  • [2025.08.28] We've created a Discord server to make discussions and collaboration around this project easier. Feel free to join and share your thoughts, ask questions, or contribute ideas! 🔥 🔥 🔥 Join our Discord!
  • [2025.08.27] Thanks for your interest in our work! We'll release our CBR code next week and our Parametric Memory code next month, and we'll keep posting updates as development continues.
  • [2025.08.27] We added a new crawler MCP tool in server/ai_crawler.py for web crawling and query-aware content compression to reduce token costs.
  • [2025.08.26] We added the SerpAPI (https://serpapi.com/search-api) MCP tool so you can skip the SearxNG search Docker setup and speed up development.

🔥 Key Features

  • No LLM weight updates. Memento reframes continual learning as memory-based online reinforcement learning over a memory-augmented MDP. A neural case-selection policy guides actions; experiences are stored and reused via efficient Read/Write operations.
  • Two-stage planner–executor loop. A CBR-driven Planner decomposes tasks and retrieves relevant cases; an Executor runs each subtask as an MCP client, orchestrating tools and writing back outcomes.
  • Comprehensive tool ecosystem. Built-in support for web search, document processing, code execution, image/video analysis, and more through a unified MCP interface.
  • Strong benchmark performance. Achieves competitive results across GAIA, DeepResearcher, SimpleQA, and HLE benchmarks.

🧠 Core Concept

Learn from experience, not gradients. Memento logs successful and failed trajectories into a Case Bank and retrieves them by value to steer planning and execution, enabling low-cost, transferable, online continual learning.
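
To make the Read/Write idea concrete, here is a minimal sketch of a Case Bank using cosine similarity plus a success bonus for retrieval. The class, field, and method names are illustrative assumptions, not Memento's actual implementation.

# Illustrative sketch only; not Memento's actual Case Bank code.
from dataclasses import dataclass

import numpy as np


@dataclass
class Case:
    query_embedding: np.ndarray   # embedding of the task description
    plan: str                     # plan or action taken for that task
    reward: float                 # e.g. 1.0 for success, 0.0 for failure


class CaseBank:
    """Stores past cases (Write) and retrieves the most relevant ones (Read)."""

    def __init__(self) -> None:
        self.cases: list[Case] = []

    def write(self, case: Case) -> None:
        # Write operation: append the final-step experience to the bank.
        self.cases.append(case)

    def read(self, query_embedding: np.ndarray, k: int = 4) -> list[Case]:
        # Read operation: rank cases by cosine similarity plus a bonus for success.
        def score(case: Case) -> float:
            denom = np.linalg.norm(case.query_embedding) * np.linalg.norm(query_embedding) + 1e-8
            return float(case.query_embedding @ query_embedding) / denom + case.reward
        return sorted(self.cases, key=score, reverse=True)[:k]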


🏗️ Architecture

Core Components

  • Meta-Planner: Breaks down high-level queries into executable subtasks using GPT-4.1
  • Executor: Executes individual subtasks using o3 or other models via MCP tools
  • Case Memory: Stores final-step tuples (s_T, a_T, r_T) for experience replay
  • MCP Tool Layer: Unified interface for external tools and services
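
The two-stage loop formed by these components can be summarized as follows. This is a compressed, illustrative sketch: call_llm is a stub for an OpenAI-compatible chat call, and the plain-list memory stands in for the Case Bank; none of these names come from agent.py.

# Illustrative control flow only; call_llm and the memory handling are stubs.
def call_llm(model: str, prompt: str) -> str:
    # Placeholder for a chat-completion call to an OpenAI-compatible endpoint.
    raise NotImplementedError


def run_task(query: str, memory: list[str], planner_model: str = "gpt-4.1", executor_model: str = "o3") -> str:
    # 1) Planner (CBR): retrieve a few past cases and decompose the query into subtasks.
    retrieved = memory[-4:]  # stand-in for similarity-based Case Bank retrieval (K=4)
    plan = call_llm(planner_model, f"Task: {query}\nRelevant cases: {retrieved}\nWrite a short step-by-step plan.")
    subtasks = [line.strip() for line in plan.splitlines() if line.strip()]

    # 2) Executor: run each subtask (in Memento, as an MCP client orchestrating tools).
    results: list[str] = []
    for subtask in subtasks:
        results.append(call_llm(executor_model, f"Subtask: {subtask}\nResults so far: {results}"))

    # 3) Write the outcome back so future, similar tasks can reuse the experience.
    answer = results[-1] if results else ""
    memory.append(f"Task: {query}\nPlan: {plan}\nAnswer: {answer}")
    return answer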

Tool Ecosystem

  • Web Research: Live search and controlled crawling via SearxNG
  • Document Processing: Multi-format support (PDF, Office, images, audio, video)
  • Code Execution: Sandboxed Python workspace with security controls
  • Data Analysis: Excel processing, mathematical computations
  • Media Analysis: Image captioning, video narration, audio transcription

🚀 Quick Start

Prerequisites

  • Python 3.11+
  • OpenAI API key (or compatible API endpoint)
  • SearxNG instance for web search
  • FFmpeg (system-level binary required for video processing)

Installation

Method 1: Using uv (Recommended - Fast & Modern)

# Clone repository
git clone https://github.com/Agent-on-the-Fly/Memento
cd Memento

# Install uv if not already installed
curl -LsSf https://astral.sh/uv/install.sh | sh

# Sync dependencies and create virtual environment automatically
uv sync

# Activate the virtual environment
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
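
The repository also ships a requirements.txt (see Project Structure below). Assuming it covers the same dependencies as uv sync, a conventional pip/venv setup should work as an alternative:

# Alternative: pip + venv (assumes requirements.txt mirrors the uv dependencies)
python3.11 -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
pip install -r requirements.txt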

System Dependencies Installation

FFmpeg Installation (Required)

FFmpeg is required for video processing functionality. The ffmpeg-python package in our dependencies requires a system-level FFmpeg binary.

Windows:

# Option 1: Using Conda (Recommended for isolated environment)
conda install -c conda-forge ffmpeg

# Option 2: Download from official website
# Visit https://ffmpeg.org/download.html and add to PATH

macOS:

# Using Homebrew
brew install ffmpeg

Linux:

# Debian/Ubuntu
sudo apt-get update && sudo apt-get install ffmpeg
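
Whichever method you use, you can confirm the binary is visible on PATH:

# Verify the FFmpeg installation
ffmpeg -version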

Web Scraping & Search Setup

# Set up crawl4ai (installed with the project dependencies) and verify the installation
crawl4ai-setup
crawl4ai-doctor

# Install playwright browsers
playwright install

Environment Variables Configuration

Create a .env file in the project root and configure the following API keys and service endpoints:

# OPENAI API
OPENAI_API_KEY=your_openai_api_key_here
OPENAI_BASE_URL=https://api.openai.com/v1  # or your custom endpoint

#===========================================
# Tools & Services API
#===========================================
# Chunkr API (https://chunkr.ai/)
CHUNKR_API_KEY=your_chunkr_api_key_here

# Jina API
JINA_API_KEY=your_jina_api_key_here

# ASSEMBLYAI API 
ASSEMBLYAI_API_KEY=your_assemblyai_api_key_here

Note: Replace your_*_api_key_here with your actual API keys. Some services are optional depending on which tools you plan to use.

SearxNG Setup

For web search capabilities, set up SearxNG. You can follow https://github.com/searxng/searxng-docker/ to set up the Docker stack, or use the configuration bundled in this repository's searxng-docker/ directory.

# In a new terminal
cd ./Memento/searxng-docker
docker compose up -d
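
To check that the stack came up correctly (the service name below assumes the upstream searxng-docker compose file):

# Confirm the containers are running and inspect logs if needed
docker compose ps
docker compose logs -f searxng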

Basic Usage

Interactive Mode

python client/agent.py

🔧 Configuration

Model Selection

  • Planner Model: Defaults to gpt-4.1 for task decomposition
  • Executor Model: Defaults to o3 for task execution
  • Custom Models: Support for any OpenAI-compatible API
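
For example, to point Memento at a self-hosted OpenAI-compatible server, you only need to change the endpoint variables in .env. The URL and key below are placeholders for illustration, not project defaults:

# Hypothetical local OpenAI-compatible endpoint (e.g. a vLLM server); values are placeholders
OPENAI_BASE_URL=http://localhost:8000/v1
OPENAI_API_KEY=your_local_api_key_here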

Tool Configuration

  • Search: Configure SearxNG instance URL
  • Code Execution: Customize import whitelist and security settings
  • Document Processing: Set cache directories and processing limits

📊 Performance

Benchmark Results

  • GAIA: 87.88% (Val, Pass@3 Top-1) and 79.40% (Test)
  • DeepResearcher: 66.6% F1 / 80.4% PM, with +4.7–9.6 absolute gains on OOD datasets
  • SimpleQA: 95.0%
  • HLE: 24.4% PM (close to GPT-5 at 25.32%)

Key Insights

  • Small, high-quality memory works best: Retrieval K=4 yields peak F1/PM
  • Planning + CBR consistently improves performance
  • Concise, structured planning outperforms verbose deliberation

🛠️ Development

Project Structure

Memento/
├── client/                   # Main agent implementation
│   ├── agent.py             # Hierarchical client with planner–executor
│   └── no_parametric_cbr.py # Non-parametric case-based reasoning
├── server/                   # MCP tool servers
│   ├── code_agent.py        # Code execution & workspace management
│   ├── search_tool.py       # Web search via SearxNG
│   ├── serp_search.py       # SERP-based search tool
│   ├── documents_tool.py    # Multi-format document processing
│   ├── image_tool.py        # Image analysis & captioning
│   ├── video_tool.py        # Video processing & narration
│   ├── excel_tool.py        # Spreadsheet processing
│   ├── math_tool.py         # Mathematical computations
│   ├── craw_page.py         # Web page crawling
│   └── ai_crawler.py        # Query-aware compression crawler
├── interpreters/             # Code execution backends
│   ├── docker_interpreter.py
│   ├── e2b_interpreter.py
│   ├── internal_python_interpreter.py
│   └── subprocess_interpreter.py
├── memory/                   # Memory components / data
├── data/                     # Sample data / cases
├── searxng-docker/           # SearxNG Docker setup
├── Figure/                   # Figures for README/paper
├── README.md
├── requirements.txt
└── LICENSE

Adding New Tools

  1. Create a new FastMCP server in the server/ directory
  2. Implement your tool functions with proper error handling
  3. Register the tool with the MCP protocol
  4. Update the client's server list in agent.py
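
A minimal sketch of steps 1–3, assuming the FastMCP API from the MCP Python SDK; the server name, file name, and tool function are hypothetical examples, not part of Memento:

# Hypothetical example server: server/word_count_tool.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("word-count")  # server name advertised over the MCP protocol

@mcp.tool()
def count_words(text: str) -> int:
    """Return the number of whitespace-separated words in the given text."""
    return len(text.split())

if __name__ == "__main__":
    mcp.run()  # serve the tool (stdio by default) so the agent client can connect

After that, add the new server to the client's server list in agent.py (step 4) so the executor can launch and call it.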

Custom Interpreters

Extend the interpreters/ module to add new execution backends:

import asyncio
from interpreters.base import BaseInterpreter

class CustomInterpreter(BaseInterpreter):
    async def execute(self, code: str) -> str:
        # Example backend: run the snippet in a Python subprocess and return its combined output.
        proc = await asyncio.create_subprocess_exec(
            "python", "-c", code,
            stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.STDOUT,
        )
        out, _ = await proc.communicate()
        return out.decode()

📋 TODO

Upcoming Features & Improvements

  • [ ] Add Case Bank Reasoning: Implement memory-based case retrieval and reasoning system
  • [ ] Add User Personal Memory Mechanism: Implement user-preference search
  • [ ] Refine Tools & Add More Tools: Enhance existing tools and expand the tool ecosystem
  • [ ] Test More New Benchmarks: Evaluate performance on additional benchmark datasets

Limitations

  • Long-horizon tasks: GAIA Level-3 remains challenging due to compounding errors
  • Frontier knowledge: HLE performance is limited when relying on tooling alone; questions requiring frontier domain knowledge remain hard
  • Open-source coverage: executor validation in fully open-source pipelines is still limited

🙏 Acknowledgement

  • Some parts of the code in the toolkits and interpreters are adapted from Camel-AI.

📚 Citation

If Memento helps your work, please cite:

@article{zhou2025agentfly,
  title={AgentFly: Fine-tuning LLM Agents without Fine-tuning LLMs},
  author={Zhou, Huichi and Chen, Yihang and Guo, Siyuan and Yan, Xue and Lee, Kin Hei and Wang, Zihan and Lee, Ka Yiu and Zhang, Guchun and Shao, Kun and Yang, Linyi and others},
  journal={arXiv preprint arXiv:2508.16153},
  year={2025}
}

@article{huang2025deep,
  title={Deep Research Agents: A Systematic Examination And Roadmap},
  author={Huang, Yuxuan and Chen, Yihang and Zhang, Haozheng and Li, Kang and Fang, Meng and Yang, Linyi and Li, Xiaoguang and Shang, Lifeng and Xu, Songcen and Hao, Jianye and others},
  journal={arXiv preprint arXiv:2506.18096},
  year={2025}
}

For a broader overview, please check out our survey on GitHub.


🤝 Contributing

We welcome contributions! Please see our contributing guidelines for:

  • Bug reports and feature requests
  • Code contributions and pull requests
  • Documentation improvements
  • Tool and interpreter extensions

Star History

Star History Chart
