roo-code-memory-bank
🧠 Roo Code Memory Bank - Now with a new Debug mode! (Thanks @TheRealAlexV) Solve the AI context challenge: maintain deep project understanding across sessions with our structured VS Code memory system. Never repeat project details again! ✨
Roo Code Memory Bank is a tool for AI-assisted development that maintains project context across sessions. It provides a structured memory system integrated with VS Code, ensuring the AI assistant keeps a deep understanding of your project. Key components include the Memory Bank for persistent storage, Mode Rules for behavior configuration, VS Code integration for a seamless development experience, and real-time updates for continuous context synchronization. Users configure custom instructions, initialize the Memory Bank, and organize its files within the project root directory. The Memory Bank structure includes files for tracking session state, technical decisions, project overview, and progress, plus optional project brief and system patterns documentation. Features include persistent context, smart workflows for specialized tasks, and structured, cross-referenced project knowledge. Pro tips cover handling multiple projects, using Debug mode for troubleshooting, and managing session updates for synchronization.
README:
Roo Code Memory Bank solves a critical challenge in AI-assisted development: maintaining context across sessions. By providing a structured memory system integrated with VS Code, it ensures your AI assistant maintains a deep understanding of your project across sessions.
```mermaid
graph LR
    A[Memory Bank] --> B[Core Files]
    A --> C[Mode Rules]
    A --> D[VS Code UI]
    B --> E[Project Context]
    B --> F[Decisions]
    B --> G[Progress]
    C --> H[Architect]
    C --> I[Code]
    C --> J[Ask]
    C --> K1[Debug]
    K[Real-time Updates] --> B
    K --> L[Continuous Sync]
    L --> M[Auto-save]
    L --> N[Event Tracking]
```

- 🧠 Memory Bank: Persistent storage for project knowledge
- 📋 Mode Rules: YAML-based behavior configuration
- 🔧 VS Code Integration: Seamless development experience
- ⚡ Real-time Updates: Continuous context synchronization
Download and copy these files to your project's root directory:
| Mode | Rule File | Purpose |
|------|-----------|---------|
| Code | `.clinerules-code` | Implementation and coding tasks |
| Architect | `.clinerules-architect` | System design and architecture |
| Ask | `.clinerules-ask` | Information and assistance |
| Debug | `.clinerules-debug` | Troubleshooting and problem-solving |
| Modes | `.roomodes` | Additional modes |
⚠️ Important: Leave the "Custom Instructions" text boxes empty in VS Code settings (Roo Code Prompts section)
- Switch to Architect or Code mode in Roo Code chat
- Send a message (e.g., "hello")
- Roo will automatically:
  - 🔍 Scan for the `memory-bank/` directory
  - 📁 Create it if missing (with your approval)
  - 📝 Initialize core files
  - 🚦 Provide next steps
💡 Pro Tip: Project Brief
Create a projectBrief.md in your project root before initialization to give Roo immediate project context.
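For example, a minimal `projectBrief.md` might look like the sketch below (the headings and the sample project are purely illustrative; any structure that captures your goals and constraints works):

```markdown
# Project Brief

## Goal
A CLI tool that syncs local notes to a team wiki.

## Key Requirements
- TypeScript, targeting Node 18+
- Works offline and queues changes for later sync

## Constraints
- No external database; state lives in JSON files under .notes/
```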
```
project-root/
├── .clinerules-architect
├── .clinerules-code
├── .clinerules-ask
├── .clinerules-debug
├── .roomodes
├── memory-bank/
│   ├── activeContext.md
│   ├── productContext.md
│   ├── progress.md
│   └── decisionLog.md
└── projectBrief.md
```
```mermaid
graph TD
    MB[memory-bank/] --> AC[activeContext.md]
    MB --> DL[decisionLog.md]
    MB --> PC[productContext.md]
    MB --> PR[progress.md]
    MB --> PB[projectBrief.md]
    MB --> SP[systemPatterns.md]

    subgraph Core Files
        AC[Current Session State]
        DL[Technical Decisions]
        PC[Project Overview]
        PR[Progress Tracking]
    end

    subgraph Optional
        PB[Project Brief]
        SP[System Patterns]
    end
```

📖 View File Descriptions
| File | Purpose |
|------|---------|
| `activeContext.md` | Tracks current goals, decisions, and session state |
| `decisionLog.md` | Records architectural choices and their rationale |
| `productContext.md` | Maintains high-level project context and knowledge |
| `progress.md` | Documents completed work and upcoming tasks |
| `projectBrief.md` | Contains initial project requirements (optional) |
| `systemPatterns.md` | Documents recurring patterns and standards |
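To make these files concrete, a `decisionLog.md` entry might look like the sketch below (the date, decision, and wording are hypothetical; Roo maintains the actual format for you):

```markdown
## 2025-03-02 - Adopt SQLite for local persistence

**Context:** JSON file storage caused write conflicts between concurrent sessions.

**Decision:** Store all local state in a single SQLite database file.

**Consequences:** Adds a native dependency, but removes the custom file-locking code.
```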
- Remembers project details across sessions
- Maintains consistent understanding of your codebase
- Tracks decisions and their rationale
```mermaid
graph LR
    A[Architect Mode] -->|Real-time Design Updates| B[Memory Bank]
    C[Code Mode] -->|Real-time Implementation| B
    D[Ask Mode] -->|Real-time Insights| B
    F[Debug Mode] -->|Real-time Analysis| B
    B -->|Instant Context| A
    B -->|Instant Context| C
    B -->|Instant Context| D
    B -->|Instant Context| F
    E[Event Monitor] -->|Continuous Sync| B
```

- Mode-based operation for specialized tasks
- Automatic context switching
- Project-specific customization via rules
- Structured documentation with clear purposes
- Technical decision tracking with rationale
- Automated progress monitoring
- Cross-referenced project knowledge
```mermaid
graph TD
    A[Workspace] --> B[Project 1]
    A --> C[Project 2]
    B --> D[memory-bank/]
    C --> E[memory-bank/]
    D --> F[Automatic Detection]
    E --> F
```

Roo automatically handles multiple Memory Banks in your workspace!
Roo Code Memory Bank includes a powerful Debug mode for systematic troubleshooting and problem-solving. This mode operates with surgical precision, focusing on root cause analysis and evidence-based verification.
- 🔍 Non-destructive Investigation: Read-only access ensures system integrity during analysis
- 📊 Diagnostic Tools: Access to logging, tracing, and system analysis tools
- 🔬 Systematic Analysis: Methodical problem investigation and isolation
- 🎯 Root Cause Identification: Traces error propagation through all system layers
- ✅ Evidence-based Verification: Validates findings through multiple checkpoints
Debug mode actively monitors and updates Memory Bank files based on:
- 🐛 Bug discoveries and error patterns
- 💾 Memory leaks and resource issues
- 🔄 Race conditions and deadlocks
- 📈 Performance bottlenecks
- 📝 Log analysis and trace outputs
```mermaid
graph TD
    A[Debug Mode] --> B[Analysis]
    B --> C[Findings]
    C --> D[Memory Bank Updates]
    D --> E[activeContext.md]
    D --> F[progress.md]
    D --> G[decisionLog.md]
    E --> H[Current Issues]
    F --> I[Debug Progress]
    G --> J[Solution Decisions]
```

Switch to Debug mode when you need to:
- Investigate system behavior
- Analyze failure patterns
- Isolate root causes
- Verify fixes
- Document debugging insights
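As a hedged illustration of the flow above, a debugging session might leave entries like these in the Memory Bank (file names match the structure described earlier; the specific issue and wording are hypothetical):

```markdown
<!-- activeContext.md -->
## Current Issues
- Intermittent 502s from the /export endpoint under load (suspected connection pool exhaustion)

<!-- progress.md -->
## Debug Progress
- [x] Reproduced the 502 with 50 concurrent requests
- [ ] Verify the pool-size fix by re-running the load test
```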
- ⚡ Real-time Updates: Memory Bank automatically stays synchronized with your work
- 💾 Manual Updates: Use "UMB" or "update memory bank" as a fallback when:
  - Ending a session unexpectedly
  - Halting mid-task
  - Recovering from connection issues
  - Forcing a full synchronization
Apache 2.0 © 2025 GreatScottyMac