
RooFlow
RooFlow - Enhanced Memory Bank System with ☢️Footgun Power☢️. A next-gen Memory Bank system with five integrated modes and system-level customization, using Roo Code's experimental "Footgun" feature for deep AI assistant customization while maintaining efficient token usage!

RooFlow is a Memory Bank system for the Roo Code VS Code extension that enhances AI-assisted development by providing persistent project context and optimized mode interactions. It reduces token consumption and streamlines workflow by integrating Architect, Code, Test, Debug, and Ask modes. The tool simplifies setup, offers real-time updates, and provides clearer instructions through YAML-based rule files. It includes components like Memory Bank, System Prompts, VS Code Integration, and Real-time Updates. Users can install RooFlow by downloading specific files, placing them in the project structure, and running an insert-variables script. They can then start a chat, select a mode, interact with Roo, and use the 'Update Memory Bank' command for synchronization. The Memory Bank structure includes files for active context, decision log, product context, progress tracking, and system patterns. RooFlow features persistent context, real-time updates, mode collaboration, and reduced token consumption.
README:
Still working on full MCP functionality in RooFlow modes. If you have issues, there is a global Default mode available which runs with the Roo Code default system prompt.
Persistent Project Context and Streamlined AI-Assisted Development
RooFlow enhances AI-assisted development in VS Code by providing persistent project context and optimized mode interactions, resulting in reduced token consumption and a more efficient workflow. It builds upon the concepts of the Roo Code Memory Bank, but streamlines the process and introduces a more integrated system of modes. RooFlow ensures your AI assistant maintains a deep understanding of your project across sessions, even after interruptions.
- Reduced Token Consumption: Optimized prompts and instructions minimize token usage.
- Five Integrated Modes: Architect, Code, Test, Debug, and Ask modes work together seamlessly.
- Simplified Setup: Easier installation and configuration.
- Streamlined Real-time Updates: More efficient and targeted Memory Bank updates.
- Clearer Instructions: Improved YAML-based rule files for better readability and maintainability.
```mermaid
flowchart LR
    A["RooFlow"] --> D["Toolkit"]
    A --> M["Real-time Updates"]
    D --> C["Mode Rules"]
    B["Memory Bank"] --> E["Product Context"] & N["Active Context"] & F["Decisions"] & G["Progress"]
    C --> H["Architect"] & I["Code"] & J["Ask"] & K["Debug"] & L["Test"]
    M --> B
```
- 🧠 Memory Bank: Persistent storage for project knowledge (automatically managed).
- 💻 System Prompts: YAML-based core instructions for each mode (`.roo/system-prompt-[mode]`).
- 🔧 VS Code Integration: Seamless development experience within VS Code.
- ⚡ Real-time Updates: Automatic Memory Bank updates triggered by significant events.
- Install Roo Code Extension: Ensure you have the Roo Code extension installed in VS Code.
- Download RooFlow Files: Download the following files from this repository:
  - `system-prompt-architect`
  - `system-prompt-ask`
  - `system-prompt-code`
  - `system-prompt-debug`
  - `system-prompt-test`
  - `.rooignore`
  - `.roomodes`
  - `insert-variables.cmd` (for Windows) or `insert-variables.sh` (for Unix/Linux/macOS)
- Place Files in Project:
  - Create a directory named `.roo` in your project's root directory.
  - Place the `system-prompt-[mode]` files inside the `.roo` directory.
  - Place the `.rooignore` file in the project's root directory, or add `!memory-bank/` to your existing `.rooignore` file (see the sketch after this list).
  - Place the `.roomodes` file in the project's root directory, or add its contents to your existing `.roomodes` file.
  - Place the appropriate `insert-variables.[sh/cmd]` script for your platform in the project's root directory.
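If you already maintain a `.rooignore`, the important part is that `memory-bank/` stays visible to Roo. A minimal sketch, assuming `.rooignore` uses `.gitignore`-style negation patterns (every entry here except `!memory-bank/` is a hypothetical example):

```
# Hypothetical existing ignore rules
node_modules/
dist/

# Keep the Memory Bank visible to Roo (per the RooFlow setup instructions)
!memory-bank/
```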
Your project structure should look like this:
```
project-root
├── .roo
│   ├── system-prompt-architect
│   ├── system-prompt-ask
│   ├── system-prompt-code
│   ├── system-prompt-debug
│   └── system-prompt-test
├── memory-bank (created automatically by Roo after your first prompt)
│   ├── activeContext.md
│   ├── decisionLog.md
│   ├── productContext.md
│   ├── progress.md
│   └── systemPatterns.md
├── .rooignore
├── .roomodes
└── insert-variables.[sh/cmd]
```
- Run the insert-variables script:

On Windows:
- Open Command Prompt or PowerShell.
- Navigate to your project: `cd path\to\your\project`
- Run the script:
  - From Command Prompt: `insert-variables.cmd`
  - From PowerShell: `.\insert-variables.cmd`

Troubleshooting (Windows)
- If you get "access denied" or execution policy errors:
  - Open PowerShell as Administrator.
  - Run this command once: `Set-ExecutionPolicy RemoteSigned -Scope CurrentUser`
  - Close the Administrator PowerShell.
  - Try running the script again from your project directory.
- If you see "Error: .roo directory not found", verify your directory structure.
- If using PowerShell 7+, run the script as: `cmd /c insert-variables.cmd`
On Unix/Linux/macOS:
- Open Terminal.
- Navigate to your project: `cd path/to/your/project`
- Make the script executable: `chmod +x insert-variables.sh`
- Run the script: `./insert-variables.sh`

Troubleshooting (Unix/Linux/macOS)
- If you see "Permission denied", run: `sudo chmod +x insert-variables.sh`
- If you see "Error: .roo directory not found", verify your directory structure.
The script will:
- Detect your system configuration
- Process each system prompt file
- Show "Processing" and "Completed" messages for each file
- Display "Done" when finished
The script replaces these placeholders with your system-specific values:
- OS_PLACEHOLDER (e.g., "Windows 10 Pro" or "Ubuntu 22.04")
- SHELL_PLACEHOLDER (e.g., "cmd" or "bash")
- HOME_PLACEHOLDER (your home directory)
- WORKSPACE_PLACEHOLDER (your project directory)
- GLOBAL_SETTINGS_PLACEHOLDER (Roo Code global settings path)
- MCP_LOCATION_PLACEHOLDER (Roo Code MCP directory path)
- MCP_SETTINGS_PLACEHOLDER (Roo Code MCP settings path)
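For illustration, here is a minimal sketch of the kind of substitution the script performs. This is not the shipped `insert-variables.sh`: the detection commands (`uname`, `$SHELL`, `pwd`) and the subset of placeholders handled are assumptions made for brevity.

```bash
#!/usr/bin/env bash
# Minimal illustrative sketch -- NOT the shipped insert-variables.sh.
# Shows how OS/shell/path placeholders could be filled in with sed.

OS_VALUE="$(uname -sr)"              # e.g. "Linux 6.5.0" (assumption: uname-based detection)
SHELL_VALUE="$(basename "$SHELL")"   # e.g. "bash"
WORKSPACE_VALUE="$(pwd)"             # the project directory the script runs from

for file in .roo/system-prompt-*; do
  echo "Processing $file"
  sed -i.bak \
    -e "s|OS_PLACEHOLDER|$OS_VALUE|g" \
    -e "s|SHELL_PLACEHOLDER|$SHELL_VALUE|g" \
    -e "s|HOME_PLACEHOLDER|$HOME|g" \
    -e "s|WORKSPACE_PLACEHOLDER|$WORKSPACE_VALUE|g" \
    "$file" && rm -f "$file.bak"     # -i.bak keeps this portable across GNU and BSD sed
  echo "Completed $file"
done
echo "Done"
```

The real script additionally fills in the Roo Code global settings and MCP paths listed above.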
After running the script:
- Verify that the `.roo/system-prompt-*` files contain your system paths.
- Start using VS Code with the Roo Code extension.
- The Memory Bank will be initialized on first use.
- Start a Chat: Open a new Roo Code chat in your project.
- Select a Mode: Choose the appropriate mode (Architect, Code, Test, Debug, Ask) for your task.
- Interact with Roo: Give Roo instructions and ask questions. Roo will automatically use the Memory Bank to maintain context.
- Memory Bank Initialization: If you start a chat in a project without a `memory-bank/` directory, Roo will suggest switching to Architect mode and guide you through the initialization process.
- "Update Memory Bank" Command: At any time, you can type "Update Memory Bank" or "UMB" to force a synchronization of the chat session's information into the Memory Bank. This is useful for ensuring continuity across sessions or before switching modes.
The Memory Bank is a directory named `memory-bank` located in your project's root. It contains several Markdown files that store different aspects of your project's knowledge:
| File | Purpose |
|---|---|
| activeContext.md | Tracks the current session's context: recent changes, current goals, and open questions/issues. |
| decisionLog.md | Records architectural and implementation decisions, including the context, decision, rationale, and implementation details. |
| productContext.md | Provides a high-level overview of the project, including its goals, features, and overall architecture. |
| progress.md | Tracks the progress of the project, including completed work, current tasks, and next steps. Uses a task list format. |
| systemPatterns.md | (Optional) Documents recurring patterns and standards used in the project (coding patterns, architectural patterns, testing patterns). |
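To make the file contents concrete, here is a hypothetical `decisionLog.md` entry. The layout is illustrative only; RooFlow's modes decide the exact format when they write to the Memory Bank.

```markdown
## 2024-01-15: Adopt SQLite for local persistence (hypothetical entry)

- Context: The app needs offline storage for user sessions.
- Decision: Use SQLite rather than ad-hoc JSON files.
- Rationale: Transactional writes and concurrent reads with no extra server.
- Implementation: New storage module; migrations run at startup.
```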
RooFlow automatically manages these files. You generally don't need to edit them directly, although you can review them to understand the AI's knowledge.
RooFlow remembers project details across sessions, maintaining a consistent understanding of your codebase, design decisions, and progress.
The Memory Bank is updated automatically based on significant events within each mode, ensuring that the context is always up-to-date.
The five modes (Architect, Code, Test, Debug, Ask) are designed to work together seamlessly. They can switch between each other as needed, and they share information through the Memory Bank.
RooFlow is designed to use fewer tokens than previous systems, making it more efficient and cost-effective.
The command "Update Memory Bank" or "UMB" can be given at any time to update the memory bank with information from the current chat session.
Contributions to RooFlow are welcome! Please see the CONTRIBUTING.md file (you'll need to create this) for guidelines.
Similar Open Source Tools


MetaGPT
MetaGPT is a multi-agent framework that enables GPT to work in a software company, collaborating to tackle more complex tasks. It assigns different roles to GPTs to form a collaborative entity for complex tasks. MetaGPT takes a one-line requirement as input and outputs user stories, competitive analysis, requirements, data structures, APIs, documents, etc. Internally, MetaGPT includes product managers, architects, project managers, and engineers. It provides the entire process of a software company along with carefully orchestrated SOPs. MetaGPT's core philosophy is "Code = SOP(Team)", materializing SOP and applying it to teams composed of LLMs.

pipecat
Pipecat is an open-source framework designed for building generative AI voice bots and multimodal assistants. It provides code building blocks for interacting with AI services, creating low-latency data pipelines, and transporting audio, video, and events over the Internet. Pipecat supports various AI services like speech-to-text, text-to-speech, image generation, and vision models. Users can implement new services and contribute to the framework. Pipecat aims to simplify the development of applications like personal coaches, meeting assistants, customer support bots, and more by providing a complete framework for integrating AI services.

llm-context.py
LLM Context is a tool designed to assist developers in quickly injecting relevant content from code/text projects into Large Language Model chat interfaces. It leverages `.gitignore` patterns for smart file selection and offers a streamlined clipboard workflow using the command line. The tool also provides direct integration with Large Language Models through the Model Context Protocol (MCP). LLM Context is optimized for code repositories and collections of text/markdown/html documents, making it suitable for developers working on projects that fit within an LLM's context window. The tool is under active development and aims to enhance AI-assisted development workflows by harnessing the power of Large Language Models.

RainbowGPT
RainbowGPT is a versatile tool that offers a range of functionalities, including Stock Analysis for financial decision-making, MySQL Management for database navigation, and integration of AI technologies like GPT-4 and ChatGlm3. It provides a user-friendly interface suitable for all skill levels, ensuring seamless information flow and continuous expansion of emerging technologies. The tool enhances adaptability, creativity, and insight, making it a valuable asset for various projects and tasks.

code2prompt
Code2Prompt is a powerful command-line tool that generates comprehensive prompts from codebases, designed to streamline interactions between developers and Large Language Models (LLMs) for code analysis, documentation, and improvement tasks. It bridges the gap between codebases and LLMs by converting projects into AI-friendly prompts, enabling users to leverage AI for various software development tasks. The tool offers features like holistic codebase representation, intelligent source tree generation, customizable prompt templates, smart token management, Gitignore integration, flexible file handling, clipboard-ready output, multiple output options, and enhanced code readability.

resume-job-matcher
Resume Job Matcher is a Python script that automates the process of matching resumes to a job description using AI. It leverages the Anthropic Claude API or OpenAI's GPT API to analyze resumes and provide a match score along with personalized email responses for candidates. The tool offers comprehensive resume processing, advanced AI-powered analysis, in-depth evaluation & scoring, comprehensive analytics & reporting, enhanced candidate profiling, and robust system management. Users can customize font presets, generate PDF versions of unified resumes, adjust logging level, change scoring model, modify AI provider, and adjust AI model. The final score for each resume is calculated based on AI-generated match score and resume quality score, ensuring content relevance and presentation quality are considered. Troubleshooting tips, best practices, contribution guidelines, and required Python packages are provided.

extension-gen-ai
The Looker GenAI Extension provides code examples and resources for building a Looker Extension that integrates with Vertex AI Large Language Models (LLMs). Users can leverage the power of LLMs to enhance data exploration and analysis within Looker. The extension offers generative explore functionality to ask natural language questions about data and generative insights on dashboards to analyze data by asking questions. It leverages components like BQML Remote Models, BQML Remote UDF with Vertex AI, and Custom Fine Tune Model for different integration options. Deployment involves setting up infrastructure with Terraform and deploying the Looker Extension by creating a Looker project, copying extension files, configuring BigQuery connection, connecting to Git, and testing the extension. Users can save example prompts and configure user settings for the extension. Development of the Looker Extension environment includes installing dependencies, starting the development server, and building for production.

langmanus
LangManus is a community-driven AI automation framework that combines language models with specialized tools for tasks like web search, crawling, and Python code execution. It implements a hierarchical multi-agent system with agents like Coordinator, Planner, Supervisor, Researcher, Coder, Browser, and Reporter. The framework supports LLM integration, search and retrieval tools, Python integration, workflow management, and visualization. LangManus aims to give back to the open-source community and welcomes contributions in various forms.

py-llm-core
PyLLMCore is a lightweight interface to Large Language Models, with native support for llama.cpp, the OpenAI API, and Azure deployments. It offers a Pythonic API that is simple to use, with structures provided by the standard library dataclasses module. The high-level API includes the assistants module for easy swapping between models. PyLLMCore supports various models, including those compatible with llama.cpp, OpenAI, and Azure APIs. It covers use cases such as parsing, summarizing, question answering, hallucination reduction, context size management, and tokenizing.

gpt-computer-assistant
GPT Computer Assistant (GCA) is an open-source framework designed to build vertical AI agents that can automate tasks on Windows, macOS, and Ubuntu systems. It leverages the Model Context Protocol (MCP) and its own modules to mimic human-like actions and achieve advanced capabilities. With GCA, users can empower themselves to accomplish more in less time by automating tasks like updating dependencies, analyzing databases, and configuring cloud security settings.

effective_llm_alignment
This is a super customizable, concise, user-friendly, and efficient toolkit for training and aligning LLMs. It provides support for various methods such as SFT, Distillation, DPO, ORPO, CPO, SimPO, SMPO, Non-pair Reward Modeling, Special prompts basket format, Rejection Sampling, Scoring using RM, Effective FAISS Map-Reduce Deduplication, LLM scoring using RM, NER, CLIP, Classification, and STS. The toolkit offers key libraries like PyTorch, Transformers, TRL, Accelerate, FSDP, DeepSpeed, and tools for result logging with wandb or clearml. It allows mixing datasets, generation and logging in wandb/clearml, vLLM batched generation, and aligns models using the SMPO method.

aiaio
aiaio (AI-AI-O) is a lightweight, privacy-focused web UI for interacting with AI models. It supports both local and remote LLM deployments through OpenAI-compatible APIs. The tool provides features such as dark/light mode support, local SQLite database for conversation storage, file upload and processing, configurable model parameters through UI, privacy-focused design, responsive design for mobile/desktop, syntax highlighting for code blocks, real-time conversation updates, automatic conversation summarization, customizable system prompts, WebSocket support for real-time updates, Docker support for deployment, multiple API endpoint support, and multiple system prompt support. Users can configure model parameters and API settings through the UI, handle file uploads, manage conversations, and use keyboard shortcuts for efficient interaction. The tool uses SQLite for storage with tables for conversations, messages, attachments, and settings. Contributions to the project are welcome under the Apache License 2.0.

AutoAgent
AutoAgent is a fully-automated and zero-code framework that enables users to create and deploy LLM agents through natural language alone. It is a top performer on the GAIA Benchmark, equipped with a native self-managing vector database, and allows for easy creation of tools, agents, and workflows without any coding. AutoAgent seamlessly integrates with a wide range of LLMs and supports both function-calling and ReAct interaction modes. It is designed to be dynamic, extensible, customized, and lightweight, serving as a personal AI assistant.

EasyInstruct
EasyInstruct is a Python package proposed as an easy-to-use instruction processing framework for Large Language Models (LLMs) like GPT-4, LLaMA, ChatGLM in your research experiments. EasyInstruct modularizes instruction generation, selection, and prompting, while also considering their combination and interaction.

agents-starter
A starter template for building AI-powered chat agents using Cloudflare's Agent platform, powered by agents-sdk. It provides a foundation for creating interactive chat experiences with AI, complete with a modern UI and tool integration capabilities. Features include interactive chat interface with AI, built-in tool system with human-in-the-loop confirmation, advanced task scheduling, dark/light theme support, real-time streaming responses, state management, and chat history. Prerequisites include a Cloudflare account and OpenAI API key. The project structure includes components for chat UI implementation, chat agent logic, tool definitions, and helper functions. Customization guide covers adding new tools, modifying the UI, and example use cases for customer support, development assistant, data analysis assistant, personal productivity assistant, and scheduling assistant.
For similar tasks

doc2plan
doc2plan is a browser-based application that helps users create personalized learning plans by extracting content from documents. It features a Creator for manual or AI-assisted plan construction and a Viewer for interactive plan navigation. Users can extract chapters, key topics, generate quizzes, and track progress. The application includes AI-driven content extraction, quiz generation, progress tracking, plan import/export, assistant management, customizable settings, viewer chat with text-to-speech and speech-to-text support, and integration with various Retrieval-Augmented Generation (RAG) models. It aims to simplify the creation of comprehensive learning modules tailored to individual needs.


plandex
Plandex is an open source, terminal-based AI coding engine designed for complex tasks. It uses long-running agents to break up large tasks into smaller subtasks, helping users work through backlogs, navigate unfamiliar technologies, and save time on repetitive tasks. Plandex supports various AI models, including OpenAI, Anthropic Claude, Google Gemini, and more. It allows users to manage context efficiently in the terminal, experiment with different approaches using branches, and review changes before applying them. The tool is platform-independent and runs from a single binary with no dependencies.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM:
- Set LLM usage limits for users on different pricing tiers
- Track LLM usage on a per user and per organization basis
- Block or redact requests containing PIIs
- Improve LLM reliability with failovers, retries and caching
- Distribute API keys with rate limits and cost limits for internal development/production use cases
- Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.