
UltraContextAI
https://forum.cursor.com/t/rules-for-ultra-context-memories-lessons-scratchpad-with-plan-and-act-modes/48792/22?u=t1nker-1220
Stars: 140

README:
A comprehensive system for managing AI interactions through memory management, lessons learned tracking, and dual-mode operation (Plan/Agent). This system ensures consistent, high-quality development while maintaining detailed project documentation and knowledge retention.
- Memory System (@memories.md)
  - Tracks all interactions chronologically
  - Auto-updates with timestamps and tags
  - Maintains project context and decisions
  - Uses version-control format [v1.0.0]
  - Supports #tags for easy searching
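A minimal sketch of how such a memory entry could be generated programmatically (the helper names and exact field layout are assumptions for illustration, not part of the original rules):

```python
from datetime import datetime, timezone

def format_memory(version: str, note: str, tags: list[str]) -> str:
    """Build a single-line memories.md entry with a UTC timestamp,
    a [vX.Y.Z] version marker, and trailing #tags (illustrative format)."""
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M")
    tag_str = " ".join(f"#{t}" for t in tags)
    return f"[{version}] {stamp} - {note} {tag_str}"

def append_memory(path: str, entry: str) -> None:
    # Append chronologically: newest entries go at the end of the file.
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(entry + "\n")
```

Keeping each entry on a single line makes the file easy to grep by #tag or version marker.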
- Lessons Learned (@lessons-learned.md)
  - Captures solutions and best practices
  - Uses a structured format: Issue → Solution → Impact
  - Categorizes by component, TypeScript, errors, etc.
  - Prioritizes issues (Critical/Important/Enhancement)
  - Links to related code examples
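An illustrative lessons-learned entry in the Issue → Solution → Impact shape (the specific content here is hypothetical):

```
[Important] [component] Button click handler fired twice on mobile
Issue: Touch events triggered both touchend and click.
Solution: Debounce the handler and rely on the click event only.
Impact: Prevents duplicate form submissions on all touch devices.
```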
- Scratchpad (@scratchpad.md)
  - Manages the current phase and tasks
  - Tracks implementation progress
  - Uses clear status markers: [X], [-], [ ], [!], [?]
  - Maintains task dependencies
  - Updates confidence metrics
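A hypothetical scratchpad snapshot showing the status markers in use ([X] done, [-] in progress, [ ] pending, [!] blocked, [?] needs clarification):

```
Phase 2: Button component
[X] Define variants (primary/secondary)
[-] Implement icon support
[ ] Add size options (depends on icon task)
[!] Blocked: design tokens not finalized
[?] Unclear: should the disabled state dim the icon?
Confidence: 80%
```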
Plan Mode
Trigger: "plan"
Purpose: Information gathering and planning
Requirements:
- Parse user input
- Cross-reference requirements
- Generate clarifying questions
- Calculate a confidence score
- Create a task breakdown
Agent Mode
Trigger: "agent"
Activation Requirements:
- 95% confidence level
- All questions answered
- Tasks defined
- No blocking issues
- Requirements verified
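The activation requirements amount to a simple gate check; a sketch (the function name and argument shapes are assumptions, not part of the original rules):

```python
def can_activate_agent(confidence: int, unanswered_questions: int,
                       tasks_defined: bool, blocking_issues: int,
                       requirements_verified: bool) -> bool:
    """Mirror the agent-mode activation requirements: 95% confidence,
    all questions answered, tasks defined, no blockers, requirements verified."""
    return (confidence >= 95
            and unanswered_questions == 0
            and tasks_defined
            and blocking_issues == 0
            and requirements_verified)
```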
1. Initialize Planning 🎯
   - Use the "plan" trigger
   - System creates a new scratchpad entry
   - Generates a minimum of 3 questions
   - Sets an initial confidence score
2. Build Confidence 📈
   - Answer all questions
   - Verify requirements
   - Update task dependencies
   - Monitor the confidence score
3. Execute Implementation ⚡
   - Reach 95% confidence
   - Use the "agent" trigger
   - System implements the solution
   - Updates documentation
USER: "plan implement a new button component"
AI: 🎯 PLAN MODE ACTIVATED
Current Task: Implement new button component
Questions:
1. What variants are needed? (primary/secondary/outline)
2. Should it support icons?
3. What sizes are required?
Confidence: 65%
USER: "primary and secondary variants, with icons, three sizes"
AI: Updated Understanding:
- Primary/Secondary variants
- Icon support
- Small/Medium/Large sizes
Confidence: 95%
USER: "agent"
AI: ⚡ AGENT MODE ACTIVATED
- Creating Button component
- Implementing variants
- Adding icon support
- Testing accessibility
USER: "plan fix the form submission error"
AI: 🎯 PLAN MODE ACTIVATED
Current Task: Debug form submission error
Questions:
1. When does the error occur?
2. Any error messages in console?
3. Which form component is affected?
Confidence: 50%
USER: "happens on login form, console shows validation error"
AI: Chain of Thought Analysis:
1. Login form validation
2. Console error check
3. Input validation rules
Confidence: 95%
USER: "agent"
AI: ⚡ AGENT MODE ACTIVATED
- Debugging validation
- Fixing error handling
- Testing submission
USER: "plan setup new Next.js project"
AI: 🎯 PLAN MODE ACTIVATED
Current Task: Initialize Next.js project
Questions:
1. Which version of Next.js?
2. TypeScript or JavaScript?
3. Which UI components needed?
Confidence: 70%
USER: "Next.js 14, TypeScript, Shadcn UI"
AI: Project Structure:
- Next.js 14 setup
- TypeScript configuration
- Shadcn UI integration
Confidence: 95%
USER: "agent"
AI: ⚡ AGENT MODE ACTIVATED
- Creating project
- Setting up TypeScript
- Installing dependencies
USER: "plan update API documentation"
AI: 🎯 PLAN MODE ACTIVATED
Current Task: Update API docs
Questions:
1. Which endpoints need updates?
2. Any new parameters added?
3. Response format changes?
Confidence: 60%
USER: "new authentication endpoints"
AI: Documentation Plan:
- Auth endpoints
- Request/Response formats
- Example usage
Confidence: 95%
USER: "agent"
AI: ⚡ AGENT MODE ACTIVATED
- Updating docs
- Adding examples
- Verifying formats
Update frequency:
- Memories: every interaction
- Lessons: after solutions
- Scratchpad: during implementation

Versioning: the [v1.0.0] format applies to all entries, covering:
- Development updates
- Manual updates
- Progress tracking

Cross-referencing: @memories.md ↔️ @lessons-learned.md ↔️ @scratchpad.md
.cursor/
├── memories.md # Interaction history
├── lessons-learned.md # Solutions & practices
├── scratchpad.md # Current phase tracking
├── project-requirements.md # Project specs
└── rules/ # System rules
└── .cursorrules # Core rules file
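The layout above can be scaffolded with a short script (a convenience sketch; the rule files themselves still need to be populated):

```python
from pathlib import Path

# Files from the .cursor/ layout described above.
FILES = [
    ".cursor/memories.md",
    ".cursor/lessons-learned.md",
    ".cursor/scratchpad.md",
    ".cursor/project-requirements.md",
    ".cursor/rules/.cursorrules",
]

def scaffold(root: str = ".") -> None:
    """Create the directory tree and empty placeholder files."""
    for rel in FILES:
        path = Path(root) / rel
        path.parent.mkdir(parents=True, exist_ok=True)
        path.touch(exist_ok=True)
```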
- Memory Management
  - Use timestamps consistently
  - Include relevant #tags
  - Cross-reference related entries
  - Keep the single-line format
- Task Tracking
  - Generate unique task IDs
  - Track dependencies
  - Update status in real time
  - Maintain hierarchy
- Documentation
  - Update in real time
  - Include version numbers
  - Cross-reference related files
  - Follow structured formats
- Required Open Tabs:
  1. Active working file
  2. Cursor Settings (Feature → Resync)
  3. .cursorrules (for auto-reload)
- Quick Reload Process:
  1. Ctrl+Shift+P
  2. "Developer: Reload Window"
  3. Wait 3-10 seconds
- Keep .cursorrules file open
- Monitor confidence scores
- Use proper triggers
- Follow version format
- Cross-reference frequently
Feel free to enhance this system:
- Add custom rules
- Improve tracking
- Enhance metrics
- Share practices
MIT License - Free to use and modify!
- Instagram: https://www.instagram.com/clover_nat/
- Facebook: https://www.facebook.com/nathanielmarquez.20
- Twitter: https://x.com/T1nker1220
If this system helps you, consider supporting the author:
- PayPal: https://www.paypal.me/JohnNathanielMarquez
- GCash: 09605088715
For full context and discussions: https://forum.cursor.com/t/rules-for-ultra-context-memories-lessons-scratchpad-with-plan-and-act-modes/48792/22?u=t1nker-1220
Note: This system is designed for seamless AI interaction management. For detailed implementation guidelines, refer to the individual rule files. 🚀
Similar Open Source Tools

Shellsage
Shell Sage is an intelligent terminal companion and AI-powered terminal assistant that enhances the terminal experience with features like local and cloud AI support, context-aware error diagnosis, natural language to command translation, and safe command execution workflows. It offers interactive workflows, supports various API providers, and allows for custom model selection. Users can configure the tool for local or API mode, select specific models, and switch between modes easily. Currently in alpha development, Shell Sage has known limitations like limited Windows support and occasional false positives in error detection. The roadmap includes improvements like better context awareness, Windows PowerShell integration, Tmux integration, and CI/CD error pattern database.

mcphub.nvim
MCPHub.nvim is a powerful Neovim plugin that integrates MCP (Model Context Protocol) servers into your workflow. It offers a centralized config file for managing servers and tools, with an intuitive UI for testing resources. Ideal for LLM integration, it provides programmatic API access and interactive testing through the `:MCPHub` command.

AI-Agent-Starter-Kit
AI Agent Starter Kit is a modern full-stack AI-enabled template using Next.js for frontend and Express.js for backend, with Telegram and OpenAI integrations. It offers AI-assisted development, smart environment variable setup assistance, intelligent error resolution, context-aware code completion, and built-in debugging helpers. The kit provides a structured environment for developers to interact with AI tools seamlessly, enhancing the development process and productivity.

evalchemy
Evalchemy is a unified and easy-to-use toolkit for evaluating language models, focusing on post-trained models. It integrates multiple existing benchmarks such as RepoBench, AlpacaEval, and ZeroEval. Key features include unified installation, parallel evaluation, simplified usage, and results management. Users can run various benchmarks with a consistent command-line interface and track results locally or integrate with a database for systematic tracking and leaderboard submission.

quantalogic
QuantaLogic is a ReAct framework for building advanced AI agents that seamlessly integrates large language models with a robust tool system. It aims to bridge the gap between advanced AI models and practical implementation in business processes by enabling agents to understand, reason about, and execute complex tasks through natural language interaction. The framework includes features such as ReAct Framework, Universal LLM Support, Secure Tool System, Real-time Monitoring, Memory Management, and Enterprise Ready components.

LLMVoX
LLMVoX is a lightweight 30M-parameter, LLM-agnostic, autoregressive streaming Text-to-Speech (TTS) system designed to convert text outputs from Large Language Models into high-fidelity streaming speech with low latency. It achieves significantly lower Word Error Rate compared to speech-enabled LLMs while operating at comparable latency and speech quality. Key features include being lightweight & fast with only 30M parameters, LLM-agnostic for easy integration with existing models, multi-queue streaming for continuous speech generation, and multilingual support for easy adaptation to new languages.

R2R
R2R (RAG to Riches) is a fast and efficient framework for serving high-quality Retrieval-Augmented Generation (RAG) to end users. The framework is designed with customizable pipelines and a feature-rich FastAPI implementation, enabling developers to quickly deploy and scale RAG-based applications. R2R was conceived to bridge the gap between local LLM experimentation and scalable production solutions. **R2R is to LangChain/LlamaIndex what NextJS is to React**. A JavaScript client for R2R deployments can be found here. ### Key Features * **🚀 Deploy** : Instantly launch production-ready RAG pipelines with streaming capabilities. * **🧩 Customize** : Tailor your pipeline with intuitive configuration files. * **🔌 Extend** : Enhance your pipeline with custom code integrations. * **⚖️ Autoscale** : Scale your pipeline effortlessly in the cloud using SciPhi. * **🤖 OSS** : Benefit from a framework developed by the open-source community, designed to simplify RAG deployment.

pocketgroq
PocketGroq is a tool that provides advanced functionalities for text generation, web scraping, web search, and AI response evaluation. It includes features like an Autonomous Agent for answering questions, web crawling and scraping capabilities, enhanced web search functionality, and flexible integration with Ollama server. Users can customize the agent's behavior, evaluate responses using AI, and utilize various methods for text generation, conversation management, and Chain of Thought reasoning. The tool offers comprehensive methods for different tasks, such as initializing RAG, error handling, and tool management. PocketGroq is designed to enhance development processes and enable the creation of AI-powered applications with ease.

VT.ai
VT.ai is a multimodal AI platform that offers dynamic conversation routing with SemanticRouter, multi-modal interactions (text/image/audio), an assistant framework with code interpretation, real-time response streaming, cross-provider model switching, and local model support with Ollama integration. It supports various AI providers such as OpenAI, Anthropic, Google Gemini, Groq, Cohere, and OpenRouter, providing a wide range of core capabilities for AI orchestration.

face-api
FaceAPI is an AI-powered tool for face detection, rotation tracking, face description, recognition, age, gender, and emotion prediction. It can be used in both browser and NodeJS environments using TensorFlow/JS. The tool provides live demos for processing images and webcam feeds, along with NodeJS examples for various tasks such as face similarity comparison and multiprocessing. FaceAPI offers different pre-built versions for client-side browser execution and server-side NodeJS execution, with or without TFJS pre-bundled. It is compatible with TFJS 2.0+ and TFJS 3.0+.

UMbreLLa
UMbreLLa is a tool designed for deploying Large Language Models (LLMs) for personal agents. It combines offloading, speculative decoding, and quantization to optimize single-user LLM deployment scenarios. With UMbreLLa, 70B-level models can achieve performance comparable to human reading speed on an RTX 4070Ti, delivering exceptional efficiency and responsiveness, especially for coding tasks. The tool supports deploying models on various GPUs and offers features like code completion and CLI/Gradio chatbots. Users can configure the LLM engine for optimal performance based on their hardware setup.