
# claude-task-master
### by [@eyaltoledano](https://x.com/eyaltoledano)
An AI-powered task-management system you can drop into Cursor.
Stars: 616

README:
A task management system for AI-driven development with Claude, designed to work seamlessly with Cursor AI.

## Requirements

- Node.js 14.0.0 or higher
- Anthropic API key (Claude API)
- Anthropic SDK version 0.39.0 or higher
- OpenAI SDK (for Perplexity API integration, optional)
## Configuration

The script can be configured through environment variables in a `.env` file at the root of the project:

### Required Configuration

- `ANTHROPIC_API_KEY`: Your Anthropic API key for Claude

### Optional Configuration

- `MODEL`: Specify which Claude model to use (default: "claude-3-7-sonnet-20250219")
- `MAX_TOKENS`: Maximum tokens for model responses (default: 4000)
- `TEMPERATURE`: Temperature for model responses (default: 0.7)
- `PERPLEXITY_API_KEY`: Your Perplexity API key for research-backed subtask generation
- `PERPLEXITY_MODEL`: Specify which Perplexity model to use (default: "sonar-medium-online")
- `DEBUG`: Enable debug logging (default: false)
- `LOG_LEVEL`: Log level - debug, info, warn, error (default: info)
- `DEFAULT_SUBTASKS`: Default number of subtasks when expanding (default: 3)
- `DEFAULT_PRIORITY`: Default priority for generated tasks (default: medium)
- `PROJECT_NAME`: Override default project name in tasks.json
- `PROJECT_VERSION`: Override default version in tasks.json
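For reference, a minimal `.env` might look like the following sketch (the values are illustrative placeholders, not real keys; only `ANTHROPIC_API_KEY` is required):

```bash
# Hypothetical .env example — only ANTHROPIC_API_KEY is required
ANTHROPIC_API_KEY=your-anthropic-key-here
MODEL=claude-3-7-sonnet-20250219
MAX_TOKENS=4000
TEMPERATURE=0.7
LOG_LEVEL=info
DEFAULT_SUBTASKS=3
DEFAULT_PRIORITY=medium
```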
## Installation

```bash
# Install globally
npm install -g task-master-ai

# OR install locally within your project
npm install task-master-ai

# If installed globally
task-master init

# If installed locally
npx task-master-init
```
This will prompt you for project details and set up a new project with the necessary files and structure.

- This package uses ES modules. Your `package.json` should include `"type": "module"`.
- The Anthropic SDK version should be 0.39.0 or higher.
After installing the package globally, you can use these CLI commands from any directory:

```bash
# Initialize a new project
task-master init

# Parse a PRD and generate tasks
task-master parse-prd your-prd.txt

# List all tasks
task-master list

# Show the next task to work on
task-master next

# Generate task files
task-master generate
```
Try running it with Node directly:

```bash
node node_modules/claude-task-master/scripts/init.js
```

Or clone the repository and run:

```bash
git clone https://github.com/eyaltoledano/claude-task-master.git
cd claude-task-master
node scripts/init.js
```
Tasks in tasks.json have the following structure:

- `id`: Unique identifier for the task (Example: `1`)
- `title`: Brief, descriptive title of the task (Example: `"Initialize Repo"`)
- `description`: Concise description of what the task involves (Example: `"Create a new repository, set up initial structure."`)
- `status`: Current state of the task (Example: `"pending"`, `"done"`, `"deferred"`)
- `dependencies`: IDs of tasks that must be completed before this task (Example: `[1, 2]`)
  - Dependencies are displayed with status indicators (✅ for completed, ⏱️ for pending)
  - This helps quickly identify which prerequisite tasks are blocking work
- `priority`: Importance level of the task (Example: `"high"`, `"medium"`, `"low"`)
- `details`: In-depth implementation instructions (Example: `"Use GitHub client ID/secret, handle callback, set session token."`)
- `testStrategy`: Verification approach (Example: `"Deploy and call endpoint to confirm 'Hello World' response."`)
- `subtasks`: List of smaller, more specific tasks that make up the main task (Example: `[{"id": 1, "title": "Configure OAuth", ...}]`)
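To make that shape concrete, here is a hypothetical single-task entry assembled in Python and serialized to JSON; the real schema may include additional fields, so treat this as a sketch rather than the canonical format:

```python
import json

# Hypothetical tasks.json entry illustrating the fields described above;
# the exact schema used by Task Master may differ in detail.
task = {
    "id": 1,
    "title": "Initialize Repo",
    "description": "Create a new repository, set up initial structure.",
    "status": "pending",
    "dependencies": [],
    "priority": "high",
    "details": "Use GitHub client ID/secret, handle callback, set session token.",
    "testStrategy": "Deploy and call endpoint to confirm 'Hello World' response.",
    "subtasks": [
        {"id": 1, "title": "Configure OAuth", "status": "pending"},
    ],
}

# tasks.json holds a list of such entries under a top-level "tasks" key
print(json.dumps({"tasks": [task]}, indent=2))
```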
Claude Task Master is designed to work seamlessly with Cursor AI, providing a structured workflow for AI-driven development.

- After initializing your project, open it in Cursor
- The `.cursor/rules/dev_workflow.mdc` file is automatically loaded by Cursor, providing the AI with knowledge about the task management system
- Place your PRD document in the `scripts/` directory (e.g., `scripts/prd.txt`)
- Open Cursor's AI chat and switch to Agent mode
To enable enhanced task management capabilities directly within Cursor using the Model Context Protocol (MCP):
- Go to Cursor settings
- Navigate to the MCP section
- Click on "Add New MCP Server"
- Configure with the following details:
- Name: "Task Master"
- Type: "Command"
- Command: "npx -y --package task-master-ai task-master-mcp"
- Save the settings
Once configured, you can interact with Task Master's task management commands directly through Cursor's interface, providing a more integrated experience.
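The same server can also be declared in an MCP configuration file. The sketch below assumes the common `mcpServers` layout with the command from the settings above split into command and arguments; check your Cursor version's documentation for the exact file location and schema:

```json
{
  "mcpServers": {
    "task-master": {
      "command": "npx",
      "args": ["-y", "--package", "task-master-ai", "task-master-mcp"]
    }
  }
}
```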
In Cursor's AI chat, instruct the agent to generate tasks from your PRD:

```
Please use the task-master parse-prd command to generate tasks from my PRD. The PRD is located at scripts/prd.txt.
```

The agent will execute:

```bash
task-master parse-prd scripts/prd.txt
```

This will:

- Parse your PRD document
- Generate a structured `tasks.json` file with tasks, dependencies, priorities, and test strategies

The agent will understand this process due to the Cursor rules.
Next, ask the agent to generate individual task files:

```
Please generate individual task files from tasks.json
```

The agent will execute:

```bash
task-master generate
```

This creates individual task files in the `tasks/` directory (e.g., `task_001.txt`, `task_002.txt`), making it easier to reference specific tasks.
The Cursor agent is pre-configured (via the rules file) to follow this workflow:

Ask the agent to list available tasks:

```
What tasks are available to work on next?
```

The agent will:

- Run `task-master list` to see all tasks
- Run `task-master next` to determine the next task to work on
- Analyze dependencies to determine which tasks are ready to be worked on
- Prioritize tasks based on priority level and ID order
- Suggest the next task(s) to implement
When implementing a task, the agent will:

- Reference the task's `details` section for implementation specifics
- Consider dependencies on previous tasks
- Follow the project's coding standards
- Create appropriate tests based on the task's `testStrategy`

You can ask:

```
Let's implement task 3. What does it involve?
```
Before marking a task as complete, verify it according to:

- The task's specified `testStrategy`
- Any automated tests in the codebase
- Manual verification if required

When a task is completed, tell the agent:

```
Task 3 is now complete. Please update its status.
```

The agent will execute:

```bash
task-master set-status --id=3 --status=done
```
If, during implementation, you discover that:

- The current approach differs significantly from what was planned
- Future tasks need to be modified due to current implementation choices
- New dependencies or requirements have emerged

Tell the agent:

```
We've changed our approach. We're now using Express instead of Fastify. Please update all future tasks to reflect this change.
```

The agent will execute:

```bash
task-master update --from=4 --prompt="Now we are using Express instead of Fastify."
```

This will rewrite or re-scope subsequent tasks in tasks.json while preserving completed work.
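Conceptually, `update` only touches tasks at or after the given ID that are not already done. A rough Python sketch of that filtering (illustrative only, not the tool's actual implementation):

```python
def tasks_to_update(tasks, from_id):
    """Select the tasks an update would rewrite: ID >= from_id and not
    already completed, so finished work is preserved. Sketch only."""
    return [t for t in tasks if t["id"] >= from_id and t["status"] != "done"]

tasks = [
    {"id": 3, "status": "done"},
    {"id": 4, "status": "pending"},
    {"id": 5, "status": "pending"},
]
print([t["id"] for t in tasks_to_update(tasks, 4)])  # → [4, 5]
```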
For complex tasks that need more granularity:

```
Task 5 seems complex. Can you break it down into subtasks?
```

The agent will execute:

```bash
task-master expand --id=5 --num=3
```

You can provide additional context:

```
Please break down task 5 with a focus on security considerations.
```

The agent will execute:

```bash
task-master expand --id=5 --prompt="Focus on security aspects"
```

You can also expand all pending tasks:

```
Please break down all pending tasks into subtasks.
```

The agent will execute:

```bash
task-master expand --all
```

For research-backed subtask generation using Perplexity AI:

```
Please break down task 5 using research-backed generation.
```

The agent will execute:

```bash
task-master expand --id=5 --research
```
Here's a comprehensive reference of all available commands:

```bash
# Parse a PRD file and generate tasks
task-master parse-prd <prd-file.txt>

# Limit the number of tasks generated
task-master parse-prd <prd-file.txt> --num-tasks=10

# List all tasks
task-master list

# List tasks with a specific status
task-master list --status=<status>

# List tasks with subtasks
task-master list --with-subtasks

# List tasks with a specific status and include subtasks
task-master list --status=<status> --with-subtasks

# Show the next task to work on based on dependencies and status
task-master next

# Show details of a specific task
task-master show <id>
# or
task-master show --id=<id>

# View a specific subtask (e.g., subtask 2 of task 1)
task-master show 1.2

# Update tasks from a specific ID and provide context
task-master update --from=<id> --prompt="<prompt>"

# Generate individual task files from tasks.json
task-master generate

# Set status of a single task
task-master set-status --id=<id> --status=<status>

# Set status for multiple tasks
task-master set-status --id=1,2,3 --status=<status>

# Set status for subtasks
task-master set-status --id=1.1,1.2 --status=<status>
```
When marking a task as "done", all of its subtasks will automatically be marked as "done" as well.
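The cascade described above amounts to something like the following sketch (not the actual source):

```python
def set_done(task):
    """Mark a task done; its subtasks are cascaded to done as well,
    mirroring the set-status behavior described above. Sketch only."""
    task["status"] = "done"
    for sub in task.get("subtasks", []):
        sub["status"] = "done"
    return task

task = {
    "id": 1,
    "status": "pending",
    "subtasks": [{"id": 1, "status": "pending"}, {"id": 2, "status": "in-progress"}],
}
set_done(task)
print(task["subtasks"][1]["status"])  # → done
```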
```bash
# Expand a specific task with subtasks
task-master expand --id=<id> --num=<number>

# Expand with additional context
task-master expand --id=<id> --prompt="<context>"

# Expand all pending tasks
task-master expand --all

# Force regeneration of subtasks for tasks that already have them
task-master expand --all --force

# Research-backed subtask generation for a specific task
task-master expand --id=<id> --research

# Research-backed generation for all tasks
task-master expand --all --research

# Clear subtasks from a specific task
task-master clear-subtasks --id=<id>

# Clear subtasks from multiple tasks
task-master clear-subtasks --id=1,2,3

# Clear subtasks from all tasks
task-master clear-subtasks --all

# Analyze complexity of all tasks
task-master analyze-complexity

# Save report to a custom location
task-master analyze-complexity --output=my-report.json

# Use a specific LLM model
task-master analyze-complexity --model=claude-3-opus-20240229

# Set a custom complexity threshold (1-10)
task-master analyze-complexity --threshold=6

# Use an alternative tasks file
task-master analyze-complexity --file=custom-tasks.json

# Use Perplexity AI for research-backed complexity analysis
task-master analyze-complexity --research

# Display the task complexity analysis report
task-master complexity-report

# View a report at a custom location
task-master complexity-report --file=my-report.json

# Add a dependency to a task
task-master add-dependency --id=<id> --depends-on=<id>

# Remove a dependency from a task
task-master remove-dependency --id=<id> --depends-on=<id>

# Validate dependencies without fixing them
task-master validate-dependencies

# Find and fix invalid dependencies automatically
task-master fix-dependencies
```
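Dependency validation has to catch two kinds of problems: references to tasks that don't exist, and circular chains. A minimal Python sketch of such a check (illustrative; not how the tool actually implements it):

```python
def find_dependency_problems(tasks):
    """Detect invalid references and circular dependencies, the two
    problems validate-dependencies reports. Sketch only."""
    deps = {t["id"]: t.get("dependencies", []) for t in tasks}
    problems = []
    # 1) References to tasks that don't exist
    for tid, ds in deps.items():
        for d in ds:
            if d not in deps:
                problems.append(f"task {tid} depends on missing task {d}")
    # 2) Cycles, via depth-first search with coloring
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {tid: WHITE for tid in deps}
    def visit(tid):
        color[tid] = GRAY
        for d in deps[tid]:
            if d in color:
                if color[d] == GRAY:
                    problems.append(f"circular dependency involving task {d}")
                elif color[d] == WHITE:
                    visit(d)
        color[tid] = BLACK
    for tid in deps:
        if color[tid] == WHITE:
            visit(tid)
    return problems

tasks = [
    {"id": 1, "dependencies": [2]},
    {"id": 2, "dependencies": [1]},   # cycle with task 1
    {"id": 3, "dependencies": [99]},  # missing reference
]
print(find_dependency_problems(tasks))
```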
```bash
# Add a new task using AI
task-master add-task --prompt="Description of the new task"

# Add a task with dependencies
task-master add-task --prompt="Description" --dependencies=1,2,3
```
```bash
# Add a task with priority
task-master add-task --prompt="Description" --priority=high
```
The `analyze-complexity` command:

- Analyzes each task using AI to assess its complexity on a scale of 1-10
- Recommends the optimal number of subtasks based on the configured `DEFAULT_SUBTASKS`
- Generates tailored prompts for expanding each task
- Creates a comprehensive JSON report with ready-to-use commands
- Saves the report to `scripts/task-complexity-report.json` by default
The generated report contains:
- Complexity analysis for each task (scored 1-10)
- Recommended number of subtasks based on complexity
- AI-generated expansion prompts customized for each task
- Ready-to-run expansion commands directly within each task analysis
The `complexity-report` command:
- Displays a formatted, easy-to-read version of the complexity analysis report
- Shows tasks organized by complexity score (highest to lowest)
- Provides complexity distribution statistics (low, medium, high)
- Highlights tasks recommended for expansion based on threshold score
- Includes ready-to-use expansion commands for each complex task
- If no report exists, offers to generate one on the spot
The `expand` command automatically checks for and uses the complexity report:
When a complexity report exists:
- Tasks are automatically expanded using the recommended subtask count and prompts
- When expanding all tasks, they're processed in order of complexity (highest first)
- Research-backed generation is preserved from the complexity analysis
- You can still override recommendations with explicit command-line options
Example workflow:

```bash
# Generate the complexity analysis report with research capabilities
task-master analyze-complexity --research

# Review the report in a readable format
task-master complexity-report

# Expand tasks using the optimized recommendations
task-master expand --id=8

# or expand all tasks
task-master expand --all
```
The `next` command:

- Identifies tasks that are pending/in-progress and have all dependencies satisfied
- Prioritizes tasks by priority level, dependency count, and task ID
- Displays comprehensive information about the selected task:
  - Basic task details (ID, title, priority, dependencies)
  - Implementation details
  - Subtasks (if they exist)
- Provides contextual suggested actions:
  - Command to mark the task as in-progress
  - Command to mark the task as done
  - Commands for working with subtasks
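The selection logic described above can be sketched roughly as follows; this is illustrative (the tie-break direction for dependency count is an assumption), not the tool's actual source:

```python
# Assumed ordering: high before medium before low
PRIORITY_ORDER = {"high": 0, "medium": 1, "low": 2}

def next_task(tasks):
    """Pick the next workable task: pending/in-progress with all
    dependencies done, ordered by priority, dependency count, then ID."""
    done = {t["id"] for t in tasks if t["status"] == "done"}
    ready = [
        t for t in tasks
        if t["status"] in ("pending", "in-progress")
        and all(d in done for d in t.get("dependencies", []))
    ]
    ready.sort(key=lambda t: (
        PRIORITY_ORDER.get(t.get("priority", "medium"), 1),
        len(t.get("dependencies", [])),
        t["id"],
    ))
    return ready[0] if ready else None

tasks = [
    {"id": 1, "status": "done", "priority": "high", "dependencies": []},
    {"id": 2, "status": "pending", "priority": "medium", "dependencies": [1]},
    {"id": 3, "status": "pending", "priority": "high", "dependencies": [2]},
]
print(next_task(tasks)["id"])  # → 2 (task 3 is blocked by task 2)
```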
The `show` command:

- Displays comprehensive details about a specific task or subtask
- Shows task status, priority, dependencies, and detailed implementation notes
- For parent tasks, displays all subtasks and their status
- For subtasks, shows the parent task relationship
- Provides contextual action suggestions based on the task's state
- Works with both regular tasks and subtasks (using the format `taskId.subtaskId`)
- **Start with a detailed PRD**: The more detailed your PRD, the better the generated tasks will be.
- **Review generated tasks**: After parsing the PRD, review the tasks to ensure they make sense and have appropriate dependencies.
- **Analyze task complexity**: Use the complexity analysis feature to identify which tasks should be broken down further.
- **Follow the dependency chain**: Always respect task dependencies - the Cursor agent will help with this.
- **Update as you go**: If your implementation diverges from the plan, use the update command to keep future tasks aligned with your current approach.
- **Break down complex tasks**: Use the expand command to break down complex tasks into manageable subtasks.
- **Regenerate task files**: After any updates to tasks.json, regenerate the task files to keep them in sync.
- **Communicate context to the agent**: When asking the Cursor agent to help with a task, provide context about what you're trying to achieve.
- **Validate dependencies**: Periodically run the validate-dependencies command to check for invalid or circular dependencies.
Example prompts you can give the Cursor agent:

```
I've just initialized a new project with Claude Task Master. I have a PRD at scripts/prd.txt. Can you help me parse it and set up the initial tasks?
```

```
What's the next task I should work on? Please consider dependencies and priorities.
```

```
I'd like to implement task 4. Can you help me understand what needs to be done and how to approach it?
```

```
I need to regenerate the subtasks for task 3 with a different approach. Can you help me clear and regenerate them?
```

```
We've decided to use MongoDB instead of PostgreSQL. Can you update all future tasks to reflect this change?
```

```
I've finished implementing the authentication system described in task 2. All tests are passing. Please mark it as complete and tell me what I should work on next.
```

```
Can you analyze the complexity of our tasks to help me understand which ones need to be broken down further?
```

```
Can you show me the complexity report in a more readable format?
```
## Similar Open Source Tools

cursor-tools
cursor-tools is a CLI tool designed to enhance AI agents with advanced skills, such as web search, repository context, documentation generation, GitHub integration, Xcode tools, and browser automation. It provides features like Perplexity for web search, Gemini 2.0 for codebase context, and Stagehand for browser operations. The tool requires API keys for Perplexity AI and Google Gemini, and supports global installation for system-wide access. It offers various commands for different tasks and integrates with Cursor Composer for AI agent usage.

clickclickclick
ClickClickClick is a framework designed to enable autonomous Android and computer use using various LLM models, both locally and remotely. It supports tasks such as drafting emails, opening browsers, and starting games, with current support for local models via Ollama, Gemini, and GPT 4o. The tool is highly experimental and evolving, with the best results achieved using specific model combinations. Users need prerequisites like `adb` installation and USB debugging enabled on Android phones. The tool can be installed via cloning the repository, setting up a virtual environment, and installing dependencies. It can be used as a CLI tool or script, allowing users to configure planner and finder models for different tasks. Additionally, it can be used as an API to execute tasks based on provided prompts, platform, and models.

llm-functions
LLM Functions is a project that enables the enhancement of large language models (LLMs) with custom tools and agents developed in bash, javascript, and python. Users can create tools for their LLM to execute system commands, access web APIs, or perform other complex tasks triggered by natural language prompts. The project provides a framework for building tools and agents, with tools being functions written in the user's preferred language and automatically generating JSON declarations based on comments. Agents combine prompts, function callings, and knowledge (RAG) to create conversational AI agents. The project is designed to be user-friendly and allows users to easily extend the capabilities of their language models.

ML-Bench
ML-Bench is a tool designed to evaluate large language models and agents for machine learning tasks on repository-level code. It provides functionalities for data preparation, environment setup, usage, API calling, open source model fine-tuning, and inference. Users can clone the repository, load datasets, run ML-LLM-Bench, prepare data, fine-tune models, and perform inference tasks. The tool aims to facilitate the evaluation of language models and agents in the context of machine learning tasks on code repositories.

trubrics-python
Trubrics is a Python client for event tracking and analyzing LLM interactions. It offers fast and non-blocking queuing system with automatic flushing to Trubrics API. Users can track events and LLM interactions, adjust logging verbosity, and configure flush intervals and batch sizes. The tool simplifies tracking user interactions and analyzing data for LLM applications.

desktop
ComfyUI Desktop is a packaged desktop application that allows users to easily use ComfyUI with bundled features like ComfyUI source code, ComfyUI-Manager, and uv. It automatically installs necessary Python dependencies and updates with stable releases. The app comes with Electron, Chromium binaries, and node modules. Users can store ComfyUI files in a specified location and manage model paths. The tool requires Python 3.12+ and Visual Studio with Desktop C++ workload for Windows. It uses nvm to manage node versions and yarn as the package manager. Users can install ComfyUI and dependencies using comfy-cli, download uv, and build/launch the code. Troubleshooting steps include rebuilding modules and installing missing libraries. The tool supports debugging in VSCode and provides utility scripts for cleanup. Crash reports can be sent to help debug issues, but no personal data is included.

tiledesk-dashboard
Tiledesk is an open-source live chat platform with integrated chatbots written in Node.js and Express. It is designed to be a multi-channel platform for web, Android, and iOS, and it can be used to increase sales or provide post-sales customer service. Tiledesk's chatbot technology allows for automation of conversations, and it also provides APIs and webhooks for connecting external applications. Additionally, it offers a marketplace for apps and features such as CRM, ticketing, and data export.

cursor-talk-to-figma-mcp
This project implements a Model Context Protocol (MCP) integration between Cursor AI and Figma, allowing Cursor to communicate with Figma for reading designs and modifying them programmatically. It provides tools for interacting with Figma such as creating elements, modifying text content, styling, layout & organization, components & styles, export & advanced features, and connection management. The project structure includes a TypeScript MCP server for Figma integration, a Figma plugin for communicating with Cursor, and a WebSocket server for facilitating communication between the MCP server and Figma plugin.

aidermacs
Aidermacs is an AI pair programming tool for Emacs that integrates Aider, a powerful open-source AI pair programming tool. It provides top performance on the SWE Bench, support for multi-file edits, real-time file synchronization, and broad language support. Aidermacs delivers an Emacs-centric experience with features like intelligent model selection, flexible terminal backend support, smarter syntax highlighting, enhanced file management, and streamlined transient menus. It thrives on community involvement, encouraging contributions, issue reporting, idea sharing, and documentation improvement.

pacha
Pacha is an AI tool designed for retrieving context for natural language queries using a SQL interface and Python programming environment. It is optimized for working with Hasura DDN for multi-source querying. Pacha is used in conjunction with language models to produce informed responses in AI applications, agents, and chatbots.

Discord-AI-Chatbot
Discord AI Chatbot is a versatile tool that seamlessly integrates into your Discord server, offering a wide range of capabilities to enhance your communication and engagement. With its advanced language model, the bot excels at imaginative generation, providing endless possibilities for creative expression. Additionally, it offers secure credential management, ensuring the privacy of your data. The bot's hybrid command system combines the best of slash and normal commands, providing flexibility and ease of use. It also features mention recognition, ensuring prompt responses whenever you mention it or use its name. The bot's message handling capabilities prevent confusion by recognizing when you're replying to others. You can customize the bot's behavior by selecting from a range of pre-existing personalities or creating your own. The bot's web access feature unlocks a new level of convenience, allowing you to interact with it from anywhere. With its open-source nature, you have the freedom to modify and adapt the bot to your specific needs.

yek
Yek is a fast Rust-based tool designed to read text-based files in a repository or directory, chunk them, and serialize them for Large Language Models (LLM) consumption. It utilizes .gitignore rules to skip unwanted files, Git history to infer important files, and additional ignore patterns. Yek splits content into chunks based on token count or byte size, supports processing multiple directories, and can stream content when output is piped. It is configurable via a 'yek.toml' file and prioritizes important files at the end of the output.

llm-vscode
llm-vscode is an extension designed for all things LLM, utilizing llm-ls as its backend. It offers features such as code completion with 'ghost-text' suggestions, the ability to choose models for code generation via HTTP requests, ensuring prompt size fits within the context window, and code attribution checks. Users can configure the backend, suggestion behavior, keybindings, llm-ls settings, and tokenization options. Additionally, the extension supports testing models like Code Llama 13B, Phind/Phind-CodeLlama-34B-v2, and WizardLM/WizardCoder-Python-34B-V1.0. Development involves cloning llm-ls, building it, and setting up the llm-vscode extension for use.

ethereum-etl-airflow
This repository contains Airflow DAGs for extracting, transforming, and loading (ETL) data from the Ethereum blockchain into BigQuery. The DAGs use the Google Cloud Platform (GCP) services, including BigQuery, Cloud Storage, and Cloud Composer, to automate the ETL process. The repository also includes scripts for setting up the GCP environment and running the DAGs locally.