uLoopMCP
Your Unity project's AI autopilot. Compile, test, debug, repeat—until it just works.
Stars: 145
uLoopMCP is a Unity integration tool designed to let AI drive your Unity project forward with minimal human intervention. It provides a 'self-hosted development loop' where an AI can compile, run tests, inspect logs, and fix issues using tools like compile, run-tests, get-logs, and clear-console. It also allows AI to operate the Unity Editor itself—creating objects, calling menu items, inspecting scenes, and refining UI layouts from screenshots via tools like execute-dynamic-code, execute-menu-item, and capture-window. The tool enables AI-driven development loops to run autonomously inside existing Unity projects.
README:
Let an AI agent compile, test, and operate your Unity project from popular LLM tools via CLI (recommended) or MCP.
Designed to keep AI-driven development loops running autonomously inside your existing Unity projects.
uLoopMCP is a Unity integration tool designed so that AI can drive your Unity project forward with minimal human intervention. Tasks that humans typically handle manually—compiling, running the Test Runner, checking logs, editing scenes, and capturing windows to verify UI layouts—are exposed as tools that LLMs can orchestrate.
uLoopMCP is built around two core ideas:
- Provide a "self-hosted development loop" where an AI can repeatedly compile, run tests, inspect logs, and fix issues using tools like `compile`, `run-tests`, `get-logs`, and `clear-console`.
- Allow AI to operate the Unity Editor itself—creating objects, calling menu items, inspecting scenes, and refining UI layouts from screenshots—via tools like `execute-dynamic-code`, `execute-menu-item`, and `capture-window`.
https://github.com/user-attachments/assets/569a2110-7351-4cf3-8281-3a83fe181817
[!WARNING] The following software is required:
- Unity 2022.3 or later
- Node.js 22.0 or later (required for CLI and MCP server execution; install via the official site or your preferred version manager)
Install via Git URL:
- Open Unity Editor
- Open Window > Package Manager
- Click the "+" button
- Select "Add package from git URL"
- Enter the following URL: `https://github.com/hatayama/uLoopMCP.git?path=/Packages/src`
Or install via OpenUPM:
- Open the Project Settings window and go to the Package Manager page
- Add the following entry to the Scoped Registries list:
  - Name: OpenUPM
  - URL: https://package.openupm.com
  - Scope(s): io.github.hatayama.uloopmcp
- Open the Package Manager window and select OpenUPM in the My Registries section; uLoopMCP will be displayed.
uLoopMCP provides two connection methods: CLI and MCP. Both offer the same core functionality.
| Connection | Characteristics | Recommended For |
|---|---|---|
| CLI (uloop) Recommended | Auto-recognized by Skills-compatible LLM tools. No MCP config needed | Claude Code, Codex, and other Skills-compatible tools |
| MCP | Connect as an MCP server from LLM tools | Cursor, Windsurf, and other MCP-compatible tools |
After installation, open Window > uLoopMCP in Unity and press the Start Server button to launch the server.
For MCP connection, only Step 1 is needed. No CLI or Skills installation required. Proceed to MCP Connection Steps.
```shell
npm install -g uloop-cli

# Install to project for Claude Code (recommended)
uloop skills install --claude

# Install to project for OpenAI Codex
uloop skills install --codex

# Or install globally
uloop skills install --claude --global
```

That's it! After installing Skills, LLM tools can automatically handle instructions like these:
| Your Instruction | Skill Used by LLM Tools |
|---|---|
| "Launch Unity for this project" | /uloop-launch |
| "Fix the compile errors" | /uloop-compile |
| "Run the tests and tell me why they failed" | /uloop-run-tests + /uloop-get-logs |
| "Check the scene hierarchy" | /uloop-get-hierarchy |
| "Search for prefabs" | /uloop-unity-search |
| "Play the game and bring Unity to the front" | /uloop-control-play-mode + /uloop-focus-window |
| "Bulk-update prefab parameters" | /uloop-execute-dynamic-code |
| "Take a screenshot of Game View and adjust the UI layout" | /uloop-screenshot + /uloop-execute-dynamic-code |
[!TIP] No MCP configuration required! As long as the server is running in the uLoopMCP Window and you have installed the CLI and Skills, LLM tools communicate directly with Unity.
All 15 Bundled Skills
- `/uloop-launch` - Launch Unity with correct version
- `/uloop-compile` - Execute compilation
- `/uloop-get-logs` - Get console logs
- `/uloop-run-tests` - Run tests
- `/uloop-clear-console` - Clear console
- `/uloop-focus-window` - Bring Unity Editor to front
- `/uloop-get-hierarchy` - Get scene hierarchy
- `/uloop-unity-search` - Unity Search
- `/uloop-get-menu-items` - Get menu items
- `/uloop-execute-menu-item` - Execute menu item
- `/uloop-find-game-objects` - Find GameObjects
- `/uloop-screenshot` - Capture EditorWindow
- `/uloop-control-play-mode` - Control Play Mode
- `/uloop-execute-dynamic-code` - Execute dynamic C# code
- `/uloop-get-provider-details` - Get search provider details
Direct CLI Usage (Advanced)
You can also call the CLI directly without using Skills:
```shell
# List available tools
uloop list

# Launch Unity project with correct version
uloop launch

# Launch with build target (Android, iOS, StandaloneOSX, etc.)
uloop launch -p Android

# Kill running Unity and restart
uloop launch -r

# Execute compilation
uloop compile

# Get logs
uloop get-logs --max-count 10

# Run tests
uloop run-tests --filter-type all

# Execute dynamic code
uloop execute-dynamic-code --code 'using UnityEngine; Debug.Log("Hello from CLI!");'
```

Shell Completion (Optional)
You can install Bash/Zsh/PowerShell completion:
```shell
# Add completion script to shell config (auto-detects shell)
uloop completion --install

# Explicitly specify shell (when auto-detection fails on Windows)
uloop completion --shell bash --install        # Git Bash / MINGW64
uloop completion --shell powershell --install  # PowerShell

# Check completion script
uloop completion
```

If --port is omitted, the port configured for the project is automatically selected. By explicitly specifying the --port option, a single LLM tool can operate multiple Unity instances:

```shell
uloop compile --port {target-port}
```

[!NOTE] You can find the port number in each Unity's uLoopMCP Window.
You can also connect via MCP (Model Context Protocol) instead of CLI. No CLI or Skills installation required.
💡 CLI and MCP relationship: the CLI provides all MCP functionality, plus additional CLI-specific features such as launching and restarting Unity.
- Select Window > uLoopMCP. A dedicated window will open; press the "Start Server" button.
- Next, select the target IDE in the LLM Tool Settings section. Press the yellow "Configure {LLM Tool Name}" button to connect to the IDE automatically.
- Verify the IDE connection. For example, in Cursor, open Tools & MCP in the settings page and find uLoopMCP. Click the toggle to enable MCP.
[!WARNING] About Windsurf: project-level configuration is not supported; only global configuration is available.
Manual Setup (Usually Unnecessary)
[!NOTE] Usually automatic setup is sufficient, but if needed, you can manually edit the configuration file (e.g., `mcp.json`):
```json
{
  "mcpServers": {
    "uLoopMCP": {
      "command": "node",
      "args": [
        "[Unity Package Path]/TypeScriptServer~/dist/server.bundle.js"
      ],
      "env": {
        "UNITY_TCP_PORT": "{port}"
      }
    }
  }
}
```

Path Examples:
- Via Package Manager: `/Users/username/UnityProject/Library/PackageCache/io.github.hatayama.uloopmcp@[hash]/TypeScriptServer~/dist/server.bundle.js`

[!NOTE] When installed via Package Manager, the package is placed in `Library/PackageCache` with a hashed directory name. Using the "Auto Configure Cursor" button will automatically set the correct path.
[!NOTE] Multiple Unity instances can be supported by changing port numbers. uLoopMCP automatically assigns unused ports when starting up.
Performs AssetDatabase.Refresh() and then compiles, returning the results. Can detect errors and warnings that built-in linters cannot find. You can choose between incremental compilation and forced full compilation.
→ Execute compile, analyze error and warning content
→ Automatically fix relevant files
→ Verify with compile again
Filter by LogType or search for a target string with advanced search capabilities. You can also choose whether to include the stacktrace, which lets you retrieve logs while keeping the context small. MaxCount behavior: returns the latest logs (tail-like behavior); when MaxCount=10, the most recent 10 logs are returned. Advanced search features:
- Regular Expression Support: use `UseRegex: true` for powerful pattern matching
- Stack Trace Search: use `SearchInStackTrace: true` to search within stack traces
→ get-logs (LogType: Error, SearchText: "NullReference", MaxCount: 10)
→ get-logs (LogType: All, SearchText: "(?i).*error.*", UseRegex: true, MaxCount: 20)
→ get-logs (LogType: All, SearchText: "MyClass", SearchInStackTrace: true, MaxCount: 50)
→ Identify cause from stacktrace, fix relevant code
Executes Unity Test Runner and retrieves test results. You can set conditions with FilterType and FilterValue.
- FilterType: all (all tests), exact (individual test method name), regex (class name or namespace), assembly (assembly name)
- FilterValue: Value according to the filter type (class name, namespace, etc.). Test results can be output as XML; the output path is returned so the AI can read the file. This is also a strategy to avoid consuming context.
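From the CLI, the filter types above map to invocations like the following (a sketch; the `--filter-value` flag name is an assumption mirroring the FilterValue parameter, and a running Unity instance with the uLoopMCP server is required):

```shell
# Run every test in the project
uloop run-tests --filter-type all

# Run a single test method by its fully qualified name
uloop run-tests --filter-type exact --filter-value "MyNamespace.MyTests.MyTest"

# Run all tests whose class name or namespace matches a pattern
uloop run-tests --filter-type regex --filter-value "MyNamespace\..*Tests"
```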
→ run-tests (FilterType: exact, FilterValue: "io.github.hatayama.uLoopMCP.ConsoleLogRetrieverTests.GetAllLogs_WithMaskAllOff_StillReturnsAllLogs")
→ Check failed tests, fix implementation to pass tests
[!WARNING] During PlayMode test execution, Domain Reload is forcibly turned OFF. (Settings are restored after test completion) Note that static variables will not be reset during this period.
Clear logs that become noise during log searches.
→ clear-console
→ Start new debug session
Search assets and project data using Unity Search.
→ unity-search (SearchQuery: "*.prefab")
→ List prefabs matching specific conditions
→ Identify problematic prefabs
Retrieve search providers offered by UnitySearch.
→ Understand each provider's capabilities, choose optimal search method
Retrieve menu items defined with [MenuItem("xxx")] attribute. Can filter by string specification.
Execute menu items defined with [MenuItem("xxx")] attribute.
→ Execute project-specific tools
→ Check results with get-logs
Retrieve objects and examine component parameters. Also retrieve information about currently selected GameObjects (multiple selection supported) in Unity Editor.
→ find-game-objects (RequiredComponents: ["Camera"])
→ Investigate Camera component parameters
→ find-game-objects (SearchMode: "Selected")
→ Get detailed information about currently selected GameObjects in Unity Editor (supports multiple selection)
Retrieve information about the currently active Hierarchy in nested JSON format. Works at runtime as well.
Automatic File Export: Retrieved hierarchy data is always saved as JSON in {project_root}/.uloop/outputs/HierarchyResults/ directory. The MCP response only returns the file path, minimizing token consumption even for large datasets.
Selection Mode: Use UseSelection: true to get hierarchy starting from currently selected GameObject(s) in Unity Editor. Supports multiple selection - when parent and child are both selected, only the parent is used as root to avoid duplicate traversal.
→ Understand parent-child relationships between GameObjects, discover and fix structural issues
→ Regardless of scene size, hierarchy data is saved to a file and the path is returned instead of raw JSON
→ get-hierarchy (UseSelection: true)
→ Get hierarchy of currently selected GameObjects without specifying paths manually
Ensures the Unity Editor window associated with the active MCP session becomes the foreground application on macOS and Windows Editor builds. Great for keeping visual feedback in sync after other apps steal focus. (Linux is currently unsupported.)
Capture any EditorWindow as a PNG. Specify the window name (the text displayed in the title bar/tab) to capture.
When multiple windows of the same type are open (e.g., 3 Inspector windows), all windows are saved with numbered filenames.
Supports three matching modes: exact (default), prefix, and contains - all case-insensitive.
→ capture-window (WindowName: "Console")
→ Save Console window state as PNG
→ Provide visual feedback to AI
Control Unity Editor's Play Mode. Supports three actions: Play (start/resume), Stop, and Pause.
→ control-play-mode (Action: Play)
→ Start Play Mode to verify game behavior
→ control-play-mode (Action: Pause)
→ Pause to inspect state
Execute C# code dynamically within Unity Editor.
⚠️ Important Prerequisites: to use this tool, you must install the `Microsoft.CodeAnalysis.CSharp` package from the OpenUPM NuGet registry.
View Microsoft.CodeAnalysis.CSharp installation steps
Installation steps:
Use a scoped registry in Unity Package Manager via OpenUPM (recommended).
- Open the Project Settings window and go to the Package Manager page
- Add the following entry to the Scoped Registries list:
  - Name: OpenUPM
  - URL: https://package.openupm.com
  - Scope(s): org.nuget
- Open the Package Manager window, select OpenUPM in the My Registries section, and install `Microsoft.CodeAnalysis.CSharp`.
Async support:
- You can write await in your snippet (Task/ValueTask/UniTask and any awaitable type)
- Cancellation is propagated when you pass a CancellationToken to the tool
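To illustrate, a snippet body like the following could be passed to execute-dynamic-code (a sketch, not a standalone file; it assumes Level 1 security, under which object creation and logging are permitted):

```csharp
// Hypothetical snippet body for execute-dynamic-code.
// Any awaitable works; the tool propagates its CancellationToken to the snippet.
using UnityEngine;
using System.Threading.Tasks;

await Task.Delay(100); // e.g. wait briefly for an editor operation to settle
GameObject go = new GameObject("CreatedByAI");
Debug.Log($"Created {go.name}");
return go.name; // the returned value is reported back to the caller
```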
Security Level Support: implements a 3-tier security control to progressively restrict executable code:

- Level 0 - Disabled
  - No compilation or execution allowed
- Level 1 - Restricted 【Recommended Setting】
  - All Unity APIs and .NET standard libraries are generally available
  - User-defined assemblies (Assembly-CSharp, etc.) are also accessible
  - Only pinpoint blocking of security-critical operations:
    - File deletion: `File.Delete`, `Directory.Delete`, `FileUtil.DeleteFileOrDirectory`
    - File writing: `File.WriteAllText`, `File.WriteAllBytes`, `File.Replace`
    - Network communication: all `HttpClient`, `WebClient`, `WebRequest`, `Socket`, `TcpClient` operations
    - Process execution: `Process.Start`, `Process.Kill`
    - Dynamic code execution: `Assembly.Load*`, `Type.InvokeMember`, `Activator.CreateComInstanceFrom`
    - Thread manipulation: direct `Thread` / `Task` manipulation
    - Registry operations: all `Microsoft.Win32` namespace operations
  - Safe operations are allowed:
    - File reading (`File.ReadAllText`, `File.Exists`, etc.)
    - Path operations (all `Path.*` operations)
    - Information retrieval (`Assembly.GetExecutingAssembly`, `Type.GetType`, etc.)
  - Use cases: normal Unity development, automation with safety assurance
- Level 2 - FullAccess
  - All assemblies are accessible (no restrictions)
  - ⚠️ Warning: security risks exist; use only with trusted code
→ execute-dynamic-code (Code: "GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube); return \"Cube created\";")
→ Rapid prototype verification, batch processing automation
→ Unity API usage restricted according to security level
[!IMPORTANT] Security Settings
Some tools are disabled by default for security reasons. To use these tools, enable the corresponding items in the uLoopMCP window "Security Settings":
Basic Security Settings:
- Allow Tests Execution: enables the `run-tests` tool
- Allow Menu Item Execution: enables the `execute-menu-item` tool
- Allow Third Party Tools: enables user-developed custom tools

Dynamic Code Security Level (`execute-dynamic-code` tool):
- Level 0 (Disabled): Complete code execution disabled (safest)
- Level 1 (Restricted): Unity API only, dangerous operations blocked (recommended)
- Level 2 (FullAccess): All APIs available (use with caution)
Setting changes take effect immediately without server restart.
Warning: When using these features for AI-driven code generation, we strongly recommend running in sandbox environments or containers to prepare for unexpected behavior and security risks.
For detailed specifications of all tools (parameters, responses, examples), see TOOL_REFERENCE.md.
uLoopMCP enables efficient development of project-specific tools without requiring changes to the core package. The type-safe design allows for reliable custom tool implementation in minimal time. (If you ask an AI, it should be able to build one for you quickly ✨)
You can publish your extension tools on GitHub and reuse them across other projects. See uLoopMCP-extensions-sample for an example.
[!TIP] For AI-assisted development: Detailed implementation guides are available in .claude/rules/mcp-tools.md for tool development and .claude/rules/cli.md for CLI/Skills development. These guides are automatically loaded by Claude Code when working in the relevant directories.
[!IMPORTANT] Security Settings
Project-specific tools require enabling Allow Third Party Tools in the uLoopMCP window "Security Settings". When developing custom tools that involve dynamic code execution, also consider the Dynamic Code Security Level setting.
View Implementation Guide
Step 1: Create Schema Class (define parameters):
```csharp
using System.ComponentModel;

public class MyCustomSchema : BaseToolSchema
{
    [Description("Parameter description")]
    public string MyParameter { get; set; } = "default_value";

    [Description("Example enum parameter")]
    public MyEnum EnumParameter { get; set; } = MyEnum.Option1;
}

public enum MyEnum
{
    Option1 = 0,
    Option2 = 1,
    Option3 = 2
}
```

Step 2: Create Response Class (define return data):
```csharp
public class MyCustomResponse : BaseToolResponse
{
    public string Result { get; set; }
    public bool Success { get; set; }

    public MyCustomResponse(string result, bool success)
    {
        Result = result;
        Success = success;
    }

    // Required parameterless constructor
    public MyCustomResponse() { }
}
```

Step 3: Create Tool Class:
```csharp
using System.Threading;
using System.Threading.Tasks;

[McpTool(Description = "Description of my custom tool")] // ← Auto-registered with this attribute
public class MyCustomTool : AbstractUnityTool<MyCustomSchema, MyCustomResponse>
{
    public override string ToolName => "my-custom-tool";

    // Executed on main thread
    protected override Task<MyCustomResponse> ExecuteAsync(MyCustomSchema parameters, CancellationToken cancellationToken)
    {
        // Type-safe parameter access
        string param = parameters.MyParameter;
        MyEnum enumValue = parameters.EnumParameter;

        // Check for cancellation before long-running operations
        cancellationToken.ThrowIfCancellationRequested();

        // Implement custom logic here
        string result = ProcessCustomLogic(param, enumValue);
        bool success = !string.IsNullOrEmpty(result);

        // For long-running operations, periodically check for cancellation
        // cancellationToken.ThrowIfCancellationRequested();

        return Task.FromResult(new MyCustomResponse(result, success));
    }

    private string ProcessCustomLogic(string input, MyEnum enumValue)
    {
        // Implement custom logic
        return $"Processed '{input}' with enum '{enumValue}'";
    }
}
```

[!IMPORTANT] Important Notes:
- Thread Safety: Tools execute on Unity's main thread, so Unity API calls are safe without additional synchronization.
Please also refer to Custom Tool Samples.
When you create a custom tool, you can create a Skill/ subfolder within the tool folder and place a SKILL.md file there. This allows LLM tools to automatically discover and use your custom tool through the Skills system.
How it works:
- Create a `Skill/` subfolder in your custom tool's folder
- Place `SKILL.md` inside the `Skill/` folder
- Run `uloop skills install --claude` to install all skills (bundled + project)
- LLM tools will automatically recognize your custom skill
Directory structure:
```
Assets/Editor/CustomTools/MyTool/
├── MyTool.cs        # Tool implementation
└── Skill/
    ├── SKILL.md     # Skill definition (required)
    └── references/  # Additional files (optional)
        └── usage.md
```
SKILL.md format:
```
---
name: uloop-my-custom-tool
description: "Description of what the tool does and when to use it."
---
# uloop my-custom-tool

Detailed documentation for the tool...
```

Scanned locations (searches for `Skill/SKILL.md` files):
- `Assets/**/Editor/<ToolFolder>/Skill/SKILL.md`
- `Packages/*/Editor/<ToolFolder>/Skill/SKILL.md`
- `Library/PackageCache/*/Editor/<ToolFolder>/Skill/SKILL.md`
[!TIP]
- Add `internal: true` to the frontmatter to exclude a skill from installation (useful for internal/debug tools)
- Additional files in the `Skill/` folder (such as `references/`, `scripts/`, `assets/`) are also copied during installation
See HelloWorld sample for a complete example.
For a more comprehensive example project, see uLoopMCP-extensions-sample.
[!TIP] File Output
The `run-tests`, `unity-search`, and `get-hierarchy` tools can save results to the `{project_root}/.uloop/outputs/` directory to avoid massive token consumption when dealing with large datasets. Recommendation: add `.uloop/` to `.gitignore` to exclude it from version control.
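For instance, the recommended ignore entry can be added from the project root with a one-liner (a sketch; it assumes a standard Git repository layout):

```shell
# Append the uLoopMCP output directory to the project's .gitignore
echo '.uloop/' >> .gitignore
```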
MIT License
For Tasks:
Click tags to check more tools for each tasksFor Jobs:
Alternative AI tools for uLoopMCP
Similar Open Source Tools
uLoopMCP
uLoopMCP is a Unity integration tool designed to let AI drive your Unity project forward with minimal human intervention. It provides a 'self-hosted development loop' where an AI can compile, run tests, inspect logs, and fix issues using tools like compile, run-tests, get-logs, and clear-console. It also allows AI to operate the Unity Editor itself—creating objects, calling menu items, inspecting scenes, and refining UI layouts from screenshots via tools like execute-dynamic-code, execute-menu-item, and capture-window. The tool enables AI-driven development loops to run autonomously inside existing Unity projects.
claude-code-tools
The 'claude-code-tools' repository provides productivity tools for Claude Code, Codex-CLI, and similar CLI coding agents. It includes CLI commands, skills, agents, hooks, and plugins for various tasks. The tools cover functionalities like session search, terminal automation, encrypted backup and sync, safe inspection of .env files, safety hooks, voice feedback, session chain repair, conversion between markdown and Google Docs, and CSV to Google Sheets and vice versa. The repository architecture consists of Python CLI, Rust TUI for search, and Node.js for action menus.
LEANN
LEANN is an innovative vector database that democratizes personal AI, transforming your laptop into a powerful RAG system that can index and search through millions of documents using 97% less storage than traditional solutions without accuracy loss. It achieves this through graph-based selective recomputation and high-degree preserving pruning, computing embeddings on-demand instead of storing them all. LEANN allows semantic search of file system, emails, browser history, chat history, codebase, or external knowledge bases on your laptop with zero cloud costs and complete privacy. It is a drop-in semantic search MCP service fully compatible with Claude Code, enabling intelligent retrieval without changing your workflow.
forge
Forge is a powerful open-source tool for building modern web applications. It provides a simple and intuitive interface for developers to quickly scaffold and deploy projects. With Forge, you can easily create custom components, manage dependencies, and streamline your development workflow. Whether you are a beginner or an experienced developer, Forge offers a flexible and efficient solution for your web development needs.
ragflow
RAGFlow is an open-source Retrieval-Augmented Generation (RAG) engine that combines deep document understanding with Large Language Models (LLMs) to provide accurate question-answering capabilities. It offers a streamlined RAG workflow for businesses of all sizes, enabling them to extract knowledge from unstructured data in various formats, including Word documents, slides, Excel files, images, and more. RAGFlow's key features include deep document understanding, template-based chunking, grounded citations with reduced hallucinations, compatibility with heterogeneous data sources, and an automated and effortless RAG workflow. It supports multiple recall paired with fused re-ranking, configurable LLMs and embedding models, and intuitive APIs for seamless integration with business applications.
pentagi
PentAGI is an innovative tool for automated security testing that leverages cutting-edge artificial intelligence technologies. It is designed for information security professionals, researchers, and enthusiasts who need a powerful and flexible solution for conducting penetration tests. The tool provides secure and isolated operations in a sandboxed Docker environment, fully autonomous AI-powered agent for penetration testing steps, a suite of 20+ professional security tools, smart memory system for storing research results, web intelligence for gathering information, integration with external search systems, team delegation system, comprehensive monitoring and reporting, modern interface, API integration, persistent storage, scalable architecture, self-hosted solution, flexible authentication, and quick deployment through Docker Compose.
iloom-cli
iloom is a tool designed to streamline AI-assisted development by focusing on maintaining alignment between human developers and AI agents. It treats context as a first-class concern, persisting AI reasoning in issue comments rather than temporary chats. The tool allows users to collaborate with AI agents in an isolated environment, switch between complex features without losing context, document AI decisions publicly, and capture key insights and lessons learned from AI sessions. iloom is not just a tool for managing git worktrees, but a control plane for maintaining alignment between users and their AI assistants.
factorio-learning-environment
Factorio Learning Environment is an open source framework designed for developing and evaluating LLM agents in the game of Factorio. It provides two settings: Lab-play with structured tasks and Open-play for building large factories. Results show limitations in spatial reasoning and automation strategies. Agents interact with the environment through code synthesis, observation, action, and feedback. Tools are provided for game actions and state representation. Agents operate in episodes with observation, planning, and action execution. Tasks specify agent goals and are implemented in JSON files. The project structure includes directories for agents, environment, cluster, data, docs, eval, and more. A database is used for checkpointing agent steps. Benchmarks show performance metrics for different configurations.
docs-mcp-server
The docs-mcp-server repository contains the server-side code for the documentation management system. It provides functionalities for managing, storing, and retrieving documentation files. Users can upload, update, and delete documents through the server. The server also supports user authentication and authorization to ensure secure access to the documentation system. Additionally, the server includes APIs for integrating with other systems and tools, making it a versatile solution for managing documentation in various projects and organizations.
mcpdoc
The MCP LLMS-TXT Documentation Server is an open-source server that provides developers full control over tools used by applications like Cursor, Windsurf, and Claude Code/Desktop. It allows users to create a user-defined list of `llms.txt` files and use a `fetch_docs` tool to read URLs within these files, enabling auditing of tool calls and context returned. The server supports various applications and provides a way to connect to them, configure rules, and test tool calls for tasks related to documentation retrieval and processing.
AutoAgent
AutoAgent is a fully-automated and zero-code framework that enables users to create and deploy LLM agents through natural language alone. It is a top performer on the GAIA Benchmark, equipped with a native self-managing vector database, and allows for easy creation of tools, agents, and workflows without any coding. AutoAgent seamlessly integrates with a wide range of LLMs and supports both function-calling and ReAct interaction modes. It is designed to be dynamic, extensible, customized, and lightweight, serving as a personal AI assistant.
aidermacs
Aidermacs is an AI pair programming tool for Emacs that integrates Aider, a powerful open-source AI pair programming tool. It provides top performance on the SWE Bench, support for multi-file edits, real-time file synchronization, and broad language support. Aidermacs delivers an Emacs-centric experience with features like intelligent model selection, flexible terminal backend support, smarter syntax highlighting, enhanced file management, and streamlined transient menus. It thrives on community involvement, encouraging contributions, issue reporting, idea sharing, and documentation improvement.
golf
Golf is a simple command-line tool for calculating the distance between two geographic coordinates. It uses the Haversine formula to accurately determine the distance between two points on the Earth's surface. This tool is useful for developers working on location-based applications or projects that require distance calculations. With Golf, users can easily input latitude and longitude coordinates and get the precise distance in kilometers or miles. The tool is lightweight, easy to use, and can be integrated into various programming workflows.
agenticSeek
AgenticSeek is a voice-enabled AI assistant powered by DeepSeek R1 agents, offering a fully local alternative to cloud-based AI services. It allows users to interact with their filesystem, code in multiple languages, and perform various tasks autonomously. The tool is equipped with memory to remember user preferences and past conversations, and it can divide tasks among multiple agents for efficient execution. AgenticSeek prioritizes privacy by running entirely on the user's hardware without sending data to the cloud.
unity-mcp
MCP for Unity is a tool that acts as a bridge, enabling AI assistants to interact with the Unity Editor via a local MCP Client. Users can instruct their LLM to manage assets, scenes, scripts, and automate tasks within Unity. The tool offers natural language control, powerful tools for asset management, scene manipulation, and automation of workflows. It is extensible and designed to work with various MCP Clients, providing a range of functions for precise text edits, script management, GameObject operations, and more.
generator
ctx is a tool designed to automatically generate organized context files from code files, GitHub repositories, Git commits, web pages, and plain text. It aims to efficiently provide necessary context to AI language models like ChatGPT and Claude, enabling users to streamline code refactoring, multiple iteration development, documentation generation, and seamless AI integration. With ctx, users can create structured markdown documents, save context files, and serve context through an MCP server for real-time assistance. The tool simplifies the process of sharing project information with AI assistants, making AI conversations smarter and easier.
For similar tasks
uLoopMCP
uLoopMCP is a Unity integration tool designed to let AI drive your Unity project forward with minimal human intervention. It provides a 'self-hosted development loop' where an AI can compile, run tests, inspect logs, and fix issues using tools like compile, run-tests, get-logs, and clear-console. It also allows AI to operate the Unity Editor itself—creating objects, calling menu items, inspecting scenes, and refining UI layouts from screenshots via tools like execute-dynamic-code, execute-menu-item, and capture-window. The tool enables AI-driven development loops to run autonomously inside existing Unity projects.
commanddash
Dash AI is an open-source coding assistant for Flutter developers. It is designed to not only write code but also run and debug it, allowing it to assist beyond code completion and automate routine tasks. Dash AI is powered by Gemini, integrated with the Dart Analyzer, and specifically tailored for Flutter engineers. The vision for Dash AI is to create a single-command assistant that can automate tedious development tasks, enabling developers to focus on creativity and innovation. It aims to assist with the entire process of engineering a feature for an app, from breaking down the task into steps to generating exploratory tests and iterating on the code until the feature is complete. To achieve this vision, Dash AI is working on providing LLMs with the same access and information that human developers have, including full contextual knowledge, the latest syntax and dependencies data, and the ability to write, run, and debug code. Dash AI welcomes contributions from the community, including feature requests, issue fixes, and participation in discussions. The project is committed to building a coding assistant that empowers all Flutter developers.
ollama4j
Ollama4j is a Java library that serves as a wrapper or binding for the Ollama server. It facilitates communication with the Ollama server and provides models for deployment. The tool requires Java 11 or higher and can be installed locally or via Docker. Users can integrate Ollama4j into Maven projects by adding the specified dependency. The tool offers API specifications and supports various development tasks such as building, running unit tests, and integration tests. Releases are automated through GitHub Actions CI workflow. Areas of improvement include adhering to Java naming conventions, updating deprecated code, implementing logging, using lombok, and enhancing request body creation. Contributions to the project are encouraged, whether reporting bugs, suggesting enhancements, or contributing code.
crewAI-tools
The crewAI Tools repository provides a guide for setting up tools for crewAI agents, enabling the creation of custom tools to enhance AI solutions. Tools play a crucial role in improving agent functionality. The guide explains how to equip agents with a range of tools and how to create new tools. Tools are designed to return strings for generating responses. There are two main methods for creating tools: subclassing BaseTool and using the tool decorator. Contributions to the toolset are encouraged, and the development setup includes steps for installing dependencies, activating the virtual environment, setting up pre-commit hooks, running tests, static type checking, packaging, and local installation. Enhance AI agent capabilities with advanced tooling.
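The two creation styles described above can be sketched in Python. Note that `BaseTool` and `tool` below are simplified stand-ins written for illustration, not the real `crewai_tools` imports; they only mirror the pattern of subclassing versus decorating a plain function, with each tool returning a string:

```python
class BaseTool:
    """Stand-in for crewAI's BaseTool: a named tool whose run() returns a string."""
    name: str = "base tool"
    description: str = ""

    def _run(self, *args, **kwargs) -> str:
        raise NotImplementedError

    def run(self, *args, **kwargs) -> str:
        return self._run(*args, **kwargs)

# Style 1: subclass the base class and implement _run.
class WordCountTool(BaseTool):
    name = "word counter"
    description = "Counts words in a text and reports the total as a string."

    def _run(self, text: str) -> str:
        return f"{len(text.split())} words"

# Style 2: a decorator that wraps a plain function into a tool instance.
def tool(name: str):
    def wrap(fn):
        class FnTool(BaseTool):
            def _run(self, *args, **kwargs) -> str:
                return fn(*args, **kwargs)
        FnTool.name = name
        FnTool.description = fn.__doc__ or ""
        return FnTool()
    return wrap

@tool("shouter")
def shout(text: str) -> str:
    """Upper-cases the input text."""
    return text.upper()

print(WordCountTool().run("hello brave new world"))  # 4 words
print(shout.run("ok"))  # OK
```

The decorator style is convenient for one-off tools; the subclass style is better when the tool needs state or configuration.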
lightning-lab
Lightning Lab is a public template for artificial intelligence and machine learning research projects using Lightning AI's PyTorch Lightning. It provides a structured project layout with modules for command line interface, experiment utilities, Lightning Module and Trainer, data acquisition and preprocessing, model serving APIs, project configurations, training checkpoints, technical documentation, logs, notebooks for data analysis, requirements management, testing, and packaging. The template simplifies the setup of deep learning projects and offers extras for different domains like vision, text, audio, reinforcement learning, and forecasting.
Magic_Words
Magic_Words is a repository containing code for the paper 'What's the Magic Word? A Control Theory of LLM Prompting'. It implements greedy back generation and greedy coordinate gradient (GCG) to find optimal control prompts (magic words). Users can set up a virtual environment, install the package and dependencies, and run example scripts for pointwise control and optimizing prompts for datasets. The repository provides scripts for finding optimal control prompts for question-answer pairs and dataset optimization using the GCG algorithm.
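The greedy back generation idea can be illustrated with a toy loop: repeatedly prepend the single token that most improves a score. The scoring function below is a hypothetical stand-in for an LLM's log-likelihood of the target answer, not the repository's actual implementation:

```python
from typing import Callable

def greedy_back_generate(vocab, target: str,
                         score: Callable[[list, str], float],
                         max_len: int = 3) -> list:
    """Toy greedy back generation: at each step, prepend the token from
    `vocab` that most increases score(prompt, target); stop when no token helps."""
    prompt: list = []
    for _ in range(max_len):
        best_tok, best_s = None, score(prompt, target)
        for tok in vocab:
            s = score([tok] + prompt, target)
            if s > best_s:
                best_tok, best_s = tok, s
        if best_tok is None:  # no token improves the score; stop early
            break
        prompt.insert(0, best_tok)
    return prompt

# Stand-in scorer: rewards covering the target's characters, with a length penalty.
def overlap_score(prompt, target):
    joined = "".join(prompt)
    return len(set(joined) & set(target)) - 0.1 * len(joined)

vocab = ["cat", "dog", "art", "xyz"]
print(greedy_back_generate(vocab, "cart", overlap_score))  # ['art', 'cat']
```

In the paper's setting the scorer is the model's probability of the answer given the prompt, and GCG replaces the exhaustive token sweep with a gradient-guided candidate search.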
grafana-llm-app
This repository contains separate packages for Grafana LLM Plugin and the @grafana/llm package for interfacing with it. The packages are tightly coupled and developed together with identical dependencies. The repository provides instructions for developing the packages, including backend and frontend development, testing, and release processes.
For similar jobs
promptflow
**Prompt flow** is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to build LLM apps with production quality.
deepeval
DeepEval is a simple-to-use, open-source LLM evaluation framework specialized for unit testing LLM outputs. It incorporates various metrics such as G-Eval, hallucination, answer relevancy, RAGAS, etc., and runs locally on your machine for evaluation. It provides a wide range of ready-to-use evaluation metrics, allows for creating custom metrics, integrates with any CI/CD environment, and enables benchmarking LLMs on popular benchmarks. DeepEval is designed for evaluating RAG and fine-tuning applications, helping users optimize hyperparameters, prevent prompt drifting, and transition from OpenAI to hosting their own Llama2 with confidence.
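The custom-metric pattern described above can be sketched as follows. `TestCase` and `KeywordOverlapMetric` here are simplified stand-ins, written for illustration, for the shapes of deepeval's `LLMTestCase` and `BaseMetric`, not the real imports: a metric computes a score from a test case and passes if it meets a threshold.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """Stand-in for an LLM test case: the prompt, the model's answer,
    and the answer we expected."""
    input: str
    actual_output: str
    expected_output: str

class KeywordOverlapMetric:
    """Illustrative custom metric: fraction of expected keywords
    that appear in the actual output."""
    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold
        self.score = 0.0

    def measure(self, case: TestCase) -> float:
        expected = set(case.expected_output.lower().split())
        actual = set(case.actual_output.lower().split())
        self.score = len(expected & actual) / max(len(expected), 1)
        return self.score

    def is_successful(self) -> bool:
        return self.score >= self.threshold

metric = KeywordOverlapMetric(threshold=0.5)
case = TestCase("capital of France?", "Paris is the capital", "Paris")
metric.measure(case)
print(metric.score, metric.is_successful())  # 1.0 True
```

Because a metric exposes a plain pass/fail check, it slots naturally into any CI/CD pipeline as an ordinary unit-test assertion.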
MegaDetector
MegaDetector is an AI model that identifies animals, people, and vehicles in camera trap images (which also makes it useful for eliminating blank images). This model is trained on several million images from a variety of ecosystems. MegaDetector is just one of many tools that aim to make conservation biologists more efficient with AI. If you want to learn about other ways to use AI to accelerate camera trap workflows, check out our overview of the field, affectionately titled "Everything I know about machine learning and camera traps".
leapfrogai
LeapfrogAI is a self-hosted AI platform designed to be deployed in air-gapped resource-constrained environments. It brings sophisticated AI solutions to these environments by hosting all the necessary components of an AI stack, including vector databases, model backends, API, and UI. LeapfrogAI's API closely matches that of OpenAI, allowing tools built for OpenAI/ChatGPT to function seamlessly with a LeapfrogAI backend. It provides several backends for various use cases, including llama-cpp-python, whisper, text-embeddings, and vllm. LeapfrogAI leverages Chainguard's apko to harden base python images, ensuring the latest supported Python versions are used by the other components of the stack. The LeapfrogAI SDK provides a standard set of protobuffs and python utilities for implementing backends and gRPC. LeapfrogAI offers UI options for common use-cases like chat, summarization, and transcription. It can be deployed and run locally via UDS and Kubernetes, built out using Zarf packages. LeapfrogAI is supported by a community of users and contributors, including Defense Unicorns, Beast Code, Chainguard, Exovera, Hypergiant, Pulze, SOSi, United States Navy, United States Air Force, and United States Space Force.
llava-docker
This Docker image for LLaVA (Large Language and Vision Assistant) provides a convenient way to run LLaVA locally or on RunPod. LLaVA is a powerful AI tool that combines natural language processing and computer vision capabilities. With this Docker image, you can easily access LLaVA's functionalities for various tasks, including image captioning, visual question answering, text summarization, and more. The image comes pre-installed with LLaVA v1.2.0, Torch 2.1.2, xformers 0.0.23.post1, and other necessary dependencies. You can customize the model used by setting the MODEL environment variable. The image also includes a Jupyter Lab environment for interactive development and exploration. Overall, this Docker image offers a comprehensive and user-friendly platform for leveraging LLaVA's capabilities.
carrot
The 'carrot' repository on GitHub provides a list of free and user-friendly ChatGPT mirror sites for easy access. The repository includes sponsored sites offering various GPT models and services. Users can find and share sites, report errors, and access stable and recommended sites for ChatGPT usage. The repository also includes a detailed list of ChatGPT sites, their features, and accessibility options, making it a valuable resource for ChatGPT users seeking free and unlimited GPT services.
TrustLLM
TrustLLM is a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, an established benchmark, evaluation and analysis of trustworthiness for mainstream LLMs, and a discussion of open challenges and future directions. Specifically, we first propose a set of principles for trustworthy LLMs that span eight different dimensions. Based on these principles, we further establish a benchmark across six dimensions: truthfulness, safety, fairness, robustness, privacy, and machine ethics. We then present a study evaluating 16 mainstream LLMs in TrustLLM, consisting of over 30 datasets. The document explains how to use the trustllm Python package to assess the trustworthiness of your LLM more quickly. For more details about TrustLLM, please refer to the project website.
AI-YinMei
AI-YinMei is an AI virtual anchor (VTuber) development tool for NVIDIA GPUs. It supports knowledge-base chat dialogue via a complete LLM stack of [fastgpt] + [one-api] + [Xinference], and integrates with Bilibili live streams to reply to danmaku (bullet-chat) messages and greet viewers as they enter the room. Speech synthesis is supported through Microsoft edge-tts, Bert-VITS2, and GPT-SoVITS, with expression control via VTube Studio. It can paint with stable-diffusion-webui and output the results to an OBS live room, using a public NSFW classifier to filter generated images. Search features include DuckDuckGo web and image search (requires unrestricted internet access) and Baidu image search (no such access required). It also offers an AI reply chat box and a playlist as HTML plugins, AI singing via Auto-Convert-Music, and motion features such as dancing, expression video playback, head-pat and gift-smash reactions, automatic dancing when a song starts, and idle swaying during chat and singing. Scenes support multi-scene switching, background-music switching, and automatic day/night scene changes, and the AI can autonomously decide when to sing or paint based on the conversation.