
UnrealGenAISupport
An Unreal Engine plugin for LLM/GenAI models and an MCP UE5 server. Supports the Claude Desktop App, Windsurf, and Cursor as clients, and includes OpenAI's GPT-5, Deepseek V3.1, Claude Sonnet 4, and Grok 4 APIs, with plans to add Gemini, audio, and realtime APIs soon. UnrealMCP is also here: automatic blueprint and scene generation from AI!

The Unreal Engine Generative AI Support Plugin is a tool designed to integrate various cutting-edge LLM/GenAI models into Unreal Engine for game development. It aims to simplify the process of using AI models for game development tasks, such as controlling scene objects, generating blueprints, running Python scripts, and more. The plugin currently supports models from organizations like OpenAI, Anthropic, XAI, Google Gemini, Meta AI, Deepseek, and Baidu. It provides features like API support, model control, generative AI capabilities, UI generation, project file management, and more. The plugin is still under development but offers a promising solution for integrating AI models into game development workflows.
Claude spawning scene objects and controlling their transformations and materials, generating blueprints, functions, and variables, adding components, running Python scripts, and more.
A project called Become Human, where NPCs are agentic OpenAI instances, built using this plugin.
Every month, hundreds of new AI models are released by various organizations, making it hard to keep up with the latest advancements.
The Unreal Engine Generative AI Support Plugin allows you to focus on game development without worrying about the LLM/GenAI integration layer.
Currently integrating the Model Context Protocol (MCP) with Unreal Engine 5.5.
This project aims to build a long-term support (LTS) plugin for various cutting-edge LLM/GenAI models and foster a community around it. It currently includes OpenAI's GPT-4o, Deepseek R1, Claude Sonnet 4, Claude Opus 4, and GPT-4o-mini for Unreal Engine 5.1 or higher, with plans to add real-time APIs, Gemini, MCP, and Grok 3 APIs soon. The plugin focuses exclusively on APIs useful for game development, evals, and interactive experiences. All suggestions and contributions are welcome. The plugin can also be used for setting up new evals and new ways to compare models in game battlefields.
[!WARNING]
This plugin is still under rapid development.
- ⚠️ Do not use it in production environments.
- ⚠️ Do not use it without version control.
I will continue to keep this repo updated with the latest features and models as they become available. Contributions are welcome. If you want a production-ready plugin with more features and guaranteed stability, please check out the Gen AI Pro plugin; it costs a lot of API credits to test different models per feature and per engine version to make sure everything works well and stays compatible. Otherwise, this plugin is good enough for many use cases (including the examples shown at the beginning), and you can use it for free, forever.
Currently working on fixing the issues with MCP (especially the node generation) and adding more features to it.
- OpenAI API Support:
  - OpenAI Chat API ✅ (models-ref)
    - `gpt-5` ✅
    - `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano` models ✅
    - `gpt-4o`, `gpt-4o-mini` models ✅
    - `o4-mini` model ✅
    - `o3`, `o3-pro`, `o3-mini`, `o1` models ✅
  - OpenAI DALL-E API (available in pro) ☑️
  - Responses API 🛠️
  - OpenAI Vision API (available in pro) ☑️
  - OpenAI Realtime API
    - `gpt-4o-realtime-preview`, `gpt-4o-mini-realtime-preview` models (available in pro) ☑️
  - OpenAI Structured Outputs ✅
  - OpenAI Whisper & TTS API (available in pro) ☑️
  - Multimodal API Support (available in pro) ☑️
  - Text Streaming (available in pro) ☑️
- Anthropic Claude API Support:
  - Claude Chat API ✅
    - `claude-4-latest`, `claude-3-7-sonnet-latest`, `claude-3-5-sonnet`, `claude-3-5-haiku-latest`, `claude-3-opus-latest` models ✅
  - Multimodal/Vision API Support (available in pro) ☑️
- XAI (Grok 3) API Support:
  - XAI Chat Completions API ✅
    - `grok-3-latest`, `grok-3-mini-beta` models ✅
    - `grok-3`, `grok-3-mini`, `grok-3-fast`, `grok-3-mini-fast`, `grok-2-vision-1212`, `grok-2-1212` models
    - `grok-4` Reasoning API (available in pro) ☑️
  - XAI Image API 🚧
  - Text Streaming API (available in pro) ☑️
  - Multimodal API Support (available in pro) ☑️
- Google Gemini API Support:
  - Gemini Chat API
    - `gemini-2.0-flash-lite`, `gemini-2.0-flash`, `gemini-1.5-flash` models (available in pro) ☑️
    - Gemini 2.5 Pro model (available in pro) ☑️
  - Gemini Imagen API:
    - `imagen-3.0-generate-002` (available in pro) ☑️
    - `nano-banana` (available in pro) ☑️
  - Google TTS & Transcription API (available in pro) ☑️
  - Multimodal API Support (available in pro) ☑️
- Meta AI API Support:
  - Llama 4 herd ❌
    - Llama 4 Behemoth, Llama 4 Maverick, Llama 4 Scout ❌
    - `llama3.3-70b`, `llama3.1-8b` models ❌
  - Local Llama API 🚧🤝
- Deepseek API Support:
  - Deepseek Chat API ✅
    - `deepseek-chat` (DeepSeek-V3.1) model ✅
  - Deepseek Reasoning API (R1) ✅
    - `deepseek-reasoning-r1` model ✅
    - `deepseek-reasoning-r1` CoT streaming ❌
  - Independently Hosted Deepseek Models ❌
- Baidu API Support:
  - Baidu Chat API ❌
    - `baidu-chat` model ❌
- 3D Generative Model APIs:
  - TripoSR by StabilityAI 🚧
- Plugin Documentation 🛠️🤝
- Plugin Example Project 🛠️ here
- Version Control Support
  - Perforce Support 🛠️
  - Git Submodule Support ✅
- LTS Branching 🚧
  - Stable Branch with Bug Fixes 🚧
  - Dedicated Contributor for LTS 🚧
- Lightweight Plugin (In Builds)
  - No External Dependencies ✅
  - Build Flags to enable/disable APIs 🚧
  - Submodules per API Organization 🚧
  - Exclude MCP from build 🚧
- Testing
  - Automated Testing (available in pro) ☑️
  - Different Platforms (available in pro) ☑️
  - Different Engine Versions (available in pro) ☑️
- Clients Support ✅
  - Claude Desktop App Support ✅
  - Cursor IDE Support ✅
  - OpenAI Operator API Support 🚧
- Blueprints Auto Generation 🛠️
  - Creating new blueprints of various types ✅
  - Adding new functions and function/blueprint variables ✅
  - Adding nodes and connections 🛠️ (buggy, issues open)
  - Advanced Blueprints Generation 🛠️
- Level/Scene Control for LLMs 🛠️
  - Spawning objects and shapes ✅
  - Moving, rotating, and scaling objects ✅
  - Changing materials and color ✅
  - Advanced scene features 🛠️
- Generative AI:
  - Prompt to 3D model fetch and spawn 🛠️
- Control:
  - Ability to run Python scripts ✅
  - Ability to run console commands ✅
- UI:
  - Widgets generation 🛠️
  - UI Blueprint generation 🛠️
- Project Files:
  - Create/edit project files/folders ✅
  - Delete existing project files ❌
- Others:
  - Project Cleanup 🛠️
Where:
- ✅ - Completed
- ☑️ - (available in pro)
- 🛠️ - In Progress
- 🚧 - Planned
- 🤝 - Need Contributors
- ❌ - Won't Support For Now
- Setting API Keys
- Setting up MCP
- Adding the plugin to your project
- Fetching the Latest Plugin Changes
- Usage
- Known Issues
- Config Window
- Contribution Guidelines
- References
[!NOTE]
There is no need to set an API key for testing the MCP features in the Claude app. The Anthropic key is only needed for the Claude API.
For Windows, set the environment variable PS_<ORGNAME> to your API key:
```
setx PS_<ORGNAME> "your api key"
```
For Linux/macOS:
- Run the following command in your terminal, replacing yourkey with your API key:
```
echo "export PS_<ORGNAME>='yourkey'" >> ~/.zshrc
```
- Update the shell with the new variable:
```
source ~/.zshrc
```
PS: Don't forget to restart the editor, and also the connected IDE, after setting the environment variable.
Where `<ORGNAME>` can be: `PS_OPENAIAPIKEY`, `PS_DEEPSEEKAPIKEY`, `PS_ANTHROPICAPIKEY`, `PS_METAAPIKEY`, `PS_GOOGLEAPIKEY`, etc.
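To sanity-check that the key is actually visible to the engine, you can read it back with Unreal's standard platform API; a minimal sketch (illustrative, not part of the plugin):
```cpp
#include "HAL/PlatformMisc.h"

// Illustrative check: confirm the editor process can see the key.
// If this comes back empty, restart the editor and the connected IDE.
const FString ApiKey = FPlatformMisc::GetEnvironmentVariable(TEXT("PS_OPENAIAPIKEY"));
if (ApiKey.IsEmpty())
{
    UE_LOG(LogTemp, Warning, TEXT("PS_OPENAIAPIKEY is not set or not visible to this process."));
}
```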
Storing API keys in packaged builds is a security risk. This is what the OpenAI API documentation says about it:
"Exposing your OpenAI API key in client-side environments like browsers or mobile apps allows malicious users to take that key and make requests on your behalf – which may lead to unexpected charges or compromise of certain account data. Requests should always be routed through your own backend server where you can keep your API key secure."
Read more about it here.
For test builds, you can call the `GenSecureKey::SetGenAIApiKeyRuntime` function, either from C++ or Blueprints, with your API key in the packaged build.
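A minimal sketch of what that call might look like (the exact signature and the org enum here are assumptions; check the plugin source for the real API):
```cpp
// Hedged sketch: parameters are assumptions, not the confirmed signature.
// Sets the OpenAI key at runtime for a packaged test build.
UGenSecureKey::SetGenAIApiKeyRuntime(EGenAIOrgs::OpenAI, TEXT("<your-test-api-key>"));
```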
[!NOTE]
If your project only uses the LLM APIs and not the MCP, you can skip this section.
[!CAUTION]
Disclaimer: The MCP feature of this plugin lets the Claude Desktop App directly control your Unreal Engine project. Make sure you are aware of the security risks and only use it in a controlled environment. Please back up your project before using the MCP feature, and use version control to track changes.
Create or edit the `claude_desktop_config.json` file in the Claude Desktop App's installation directory (you can ask Claude where it is located on your platform!). The file will look something like this:
```json
{
  "mcpServers": {
    "unreal-handshake": {
      "command": "python",
      "args": ["<your_project_directory_path>/Plugins/GenerativeAISupport/Content/Python/mcp_server.py"],
      "env": {
        "UNREAL_HOST": "localhost",
        "UNREAL_PORT": "9877"
      }
    }
  }
}
```
Create or edit the `.cursor/mcp.json` file in your project directory. The file will look something like this:
```json
{
  "mcpServers": {
    "unreal-handshake": {
      "command": "python",
      "args": ["<your_project_directory_path>/Plugins/GenerativeAISupport/Content/Python/mcp_server.py"],
      "env": {
        "UNREAL_HOST": "localhost",
        "UNREAL_PORT": "9877"
      }
    }
  }
}
```
Install the MCP Python SDK:
```
pip install mcp[cli]
```
- Add the plugin repository as a submodule in your project's repository:
```
git submodule add https://github.com/prajwalshettydev/UnrealGenAISupport Plugins/GenerativeAISupport
```
- Regenerate project files: right-click your `.uproject` file and select Generate Visual Studio project files.
- Enable the plugin in Unreal Editor: open your project in Unreal Editor, go to Edit > Plugins, search for the plugin in the list, and enable it.
- For Unreal C++ projects, include the plugin's module in your project's `Build.cs` file:
```csharp
PrivateDependencyModuleNames.AddRange(new string[] { "GenerativeAISupport" });
```
Still in development. Coming soon, for free, on the Unreal Engine Marketplace.
You can pull the latest changes with:
```
cd Plugins/GenerativeAISupport
git pull origin main
```
Or update all submodules in the project:
```
git submodule update --recursive --remote
```
Still in development.
There is an example Unreal project that already implements the plugin. You can find it here.
Currently the plugin supports Chat and Structured Outputs from the OpenAI API, both for C++ and Blueprints. Tested models are `gpt-4o`, `gpt-4o-mini`, `gpt-4.5`, `o1-mini`, `o1`, and `o3-mini-high`.
```cpp
void SomeDebugSubsystem::CallGPT(const FString& Prompt,
    const TFunction<void(const FString&, const FString&, bool)>& Callback)
{
    FGenChatSettings ChatSettings;
    ChatSettings.Model = TEXT("gpt-4o-mini");
    ChatSettings.MaxTokens = 500;
    ChatSettings.Messages.Add(FGenChatMessage{ TEXT("system"), Prompt });

    FOnChatCompletionResponse OnComplete = FOnChatCompletionResponse::CreateLambda(
        [Callback](const FString& Response, const FString& ErrorMessage, bool bSuccess)
        {
            Callback(Response, ErrorMessage, bSuccess);
        });

    UGenOAIChat::SendChatRequest(ChatSettings, OnComplete);
}
```
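A call site for the helper above might look like this (hypothetical prompt; `CallGPT` is the function just defined):
```cpp
// Hypothetical usage of the CallGPT helper defined above.
CallGPT(TEXT("You are a quest designer. Suggest one short side quest."),
    [](const FString& Response, const FString& Error, bool bSuccess)
    {
        if (bSuccess)
        {
            UE_LOG(LogTemp, Log, TEXT("GPT response: %s"), *Response);
        }
        else
        {
            UE_LOG(LogTemp, Error, TEXT("GPT error: %s"), *Error);
        }
    });
```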
Sending a custom schema JSON directly to the function call:
```cpp
FString MySchemaJson = R"({
    "type": "object",
    "properties": {
        "count": {
            "type": "integer",
            "description": "The total number of users."
        },
        "users": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "name": { "type": "string", "description": "The user's name." },
                    "heading_to": { "type": "string", "description": "The user's destination." }
                },
                "required": ["name", "heading_to"]
            }
        }
    },
    "required": ["count", "users"]
})";

UGenAISchemaService::RequestStructuredOutput(
    TEXT("Generate a list of users and their details"),
    MySchemaJson,
    [](const FString& Response, const FString& Error, bool Success) {
        if (Success)
        {
            UE_LOG(LogTemp, Log, TEXT("Structured Output: %s"), *Response);
        }
        else
        {
            UE_LOG(LogTemp, Error, TEXT("Error: %s"), *Error);
        }
    }
);
```
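Since the response arrives as a JSON string matching the schema, it can be parsed with Unreal's Json module. A minimal sketch for the success branch above (field names follow the example schema):
```cpp
#include "Dom/JsonObject.h"
#include "Serialization/JsonReader.h"
#include "Serialization/JsonSerializer.h"

// Sketch: parse the structured JSON held in `Response`.
TSharedPtr<FJsonObject> Root;
const TSharedRef<TJsonReader<>> Reader = TJsonReaderFactory<>::Create(Response);
if (FJsonSerializer::Deserialize(Reader, Root) && Root.IsValid())
{
    const int32 Count = Root->GetIntegerField(TEXT("count"));
    UE_LOG(LogTemp, Log, TEXT("Model returned %d users"), Count);
}
```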
Sending a custom schema JSON from a file:
```cpp
#include "Misc/FileHelper.h"
#include "Misc/Paths.h"

FString SchemaFilePath = FPaths::Combine(
    FPaths::ProjectDir(),
    TEXT("Source/:ProjectName/Public/AIPrompts/SomeSchema.json")
);

FString MySchemaJson;
if (FFileHelper::LoadFileToString(MySchemaJson, *SchemaFilePath))
{
    UGenAISchemaService::RequestStructuredOutput(
        TEXT("Generate a list of users and their details"),
        MySchemaJson,
        [](const FString& Response, const FString& Error, bool Success) {
            if (Success)
            {
                UE_LOG(LogTemp, Log, TEXT("Structured Output: %s"), *Response);
            }
            else
            {
                UE_LOG(LogTemp, Error, TEXT("Error: %s"), *Error);
            }
        }
    );
}
```
Currently the plugin supports Chat and Reasoning from the DeepSeek API, both for C++ and Blueprints. Points to note:
- System messages are currently mandatory for the reasoning model; the API otherwise seems to return null.
- Also, from the documentation: "Please note that if the reasoning_content field is included in the sequence of input messages, the API will return a 400 error." Read more about it here.
[!WARNING]
While using the R1 reasoning model, make sure Unreal's HTTP timeouts are not left at the default 30 seconds, as these API calls can take longer than that to respond. Simply setting `HttpRequest->SetTimeout(<N Seconds>);` is not enough, so the following lines need to be added to your project's `DefaultEngine.ini` file:
```ini
[HTTP]
HttpConnectionTimeout=180
HttpReceiveTimeout=180
```
```cpp
FGenDSeekChatSettings ReasoningSettings;
ReasoningSettings.Model = EDeepSeekModels::Reasoner; // or EDeepSeekModels::Chat for the Chat API
ReasoningSettings.MaxTokens = 100;
ReasoningSettings.Messages.Add(FGenChatMessage{TEXT("system"), TEXT("You are a helpful assistant.")});
ReasoningSettings.Messages.Add(FGenChatMessage{TEXT("user"), TEXT("9.11 and 9.8, which is greater?")});
ReasoningSettings.bStreamResponse = false;

UGenDSeekChat::SendChatRequest(
    ReasoningSettings,
    FOnDSeekChatCompletionResponse::CreateLambda(
        [this](const FString& Response, const FString& ErrorMessage, bool bSuccess)
        {
            if (!UTHelper::IsContextStillValid(this))
            {
                return;
            }

            // Log response details regardless of success
            UE_LOG(LogTemp, Warning, TEXT("DeepSeek Reasoning Response Received - Success: %d"), bSuccess);
            UE_LOG(LogTemp, Warning, TEXT("Response: %s"), *Response);

            if (!ErrorMessage.IsEmpty())
            {
                UE_LOG(LogTemp, Error, TEXT("Error Message: %s"), *ErrorMessage);
            }
        })
);
```
Currently the plugin supports Chat from the Anthropic API, both for C++ and Blueprints. Tested models are `claude-sonnet-4-20250514`, `claude-opus-4-20250514`, `claude-3-7-sonnet-latest`, `claude-3-5-sonnet`, `claude-3-5-haiku-latest`, and `claude-3-opus-latest`.
```cpp
// ---- Claude Chat Test ----
FGenClaudeChatSettings ChatSettings;
ChatSettings.Model = EClaudeModels::Claude_3_7_Sonnet; // Use the Claude 3.7 Sonnet model
ChatSettings.MaxTokens = 4096;
ChatSettings.Temperature = 0.7f;
ChatSettings.Messages.Add(FGenChatMessage{TEXT("system"), TEXT("You are a helpful assistant.")});
ChatSettings.Messages.Add(FGenChatMessage{TEXT("user"), TEXT("What is the capital of France?")});

UGenClaudeChat::SendChatRequest(
    ChatSettings,
    FOnClaudeChatCompletionResponse::CreateLambda(
        [this](const FString& Response, const FString& ErrorMessage, bool bSuccess)
        {
            if (!UTHelper::IsContextStillValid(this))
            {
                return;
            }

            if (bSuccess)
            {
                UE_LOG(LogTemp, Warning, TEXT("Claude Chat Response: %s"), *Response);
            }
            else
            {
                UE_LOG(LogTemp, Error, TEXT("Claude Chat Error: %s"), *ErrorMessage);
            }
        })
);
```
Currently the plugin supports Chat from XAI's Grok 3 API, both for C++ and Blueprints.
```cpp
FGenXAIChatSettings ChatSettings;
ChatSettings.Model = TEXT("grok-3-latest");
ChatSettings.Messages.Add(FGenXAIMessage{
    TEXT("system"),
    TEXT("You are a helpful AI assistant for a game. Please provide concise responses.")
});
ChatSettings.Messages.Add(FGenXAIMessage{TEXT("user"), TEXT("Create a brief description for a forest level in a fantasy game")});
ChatSettings.MaxTokens = 1000;

UGenXAIChat::SendChatRequest(
    ChatSettings,
    FOnXAIChatCompletionResponse::CreateLambda(
        [this](const FString& Response, const FString& ErrorMessage, bool bSuccess)
        {
            if (!UTHelper::IsContextStillValid(this))
            {
                return;
            }

            UE_LOG(LogTemp, Warning, TEXT("XAI Chat response: %s"), *Response);

            if (!bSuccess)
            {
                UE_LOG(LogTemp, Error, TEXT("XAI Chat error: %s"), *ErrorMessage);
            }
        })
);
```
This is currently a work in progress. The plugin supports various clients like the Claude Desktop App, Cursor, etc.

Run the MCP server:
```
python <your_project_directory>/Plugins/GenerativeAISupport/Content/Python/mcp_server.py
```
Then, open a new Unreal Engine project and run the Python script below from the plugin's Python directory:
Tools -> Run Python Script -> select the `Plugins/GenerativeAISupport/Content/Python/unreal_socket_server.py` file.

That's it! You can now use the MCP features of the plugin.
- Nodes fail to connect properly with MCP
- No undo/redo support for MCP
- No streaming support for the DeepSeek reasoning model
- No complex material generation support in the create material tool
- Issues running some valid LLM-generated Python scripts
- No proper error handling in the response when an LLM compiles a blueprint
- Issues spawning certain nodes, especially getters and setters
- Doesn't open the right context window during scene and project file edits
- Doesn't dock the window properly in the editor for blueprints
- Install the `unreal` Python package and set up the IDE's Python interpreter for proper IntelliSense:
```
pip install unreal
```
More details will be added soon.
More details will be added soon.
- Env Var set logic from: OpenAI-Api-Unreal by KellanM
- MCP Server inspiration from: Blender-MCP by ahujasid
- OpenAI API Documentation
- Anthropic API Documentation
- XAI API Documentation
- Google Gemini API Documentation
- Meta AI API Documentation
- Deepseek API Documentation
- Model Context Protocol (MCP) Documentation
- TripoSR Documentation
If you find UnrealGenAISupport helpful, consider sponsoring me to keep the project going! Click the "Sponsor" button above to contribute.