MakerAi
The AI Operating System for Delphi. 100% native framework with RAG 2.0 for knowledge retrieval, autonomous agents with semantic memory, visual workflow orchestration, and universal LLM connector. Supports OpenAI, Claude, Gemini, Ollama, and more. Enterprise-grade AI for Delphi 10.4+
Stars: 125
MakerAI Suite v3.3 is an AI ecosystem for Delphi that goes beyond just wrapping REST calls. It provides native, provider-specific components for direct access to each provider's API, along with a complete AI application ecosystem for building production-grade intelligent systems entirely in Delphi. The suite includes RAG pipelines with SQL-like query languages, autonomous agents with graph orchestration, MCP servers and clients, native ChatTools, FMX visual components, and a universal connector for switching providers at runtime without changing application code. MakerAI covers the full stack natively in Delphi, catering to both simple one-provider integrations and complex multi-agent, multi-provider systems.
README:
Official Website: https://makerai.cimamaker.com
Most AI libraries for Delphi stop at wrapping REST calls. MakerAI is different.
Yes, MakerAI includes native, provider-specific components that give you direct, full-fidelity access to each provider's API – every model parameter, every response field, every streaming event, exactly as the provider defines it.
But on top of that, MakerAI is a complete AI application ecosystem that lets you build production-grade intelligent systems entirely in Delphi:
- RAG pipelines (vector and graph-based) with SQL-like query languages (VQL / GQL)
- Autonomous Agents with graph orchestration, checkpoints, and human-in-the-loop approval
- MCP Servers and Clients – expose or consume tools using the Model Context Protocol
- Native ChatTools – bridge AI reasoning with deterministic real-world capabilities (PDF, Vision, Speech, Web Search, Shell, Computer Use)
- FMX Visual Components – drop-in UI for multimodal chat interfaces
- Universal Connector – switch providers at runtime without changing your application code
Whether you need a simple one-provider integration or a multi-agent, multi-provider, retrieval-augmented production system, MakerAI covers the full stack, natively in Delphi.
The biggest architectural change in v3.3 is the TAiCapabilities system, which replaces scattered per-provider flags with a unified, declarative model of what each model can do and what a session needs:
- `ModelCaps` – what the model natively supports (e.g. `[cap_Image, cap_Reasoning]`)
- `SessionCaps` – what capabilities the current session requires
- Gap analysis – when `SessionCaps` exceeds `ModelCaps`, MakerAI automatically activates bridges (tool-assisted OCR, vision bridges, etc.) without changing your code
- `ThinkingLevel` – unified reasoning depth control (`tlLow`, `tlMedium`, `tlHigh`) across all providers that support extended thinking
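A session-level capability declaration might look like the sketch below. `SessionCaps`, `ThinkingLevel`, and the `cap_*`/`tl*` identifiers come from the list above; placing them as properties of `TAiChatConnection` and the unit name in the `uses` clause are assumptions, not verified API.

```delphi
uses
  uMakerAi.Chat.AiConnection; // unit name is an assumption

var
  AiConn: TAiChatConnection;
begin
  AiConn := TAiChatConnection.Create(nil);
  try
    AiConn.DriverName := 'Claude';
    AiConn.Model := 'claude-opus-4-6';
    // Declare what this session needs; if the model's ModelCaps lack
    // cap_Image, gap analysis activates a vision bridge automatically.
    AiConn.SessionCaps := [cap_Image, cap_Reasoning]; // assumed property placement
    AiConn.ThinkingLevel := tlHigh;                   // assumed property placement
  finally
    AiConn.Free;
  end;
end;
```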
| Provider | New / Updated Models |
|---|---|
| OpenAI | gpt-5.2, gpt-image-1, o3, o3-mini |
| Claude | claude-opus-4-6, claude-sonnet-4-6, claude-3-7-sonnet |
| Gemini | gemini-3.0, gemini-2.5-flash, gemini-2.5-flash-image |
| Grok | grok-4, grok-3, grok-imagine-image |
| Mistral | Magistral (reasoning), mistral-ocr-latest |
| DeepSeek | deepseek-reasoner (extended thinking) |
| Kimi | kimi-k2.5 (extended thinking) |
- `TAiFileCheckpointer` – persists agent graph state to disk; resume workflows after crashes or restarts
- `TAiWaitApprovalTool` – suspends a node and waits for human approval before continuing
- `TAIAgentManager.OnSuspend` event for building approval UIs
- `ResumeThread(ThreadID, NodeName, Input)` to continue suspended workflows
- New `uMakerAi.RAG.Graph.Documents.pas` – full document lifecycle management (ingest, chunk, embed, link) directly into the knowledge graph
- `reasoning_content` is now correctly preserved and re-sent in multi-turn tool-call conversations for all providers that require it (DeepSeek-reasoner, Kimi k2.5, Groq reasoning models)
- `TAiEmbeddingsConnection` – abstract connector for swappable embedding providers
- `TAiAudioPushStream` – push-based audio streaming utility
- Demo 027 – Document Manager
- Demo 012 – ChatWebList (chat with web-based content)
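The checkpoint-and-approval flow above can be sketched as follows. `TAiFileCheckpointer`, `TAiWaitApprovalTool`, `OnSuspend`, and `ResumeThread(ThreadID, NodeName, Input)` are named in the notes; the `Checkpointer` property name, the event signature, and the form method are illustrative assumptions.

```delphi
// Wire durable execution and human-in-the-loop approval (sketch).
procedure TMainForm.ConfigureAgent;
begin
  // Persist graph state so a crashed workflow can be resumed (assumed property name).
  AgentManager.Checkpointer := TAiFileCheckpointer.Create(Self);
  AgentManager.OnSuspend := AgentSuspended;
end;

// Assumed event signature: fired when a TAiWaitApprovalTool suspends a node.
procedure TMainForm.AgentSuspended(Sender: TObject; const ThreadID, NodeName: string);
begin
  if MessageDlg('Approve step "' + NodeName + '"?',
       mtConfirmation, [mbYes, mbNo], 0) = mrYes then
    // ResumeThread(ThreadID, NodeName, Input) continues the suspended workflow.
    AgentManager.ResumeThread(ThreadID, NodeName, 'approved');
end;
```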
```
┌───────────────────────────────────────────────────────────────────┐
│                      Your Delphi Application                      │
└─────┬──────────────────┬─────────────────┬────────────────────────┘
      │                  │                 │
┌─────▼────┐  ┌──────────▼──────────┐  ┌───▼───────────────────────┐
│ ChatUI   │  │ Agents              │  │ Design-Time               │
│ FMX      │  │ TAIAgentManager     │  │ Property Editors          │
│ Visual   │  │ TAIBlackboard       │  │ Object Inspector support  │
│ Comps    │  │ Checkpoint/Approve  │  └───────────────────────────┘
└─────┬────┘  └──────────┬──────────┘
      │                  │
┌─────▼──────────────────▼─────────────────────────────────────────┐
│           TAiChatConnection – Universal Connector                │
│        Switch provider at runtime via DriverName property        │
└──────────────────────────────┬───────────────────────────────────┘
                               │
┌──────────────────────────────▼───────────────────────────────────┐
│   Native Provider Drivers (direct API access, full fidelity)     │
│   OpenAI · Claude · Gemini · Grok · Mistral · DeepSeek · Kimi    │
│   Groq · Cohere · Ollama · LM Studio · GenericLLM                │
└──────────────────────────────┬───────────────────────────────────┘
                               │
         ┌─────────────────────┼────────────────────────┐
         │                     │                        │
┌────────▼─────┐  ┌────────────▼────────┐  ┌────────────▼─────────┐
│ ChatTools    │  │ RAG                 │  │ MCP                  │
│ PDF/Vision   │  │ Vector (VQL)        │  │ Server (HTTP/SSE     │
│ Speech/STT   │  │ Graph (GQL)         │  │   StdIO/Direct)      │
│ Web Search   │  │ PostgreSQL/SQLite   │  │ Client               │
│ Shell        │  │ HNSW · BM25 · RRF   │  │ TAiFunctions bridge  │
│ ComputerUse  │  │ Rerank · Documents  │  └──────────────────────┘
└──────────────┘  └─────────────────────┘
```
MakerAI gives you two ways to work with each provider, which you can mix freely:
Full, provider-specific access to every API feature. Use when you need complete control:
| Component | Provider | Latest Models |
|---|---|---|
| `TAiOpenChat` | OpenAI | gpt-5.2, o3, o3-mini |
| `TAiClaudeChat` | Anthropic | claude-opus-4-6, claude-sonnet-4-6 |
| `TAiGeminiChat` | Google | gemini-3.0, gemini-2.5-flash |
| `TAiGrokChat` | xAI | grok-4, grok-3 |
| `TAiMistralChat` | Mistral AI | Magistral, mistral-large |
| `TAiDeepSeekChat` | DeepSeek | deepseek-reasoner, deepseek-chat |
| `TAiKimiChat` | Moonshot | kimi-k2.5 |
| `TAiGroqChat` | Groq | llama-3.3, deepseek-r1 |
| `TCohereChat` | Cohere | command-r-plus |
| `TAiOllamaChat` | Ollama | Any local model |
| `TAiLMStudioChat` | LM Studio | Any local model |
| `TAiGenericChat` | OpenAI-compatible | Any OpenAI-API endpoint |
Provider-agnostic code. Switch models or providers by changing one property:
```delphi
AiConn.DriverName := 'OpenAI';
AiConn.Model := 'gpt-5.2';
AiConn.ApiKey := '@OPENAI_API_KEY'; // resolved from environment variable

// Switch to Gemini without changing anything else
AiConn.DriverName := 'Gemini';
AiConn.Model := 'gemini-3.0-flash';
AiConn.ApiKey := '@GEMINI_API_KEY';
```

| Feature | OpenAI (gpt-5.2) | Claude (4.6) | Gemini (3.0) | Grok (4) | Mistral | DeepSeek | Ollama |
|---|---|---|---|---|---|---|---|
| Text Generation | β | β | β | β | β | β | β |
| Streaming (SSE) | β | β | β | β | β | β | β |
| Function Calling | β | β | β | β | β | β | β |
| JSON Mode / Schema | β | β | β | β | β | β | β |
| Image Input | β | β | β | β | β | β | β |
| PDF / Files | β | β | β | β | β | ||
| Image Generation | β | β | β | β | β | β | β |
| Video Generation | β | β | β | β | β | β | β |
| Extended Thinking | β | β | β | β | β | β | |
| Speech (TTS/STT) | β | β | β | β | β | β | |
| Web Search | β | β | β | β | β | β | β |
| Computer Use | β | β | β | β | β | β | β |
| RAG (all modes) | β | β | β | β | β | β | β |
| MCP Client/Server | β | β | β | β | β | β | β |
| Agents | β | β | β | β | β | β | β |
Legend: ✅ Native | ⚠️ Tool-Assisted bridge | ❌ Not Supported
Two complementary retrieval engines with their own query languages:
Vector RAG β semantic and hybrid search over document embeddings:
- HNSW index for approximate nearest-neighbor search
- BM25 lexical index for keyword matching
- Hybrid search with RRF (Reciprocal Rank Fusion) or weighted fusion
- Reranking and Lost-in-the-Middle reordering for LLM context
- VQL (Vector Query Language) – SQL-like DSL for complex retrieval queries:

```
MATCH documents SEARCH 'machine learning' USING HYBRID WEIGHTS(semantic: 0.7, lexical: 0.3) FUSION RRF WHERE category = 'tech' AND date > '2025-01-01' RERANK 'neural networks' WITH REGENERATE LIMIT 10
```
- Drivers: PostgreSQL/pgvector, SQLite, in-memory
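As an illustration only, issuing a VQL query from Delphi might look like this. The query text follows the grammar shown above, but `RagVector` and its `Query` method are hypothetical names, not the documented API.

```delphi
var
  Results: TStringList;
begin
  // Hybrid search: 70% semantic, 30% lexical, fused with RRF (hypothetical call).
  Results := RagVector.Query(
    'MATCH documents ' +
    'SEARCH ''vector databases'' ' +
    'USING HYBRID WEIGHTS(semantic: 0.7, lexical: 0.3) FUSION RRF ' +
    'LIMIT 5');
  try
    Memo1.Lines.AddStrings(Results);
  finally
    Results.Free;
  end;
end;
```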
Graph RAG β knowledge graph with semantic search over entities and relationships:
- Nodes and edges with embeddings and metadata
- GQL (Graph Query Language) – Cypher-like DSL:

```
MATCH (p:Person)-[r:WORKS_AT]->(c:Company) WHERE c.city = 'Madrid' DEPTH 2 RETURN p, r, c
```
- Dijkstra shortest path, centrality analysis, hub detection
- Export to GraphViz DOT, GraphML (Gephi), native JSON format
- Document lifecycle management (ingest → chunk → embed → link)
Graph-based multi-agent workflows with full thread safety:
- `TAIAgentManager` – executes directed graphs of AI nodes via thread pool
- `TAIAgentsNode` – single execution unit; runs an LLM call, a tool, or custom logic
- `TAIBlackboard` – thread-safe shared state dictionary between all nodes
- Link modes: `lmFanout` (parallel broadcast), `lmConditional` (routing), `lmExpression` (binding), `lmManual`
- Join modes: `jmAny` (first arrival wins), `jmAll` (wait for all inputs)
- Durable execution: `TAiFileCheckpointer` persists state; resume after crashes
- Human-in-the-loop: `TAiWaitApprovalTool` suspends execution for human approval
- Supports any LLM provider via `TAiChatConnection`
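A minimal graph wired from the pieces above might look like the sketch below. `TAIAgentManager`, `TAIAgentsNode`, `TAIBlackboard`, and the `lmFanout` link mode come from the list; `AddNode`, `LinkNodes`, `SetValue`, and `Run` are assumed method names for illustration.

```delphi
var
  Research, Summarize: TAIAgentsNode;
begin
  // Two nodes executed by the manager's thread pool (method names assumed).
  Research  := AgentManager.AddNode('Research');
  Summarize := AgentManager.AddNode('Summarize');
  AgentManager.LinkNodes(Research, Summarize, lmFanout); // parallel broadcast link
  // Shared, thread-safe state visible to every node via the blackboard.
  AgentManager.Blackboard.SetValue('topic', 'Delphi RAG pipelines');
  AgentManager.Run; // executes the directed graph
end;
```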
Full implementation of the MCP standard for both consuming and exposing tools:
MCP Server – expose Delphi functions as MCP tools, callable by any MCP client (Claude Desktop, AI agents, etc.):
- Transports: HTTP, SSE (Server-Sent Events), StdIO, Direct (in-process)
- Bridge `TAiFunctions → IAiMCPTool` – any existing `TAiFunctions` component becomes an MCP server instantly
- API Key authentication, CORS configuration
- `TAiMCPResponseBuilder` for structured responses (text + files + media)
- RTTI-based automatic JSON Schema generation from parameter classes
MCP Client – consume any external MCP server from your Delphi app:
- Connect to Claude Desktop tools, filesystem servers, database tools, etc.
- Integrated into the `TAiFunctions` component alongside native function definitions
ChatTools bridge the gap between AI reasoning and real-world operations. They activate automatically based on gap analysis between SessionCaps and ModelCaps:
| Tool Interface | What it does | Implementations |
|---|---|---|
| `IAiPdfTool` | Extract text from PDFs | Mistral OCR, Ollama OCR |
| `IAiVisionTool` | Describe / analyze images | Any vision model |
| `IAiSpeechTool` | Text-to-speech / speech-to-text | Whisper, Gemini Speech, OpenAI TTS |
| `IAiWebSearchTool` | Live web search | Gemini Web Search |
| `IAiImageTool` | Generate images | DALL-E 3, gpt-image-1, Gemini, Grok |
| `IAiVideoTool` | Generate video | Sora, Gemini Veo |
| `TAiShell` | Execute shell commands | Windows/Linux |
| `TAiTextEditorTool` | Read/write/patch files | Diff-based editing |
| `TAiComputerUseTool` | Control mouse and keyboard | Claude Computer Use, OpenAI |
Tools follow a common pattern: `SetContext(AiChat)` + `Execute*()`. They can run standalone, as function-call bridges, or as automatic capability bridges.
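The `SetContext` + `Execute*` pattern might be used like this. `IAiPdfTool` and `SetContext(AiChat)` appear above, but the concrete class `TAiMistralPdfTool` and the `ExecuteExtractText` method are assumptions for illustration.

```delphi
var
  PdfTool: IAiPdfTool;
  PlainText: string;
begin
  // A Mistral OCR implementation is listed above; this class name is assumed.
  PdfTool := TAiMistralPdfTool.Create;
  PdfTool.SetContext(AiChat);  // bind the tool to the active chat session
  // Hypothetical Execute* method: extract text for RAG ingestion or prompts.
  PlainText := PdfTool.ExecuteExtractText('invoice.pdf');
end;
```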
Drop-in FireMonkey components for building multimodal chat UIs:
- `TChatList` – scrollable message container with Markdown rendering, code blocks, copy buttons
- `TChatBubble` – individual message bubble (user / assistant / tool)
- `TChatInput` – text input bar with voice recording, file attachment, and send button
Compatible with all providers. Works with streaming responses.
Full Delphi IDE support via the MakerAiDsg.dpk design-time package:
- `DriverName` property shows a dropdown of all registered providers in the Object Inspector
- `Model` property lists all models for the selected provider
- MCP Client configuration editor with transport type selection
- Embedding connection editor
- Version/About dialog
```shell
git clone https://github.com/gustavoeenriquez/MakerAi.git
```

Compile and install in this exact order:
1. `Source/Packages/MakerAI.dpk` – runtime core (~100 units)
2. `Source/Packages/MakerAi.RAG.Drivers.dpk` – PostgreSQL/pgvector connector
3. `Source/Packages/MakerAi.UI.dpk` – FMX visual components
4. `Source/Packages/MakerAiDsg.dpk` – design-time editors (requires VCL + DesignIDE)

Open `Source/Packages/MakerAiGrp.groupproj` to compile all packages at once.
Add all of these to Tools > Options > Language > Delphi > Library:
Source/Agents
Source/Chat
Source/ChatUI
Source/Core
Source/Design
Source/MCPClient
Source/MCPServer
Source/Packages
Source/RAG
Source/Resources
Source/Tools
Source/Utils
API keys are resolved from environment variables using the `@VAR_NAME` convention:

```delphi
AiConn.ApiKey := '@OPENAI_API_KEY'; // reads OPENAI_API_KEY from environment
AiConn.ApiKey := '@CLAUDE_API_KEY'; // reads CLAUDE_API_KEY
AiConn.ApiKey := '@GEMINI_API_KEY'; // reads GEMINI_API_KEY
AiConn.ApiKey := 'sk-...';          // or set a literal key directly
```

| Delphi Version | Support |
|---|---|
| 10.4 Sydney | Minimum (core framework) |
| 11 Alexandria | Full support |
| 12 Athens | Full support |
| 13 Florence | Full support (latest tested) |
Open `Demos/DemosVersion31.groupproj` to access all demos.
| Demo | Description |
|---|---|
| 010-Minimalchat | Minimal chat with Ollama and `TAiChatConnection` |
| 012-ChatAllFunctions | Full-featured multimodal chat (images, audio, streaming, tools) |
| 012-ChatWebList | Chat with web-based content list |
| 021-RAG+Postgres-UpdateDB | Build a vector RAG database with PostgreSQL/pgvector |
| 022-1-RAG_SQLite | Lightweight vector RAG with SQLite |
| 023-RAGVQL | VQL query language for semantic search |
| 025-RAGGraph | Knowledge graph RAG with GQL queries |
| 026-RAGGraph-Basic | Simplified graph RAG patterns |
| 027-DocumentManager | Document ingestion and management |
| 031-MCPServer | Multi-protocol MCP server (HTTP, SSE, StdIO) |
| 032-MCP_StdIO_FileManager | File manager exposed via MCP StdIO |
| 032-MCPServerDataSnap | MCP server using DataSnap transport |
| 034-MCPServer_Http_FileManager | File manager via MCP HTTP |
| 035-MCPServerWithTAiFunctions | TAiFunctions bridge to MCP |
| 036-MCPServerStdIO_AiFunction | StdIO MCP server with AI functions |
| 041-GeminiVeo | Video generation with Google Veo |
| 051-AgentDemo | Visual agent graph builder and runner |
| 052-AgentConsole | Console-based agent execution |
| 053-DemoAgentesTools | Agents with integrated tool use |
- New `TAiCapabilities` system (ModelCaps/SessionCaps/ThinkingLevel)
- Models updated: OpenAI gpt-5.2, Claude 4.6, Gemini 3.0, Grok 4, Mistral Magistral, DeepSeek-reasoner, Kimi k2.5
- Agents: durable execution (checkpoints), human-in-the-loop approval tool
- RAG: Graph Document management (`uMakerAi.RAG.Graph.Documents`)
- Fix: `reasoning_content` preserved in multi-turn tool calls (DeepSeek, Kimi, Groq)
- New: `TAiEmbeddingsConnection`, `TAiAudioPushStream`
- New demos: DocumentManager, ChatWebList
- Native ChatTools framework (`IAiPdfTool`, `IAiVisionTool`, `IAiSpeechTool`, etc.)
- Unified deterministic tool orchestration and capability bridges
- GPT-5.1, Gemini 3.0, Claude 4.5 initial support
- FMX multimodal UI components
- RAG Rerank + Graph RAG engine
- MCP Server framework (SSE, StdIO, HTTP)
- Major architecture redesign
- Visual FMX chat components
- Graph-based vector database
- Full Delphi 10.4–13 compatibility
- MCP Client/Server (Model Context Protocol)
- Agent graph orchestration
- Linux/POSIX full support
- Website: https://makerai.cimamaker.com
- Telegram (Spanish): https://t.me/MakerAi_Suite_Delphi
- Telegram (English): https://t.me/MakerAi_Delphi_Suite_English
- Email: [email protected]
- GitHub Issues: https://github.com/gustavoeenriquez/MakerAi/issues
MIT License – see LICENSE.txt for details.
Copyright © 2024–2026 Gustavo Enríquez – CimaMaker