AgentX
AgentX · Next-generation open-source AI agent development framework and runtime platform
Stars: 75
AgentX is a next-generation open-source AI agent development framework and runtime platform. It pairs an event-driven runtime with a simple framework and a minimal UI, and ships as a ready-to-use portal with multi-user support, session persistence, real-time streaming, and Docker deployment. Applications are built in TypeScript with an event-driven architecture, on both the server side (Node.js) and the client side (Browser/React). The project includes comprehensive documentation covering core concepts, guides, and API references, plus packages for individual pieces of functionality. The architecture is layered, with event-driven components mediating between server and client.
README:
Next-generation open-source AI agent development framework and runtime platform
Event-driven Runtime · Simple Framework · Minimal UI · Ready-to-use Portal
Quick try with Node.js 20+:

```shell
LLM_PROVIDER_KEY=sk-ant-xxxxx \
LLM_PROVIDER_URL=https://api.anthropic.com \
npx @agentxjs/portagent
```

Stable, no compilation required:

```shell
docker run -d \
  --name portagent \
  -p 5200:5200 \
  -e LLM_PROVIDER_KEY=sk-ant-xxxxx \
  -e LLM_PROVIDER_URL=https://api.anthropic.com \
  -v ./data:/home/node/.agentx \
  deepracticexs/portagent:latest
```

Open http://localhost:5200 and start chatting!
- Multi-User Support - User registration (invite code optional)
- Session Persistence - Resume conversations anytime
- Real-time Streaming - WebSocket-based communication
- Docker Ready - Production-ready with health checks
Tip: Add `-e INVITE_CODE_REQUIRED=true` to enable invite code protection.
👉 Full Portagent Documentation - Configuration, deployment, API reference
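For longer-running deployments, the `docker run` flags above can also be expressed as a Compose file. This is a sketch based only on the options shown in the quick start (the `INVITE_CODE_REQUIRED` variable is the optional one from the tip); check the Portagent documentation for the full list of supported settings.

```yaml
# Hypothetical docker-compose.yml mirroring the quick-start flags above.
services:
  portagent:
    image: deepracticexs/portagent:latest
    container_name: portagent
    ports:
      - "5200:5200"
    environment:
      LLM_PROVIDER_KEY: sk-ant-xxxxx
      LLM_PROVIDER_URL: https://api.anthropic.com
      # Optional: require an invite code for registration
      # INVITE_CODE_REQUIRED: "true"
    volumes:
      - ./data:/home/node/.agentx
    restart: unless-stopped
```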
AgentX is a TypeScript framework for building AI Agent applications with event-driven architecture.
Server-side (Node.js)

```typescript
import { createServer } from "http";
import { createAgentX, defineAgent } from "agentxjs";

// Define your Agent
const MyAgent = defineAgent({
  name: "MyAgent",
  systemPrompt: "You are a helpful assistant.",
  mcpServers: {
    // Optional: Add MCP servers for tools
    filesystem: {
      command: "npx",
      args: ["-y", "@anthropic/mcp-server-filesystem", "/tmp"],
    },
  },
});

// Create HTTP server
const server = createServer();

// Create AgentX instance
const agentx = await createAgentX({
  llm: {
    apiKey: process.env.LLM_PROVIDER_KEY,
    baseUrl: process.env.LLM_PROVIDER_URL,
  },
  agentxDir: "~/.agentx", // Auto-configures SQLite storage
  server, // Attach WebSocket to HTTP server
  defaultAgent: MyAgent, // Default agent for new conversations
});

// Start server
server.listen(5200, () => {
  console.log("✓ Server running at http://localhost:5200");
  console.log("✓ WebSocket available at ws://localhost:5200/ws");
});
```

Client-side (Browser/React)
```tsx
import { useAgentX, ResponsiveStudio } from "@agentxjs/ui";
import "@agentxjs/ui/styles.css";

function App() {
  const agentx = useAgentX("ws://localhost:5200/ws");

  if (!agentx) return <div>Connecting...</div>;

  return <ResponsiveStudio agentx={agentx} />;
}
```

👉 Full Documentation - Learning paths and more
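Outside React, the same WebSocket endpoint can be consumed directly. The payload shape below (`type`/`data`) is purely an assumption for illustration, as are the function names `parseAgentEvent` and `makeDispatcher`; the real wire format is defined by the AgentX API.

```typescript
// Sketch of a framework-free client for the ws://localhost:5200/ws endpoint.
// NOTE: the event shape here is hypothetical, not the documented AgentX wire format.

interface AgentEvent {
  type: string; // e.g. "stream" | "state" | "message" | "turn" (assumed)
  data: unknown;
}

// Parse one incoming WebSocket frame, returning null for malformed payloads.
function parseAgentEvent(raw: string): AgentEvent | null {
  try {
    const obj = JSON.parse(raw);
    if (typeof obj === "object" && obj !== null && typeof obj.type === "string") {
      return obj as AgentEvent;
    }
    return null;
  } catch {
    return null;
  }
}

// Dispatch parsed events to per-type handlers, ignoring unknown types.
function makeDispatcher(handlers: Record<string, (data: unknown) => void>) {
  return (raw: string): void => {
    const event = parseAgentEvent(raw);
    if (event && handlers[event.type]) handlers[event.type](event.data);
  };
}
```

A browser client would wire this up with `socket.onmessage = (e) => dispatch(e.data)`.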
Event-driven architecture with layered design:
```
        SERVER SIDE                 SYSTEMBUS              CLIENT SIDE
═══════════════════════════════════════════════════════════════════════════
                                        ║
┌─────────────────┐                     ║
│ Environment     │                     ║
│ • LLMProvider   │       emit          ║
│ • Sandbox       │─────────────────>   ║
└─────────────────┘                     ║
                                        ║
┌─────────────────┐     subscribe       ║
│ Agent Layer     │<─────────────────   ║
│ • AgentEngine   │                     ║
│ • Agent         │       emit          ║
│                 │─────────────────>   ║          ┌─────────────────┐
│ 4-Layer Events  │                     ║          │                 │
│ • Stream        │                     ║ broadcast│   WebSocket     │
│ • State         │                     ║════════> │ (Event Stream)  │
│ • Message       │                     ║<════════ │                 │
│ • Turn          │                     ║   input  │   AgentX API    │
└─────────────────┘                     ║          └─────────────────┘
                                        ║
┌─────────────────┐                     ║
│ Runtime Layer   │                     ║
│                 │       emit          ║
│ • Persistence   │─────────────────>   ║
│ • Container     │                     ║
│ • WebSocket     │<─────────────────   ║
│                 │─────────────────>   ║
└─────────────────┘                     ║
                                        ║
                                  [ Event Bus ]
                                [ RxJS Pub/Sub ]
```
Event Flow:
→ Input: Client → WebSocket → BUS → Claude SDK
← Output: SDK → BUS → AgentEngine → BUS → Client
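AgentX builds this pub/sub backbone on RxJS. As a rough illustration of the pattern only (all type and function names below are invented, not AgentX APIs), a minimal typed event bus for the four event layers in the diagram might look like:

```typescript
// Minimal pub/sub bus illustrating the pattern; AgentX itself uses RxJS.
// All names here are illustrative, not AgentX APIs.

// The four event layers from the diagram, as a discriminated union.
type BusEvent =
  | { layer: "stream"; chunk: string }
  | { layer: "state"; status: string }
  | { layer: "message"; content: string }
  | { layer: "turn"; turnId: number };

type Handler = (event: BusEvent) => void;

class EventBus {
  private handlers = new Map<BusEvent["layer"], Handler[]>();

  // Register a handler for one event layer.
  subscribe(layer: BusEvent["layer"], handler: Handler): void {
    const list = this.handlers.get(layer) ?? [];
    list.push(handler);
    this.handlers.set(layer, list);
  }

  // Deliver an event to every subscriber of its layer.
  emit(event: BusEvent): void {
    for (const handler of this.handlers.get(event.layer) ?? []) {
      handler(event);
    }
  }
}
```

In this model, the Agent layer would `emit` stream/state/message/turn events while the WebSocket layer `subscribe`s and broadcasts them to connected clients.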
AgentX is in early development. We welcome your ideas, feedback, and feature requests!
Part of the Deepractice AI Agent infrastructure:
- PromptX - AI Agent context platform
- ResourceX - Unified resource manager (ARP)
- ToolX - Tool integration
- UIX - AI-to-UI protocol layer
- SandboX - Agent sandbox
Built with ❤️ by Deepractice
Similar Open Source Tools
For similar tasks
Flowise
Flowise is a tool that allows users to build customized LLM flows with a drag-and-drop UI. It is open-source and self-hostable, and it supports various deployments, including AWS, Azure, Digital Ocean, GCP, Railway, Render, HuggingFace Spaces, Elestio, Sealos, and RepoCloud. Flowise has three different modules in a single mono repository: server, ui, and components. The server module is a Node backend that serves API logics, the ui module is a React frontend, and the components module contains third-party node integrations. Flowise supports different environment variables to configure your instance, and you can specify these variables in the .env file inside the packages/server folder.
nlux
nlux is an open-source Javascript and React JS library that makes it super simple to integrate powerful large language models (LLMs) like ChatGPT into your web app or website. With just a few lines of code, you can add conversational AI capabilities and interact with your favourite LLM.
generative-ai-go
The Google AI Go SDK enables developers to use Google's state-of-the-art generative AI models (like Gemini) to build AI-powered features and applications. It supports use cases like generating text from text-only input, generating text from text-and-images input (multimodal), building multi-turn conversations (chat), and embedding.
awesome-langchain-zh
The awesome-langchain-zh repository is a collection of resources related to LangChain, a framework for building AI applications using large language models (LLMs). The repository includes sections on the LangChain framework itself, other language ports of LangChain, tools for low-code development, services, agents, templates, platforms, open-source projects related to knowledge management and chatbots, as well as learning resources such as notebooks, videos, and articles. It also covers other LLM frameworks and provides additional resources for exploring and working with LLMs. The repository serves as a comprehensive guide for developers and AI enthusiasts interested in leveraging LangChain and LLMs for various applications.
Large-Language-Model-Notebooks-Course
This practical free hands-on course focuses on Large Language models and their applications, providing a hands-on experience using models from OpenAI and the Hugging Face library. The course is divided into three major sections: Techniques and Libraries, Projects, and Enterprise Solutions. It covers topics such as Chatbots, Code Generation, Vector databases, LangChain, Fine Tuning, PEFT Fine Tuning, Soft Prompt tuning, LoRA, QLoRA, Evaluate Models, Knowledge Distillation, and more. Each section contains chapters with lessons supported by notebooks and articles. The course aims to help users build projects and explore enterprise solutions using Large Language Models.
ai-chatbot
Next.js AI Chatbot is an open-source app template for building AI chatbots using Next.js, Vercel AI SDK, OpenAI, and Vercel KV. It includes features like Next.js App Router, React Server Components, Vercel AI SDK for streaming chat UI, support for various AI models, Tailwind CSS styling, Radix UI for headless components, chat history management, rate limiting, session storage with Vercel KV, and authentication with NextAuth.js. The template allows easy deployment to Vercel and customization of AI model providers.
awesome-local-llms
The 'awesome-local-llms' repository is a curated list of open-source tools for local Large Language Model (LLM) inference, covering both proprietary and open weights LLMs. The repository categorizes these tools into LLM inference backend engines, LLM front end UIs, and all-in-one desktop applications. It collects GitHub repository metrics as proxies for popularity and active maintenance. Contributions are encouraged, and users can suggest additional open-source repositories through the Issues section or by running a provided script to update the README and make a pull request. The repository aims to provide a comprehensive resource for exploring and utilizing local LLM tools.
Awesome-AI-Data-Guided-Projects
A curated list of data science & AI guided projects to start building your portfolio. The repository contains guided projects covering various topics such as large language models, time series analysis, computer vision, natural language processing (NLP), and data science. Each project provides detailed instructions on how to implement specific tasks using different tools and technologies.
For similar jobs
promptflow
**Prompt flow** is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to build LLM apps with production quality.
deepeval
DeepEval is a simple-to-use, open-source LLM evaluation framework specialized for unit testing LLM outputs. It incorporates various metrics such as G-Eval, hallucination, answer relevancy, RAGAS, etc., and runs locally on your machine for evaluation. It provides a wide range of ready-to-use evaluation metrics, allows for creating custom metrics, integrates with any CI/CD environment, and enables benchmarking LLMs on popular benchmarks. DeepEval is designed for evaluating RAG and fine-tuning applications, helping users optimize hyperparameters, prevent prompt drifting, and transition from OpenAI to hosting their own Llama2 with confidence.
MegaDetector
MegaDetector is an AI model that identifies animals, people, and vehicles in camera trap images (which also makes it useful for eliminating blank images). This model is trained on several million images from a variety of ecosystems. MegaDetector is just one of many tools that aim to make conservation biologists more efficient with AI. If you want to learn about other ways to use AI to accelerate camera trap workflows, check out the authors' overview of the field, affectionately titled "Everything I know about machine learning and camera traps".
leapfrogai
LeapfrogAI is a self-hosted AI platform designed to be deployed in air-gapped resource-constrained environments. It brings sophisticated AI solutions to these environments by hosting all the necessary components of an AI stack, including vector databases, model backends, API, and UI. LeapfrogAI's API closely matches that of OpenAI, allowing tools built for OpenAI/ChatGPT to function seamlessly with a LeapfrogAI backend. It provides several backends for various use cases, including llama-cpp-python, whisper, text-embeddings, and vllm. LeapfrogAI leverages Chainguard's apko to harden base python images, ensuring the latest supported Python versions are used by the other components of the stack. The LeapfrogAI SDK provides a standard set of protobuffs and python utilities for implementing backends and gRPC. LeapfrogAI offers UI options for common use-cases like chat, summarization, and transcription. It can be deployed and run locally via UDS and Kubernetes, built out using Zarf packages. LeapfrogAI is supported by a community of users and contributors, including Defense Unicorns, Beast Code, Chainguard, Exovera, Hypergiant, Pulze, SOSi, United States Navy, United States Air Force, and United States Space Force.
llava-docker
This Docker image for LLaVA (Large Language and Vision Assistant) provides a convenient way to run LLaVA locally or on RunPod. LLaVA is a powerful AI tool that combines natural language processing and computer vision capabilities. With this Docker image, you can easily access LLaVA's functionalities for various tasks, including image captioning, visual question answering, text summarization, and more. The image comes pre-installed with LLaVA v1.2.0, Torch 2.1.2, xformers 0.0.23.post1, and other necessary dependencies. You can customize the model used by setting the MODEL environment variable. The image also includes a Jupyter Lab environment for interactive development and exploration. Overall, this Docker image offers a comprehensive and user-friendly platform for leveraging LLaVA's capabilities.
carrot
The 'carrot' repository on GitHub provides a list of free and user-friendly ChatGPT mirror sites for easy access. The repository includes sponsored sites offering various GPT models and services. Users can find and share sites, report errors, and access stable and recommended sites for ChatGPT usage. The repository also includes a detailed list of ChatGPT sites, their features, and accessibility options, making it a valuable resource for ChatGPT users seeking free and unlimited GPT services.
TrustLLM
TrustLLM is a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, established benchmark, evaluation, and analysis of trustworthiness for mainstream LLMs, and discussion of open challenges and future directions. Specifically, we first propose a set of principles for trustworthy LLMs that span eight different dimensions. Based on these principles, we further establish a benchmark across six dimensions including truthfulness, safety, fairness, robustness, privacy, and machine ethics. We then present a study evaluating 16 mainstream LLMs in TrustLLM, consisting of over 30 datasets. The document explains how to use the trustllm python package to help you assess the performance of your LLM in trustworthiness more quickly. For more details about TrustLLM, please refer to project website.
AI-YinMei
AI-YinMei is an AI virtual anchor (VTuber) development tool for NVIDIA GPUs. It supports knowledge-base chat through a complete LLM stack ([fastgpt] + [one-api] + [Xinference]), Bilibili live-stream integration (replying to barrage comments and welcoming viewers entering the room), and speech synthesis via Microsoft edge-tts, Bert-VITS2, or GPT-SoVITS. It can drive expressions in VTube Studio, stream stable-diffusion-webui paintings into an OBS live room, filter NSFW images (public-NSFW-y-distinguish), and run search and image search through DuckDuckGo (proxy required) or Baidu image search (no proxy needed). Additional features include an AI reply chat box (HTML plug-in), AI singing via Auto-Convert-Music, playlists (HTML plug-in), dancing, expression video playback, head-patting and gift-reaction actions, automatic dancing when singing starts, idle swaying during chat and song loops, multi-scene switching with background-music changes and automatic day/night scene transitions, and an open mode in which the AI decides for itself when to sing or paint.

