PaiAgent
🔥 A lightweight AI workflow orchestration system in the spirit of Dify and n8n, built entirely with vibe coding (AI tooling: Qoder + CLI). The tech stack covers Spring AI, LangGraph4j, SSE, MinIO, a custom DAG engine, and more.
Stars: 78
PaiAgent is an enterprise-grade visual AI workflow orchestration platform that simplifies how AI capabilities are combined and scheduled. Through an intuitive drag-and-drop interface, developers and business users can quickly build complex AI processing flows without writing code and have multiple large language models work together.
README:
PaiAgent is an enterprise-grade visual AI workflow orchestration platform that makes composing and scheduling AI capabilities simple and efficient. Through an intuitive drag-and-drop interface, both developers and business users can quickly build complex AI processing flows and coordinate multiple large language models without writing any code.
- 🎯 Zero-code orchestration: build complex AI workflows through a visual drag-and-drop interface, no programming required
- 🚀 High-performance engine: a custom lightweight DAG engine with topological sorting and smart cycle detection
- 🔌 Unified multi-model access: OpenAI, DeepSeek, Tongyi Qianwen (Qwen), and other mainstream LLMs integrated through the Spring AI framework
- 🛠️ Flexible extension: plugin-style design makes it easy to develop custom nodes for specialized needs
- 🐛 Real-time debugging: built-in debug panel with SSE streaming output and a visualized execution trace
- 📦 Ready out of the box: a complete front-end and back-end solution that deploys quickly to production
The project is published on Paicoding (技术派); click here for the detailed tutorial (free for a limited time).
Five tutorial articles are out in the first batch, with five more planned during the Spring Festival break. If you want to study over the holiday, follow along with this project.
A professional flow-chart editor built on ReactFlow, with full support for node dragging, edge configuration, and parameter editing.
Unified model access through Spring AI + Spring AI Alibaba:
- OpenAI node: GPT-5 and other models (Spring AI OpenAI interface)
- DeepSeek node: domestic LLM (OpenAI-compatible interface)
- Tongyi Qianwen node: Alibaba Cloud Qwen series (native Spring AI Alibaba DashScope support)
- Zhipu AI node: GLM series models (OpenAI-compatible interface)
- AIPing node: third-party model proxy (OpenAI-compatible interface)
- TTS node: highly natural speech synthesis
- Input/Output nodes: flexible data input and output
- Custom extensions: build your own nodes on top of the unified interface
- Topological sort: node scheduling based on Kahn's algorithm (see the sketch after this list)
- Cycle detection: depth-first search (DFS) to prevent infinite loops
- Data flow: smart data passing between nodes
- Execution monitoring: complete execution logs and result records
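To make the scheduling idea concrete, here is a minimal, self-contained sketch of Kahn-style topological sorting with a leftover-node cycle check. It is illustrative only: the class and method names are hypothetical, and the project's actual DAGParser additionally runs a separate DFS pass for cycle detection.

```java
import java.util.*;

/** Minimal sketch of Kahn-style topological sorting with cycle detection. */
public class DagSketch {

    /** nodes: node ids; edges: source id -> list of target ids. */
    public static List<String> topologicalSort(Set<String> nodes, Map<String, List<String>> edges) {
        Map<String, Integer> inDegree = new HashMap<>();
        nodes.forEach(n -> inDegree.put(n, 0));
        edges.forEach((src, targets) -> targets.forEach(t -> inDegree.merge(t, 1, Integer::sum)));

        // Start with every node that has no incoming edges.
        Deque<String> queue = new ArrayDeque<>();
        inDegree.forEach((n, d) -> { if (d == 0) queue.add(n); });

        List<String> order = new ArrayList<>();
        while (!queue.isEmpty()) {
            String current = queue.poll();
            order.add(current);
            for (String next : edges.getOrDefault(current, List.of())) {
                if (inDegree.merge(next, -1, Integer::sum) == 0) {
                    queue.add(next);
                }
            }
        }

        // Any node that never reaches in-degree 0 must be part of a cycle.
        if (order.size() != nodes.size()) {
            throw new IllegalStateException("Workflow contains a cycle and cannot be executed");
        }
        return order;
    }

    public static void main(String[] args) {
        Map<String, List<String>> edges = Map.of(
                "input", List.of("openai"),
                "openai", List.of("output"));
        System.out.println(topologicalSort(Set.of("input", "openai", "output"), edges));
        // -> [input, openai, output]
    }
}
```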
| Scenario | Example applications | Target users |
|---|---|---|
| 📝 Content generation | Batch article generation, multilingual translation, content rewriting and polishing | Content creators, marketing teams |
| 💬 Intelligent customer service | Multi-turn dialog flows, intent recognition and smart responses | Support teams, product managers |
| 📊 Data processing | Text analysis, information extraction, data cleaning and transformation | Data analysts, engineering teams |
| 🎵 Audio/video processing | Speech synthesis, subtitle generation, audio transcription | Content teams, education |
| ⚙️ Process automation | Report generation, automated email replies, scheduled tasks | Operations teams, enterprise users |
┌─────────────────────────────────────────────────────────┐
│                     Frontend Layer                      │
│ React 18 + TypeScript + ReactFlow + Ant Design │
│   • Visual editor   • Node panel   • Debug tools        │
└────────────────────┬────────────────────────────────────┘
│ REST API / SSE
┌────────────────────┴────────────────────────────────────┐
│               Application Layer (Backend)               │
│ Spring Boot 3.4.1 + Java 21 │
│ • Controller • Service • Interceptor │
└────────────────────┬────────────────────────────────────┘
│
┌────────────────────┴────────────────────────────────────┐
│                    Core Engine Layer                    │
│   • WorkflowEngine: workflow scheduling engine          │
│   • DAGParser: topological sort + cycle detection       │
│   • NodeExecutor: node executor factory                 │
└────────────────────┬────────────────────────────────────┘
│
┌────────────────────┴────────────────────────────────────┐
│               AI Model Layer (Spring AI)                │
│   • Spring AI: OpenAI/DeepSeek/Zhipu compatible APIs    │
│   • Spring AI Alibaba: native Qwen (DashScope) support  │
│   • ChatClientFactory: unified dynamic ChatClient factory│
└────────────────────┬────────────────────────────────────┘
│
┌────────────────────┴────────────────────────────────────┐
│               Data Layer (Data & Storage)               │
│   • MySQL: workflow configs, execution records          │
│   • MinIO: file storage (optional)                      │
└─────────────────────────────────────────────────────────┘
| Layer | Technology | Version | Notes |
|---|---|---|---|
| Frontend | React | 18.x | Modern UI framework |
| | TypeScript | 5.x | Type safety |
| | Vite | 5.x | High-performance build tool |
| | ReactFlow | latest | Professional flow-chart library |
| | Ant Design + Tailwind CSS | - | Enterprise UI components |
| | Zustand | latest | Lightweight state management |
| Backend | Spring Boot | 3.4.1 | Enterprise Java framework |
| | Java | 21+ | LTS release |
| | MyBatis-Plus | 3.5.5 | Enhanced ORM framework |
| | Spring AI | 1.0.0-M5 | Unified AI model invocation framework |
| | Spring AI Alibaba | 1.0.0-M6.1 | Native Tongyi Qianwen (DashScope) support |
| | MySQL | 8.0+ | Relational database |
| | FastJSON2 | latest | High-performance JSON library |
| | MinIO | optional | Object storage service |
| Core engine | Custom DAG engine | - | Workflow orchestration core |
| | Kahn topological sort | - | Node dependency analysis |
| | DFS cycle detection | - | Prevents workflow deadlock |
| | Spring AI ChatClient | - | Unified AI model invocation interface |
| | ChatClientFactory | - | Dynamically creates clients for different models |
PaiAgent-one/
├── backend/ # Spring Boot back-end service
│ ├── src/main/
│ │ ├── java/com/paiagent/
│ │ │ ├── engine/ # 🎯 DAG workflow engine (core)
│ │ │ │ ├── WorkflowEngine.java # workflow orchestration engine
│ │ │ │ ├── dag/DAGParser.java # topological sort + cycle detection
│ │ │ │ ├── llm/ # LLM invocation layer (Spring AI)
│ │ │ │ │ ├── ChatClientFactory.java # dynamic ChatClient factory
│ │ │ │ │ ├── PromptTemplateService.java # prompt template handling
│ │ │ │ │ └── LLMNodeConfig.java # LLM node configuration
│ │ │ │ ├── executor/ # node executors
│ │ │ │ │ ├── NodeExecutor.java # executor interface
│ │ │ │ │ ├── NodeExecutorFactory.java # factory pattern
│ │ │ │ │ └── impl/ # concrete implementations
│ │ │ │ │ ├── AbstractLLMNodeExecutor.java # abstract base class for LLM nodes
│ │ │ │ │ ├── InputNodeExecutor.java
│ │ │ │ │ ├── OutputNodeExecutor.java
│ │ │ │ │ ├── OpenAINodeExecutor.java
│ │ │ │ │ ├── DeepSeekNodeExecutor.java
│ │ │ │ │ ├── QwenNodeExecutor.java
│ │ │ │ │ ├── ZhiPuNodeExecutor.java
│ │ │ │ │ ├── AIPingNodeExecutor.java
│ │ │ │ │ └── TTSNodeExecutor.java
│ │ │ │ └── model/ # data models
│ │ │ ├── controller/ # REST API layer
│ │ │ ├── service/ # business logic layer
│ │ │ ├── mapper/ # MyBatis-Plus data access layer
│ │ │ ├── entity/ # database entities
│ │ │ ├── dto/ # data transfer objects
│ │ │ ├── config/ # configuration classes
│ │ │ ├── interceptor/ # interceptors (authentication)
│ │ │ └── common/ # shared utilities
│ │ └── resources/
│ │ ├── application.yml # application configuration
│ │ └── schema.sql # database initialization script
│ └── pom.xml # Maven dependency configuration
│
├── frontend/ # React front-end application
│ ├── src/
│ │ ├── components/ # 🎨 core components
│ │ │ ├── FlowCanvas.tsx # ReactFlow flow editor
│ │ │ ├── NodePanel.tsx # draggable node panel
│ │ │ ├── DebugDrawer.tsx # debug drawer panel
│ │ │ └── AudioPlayer.tsx # audio player
│ │ ├── pages/ # page components
│ │ │ ├── LoginPage.tsx # login page
│ │ │ ├── MainPage.tsx # workflow list page
│ │ │ └── EditorPage.tsx # workflow editor page
│ │ ├── store/ # Zustand state management
│ │ │ ├── authStore.ts # user authentication state
│ │ │ └── workflowStore.ts # workflow editing state
│ │ ├── api/ # API client layer
│ │ ├── utils/ # utility functions
│ │ └── App.tsx # application entry point
│ ├── package.json # NPM dependency configuration
│ └── vite.config.ts # Vite build configuration
│
├── docs/ # 📚 project documentation
│ ├── README.md # project overview (this file)
│ ├── USER_GUIDE.md # user guide
│ ├── PROGRESS.md # development progress tracking
│ ├── SUMMARY.md # technical summary
│ ├── AGENTS.md # AI agent development guide
│ └── mermaid.md # architecture diagrams
│
└── .gitignore # Git ignore rules
Before you start, make sure your development environment meets the following requirements:
| Tool | Version | Notes |
|---|---|---|
| Java | 21+ | OpenJDK or Oracle JDK recommended |
| Node.js | 18+ | Includes the npm package manager |
| MySQL | 8.0+ | Database service |
| Maven | 3.8+ | Java build tool |
git clone https://github.com/yourusername/PaiAgent-one.git
cd PaiAgent-one

2.1 Create the database
mysql -u root -p
CREATE DATABASE paiagent DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

2.2 Import the initialization script
mysql -u root -p paiagent < backend/src/main/resources/schema.sql

2.3 Configure the database connection
Edit backend/src/main/resources/application.yml:
spring:
  datasource:
    url: jdbc:mysql://localhost:3306/paiagent?useUnicode=true&characterEncoding=utf-8&useSSL=false&serverTimezone=Asia/Shanghai
    username: root
    password: your_password  # change to your database password

Start the backend:
cd backend
./mvnw spring-boot:run

✅ Once the backend has started successfully, you will see:
Started PaiAgentApplication in X.XXX seconds
Backend service address: http://localhost:8080

Start the frontend (open a new terminal window):
cd frontend
npm install
npm run dev

✅ Once the frontend has started successfully, you will see:
➜ Local: http://localhost:5173/
| Service | Address | Notes |
|---|---|---|
| 🌐 Frontend app | http://localhost:5173 | Main user interface |
| 🔧 Backend API | http://localhost:8080 | REST API service |
| 📚 API docs | http://localhost:8080/swagger-ui.html | Swagger API documentation |
Default login credentials:
- Username: admin
- Password: 123
Create your first workflow:
- Log in: sign in with the default credentials
- Create a workflow: click the "New Workflow" button
- Drag nodes: drag an Input node → OpenAI node → Output node from the node panel on the left onto the canvas
- Connect nodes: link the nodes in order
- Configure parameters: click the OpenAI node and set its API key and prompt
- Run and debug: click the "Debug" button and inspect the execution results
Example: a text-to-speech workflow
[Input node] → [OpenAI node] → [TTS node] → [Output node]
  enter text     generate script   synthesize speech   play audio
- ✅ Base architecture (Spring Boot + React + TypeScript)
- ✅ User authentication (token auth + interceptor)
- ✅ Visual flow editor (ReactFlow integration)
- ✅ DAG workflow engine (topological sort + cycle detection)
- ✅ Spring AI framework integration (v1.0.0-M5)
  - Unified ChatClient invocation interface
  - OpenAI-compatible protocol support (OpenAI/DeepSeek/Zhipu/AIPing)
- ✅ Spring AI Alibaba integration (v1.0.0-M6.1)
  - Native Tongyi Qianwen (DashScope) support
  - Alibaba Cloud Qwen model family access
- ✅ Multi-LLM node support
  - OpenAI node (GPT series)
  - DeepSeek node
  - Tongyi Qianwen node
  - Zhipu AI node
  - AIPing node
- ✅ Tool nodes
  - Input/Output nodes
  - TTS speech-synthesis node
  - Audio player component
- ✅ Real-time SSE streaming output (debug drawer + log output + result display; see the streaming sketch below)
- ✅ Workflow CRUD management
- ✅ Execution record queries
Current completion: 95% 🎉
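To make the SSE streaming item concrete, here is a minimal sketch of bridging a Spring AI streaming response to a Server-Sent Events endpoint with Spring MVC's SseEmitter. The controller, endpoint path, and event names are illustrative assumptions, not the project's actual code:

```java
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;
import reactor.core.publisher.Flux;

// Hypothetical controller: streams LLM output chunks to the browser over SSE.
@RestController
public class DebugStreamController {

    private final ChatClient chatClient;

    public DebugStreamController(ChatClient.Builder builder) {
        this.chatClient = builder.build();
    }

    @GetMapping("/api/debug/stream")
    public SseEmitter stream(@RequestParam String prompt) {
        SseEmitter emitter = new SseEmitter(0L); // no timeout for long generations

        // Spring AI exposes the model output as a reactive Flux of text chunks.
        Flux<String> chunks = chatClient.prompt().user(prompt).stream().content();

        chunks.subscribe(
                chunk -> {
                    try {
                        emitter.send(SseEmitter.event().name("chunk").data(chunk));
                    } catch (Exception e) {
                        emitter.completeWithError(e);
                    }
                },
                emitter::completeWithError,
                emitter::complete);

        return emitter;
    }
}
```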
v1.1 (near term)
- 🔄 Conditional branch node (IF/ELSE logic)
- 🔁 Loop node (FOR/WHILE loops)
- 📝 More LLM nodes (Claude, Gemini, local models)
- 🧪 Better integration test coverage
- 📊 Execution performance monitoring
v1.5 (mid term)
- 🔗 Sub-workflow calls (workflow reuse)
- ⏰ Scheduled task execution (cron expressions)
- 📦 Workflow version management (Git-style)
- 🎨 Workflow template marketplace
- 🖼️ More tool nodes (image processing, document parsing, web crawling)
v2.0 (long term)
- 👥 Multi-tenancy and permission management
- 🤝 Collaboration and sharing
- 📈 Performance monitoring and alerting
- 🌍 Internationalization (multiple languages)
- 🔌 Plugin marketplace (third-party nodes)
| Document | Link | Description |
|---|---|---|
| 📖 User guide | USER_GUIDE.md | Detailed usage instructions and best practices |
| 📊 Development progress | PROGRESS.md | Project phases and completion status |
| 📝 Project summary | SUMMARY.md | Technology choices and implementation notes |
| 📋 Completion report | PROJECT_COMPLETION_REPORT.md | Project delivery report |
| 🏗️ Architecture design | mermaid.md | System architecture diagrams |
| 🤖 Development guide | AGENTS.md | AI agent development guide |
Node types
- Input node: receives external input data and serves as the workflow entry point
- Output node: emits the workflow result and serves as the workflow end point
- LLM node: calls a large language model for text generation, understanding, and similar tasks
- Tool node: performs a specific function, such as TTS speech synthesis or image processing
Connection rules
- Nodes are connected by directed edges that represent data flow
- One-to-many and many-to-one connections are supported
- Circular dependencies are detected automatically to prevent deadlock
Workflow parsing flow
- JSON parsing: the front-end configuration is parsed into a WorkflowConfig object
- Topological sort: Kahn's algorithm determines the node execution order
- Cycle detection: a DFS pass detects circular dependencies
- Node scheduling: nodes are executed one by one in topological order (see the dispatch-loop sketch after this list)
- Data passing: upstream node outputs become downstream node inputs
- Result recording: each node's execution result and logs are saved
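A minimal sketch of the scheduling and data-passing steps above, under the assumption that every node exposes an executor taking a merged input map. The types and names below are simplified stand-ins, not the project's actual classes:

```java
import java.util.*;
import java.util.function.BiFunction;

// Simplified stand-in for the engine's dispatch loop.
public class DispatchLoopSketch {

    public static Map<String, Map<String, Object>> run(
            List<String> topoOrder,                              // result of the topological sort
            Map<String, List<String>> upstream,                   // nodeId -> its upstream nodeIds
            Map<String, BiFunction<String, Map<String, Object>, Map<String, Object>>> executors,
            Map<String, Object> workflowInput) {

        Map<String, Map<String, Object>> results = new HashMap<>();

        for (String nodeId : topoOrder) {
            // Merge the outputs of all upstream nodes into this node's input.
            Map<String, Object> input = new HashMap<>(workflowInput);
            for (String up : upstream.getOrDefault(nodeId, List.of())) {
                input.putAll(results.getOrDefault(up, Map.of()));
            }

            // Execute the node and record its output for downstream nodes.
            Map<String, Object> output = executors.get(nodeId).apply(nodeId, input);
            results.put(nodeId, output);
        }
        return results;
    }
}
```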
Data passing mechanism
// Node executor interface
public interface NodeExecutor {
    Map<String, Object> execute(WorkflowNode node, Map<String, Object> input);
}
// LLM nodes are invoked uniformly through the Spring AI ChatClient
ChatClient chatClient = chatClientFactory.createClient(nodeType, apiUrl, apiKey, model, temperature);
String response = chatClient.prompt().user(prompt).call().content();
Approach 1: Develop a plain tool node
- Implement the NodeExecutor interface
public class CustomNodeExecutor implements NodeExecutor {
@Override
public Map<String, Object> execute(WorkflowNode node, Map<String, Object> input) {
// custom logic
return output;
}
}
- Register it with the factory
NodeExecutorFactory.register("custom", new CustomNodeExecutor());
Approach 2: Develop an LLM node (recommended)
// Extend AbstractLLMNodeExecutor to automatically gain the Spring AI capabilities
@Component
public class CustomLLMNodeExecutor extends AbstractLLMNodeExecutor {
@Override
protected String getNodeType() {
return "custom_llm";
}
}
- Add the node definition on the front end
const customNode = {
type: 'custom',
  label: 'Custom node',
category: 'tool'
};
Project name: PaiAgent - an enterprise-grade AI workflow orchestration platform
Project description: an AI agent workflow platform built around a visual flow editor. Users orchestrate multiple large language models (DeepSeek, Tongyi Qianwen, etc.) and tool nodes by drag and drop, and a custom DAG engine executes the workflow in topological order, automating the orchestration and execution of complex AI tasks.
Tech stack: Java 21, Spring Boot 3.4.1, Spring AI 1.0.0
Core responsibilities:
- Rebuilt the LLM communication layer on the Spring AI framework, combining the factory pattern and the template method pattern in the ChatClientFactory dynamic factory and the AbstractLLMNodeExecutor abstract base class, cutting the duplicated code across 5 LLM node executors from 800+ lines to 75 lines
- Designed a dynamic ChatClient creation mechanism that instantiates vendor-specific ChatClients at runtime from each workflow node's configuration (apiKey/apiUrl/model), so every node can carry its own configuration in multi-tenant scenarios
- Extracted the PromptTemplateService shared service to handle {{variable}} template substitution and parameter-reference mapping between upstream and downstream nodes, supporting both static input values and dynamic reference parameters (see the substitution sketch after this list)
- Implemented streaming LLM output on top of Spring AI's reactive Flux, pushing generation progress to the front end in real time over SSE and hooking into the existing ExecutionEvent mechanism so users can watch the AI generate as it happens
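For illustration, a minimal sketch of the {{variable}} substitution described above. It is not the project's actual PromptTemplateService, just a regex-based stand-in:

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal {{variable}} substitution: replaces each placeholder with the value
// found in the variables map, leaving unknown placeholders untouched.
public class PromptTemplateSketch {

    private static final Pattern PLACEHOLDER = Pattern.compile("\\{\\{\\s*(\\w+)\\s*\\}\\}");

    public static String render(String template, Map<String, Object> variables) {
        Matcher matcher = PLACEHOLDER.matcher(template);
        StringBuilder result = new StringBuilder();
        while (matcher.find()) {
            Object value = variables.get(matcher.group(1));
            String replacement = value != null ? value.toString() : matcher.group(0);
            matcher.appendReplacement(result, Matcher.quoteReplacement(replacement));
        }
        matcher.appendTail(result);
        return result.toString();
    }

    public static void main(String[] args) {
        String prompt = render("Summarize the following text: {{article}}",
                Map.of("article", "PaiAgent is a visual AI workflow platform."));
        System.out.println(prompt);
    }
}
```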
We welcome contributions of all kinds, including but not limited to:
- 🐛 Bug reports
- 💡 Feature suggestions
- 📝 Documentation improvements
- 🔧 Code fixes
- ⭐ Starring the project to support us
- Fork the repository
- Create a feature branch (git checkout -b feature/AmazingFeature)
- Commit your changes (git commit -m 'Add some AmazingFeature')
- Push to the branch (git push origin feature/AmazingFeature)
- Open a Pull Request
- Backend: follow the Alibaba Java Coding Guidelines
- Frontend: follow the Airbnb React/JSX Style Guide
- Commit messages: use Conventional Commits
- 📬 GitHub Issues: bug reports and feature requests
- 💭 GitHub Discussions: technical discussion and experience sharing
- 📖 Official docs: see USER_GUIDE.md for the complete user guide
- Project maintainer: @itwanger
- Email: please report project-related issues via GitHub Issues
This project is released under the MIT License.
Thanks to all the developers who have contributed to this project!
If this project helps you, please give it a ⭐ Star!
Alternative AI tools for PaiAgent
Similar Open Source Tools
lanhu-mcp
Lanhu MCP Server is a powerful Model Context Protocol (MCP) server designed for the AI programming era, perfectly supporting the Lanhu design collaboration platform. It offers features like intelligent requirement analysis, team knowledge base, UI design support, and performance optimization. The server is suitable for Cursor + Lanhu, Windsurf + Lanhu, Claude Code + Lanhu, Trae + Lanhu, and Cline + Lanhu integrations. It aims to break the isolation of AI IDEs and enable all AI assistants to share knowledge and context.
py-xiaozhi
py-xiaozhi is a Python-based XiaoZhi voice client designed for learning code and experiencing AI XiaoZhi's voice functions without hardware conditions. It features voice interaction, graphical interface, volume control, session management, encrypted audio transmission, CLI mode, and automatic copying of verification codes and opening browsers for first-time users. The project aims to optimize and add new features to zhh827's py-xiaozhi based on the original hardware project xiaozhi-esp32 and the Python implementation py-xiaozhi.
z.ai2api_python
Z.AI2API Python is a lightweight OpenAI API proxy service that integrates seamlessly with existing applications. It supports the full functionality of GLM-4.5 series models and features high-performance streaming responses, enhanced tool invocation, support for thinking mode, integration with search models, Docker deployment, session isolation for privacy protection, flexible configuration via environment variables, and intelligent upstream model routing.
JeecgBoot
JeecgBoot is a Java AI Low Code Platform for Enterprise web applications, based on BPM and code generator. It features a SpringBoot2.x/3.x backend, SpringCloud, Ant Design Vue3, Mybatis-plus, Shiro, JWT, supporting microservices, multi-tenancy, and AI capabilities like DeepSeek and ChatGPT. The powerful code generator allows for one-click generation of frontend and backend code without writing any code. JeecgBoot leads the way in AI low-code development mode, helping to solve 80% of repetitive work in Java projects and allowing developers to focus more on business logic.
gin-vue-admin
Gin-vue-admin is a full-stack development platform based on Vue and Gin, integrating features like JWT authentication, dynamic routing, dynamic menus, Casbin authorization, form generator, code generator, etc. It provides various example files to help users focus more on business development. The project offers detailed documentation, video tutorials for setup and deployment, and a community for support and contributions. Users need a certain level of knowledge in Golang and Vue to work with this project. It is recommended to follow the Apache2.0 license if using the project for commercial purposes.
vibium
Vibium is a browser automation infrastructure designed for AI agents, providing a single binary that manages browser lifecycle, WebDriver BiDi protocol, and an MCP server. It offers zero configuration, AI-native capabilities, and is lightweight with no runtime dependencies. It is suitable for AI agents, test automation, and any tasks requiring browser interaction.
private-llm-qa-bot
This is a production-grade knowledge Q&A chatbot implementation based on AWS services and the LangChain framework, with optimizations at various stages. It supports flexible configuration and plugging of vector models and large language models. The front and back ends are separated, making it easy to integrate with IM tools (such as Feishu).
openakita
OpenAkita is a self-evolving AI Agent framework that autonomously learns new skills, performs daily self-checks and repairs, accumulates experience from task execution, and persists until the task is done. It auto-generates skills, installs dependencies, learns from mistakes, and remembers preferences. The framework is standards-based, multi-platform, and provides a Setup Center GUI for intuitive installation and configuration. It features self-learning and evolution mechanisms, a Ralph Wiggum Mode for persistent execution, multi-LLM endpoints, multi-platform IM support, desktop automation, multi-agent architecture, scheduled tasks, identity and memory management, a tool system, and a guided wizard for setup.
boxlite
BoxLite is an embedded, lightweight micro-VM runtime designed for AI agents running OCI containers with hardware-level isolation. It is built for high concurrency with no daemon required, offering features like lightweight VMs, high concurrency, hardware isolation, embeddability, and OCI compatibility. Users can spin up 'Boxes' to run containers for AI agent sandboxes and multi-tenant code execution scenarios where Docker alone is insufficient and full VM infrastructure is too heavy. BoxLite supports Python, Node.js, and Rust with quick start guides for each, along with features like CPU/memory limits, storage options, networking capabilities, security layers, and image registry configuration. The tool provides SDKs for Python and Node.js, with Go support coming soon. It offers detailed documentation, examples, and architecture insights for users to understand how BoxLite works under the hood.
bumpgen
bumpgen is a tool designed to automatically upgrade TypeScript / TSX dependencies and make necessary code changes to handle any breaking issues that may arise. It uses an abstract syntax tree to analyze code relationships, type definitions for external methods, and a plan graph DAG to execute changes in the correct order. The tool is currently limited to TypeScript and TSX but plans to support other strongly typed languages in the future. It aims to simplify the process of upgrading dependencies and handling code changes caused by updates.
continew-admin
Continew-admin is a responsive admin dashboard template built with Bootstrap 4. It provides a clean and intuitive user interface for managing and visualizing data in web applications. The template includes various components and widgets that can be easily customized to suit different project requirements. With Continew-admin, developers can quickly set up a professional-looking admin panel for their web applications.
AI-CloudOps
AI+CloudOps is a cloud-native operations management platform designed for enterprises. It aims to integrate artificial intelligence technology with cloud-native practices to significantly improve the efficiency and level of operations work. The platform offers features such as AIOps for monitoring data analysis and alerts, multi-dimensional permission management, visual CMDB for resource management, efficient ticketing system, deep integration with Prometheus for real-time monitoring, and unified Kubernetes management for cluster optimization.
AgentX
AgentX is a next-generation open-source AI agent development framework and runtime platform. It provides an event-driven runtime with a simple framework and minimal UI. The platform is ready-to-use and offers features like multi-user support, session persistence, real-time streaming, and Docker readiness. Users can build AI Agent applications with event-driven architecture using TypeScript for server-side (Node.js) and client-side (Browser/React) development. AgentX also includes comprehensive documentation, core concepts, guides, API references, and various packages for different functionalities. The architecture follows an event-driven design with layered components for server-side and client-side interactions.
observers
Observers is a lightweight library for AI observability that provides support for various generative AI APIs and storage backends. It allows users to track interactions with AI models and sync observations to different storage systems. The library supports OpenAI, Hugging Face transformers, AISuite, Litellm, and Docling for document parsing and export. Users can configure different stores such as Hugging Face Datasets, DuckDB, Argilla, and OpenTelemetry to manage and query their observations. Observers is designed to enhance AI model monitoring and observability in a user-friendly manner.
For similar tasks
phospho
Phospho is a text analytics platform for LLM apps. It helps you detect issues and extract insights from text messages of your users or your app. You can gather user feedback, measure success, and iterate on your app to create the best conversational experience for your users.
OpenFactVerification
Loki is an open-source tool designed to automate the process of verifying the factuality of information. It provides a comprehensive pipeline for dissecting long texts into individual claims, assessing their worthiness for verification, generating queries for evidence search, crawling for evidence, and ultimately verifying the claims. This tool is especially useful for journalists, researchers, and anyone interested in the factuality of information.
open-parse
Open Parse is a Python library for visually discerning document layouts and chunking them effectively. It is designed to fill the gap in open-source libraries for handling complex documents. Unlike text splitting, which converts a file to raw text and slices it up, Open Parse visually analyzes documents for superior LLM input. It also supports basic markdown for parsing headings, bold, and italics, and has high-precision table support, extracting tables into clean Markdown formats with accuracy that surpasses traditional tools. Open Parse is extensible, allowing users to easily implement their own post-processing steps. It is also intuitive, with great editor support and completion everywhere, making it easy to use and learn.
spaCy
spaCy is an industrial-strength Natural Language Processing (NLP) library in Python and Cython. It incorporates the latest research and is designed for real-world applications. The library offers pretrained pipelines supporting 70+ languages, with advanced neural network models for tasks such as tagging, parsing, named entity recognition, and text classification. It also facilitates multi-task learning with pretrained transformers like BERT, along with a production-ready training system and streamlined model packaging, deployment, and workflow management. spaCy is commercial open-source software released under the MIT license.
NanoLLM
NanoLLM is a tool designed for optimized local inference for Large Language Models (LLMs) using HuggingFace-like APIs. It supports quantization, vision/language models, multimodal agents, speech, vector DB, and RAG. The tool aims to provide efficient and effective processing for LLMs on local devices, enhancing performance and usability for various AI applications.
ontogpt
OntoGPT is a Python package for extracting structured information from text using large language models, instruction prompts, and ontology-based grounding. It provides a command line interface and a minimal web app for easy usage. The tool has been evaluated on test data and is used in related projects like TALISMAN for gene set analysis. OntoGPT enables users to extract information from text by specifying relevant terms and provides the extracted objects as output.
lima
LIMA is a multilingual linguistic analyzer developed by the CEA LIST, LASTI laboratory. It is Free Software available under the MIT license. LIMA has state-of-the-art performance for more than 60 languages using deep learning modules. It also includes a powerful rules-based mechanism called ModEx for extracting information in new domains without annotated data.
liboai
liboai is a simple C++17 library for the OpenAI API, providing developers with access to OpenAI endpoints through a collection of methods and classes. It serves as a spiritual port of OpenAI's Python library, 'openai', with similar structure and features. The library supports various functionalities such as ChatGPT, Audio, Azure, Functions, Image DALL·E, Models, Completions, Edit, Embeddings, Files, Fine-tunes, Moderation, and Asynchronous Support. Users can easily integrate the library into their C++ projects to interact with OpenAI services.
For similar jobs
sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.
teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.
ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.
classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.
chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.
BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students
uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.
griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.




