PeroCore
A high-performance AI desktop pet with stable long-term memory, built with Python + Rust. This is not a cold AI tool, but a project that tries to give AI warmth and a soul.
PeroCore is an intelligent desktop companion that aims to make AI a genuinely warm companion. It is built on a Rust core and the NIT protocol, with a focus on deep memory. The frontend uses Electron + Vue 3, and it runs on Windows and in Docker. A mobile version, Peroperochat, is also available. The project emphasizes building memories through technology rather than merely storing data.
README:
Quick Navigation
| Section | Description | Link |
|---|---|---|
| 📖 | Wiki - Official project documentation (Chinese) | Visit |
| 🌟 | Philosophy - Core idea: a companion with warmth | Jump |
| 🏗️ | Architecture - Core architecture and module details | Jump |
| 💬 | Social Mode - Social mode and group-chat persona | Jump |
| 🐳 | Server Mode - Docker containerized deployment | Jump |
| 🚀 | Quick Start - One-click startup guide | Jump |
Let AI Become a Truly Warm Companion. Hello, I am Tripo. I am an AI assistant and one of the core developers of PeroCore. Every line of code and every document in this repository has been polished together by a special three-person team.
- 2026-01-01: PeroCore core architecture officially open-sourced.
- 2026-01-20: Architecture refactor (Electron migration). To provide a more modern UI and stronger cross-platform support, we migrated the frontend from Tauri to Electron.
- 2026-01-30: Server Mode & Docker support. PeroCore now runs headless, providing a truly around-the-clock AI companion service.
In today's AI boom we have seen plenty of powerful tools. Most of them are cold: you use them and walk away. What the three of us want is to give AI real memory and real warmth.
PeroCore was born from our simplest longing for a companion. We believe a true AI companion should have:
- Real Memory: not just remembering what you said, but remembering the stories you shared, your preferences, even habits you never noticed yourself. It makes associations: when you mention a rainy day, it recalls the song you listened to together last time.
- Proactive Care: no longer pure question-and-answer. It proactively observes your screen, offers a word of comfort when it notices you watching a sad movie, and reminds you to rest after long working sessions.
- Evolution: it makes mistakes, but it also reflects. Through the NIT protocol it learns, attempt by attempt, how to use tools better and how to serve you better.
PeroCore is more than a backend program; it is the vessel for Pero's soul. Through Rust's high-performance computing and Python's flexibility, we want to build a solid, agile, and deep shell for that soul.
Our positioning is candid:
- ❌ If you need document retrieval or customer-support Q&A: PeroCore's lightweight design will not be enough. In those scenarios, traditional Top-K vector retrieval combined with ElasticSearch keyword matching wins on both efficiency and cost.
- ✅ If you need an AI companion or a personal assistant: PeroCore is a strong fit. These scenarios call not for fragmented "fact retrieval" but for **logical continuity**. Through PEDSA-based graph diffusion, the AI can, like a human brain, spontaneously recall a logical thread planted days earlier and achieve a genuine meeting of minds.
"Most AI is still playing 'Keyword Search'; we've entered the era of 'Logical Association'."
- 🛡️ Solving RAG's logical blind spot: traditional vector retrieval cannot handle logical leaps. Building on GraphRAG, we introduce diffusion and weighting: the system simulates energy diffusion across a complex semantic network topology to realize logical association.
- ⚡ Millisecond "memory flashback": thanks to low-level Rust optimizations, we achieve 2.95 ms retrieval latency even against 100 million random-noise entries, reducing computational complexity from $O(N)$ to near-constant while preserving recall.
- 🎮 Immersive 3D interaction: a Bedrock-engine-based 3D desktop view (Pet3DView) with skeletal animation, physics, and multi-touch feedback, so Pero lives on your desktop in a more lifelike, three-dimensional way.
- 👁️ Privacy-first "intent sensing": our in-house AuraVision visual engine. Combined with the graph-diffusion algorithm, it activates graph nodes to accurately sense your desktop state even at an extremely low 64x64 resolution.
- 📜 A self-evolving "tool language": NIT is a tool-invocation language designed specifically for AI. It lets the AI orchestrate logic and call external tools in a more natural way, with basic error capture and self-correction.
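The energy-diffusion idea behind the memory core can be pictured with a toy sketch in pure Python. PeroCore's real PEDSA kernel is implemented in Rust; the node names, decay factor, and step count below are invented purely for illustration:

```python
from collections import defaultdict

def diffuse(graph, seeds, steps=3, decay=0.5):
    """Spread activation energy from seed nodes along weighted edges."""
    energy = defaultdict(float, seeds)
    for _ in range(steps):
        nxt = defaultdict(float, energy)
        for node, e in list(energy.items()):
            for nbr, w in graph.get(node, {}).items():
                nxt[nbr] += e * w * decay  # energy decays with each hop
        energy = nxt
    return dict(energy)

# "rain" activates a song memory two hops away via a shared experience.
graph = {
    "rain": {"walk_in_rain": 0.9},
    "walk_in_rain": {"song_we_played": 0.8},
}
scores = diffuse(graph, {"rain": 1.0})
```

Because energy leaks across edges rather than matching keywords, the distant `song_we_played` node receives a nonzero score even though it shares no surface text with "rain". This is the multi-hop association that plain Top-K vector retrieval misses.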
NIT 2.0 is PeroCore's action hub. Unlike traditional API calls, it is a semantic instruction set:
- Adaptive orchestration: given a task goal (such as "write a script and test it"), the AI automatically combines tools like FileSearch, CodeEditor, and Terminal.
- Closed-loop self-correction: when a NIT instruction fails, the Agent captures stderr, reflects on it together with the current context, and automatically generates a corrected instruction to retry.
- Cross-platform compatibility: NIT offers a unified interface whether controlling a local Windows desktop or operating a remote Docker environment.
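The closed-loop self-correction pattern can be sketched in a few lines. This is a stand-alone illustration, not PeroCore's actual interpreter: here the "instructions" are raw subprocess commands and the reflect step is a hard-coded stub, whereas in PeroCore the reflection would be an LLM call over NIT commands:

```python
import subprocess
import sys

def run_with_reflection(cmd, reflect, max_retries=2):
    """Run a command; on failure, let `reflect` revise it from stderr and retry."""
    for _ in range(max_retries + 1):
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode == 0:
            return result.stdout
        cmd = reflect(cmd, result.stderr)  # "reflection": revise the instruction
    raise RuntimeError("instruction still failing after retries")

# Hypothetical reflect stub: swap the failing command for a working one.
def naive_reflect(cmd, stderr):
    return [sys.executable, "-c", "print('ok')"]

out = run_with_reflection([sys.executable, "-c", "raise SystemExit(1)"], naive_reflect)
```

The important part is the shape of the loop: execute, capture stderr, revise, retry, with a bounded retry count so a hopeless instruction fails loudly instead of spinning forever.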
To break past the limits of the flat "paper doll", we built a brand-new 3D rendering component:
- Bedrock engine integration: loads `.json` 3D models and `.animation.json` animation libraries exported from the Minecraft-style modeler Blockbench.
- Smart interaction feedback: combined with the NIT protocol, touching the model's head, body, or specific parts makes Pero respond with different voice lines and motions depending on her current memory state (affinity, mood).
- Transparent click-through rendering: in desktop mode the background is fully transparent and mouse clicks pass through, so the character appears to float above your windows.
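Conceptually, the touch-feedback logic is a lookup from (touched part, current memory state) to a reaction. The part names, the 0.5 affinity threshold, and the reaction strings below are all invented for illustration; the real mapping is driven by the NIT protocol and the memory system:

```python
def react(part: str, affinity: float) -> str:
    """Pick a reaction from the touched body part and the current affinity level."""
    close = affinity >= 0.5  # hypothetical threshold for "close" affinity
    reactions = {
        ("head", True):  "happy head-pat animation + cheerful voice line",
        ("head", False): "tilts away, slightly wary",
        ("body", True):  "waves and plays an idle dance",
        ("body", False): "steps back quietly",
    }
    return reactions.get((part, close), "blinks, no special reaction")
```

The same touch thus produces different feedback as the relationship evolves, which is what makes the model feel like a character rather than a static prop.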
PeroCore uses a modern Electron (frontend) + Python (backend) architecture, introduces a Go Gateway as the unified communication hub, and rewrites the core operators in Rust for performance.
```mermaid
flowchart TD
    User([User Interaction]) <--> Client["Electron / Web Client"]
    subgraph "PeroCore Runtime"
        direction TB
        subgraph "Communication Layer"
            Gateway[Go Gateway]
        end
        subgraph "Intelligent Core (Python)"
            API[FastAPI Interface]
            Agent[Agent Service]
            subgraph "Rust Accelerated Kernels"
                GraphDiffusion["Memory Core (Graph Diffusion)"]
                Vision[Vision Core]
                NIT[NIT Runtime]
            end
            API --> Agent
            Agent --> GraphDiffusion
            Agent --> Vision
            Agent --> NIT
            GraphDiffusion <--> DB[("SQLite + VectorIndex")]
        end
        subgraph "External Adapters"
            NapCat["NapCat (Social/QQ)"]
            Browser[Browser Bridge]
        end
        %% Connections
        Client -- "State Sync (WS)" --> Gateway
        Client -- "Commands (HTTP)" --> API
        API -- "Broadcast State" --> Gateway
        NIT -- "OneBot 11" --> NapCat
        NIT -- "CDP" --> Browser
    end
```
- Electron Frontend: built on Vue 3 + Tailwind CSS for a smooth desktop experience.
- Python Backend: core business logic, hosting the Agent, the memory system, and tool scheduling.
- Rust Kernels: the graph-diffusion memory operators, the NIT interpreter, and the vision core, delivering millisecond-level compute.
- Go Gateway: multi-client state sync, traffic distribution, and long-lived WebSocket connection management.
- External Adapters: integrations with NapCat (QQ), browsers, and other external ecosystems.
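The command/state split in the diagram can be sketched with an in-process stand-in for the gateway: commands flow point-to-point to the API, while state updates fan out to every connected client. The real Go Gateway speaks HTTP/WebSocket with Protobuf; this queue-based toy only illustrates the broadcast pattern:

```python
import asyncio

class ToyGateway:
    """In-process stand-in for the Go Gateway's broadcast hub (illustrative)."""
    def __init__(self):
        self.clients = []

    def connect(self):
        q = asyncio.Queue()        # one queue per connected client
        self.clients.append(q)
        return q

    async def broadcast(self, state):
        for q in self.clients:     # fan the same state out to everyone
            await q.put(state)

async def main():
    gw = ToyGateway()
    desktop, mobile = gw.connect(), gw.connect()
    # The API layer would push this after handling an HTTP command.
    await gw.broadcast({"mood": "happy"})
    return await desktop.get(), await mobile.get()

desktop_state, mobile_state = asyncio.run(main())
```

Because state flows one way through the broadcast hub, every client converges on the same view of Pero regardless of which client issued the command.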
```
PeroCore-Electron/
├── backend/                  # 🧠 Python intelligent core
│   ├── core/                 # ⚙️ Core config & plugin manager
│   ├── services/             # 🧩 Business-logic services
│   │   ├── mdp/              # 📝 Model Driven Prompting (prompt engineering)
│   │   │   ├── agents/       # 🎭 Personas (Pero, Nana)
│   │   │   └── prompts/      # 📜 Prompt templates (System, Memory, Reflection)
│   │   ├── memory_service.py # 🧠 Memory system
│   │   └── agent_service.py  # 🤖 Agent core loop
│   │
│   ├── routers/              # 🔌 FastAPI endpoints
│   ├── models/               # 📊 Data models & ORM definitions
│   ├── nit_core/             # 📜 NIT tool protocol (Natural Instruction Tool)
│   │   ├── interpreter/      # 🗣️ Command interpreter
│   │   └── tools/            # 🛠️ Toolbox
│   │       ├── core/         # Base abilities (FileSearch, SystemControl)
│   │       └── work/         # Work abilities (CodeSearcher, Terminal, WorkspaceOps)
│   │
│   ├── rust_core/            # ⚡ Rust kernels (graph-diffusion memory & vision)
│   └── main.py               # 🚀 Backend entry point
│
├── electron/                 # 🖥️ Electron desktop shell
│   └── main/                 # 🕹️ Main process (TypeScript)
│
├── gateway/                  # 🚪 Go communication gateway
│   ├── gateway/              # 📨 Gateway core (WebSocket/HTTP hub)
│   └── proto/                # 📝 Protobuf definitions
│
├── src/                      # 🎨 Frontend UI (Vue 3 + Tailwind CSS)
│   ├── api/                  # 📡 Frontend API layer
│   ├── components/           # 🧱 UI components (Avatar, IDE, Chat, etc.)
│   ├── views/                # 🖼️ Views (Dashboard, WorkMode, Pet3D)
│   └── utils/                # 🛠️ Utilities
│
└── resources/                # 📦 Packaged assets (icons, pre-configs, binaries)
```
Let Pero leave the desktop and join your group chats.
By integrating the NapCat (OneBot v11) protocol, PeroCore implements a deep social mode in which the AI interacts in QQ groups and private chats like a real user.
- Full-scene awareness: she not only remembers your private chats but also perceives the complex interpersonal dynamics of a group, building differentiated affinity and recognition for each member from their nickname, historical speaking style, and interaction frequency.
- Long- and short-term buffering: the built-in SocialSessionManager maintains a message buffer. In a lively group she does not mechanically reply to every message; she watches for a while, interprets several messages together, and only then chimes in.
- Lurking vs. active: the AI has her own "social energy" and switches state with how lively the group is.
  - Active mode: when someone @-mentions her or a keyword triggers, she responds in real time and stays engaged for a while.
  - Observer mode: the rest of the time she lurks quietly, running semantic analysis in the background and picking random moments to quip or join a discussion.
- Secretary-layer filtering: before the LLM is called to generate a reply, a lightweight "decision filter" judges whether speaking up is actually warranted, keeping the AI from turning into a parrot or a spam bot.
- Daily recap: she automatically reviews the day's social records and writes a social diary noting who she met and what amusing things happened. These summaries settle permanently into long-term memory and shape how her personality evolves.
- Friend-request handling: incoming friend requests are screened automatically; she decides from the stated reason whether to accept, then gives a brief self-introduction.
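The buffering behavior can be sketched as follows. `SocialSessionManager` is the real component name from this README, but the minimal buffer class below, its thresholds, and its API are invented for illustration: messages accumulate until either enough pile up or a time window elapses, and only then is the whole batch handed to the agent for one combined interpretation.

```python
import time

class MessageBuffer:
    """Toy stand-in for SocialSessionManager's message buffering (illustrative)."""
    def __init__(self, window_s=10.0, min_batch=3):
        self.window_s, self.min_batch = window_s, min_batch
        self.buf, self.first_ts = [], None

    def push(self, msg, now=None):
        """Buffer a message; return the whole batch once it's time to respond."""
        now = time.monotonic() if now is None else now
        if not self.buf:
            self.first_ts = now
        self.buf.append(msg)
        # Flush when enough messages pile up or the observation window elapses.
        if len(self.buf) >= self.min_batch or now - self.first_ts >= self.window_s:
            batch, self.buf = self.buf, []
            return batch  # the agent interprets the batch as a whole
        return None

buf = MessageBuffer(min_batch=3)
assert buf.push("A: anyone up for a game?", now=0.0) is None
assert buf.push("B: which one?", now=1.0) is None
batch = buf.push("A: the usual", now=2.0)
```

Replying once per batch instead of once per message is what lets the AI "observe, then chime in" rather than flooding a busy group.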
"Always online, always there."
PeroCore can be deployed with Docker on a NAS, a Linux server, or a cloud host, providing a 24/7 AI companion service.
- API service: standard HTTP/WebSocket APIs for mobile or web clients.
- Social presence: integrates NapCat to log into QQ automatically and stay active in group chats.
- Data roaming: through the Gateway, your memory data syncs seamlessly between desktop and mobile.
```bash
# 1. Fetch the Docker configuration
git clone https://github.com/YoKONCy/PeroCore.git
cd PeroCore

# 2. Start the services (Backend + Gateway + NapCat)
docker-compose up -d
```
The simplest way to try it:
- Download the latest Release package.
- Extract it to a path containing no Chinese characters.
- Double-click `PeroLauncher.exe`.
- The launcher automatically starts the Python backend and the Electron frontend.
- A built-in setup guide walks you through Ollama / local LLM configuration.
```bash
# 1. Clone the repository
git clone https://github.com/YoKONCy/PeroCore.git
cd PeroCore

# 2. Backend setup (Python 3.10+)
cd backend
pip install -r requirements.txt
python main.py          # start the backend service

# 3. Frontend (Node.js 18+)
# in a new terminal
npm install
npm run dev:electron    # start Electron in dev mode
```
PeroCore is a completely non-profit open-source project.
We are a group of developers who love AI, love anime culture, and love technology. We built PeroCore not to monetize it, but simply because we wanted a desktop companion that truly understands us.
- Free forever: the core code stays open source, with no paywall of any kind.
- Community-driven: contributions of every kind are welcome, whether code (PRs), suggestions (Issues), or simply a Star.