agentfield
Kubernetes for AI Agents. Build and run AI like microservices - scalable, observable, and identity-aware from day one.
Stars: 529
AgentField is an open-source control plane designed for autonomous AI agents, providing infrastructure for agents to make decisions beyond chatbots. It offers features like scaling infrastructure, routing & discovery, async execution, durable state, observability, trust infrastructure with cryptographic identity, verifiable credentials, and policy enforcement. Users can write agents in Python, Go, TypeScript, or interact via REST APIs. The tool enables the creation of AI backends that reason autonomously within defined boundaries, offering predictability and flexibility. AgentField aims to bridge the gap between AI frameworks and production-ready infrastructure for AI agents.
README:
When AI moves from chatbots into backends, making decisions, not just answering questions, it needs infrastructure, not frameworks.
Docs | Quick Start | Python SDK | Go SDK | TypeScript SDK | REST API | Discord
AgentField is the backend infrastructure layer for autonomous AI.
AI has outgrown frameworks and is moving from chatbots into backends—making decisions about refunds, coordinating supply chains, managing portfolios. These agents need infrastructure, not prompt wrappers.
AgentField is an open-source control plane that treats AI agents as first-class backend services and makes agents production-ready.
Scale Infrastructure (think: Kubernetes)
- Routing & Discovery: Agents find and call each other through standard REST APIs
- Async Execution: Fire-and-forget tasks that run for minutes, hours, or days
- Durable State: Built-in memory with vector search—no Redis or Pinecone required
- Observability: Automatic workflow DAGs, Prometheus metrics, structured logs
Trust Infrastructure (think: Okta, rebuilt for agents)
- W3C DIDs: Every agent gets a cryptographic identity—not a shared API key
- Verifiable Credentials: Tamper-proof audit trails for every action
- Policy Enforcement: Boundaries enforced by infrastructure, not prompts
Write Python, Go, TypeScript, or call via REST. Get production infrastructure automatically.
Software keeps adding layers when complexity demands it. Frontend/backend separation. Data lakes and pipelines. Now: a reasoning layer that sits alongside your services, making decisions that used to be hardcoded.
We call this the AI Backend. Not a chatbot, not a copilot—infrastructure for software that can think.
Guided autonomy: Agents that reason freely within boundaries you define. Predictable enough to trust. Flexible enough to be useful.
📖 Read: The AI Backend — Our thesis on why every serious backend will need a reasoning layer.
Python

```python
import requests

from agentfield import Agent, AIConfig

app = Agent(node_id="researcher", ai_config=AIConfig(model="gpt-4o"))

@app.skill()
def fetch_url(url: str) -> str:
    return requests.get(url).text

@app.reasoner()
async def summarize(url: str) -> dict:
    content = fetch_url(url)
    return await app.ai(f"Summarize: {content}")

app.run()  # → POST /api/v1/execute/researcher.summarize
```

Go
```go
agent, _ := agentfieldagent.New(agentfieldagent.Config{
    NodeID:        "researcher",
    AgentFieldURL: "http://localhost:8080",
})

agent.RegisterSkill("summarize", func(ctx context.Context, input map[string]any) (any, error) {
    url := input["url"].(string)
    // Your agent logic here
    return map[string]any{"summary": "..."}, nil
})

agent.Run(context.Background())
```

TypeScript
```typescript
import { Agent } from '@agentfield/sdk';

const agent = new Agent({
  nodeId: 'researcher',
  agentFieldUrl: 'http://localhost:8080',
});

agent.reasoner('summarize', async (ctx, input: { url: string }) => {
  const content = await fetch(input.url).then(r => r.text());
  return await ctx.ai(`Summarize: ${content}`);
});

agent.run(); // → POST /api/v1/execute/researcher.summarize
```

REST / Any Language
```shell
# Call any agent from anywhere—no SDK required
curl -X POST http://localhost:8080/api/v1/execute/researcher.summarize \
  -H "Content-Type: application/json" \
  -d '{"input": {"url": "https://example.com"}}'
```

```javascript
// Frontend (React, Next.js, etc.)
const result = await fetch("http://localhost:8080/api/v1/execute/researcher.summarize", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ input: { url: "https://example.com" } }),
}).then(r => r.json());
```

Installation:

```shell
curl -fsSL https://agentfield.ai/install.sh | bash
af init my-agent --defaults
cd my-agent && pip install -r requirements.txt
```

AgentField uses a control plane + agent node architecture. You'll need two terminal windows:
Terminal 1 – Start the Control Plane:
```shell
af server
```

Opens the dashboard at http://localhost:8080
Terminal 2 – Start Your Agent:
```shell
python main.py
```

The agent auto-registers with the control plane.
```shell
curl -X POST http://localhost:8080/api/v1/execute/my-agent.demo_echo \
  -H "Content-Type: application/json" \
  -d '{"input": {"message": "Hello!"}}'
```

Other Languages / Options
Go:

```shell
af init my-agent --defaults --language go
cd my-agent && go mod download
go run .
```

TypeScript:

```shell
af init my-agent --defaults --language typescript
cd my-agent && npm install
npm run dev
```

Interactive mode (choose language, set author info):

```shell
af init my-agent  # No --defaults flag
```

Docker / Troubleshooting
If running the control plane in Docker and your agent node runs outside that container, make sure the control plane can reach the agent at the URL it registers.
Option A (agent on your host, control plane in Docker):
```shell
docker run -p 8080:8080 agentfield/control-plane:latest

# Python agents (recommended)
export AGENTFIELD_URL="http://localhost:8080"
export AGENT_CALLBACK_URL="http://host.docker.internal:8001"
python main.py

# Go agents
export AGENTFIELD_URL="http://localhost:8080"
export AGENT_PUBLIC_URL="http://host.docker.internal:8001"
```

Option B (agent + control plane both in Docker Compose / same network):
- Set the agent callback/public URL to the agent container's service name, e.g. `http://my-agent:8001`.
Linux note: host.docker.internal may require --add-host=host.docker.internal:host-gateway or using a Compose setup where both containers share a network.
Next Steps: Build Your First Agent | Deploy to Production | Examples
Real-world patterns built on AgentField:
| Example | Description | Links |
|---|---|---|
| Deep Research API | Massively parallel research backend. Fans out to 10k+ agents, synthesizing verifiable strategies with deep citation chains. | GitHub • Docs |
| RAG Evaluator | Production monitoring for LLM responses. Scores across 4 dimensions to identify reliability issues. | Architecture |
Most frameworks stop at "make the LLM call." But production agents need:
See the production-ready feature set →
Agents that run for hours or days. Webhooks with automatic retries. Backpressure handling when downstream services are slow.
```python
# Fire-and-forget: webhook called when done
result = await app.call(
    "research_agent.deep_dive",
    input={"topic": "quantum computing"},
    async_config=AsyncConfig(
        webhook_url="https://myapp.com/webhook",
        timeout_hours=6
    )
)
```

Agents that discover and invoke each other through the control plane. Every call tracked. Every workflow visualized as a DAG.
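On the receiving side, the completion webhook from a fire-and-forget call like the async example above can be handled with only the Python standard library. The payload field names (`execution_id`, `status`, `result`) are assumptions for illustration, not the documented callback schema:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def parse_callback(body: bytes) -> tuple:
    """Pull out the fields we care about from the callback payload.
    Field names are assumptions; check the AgentField docs for the
    real callback schema."""
    payload = json.loads(body or b"{}")
    return payload.get("execution_id"), payload.get("status"), payload.get("result")


class WebhookHandler(BaseHTTPRequestHandler):
    """Minimal receiver for the completion callback of an async task."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        execution_id, status, result = parse_callback(self.rfile.read(length))
        print(f"task {execution_id} finished with status {status}")
        self.send_response(200)  # a 2xx response acknowledges the delivery
        self.end_headers()


# Start with: HTTPServer(("", 9000), WebhookHandler).serve_forever()
```

Any HTTP endpoint works the same way; the control plane only needs a URL it can POST to when the task completes.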
```python
# Agent A calls Agent B—routed through control plane, fully traced
analysis = await app.call("analyst.evaluate", input={"data": dataset})
report = await app.call("writer.summarize", input={"analysis": analysis})
```

Standard REST APIs. No magic abstractions. Build agents the way you build microservices.
```shell
# Every agent is an API endpoint
curl -X POST http://localhost:8080/api/v1/execute/researcher.summarize \
  -H "Content-Type: application/json" \
  -d '{"input": {"url": "https://example.com"}}'
```

Cryptographic identity for every agent. Tamper-proof audit trails for every action. Learn more about Identity & Trust.
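Because every agent is a plain HTTP endpoint, the execute call shown in curl above can also be issued from Python with only the standard library. This sketch builds the request without sending it; uncomment the `urlopen` line against a running control plane:

```python
import json
import urllib.request


def execute_request(base_url: str, target: str, payload: dict) -> urllib.request.Request:
    """Build the POST request for an agent execute endpoint
    (path shape taken from the curl example: /api/v1/execute/<node>.<reasoner>)."""
    return urllib.request.Request(
        f"{base_url}/api/v1/execute/{target}",
        data=json.dumps({"input": payload}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = execute_request(
    "http://localhost:8080", "researcher.summarize", {"url": "https://example.com"}
)
# result = json.loads(urllib.request.urlopen(req).read())
```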
AgentField isn't a framework you extend. It's infrastructure you deploy on.
See how AgentField compares to agent frameworks →
| | Agent Frameworks | DAG/Workflow Engines | AgentField |
|---|---|---|---|
| Architecture | Monolithic scripts | Predetermined pipelines | Distributed microservices |
| Execution | Synchronous, blocking | Scheduled, batch | Async-native (webhooks, SSE, WebSocket) |
| Coordination | Manual message passing | Central scheduler | Service mesh with discovery |
| Memory | External (Redis, Pinecone) | External | Built-in + vector search |
| Multi-language | SDK-locked | Config files | Native REST APIs (any language) |
| Long-running | Timeouts, hacks | Designed for batch | Hours/days, durable execution |
| Audit | Logs (trust me) | Logs | Cryptographic proofs (W3C DIDs/VCs) |
AgentField SDKs at Scale (100,000 handlers)
| | Go | TypeScript | Python |
|---|---|---|---|
| Registration | 17 ms | 14 ms | ~5.7 s |
| Memory/Handler | 280 B | 276 B | 7.5 KB |
| Throughput | 8.2M req/s | 4.0M req/s | 6.7M req/s |
vs Other Frameworks (1,000 handlers, same language)
| | AgentField | LangChain | CrewAI | Mastra |
|---|---|---|---|---|
| Registration | 57 ms (py) / 14 ms (ts) | 483 ms | 200 ms | 365 ms |
| Memory/Handler | 7.5 KB (py) / 276 B (ts) | 10.8 KB | 14.3 KB | 1.8 KB |
Apple M1. Handler registration + invocation overhead (no LLM). Methodology →
Not a DAG builder. Agents decide what to do next—dynamically. The control plane tracks the execution graph automatically.
Not tool attachment. You don't just give an LLM a bag of MCP tools and hope. You define Reasoners (AI logic) and Skills (deterministic code) with explicit boundaries. Learn more.
- Control Plane: Stateless Go service that routes, tracks, and orchestrates
- Async by Default: Fire-and-forget or wait. Webhooks with retries. SSE streaming.
- Long-Running: Tasks that run for hours or days with durable checkpointing
- Backpressure: Built-in queuing and circuit breakers
- Discovery: Agents register capabilities. Others find them via API.
- Cross-Agent Calls: `app.call("other.reasoner", input={...})` routed through the control plane
- Workflow DAGs: Every execution path visualized automatically
- Shared Memory: Scoped to global, agent, session, or run—with vector search
- W3C DIDs: Every agent gets a cryptographic identity
- Verifiable Credentials: Tamper-proof receipts for every action
- Prometheus Metrics: `/metrics` endpoint out of the box
- Policy Enforcement: "Only agents signed by 'Finance' can access this tool"
Explore the full feature set →
When agents move from answering questions to making decisions (approving refunds, coordinating supply chains, moving money), "check the logs" isn't enough.
AgentField gives every agent a W3C Decentralized Identifier (DID)—a cryptographic identity. Every execution produces a Verifiable Credential: a tamper-proof receipt showing exactly what happened, who authorized it, and the full delegation chain.
```shell
# Export audit trail for any workflow
curl http://localhost:8080/api/ui/v1/workflows/{workflow_id}/vc-chain
```

For compliance teams: mathematical proof, not trust.
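AgentField's receipts are W3C Verifiable Credentials; the underlying tamper-evidence idea can be illustrated with a much simpler HMAC-signed receipt. This is a conceptual sketch only, not the actual VC format:

```python
import hashlib
import hmac
import json


def sign_receipt(secret: bytes, action: dict) -> dict:
    """Attach a signature over a canonical encoding of the action,
    so any later modification is detectable."""
    body = json.dumps(action, sort_keys=True).encode()
    return {"action": action, "sig": hmac.new(secret, body, hashlib.sha256).hexdigest()}


def verify_receipt(secret: bytes, receipt: dict) -> bool:
    body = json.dumps(receipt["action"], sort_keys=True).encode()
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["sig"])


secret = b"agent-signing-key"
receipt = sign_receipt(
    secret, {"agent": "researcher", "action": "summarize", "url": "https://example.com"}
)
assert verify_receipt(secret, receipt)             # untouched receipt verifies
receipt["action"]["url"] = "https://evil.example"  # tampering...
assert not verify_receipt(secret, receipt)         # ...is detected
```

Real VCs go further, using public-key signatures tied to each agent's DID so a verifier doesn't need a shared secret, but the "edit anything and verification fails" property is the same.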
📖 Read: IAM for AI Backends — Why OAuth can't secure autonomous software, and what replaces it.
Learn more about the core architecture →
- You're building an AI backend - agents that make decisions, not just answer questions
- You're building multi-agent systems that need to coordinate
- You need production infrastructure: async, retries, observability
- You want agents as standard backend services with REST APIs
- You need audit trails for compliance or debugging
- You have multiple teams deploying agents independently
- You're building a single chatbot (prompt-orchestration frameworks like LangChain, CrewAI, or LlamaIndex are great for that)
- You're prototyping and don't need production concerns yet
When you're ready to ship agents to production, we'll be here.
If you're a backend engineer shipping AI into production who wants standard APIs, not magic; a platform team that doesn't want to build another homegrown orchestrator; an enterprise team in a regulated industry (finance, health) that needs audit trails; or a frontend developer who just wants to fetch() an agent without Python headaches, then AgentField is built for you.
- 📖 The AI Backend — Why every backend needs a reasoning layer
- 📖 IAM for AI Backends — Why agents need identity, not just API keys
- 📚 Documentation — Full technical reference
- 🚀 Examples — Production patterns and use cases
Agents are becoming part of production backends. They need identity, governance, and infrastructure. That's why AgentField exists.
Built by developers who got tired of duct-taping agents together.
agentfield.ai
Similar Open Source Tools
alphora
Alphora is a full-stack framework for building production AI agents, providing agent orchestration, prompt engineering, tool execution, memory management, streaming, and deployment with an async-first, OpenAI-compatible design. It offers features like agent derivation, reasoning-action loop, async streaming, visual debugger, OpenAI compatibility, multimodal support, tool system with zero-config tools and type safety, prompt engine with dynamic prompts, memory and storage management, sandbox for secure execution, deployment as API, and more. Alphora allows users to build sophisticated AI agents easily and efficiently.
handit.ai
Handit.ai is an autonomous engineer tool designed to fix AI failures 24/7. It catches failures, writes fixes, tests them, and ships PRs automatically. It monitors AI applications, detects issues, generates fixes, tests them against real data, and ships them as pull requests—all automatically. Users can write JavaScript, TypeScript, Python, and more, and the tool automates what used to require manual debugging and firefighting.
dexto
Dexto is a lightweight runtime for creating and running AI agents that turn natural language into real-world actions. It serves as the missing intelligence layer for building AI applications, standalone chatbots, or as the reasoning engine inside larger products. Dexto features a powerful CLI and Web UI for running AI agents, supports multiple interfaces, allows hot-swapping of LLMs from various providers, connects to remote tool servers via the Model Context Protocol, is config-driven with version-controlled YAML, offers production-ready core features, extensibility for custom services, and enables multi-agent collaboration via MCP and A2A.
AutoAgents
AutoAgents is a cutting-edge multi-agent framework built in Rust that enables the creation of intelligent, autonomous agents powered by Large Language Models (LLMs) and Ractor. Designed for performance, safety, and scalability, AutoAgents provides a robust foundation for building complex AI systems that can reason, act, and collaborate. With AutoAgents you can create cloud-native agents, edge-native agents, and hybrid models. It is extensible enough that other ML models can be combined into complex pipelines using the actor framework.
osaurus
Osaurus is a native, Apple Silicon-only local LLM server built on Apple's MLX for maximum performance on M‑series chips. It is a SwiftUI app + SwiftNIO server with OpenAI‑compatible and Ollama‑compatible endpoints. The tool supports native MLX text generation, model management, streaming and non‑streaming chat completions, OpenAI‑compatible function calling, real-time system resource monitoring, and path normalization for API compatibility. Osaurus is designed for macOS 15.5+ and Apple Silicon (M1 or newer) with Xcode 16.4+ required for building from source.
lighteval
LightEval is a lightweight LLM evaluation suite that Hugging Face has been using internally with the recently released LLM data processing library datatrove and LLM training library nanotron. We're releasing it with the community in the spirit of building in the open. Note that it is still very much early so don't expect 100% stability ^^' In case of problems or question, feel free to open an issue!
oh-my-pi
oh-my-pi is an AI coding agent for the terminal, providing tools for interactive coding, AI-powered git commits, Python code execution, LSP integration, time-traveling streamed rules, interactive code review, task management, interactive questioning, custom TypeScript slash commands, universal config discovery, MCP & plugin system, web search & fetch, SSH tool, Cursor provider integration, multi-credential support, image generation, TUI overhaul, edit fuzzy matching, and more. It offers a modern terminal interface with smart session management, supports multiple AI providers, and includes various tools for coding, task management, code review, and interactive questioning.
AgentNeo
AgentNeo is an advanced, open-source Agentic AI Application Observability, Monitoring, and Evaluation Framework designed to provide deep insights into AI agents, Large Language Model (LLM) calls, and tool interactions. It offers robust logging, visualization, and evaluation capabilities to help debug and optimize AI applications with ease. With features like tracing LLM calls, monitoring agents and tools, tracking interactions, detailed metrics collection, flexible data storage, simple instrumentation, interactive dashboard, project management, execution graph visualization, and evaluation tools, AgentNeo empowers users to build efficient, cost-effective, and high-quality AI-driven solutions.
aegra
Aegra is a self-hosted AI agent backend platform that provides LangGraph power without vendor lock-in. Built with FastAPI + PostgreSQL, it offers complete control over agent orchestration for teams looking to escape vendor lock-in, meet data sovereignty requirements, enable custom deployments, and optimize costs. Aegra is Agent Protocol compliant and perfect for teams seeking a free, self-hosted alternative to LangGraph Platform with zero lock-in, full control, and compatibility with existing LangGraph Client SDK.
llm4s
LLM4S provides a simple, robust, and scalable framework for building Large Language Models (LLM) applications in Scala. It aims to leverage Scala's type safety, functional programming, JVM ecosystem, concurrency, and performance advantages to create reliable and maintainable AI-powered applications. The framework supports multi-provider integration, execution environments, error handling, Model Context Protocol (MCP) support, agent frameworks, multimodal generation, and Retrieval-Augmented Generation (RAG) workflows. It also offers observability features like detailed trace logging, monitoring, and analytics for debugging and performance insights.
presenton
Presenton is an open-source AI presentation generator and API that allows users to create professional presentations locally on their devices. It offers complete control over the presentation workflow, including custom templates, AI template generation, flexible generation options, and export capabilities. Users can use their own API keys for various models, integrate with Ollama for local model running, and connect to OpenAI-compatible endpoints. The tool supports multiple providers for text and image generation, runs locally without cloud dependencies, and can be deployed as a Docker container with GPU support.
agor
Agor is a multiplayer spatial canvas where you coordinate multiple AI coding assistants on parallel tasks, with GitHub-linked worktrees, automated workflow zones, and isolated test environments—all running simultaneously. It allows users to run multiple AI coding sessions, manage git worktrees, track AI conversations, and visualize team's work in real-time. Agor provides features like Agent Swarm Control, Multiplayer Spatial Canvas, Session Trees, Zone Triggers, Isolated Development Environments, Real-Time Strategy for AI Teams, and Mobile-Friendly Prompting. It is designed to streamline parallel PR workflows and enhance collaboration among AI teams.
airflow-client-python
The Apache Airflow Python Client provides a range of REST API endpoints for managing Airflow metadata objects. It supports CRUD operations for resources, with endpoints accepting and returning JSON. Users can create, read, update, and delete resources. The API design follows conventions with consistent naming and field formats. Update mask is available for patch endpoints to specify fields for update. API versioning is not synchronized with Airflow releases, and changes go through a deprecation phase. The tool supports various authentication methods and error responses follow RFC 7807 format.
agentscope
AgentScope is an agent-oriented programming tool for building LLM (Large Language Model) applications. It provides transparent development, realtime steering, agentic tools management, model agnostic programming, LEGO-style agent building, multi-agent support, and high customizability. The tool supports async invocation, reasoning models, streaming returns, async/sync tool functions, user interruption, group-wise tools management, streamable transport, stateful/stateless mode MCP client, distributed and parallel evaluation, multi-agent conversation management, and fine-grained MCP control. AgentScope Studio enables tracing and visualization of agent applications. The tool is highly customizable and encourages customization at various levels.
shimmy
Shimmy is a 5.1MB single-binary local inference server providing OpenAI-compatible endpoints for GGUF models. It offers fast, reliable AI inference with sub-second responses, zero configuration, and automatic port management. Perfect for developers seeking privacy, cost-effectiveness, speed, and easy integration with popular tools like VSCode and Cursor. Shimmy is designed to be invisible infrastructure that simplifies local AI development and deployment.
For similar tasks
OpenAGI
OpenAGI is an AI agent creation package designed for researchers and developers to create intelligent agents using advanced machine learning techniques. The package provides tools and resources for building and training AI models, enabling users to develop sophisticated AI applications. With a focus on collaboration and community engagement, OpenAGI aims to facilitate the integration of AI technologies into various domains, fostering innovation and knowledge sharing among experts and enthusiasts.
GPTSwarm
GPTSwarm is a graph-based framework for LLM-based agents that enables the creation of LLM-based agents from graphs and facilitates the customized and automatic self-organization of agent swarms with self-improvement capabilities. The library includes components for domain-specific operations, graph-related functions, LLM backend selection, memory management, and optimization algorithms to enhance agent performance and swarm efficiency. Users can quickly run predefined swarms or utilize tools like the file analyzer. GPTSwarm supports local LM inference via LM Studio, allowing users to run with a local LLM model. The framework has been accepted by ICML2024 and offers advanced features for experimentation and customization.
AgentForge
AgentForge is a low-code framework tailored for the rapid development, testing, and iteration of AI-powered autonomous agents and Cognitive Architectures. It is compatible with a range of LLM models and offers flexibility to run different models for different agents based on specific needs. The framework is designed for seamless extensibility and database-flexibility, making it an ideal playground for various AI projects. AgentForge is a beta-testing ground and future-proof hub for crafting intelligent, model-agnostic autonomous agents.
atomic_agents
Atomic Agents is a modular and extensible framework designed for creating powerful applications. It follows the principles of Atomic Design, emphasizing small and single-purpose components. Leveraging Pydantic for data validation and serialization, the framework offers a set of tools and agents that can be combined to build AI applications. It depends on the Instructor package and supports various APIs like OpenAI, Cohere, Anthropic, and Gemini. Atomic Agents is suitable for developers looking to create AI agents with a focus on modularity and flexibility.
LongRoPE
LongRoPE is a method to extend the context window of large language models (LLMs) beyond 2 million tokens. It identifies and exploits non-uniformities in positional embeddings to enable 8x context extension without fine-tuning. The method utilizes a progressive extension strategy with 256k fine-tuning to reach a 2048k context. It adjusts embeddings for shorter contexts to maintain performance within the original window size. LongRoPE has been shown to be effective in maintaining performance across various tasks from 4k to 2048k context lengths.
ax
Ax is a Typescript library that allows users to build intelligent agents inspired by agentic workflows and the Stanford DSP paper. It seamlessly integrates with multiple Large Language Models (LLMs) and VectorDBs to create RAG pipelines or collaborative agents capable of solving complex problems. The library offers advanced features such as streaming validation, multi-modal DSP, and automatic prompt tuning using optimizers. Users can easily convert documents of any format to text, perform smart chunking, embedding, and querying, and ensure output validation while streaming. Ax is production-ready, written in Typescript, and has zero dependencies.
Awesome-AI-Agents
Awesome-AI-Agents is a curated list of projects, frameworks, benchmarks, platforms, and related resources focused on autonomous AI agents powered by Large Language Models (LLMs). The repository showcases a wide range of applications, multi-agent task solver projects, agent society simulations, and advanced components for building and customizing AI agents. It also includes frameworks for orchestrating role-playing, evaluating LLM-as-Agent performance, and connecting LLMs with real-world applications through platforms and APIs. Additionally, the repository features surveys, paper lists, and blogs related to LLM-based autonomous agents, making it a valuable resource for researchers, developers, and enthusiasts in the field of AI.
CodeFuse-muAgent
CodeFuse-muAgent is a Multi-Agent framework designed to streamline Standard Operating Procedure (SOP) orchestration for agents. It integrates toolkits, code libraries, knowledge bases, and sandbox environments for rapid construction of complex Multi-Agent interactive applications. The framework enables efficient execution and handling of multi-layered and multi-dimensional tasks.
For similar jobs
sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.
teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.
ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.
classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.
chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.
BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students
uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.
griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.

