
agentpress
AI Agents API Server Starter; FastAPI, Supabase, Redis
Stars: 67

README:
AgentPress is a collection of simple but powerful utilities that serve as building blocks for creating AI agents. Plug, play, and customize.
See How It Works below for an explanation of the agent flow.
- Threads: Manage Messages[] as threads.
- Tools: Register code as callable tools, with definitions in both OpenAPI and XML formats
- Response Processing: Support for the LLM's native OpenAPI function calling as well as XML-based tool calling
- State Management: Thread-safe JSON key-value state management
- LLM: 100+ LLMs using the OpenAI I/O format, powered by LiteLLM
- Install the package:
```bash
pip install agentpress
```
- Initialize AgentPress in your project:
```bash
agentpress init
```
Creates an `agentpress` directory with all the core utilities. Check out File Overview for explanations of the generated files.
- If you selected the example agent during initialization:
  - Creates an `agent.py` file with a web development agent example
  - Creates a `tools` directory with example tools:
    - `files_tool.py`: File operations (create/update files, read directory and load into state)
    - `terminal_tool.py`: Terminal command execution
  - Creates a `workspace` directory for the agent to work in
- Set up your environment variables in a `.env` file:
```bash
OPENAI_API_KEY=your_key_here
ANTHROPIC_API_KEY=your_key_here
GROQ_API_KEY=your_key_here
```
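LLM calls go through LiteLLM, which reads these provider keys from the environment. If you run the examples as standalone scripts, one way to load the `.env` file is the `python-dotenv` package; this is a sketch of that approach, not something AgentPress requires or bundles:

```python
# Sketch: load API keys from .env before using AgentPress.
# Assumes python-dotenv is installed (pip install python-dotenv).
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory into os.environ
assert os.getenv("OPENAI_API_KEY") or os.getenv("ANTHROPIC_API_KEY"), \
    "Set at least one provider key in .env"
```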
- Create a calculator tool with OpenAPI schema:
```python
from agentpress.tool import Tool, ToolResult, openapi_schema

class CalculatorTool(Tool):
    @openapi_schema({
        "type": "function",
        "function": {
            "name": "add",
            "description": "Add two numbers",
            "parameters": {
                "type": "object",
                "properties": {
                    "a": {"type": "number"},
                    "b": {"type": "number"}
                },
                "required": ["a", "b"]
            }
        }
    })
    async def add(self, a: float, b: float) -> ToolResult:
        try:
            result = a + b
            return self.success_response(f"The sum is {result}")
        except Exception as e:
            return self.fail_response(f"Failed to add numbers: {str(e)}")
```
- Or create a tool with XML schema:
```python
from agentpress.tool import Tool, ToolResult, xml_schema

class FilesTool(Tool):
    @xml_schema(
        tag_name="create-file",
        mappings=[
            {"param_name": "file_path", "node_type": "attribute", "path": "."},
            {"param_name": "file_contents", "node_type": "content", "path": "."}
        ],
        example='''
        <create-file file_path="path/to/file">
        File contents go here
        </create-file>
        '''
    )
    async def create_file(self, file_path: str, file_contents: str) -> ToolResult:
        # Implementation here
        pass
```
- Use the Thread Manager with tool execution:
```python
import asyncio
from agentpress.thread_manager import ThreadManager
from calculator_tool import CalculatorTool

async def main():
    # Initialize thread manager and add tools
    manager = ThreadManager()
    manager.add_tool(CalculatorTool)

    # Create a new thread
    thread_id = await manager.create_thread()

    # Add your message
    await manager.add_message(thread_id, {
        "role": "user",
        "content": "What's 2 + 2?"
    })

    # Run with streaming and tool execution
    response = await manager.run_thread(
        thread_id=thread_id,
        system_message={
            "role": "system",
            "content": "You are a helpful assistant with calculation abilities."
        },
        model_name="anthropic/claude-3-5-sonnet-latest",
        execute_tools=True,
        native_tool_calling=True,      # as opposed to xml_tool_calling=True
        parallel_tool_execution=True   # execute tools in parallel rather than sequentially
    )

asyncio.run(main())
```
- View conversation threads in a web UI:
```bash
streamlit run agentpress/thread_viewer_ui.py
```
Each AI agent iteration follows a clear, modular flow:

1. Message & LLM Handling
   - Messages are managed in threads via `ThreadManager`
   - LLM API calls are made through a unified interface (`llm.py`)
   - Supports streaming responses for real-time interaction

2. Response Processing
   - The LLM returns both content and tool calls
   - Content is streamed in real time
   - Tool calls are parsed using either:
     - Standard OpenAPI function calling
     - XML-based tool definitions
     - Custom parsers (extend `ToolParserBase`)

3. Tool Execution
   - Tools can be executed:
     - In real time during streaming (`execute_tools_on_stream`) or after the complete response
     - In parallel or sequential order
   - Supports both standard and XML tool formats
   - Extensible through `ToolExecutorBase`

4. Results Management
   - Results from both content and tool executions are handled
   - Supports different result formats (standard/XML)
   - Customizable through `ResultsAdderBase`

This modular architecture allows you to:
- Use standard OpenAPI function calling
- Switch to XML-based tool definitions
- Create custom processors by extending base classes
- Mix and match different approaches (see the sketch below)
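For example, here is a minimal sketch of switching the quick-start's `run_thread` call to XML-based tool definitions with tools executed on-stream. It reuses only parameter names that appear elsewhere in this README (`xml_tool_calling`, `execute_tools_on_stream`); exact defaults and signatures may differ in your installed version:

```python
# Inside the same main() coroutine as the quick-start example above.
# xml_tool_calling and execute_tools_on_stream are named in this README;
# treat exact defaults and signatures as version-dependent.
response = await manager.run_thread(
    thread_id=thread_id,
    system_message={
        "role": "system",
        "content": "You are a web development agent. Use <create-file> to write files."
    },
    model_name="anthropic/claude-3-5-sonnet-latest",
    execute_tools=True,
    xml_tool_calling=True,         # XML tool definitions instead of native function calling
    execute_tools_on_stream=True,  # run tool calls as soon as they are parsed from the stream
)
```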
`llm.py`: LLM API interface using LiteLLM. Supports 100+ LLMs with an OpenAI-compatible format. Includes streaming, retry logic, and error handling.
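Since this wraps LiteLLM, any provider LiteLLM supports can be addressed with one call shape. A minimal illustration using LiteLLM directly (not AgentPress's own wrapper):

```python
# Sketch: the OpenAI-compatible call shape LiteLLM exposes for 100+ providers;
# agentpress's llm.py wraps calls like this one.
import litellm

response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-latest",  # "provider/model" routing string
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)
```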
`thread_manager.py`: Manages conversation threads with support for:
- Message history management
- Tool registration and execution
- Streaming responses
- Both OpenAPI and XML tool calling patterns
`tool.py`: Base infrastructure for tools, with:
- OpenAPI schema decorator for standard function calling
- XML schema decorator for XML-based tool calls
- Standardized ToolResult responses
Tool registry: Central registry for tool management:
- Registers both OpenAPI and XML tools
- Maintains tool schemas and implementations
- Provides tool lookup and validation
State manager: Thread-safe state persistence (sketched below):
- JSON-based key-value storage
- Atomic operations with locking
- Automatic file handling
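A minimal sketch of this pattern, written independently of AgentPress; the class and method names here are illustrative, not the library's actual API:

```python
# Sketch: an async-safe JSON key-value store of the kind described above.
# Illustrative only; not agentpress's actual state manager API.
import asyncio
import json
from pathlib import Path

class SimpleJsonState:
    def __init__(self, path: str = "state.json"):
        self._path = Path(path)
        self._lock = asyncio.Lock()  # serializes read-modify-write cycles

    async def set(self, key: str, value) -> None:
        async with self._lock:
            data = json.loads(self._path.read_text()) if self._path.exists() else {}
            data[key] = value
            tmp = self._path.with_suffix(".tmp")
            tmp.write_text(json.dumps(data, indent=2))
            tmp.replace(self._path)  # atomic rename: readers never see a half-written file

    async def get(self, key: str, default=None):
        async with self._lock:
            if not self._path.exists():
                return default
            return json.loads(self._path.read_text()).get(key, default)
```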
Response processor: Handles LLM response processing, with support for:
- Streaming and complete responses
- Tool call extraction and execution
- Result formatting and message management
Standard tool processing:
- `standard_tool_parser.py`: Parses OpenAPI function calls
- `standard_tool_executor.py`: Executes standard tool calls
- `standard_results_adder.py`: Manages standard results

XML tool processing:
- `xml_tool_parser.py`: Parses XML-formatted tool calls
- `xml_tool_executor.py`: Executes XML tool calls
- `xml_results_adder.py`: Manages XML results
- Plug & Play: Start with our defaults, then customize to your needs.
- Agnostic: Built on LiteLLM, supporting any LLM provider. Minimal opinions, maximum flexibility.
- Simplicity: Clean, readable code that's easy to understand and modify.
- No Lock-in: Take full ownership of the code. Copy what you need directly into your codebase.
We welcome contributions! Feel free to:
- Submit issues for bugs or suggestions
- Fork the repository and send pull requests
- Share how you've used AgentPress in your projects
- Clone:
```bash
git clone https://github.com/kortix-ai/agentpress
cd agentpress
```
- Install dependencies:
```bash
pip install poetry
poetry install
```
- For quick testing:
```bash
pip install -e .
```
Built with ❤️ by Kortix AI Corp
Similar Open Source Tools

pocketgroq
PocketGroq is a tool that provides advanced functionalities for text generation, web scraping, web search, and AI response evaluation. It includes features like an Autonomous Agent for answering questions, web crawling and scraping capabilities, enhanced web search functionality, and flexible integration with Ollama server. Users can customize the agent's behavior, evaluate responses using AI, and utilize various methods for text generation, conversation management, and Chain of Thought reasoning. The tool offers comprehensive methods for different tasks, such as initializing RAG, error handling, and tool management. PocketGroq is designed to enhance development processes and enable the creation of AI-powered applications with ease.

LLMVoX
LLMVoX is a lightweight 30M-parameter, LLM-agnostic, autoregressive streaming Text-to-Speech (TTS) system designed to convert text outputs from Large Language Models into high-fidelity streaming speech with low latency. It achieves significantly lower Word Error Rate compared to speech-enabled LLMs while operating at comparable latency and speech quality. Key features include being lightweight & fast with only 30M parameters, LLM-agnostic for easy integration with existing models, multi-queue streaming for continuous speech generation, and multilingual support for easy adaptation to new languages.

langchainrb
Langchain.rb is a Ruby library that makes it easy to build LLM-powered applications. It provides a unified interface to a variety of LLMs, vector search databases, and other tools, making it easy to build and deploy RAG (Retrieval Augmented Generation) systems and assistants. Langchain.rb is open source and available under the MIT License.

mcphub.nvim
MCPHub.nvim is a powerful Neovim plugin that integrates MCP (Model Context Protocol) servers into your workflow. It offers a centralized config file for managing servers and tools, with an intuitive UI for testing resources. Ideal for LLM integration, it provides programmatic API access and interactive testing through the `:MCPHub` command.

clarifai-python
The Clarifai Python SDK offers a comprehensive set of tools to integrate Clarifai's AI platform, leveraging computer vision capabilities like classification, detection, and segmentation, and natural language capabilities like classification, summarization, generation, and Q&A, into your applications. With just a few lines of code, you can leverage cutting-edge artificial intelligence to unlock valuable insights from visual and textual content.

ai-gateway
LangDB AI Gateway is an open-source enterprise AI gateway built in Rust. It provides a unified interface to all LLMs using the OpenAI API format, focusing on high performance, enterprise readiness, and data control. The gateway offers features like comprehensive usage analytics, cost tracking, rate limiting, data ownership, and detailed logging. It supports various LLM providers and provides OpenAI-compatible endpoints for chat completions, model listing, embeddings generation, and image generation. Users can configure advanced settings, such as rate limiting, cost control, dynamic model routing, and observability with OpenTelemetry tracing. The gateway can be run with Docker Compose and integrated with MCP tools for server communication.

LightRAG
LightRAG is a repository hosting the code for LightRAG, a system that supports seamless integration of custom knowledge graphs, Oracle Database 23ai, Neo4J for storage, and multiple file types. It includes features like entity deletion, batch insert, incremental insert, and graph visualization. LightRAG provides an API server implementation for RESTful API access to RAG operations, allowing users to interact with it through HTTP requests. The repository also includes evaluation scripts, code for reproducing results, and a comprehensive code structure.

lumen
Lumen is a command-line tool that leverages AI to enhance your git workflow. It assists in generating commit messages, understanding changes, interactive searching, and analyzing impacts without the need for an API key. With smart commit messages, git history insights, interactive search, change analysis, and rich markdown output, Lumen offers a seamless and flexible experience for users across various git workflows.

solana-agent-kit
Solana Agent Kit is an open-source toolkit designed for connecting AI agents to Solana protocols. It enables agents, regardless of the model used, to autonomously perform various Solana actions such as trading tokens, launching new tokens, lending assets, sending compressed airdrops, executing blinks, and more. The toolkit integrates core blockchain features like token operations, NFT management via Metaplex, DeFi integration, Solana blinks, AI integration features with LangChain, autonomous modes, and AI tools. It provides ready-to-use tools for blockchain operations, supports autonomous agent actions, and offers features like memory management, real-time feedback, and error handling. Solana Agent Kit facilitates tasks such as deploying tokens, creating NFT collections, swapping tokens, lending tokens, staking SOL, and sending SPL token airdrops via ZK compression. It also includes functionalities for fetching price data from Pyth and relies on key Solana and Metaplex libraries for its operations.

aws-mcp
AWS MCP is a Model Context Protocol (MCP) server that facilitates interactions between AI assistants and AWS environments. It allows for natural language querying and management of AWS resources during conversations. The server supports multiple AWS profiles, SSO authentication, multi-region operations, and secure credential handling. Users can locally execute commands with their AWS credentials, enhancing the conversational experience with AWS resources.

e2m
E2M is a Python library that can parse and convert various file types into Markdown format. It supports the conversion of multiple file formats, including doc, docx, epub, html, htm, url, pdf, ppt, pptx, mp3, and m4a. The ultimate goal of the E2M project is to provide high-quality data for Retrieval-Augmented Generation (RAG) and model training or fine-tuning. The core architecture consists of a Parser responsible for parsing various file types into text or image data, and a Converter responsible for converting text or image data into Markdown format.

arxiv-mcp-server
The ArXiv MCP Server acts as a bridge between AI assistants and arXiv's research repository, enabling AI models to search for and access papers programmatically through the Model Context Protocol (MCP). It offers features like paper search, access, listing, local storage, and research prompts. Users can install it via Smithery or manually for Claude Desktop. The server provides tools for paper search, download, listing, and reading, along with specialized prompts for paper analysis. Configuration can be done through environment variables, and testing is supported with a test suite. The tool is released under the MIT License and is developed by the Pearl Labs Team.

daydreams
Daydreams is a generative agent library designed for playing onchain games by injecting context. It is chain agnostic and allows users to perform onchain tasks, including playing any onchain game. The tool is lightweight and powerful, enabling users to define game context, register actions, set goals, monitor progress, and integrate with external agents. Daydreams aims to be 'lite' and 'composable', dynamically generating code needed to play games. It is currently in pre-alpha stage, seeking feedback and collaboration for further development.