
mcp-agent
Build effective agents using Model Context Protocol and simple workflow patterns
Stars: 7412

mcp-agent is a simple, composable framework designed to build agents using the Model Context Protocol. It handles the lifecycle of MCP server connections and implements patterns for building production-ready AI agents in a composable way. The framework also includes OpenAI's Swarm pattern for multi-agent orchestration in a model-agnostic manner, making it the simplest way to build robust agent applications. It is purpose-built for the shared protocol MCP, lightweight, and closer to an agent pattern library than a framework. mcp-agent allows developers to focus on the core business logic of their AI applications by handling mechanics such as server connections, working with LLMs, and supporting external signals like human input.
README:
Build effective agents with Model Context Protocol using simple, composable patterns.
Examples | Building Effective Agents | MCP
mcp-agent is a simple, composable framework to build agents using Model Context Protocol.
Inspiration: Anthropic announced 2 foundational updates for AI application developers:
- Model Context Protocol - a standardized interface to let any software be accessible to AI assistants via MCP servers.
- Building Effective Agents - a seminal writeup on simple, composable patterns for building production-ready AI agents.
mcp-agent puts these two foundational pieces into an AI application framework:
- It handles the pesky business of managing the lifecycle of MCP server connections so you don't have to.
- It implements every pattern described in Building Effective Agents, and does so in a composable way, allowing you to chain these patterns together.
- Bonus: It implements OpenAI's Swarm pattern for multi-agent orchestration, but in a model-agnostic way.
Altogether, this is the simplest and easiest way to build robust agent applications. Much like MCP, this project is in early development. We welcome all kinds of contributions, feedback and your help in growing this to become a new standard.
We recommend using uv to manage your Python projects:

```bash
uv add "mcp-agent"
```

Alternatively:

```bash
pip install mcp-agent
```
[!TIP] The examples directory has several example applications to get started with. To run an example, clone this repo, then:

```bash
cd examples/basic/mcp_basic_agent # Or any other example

# Option A: secrets YAML
# cp mcp_agent.secrets.yaml.example mcp_agent.secrets.yaml && edit mcp_agent.secrets.yaml

# Option B: .env
cp .env.example .env && edit .env

uv run main.py
```
Here is a basic "finder" agent that uses the fetch and filesystem servers to look up a file, read a blog post, and write a tweet. Example: finder_agent.py
```python
import asyncio
import os

from mcp_agent.app import MCPApp
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM

app = MCPApp(name="hello_world_agent")

async def example_usage():
    async with app.run() as mcp_agent_app:
        logger = mcp_agent_app.logger

        # This agent can read the filesystem or fetch URLs
        finder_agent = Agent(
            name="finder",
            instruction="""You can read local files or fetch URLs.
                Return the requested information when asked.""",
            server_names=["fetch", "filesystem"],  # MCP servers this Agent can use
        )

        async with finder_agent:
            # Automatically initializes the MCP servers and adds their tools for LLM use
            tools = await finder_agent.list_tools()
            logger.info("Tools available:", data=tools)

            # Attach an OpenAI LLM to the agent (defaults to GPT-4o)
            llm = await finder_agent.attach_llm(OpenAIAugmentedLLM)

            # This will perform a file lookup and read using the filesystem server
            result = await llm.generate_str(
                message="Show me what's in README.md verbatim"
            )
            logger.info(f"README.md contents: {result}")

            # Uses the fetch server to fetch the content from URL
            result = await llm.generate_str(
                message="Print the first two paragraphs from https://www.anthropic.com/research/building-effective-agents"
            )
            logger.info(f"Blog intro: {result}")

            # Multi-turn interactions by default
            result = await llm.generate_str("Summarize that in a 128-char tweet")
            logger.info(f"Tweet: {result}")

if __name__ == "__main__":
    asyncio.run(example_usage())
```
mcp_agent.config.yaml:

```yaml
execution_engine: asyncio
logger:
  transports: [console] # You can use [file, console] for both
  level: debug
  path: "logs/mcp-agent.jsonl" # Used for file transport
  # For dynamic log filenames:
  # path_settings:
  #   path_pattern: "logs/mcp-agent-{unique_id}.jsonl"
  #   unique_id: "timestamp" # Or "session_id"
  #   timestamp_format: "%Y%m%d_%H%M%S"

mcp:
  servers:
    fetch:
      command: "uvx"
      args: ["mcp-server-fetch"]
    filesystem:
      command: "npx"
      args:
        [
          "-y",
          "@modelcontextprotocol/server-filesystem",
          "<add_your_directories>",
        ]

openai:
  # Secrets (API keys, etc.) are stored in an mcp_agent.secrets.yaml file which can be gitignored
  default_model: gpt-4o
```
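The companion `mcp_agent.secrets.yaml` holds the credentials referenced above. A minimal sketch, assuming the OpenAI provider from the config; the exact keys depend on which providers you configure:

```yaml
# mcp_agent.secrets.yaml -- gitignore this file (sketch; keys/values are placeholders)
openai:
  api_key: "sk-..."
```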
- Why use mcp-agent?
- Example Applications
- Core Concepts
- Workflow Patterns
- Advanced
- Contributing
- Roadmap
- FAQs
There are too many AI frameworks out there already. But mcp-agent is the only one that is purpose-built for a shared protocol - MCP. It is also the most lightweight, and is closer to an agent pattern library than a framework.
As more services become MCP-aware, you can use mcp-agent to build robust and controllable AI agents that can leverage those services out-of-the-box.
Before we go into the core concepts of mcp-agent, let's show what you can build with it.
In short, you can build any kind of AI application with mcp-agent: multi-agent collaborative workflows, human-in-the-loop workflows, RAG pipelines and more.
You can integrate mcp-agent apps into MCP clients like Claude Desktop.
This app wraps an mcp-agent application inside an MCP server and exposes that server to Claude Desktop. The app exposes agents and workflows that Claude Desktop can invoke to service the user's request.
https://github.com/user-attachments/assets/7807cffd-dba7-4f0c-9c70-9482fd7e0699
This demo shows a multi-agent evaluation task where each agent evaluates aspects of an input poem, and then an aggregator summarizes their findings into a final response.
Details: Starting from a user's request over text, the application:
- dynamically defines agents to do the job
- uses the appropriate workflow to orchestrate those agents (in this case the Parallel workflow)
Link to code: examples/basic/mcp_server_aggregator
[!NOTE] Huge thanks to Jerron Lim (@StreetLamb) for developing and contributing this example!
You can deploy mcp-agent apps using Streamlit.
This app is able to perform read and write actions on Gmail using text prompts -- i.e. read, delete, and send emails, mark as read/unread, etc. It uses an MCP server for Gmail.
https://github.com/user-attachments/assets/54899cac-de24-4102-bd7e-4b2022c956e3
Link to code: gmail-mcp-server
[!NOTE] Huge thanks to Jason Summer (@jasonsum) for developing and contributing this example!
This app uses a Qdrant vector database (via an MCP server) to do Q&A over a corpus of text.
https://github.com/user-attachments/assets/f4dcd227-cae9-4a59-aa9e-0eceeb4acaf4
Link to code: examples/usecases/streamlit_mcp_rag_agent
[!NOTE] Huge thanks to Jerron Lim (@StreetLamb) for developing and contributing this example!
Marimo is a reactive Python notebook that replaces Jupyter and Streamlit. Here's the "file finder" agent from Quickstart implemented in Marimo:
Link to code: examples/usecases/marimo_mcp_basic_agent
[!NOTE] Huge thanks to Akshay Agrawal (@akshayka) for developing and contributing this example!
You can write mcp-agent apps as Python scripts or Jupyter notebooks.
This example demonstrates a multi-agent setup for handling different customer service requests in an airline context using the Swarm workflow pattern. The agents can triage requests, handle flight modifications, cancellations, and lost baggage cases.
https://github.com/user-attachments/assets/b314d75d-7945-4de6-965b-7f21eb14a8bd
Link to code: examples/workflows/workflow_swarm
The following are the building blocks of the mcp-agent framework:
- MCPApp: global state and app configuration
- MCP server management: `gen_client` and `MCPConnectionManager` to easily connect to MCP servers.
- Agent: An Agent is an entity that has access to a set of MCP servers and exposes them to an LLM as tool calls. It has a name and purpose (instruction).
- AugmentedLLM: An LLM that is enhanced with tools provided from a collection of MCP servers. Every Workflow pattern described below is an `AugmentedLLM` itself, allowing you to compose and chain them together.
Everything in the framework is a derivative of these core capabilities.
mcp-agent provides implementations for every pattern in Anthropic’s Building Effective Agents, as well as the OpenAI Swarm pattern.
Each pattern is model-agnostic, and exposed as an `AugmentedLLM`, making everything very composable.
AugmentedLLM is an LLM that has access to MCP servers and functions via Agents.
LLM providers implement the AugmentedLLM interface to expose 3 functions:

- `generate`: Generate message(s) given a prompt, possibly over multiple iterations and making tool calls as needed.
- `generate_str`: Calls `generate` and returns the result as a string output.
- `generate_structured`: Uses Instructor to return the generated result as a Pydantic model.

Additionally, `AugmentedLLM` has memory, to keep track of long- or short-term history.
Example
```python
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm_anthropic import AnthropicAugmentedLLM

finder_agent = Agent(
    name="finder",
    instruction="You are an agent with filesystem + fetch access. Return the requested file or URL contents.",
    server_names=["fetch", "filesystem"],
)

async with finder_agent:
    llm = await finder_agent.attach_llm(AnthropicAugmentedLLM)

    result = await llm.generate_str(
        message="Print the first 2 paragraphs of https://www.anthropic.com/research/building-effective-agents",
        # Can override model, tokens and other defaults
    )
    logger.info(f"Result: {result}")

    # Multi-turn conversation
    result = await llm.generate_str(
        message="Summarize those paragraphs in a 128 character tweet",
    )
    logger.info(f"Result: {result}")
```
Fan-out tasks to multiple sub-agents and fan-in the results. Each subtask is an AugmentedLLM, as is the overall Parallel workflow, meaning each subtask can optionally be a more complex workflow itself.
Example
```python
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm import RequestParams
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM
from mcp_agent.workflows.parallel.parallel_llm import ParallelLLM

proofreader = Agent(name="proofreader", instruction="Review grammar...")
fact_checker = Agent(name="fact_checker", instruction="Check factual consistency...")
style_enforcer = Agent(name="style_enforcer", instruction="Enforce style guidelines...")
grader = Agent(name="grader", instruction="Combine feedback into a structured report.")

parallel = ParallelLLM(
    fan_in_agent=grader,
    fan_out_agents=[proofreader, fact_checker, style_enforcer],
    llm_factory=OpenAIAugmentedLLM,
)

result = await parallel.generate_str("Student short story submission: ...", RequestParams(model="gpt-4o"))
```
Given an input, route to the `top_k` most relevant categories. A category can be an Agent, an MCP server or a regular function.

mcp-agent provides several router implementations, including:

- `EmbeddingRouter`: uses embedding models for classification
- `LLMRouter`: uses LLMs for classification
Example
```python
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM
from mcp_agent.workflows.router.router_llm import LLMRouter

def print_hello_world():
    print("Hello, world!")

finder_agent = Agent(name="finder", server_names=["fetch", "filesystem"])
writer_agent = Agent(name="writer", server_names=["filesystem"])

llm = OpenAIAugmentedLLM()

router = LLMRouter(
    llm=llm,
    agents=[finder_agent, writer_agent],
    functions=[print_hello_world],
)

results = await router.route(  # Also available: route_to_agent, route_to_server
    request="Find and print the contents of README.md verbatim",
    top_k=1,
)
chosen_agent = results[0].result
async with chosen_agent:
    ...
```
A close sibling of Router, the Intent Classifier pattern identifies the `top_k` Intents that most closely match a given input.
Just like a Router, mcp-agent provides both an embedding and LLM-based intent classifier.
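A sketch of what the LLM-based classifier might look like in use; the class, module, and parameter names here are assumptions modeled on the router API above, so check the package for the exact interface:

```python
# Hypothetical sketch -- names below are assumptions, not verified API.
from mcp_agent.workflows.intent_classifier.intent_classifier_base import Intent
from mcp_agent.workflows.intent_classifier.intent_classifier_llm_openai import (
    OpenAILLMIntentClassifier,
)

classifier = OpenAILLMIntentClassifier(
    intents=[
        Intent(name="greeting", description="The user is saying hello"),
        Intent(name="file_request", description="The user wants a file's contents"),
    ],
)

results = await classifier.classify(
    request="Show me README.md",
    top_k=1,  # return the single closest intent
)
```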
One LLM (the “optimizer”) refines a response while another (the “evaluator”) critiques it, iterating until the response meets a quality criterion.
Example
```python
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.evaluator_optimizer.evaluator_optimizer import (
    EvaluatorOptimizerLLM,
    QualityRating,
)
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM

optimizer = Agent(name="cover_letter_writer", server_names=["fetch"], instruction="Generate a cover letter ...")
evaluator = Agent(name="critiquer", instruction="Evaluate clarity, specificity, relevance...")

eo_llm = EvaluatorOptimizerLLM(
    optimizer=optimizer,
    evaluator=evaluator,
    llm_factory=OpenAIAugmentedLLM,
    min_rating=QualityRating.EXCELLENT,  # Keep iterating until the minimum quality bar is reached
)

result = await eo_llm.generate_str("Write a job cover letter for an AI framework developer role at LastMile AI.")
print("Final refined cover letter:", result)
```
A higher-level LLM generates a plan, assigns the plan's tasks to sub-agents, and synthesizes the results. The Orchestrator workflow automatically parallelizes steps that can run concurrently, and blocks on dependencies.
Example
```python
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm import RequestParams
from mcp_agent.workflows.llm.augmented_llm_anthropic import AnthropicAugmentedLLM
from mcp_agent.workflows.orchestrator.orchestrator import Orchestrator

finder_agent = Agent(name="finder", server_names=["fetch", "filesystem"])
writer_agent = Agent(name="writer", server_names=["filesystem"])
proofreader = Agent(name="proofreader", ...)
fact_checker = Agent(name="fact_checker", ...)
style_enforcer = Agent(name="style_enforcer", instruction="Use APA style guide from ...", server_names=["fetch"])

orchestrator = Orchestrator(
    llm_factory=AnthropicAugmentedLLM,
    available_agents=[finder_agent, writer_agent, proofreader, fact_checker, style_enforcer],
)

task = "Load short_story.md, evaluate it, produce a graded_report.md with multiple feedback aspects."
result = await orchestrator.generate_str(task, RequestParams(model="gpt-4o"))
print(result)
```
OpenAI has an experimental multi-agent pattern called Swarm, for which mcp-agent provides a model-agnostic reference implementation.
The mcp-agent Swarm pattern works seamlessly with MCP servers, and is exposed as an `AugmentedLLM`, allowing for composability with the other patterns above.
Example
```python
from mcp_agent.workflows.swarm.swarm import SwarmAgent
from mcp_agent.workflows.swarm.swarm_anthropic import AnthropicSwarm

triage_agent = SwarmAgent(...)
flight_mod_agent = SwarmAgent(...)
lost_baggage_agent = SwarmAgent(...)

# The triage agent decides whether to route to flight_mod_agent or lost_baggage_agent
swarm = AnthropicSwarm(agent=triage_agent, context_variables={...})

test_input = "My bag was not delivered!"
result = await swarm.generate_str(test_input)
print("Result:", result)
```
An example of composability is using an Evaluator-Optimizer workflow as the planner LLM inside the Orchestrator workflow. Generating a high-quality plan is important for robust behavior, and an evaluator-optimizer can help ensure that.
Doing so is seamless in mcp-agent, because each workflow is implemented as an `AugmentedLLM`.
Example
```python
optimizer = Agent(name="plan_optimizer", server_names=[...], instruction="Generate a plan given an objective ...")
evaluator = Agent(name="plan_evaluator", instruction="Evaluate logic, ordering and precision of plan ...")

planner_llm = EvaluatorOptimizerLLM(
    optimizer=optimizer,
    evaluator=evaluator,
    llm_factory=OpenAIAugmentedLLM,
    min_rating=QualityRating.EXCELLENT,
)

orchestrator = Orchestrator(
    llm_factory=AnthropicAugmentedLLM,
    available_agents=[finder_agent, writer_agent, proofreader, fact_checker, style_enforcer],
    planner=planner_llm,  # It's that simple
)
...
```
Signaling: The framework can pause/resume tasks. The agent or LLM might “signal” that it needs user input, so the workflow awaits. A developer may also signal during a workflow to seek approval or review before continuing.

Human Input: If an Agent has a `human_input_callback`, the LLM can call a `__human_input__` tool to request user input mid-workflow.
Example
The Swarm example shows this in action.
```python
from mcp_agent.human_input.console_handler import console_input_callback

lost_baggage = SwarmAgent(
    name="Lost baggage traversal",
    instruction=lambda context_variables: f"""
        {
            FLY_AIR_AGENT_PROMPT.format(
                customer_context=context_variables.get("customer_context", "None"),
                flight_context=context_variables.get("flight_context", "None"),
            )
        }\n Lost baggage policy: policies/lost_baggage_policy.md""",
    functions=[
        escalate_to_agent,
        initiate_baggage_search,
        transfer_to_triage,
        case_resolved,
    ],
    server_names=["fetch", "filesystem"],
    human_input_callback=console_input_callback,  # Request input from the console
)
```
Create an `mcp_agent.config.yaml` and define secrets via either a gitignored `mcp_agent.secrets.yaml` or a local `.env`. In production, prefer `MCP_APP_SETTINGS_PRELOAD` to avoid writing plaintext secrets to disk.
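For local development, the `.env` route is the quickest. A sketch of what such a file might contain; the exact variable names depend on which providers your app is configured to use:

```bash
# .env -- gitignore this file; values are placeholders
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```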
mcp-agent makes it trivial to connect to MCP servers. Create an `mcp_agent.config.yaml` to define server configuration under the `mcp` section:
```yaml
mcp:
  servers:
    fetch:
      command: "uvx"
      args: ["mcp-server-fetch"]
      description: "Fetch content at URLs from the world wide web"
```
Manage the lifecycle of an MCP server within an async context manager:
```python
from mcp_agent.mcp.gen_client import gen_client

async with gen_client("fetch") as fetch_client:
    # Fetch server is initialized and ready to use
    result = await fetch_client.list_tools()

# Fetch server is automatically disconnected/shutdown
```
The `gen_client` function makes it easy to spin up connections to MCP servers.
In many cases, you want an MCP server to stay online for persistent use (e.g. in a multi-step tool use workflow). For persistent connections, use:
- `connect` and `disconnect`

```python
from mcp_agent.mcp.gen_client import connect, disconnect

fetch_client = None
try:
    fetch_client = connect("fetch")
    result = await fetch_client.list_tools()
finally:
    disconnect("fetch")
```
- `MCPConnectionManager`

For even more fine-grained control over server connections, you can use the MCPConnectionManager.
Example
```python
from mcp_agent.context import get_current_context
from mcp_agent.mcp.mcp_connection_manager import MCPConnectionManager

context = get_current_context()
connection_manager = MCPConnectionManager(context.server_registry)

async with connection_manager:
    fetch_client = await connection_manager.get_server("fetch")  # Initializes fetch server
    result = await fetch_client.list_tools()
    fetch_client2 = await connection_manager.get_server("fetch")  # Reuses same server connection

# All servers managed by connection manager are automatically disconnected/shut down
```
`MCPAggregator` acts as a "server-of-servers". It provides a single MCP server interface for interacting with multiple MCP servers. This allows you to expose tools from multiple servers to LLM applications.
Example
```python
from mcp_agent.mcp.mcp_aggregator import MCPAggregator

aggregator = await MCPAggregator.create(server_names=["fetch", "filesystem"])

async with aggregator:
    # combined list of tools exposed by 'fetch' and 'filesystem' servers
    tools = await aggregator.list_tools()

    # namespacing -- invokes the 'fetch' server to call the 'fetch' tool
    fetch_result = await aggregator.call_tool(name="fetch-fetch", arguments={"url": "https://www.anthropic.com/research/building-effective-agents"})

    # no namespacing -- first server in the aggregator exposing that tool wins
    read_file_result = await aggregator.call_tool(name="read_file", arguments={})
```
We welcome any and all kinds of contributions. Please see the CONTRIBUTING guidelines to get started.
There have already been incredible community contributors who are driving this project forward:
- Shaun Smith (@evalstate) -- who has been leading the charge on countless complex improvements, both to mcp-agent and to the MCP ecosystem generally.
- Jerron Lim (@StreetLamb) -- who has contributed countless hours, excellent examples, and great ideas to the project.
- Jason Summer (@jasonsum) -- for identifying several issues and adapting his Gmail MCP server to work with mcp-agent.
We will be adding a detailed roadmap (ideally driven by your feedback). The current set of priorities includes:

- Durable Execution -- allow workflows to pause/resume and serialize state so they can be replayed or paused indefinitely. We are working on integrating Temporal for this purpose.
- Memory -- adding support for long-term memory
- Streaming -- support streaming listeners for iterative progress
- Additional MCP capabilities -- expand beyond tool calls to support:
  - Resources
  - Prompts
  - Notifications
mcp-agent provides a streamlined approach to building AI agents using capabilities exposed by MCP (Model Context Protocol) servers.
MCP is quite low-level, and this framework handles the mechanics of connecting to servers, working with LLMs, handling external signals (like human input) and supporting persistent state via durable execution. That lets you, the developer, focus on the core business logic of your AI application.
Core benefits:
- 🤝 Interoperability: ensures that any tool exposed by any number of MCP servers can seamlessly plug in to your agents.
- ⛓️ Composability & Customizability: implements well-defined workflows, but in a composable way that enables compound workflows, and allows full customization across model provider, logging, orchestrator, etc.
- 💻 Programmatic control flow: keeps things simple as developers just write code instead of thinking in graphs, nodes and edges. For branching logic, you write `if` statements. For cycles, use `while` loops (see the sketch after this list).
- 🖐️ Human Input & Signals: supports pausing workflows for external signals, such as human input, which are exposed as tool calls an Agent can make.
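As a concrete illustration of programmatic control flow, here is a sketch of a retry loop around the quickstart's finder agent. The agent setup mirrors the earlier example; `is_acceptable` is a hypothetical helper you would write yourself:

```python
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM

def is_acceptable(text: str) -> bool:
    # Hypothetical acceptance check -- your own business logic goes here.
    return len(text) <= 280

finder_agent = Agent(
    name="finder",
    instruction="You can read local files or fetch URLs.",
    server_names=["fetch", "filesystem"],
)

async with finder_agent:
    llm = await finder_agent.attach_llm(OpenAIAugmentedLLM)

    # Cycles are just while loops: regenerate until the output passes the check.
    result = await llm.generate_str("Summarize README.md in a tweet")
    while not is_acceptable(result):
        result = await llm.generate_str("That was too long; tighten it to 280 characters")
```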
Do I need an MCP client to use mcp-agent? No -- you can use mcp-agent anywhere, since it handles MCPClient creation for you. This allows you to leverage MCP servers outside of MCP hosts like Claude Desktop.
Here are all the ways you can set up your mcp-agent application:
You can expose mcp-agent applications as MCP servers themselves (see example), allowing MCP clients to interface with sophisticated AI workflows using the standard tools API of MCP servers. This is effectively a server-of-servers.
You can embed mcp-agent in an MCP client directly to manage the orchestration across multiple MCP servers.
You can use mcp-agent applications in a standalone fashion (i.e. they aren't part of an MCP client). The `examples` are all standalone applications.
I debated naming this project silsila (سلسلہ), which means chain of events in Urdu. mcp-agent is more matter-of-fact, but there's still an easter egg in the project paying homage to silsila.