
fastagency
The fastest way to bring multi-agent workflows to production.
Stars: 421

README:
The fastest way to bring multi-agent workflows to production.
To be clear, FastAgency is not yet another agentic AI framework. There are many such frameworks available today, the most popular open-source one being AG2 (formerly AutoGen). FastAgency provides you with a unified programming interface for deploying agentic workflows written in the AG2 framework in both development and production settings. With only a few lines of code, you can create a web chat application or a REST API service interacting with agents of your choice. If you need to scale up your workloads, FastAgency can help you deploy a fully distributed system, using internal message brokers to coordinate multiple machines in multiple datacenters, with just a few lines changed from your local development setup.
FastAgency is an open-source framework designed to accelerate the transition from prototype to production for multi-agent AI workflows. For developers who use the AG2 (formerly AutoGen) framework, FastAgency enables you to seamlessly scale Jupyter notebook prototypes into fully functional, production-ready applications. With multi-framework support, a unified programming interface, and powerful API integration capabilities, FastAgency streamlines the deployment process, saving time and effort while maintaining flexibility and performance.
Whether you're orchestrating complex AI agents or integrating external APIs into workflows, FastAgency provides the tools necessary to quickly transition from concept to production, reducing development cycles and allowing you to focus on optimizing your multi-agent systems.
- Unified Programming Interface Across UIs: FastAgency features a common programming interface that enables you to develop your core workflows once and reuse them across various user interfaces without rewriting code. This includes support for both console-based applications via ConsoleUI and web-based applications via MesopUI. Whether you need a command-line tool or a fully interactive web app, FastAgency allows you to deploy the same underlying workflows across environments, saving development time and ensuring consistency.
- Seamless External API Integration: One of FastAgency's standout features is its ability to easily integrate external APIs into your agent workflows. With just a few lines of code, you can import an OpenAPI specification, and in only one more line, you can connect it to your agents. This dramatically simplifies the process of enhancing AI agents with real-time data, external processing, or third-party services. For example, you can easily integrate a weather API to provide dynamic, real-time weather updates to your users, making your application more interactive and useful with minimal effort (see the sketch after this list).
- Tester Class for Continuous Integration: FastAgency also provides a Tester Class that enables developers to write and execute tests for their multi-agent workflows. This feature is crucial for maintaining the reliability and robustness of your application, allowing you to automatically verify agent behavior and interactions. The Tester Class is designed to integrate smoothly with continuous integration (CI) pipelines, helping you catch bugs early and ensure that your workflows remain functional as they scale into production.
- Command-Line Interface (CLI) for Orchestration: FastAgency includes a powerful command-line interface (CLI) for orchestrating and managing multi-agent applications directly from the terminal. The CLI allows developers to quickly run workflows, pass parameters, and monitor agent interactions without needing a full GUI. This is especially useful for automating deployments and integrating workflows into broader DevOps pipelines, enabling developers to maintain control and flexibility in how they manage AI-driven applications.
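To make the API-integration point concrete, here is a minimal sketch of the pattern. The fastagency.api.openapi import path, the register_api() call, and the demo weather URL are assumptions to verify against the current FastAgency documentation; the agents mirror the workflow example shown later in this README.

import os
from typing import Any

from autogen.agentchat import ConversableAgent
from fastagency import UI
from fastagency.api.openapi import OpenAPI  # assumed import path
from fastagency.runtimes.ag2 import Workflow

llm_config = {
    "config_list": [{"model": "gpt-4o-mini", "api_key": os.getenv("OPENAI_API_KEY")}],
}

wf = Workflow()

# A few lines import the whole API from its OpenAPI specification
# (the URL below is a placeholder for any OpenAPI JSON spec).
weather_api = OpenAPI.create(openapi_url="https://weather.tools.fastagency.ai/openapi.json")

@wf.register(name="weather_chat", description="Chat about the weather")  # type: ignore[misc]
def weather_workflow(ui: UI, params: dict[str, Any]) -> str:
    user_agent = ConversableAgent(
        name="User_Agent",
        system_message="You ask about the weather.",
        llm_config=llm_config,
    )
    weather_agent = ConversableAgent(
        name="Weather_Agent",
        system_message="You answer using the weather API tools.",
        llm_config=llm_config,
    )

    # One more line connects the imported API to the agents: callers
    # propose the API calls, executors actually perform them.
    wf.register_api(api=weather_api, callers=user_agent, executors=weather_agent)

    chat_result = user_agent.initiate_chat(
        weather_agent,
        message="What is the weather like in Paris today?",
        max_turns=3,
    )
    return str(chat_result.summary)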
FastAgency bridges the gap between rapid prototyping and production-ready deployment, empowering developers to bring their multi-agent systems to life quickly and efficiently. By integrating familiar frameworks like AG2 (formerly AutoGen), providing powerful API integration, and offering robust CI testing tools, FastAgency reduces the complexity and overhead typically associated with deploying AI agents in real-world applications.
Whether you’re building interactive console tools, developing fully-featured web apps, or orchestrating large-scale multi-agent systems, FastAgency is built to help you deploy faster, more reliably, and with greater flexibility.
Currently, the only supported runtime is AG2 (formerly AutoGen).
FastAgency supports workflows defined using AG2 and provides options for different types of applications:
- Console: Use the ConsoleUI interface for command-line-based interaction. This is ideal for developing and testing workflows in a text-based environment (see the sketch after this list).
- Mesop: Use MesopUI for web-based applications. This interface is suitable for creating web applications with a user-friendly interface.
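As an illustration of the Console option, here is a minimal sketch of wrapping a workflow in a console application. The FastAgency and ConsoleUI names follow the public docs, while the module path and CLI invocation are assumptions based on the Cookiecutter layout.

# main_console.py -- a sketch of running a workflow in the terminal.
# Import paths are assumptions to verify against the installed fastagency version.
from fastagency import FastAgency
from fastagency.ui.console import ConsoleUI

from my_fastagency_app.workflow import wf  # the Workflow defined later in this README

app = FastAgency(provider=wf, ui=ConsoleUI())

# Launch it with the FastAgency CLI, for example:
#   fastagency run my_fastagency_app/local/main_console.py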
FastAgency can use chainable network adapters to easily create scalable, production-ready architectures for serving your workflows. Currently, we support the following network adapters:
- REST API via FastAPI: Use the FastAPIAdapter to serve your workflow using a FastAPI server. This setup allows you to run your workflows in multiple workers and serve them using the highly extensible and stable ASGI server (see the sketch after this list).
- NATS.io via FastStream: Use the NatsAdapter to employ the NATS.io MQ message broker for a highly scalable, production-ready setup. This adapter is suitable for setups inside VPNs or, in combination with the FastAPIAdapter, for serving public workflows in an authenticated, secure manner.
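For the REST option, a minimal sketch of serving a workflow through the FastAPIAdapter follows; the adapter import path and router attribute reflect the public docs but should be treated as assumptions to verify against the installed version.

# main_fastapi.py -- a sketch of exposing a workflow as a REST service.
from fastapi import FastAPI

from fastagency.adapters.fastapi import FastAPIAdapter  # assumed import path
from my_fastagency_app.workflow import wf  # the Workflow defined later in this README

adapter = FastAPIAdapter(provider=wf)

app = FastAPI()
app.include_router(adapter.router)

# Serve with multiple workers behind an ASGI server, for example:
#   uvicorn main_fastapi:app --workers 4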
We strongly recommend using Cookiecutter for setting up a FastAgency project. It creates the project folder structure and a default workflow, automatically installs all the necessary requirements, and creates a devcontainer that can be used with Visual Studio Code for development.
- Install Cookiecutter with the following command:
pip install cookiecutter
- Run the cookiecutter command:
cookiecutter https://github.com/ag2ai/cookiecutter-fastagency.git
- Assuming that you used the default values, you should get the following output:
[1/4] project_name (My FastAgency App):
[2/4] project_slug (my_fastagency_app):
[3/4] Select app_type
    1 - fastapi+mesop
    2 - mesop
    3 - nats+fastapi+mesop
    Choose from [1/2/3] (1): 1
[4/4] Select python_version
    1 - 3.12
    2 - 3.11
    3 - 3.10
    Choose from [1/2/3] (1):
[5/5] Select authentication
    1 - none
    2 - google
    Choose from [1/2] (1):
- To run LLM-based applications, you need an API key for the LLM you use. The most commonly used LLM is OpenAI; to use it, create an OpenAI API key and set it as an environment variable in the terminal with the following command:
export OPENAI_API_KEY=openai_api_key_here
- Open the generated project in Visual Studio Code with the following command:
code my_fastagency_app
- Once the project is opened, Visual Studio Code will prompt you with the option to reopen it in a devcontainer.
- After reopening the project in the devcontainer, you can verify that the setup is correct by running the provided tests with the following command:
pytest -s
You should get the following output if everything is correctly set up.
=================================== test session starts ===================================
platform linux -- Python 3.12.7, pytest-8.3.3, pluggy-1.5.0
rootdir: /workspaces/my_fastagency_app
configfile: pyproject.toml
plugins: asyncio-0.24.0, anyio-4.6.2.post1
asyncio: mode=Mode.STRICT, default_loop_scope=None
collected 1 item

tests/test_workflow.py .                                                             [100%]

==================================== 1 passed in 1.02s ====================================
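The generated tests/test_workflow.py exercises the workflow end to end. If you want an additional, lighter-weight check in CI, a sketch along the following lines may work; it assumes the Workflow registry exposes registered names via a names property, which you should verify against the installed fastagency version.

# tests/test_registration.py -- a hypothetical, lighter-weight CI check.
from my_fastagency_app.workflow import wf

def test_simple_learning_is_registered() -> None:
    # "simple_learning" is the name passed to @wf.register in workflow.py.
    assert "simple_learning" in wf.names  # `names` is an assumed attribute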
You need to define the workflow that your application will use. This is where you specify how the agents interact and what they do. Here's a simple example of a workflow definition as generated by Cookiecutter under my_fastagency_app/workflow.py:
import os
from typing import Any

from autogen.agentchat import ConversableAgent
from fastagency import UI
from fastagency.runtimes.ag2 import Workflow

llm_config = {
    "config_list": [
        {
            "model": "gpt-4o-mini",
            "api_key": os.getenv("OPENAI_API_KEY"),
        }
    ],
    "temperature": 0.8,
}

wf = Workflow()

@wf.register(name="simple_learning", description="Student and teacher learning chat")  # type: ignore[misc]
def simple_workflow(ui: UI, params: dict[str, Any]) -> str:
    # Ask the user for the initial message that starts the conversation.
    initial_message = ui.text_input(
        sender="Workflow",
        recipient="User",
        prompt="I can help you learn about mathematics. What subject would you like to explore?",
    )

    # Two agents: a student asking questions and a teacher answering them.
    student_agent = ConversableAgent(
        name="Student_Agent",
        system_message="You are a student willing to learn.",
        llm_config=llm_config,
    )
    teacher_agent = ConversableAgent(
        name="Teacher_Agent",
        system_message="You are a math teacher.",
        llm_config=llm_config,
    )

    # Run the chat for up to three turns and summarize it with an LLM.
    chat_result = student_agent.initiate_chat(
        teacher_agent,
        message=initial_message,
        summary_method="reflection_with_llm",
        max_turns=3,
    )

    return str(chat_result.summary)
This code snippet sets up a simple learning chat between a student and a teacher. It defines the agents, describes how they should interact, and specifies how the conversation should be summarized.
To ensure that the workflow we have defined is working properly, we can test it locally using MesopUI. The code lives under my_fastagency_app/local/main_mesop.py; it imports the defined workflow and sets up MesopUI.
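That file is not reproduced here, so below is a minimal sketch of what it typically contains; the title string and exact imports are assumptions based on the Cookiecutter defaults, so check your generated file.

# my_fastagency_app/local/main_mesop.py -- sketch of the generated module.
from fastagency import FastAgency
from fastagency.ui.mesop import MesopUI

from ..workflow import wf  # the workflow defined in my_fastagency_app/workflow.py

# Wrap the workflow in a web-based Mesop UI; gunicorn/waitress serve `app`.
app = FastAgency(provider=wf, ui=MesopUI(), title="My FastAgency App")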
You can run the Mesop application locally with the following command on Linux and macOS:
gunicorn my_fastagency_app.local.main_mesop:app
On Windows, please use the following command:
waitress-serve --listen=0.0.0.0:8000 my_fastagency_app.local.main_mesop:app
Open the MesopUI URL http://localhost:8000 in your browser. You can now use the graphical user interface to start, run, test, and debug the AG2 workflow manually.
If you created the project using Cookiecutter, then building the Docker image is as simple as running the provided script, as shown below:
./scripts/build_docker.sh
Similarly, running the Docker container is as simple as running the provided script, as shown below:
./scripts/run_docker.sh
If you created the project using Cookiecutter, there are built-in scripts to deploy your workflow to Fly.io. In Fly.io, the application namespace is global, so the application name you chose might already be taken. To check your application's name availability and to reserve it, you can run the following script:
./scripts/register_to_fly_io.sh
Once you have reserved your application name, you can test whether you can deploy your application to Fly.io using the following script:
./scripts/deploy_to_fly_io.sh
This is only for testing purposes. You should deploy using GitHub Actions as explained below.
Cookiecutter generated all the necessary files to deploy your application to Fly.io using GitHub Actions. Simply push your code to your GitHub repository's main branch, and GitHub Actions will automatically deploy your application to Fly.io. For this, you need to set the following secrets in your GitHub repository:
FLY_API_TOKEN
OPENAI_API_KEY
To learn how to create keys and add them as secrets, use the following links:
We are actively working on expanding FastAgency’s capabilities. In addition to supporting AG2 (formerly AutoGen), we plan to integrate support for other frameworks, other network providers, and other UI frameworks.
Stay up to date with new features and integrations by following our documentation and community updates on our Discord server. FastAgency is continually evolving to support new frameworks, APIs, and deployment strategies, ensuring you remain at the forefront of AI-driven development.
Last but not least, show us your support by giving a star to our GitHub repository.
Your support helps us to stay in touch with you and encourages us to continue developing and improving the framework. Thank you for your support!
Thanks to all of these amazing people who made the project better!
Similar Open Source Tools

fastagency
FastAgency is a powerful tool that leverages the AutoGen framework to quickly build applications with multi-agent workflows. It supports various interfaces like ConsoleUI and MesopUI, allowing users to create interactive applications. The tool enables defining workflows between agents, such as students and teachers, and summarizing conversations. FastAgency aims to expand its capabilities by integrating with additional agentic frameworks like CrewAI, providing more options for workflow definition and AI tool integration.

lighteval
LightEval is a lightweight LLM evaluation suite that Hugging Face has been using internally with the recently released LLM data processing library datatrove and LLM training library nanotron. We're releasing it with the community in the spirit of building in the open. Note that it is still very early, so don't expect 100% stability. In case of problems or questions, feel free to open an issue!

OpenMusic
OpenMusic is a repository providing an implementation of QA-MDT, a Quality-Aware Masked Diffusion Transformer for music generation. The code integrates state-of-the-art models and offers training strategies for music generation. The repository includes implementations of AudioLDM, PixArt-alpha, MDT, AudioMAE, and Open-Sora. Users can train or fine-tune the model using different strategies and datasets. The model is well-pretrained and can be used for music generation tasks. The repository also includes instructions for preparing datasets, training the model, and performing inference. Contact information is provided for any questions or suggestions regarding the project.

openshield
OpenShield is a firewall designed for AI models to protect against various attacks such as prompt injection, insecure output handling, training data poisoning, model denial of service, supply chain vulnerabilities, sensitive information disclosure, insecure plugin design, excessive agency granting, overreliance, and model theft. It provides rate limiting, content filtering, and keyword filtering for AI models. The tool acts as a transparent proxy between AI models and clients, allowing users to set custom rate limits for OpenAI endpoints and perform tokenizer calculations for OpenAI models. OpenShield also supports Python and LLM based rules, with upcoming features including rate limiting per user and model, prompts manager, content filtering, keyword filtering based on LLM/Vector models, OpenMeter integration, and VectorDB integration. The tool requires an OpenAI API key, Postgres, and Redis for operation.

indexify
Indexify is an open-source engine for building fast data pipelines for unstructured data (video, audio, images, and documents) using reusable extractors for embedding, transformation, and feature extraction. LLM Applications can query transformed content friendly to LLMs by semantic search and SQL queries. Indexify keeps vector databases and structured databases (PostgreSQL) updated by automatically invoking the pipelines as new data is ingested into the system from external data sources. **Why use Indexify** * Makes Unstructured Data **Queryable** with **SQL** and **Semantic Search** * **Real-Time** Extraction Engine to keep indexes **automatically** updated as new data is ingested. * Create **Extraction Graph** to describe **data transformation** and extraction of **embedding** and **structured extraction**. * **Incremental Extraction** and **Selective Deletion** when content is deleted or updated. * **Extractor SDK** allows adding new extraction capabilities, and many readily available extractors for **PDF**, **Image**, and **Video** indexing and extraction. * Works with **any LLM Framework** including **Langchain**, **DSPy**, etc. * Runs on your laptop during **prototyping** and also scales to **1000s of machines** on the cloud. * Works with many **Blob Stores**, **Vector Stores**, and **Structured Databases** * We have even **Open Sourced Automation** to deploy to Kubernetes in production.

qa-mdt
This repository provides an implementation of QA-MDT, integrating state-of-the-art models for music generation. It offers a Quality-Aware Masked Diffusion Transformer for enhanced music generation. The code is based on various repositories like AudioLDM, PixArt-alpha, MDT, AudioMAE, and Open-Sora. The implementation allows for training and fine-tuning the model with different strategies and datasets. The repository also includes instructions for preparing datasets in LMDB format and provides a script for creating a toy LMDB dataset. The model can be used for music generation tasks, with a focus on quality injection to enhance the musicality of generated music.

sec-parser
The `sec-parser` project simplifies extracting meaningful information from SEC EDGAR HTML documents by organizing them into semantic elements and a tree structure. It helps in parsing SEC filings for financial and regulatory analysis, analytics and data science, AI and machine learning, causal AI, and large language models. The tool is especially beneficial for AI, ML, and LLM applications by streamlining data pre-processing and feature extraction.

premsql
PremSQL is an open-source library designed to help developers create secure, fully local Text-to-SQL solutions using small language models. It provides essential tools for building and deploying end-to-end Text-to-SQL pipelines with customizable components, ideal for secure, autonomous AI-powered data analysis. The library offers features like Local-First approach, Customizable Datasets, Robust Executors and Evaluators, Advanced Generators, Error Handling and Self-Correction, Fine-Tuning Support, and End-to-End Pipelines. Users can fine-tune models, generate SQL queries from natural language inputs, handle errors, and evaluate model performance against predefined metrics. PremSQL is extendible for customization and private data usage.

inngest
Inngest is a platform that offers durable functions to replace queues, state management, and scheduling for developers. It allows writing reliable step functions faster without dealing with infrastructure. Developers can create durable functions using various language SDKs, run a local development server, deploy functions to their infrastructure, sync functions with the Inngest Platform, and securely trigger functions via HTTPS. Inngest Functions support retrying, scheduling, and coordinating operations through triggers, flow control, and steps, enabling developers to build reliable workflows with robust support for various operations.

unify
The Unify Python Package provides access to the Unify REST API, allowing users to query Large Language Models (LLMs) from any Python 3.7.1+ application. It includes Synchronous and Asynchronous clients with Streaming responses support. Users can easily use any endpoint with a single key, route to the best endpoint for optimal throughput, cost, or latency, and customize prompts to interact with the models. The package also supports dynamic routing to automatically direct requests to the top-performing provider. Additionally, users can enable streaming responses and interact with the models asynchronously for handling multiple user requests simultaneously.

TaskingAI
TaskingAI brings Firebase's simplicity to **AI-native app development**. The platform enables the creation of GPTs-like multi-tenant applications using a wide range of LLMs from various providers. It features distinct, modular functions such as Inference, Retrieval, Assistant, and Tool, seamlessly integrated to enhance the development process. TaskingAI’s cohesive design ensures an efficient, intelligent, and user-friendly experience in AI application development.

petals
Petals is a tool that allows users to run large language models at home in a BitTorrent-style manner. It enables fine-tuning and inference up to 10x faster than offloading. Users can generate text with distributed models like Llama 2, Falcon, and BLOOM, and fine-tune them for specific tasks directly from their desktop computer or Google Colab. Petals is a community-run system that relies on people sharing their GPUs to increase its capacity and offer a distributed network for hosting model layers.

mindnlp
MindNLP is an open-source NLP library based on MindSpore. It provides a platform for solving natural language processing tasks, containing many common approaches in NLP. It can help researchers and developers to construct and train models more conveniently and rapidly. Key features of MindNLP include: * Comprehensive data processing: Several classical NLP datasets are packaged into a friendly module for easy use, such as Multi30k, SQuAD, CoNLL, etc. * Friendly NLP model toolset: MindNLP provides various configurable components. It is friendly to customize models using MindNLP. * Easy-to-use engine: MindNLP simplified complicated training process in MindSpore. It supports Trainer and Evaluator interfaces to train and evaluate models easily. MindNLP supports a wide range of NLP tasks, including: * Language modeling * Machine translation * Question answering * Sentiment analysis * Sequence labeling * Summarization MindNLP also supports industry-leading Large Language Models (LLMs), including Llama, GLM, RWKV, etc. For support related to large language models, including pre-training, fine-tuning, and inference demo examples, you can find them in the "llm" directory. To install MindNLP, you can either install it from Pypi, download the daily build wheel, or install it from source. The installation instructions are provided in the documentation. MindNLP is released under the Apache 2.0 license. If you find this project useful in your research, please consider citing the following paper: @misc{mindnlp2022, title={{MindNLP}: a MindSpore NLP library}, author={MindNLP Contributors}, howpublished = {\url{https://github.com/mindlab-ai/mindnlp}}, year={2022} }

Upsonic
Upsonic offers a cutting-edge enterprise-ready framework for orchestrating LLM calls, agents, and computer use to complete tasks cost-effectively. It provides reliable systems, scalability, and a task-oriented structure for real-world cases. Key features include production-ready scalability, task-centric design, MCP server support, tool-calling server, computer use integration, and easy addition of custom tools. The framework supports client-server architecture and allows seamless deployment on AWS, GCP, or locally using Docker.

Easy-Translate
Easy-Translate is a script designed for translating large text files with a single command. It supports various models like M2M100, NLLB200, SeamlessM4T, LLaMA, and Bloom. The tool is beginner-friendly and offers seamless and customizable features for advanced users. It allows acceleration on CPU, multi-CPU, GPU, multi-GPU, and TPU, with support for different precisions and decoding strategies. Easy-Translate also provides an evaluation script for translations. Built on HuggingFace's Transformers and Accelerate library, it supports prompt usage and loading huge models efficiently.