
agentic
An opinionated framework for building sophisticated AI Agents
Stars: 95

Agentic is an opinionated, lightweight Python framework for building AI agents: autonomous programs that understand natural language and use tools to do work on your behalf. It provides a simple API for creating agents and defining their tools, supports teams of cooperating agents and human-in-the-loop workflows, and includes an agent runtime built on Ray along with optional "batteries included" features such as RAG support, an API interface for every agent, and ready-made UI dashboards.
README:
Agentic - Docs
Agentic makes it easy to create AI agents - autonomous software programs that understand natural language and can use tools to do work on your behalf.
Agentic is in the tradition of opinionated frameworks. We've tried to encode lots of sensible defaults and best practices into the design, testing and deployment of agents.
Agentic is a few different things:
- A lightweight agent framework. Same part of the stack as SmolAgents or PydanticAI.
- A reference implementation of the agent protocol.
- An agent runtime built on Ray.
- An optional "batteries included" set of features to help you get running quickly.
You can pretty much use any of these features and leave the others. There are lots of framework choices, but we think we have embedded some good ideas into ours.
Some of the framework features:
- Approachable and simple to use, but flexible enough to support the most complex agents
- Supports teams of cooperating agents
- Supports Human-in-the-loop
- Easy definition and use of tools (functions, class methods, imported LangChain tools, ...); see the sketch after this list
- Built alongside a set of production-tested tools
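For example, a tool can be as simple as a plain Python function handed to an agent. The sketch below is illustrative only: the Agent constructor and the << operator appear later in this README, but the tools= parameter and the function-as-tool convention are assumptions to check against the docs.

```python
from agentic.common import Agent

# A plain Python function used as a tool. Illustrative only: check the
# Agentic docs for the exact way tools are declared.
def get_weather(city: str) -> str:
    """Return a short weather report for the given city."""
    return f"The weather in {city} is sunny and 22 C."

# Assumption: Agent accepts a tools= list of callables; the other parameters
# mirror the programmatic example later in this README.
agent = Agent(
    name="Weather Agent",
    instructions="Answer weather questions using the get_weather tool.",
    model="openai/gpt-4o-mini",
    tools=[get_weather],
)

print(agent << "What's the weather in Lisbon?")
```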
Visit the docs: https://supercog-ai.github.io/agentic/latest/
Examples included with the framework:
- Perform complex research on any topic. Adapted from the LangChain version (but you can actually understand the code).
- Full browser automation, including using authenticated sessions.
- An agent team which auto-produces and publishes a daily podcast. Customize it for your news interests.
- Your own meeting bot agent, with meeting summaries stored into RAG.
At this stage it's probably easiest to run this repo from source. We use uv for package management:
Note: If you're on Linux or Windows and installing the rag extra, you will need to add --extra-index-url https://download.pytorch.org/whl/cpu to install the CPU version of PyTorch.
git clone [email protected]:supercog-ai/agentic.git
uv venv --python 3.12
source .venv/bin/activate
# For MacOS
uv pip install -e "./agentic[all,dev]"
# For Linux or Windows
uv pip install -e "./agentic[all,dev]" --extra-index-url https://download.pytorch.org/whl/cpu --index-strategy unsafe-first-match
These commands will install the agentic package locally so that you can use the agentic CLI command and so your PYTHONPATH is set correctly.
You can also try installing just the package:
# For MacOS
uv pip install "agentic-framework[all,dev]"
# For Linux or Windows
uv pip install "agentic-framework[all,dev]" --extra-index-url https://download.pytorch.org/whl/cpu
Now set up your folder to hold your agents:
agentic init .
The install will copy examples and a basic file structure into the directory myagents. You can name or rename this folder however you like.
Visit the docs for a tutorial on getting started with the framework.
Agentic provides multiple ways to interact with your agents, from command-line interfaces to web applications. This guide covers all available methods.
Interface | Use Case | Features |
---|---|---|
Command Line (CLI) | Quick testing, scripting | Simple text I/O, dot commands |
REST API | Integration with other applications | HTTP endpoints, event streaming |
Next.js Dashboard | Professional web UI | Real-time updates, thread history, background tasks |
Streamlit Dashboard | Quick prototyping | Simple web UI with minimal setup |
The CLI provides a simple REPL interface for direct conversations with your agents.
agentic thread examples/basic_agent.py
The REST API allows integration with web applications, automation systems, or other services.
agentic serve examples/basic_agent.py
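As a rough sketch of what an integration might look like, the snippet below posts a prompt to the local server with Python's requests library. The port and route here are placeholders, not documented endpoints; use whatever address and paths agentic serve prints when it starts.

```python
# Hypothetical client sketch: BASE_URL and the route are placeholders, not
# documented Agentic endpoints. Check the output of `agentic serve` for the
# real address and paths.
import requests

BASE_URL = "http://localhost:8086"  # assumed port

resp = requests.post(
    f"{BASE_URL}/basic_agent/process",  # placeholder route
    json={"prompt": "Summarize today's AI news"},
)
resp.raise_for_status()
print(resp.json())
```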
The Next.js Dashboard offers a full-featured web interface with:
- Multiple agent management
- Real-time event streaming
- Background task management
- Thread history and logs
- Markdown rendering
agentic dashboard start --agent-path examples/basic_agent.py
Learn more about the Next.js Dashboard →
The Streamlit Dashboard provides a lightweight interface for quick prototyping.
agentic streamlit --agent-path examples/basic_agent.py
Learn more about the Streamlit Dashboard →
You can always interact with agents directly in Python:
from agentic.common import Agent

# Create an agent
agent = Agent(
    name="My Agent",
    instructions="You are a helpful assistant.",
    model="openai/gpt-4o-mini"
)

# Use the << operator for a quick response
response = agent << "Hello, how are you?"
print(response)

# For more control over the conversation
request_id = agent.start_request("Tell me a joke").request_id
for event in agent.get_events(request_id):
    print(event)
Learn more about Programmatic Access →
Agentic builds on Litellm to enable consistent support for many different LLM models.
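Because models are addressed with LiteLLM-style "provider/model" strings, switching providers is mostly a matter of changing the model argument. A minimal sketch (the Anthropic model identifier below is illustrative, not taken from this README):

```python
from agentic.common import Agent

# Same agent definition, different provider: only the LiteLLM model string changes.
openai_agent = Agent(
    name="Helper",
    instructions="You are a helpful assistant.",
    model="openai/gpt-4o-mini",
)

anthropic_agent = Agent(
    name="Helper",
    instructions="You are a helpful assistant.",
    model="anthropic/claude-3-5-sonnet-20240620",  # illustrative model name
)
```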
Under the covers, Agentic uses Ray to host and run your agents. Ray implements an actor model, which provides a much better architecture for running complex agents than a typical web framework.
Agentic requires API keys for the LLM providers you plan to use. Copy the .env.example file to .env and set the following environment variables:
OPENAI_API_KEY=your_openai_api_key_here
ANTHROPIC_API_KEY=your_anthropic_api_key_here
You only need to set the API keys for the models you plan to use. For example, if you're only using OpenAI models, you only need to set OPENAI_API_KEY.
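If you drive agents from your own Python scripts and want to load the .env file explicitly, a common pattern is python-dotenv. This is a generic sketch, not an Agentic requirement (the framework may read .env on its own):

```python
# Optional: load .env into the environment for your own scripts.
# python-dotenv is a separate package: `pip install python-dotenv`.
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory

assert os.getenv("OPENAI_API_KEY"), "Set OPENAI_API_KEY in your .env file"
```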
Run tests:
pytest
Preview docs locally:
mike serve
Deploy the docs:
mike deploy --push dev
Yup, there are a lot of agent frameworks. But many of these are "gen 1" frameworks - designed before anyone had really built agents and tried to put them into production. Agentic is informed by our learnings at Supercog from building and running hundreds of agents over the past year.
Some reasons why Agentic is different:
- We have a thin abstraction over the LLM. The "agent loop" code is a couple hundred lines calling directly into the LLM API (the OpenAI completion API via Litellm).
- Logging is built-in and usable out of the box. Trace agent threads, tool calls, and LLM completions, with the ability to control the level of detail.
- Well designed abstractions with just a few nouns: Agent, Tool, Thread, Run. Stop assembling the computational graph out of toothpicks.
- Rich event system goes beyond text so agents can work with data and media.
- Event streams can have multiple channels, so your agent can "run in the background" and still notify you of what is happening.
- Human-in-the-loop is built into the framework, not hacked in. An agent can wait indefinitely, or get a notification from any channel, like an email or a webhook.
- Context length, token usage, and timing data are emitted in a standard form.
- Tools are designed to support configuration and authentication, not just run on a sea of random env vars.
- Use tools from almost any framework, including MCP and Composio.
- "Tools are agents". You can use tools and agents interchangeably. This is where the world is heading, that whatever "service" your agent uses it will be indistinguishable whether that service is "hard-coded" or implemented by another agent.
- Agents can add or remove tools dynamically while they are running. (coming soon...)
- "Batteries included". Easy RAG support. Every agent has an API interface. UI tools for quickly building a UI for your agents. "Agent contracts" for testing.
- Automatic context management keeps your agent within context length limits.
We would love you to contribute! We especially welcome:
- New tools
- Example agents
- New UI apps
but obviously we appreciate bug reports, bug fixes, and so on. We encourage tests with all contributions; in particular, if you want to modify the core framework, please include tests in the PR.
Alternative AI tools for agentic
Similar Open Source Tools


OpenManus-RL
OpenManus-RL is an open-source initiative focused on enhancing reasoning and decision-making capabilities of large language models (LLMs) through advanced reinforcement learning (RL)-based agent tuning. The project explores novel algorithmic structures, diverse reasoning paradigms, sophisticated reward strategies, and extensive benchmark environments. It aims to push the boundaries of agent reasoning and tool integration by integrating insights from leading RL tuning frameworks and continuously updating progress in a dynamic, live-streaming fashion.

ms-agent
MS-Agent is a lightweight framework designed to empower agents with autonomous exploration capabilities. It provides a flexible and extensible architecture for creating agents capable of tasks like code generation, data analysis, and tool calling with MCP support. The framework supports multi-agent interactions, deep research, code generation, and is lightweight and extensible for various applications.

ml-engineering
This repository provides a comprehensive collection of methodologies, tools, and step-by-step instructions for successful training of large language models (LLMs) and multi-modal models. It is a technical resource suitable for LLM/VLM training engineers and operators, containing numerous scripts and copy-n-paste commands to facilitate quick problem-solving. The repository is an ongoing compilation of the author's experiences training BLOOM-176B and IDEFICS-80B models, and currently focuses on the development and training of Retrieval Augmented Generation (RAG) models at Contextual.AI. The content is organized into six parts: Insights, Hardware, Orchestration, Training, Development, and Miscellaneous. It includes key comparison tables for high-end accelerators and networks, as well as shortcuts to frequently needed tools and guides. The repository is open to contributions and discussions, and is licensed under Attribution-ShareAlike 4.0 International.

open-webui-tools
Open WebUI Tools Collection is a set of tools for structured planning, arXiv paper search, Hugging Face text-to-image generation, prompt enhancement, and multi-model conversations. It enhances LLM interactions with academic research, image generation, and conversation management. Tools include arXiv Search Tool and Hugging Face Image Generator. Function Pipes like Planner Agent offer autonomous plan generation and execution. Filters like Prompt Enhancer improve prompt quality. Installation and configuration instructions are provided for each tool and pipe.

ISEK
ISEK is a decentralized agent network framework that enables building intelligent, collaborative agent-to-agent systems. It integrates the Google A2A protocol and ERC-8004 contracts for identity registration, reputation building, and cooperative task-solving, creating a self-organizing, decentralized society of agents. The platform addresses challenges in the agent ecosystem by providing an incentive system for users to pay for agent services, motivating developers to build high-quality agents and fostering innovation and quality in the ecosystem. ISEK focuses on decentralized agent collaboration and coordination, allowing agents to find each other, reason together, and act as a decentralized system without central control. The platform utilizes ERC-8004 for decentralized identity, reputation, and validation registries, establishing trustless verification and reputation management.

deeppowers
Deeppowers is a powerful Python library for deep learning applications. It provides a wide range of tools and utilities to simplify the process of building and training deep neural networks. With Deeppowers, users can easily create complex neural network architectures, perform efficient training and optimization, and deploy models for various tasks. The library is designed to be user-friendly and flexible, making it suitable for both beginners and experienced deep learning practitioners.

RustGPT
A complete Large Language Model implementation in pure Rust with no external ML frameworks. Demonstrates building a transformer-based language model from scratch, including pre-training, instruction tuning, interactive chat mode, full backpropagation, and modular architecture. Model learns basic world knowledge and conversational patterns. Features custom tokenization, greedy decoding, gradient clipping, modular layer system, and comprehensive test coverage. Ideal for understanding modern LLMs and key ML concepts. Dependencies include ndarray for matrix operations and rand for random number generation. Contributions welcome for model persistence, performance optimizations, better sampling, evaluation metrics, advanced architectures, training improvements, data handling, and model analysis. Follows standard Rust conventions and encourages contributions at beginner, intermediate, and advanced levels.

transformers
Transformers is a state-of-the-art pretrained models library that acts as the model-definition framework for machine learning models in text, computer vision, audio, video, and multimodal tasks. It centralizes model definition for compatibility across various training frameworks, inference engines, and modeling libraries. The library simplifies the usage of new models by providing simple, customizable, and efficient model definitions. With over 1M+ Transformers model checkpoints available, users can easily find and utilize models for their tasks.

rag-in-action
rag-in-action is a GitHub repository that provides a practical course structure for developing a RAG system based on DeepSeek. The repository likely contains resources, code samples, and tutorials to guide users through the process of building and implementing a RAG system using DeepSeek technology. Users interested in learning about RAG systems and their development may find this repository helpful in gaining hands-on experience and practical knowledge in this area.

ml-retreat
ML-Retreat is a comprehensive machine learning library designed to simplify and streamline the process of building and deploying machine learning models. It provides a wide range of tools and utilities for data preprocessing, model training, evaluation, and deployment. With ML-Retreat, users can easily experiment with different algorithms, hyperparameters, and feature engineering techniques to optimize their models. The library is built with a focus on scalability, performance, and ease of use, making it suitable for both beginners and experienced machine learning practitioners.

verl
veRL is a flexible and efficient reinforcement learning training framework designed for large language models (LLMs). It allows easy extension of diverse RL algorithms, seamless integration with existing LLM infrastructures, and flexible device mapping. The framework achieves state-of-the-art throughput and efficient actor model resharding with 3D-HybridEngine. It supports popular HuggingFace models and is suitable for users working with PyTorch FSDP, Megatron-LM, and vLLM backends.

agents
Cloudflare Agents is a framework for building intelligent, stateful agents that persist, think, and evolve at the edge of the network. It allows for maintaining persistent state and memory, real-time communication, processing and learning from interactions, autonomous operation at global scale, and hibernating when idle. The project is actively evolving with focus on core agent framework, WebSocket communication, HTTP endpoints, React integration, and basic AI chat capabilities. Future developments include advanced memory systems, WebRTC for audio/video, email integration, evaluation framework, enhanced observability, and self-hosting guide.

langgraphjs
LangGraph.js is a library for building stateful, multi-actor applications with LLMs, offering benefits such as cycles, controllability, and persistence. It allows defining flows involving cycles, providing fine-grained control over application flow and state. Inspired by Pregel and Apache Beam, it includes features like loops, persistence, human-in-the-loop workflows, and streaming support. LangGraph integrates seamlessly with LangChain.js and LangSmith but can be used independently.

youtu-graphrag
Youtu-GraphRAG is a vertically unified agentic paradigm that connects the entire framework based on graph schema, allowing seamless domain transfer with minimal intervention. It introduces key innovations like schema-guided hierarchical knowledge tree construction, dually-perceived community detection, agentic retrieval, advanced construction and reasoning capabilities, fair anonymous dataset 'AnonyRAG', and unified configuration management. The framework demonstrates robustness with lower token cost and higher accuracy compared to state-of-the-art methods, enabling enterprise-scale deployment with minimal manual intervention for new domains.

FLAME
FLAME is a lightweight and efficient deep learning framework designed for edge devices. It provides a simple and user-friendly interface for developing and deploying deep learning models on resource-constrained devices. With FLAME, users can easily build and optimize neural networks for tasks such as image classification, object detection, and natural language processing. The framework supports various neural network architectures and optimization techniques, making it suitable for a wide range of applications in the field of edge computing.
For similar tasks


Awesome-LLM-in-Social-Science
This repository compiles a list of academic papers that evaluate, align, simulate, and provide surveys or perspectives on the use of Large Language Models (LLMs) in the field of Social Science. The papers cover various aspects of LLM research, including assessing their alignment with human values, evaluating their capabilities in tasks such as opinion formation and moral reasoning, and exploring their potential for simulating social interactions and addressing issues in diverse fields of Social Science. The repository aims to provide a comprehensive resource for researchers and practitioners interested in the intersection of LLMs and Social Science.

AdaSociety
AdaSociety is a multi-agent environment designed for simulating social structures and decision-making processes. It offers built-in resources, events, and player interactions. Users can customize the environment through JSON configuration or custom Python code. The environment supports training agents using RLlib and LLM frameworks. It provides a platform for studying multi-agent systems and social dynamics.
For similar jobs

weave
Weave is a toolkit for developing Generative AI applications, built by Weights & Biases. With Weave, you can log and debug language model inputs, outputs, and traces; build rigorous, apples-to-apples evaluations for language model use cases; and organize all the information generated across the LLM workflow, from experimentation to evaluations to production. Weave aims to bring rigor, best-practices, and composability to the inherently experimental process of developing Generative AI software, without introducing cognitive overhead.

LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.

VisionCraft
The VisionCraft API is a free API for using over 100 different AI models, covering everything from images to sound.

kaito
Kaito is an operator that automates the AI/ML inference model deployment in a Kubernetes cluster. It manages large model files using container images, avoids tuning deployment parameters to fit GPU hardware by providing preset configurations, auto-provisions GPU nodes based on model requirements, and hosts large model images in the public Microsoft Container Registry (MCR) if the license allows. Using Kaito, the workflow of onboarding large AI inference models in Kubernetes is largely simplified.

PyRIT
PyRIT is an open access automation framework designed to empower security professionals and ML engineers to red team foundation models and their applications. It automates AI Red Teaming tasks to allow operators to focus on more complicated and time-consuming tasks and can also identify security harms such as misuse (e.g., malware generation, jailbreaking), and privacy harms (e.g., identity theft). The goal is to allow researchers to have a baseline of how well their model and entire inference pipeline is doing against different harm categories and to be able to compare that baseline to future iterations of their model. This allows them to have empirical data on how well their model is doing today, and detect any degradation of performance based on future improvements.

tabby
Tabby is a self-hosted AI coding assistant, offering an open-source and on-premises alternative to GitHub Copilot. It boasts several key features: it is self-contained, with no need for a DBMS or cloud service; it exposes an OpenAPI interface that is easy to integrate with existing infrastructure (e.g., a Cloud IDE); and it supports consumer-grade GPUs.

spear
SPEAR (Simulator for Photorealistic Embodied AI Research) is a powerful tool for training embodied agents. It features 300 unique virtual indoor environments with 2,566 unique rooms and 17,234 unique objects that can be manipulated individually. Each environment is designed by a professional artist and features detailed geometry, photorealistic materials, and a unique floor plan and object layout. SPEAR is implemented as Unreal Engine assets and provides an OpenAI Gym interface for interacting with the environments via Python.

Magick
Magick is a groundbreaking visual AIDE (Artificial Intelligence Development Environment) for no-code data pipelines and multimodal agents. Magick can connect to other services and comes with nodes and templates well-suited for intelligent agents, chatbots, complex reasoning systems and realistic characters.