nagato-ai
Simple cross-LLM AI Agent library
Stars: 76
Nagato-AI is an intuitive AI Agent library that supports multiple LLMs including OpenAI's GPT, Anthropic's Claude, Google's Gemini, and Groq LLMs. Users can create agents from these models and combine them to build an effective AI Agent system. The library is named after the powerful ninja Nagato from the anime Naruto, who can control multiple bodies with different abilities. Nagato-AI acts as a linchpin to summon and coordinate AI Agents for specific missions. It offers flexible programming patterns and ships with Coordinator, Researcher, and Critic agents as well as tools such as HumanConfirmInputTool.
README:
Nagato-AI is an intuitive AI Agent library that works across multiple LLMs.
Currently it supports OpenAI's GPT, Anthropic's Claude, Google's Gemini, and Groq (e.g. Llama 3) LLMs. You can create agents from any of the aforementioned families of models and combine them to build the most effective AI Agent system you desire.
The name Nagato is inspired by the popular anime Naruto. In Naruto, Nagato is a very powerful ninja who possesses special eyes (Rinnegan) that give him immense powers. Nagato's powers enable him to control multiple bodies endowed with different abilities. Nagato is also able to see through the eyes of all the bodies he controls, thereby minimising blindspots that opponents may want to exploit.
Therefore, you can think of Nagato as the linchpin that summons and coordinates AI Agents which have a specific mission to complete.
Note that from now on I will use the terms Nagato and Nagato-AI interchangeably to refer to this library.
If you're working on the source repository (either via a fork or the original repository), you must ensure that you have the Poetry packaging/dependency-management tool installed on your machine. Once Poetry is installed, simply run the following command in your terminal (from the root folder of the nagato code base) to install all required dependencies:
poetry install
Alternatively, to install Nagato as a package, simply run the command:
pip install nagatoai_core
That's it! Nagato AI is now available to use in your code.
By default, Nagato will look for environment variables to create the AI Agents and tools.
First, make sure to create a .env file. Then add those variables to the .env file you just created.
You only need to add some of the environment variables below, depending on the models and tools you plan to use. The current list of environment variables is the following:
OPENAI_API_KEY=
ANTHROPIC_API_KEY=
GROQ_API_KEY=
GOOGLE_API_KEY=
READWISE_API_KEY=
SERPER_API_KEY=
ELEVENLABS_API_KEY=
For instance, if you only plan to use GPT-based agents and Readwise tools, you should only set the OPENAI_API_KEY and READWISE_API_KEY environment variables.
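For example, here is a minimal sketch of loading those keys in your own code, assuming you use the python-dotenv package (the explicit loading step is an assumption shown for illustration; Nagato itself simply reads environment variables):

import os

from dotenv import load_dotenv  # assumes python-dotenv is installed

# Read the .env file and populate os.environ with its key/value pairs
load_dotenv()

# Fetch only the keys for the models and tools you plan to use
openai_api_key = os.getenv("OPENAI_API_KEY")
readwise_api_key = os.getenv("READWISE_API_KEY")

if not openai_api_key:
    raise RuntimeError("OPENAI_API_KEY is not set - add it to your .env file")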
Assuming your program's entrypoint is defined in a file called main.py, you can run it by typing the following command:
poetry run python main.py
Nagato currently supports the following LLMs:
- Claude 3 (Anthropic)
- GPT-3 to GPT-4 (OpenAI)
- Groq (which gives you access to Llama 3.1)
- Google Gemini
Currently Nagato AI uses Langfuse for tracing LLM calls. Set the environment variables below to be able to send traces:
LANGFUSE_SECRET_KEY=
LANGFUSE_PUBLIC_KEY=
You can see how Langfuse is being used in the SingleAgentTaskRunner class.
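For reference, here is a minimal sketch of how a client would pick up those variables, assuming the Langfuse v2 Python SDK (this illustrates standard Langfuse usage, not necessarily how SingleAgentTaskRunner wires it internally):

from langfuse import Langfuse

# The client reads LANGFUSE_SECRET_KEY and LANGFUSE_PUBLIC_KEY from the
# environment; the host defaults to https://cloud.langfuse.com unless overridden.
langfuse = Langfuse()

# Record a simple trace around an agent run (names here are illustrative)
trace = langfuse.trace(name="nagato-task-run")
trace.update(metadata={"agent": "Researcher"})

langfuse.flush()  # ensure buffered events are sent before the program exits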
Nagato is built with flexibility at its core, so you can program it using your paradigm of choice. However, these are some of the ways I've seen people use Nagato so far.
By default Nagato expects all LLM API keys to be set as environment variables. Nagato may load the keys from the following variables:
OPENAI_API_KEY=<api-key>
ANTHROPIC_API_KEY=<api-key>
READWISE_API_KEY=<api-key>
In this configuration we have the following:
- 🎯 Coordinator: breaks down a problem statement (from stdin) into an objective and suggests tasks
- 📚 Researcher: works on a task by performing research
- ✅ Critic: evaluates whether the task was completed
# Using Claude 3 Opus as the coordinator agent
coordinator_agent: Agent = create_agent(
    anthropic_api_key,
    "claude-3-opus-20240229",
    "Coordinator",
    COORDINATOR_SYSTEM_PROMPT,
    "Coordinator Agent",
)

# Using GPT-4 Turbo as the researcher agent
researcher_agent = create_agent(
    openai_api_key,
    "gpt-4-turbo-2024-04-09",
    "Researcher",
    RESEARCHER_SYSTEM_PROMPT,
    "Researcher Agent",
)

# Using Google Gemini 1.5 Flash as the critic agent
critic_agent = create_agent(
    google_api_key,
    "gemini-1.5-flash",
    "Critic",
    CRITIC_SYSTEM_PROMPT,
    "Critic Agent",
)
...
The full-blown example is available here
In this configuration we directly submit as input an objective and a set of tasks needed to complete the objective. Therefore we can skip the coordinator agent and have the worker agent(s) work on the tasks, while the critic agent evaluates whether the task carried out meets the requirements originally specified.
task_list: List[Task] = [
    Task(
        goal="Fetch last 100 user tweets",
        description="Fetch the tweets from the user using the Twitter API. Limit the number of tweets fetched to 100 only.",
    ),
    Task(
        goal="Perform sentiment analysis on the tweets",
        description="Feed the tweets to the AI Agent to analyze the sentiment per tweet and the overall sentiment across tweets. Range of values for sentiment can be: Positive, Negative, or Neutral",
    ),
]
# Using Claude 3 Sonnet as the researcher (worker) agent
researcher_agent: Agent = create_agent(
    anthropic_api_key,
    "claude-3-sonnet-20240229",
    "Researcher",
    RESEARCHER_SYSTEM_PROMPT,
    "Researcher Agent",
)

critic_agent = create_agent(
    anthropic_api_key,
    "claude-3-haiku-20240307",
    "Critic",
    CRITIC_SYSTEM_PROMPT,
    "Critic Agent",
)
for task in task_list:
    # Insert the task into the prompt
    worker_prompt = ...
    worker_exchange = researcher_agent.chat(worker_prompt, task, 0.7, 2000)

    # Insert the response from the agent into the prompt for the critic
    critic_prompt = ...
    critic_exchange = critic_agent.chat(critic_prompt, task, 0.7, 2000)

    # Evaluate whether the task was completed based on the answer from the critic agent
    ...
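To make the elided prompt-building steps concrete, here is a hedged sketch of how the two prompts might be filled in; the template strings and the agent_response attribute are hypothetical, so adapt them to your own prompts and to the actual Exchange object returned by chat():

# Hypothetical prompt templates - adapt to your own system prompts
WORKER_TEMPLATE = "Complete the following task.\nGoal: {goal}\nDescription: {description}"
CRITIC_TEMPLATE = "Evaluate whether the answer below completes the task.\nTask goal: {goal}\nAnswer: {answer}"

for task in task_list:
    worker_prompt = WORKER_TEMPLATE.format(goal=task.goal, description=task.description)
    worker_exchange = researcher_agent.chat(worker_prompt, task, 0.7, 2000)

    # agent_response is a hypothetical attribute name - inspect the Exchange
    # object returned by chat() for the real field
    critic_prompt = CRITIC_TEMPLATE.format(goal=task.goal, answer=worker_exchange.agent_response)
    critic_exchange = critic_agent.chat(critic_prompt, task, 0.7, 2000)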
Check the full example here to see how tool calling works. We now support tool calling for GPT, Claude 3, and Llama 3 (via Groq) models.
Creating a tool is straightforward. You must have these two elements in place for a tool to be usable:
- A config class that contains the parameters that your tool will be called with
- A tool class that inherits from AbstractTool, and contains the main logic for your tool
For instance, the below shows how we've created a tool to get the user to confirm yes/no in the terminal:
from typing import Any, Type

from pydantic import BaseModel, Field
from rich.prompt import Confirm

from nagatoai_core.tool.abstract_tool import AbstractTool


class HumanConfirmInputConfig(BaseModel):
    """
    HumanConfirmInputConfig represents the configuration for the HumanConfirmInputTool.
    """

    message: str = Field(
        ...,
        description="The message to display to the user to confirm whether to proceed or not",
    )


class HumanConfirmInputTool(AbstractTool):
    """
    HumanConfirmInputTool represents a tool that prompts the user to confirm whether to proceed or not.
    """

    name: str = "human_confirm_input"
    description: str = (
        """Prompts the user to confirm whether to proceed or not. Returns a boolean value indicating the user's choice."""
    )
    args_schema: Type[BaseModel] = HumanConfirmInputConfig

    def _run(self, config: HumanConfirmInputConfig) -> Any:
        """
        Prompts the user to confirm whether to proceed or not.

        :param config: The configuration containing the message to display to the user.
        :return: A boolean value indicating the user's choice.
        """
        confirm = Confirm.ask("[bold yellow]" + config.message + "[/bold yellow]")
        return confirm
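For example, the tool can be exercised directly like this (calling _run by hand is purely for illustration; within Nagato the tool runner invokes it for you, and the no-argument constructor assumes AbstractTool requires no fields beyond the defaults set above):

tool = HumanConfirmInputTool()
config = HumanConfirmInputConfig(message="Proceed with the next task?")

proceed = tool._run(config)  # prompts yes/no in the terminal and returns a bool
if not proceed:
    print("Stopping here at the user's request.")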
Nagato is still in its very early development phase. This means that I am likely to introduce breaking changes over the next iterations of the library.
Moreover, there is a lot of functionality currently missing from Nagato. I will remedy this over time. There is no official roadmap per se but I plan to add the following capabilities to Nagato:
- ✅ implement function calling (complement to adding tools)
- ✅ introduce basic tools (e.g. surfing the web)
- ✅ implement agent based on Llama 3 model (via Groq)
- ✅ implement agent based on Google Gemini models (without function calling)
- ✅ cache results from function calling
- ✅ implement v1 of self-reflection and re-planning for agents
- ✅ Implement audio/text-to-speech tools
- ✅ implement function calling for Google Gemini agent
- ✅ LLMOps instrumentation (via Langfuse)
- 🎯 implement short/long-term memory for agents (with RAG and memory synthesis)
- 🎯 implement additional modalities (e.g. image, sound, etc.)
- 🎯 Support for local LLMs (e.g. via Ollama)
I'd be grateful if you could do some of the following to support this project:
- star this repository on GitHub
- follow me on X/Twitter
- raise GitHub issues if you've come across any bugs using Nagato or would like a feature to be added to Nagato
Similar Open Source Tools
rag-experiment-accelerator
The RAG Experiment Accelerator is a versatile tool that helps you conduct experiments and evaluations using Azure AI Search and RAG pattern. It offers a rich set of features, including experiment setup, integration with Azure AI Search, Azure Machine Learning, MLFlow, and Azure OpenAI, multiple document chunking strategies, query generation, multiple search types, sub-querying, re-ranking, metrics and evaluation, report generation, and multi-lingual support. The tool is designed to make it easier and faster to run experiments and evaluations of search queries and quality of response from OpenAI, and is useful for researchers, data scientists, and developers who want to test the performance of different search and OpenAI related hyperparameters, compare the effectiveness of various search strategies, fine-tune and optimize parameters, find the best combination of hyperparameters, and generate detailed reports and visualizations from experiment results.
Tools4AI
Tools4AI is a Java-based Agentic Framework for building AI agents to integrate with enterprise Java applications. It enables the conversion of natural language prompts into actionable behaviors, streamlining user interactions with complex systems. By leveraging AI capabilities, it enhances productivity and innovation across diverse applications. The framework allows for seamless integration of AI with various systems, such as customer service applications, to interpret user requests, trigger actions, and streamline workflows. Prompt prediction anticipates user actions based on input prompts, enhancing user experience by proactively suggesting relevant actions or services based on context.
langchain
LangChain is a framework for developing Elixir applications powered by language models. It enables applications to connect language models to other data sources and interact with the environment. The library provides components for working with language models and off-the-shelf chains for specific tasks. It aims to assist in building applications that combine large language models with other sources of computation or knowledge. LangChain is written in Elixir and is not aimed for parity with the JavaScript and Python versions due to differences in programming paradigms and design choices. The library is designed to make it easy to integrate language models into applications and expose features, data, and functionality to the models.
generative-ai-sagemaker-cdk-demo
This repository showcases how to deploy generative AI models from Amazon SageMaker JumpStart using the AWS CDK. Generative AI is a type of AI that can create new content and ideas, such as conversations, stories, images, videos, and music. The repository provides a detailed guide on deploying image and text generative AI models, utilizing pre-trained models from SageMaker JumpStart. The web application is built on Streamlit and hosted on Amazon ECS with Fargate. It interacts with the SageMaker model endpoints through Lambda functions and Amazon API Gateway. The repository also includes instructions on setting up the AWS CDK application, deploying the stacks, using the models, and viewing the deployed resources on the AWS Management Console.
kafka-ml
Kafka-ML is a framework designed to manage the pipeline of Tensorflow/Keras and PyTorch machine learning models on Kubernetes. It enables the design, training, and inference of ML models with datasets fed through Apache Kafka, connecting them directly to data streams like those from IoT devices. The Web UI allows easy definition of ML models without external libraries, catering to both experts and non-experts in ML/AI.
aiac
AIAC is a library and command line tool to generate Infrastructure as Code (IaC) templates, configurations, utilities, queries, and more via LLM providers such as OpenAI, Amazon Bedrock, and Ollama. Users can define multiple 'backends' targeting different LLM providers and environments using a simple configuration file. The tool allows users to ask a model to generate templates for different scenarios and composes an appropriate request to the selected provider, storing the resulting code to a file and/or printing it to standard output.
BentoDiffusion
BentoDiffusion is a BentoML example project that demonstrates how to serve and deploy diffusion models in the Stable Diffusion (SD) family. These models are specialized in generating and manipulating images based on text prompts. The project provides a guide on using SDXL Turbo as an example, along with instructions on prerequisites, installing dependencies, running the BentoML service, and deploying to BentoCloud. Users can interact with the deployed service using Swagger UI or other methods. Additionally, the project offers the option to choose from various diffusion models available in the repository for deployment.
nerve
Nerve is a tool that allows creating stateful agents with any LLM of your choice without writing code. It provides a framework of functionalities for planning, saving, or recalling memories by dynamically adapting the prompt. Nerve is experimental and subject to changes. It is valuable for learning and experimenting but not recommended for production environments. The tool aims to instrument smart agents without code, inspired by projects like Dreadnode's Rigging framework.
ray-llm
RayLLM (formerly known as Aviary) is an LLM serving solution that makes it easy to deploy and manage a variety of open source LLMs, built on Ray Serve. It provides an extensive suite of pre-configured open source LLMs, with defaults that work out of the box. RayLLM supports Transformer models hosted on Hugging Face Hub or present on local disk. It simplifies the deployment of multiple LLMs, the addition of new LLMs, and offers unique autoscaling support, including scale-to-zero. RayLLM fully supports multi-GPU & multi-node model deployments and offers high performance features like continuous batching, quantization and streaming. It provides a REST API that is similar to OpenAI's to make it easy to migrate and cross test them. RayLLM supports multiple LLM backends out of the box, including vLLM and TensorRT-LLM.
chroma
Chroma is an open-source embedding database that simplifies building LLM apps by enabling the integration of knowledge, facts, and skills for LLMs. The Ruby client for Chroma Database, chroma-rb, facilitates connecting to Chroma's database via its API. Users can configure the host, check server version, create collections, and add embeddings. The gem supports Chroma Database version 0.3.22 or newer, requiring Ruby 3.1.4 or later. It can be used with the hosted Chroma service at trychroma.com by setting configuration options like api_key, tenant, and database. Additionally, the gem provides integration with Jupyter Notebook for creating embeddings using Ollama and Nomic embed text with a Ruby HTTP client.
llamabot
LlamaBot is a Pythonic bot interface to Large Language Models (LLMs), providing an easy way to experiment with LLMs in Jupyter notebooks and build Python apps utilizing LLMs. It supports all models available in LiteLLM. Users can access LLMs either through local models with Ollama or by using API providers like OpenAI and Mistral. LlamaBot offers different bot interfaces like SimpleBot, ChatBot, QueryBot, and ImageBot for various tasks such as rephrasing text, maintaining chat history, querying documents, and generating images. The tool also includes CLI demos showcasing its capabilities and supports contributions for new features and bug reports from the community.
PSAI
PSAI is a PowerShell module that empowers scripts with the intelligence of OpenAI, bridging the gap between PowerShell and AI. It enables seamless integration for tasks like file searches and data analysis, revolutionizing automation possibilities with just a few lines of code. The module supports the latest OpenAI API changes, offering features like improved file search, vector store objects, token usage control, message limits, tool choice parameter, custom conversation histories, and model configuration parameters.
neo4j-genai-python
This repository contains the official Neo4j GenAI features for Python. The purpose of this package is to provide a first-party package to developers, where Neo4j can guarantee long-term commitment and maintenance as well as being fast to ship new features and high-performing patterns and methods.
quick-start-connectors
Cohere's Build-Your-Own-Connector framework allows integration of Cohere's Command LLM via the Chat API endpoint to any datastore/software holding text information with a search endpoint. Enables user queries grounded in proprietary information. Use-cases include question/answering, knowledge working, comms summary, and research. Repository provides code for popular datastores and a template connector. Requires Python 3.11+ and Poetry. Connectors can be built and deployed using Docker. Environment variables set authorization values. Pre-commits for linting. Connectors tailored to integrate with Cohere's Chat API for creating chatbots. Connectors return documents as JSON objects for Cohere's API to generate answers with citations.
aici
The Artificial Intelligence Controller Interface (AICI) lets you build Controllers that constrain and direct output of a Large Language Model (LLM) in real time. Controllers are flexible programs capable of implementing constrained decoding, dynamic editing of prompts and generated text, and coordinating execution across multiple, parallel generations. Controllers incorporate custom logic during the token-by-token decoding and maintain state during an LLM request. This allows diverse Controller strategies, from programmatic or query-based decoding to multi-agent conversations to execute efficiently in tight integration with the LLM itself.
For similar tasks
surfkit
Surfkit is a versatile toolkit designed for building and sharing AI agents that can operate on various devices. Users can create multimodal agents, share them with the community, run them locally or in the cloud, manage agent tasks at scale, and track and observe agent actions. The toolkit provides functionalities for creating agents, devices, solving tasks, managing devices, tracking tasks, and publishing agents. It also offers integrations with libraries like MLLM, Taskara, Skillpacks, and Threadmem. Surfkit aims to simplify the development and deployment of AI agents across different environments.
For similar jobs
sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.
teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.
ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.
classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.
chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.
BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students
uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.
griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.