
arcade-ai
Arcade Python SDK and CLI
Stars: 343

Arcade AI is a developer-focused tooling and API platform designed to enhance the capabilities of LLM applications and agents. It simplifies the process of connecting agentic applications with user data and services, allowing developers to concentrate on building their applications. The platform offers prebuilt toolkits for interacting with various services, supports multiple authentication providers, and provides access to different language models. Users can also create custom toolkits and evaluate their tools using Arcade AI. Contributions are welcome, and self-hosting is possible with the provided documentation.
README:
Documentation • Tools • Quickstart • Contact Us
Arcade is a developer platform that lets you build, deploy, and manage tools for AI agents.
The Tool SDK makes it easy to create powerful, secure tools that your agents can use to interact with the world.
To learn more, check out our documentation.
Psst. Hey, you, give us a star if you like it!
- Quickstart: Install and call a tool
- Build LLM Tools with Arcade SDK
- Calling your tools
- Client Libraries
- Support and Community
# Install the Arcade CLI
pip install arcade-ai
# Log in to Arcade
arcade login
# Show what tools are hosted by Arcade
arcade show
# Show what tools are in a toolkit
arcade show -T GitHub
# Look at the definition of a tool
arcade show -t GitHub.SetStarred
The GitHub.SetStarred tool is hosted by Arcade, so you can call it directly without any additional setup of OAuth or servers. A simple way to test tools, whether hosted by Arcade or not, is to use the arcade chat app.
arcade chat
This will start a chat with an LLM that can call tools. Try calling the GitHub.SetStarred tool with a message like "Star the arcade-ai repo":
> arcade chat
=== Arcade Chat ===
Chatting with Arcade Engine at https://api.arcade.dev
User [email protected]:
star the arcadeai/arcade-ai repo
Assistant:
Thanks for authorizing the action! Sending your request...
Assistant:
I have successfully starred the repository arcadeai/arcade-ai for you.
If Arcade already hosts the tools you need to build your agent, you can navigate to the Quickstart to learn how to call tools programmatically in Python, TypeScript, or HTTP.
You can also build your own tools with the SDK and deploy them in one command on Arcade Cloud.
Arcade provides a tool SDK that allows you to build your own tools and use them in your agentic applications just like the existing tools Arcade provides. This is useful for building new tools, customizing existing tools to fit your needs, combining multiple tools, or building tools that are not yet supported by Arcade.
Prerequisites
- Python 3.10+ and pip
Now you can install the Tool SDK through pip.

- Install the Arcade CLI:

  pip install arcade-ai

  If you plan on writing evaluations for your tools and the LLMs you use, you will also need to install the evals extra:

  pip install 'arcade-ai[evals]'

- Log in to Arcade:

  arcade login

  This will prompt you to open a browser and authorize the CLI. It will then save the credentials to your machine, typically in ~/.arcade/credentials.json. If you haven't used the CLI before, you will need to create an account on this page.
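As a quick, optional sanity check (purely illustrative; the CLI manages this file for you), you can confirm that the credentials file from the login step exists:

from pathlib import Path

# Default location where `arcade login` stores credentials (see above)
creds_path = Path.home() / ".arcade" / "credentials.json"
if creds_path.exists():
    print(f"Logged in; credentials found at {creds_path}")
else:
    print("No credentials found; run `arcade login` first.")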
Now you're ready to build tools with Arcade!
Toolkits are the main building blocks of Arcade. They are collections of tools related to a specific service, use case, or agent. Toolkits are created and distributed as Python packages to simplify versioning, dependency management, and distribution.
To create a new toolkit, use the arcade new command, which scaffolds a new toolkit in the current directory.
- Generate a new toolkit template:

  arcade new

  Name of the new toolkit?: mytoolkit
  Description of the toolkit?: myToolkit is a toolkit for ...
  Github owner username?: mytoolkit
  Author's email?: [email protected]

  The generated toolkit includes all the scaffolding you need for a working tool. Look for the mytoolkit/tool.py file to customize the behavior of your tool (see the sketch after this list).

- Install your new toolkit:

  # Make sure you have python installed
  python --version
  # Install your new toolkit
  cd mytoolkit
  make install

  This will install the toolkit in your local Python environment using poetry. The template is meant to be customized, so feel free to change anything about the structure, package management, linting, etc.

- Show the tools in the template toolkit:

  # Show the tools in Mytoolkit
  arcade show --local -T Mytoolkit
  # Show the definition of a tool
  arcade show --local -t Mytoolkit.SayHello
  # Show all tools installed in your local python environment
  arcade show --local
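For orientation, here is a minimal sketch of what the scaffolded tool definition in mytoolkit/tool.py might look like. The import path and decorator name below are assumptions based on the template's naming (the Mytoolkit.SayHello tool shown above); check the generated file for the exact API in your SDK version.

from typing import Annotated

from arcade.sdk import tool  # Assumed import path; verify against the generated template

@tool
def say_hello(name: Annotated[str, "The name of the person to greet"]) -> str:
    """Say hello to a person."""
    return f"Hello, {name}!"

The parameter annotations and the docstring typically feed the tool definition that arcade show --local -t Mytoolkit.SayHello displays.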
Now you can edit the mytoolkit/tool.py file to customize the behavior of your tool. Next, you can host your tools to call with LLMs by deploying your toolkit to Arcade Cloud.
To make the tools in your toolkit available to call with LLMs, deploy your toolkit to Arcade Cloud. The worker.toml file created in the directory where you ran arcade new will be used to deploy your toolkit. In that directory, run the following command:
# From inside the mytoolkit dir, move up to the directory containing worker.toml
cd ../
arcade deploy
This command will package your toolkit and deploy it as a worker instance in Arcade's cloud infrastructure:
[11:52:44] Deploying 'demo-worker...'
⠦ Deploying 1 workers
Changed Packages
┏━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━━━━┓
┃ Added ┃ Removed ┃ Updated ┃ No Changes ┃
┡━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━━━━┩
│ Mytoolkit│ │ │ │
└──────────┴─────────┴─────────┴────────────┘
[11:53:13] ✅ Worker 'demo-worker' deployed successfully.
You can manage your deployed workers with the following commands:
# List all workers (both local and cloud-deployed)
arcade worker list
# Remove a deployed worker
arcade worker rm demo-worker
Once deployed, your toolkit is immediately available through the Arcade platform. You can now call your tools through the playground, LLM API, or Tools API without any additional setup.
For local development and testing, when running the Arcade Engine locally or tunneling to it, you can use arcade serve to host your toolkit locally and connect it to the Arcade Engine. If you are running the Engine locally, go to localhost:9099 (or your other local address) and add the worker address on the "workers" page.
Arcade provides multiple ways to use your tools with various agent frameworks. Depending on your use case, you can choose the best method for your application.
The LLM API provides the simplest way to integrate Arcade tools into your application. It extends the standard OpenAI API with additional capabilities:
import os
from openai import OpenAI

prompt = "Say hello to Sam"
api_key = os.environ["ARCADE_API_KEY"]

openai = OpenAI(
    base_url="https://api.arcade.dev/v1",
    api_key=api_key,
)

response = openai.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": prompt},
    ],
    tools=["Mytoolkit.SayHello"],
    tool_choice="generate",
    user="[email protected]",
)
print(response.choices[0].message.content)
When a user hasn't authorized a service, the API seamlessly returns an authorization link in the response:
Please authorize the tool by visiting: https://some.auth.url.arcade.will.generate.for.you...
All you need to do is show the URL to the user; from then on, the user will never have to do this again, and all future requests will use the authorized token.
After authorization, the same API call returns the completed action:
Hello Sam!
Use the Tools API when you want to integrate Arcade's runtime for tool calling into an agent framework (like LangChain or LangGraph), or if you're using your own approach and want to call Arcade tools or tools you've built with the Arcade Tool SDK.
Here's an example of how to use the Tools API to call a tool directly without a framework:
import os
from arcadepy import Arcade

client = Arcade(api_key=os.environ["ARCADE_API_KEY"])

# Start the authorization process for the tool
auth_response = client.tools.authorize(
    tool_name="Mytoolkit.SayHello",
    user_id="[email protected]",
)

# If the tool is not already authorized, prompt the user to authenticate
if auth_response.status != "completed":
    print("Please authorize by visiting:")
    print(auth_response.url)
    client.auth.wait_for_completion(auth_response)

# Execute the tool after authorization
tool_input = {
    "username": "sam",
    "message": "I'll be late to the meeting"
}
response = client.tools.execute(
    tool_name="Mytoolkit.SayHello",
    input=tool_input,
    user_id="[email protected]",
)
print(response)
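The same pattern works for Arcade-hosted tools like the GitHub.SetStarred tool from the quickstart. The input field names below are illustrative assumptions; run arcade show -t GitHub.SetStarred to see the tool's actual schema:

# The input keys here are hypothetical; check `arcade show -t GitHub.SetStarred`
# for the tool's real parameters.
response = client.tools.execute(
    tool_name="GitHub.SetStarred",
    input={"owner": "arcadeai", "name": "arcade-ai", "starred": True},
    user_id="[email protected]",
)
print(response)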
You can also use the Tools API with a framework like LangChain or LangGraph.
Currently Arcade provides ease-of-use integrations for the following frameworks:
- LangChain/LangGraph
- CrewAI
- LlamaIndex (coming soon)
Here's an example of how to use the Tools API with LangChain/LangGraph:
import os

from langchain_arcade import ToolManager
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent

# 1) Set API keys (place your real keys in env variables or directly below)
arcade_api_key = os.environ.get("ARCADE_API_KEY", "YOUR_ARCADE_API_KEY")
openai_api_key = os.environ.get("OPENAI_API_KEY", "YOUR_OPENAI_API_KEY")

# 2) Create a ToolManager and fetch/add tools/toolkits
manager = ToolManager(api_key=arcade_api_key)

# Tool names follow the format "ToolkitName.ToolName"
tools = manager.init_tools(tools=["Web.ScrapeUrl"])
print(manager.tools)

# Get all tools from a toolkit
tools = manager.init_tools(toolkits=["github"])
print(manager.tools)

# Add a single tool
manager.add_tool("Search.SearchGoogle")
print(manager.tools)

# Add a whole toolkit
manager.add_toolkit("Search")
print(manager.tools)

# 3) Get StructuredTool objects for LangChain
lc_tools = manager.to_langchain()

# 4) Create a ChatOpenAI model and bind the Arcade tools
model = ChatOpenAI(model="gpt-4o", api_key=openai_api_key)
bound_model = model.bind_tools(lc_tools)

# 5) Use MemorySaver for checkpointing
memory = MemorySaver()

# 6) Create a ReAct-style agent from the prebuilt function
graph = create_react_agent(model=bound_model, tools=lc_tools, checkpointer=memory)

# 7) Provide basic config and a user query
# Note: user_id is required for the tool to be authorized
config = {"configurable": {"thread_id": "1", "user_id": "[email protected]"}}
user_input = {"messages": [("user", "star the arcadeai/arcade-ai repo on github")]}

# 8) Stream the agent's output. If the tool is unauthorized, it may trigger interrupts
for chunk in graph.stream(user_input, config, stream_mode="values"):
    chunk["messages"][-1].pretty_print()

# If we were interrupted, we can check for interrupts in state
current_state = graph.get_state(config)
if current_state.tasks:
    for task in current_state.tasks:
        if hasattr(task, "interrupts"):
            for interrupt in task.interrupts:
                print(interrupt.value)
The last message may result in an authorization prompt.
If so, the user will need to authorize the tool by visiting the URL in the response. Once authorized, running the same script will return the completed action, since the tool will now be authorized for that user.
The Auth API provides the lowest-level integration with Arcade, for when you only need Arcade's authentication capabilities. This API is ideal for:
- Framework developers building their own agent systems
- Applications with existing tool execution mechanisms
- Developers who need fine-grained control over LLM interactions and tool execution
With the Auth API, Arcade handles all the complex authentication tasks (OAuth flow management, link creation, token storage, refresh cycles), while you retain complete control over how you interact with LLMs and execute tools.
from arcadepy import Arcade
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

client = Arcade()

# Get this user's unique ID from a trusted source,
# like your database or user management system
user_id = "[email protected]"

# Start the authorization process
response = client.auth.start(
    user_id=user_id,
    provider="google",
    scopes=["https://www.googleapis.com/auth/gmail.readonly"],
)

if response.status != "completed":
    print("Please complete the authorization challenge in your browser:")
    print(response.url)

# Wait for the authorization to complete
auth_response = client.auth.wait_for_completion(response)

# Use the authorized token in your own tool execution logic
token = auth_response.context.token

# Example: Using the token with your own Gmail API implementation
credentials = Credentials(token=token)
gmail_service = build('gmail', 'v1', credentials=credentials)
emails = gmail_service.users().messages().list(userId='me').execute()
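Since the start-and-wait pattern above repeats for every provider, it can be worth factoring into a small helper. This sketch uses only the calls shown in the example above:

def get_authorized_token(client: Arcade, user_id: str, provider: str, scopes: list[str]) -> str:
    """Run Arcade's auth flow for `provider` and return an access token."""
    response = client.auth.start(user_id=user_id, provider=provider, scopes=scopes)
    if response.status != "completed":
        print("Please complete the authorization challenge in your browser:")
        print(response.url)
    # Wait for the authorization to complete
    response = client.auth.wait_for_completion(response)
    return response.context.token

# Usage, matching the Gmail example above:
token = get_authorized_token(
    client, "[email protected]", "google",
    ["https://www.googleapis.com/auth/gmail.readonly"],
)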
- ArcadeAI/arcade-py: The Python client for interacting with Arcade.
- ArcadeAI/arcade-js: The JavaScript client for interacting with Arcade.
- ArcadeAI/arcade-go: (coming soon) The Go client for interacting with Arcade.
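For the Python client, setup is minimal. The snippet below assumes the PyPI package name matches the import used throughout this README (arcadepy) and that the client falls back to the ARCADE_API_KEY environment variable when no key is passed, as the Auth API example above suggests:

# pip install arcadepy
from arcadepy import Arcade

client = Arcade()  # Assumed to fall back to the ARCADE_API_KEY environment variable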
- Discord: Join our Discord community for real-time support and discussions.
- GitHub: Contribute or report issues on the Arcade GitHub repository.
- Documentation: Find in-depth guides and API references at Arcade Documentation.
Alternative AI tools for arcade-ai
Similar Open Source Tools


web-llm
WebLLM is a modular and customizable JavaScript package that brings language model chats directly onto web browsers with hardware acceleration. Everything runs inside the browser with no server support and is accelerated with WebGPU. WebLLM is fully compatible with the OpenAI API: you can use the same OpenAI API with any open-source model locally, with functionality including JSON mode, function calling, streaming, and more. This opens up opportunities to build AI assistants for everyone while preserving privacy and enjoying GPU acceleration.

rosa
ROSA is an AI Agent designed to interact with ROS-based robotics systems using natural language queries. It can generate system reports, read and parse ROS log files, adapt to new robots, and run various ROS commands using natural language. The tool is versatile for robotics research and development, providing an easy way to interact with robots and the ROS environment.

Bard-API
The Bard API is a Python package that returns responses from Google Bard through the value of a cookie. It is an unofficial API that operates through reverse-engineering, utilizing cookie values to interact with Google Bard for users struggling with frequent authentication problems or unable to authenticate via Google Authentication. The Bard API is not a free service, but rather a tool provided to assist developers with testing certain functionalities due to the delayed development and release of Google Bard's API. It has been designed with a lightweight structure that can easily adapt to the emergence of an official API. Therefore, using it for any other purposes is strongly discouraged. If you have access to a reliable official PaLM-2 API or Google Generative AI API, replace the provided response with the corresponding official code. Check out https://github.com/dsdanielpark/Bard-API/issues/262.

tonic_validate
Tonic Validate is a framework for the evaluation of LLM outputs, such as Retrieval Augmented Generation (RAG) pipelines. Validate makes it easy to evaluate, track, and monitor your LLM and RAG applications. Validate allows you to evaluate your LLM outputs through the use of our provided metrics which measure everything from answer correctness to LLM hallucination. Additionally, Validate has an optional UI to visualize your evaluation results for easy tracking and monitoring.

WindowsAgentArena
Windows Agent Arena (WAA) is a scalable Windows AI agent platform designed for testing and benchmarking multi-modal, desktop AI agents. It provides researchers and developers with a reproducible and realistic Windows OS environment for AI research, enabling testing of agentic AI workflows across various tasks. WAA supports deploying agents at scale using Azure ML cloud infrastructure, allowing parallel running of multiple agents and delivering quick benchmark results for hundreds of tasks in minutes.

voice-chat-ai
Voice Chat AI is a project that allows users to interact with different AI characters using speech. Users can choose from various characters with unique personalities and voices, and have conversations or role play with them. The project supports OpenAI, xAI, or Ollama language models for chat, and provides text-to-speech synthesis using XTTS, OpenAI TTS, or ElevenLabs. Users can seamlessly integrate visual context into conversations by having the AI analyze their screen. The project offers easy configuration through environment variables and can be run via WebUI or Terminal. It also includes a huge selection of built-in characters for engaging conversations.

magic-cli
Magic CLI is a command line utility that leverages Large Language Models (LLMs) to enhance command line efficiency. It is inspired by projects like Amazon Q and GitHub Copilot for CLI. The tool allows users to suggest commands, search across command history, and generate commands for specific tasks using local or remote LLM providers. Magic CLI also provides configuration options for LLM selection and response generation. The project is still in early development, so users should expect breaking changes and bugs.

jina
Jina is a tool that allows users to build multimodal AI services and pipelines using cloud-native technologies. It provides a Pythonic experience for serving ML models and transitioning from local deployment to advanced orchestration frameworks like Docker-Compose, Kubernetes, or Jina AI Cloud. Users can build and serve models for any data type and deep learning framework, design high-performance services with easy scaling, serve LLM models while streaming their output, integrate with Docker containers via Executor Hub, and host on CPU/GPU using Jina AI Cloud. Jina also offers advanced orchestration and scaling capabilities, a smooth transition to the cloud, and easy scalability and concurrency features for applications. Users can deploy to their own cloud or system with Kubernetes and Docker Compose integration, and even deploy to JCloud for autoscaling and monitoring.

gpt-engineer
GPT-Engineer is a tool that allows you to specify a software in natural language, sit back and watch as an AI writes and executes the code, and ask the AI to implement improvements.

letta
Letta is an open source framework for building stateful LLM applications. It allows users to build stateful agents with advanced reasoning capabilities and transparent long-term memory. The framework is white box and model-agnostic, enabling users to connect to various LLM API backends. Letta provides a graphical interface, the Letta ADE, for creating, deploying, interacting with, and observing agents. Users can access Letta via the REST API, Python and TypeScript SDKs, and the ADE. Letta supports persistence by storing agent data in a database, with PostgreSQL recommended for data migrations. Users can install Letta using Docker or pip, with Docker defaulting to PostgreSQL and pip defaulting to SQLite. Letta also offers a CLI tool for interacting with agents. The project is open source and welcomes contributions from the community.

telemetry-airflow
This repository codifies the Airflow cluster that is deployed at workflow.telemetry.mozilla.org (behind SSO) and commonly referred to as "WTMO" or simply "Airflow". Some links relevant to users and developers of WTMO: * The `dags` directory in this repository contains some custom DAG definitions * Many of the DAGs registered with WTMO don't live in this repository, but are instead generated from ETL task definitions in bigquery-etl * The Data SRE team maintains a WTMO Developer Guide (behind SSO)

patchwork
PatchWork is an open-source framework designed for automating development tasks using large language models. It enables users to automate workflows such as PR reviews, bug fixing, security patching, and more through a self-hosted CLI agent and preferred LLMs. The framework consists of reusable atomic actions called Steps, customizable LLM prompts known as Prompt Templates, and LLM-assisted automations called Patchflows. Users can run Patchflows locally in their CLI/IDE or as part of CI/CD pipelines. PatchWork offers predefined patchflows like AutoFix, PRReview, GenerateREADME, DependencyUpgrade, and ResolveIssue, with the flexibility to create custom patchflows. Prompt templates are used to pass queries to LLMs and can be customized. Contributions to new patchflows, steps, and the core framework are encouraged, with chat assistants available to aid in the process. The roadmap includes expanding the patchflow library, introducing a debugger and validation module, supporting large-scale code embeddings, parallelization, fine-tuned models, and an open-source GUI. PatchWork is licensed under AGPL-3.0 terms, while custom patchflows and steps can be shared using the Apache-2.0 licensed patchwork template repository.

neural
Neural is a Vim and Neovim plugin that integrates various machine learning tools to assist users in writing code, generating text, and explaining code or paragraphs. It supports multiple machine learning models, focuses on privacy, and is compatible with Vim 8.0+ and Neovim 0.8+. Users can easily configure Neural to interact with third-party machine learning tools, such as OpenAI, to enhance code generation and completion. The plugin also provides commands like `:NeuralExplain` to explain code or text and `:NeuralStop` to stop Neural from working. Neural is maintained by the Dense Analysis team and comes with a disclaimer about sending input data to third-party servers for machine learning queries.

crewAI-tools
This repository provides a guide for setting up tools for crewAI agents to enhance functionality. It offers steps to equip agents with ready-to-use tools and create custom ones. Tools are expected to return strings for generating responses. Users can create tools by subclassing BaseTool or using the tool decorator. Contributions are welcome to enrich the toolset, and guidelines are provided for contributing. The development setup includes installing dependencies, activating virtual environment, setting up pre-commit hooks, running tests, static type checking, packaging, and local installation. The goal is to empower AI solutions through advanced tooling.

slack-bot
The Slack Bot is a tool designed to enhance the workflow of development teams by integrating with Jenkins, GitHub, GitLab, and Jira. It allows for custom commands, macros, crons, and project-specific commands to be implemented easily. Users can interact with the bot through Slack messages, execute commands, and monitor job progress. The bot supports features like starting and monitoring Jenkins jobs, tracking pull requests, querying Jira information, creating buttons for interactions, generating images with DALL-E, playing quiz games, checking weather, defining custom commands, and more. Configuration is managed via YAML files, allowing users to set up credentials for external services, define custom commands, schedule cron jobs, and configure VCS systems like Bitbucket for automated branch lookup in Jenkins triggers.
For similar tasks


prism
Prism is a Laravel package that simplifies the integration of Large Language Models (LLMs) into applications. It offers a user-friendly interface for text generation, managing multi-step conversations, and leveraging AI tools from different providers. With Prism, developers can focus on creating exceptional AI applications without being bogged down by technical complexities.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.