
CoolCline
CoolCline is a proactive programming assistant that combines the best features of Cline, Roo Code, and Bao Cline (thanks to all contributors of the `Clines` projects!). It seamlessly collaborates with your command line interface and editor, providing the most powerful AI development experience.
Stars: 132

CoolCline is a proactive programming assistant that combines the best features of Cline, Roo Code, and Bao Cline. It seamlessly collaborates with your command line interface and editor, providing the most powerful AI development experience. It optimizes queries, allows quick switching of LLM Providers, and offers auto-approve options for actions. Users can configure LLM Providers, select different chat modes, perform file and editor operations, integrate with the command line, automate browser tasks, and extend capabilities through the Model Context Protocol (MCP). Context mentions help provide explicit context, and installation is easy through the editor's extension panel or by dragging and dropping the `.vsix` file. Local setup and development instructions are available for contributors.
README: English | 简体中文 · CHANGELOG: English | 简体中文 · CONTRIBUTING: English | 简体中文
CoolCline is a proactive programming assistant that combines the best features of Cline and Roo Code, offering the following modes:

- **Agent** mode: An autonomous AI programming agent with comprehensive capabilities in code understanding, generation, and project management (automatic code reading/editing, command execution, context understanding, task analysis/decomposition, and tool usage; note: this mode is not affected by the checkboxes in the auto-approval area)
- **Code** mode: Helps you write, refactor, and fix code, and run commands
- **Architect** mode: Suited to high-level technical design and system architecture discussions (this mode cannot write code or execute commands)
- **Ask** mode: Suited to codebase-related questions and concept exploration (this mode cannot write code or execute commands)
- Search for **CoolCline** in the VSCode extension marketplace and install it
- If you're installing CoolCline for the first time, or you clicked the **Reset** button at the bottom of the **Settings** ⚙️ page, you'll see the **Welcome** page, where you can set the **Language** (the default is English; Chinese, Russian, and other major languages are supported)
- If you've already configured an LLM Provider, you won't see the **Welcome** page; to change the language later, open the **Settings** ⚙️ page from the extension's top-right corner
You need to configure at least one LLM Provider before using CoolCline (required).

- If you're installing CoolCline for the first time, or you clicked the **Reset** button at the bottom of the **Settings** ⚙️ page, you'll see the **Welcome** page, where you can configure an **LLM Provider**
- Based on your chosen LLM Provider, fill in the API Key, Model, and other parameters (some LLM Providers show quick links below the API Key input field for applying for an API Key)
- If you've already configured an LLM Provider, you won't see the **Welcome** page, but you can open the **Settings** ⚙️ page from the extension's top-right corner to adjust it and other options; the same configuration is synchronized and shared across pages
I'll mark three levels of using CoolCline: **Basic**, **Advanced**, and **Expert**. These should be read as suggested focus areas rather than strict or rigid standards.
Different role modes adapt to your workflow needs:

- Select a role mode at the bottom of the chat input box
- **Autonomous Agent** (Agent mode): A proactive AI programming agent with the following capabilities:
    - Context analysis:
        - Uses codebase search for broad understanding
        - Automatically reads files for detailed inspection
        - Uses definition name lists to understand code structure
        - Uses file lists to explore project organization
        - Uses codebase-wide search to quickly locate relevant code
    - Task management:
        - Automatically breaks down complex tasks into steps
        - Uses the new-task tool to manage major subtasks
        - Tracks progress and dependencies
        - Uses the task-completion tool to verify task status
    - Code operations:
        - Uses search and replace for systematic code changes
        - Automatically edits files for precise modifications
        - Applies diffs for complex changes
        - Uses content-insertion tools for code block management
        - Validates changes and checks for errors
    - Git snapshot feature:
        - Uses `save_checkpoint` to save code-state snapshots, automatically recording important modification points
        - Uses `restore_checkpoint` to roll back to a previous snapshot when needed
        - Uses `get_checkpoint_diff` to view the specific changes between snapshots
        - The snapshot feature is independent for each task and does not affect your main Git repository
        - All snapshot operations are performed on hidden branches, keeping the main branch clean
        - You can start by sending one or more of the following messages:
            - "Create a git snapshot before starting this task"
            - "Save current changes as a git snapshot with description 'completed basic functionality'"
            - "Show me the changes between the last two git snapshots"
            - "This change is problematic, roll back to the previous git snapshot"
            - "Compare the differences between the initial git snapshot and current state"
    - Research and integration:
        - Automatically uses browser operations to research solutions and best practices (requires model support for Computer Use)
        - Automatically runs commands (requires manually configuring allowed commands on the **Settings** ⚙️ page)
        - Automatically uses MCP tools to access external resources and data (requires manually configuring MCP servers on the **MCP Servers** page)
    - Communication and validation:
        - Provides clear explanations for each operation
        - Asks follow-up questions for clarification
        - Records important changes
        - Uses appropriate tests to validate results
    - Note: Agent mode is not affected by the checkboxes in the auto-approval area
- **Code Assistant** (Code mode): For writing, refactoring, and fixing code, and running commands
- **Software Architect** (Architect mode): For high-level technical design and system architecture (cannot write code or execute commands)
- **Technical Assistant** (Ask mode): For codebase queries and concept discussions (cannot write code or execute commands)
- Open the **Prompts** page from CoolCline's top-right corner to create custom role modes
- Custom chat modes appear below the **Ask** mode
- Custom roles are saved locally and persist between CoolCline sessions
The switch button is located at the bottom center of the input box. Dropdown list options are maintained on the **Settings** page.
- Open the **Settings** ⚙️ page; in the top area you'll see the settings location, which has a **default** option. By setting this, you get the dropdown list you want.
- Here you can create and manage multiple LLM Provider options.
- You can even create separate options for different models of the same LLM Provider; each option saves the complete configuration for that LLM Provider.
- After creation, you can switch configurations in real time at the bottom of the chat input box.
- Configuration information includes the LLM Provider, API Key, Model, and other options specific to that LLM Provider.
- The steps to create an LLM Provider option are as follows (step 4 can be swapped with steps 2 and 3):
    1. Click the + button; the system automatically copies an option based on the current configuration, named "xx (copy)"
    2. Click the ✏️ icon to edit the option name
    3. Click ☑️ to save the option name
    4. Adjust core parameters such as Model as needed (the edit box saves automatically when it loses focus)
- Naming suggestion: use the structure "Provider-ModelVersion-Feature", for example: openrouter-deepseek-v3-free, openrouter-deepseek-r1-free, deepseek-v3-official, deepseek-r1-official.
After entering a question in the input box, you can click the ✨ button at the bottom to enhance your question. You can set the LLM Provider used for **Prompt Enhancement** in the **Auxiliary Function Prompt Configuration** section of the **Prompts** page.
Associate the most relevant context to save your token budget. Type `@` in the input box when you need to explicitly provide context:

- `@Problems` – Provide workspace errors/warnings for CoolCline to fix
- `@Paste URL to fetch contents` – Fetch documentation from a URL and convert it to Markdown; no need to manually type `@`, just paste the link
- `@Add Folder` – Provide folders to CoolCline; after typing `@`, enter the folder name for fuzzy search and quick selection
- `@Add File` – Provide files to CoolCline; after typing `@`, enter the file name for fuzzy search and quick selection
- `@Git Commits` – Provide Git commits or diff lists for CoolCline to analyze code history
- Add Terminal Content to Context – No `@` needed; select content in the terminal, right-click, and choose `CoolCline: Add Terminal Content to Context`
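The fuzzy search used when picking files and folders after `@` can be pictured as simple in-order subsequence matching. The sketch below is a conceptual Python illustration only (the file names are made up, and CoolCline's actual matcher may score and rank candidates differently):

```python
def fuzzy_match(query: str, candidate: str) -> bool:
    """Return True if every character of `query` appears in `candidate`
    in the same order (a subsequence match), ignoring case."""
    it = iter(candidate.lower())
    # `ch in it` advances the iterator, so characters must appear in order.
    return all(ch in it for ch in query.lower())

files = ["src/extension.ts", "src/core/checkpoint.ts", "README.md"]
# Typing "chkpt" after @ narrows the list to the checkpoint file:
matches = [f for f in files if fuzzy_match("chkpt", f)]
```

A query like `chkpt` matches `src/core/checkpoint.ts` because its letters occur in order inside the path, which is why a few characters are usually enough to select a file.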
To use CoolCline assistance in a controlled manner (preventing uncontrolled actions), the application provides three approval options:

- Manual approval: Review and approve each step to maintain full control; click Allow or Cancel in the application's prompts for saves, command execution, and so on
- Auto approval: Grant CoolCline the ability to run tasks without interruption (recommended in Agent mode for full autonomy)
    - Auto-approval settings: Check or uncheck the options you want to control above the chat input box or on the Settings page
    - To allow automatic command approval, go to the **Settings** page and, in the **Command Line** area, add the commands you want to auto-approve, such as `npm install`, `npm run`, and `npm test`
- Hybrid: Auto-approve specific operations (like file writes) but require confirmation for higher-risk tasks (it is strongly recommended **not** to configure `git add`, `git commit`, etc.; these should be done manually)

Regardless of your preference, you always have final control over CoolCline's operations.
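The command allowlist behaves like a prefix check: a command runs without prompting only when it starts with an entry you approved. A minimal sketch of the idea in Python (illustrative only, not CoolCline's actual implementation):

```python
ALLOWED = ["npm install", "npm run", "npm test"]

def auto_approved(command: str) -> bool:
    """Auto-approve only commands that begin with an allowed entry."""
    cmd = command.strip()
    # Match the exact command or the command followed by extra arguments.
    return any(cmd == p or cmd.startswith(p + " ") for p in ALLOWED)

auto_approved("npm run build")    # approved: extends "npm run"
auto_approved("git commit -m x")  # not approved: still prompts for manual confirmation
```

This is also why keeping `git add` and `git commit` off the list is safe by default: anything not matching an allowed prefix falls back to manual approval.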
- Use an LLM Provider and Model with strong capabilities
- Start with a clear, high-level task description
- Use `@` to provide clearer, more accurate context from the codebase, files, URLs, Git commits, and so on
- Use the Git snapshot feature to manage important changes; you can start by sending one or more of these messages:
    - "Create a git snapshot before starting this task"
    - "Save current changes as a git snapshot with description 'completed basic functionality'"
    - "Show me the changes between the last two git snapshots"
    - "This change is problematic, roll back to the previous git snapshot"
    - "Compare the differences between the initial git snapshot and current state"
- Configure allowed commands on the **Settings** page and MCP servers on the **MCP Servers** page; Agent will automatically use these commands and MCP servers
- It's recommended **not** to add `git add` and `git commit` to the command settings; you should run these manually
- Consider switching to a specialized mode (Code/Architect/Ask) for specific subtasks when needed:
    - Code mode: Best for direct coding tasks and implementation
    - Architect mode: Suitable for planning and design discussions
    - Ask mode: Perfect for learning and exploring concepts
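The snapshot workflow described above is a save → diff → restore cycle, with each task's checkpoints kept apart from your main history. The toy Python model below mimics that cycle with in-memory state; CoolCline itself uses hidden Git branches, so the class and file names here are purely illustrative:

```python
class TaskCheckpoints:
    """Toy model of per-task snapshots: save, diff, and restore file
    states without ever touching the 'main' working history."""

    def __init__(self):
        self._snapshots = []  # task-local history, hidden from the main branch

    def save_checkpoint(self, files: dict) -> int:
        """Record a copy of the current file states; return a checkpoint id."""
        self._snapshots.append(dict(files))
        return len(self._snapshots) - 1

    def get_checkpoint_diff(self, a: int, b: int) -> dict:
        """Map each changed path to its (old, new) contents."""
        old, new = self._snapshots[a], self._snapshots[b]
        return {path: (old.get(path), new.get(path))
                for path in old.keys() | new.keys()
                if old.get(path) != new.get(path)}

    def restore_checkpoint(self, cid: int) -> dict:
        """Return the file states exactly as they were at checkpoint `cid`."""
        return dict(self._snapshots[cid])

cp = TaskCheckpoints()
first = cp.save_checkpoint({"app.py": "print('v1')"})
cp.save_checkpoint({"app.py": "print('v2')"})
# Rolling back reproduces the first snapshot's state untouched.
restored = cp.restore_checkpoint(first)
```

Because every snapshot is task-local, restoring or diffing checkpoints never rewrites the branch you are actually working on — the same property the "hidden branches" design gives the real feature.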
CoolCline can also open browser sessions to:

- Launch local or remote web applications
- Click, type, scroll, and take screenshots
- Collect console logs to debug runtime or UI/UX issues

Perfect for end-to-end testing or visually verifying changes without constant copy-pasting.

- Check **Approve Browser Operations** in the **Auto Approval** area (requires LLM Provider support for Computer Use)
- On the **Settings** page, you can set other options in the **Browser Settings** area
- MCP official documentation: https://modelcontextprotocol.io/introduction

Extend CoolCline through the Model Context Protocol (MCP) with commands like:

- "Add a tool to manage AWS EC2 resources."
- "Add a tool to query company Jira."
- "Add a tool to pull latest PagerDuty events."

CoolCline can autonomously build and configure new tools (with your approval) to immediately expand its capabilities.
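An MCP server is typically registered through a small JSON entry mapping a server name to the command that launches it. The fragment below is a hypothetical example in the `mcpServers` format common to Cline-family tools — the server name, package name, and environment variable are placeholders, and the exact file CoolCline reads is managed from its **MCP Servers** page:

```json
{
  "mcpServers": {
    "jira": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server-jira"],
      "env": { "JIRA_API_TOKEN": "your-token-here" }
    }
  }
}
```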
- On the **Settings** page, you can enable sound effects and set the volume, so you get audio notifications when tasks complete (allowing you to multitask while CoolCline works)
- On the **Settings** page, you can configure other options
Two installation methods; choose one:

- Search for **CoolCline** in the editor's extension panel to install it directly
- Or get the `.vsix` file from the Marketplace / Open-VSX and drag and drop it into the editor

Tips:

- For a better experience, move the extension to the right side of the screen: right-click the CoolCline extension icon -> Move to -> Secondary Sidebar
- If you close the **Secondary Sidebar** and don't know how to reopen it, click the **Toggle Secondary Sidebar** button in the top-right corner of VSCode, or use the keyboard shortcut Ctrl + Shift + L
We welcome community contributions! To participate, refer to the instructions in the CONTRIBUTING file: English | 简体中文
CoolCline draws inspiration from the excellent features of the `Clines` open source community (thanks to all `Clines` project contributors!).
Please note that CoolCline makes no representations or warranties of any kind concerning any code, models, or other tools provided, any related third-party tools, or any output results. You assume all risk of using any such tools or output; such tools are provided on an "as is" and "as available" basis. Such risks may include but are not limited to intellectual property infringement, network vulnerabilities or attacks, bias, inaccuracies, errors, defects, viruses, downtime, property loss or damage, and/or personal injury. You are solely responsible for your use of any such tools or output, including but not limited to their legality, appropriateness, and results.
Alternative AI tools for CoolCline
Similar Open Source Tools


telemetry-airflow
This repository codifies the Airflow cluster that is deployed at workflow.telemetry.mozilla.org (behind SSO) and commonly referred to as "WTMO" or simply "Airflow". Some links relevant to users and developers of WTMO: * The `dags` directory in this repository contains some custom DAG definitions * Many of the DAGs registered with WTMO don't live in this repository, but are instead generated from ETL task definitions in bigquery-etl * The Data SRE team maintains a WTMO Developer Guide (behind SSO)

GraphRAG-Local-UI
GraphRAG Local with Interactive UI is an adaptation of Microsoft's GraphRAG, tailored to support local models and featuring a comprehensive interactive user interface. It allows users to leverage local models for LLM and embeddings, visualize knowledge graphs in 2D or 3D, manage files, settings, and queries, and explore indexing outputs. The tool aims to be cost-effective by eliminating dependency on costly cloud-based models and offers flexible querying options for global, local, and direct chat queries.

crewAI-tools
This repository provides a guide for setting up tools for crewAI agents to enhance functionality. It offers steps to equip agents with ready-to-use tools and create custom ones. Tools are expected to return strings for generating responses. Users can create tools by subclassing BaseTool or using the tool decorator. Contributions are welcome to enrich the toolset, and guidelines are provided for contributing. The development setup includes installing dependencies, activating virtual environment, setting up pre-commit hooks, running tests, static type checking, packaging, and local installation. The goal is to empower AI solutions through advanced tooling.

patchwork
PatchWork is an open-source framework designed for automating development tasks using large language models. It enables users to automate workflows such as PR reviews, bug fixing, security patching, and more through a self-hosted CLI agent and preferred LLMs. The framework consists of reusable atomic actions called Steps, customizable LLM prompts known as Prompt Templates, and LLM-assisted automations called Patchflows. Users can run Patchflows locally in their CLI/IDE or as part of CI/CD pipelines. PatchWork offers predefined patchflows like AutoFix, PRReview, GenerateREADME, DependencyUpgrade, and ResolveIssue, with the flexibility to create custom patchflows. Prompt templates are used to pass queries to LLMs and can be customized. Contributions to new patchflows, steps, and the core framework are encouraged, with chat assistants available to aid in the process. The roadmap includes expanding the patchflow library, introducing a debugger and validation module, supporting large-scale code embeddings, parallelization, fine-tuned models, and an open-source GUI. PatchWork is licensed under AGPL-3.0 terms, while custom patchflows and steps can be shared using the Apache-2.0 licensed patchwork template repository.

ChatData
ChatData is a robust chat-with-documents application designed to extract information and provide answers by querying the MyScale free knowledge base or uploaded documents. It leverages the Retrieval Augmented Generation (RAG) framework, millions of Wikipedia pages, and arXiv papers. Features include self-querying retriever, VectorSQL, session management, and building a personalized knowledge base. Users can effortlessly navigate vast data, explore academic papers, and research documents. ChatData empowers researchers, students, and knowledge enthusiasts to unlock the true potential of information retrieval.

crewAI-tools
The crewAI Tools repository provides a guide for setting up tools for crewAI agents, enabling the creation of custom tools to enhance AI solutions. Tools play a crucial role in improving agent functionality. The guide explains how to equip agents with a range of tools and how to create new tools. Tools are designed to return strings for generating responses. There are two main methods for creating tools: subclassing BaseTool and using the tool decorator. Contributions to the toolset are encouraged, and the development setup includes steps for installing dependencies, activating the virtual environment, setting up pre-commit hooks, running tests, static type checking, packaging, and local installation. Enhance AI agent capabilities with advanced tooling.

VoiceStreamAI
VoiceStreamAI is a Python 3-based server and JavaScript client solution for near-realtime audio streaming and transcription using WebSocket. It employs Huggingface's Voice Activity Detection (VAD) and OpenAI's Whisper model for accurate speech recognition. The system features real-time audio streaming, modular design for easy integration of VAD and ASR technologies, customizable audio chunk processing strategies, support for multilingual transcription, and secure sockets support. It uses a factory and strategy pattern implementation for flexible component management and provides a unit testing framework for robust development.

btp-genai-starter-kit
This repository provides a quick way for users of the SAP Business Technology Platform (BTP) to learn how to use generative AI with BTP services. It guides users through setting up the necessary infrastructure, deploying AI models, and running genAI experiments on SAP BTP. The repository includes scripts, examples, and instructions to help users get started with generative AI on the SAP BTP platform.

WindowsAgentArena
Windows Agent Arena (WAA) is a scalable Windows AI agent platform designed for testing and benchmarking multi-modal, desktop AI agents. It provides researchers and developers with a reproducible and realistic Windows OS environment for AI research, enabling testing of agentic AI workflows across various tasks. WAA supports deploying agents at scale using Azure ML cloud infrastructure, allowing parallel running of multiple agents and delivering quick benchmark results for hundreds of tasks in minutes.

KrillinAI
KrillinAI is a video subtitle translation and dubbing tool based on AI large models, featuring speech recognition, intelligent sentence segmentation, professional translation, and one-click deployment of the entire process. It provides a one-stop workflow from video downloading to the final product, empowering cross-language cultural communication with AI. The tool supports multiple languages for input and translation, integrates features like automatic dependency installation, video downloading from platforms like YouTube and Bilibili, high-speed subtitle recognition, intelligent subtitle segmentation and alignment, custom vocabulary replacement, professional-level translation engine, and diverse external service selection for speech and large model services.

RepoAgent
RepoAgent is an LLM-powered framework designed for repository-level code documentation generation. It automates the process of detecting changes in Git repositories, analyzing code structure through AST, identifying inter-object relationships, replacing Markdown content, and executing multi-threaded operations. The tool aims to assist developers in understanding and maintaining codebases by providing comprehensive documentation, ultimately improving efficiency and saving time.

OllamaSharp
OllamaSharp is a .NET binding for the Ollama API, providing an intuitive API client to interact with Ollama. It offers support for all Ollama API endpoints, real-time streaming, progress reporting, and an API console for remote management. Users can easily set up the client, list models, pull models with progress feedback, stream completions, and build interactive chats. The project includes a demo console for exploring and managing the Ollama host.

storm
STORM is a LLM system that writes Wikipedia-like articles from scratch based on Internet search. While the system cannot produce publication-ready articles that often require a significant number of edits, experienced Wikipedia editors have found it helpful in their pre-writing stage. **Try out our [live research preview](https://storm.genie.stanford.edu/) to see how STORM can help your knowledge exploration journey and please provide feedback to help us improve the system 🙏!**

tonic_validate
Tonic Validate is a framework for the evaluation of LLM outputs, such as Retrieval Augmented Generation (RAG) pipelines. Validate makes it easy to evaluate, track, and monitor your LLM and RAG applications. Validate allows you to evaluate your LLM outputs through the use of our provided metrics which measure everything from answer correctness to LLM hallucination. Additionally, Validate has an optional UI to visualize your evaluation results for easy tracking and monitoring.

ai-starter-kit
SambaNova AI Starter Kits is a collection of open-source examples and guides designed to facilitate the deployment of AI-driven use cases for developers and enterprises. The kits cover various categories such as Data Ingestion & Preparation, Model Development & Optimization, Intelligent Information Retrieval, and Advanced AI Capabilities. Users can obtain a free API key using SambaNova Cloud or deploy models using SambaStudio. Most examples are written in Python but can be applied to any programming language. The kits provide resources for tasks like text extraction, fine-tuning embeddings, prompt engineering, question-answering, image search, post-call analysis, and more.
For similar tasks

Botright
Botright is a tool designed for browser automation that focuses on stealth and captcha solving. It uses a real Chromium-based browser for enhanced stealth and offers features like browser fingerprinting and AI-powered captcha solving. The tool is suitable for developers looking to automate browser tasks while maintaining anonymity and bypassing captchas. Botright is available in async mode and can be easily integrated with existing Playwright code. It provides solutions for various captchas such as hCaptcha, reCaptcha, and GeeTest, with high success rates. Additionally, Botright offers browser stealth techniques and supports different browser functionalities for seamless automation.


cursor-tools
cursor-tools is a CLI tool designed to enhance AI agents with advanced skills, such as web search, repository context, documentation generation, GitHub integration, Xcode tools, and browser automation. It provides features like Perplexity for web search, Gemini 2.0 for codebase context, and Stagehand for browser operations. The tool requires API keys for Perplexity AI and Google Gemini, and supports global installation for system-wide access. It offers various commands for different tasks and integrates with Cursor Composer for AI agent usage.

LLM-Navigation
LLM-Navigation is a repository dedicated to documenting learning records related to large models, including basic knowledge, prompt engineering, building effective agents, model expansion capabilities, security measures against prompt injection, and applications in various fields such as AI agent control, browser automation, financial analysis, 3D modeling, and tool navigation using MCP servers. The repository aims to organize and collect information for personal learning and self-improvement through AI exploration.

LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.

jupyter-ai
Jupyter AI connects generative AI with Jupyter notebooks. It provides a user-friendly and powerful way to explore generative AI models in notebooks and improve your productivity in JupyterLab and the Jupyter Notebook. Specifically, Jupyter AI offers: * An `%%ai` magic that turns the Jupyter notebook into a reproducible generative AI playground. This works anywhere the IPython kernel runs (JupyterLab, Jupyter Notebook, Google Colab, Kaggle, VSCode, etc.). * A native chat UI in JupyterLab that enables you to work with generative AI as a conversational assistant. * Support for a wide range of generative model providers, including AI21, Anthropic, AWS, Cohere, Gemini, Hugging Face, NVIDIA, and OpenAI. * Local model support through GPT4All, enabling use of generative AI models on consumer grade machines with ease and privacy.

khoj
Khoj is an open-source, personal AI assistant that extends your capabilities by creating always-available AI agents. You can share your notes and documents to extend your digital brain, and your AI agents have access to the internet, allowing you to incorporate real-time information. Khoj is accessible on Desktop, Emacs, Obsidian, Web, and Whatsapp, and you can share PDF, markdown, org-mode, notion files, and GitHub repositories. You'll get fast, accurate semantic search on top of your docs, and your agents can create deeply personal images and understand your speech. Khoj is self-hostable and always will be.

mojo
Mojo is a new programming language that bridges the gap between research and production by combining Python syntax and ecosystem with systems programming and metaprogramming features. Mojo is still young, but it is designed to become a superset of Python over time.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.