CLI
Stars: 532
Bito CLI provides a command line interface to the Bito AI chat functionality, allowing users to interact with the AI through commands. It supports complex automation and workflows, with features like long prompts and slash commands. Users can install Bito CLI on Mac, Linux, and Windows systems using various methods. The tool also offers configuration options for AI model type, access key management, and output language customization. Bito CLI is designed to enhance user experience in querying AI models and automating tasks through the command line interface.
README:
Bito CLI (Command Line Interface) provides a command line interface to the Bito AI chat functionality. Over time, CLI will add more functions and new command options to support complex automation and workflows.
This is a very early Alpha version. We would love to get your feedback on the new features or improvements. Please write us at [email protected] or [email protected].
Terminal
- Bash (for Mac and Linux)
- CMD (for Windows)

- Execute Chat: Run the `bito` command on the command prompt to get started. Ask anything you want help with, such as `awk command to print first and last column`. Note: Bito CLI supports long prompts through multiline input. To complete and submit the prompt, press `Ctrl+D`; the Enter/Return key adds a new line to the input. (A sample session is shown after this list.)
- Exit Bito CLI: To quit/exit from Bito CLI, type `quit` and press `Ctrl+D`.
- Terminate: Press `Ctrl+C` to terminate Bito CLI.
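For reference, a minimal interactive session might look like this (the prompt text and the shape of the output are illustrative):

```
$ bito
bito> awk command to print first and last column
# press Ctrl+D to submit; Enter/Return only adds another line
# ...Bito streams its answer, then returns to the bito> prompt...
bito> quit
# press Ctrl+D to exit; Ctrl+C terminates immediately
```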
Check out the video below to get started with Bito CLI
We recommend you use the following methods to install Bito CLI.
`sudo curl https://alpha.bito.ai/downloads/cli/install.sh -fsSL | bash`
(curl will always download the latest version)
Arch and Arch-based distro users can install it from the AUR:
`yay -S bito-cli` or `paru -S bito-cli`
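Whichever method you use, a quick sanity check after installation might look like this (assuming the installer placed `bito` on your PATH):

```
# install (the script always fetches the latest version)
sudo curl https://alpha.bito.ai/downloads/cli/install.sh -fsSL | bash

# confirm the binary is reachable and check the installed version
which bito
bito --version
```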
- Before using Homebrew, please make sure that you uninstall any previously installed versions of Bito CLI using the uninstall script provided here.
- Once the above is done, you can use the following commands to install Bito CLI using Homebrew (a consolidated sequence is shown after this list):
  - First tap the CLI repo using `brew tap gitbito/bitocli`; this should be a one-time action and is not required every time.
  - Now you can install Bito CLI using the following command:
    - `brew install bito-cli` - this should install Bito CLI based upon your machine architecture.
- To update Bito CLI to the latest version, use the following commands:
  - Please make sure you always run `brew update` before upgrading to avoid any errors.
  - `brew update` - this will update all the required packages before upgrading.
  - `brew upgrade bito-cli` - once the above is done, this will update Bito CLI to the latest version.
- To uninstall Bito CLI, you can either use the uninstall command from here or use the following command:
  - `brew uninstall bito-cli` - this should uninstall Bito CLI completely from your system.
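Collected in one place, the Homebrew steps above amount to the following sequence:

```
brew tap gitbito/bitocli     # one-time tap of the Bito CLI repo
brew install bito-cli        # installs the build for your architecture

# upgrading later:
brew update                  # always update first to avoid errors
brew upgrade bito-cli

# uninstalling:
brew uninstall bito-cli
```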
Note for Mac users: You might face an issue related to verification, for which you will have to manually perform the steps from here (we are working on fixing it as soon as possible).
- Install Bito CLI through the MSI using this installer.
- On Windows 11, you might get a notification related to publisher verification. Click on "Show more" or "More info" and then click on "Run anyway" (we are working on fixing this as soon as possible).
- Once the installation is complete, start a new command prompt and run `bito` to get started.
`sudo curl https://alpha.bito.ai/downloads/cli/uninstall.sh -fsSL | bash`
(this will completely uninstall Bito CLI and all of its components)
For Windows, you can uninstall Bito CLI just like any other software, from the Control Panel. You can still refer to the link provided here.
While it's not recommended, you can download the Bito CLI binary from our repository and install it manually. The binary is available for Linux and macOS, in x86 and ARM architectures.
- Download the Bito CLI binary specific to your OS platform from here (please download the latest version for all new updates).
- Start the terminal, go to the location where you downloaded the binary, and rename the downloaded file (in the command below, use the bito-* filename you downloaded) to `bito`:
  `mv bito-<os>-<arch> bito`
- Make the file executable using the following command:
  `chmod +x ./bito`
- Copy the binary to /usr/local/bin using the following command:
  `sudo cp ./bito /usr/local/bin`
- Set the PATH variable so that Bito CLI is always accessible:
  `PATH=$PATH:/usr/local/bin`
- Run Bito CLI with the `bito` command. If the PATH variable is not set, you will need to run the command with the complete or relative path to the bito executable binary. (The complete sequence is collected after this list.)
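As a single copy-pasteable sequence, the manual installation steps above look like this (substitute the actual bito-<os>-<arch> filename you downloaded):

```
mv bito-<os>-<arch> bito            # rename the downloaded binary
chmod +x ./bito                     # make it executable
sudo cp ./bito /usr/local/bin       # install it system-wide
export PATH=$PATH:/usr/local/bin    # ensure /usr/local/bin is on PATH
bito --version                      # verify the install
```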
- When using Bito CLI, always move to the directory containing Bito CLI prior to running it, or:
- Set the PATH variable so that Bito CLI is always accessible:
  - Follow the instructions as per this link.
  - Edit the "Path" variable and add the new path of the location where Bito CLI is installed on your machine.
- On Mac/Linux: run `bito --help` or `bito config --help`
- On Windows: run `bito --help` or `bito config --help`
Slash commands are introduced in Bito CLI to make features like "AI that understands your code" available via the CLI. With this, you can access your code index created by the Bito extension in your IDE. Slash commands can be used to quickly execute actions like viewing all local code indexes, selecting a particular local code index, and finally making LCA queries against that index.
- Type "/" in bito> prompt of interactive mode of Bito CLI and hit ENTER or TAB to view all available commands.
- Type "/[command_name]" in bito> prompt of interactive mode of Bito CLI and hit ENTER or TAB to view all available options for that command.
- Type "/[command_name] [option_name]" in bito> prompt of interactive mode of Bito CLI and hit ENTER or TAB to execute the available option.
- Run `bito --help` for help related to slash commands (a sketch of a session follows this list).
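Sketched out in an interactive session, the pattern above looks like this (the command and option names are placeholders; the actual list depends on your local code indexes):

```
bito> /                                # ENTER or TAB lists all slash commands
bito> /[command_name]                  # ENTER or TAB lists that command's options
bito> /[command_name] [option_name]    # ENTER or TAB executes the selected option
```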
- Run `bito -v` or `bito --version` to print the version number of the currently installed Bito CLI.
- Run `bito -p writedocprompt.txt -f mycode.js` for non-interactive mode in Bito (where writedocprompt.txt contains your prompt text, such as "Explain the code below in brief", and mycode.js contains the actual code on which the action is to be performed).
- Run `bito -p writedocprompt.txt` to read the content from standard input in Bito (where writedocprompt.txt contains your prompt text, such as "Explain the code below in brief", and the input provided contains the actual content on which the action is to be performed).
- Run `cat file.txt | bito` to directly cat a file, pipe it to bito, and get an instant result for your query.
- Run `cat inventory.sql | bito -p testdataprompt.txt > testdata.sql` to redirect your output directly to a file (where -p can be used along with cat to perform a prompt-related action on the given content).
- Run `cat inventory.sql | bito -c runcontext.txt -p testdataprompt.txt > testdata.sql` to store context/conversation history for non-interactive mode in the file runcontext.txt, for use with the next set of commands when prior context is needed. If runcontext.txt is not present, it will be created. Please provide a new file or an existing context file created by bito via the -c option. With the -c option, context is now supported in non-interactive mode (a worked example follows this list).
- Run `echo "give me code for bubble sort in python" | bito` to instantly get a response to your query using Bito CLI.
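Putting these options together, a typical non-interactive run might look like this (writedocprompt.txt, mycode.js, and the follow-up query are illustrative; reusing the context file with piped input is shown on the assumption that -c composes with the piping usage above):

```
# create a prompt file; everything after an unescaped # is a comment
echo "Explain the code below in brief" > writedocprompt.txt

# run Bito on a code file in non-interactive mode
bito -p writedocprompt.txt -f mycode.js

# pipe content in, keep conversation history in runcontext.txt,
# and redirect the answer to a file
cat inventory.sql | bito -c runcontext.txt -p testdataprompt.txt > testdata.sql

# a follow-up query can reuse the stored context via the same -c file
echo "add indexes for the test data above" | bito -c runcontext.txt
```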
- Run `bito -v` or `bito --version` to print the version number of the currently installed Bito CLI.
- Run `bito -p writedocprompt.txt -f mycode.js` for non-interactive mode in Bito (where writedocprompt.txt contains your prompt text, such as "Explain the code below in brief", and mycode.js contains the actual code on which the action is to be performed).
- Run `bito -p writedocprompt.txt` to read the content from standard input in Bito (where writedocprompt.txt contains your prompt text, such as "Explain the code below in brief", and the input provided contains the actual content on which the action is to be performed).
- Run `type file.txt | bito` to take input from a file on Windows, pipe it to bito, and get an instant result for your query.
- Run `type inventory.sql | bito -p testdataprompt.txt > testdata.sql` to redirect your output directly to a file (where -p can be used along with type to perform a prompt-related action on the given content).
- Run `type inventory.sql | bito -c runcontext.txt -p testdataprompt.txt > testdata.sql` to store context/conversation history for non-interactive mode in the file runcontext.txt, for use with the next set of commands when prior context is needed. If runcontext.txt is not present, it will be created. Please provide a new file or an existing context file created by bito via the -c option. With the -c option, context is now supported in non-interactive mode (a Windows example follows this list).
- Run `echo "give me code for bubble sort in python" | bito` to instantly get a response to your query using Bito CLI.
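The same flow on Windows CMD, using `type` in place of `cat` (file names are the illustrative ones used above):

```
:: create a prompt file (Windows CMD)
echo Explain the code below in brief> writedocprompt.txt

:: run Bito on a code file in non-interactive mode
bito -p writedocprompt.txt -f mycode.js

:: pipe content in, keep context in runcontext.txt, redirect output to a file
type inventory.sql | bito -c runcontext.txt -p testdataprompt.txt > testdata.sql
```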
Anything after `#` in your prompt file will be considered a comment by Bito CLI and won't be part of your prompt.
You can use `\#` as an escape sequence to make `#` part of your prompt rather than the start of a comment.
- Give me an example of bubble sort in python # everything written here will be considered as a comment now.
- Explain what this part of the code does: \#include<stdio.h>
  - In the example above, `\#` is used as an escape sequence to include `#` as part of the prompt.
- #This will be considered a comment, as it has # at the start of the line itself.
To treat # as a normal character rather than as a special character marking the start of a comment (the default behavior), use the -i/--ignore flag in your Bito CLI command.
The -i/--ignore flag tells Bito CLI not to treat `#` specially and to use it as part of the prompt for processing.
E.g., `bito -p prompt.txt -i` will make sure that even if `#` is present in your prompt file, it won't be considered a comment and your file will be processed as-is.
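For example, given the following prompt file, the comment line is stripped and `\#` survives as a literal `#` by default; passing -i instead keeps every `#` literally (prompt.txt is an illustrative name):

```
$ cat prompt.txt
# this line is a comment and is not sent as part of the prompt
Explain what this directive does: \#include<stdio.h>

$ bito -p prompt.txt      # default: # starts a comment, \# escapes it
$ bito -p prompt.txt -i   # -i/--ignore: every # is taken literally
```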
Use the `{{%input%}}` macro in the prompt file to refer to the contents of the file provided via the -f option.
Example: To check if a file contains JS code or not, you can create a prompt file checkifjscode.txt with following prompt:
```
Context is provided below within contextstart and contextend
contextstart
{{%input%}}
contextend
Check if content provided in context is JS code.
```
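With that prompt file saved, a run might look like this (app.js is an illustrative file name; its contents replace the {{%input%}} macro):

```
bito -p checkifjscode.txt -f app.js
```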
Here are two examples for you to see My Prompt in action:
- How to Create Git Commit Messages and Markdown Documentation with Ease using Bito CLI My Prompt:
- How to generate test data using Bito CLI My Prompt:
- Run `bito config -l` or `bito config --list` to list all config variables and values.
- Run `bito config -e` or `bito config --edit` to open the config file in the default editor.
```
bito:
  access_key: ""
  email: [email protected]
  preferred_ai_model: ADVANCED
settings:
  auto_update: true
  max_context_entries: 20
```
By default, the AI model type is set to `ADVANCED`, and it can be overridden by running `bito -m <BASIC/ADVANCED>`. The model type is used for AI queries in the current session. The model type can be set to `BASIC` or `ADVANCED`, and is case insensitive.
"ADVANCED" refers to AI models like GPT-4o, Claude Sonnet 3.5, and best in class AI models, while "BASIC" refers to AI models like GPT-4o mini and similar models.
When using BASIC AI models, your prompts and the chat's memory are limited to 40,000 characters (about 18 single-spaced pages). However, with ADVANCED AI models, your prompts and the chat memory can go up to 240,000 characters (about 110 single-spaced pages). This means that ADVANCED models can process your entire code files, leading to more accurate answers.
If you are seeking the best results for complex tasks, then choose ADVANCED AI models.
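For example, to force a model for just the current session (the value is case insensitive; combining -m with the non-interactive flags is shown on the assumption that the flags compose as described above):

```
bito -m BASIC                                         # interactive session on the BASIC model
bito -m ADVANCED -p writedocprompt.txt -f mycode.js   # one-off query on the ADVANCED model
```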
Bito CLI also prints the configured model and the one used for your current session on standard error, for your reference. If you run `bito`, you should see "Model configured" as BASIC or ADVANCED. This is the model configured in your CLI configuration (which can be accessed via `bito config -e`).
If you start making queries, then depending upon your Bito billing plan: if you are on a Free plan, the model will automatically switch to BASIC and you will see "Model in use:" printed as BASIC.
If you are on a Paid plan and haven't exhausted your advanced queries, you will see "Model in use:" printed as ADVANCED.
An Access Key can be created in the Bito Web UI and used in Bito CLI.
To create an access key, do the following:
- Log in to your account at: https://alpha.bito.ai
- Once logged in, open: https://alpha.bito.ai/home/settings/advanced
- Click on the "Create new key" button under the "Bito Access Key" section to create a new key and copy it.
- Make sure to protect your key, and do not check it into any code, to avoid accidental leakage.
- In case you think your key is compromised, you can delete the existing key and create a new key at any time.
The Access Key is an alternate authentication mechanism to Email & OTP based authentication.
The Access Key can be persisted in Bito CLI by running `bito config -e`.
Such a persisted Access Key can be overridden for a transient session by running `bito -k <access-key>` or `bito --key <access-key>`.
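In practice, with `<access-key>` standing in for the key copied from the Web UI (piping input while passing -k is shown on the assumption that the flag combines with the non-interactive usage above):

```
bito config -e                        # persist the key by setting access_key in the config
bito --key <access-key>               # or override it for this transient session only
echo "hello" | bito -k <access-key>   # transient key with non-interactive input
```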
By default, Bito CLI generates output in English. You can change the output language to your preferred language from here.
As of now, it takes 30 minutes for a language change to reflect in the CLI while the CLI is running. For the change to reflect immediately, you can exit the current CLI session using `Ctrl+C` and run the CLI again using `bito`.
- Unicode characters (used by other languages) might not be readily supported in the command prompt if you are on Windows 10 or below. You can run the command `chcp 936` in cmd prior to using bito to support Unicode characters on Windows 10 or below (to undo this setting, you can follow this link).
- If you are on Windows 11, you shouldn't encounter any such issues.
Copyright (C) 2021, Bito Inc - All Rights Reserved
Similar Open Source Tools
cog-comfyui
Cog-comfyui allows users to run ComfyUI workflows on Replicate. ComfyUI is a visual programming tool for creating and sharing generative art workflows. With cog-comfyui, users can access a variety of pre-trained models and custom nodes to create their own unique artworks. The tool is easy to use and does not require any coding experience. Users simply need to upload their API JSON file and any necessary input files, and then click the "Run" button. Cog-comfyui will then generate the output image or video file.
gpt-subtrans
GPT-Subtrans is an open-source subtitle translator that utilizes large language models (LLMs) as translation services. It supports translation between any language pairs that the language model supports. Note that GPT-Subtrans requires an active internet connection, as subtitles are sent to the provider's servers for translation, and their privacy policy applies.
dravid
Dravid (DRD) is an advanced, AI-powered CLI coding framework designed to follow user instructions until the job is completed, including fixing errors. It can generate code, fix errors, handle image queries, manage file operations, integrate with external APIs, and provide a development server with error handling. Dravid is extensible and requires Python 3.7+ and CLAUDE_API_KEY. Users can interact with Dravid through CLI commands for various tasks like creating projects, asking questions, generating content, handling metadata, and file-specific queries. It supports use cases like Next.js project development, working with existing projects, exploring new languages, Ruby on Rails project development, and Python project development. Dravid's project structure includes directories for source code, CLI modules, API interaction, utility functions, AI prompt templates, metadata management, and tests. Contributions are welcome, and development setup involves cloning the repository, installing dependencies with Poetry, setting up environment variables, and using Dravid for project enhancements.
python-sc2
python-sc2 is an easy-to-use library for writing AI Bots for StarCraft II in Python 3. It aims for simplicity and ease of use while providing both high and low level abstractions. The library covers only the raw scripted interface and intends to help new bot authors with added functions. Users can install the library using pip and need a StarCraft II executable to run bots. The API configuration options allow users to customize bot behavior and performance. The community provides support through Discord servers, and users can contribute to the project by creating new issues or pull requests following style guidelines.
azure-search-openai-javascript
This sample demonstrates a few approaches for creating ChatGPT-like experiences over your own data using the Retrieval Augmented Generation pattern. It uses Azure OpenAI Service to access the ChatGPT model (gpt-35-turbo), and Azure AI Search for data indexing and retrieval.
openui
OpenUI is a tool designed to simplify the process of building UI components by allowing users to describe UI using their imagination and see it rendered live. It supports converting HTML to React, Svelte, Web Components, etc. The tool is open source and aims to make UI development fun, fast, and flexible. It integrates with various AI services like OpenAI, Groq, Gemini, Anthropic, Cohere, and Mistral, providing users with the flexibility to use different models. OpenUI also supports LiteLLM for connecting to various LLM services and allows users to create custom proxy configs. The tool can be run locally using Docker or Python, and it offers a development environment for quick setup and testing.
h2o-llmstudio
H2O LLM Studio is a framework and no-code GUI designed for fine-tuning state-of-the-art large language models (LLMs). With H2O LLM Studio, you can easily and effectively fine-tune LLMs without the need for any coding experience. The GUI is specially designed for large language models, and you can finetune any LLM using a large variety of hyperparameters. You can also use recent finetuning techniques such as Low-Rank Adaptation (LoRA) and 8-bit model training with a low memory footprint. Additionally, you can use Reinforcement Learning (RL) to finetune your model (experimental), use advanced evaluation metrics to judge generated answers by the model, track and compare your model performance visually, and easily export your model to the Hugging Face Hub and share it with the community.
gpt-pilot
GPT Pilot is a core technology for the Pythagora VS Code extension, aiming to provide the first real AI developer companion. It goes beyond autocomplete, helping with writing full features, debugging, issue discussions, and reviews. The tool utilizes LLMs to generate production-ready apps, with developers overseeing the implementation. GPT Pilot works step by step like a developer, debugging issues as they arise. It can work at any scale, filtering out code to show only relevant parts to the AI during tasks. Contributions are welcome, with debugging and telemetry being key areas of focus for improvement.
airbyte_serverless
AirbyteServerless is a lightweight tool designed to simplify the management of Airbyte connectors. It offers a serverless mode for running connectors, allowing users to easily move data from any source to their data warehouse. Unlike the full Airbyte-Open-Source-Platform, AirbyteServerless focuses solely on the Extract-Load process without a UI, database, or transform layer. It provides a CLI tool, 'abs', for managing connectors, creating connections, running jobs, selecting specific data streams, handling secrets securely, and scheduling remote runs. The tool is scalable, allowing independent deployment of multiple connectors. It aims to streamline the connector management process and provide a more agile alternative to the comprehensive Airbyte platform.
fabric
Fabric is an open-source framework for augmenting humans using AI. It provides a structured approach to breaking down problems into individual components and applying AI to them one at a time. Fabric includes a collection of pre-defined Patterns (prompts) that can be used for a variety of tasks, such as extracting the most interesting parts of YouTube videos and podcasts, writing essays, summarizing academic papers, creating AI art prompts, and more. Users can also create their own custom Patterns. Fabric is designed to be easy to use, with a command-line interface and a variety of helper apps. It is also extensible, allowing users to integrate it with their own AI applications and infrastructure.
ultravox
Ultravox is a fast multimodal Language Model (LLM) that can understand both text and human speech in real-time without the need for a separate Audio Speech Recognition (ASR) stage. By extending Meta's Llama 3 model with a multimodal projector, Ultravox converts audio directly into a high-dimensional space used by Llama 3, enabling quick responses and potential understanding of paralinguistic cues like timing and emotion in human speech. The current version (v0.3) has impressive speed metrics and aims for further enhancements. Ultravox currently converts audio to streaming text and plans to emit speech tokens for direct audio conversion. The tool is open for collaboration to enhance this functionality.
PentestGPT
PentestGPT provides advanced AI and integrated tools to help security teams conduct comprehensive penetration tests effortlessly. Scan, exploit, and analyze web applications, networks, and cloud environments with ease and precision, without needing expert skills. The tool utilizes Supabase for data storage and management, and Vercel for hosting the frontend. It offers a local quickstart guide for running the tool locally and a hosted quickstart guide for deploying it in the cloud. PentestGPT aims to simplify the penetration testing process for security professionals and enthusiasts alike.
chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.
ai-town
AI Town is a virtual town where AI characters live, chat, and socialize. This project provides a deployable starter kit for building and customizing your own version of AI Town. It features a game engine, database, vector search, auth, text model, deployment, pixel art generation, background music generation, and local inference. You can customize your own simulation by creating characters and stories, updating spritesheets, changing the background, and modifying the background music.
webwhiz
WebWhiz is an open-source tool that allows users to train ChatGPT on website data to build AI chatbots for customer queries. It offers easy integration, data-specific responses, regular data updates, no-code builder, chatbot customization, fine-tuning, and offline messaging. Users can create and train chatbots in a few simple steps by entering their website URL, automatically fetching and preparing training data, training ChatGPT, and embedding the chatbot on their website. WebWhiz can crawl websites monthly, collect text data and metadata, and process text data using tokens. Users can train custom data, but bringing custom open AI keys is not yet supported. The tool has no limitations on context size but may limit the number of pages based on the chosen plan. WebWhiz SDK is available on NPM, CDNs, and GitHub, and users can self-host it using Docker or manual setup involving MongoDB, Redis, Node, Python, and environment variables setup. For any issues, users can contact [email protected].
For similar tasks
Magick
Magick is a groundbreaking visual AIDE (Artificial Intelligence Development Environment) for no-code data pipelines and multimodal agents. Magick can connect to other services and comes with nodes and templates well-suited for intelligent agents, chatbots, complex reasoning systems and realistic characters.
danswer
Danswer is an open-source Gen-AI Chat and Unified Search tool that connects to your company's docs, apps, and people. It provides a Chat interface and plugs into any LLM of your choice. Danswer can be deployed anywhere and for any scale - on a laptop, on-premise, or to cloud. Since you own the deployment, your user data and chats are fully in your own control. Danswer is MIT licensed and designed to be modular and easily extensible. The system also comes fully ready for production usage with user authentication, role management (admin/basic users), chat persistence, and a UI for configuring Personas (AI Assistants) and their Prompts. Danswer also serves as a Unified Search across all common workplace tools such as Slack, Google Drive, Confluence, etc. By combining LLMs and team specific knowledge, Danswer becomes a subject matter expert for the team. Imagine ChatGPT if it had access to your team's unique knowledge! It enables questions such as "A customer wants feature X, is this already supported?" or "Where's the pull request for feature Y?"
semantic-kernel
Semantic Kernel is an SDK that integrates Large Language Models (LLMs) like OpenAI, Azure OpenAI, and Hugging Face with conventional programming languages like C#, Python, and Java. Semantic Kernel achieves this by allowing you to define plugins that can be chained together in just a few lines of code. What makes Semantic Kernel _special_ , however, is its ability to _automatically_ orchestrate plugins with AI. With Semantic Kernel planners, you can ask an LLM to generate a plan that achieves a user's unique goal. Afterwards, Semantic Kernel will execute the plan for the user.
floneum
Floneum is a graph editor that makes it easy to develop your own AI workflows. It uses large language models (LLMs) to run AI models locally, without any external dependencies or even a GPU. This makes it easy to use LLMs with your own data, without worrying about privacy. Floneum also has a plugin system that allows you to improve the performance of LLMs and make them work better for your specific use case. Plugins can be used in any language that supports web assembly, and they can control the output of LLMs with a process similar to JSONformer or guidance.
mindsdb
MindsDB is a platform for customizing AI from enterprise data. You can create, serve, and fine-tune models in real-time from your database, vector store, and application data. MindsDB "enhances" SQL syntax with AI capabilities to make it accessible for developers worldwide. With MindsDB’s nearly 200 integrations, any developer can create AI customized for their purpose, faster and more securely. Their AI systems will constantly improve themselves — using companies’ own data, in real-time.
aiscript
AiScript is a lightweight scripting language that runs on JavaScript. It supports arrays, objects, and functions as first-class citizens, and is easy to write without the need for semicolons or commas. AiScript runs in a secure sandbox environment, preventing infinite loops from freezing the host. It also allows for easy provision of variables and functions from the host.
activepieces
Activepieces is an open source replacement for Zapier, designed to be extensible through a type-safe pieces framework written in Typescript. It features a user-friendly Workflow Builder with support for Branches, Loops, and Drag and Drop. Activepieces integrates with Google Sheets, OpenAI, Discord, and RSS, along with 80+ other integrations. The list of supported integrations continues to grow rapidly, thanks to valuable contributions from the community. Activepieces is an open ecosystem; all piece source code is available in the repository, and they are versioned and published directly to npmjs.com upon contributions. If you cannot find a specific piece on the pieces roadmap, please submit a request by visiting the following link: Request Piece Alternatively, if you are a developer, you can quickly build your own piece using our TypeScript framework. For guidance, please refer to the following guide: Contributor's Guide
For similar jobs
sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.
teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.
ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.
classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.
chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.
BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students
uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.
griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.