
ai-cli-lib
Add AI capabilities to any readline-enabled command-line program
Stars: 109

The ai-cli-lib is a library designed to enhance interactive command-line editing programs by integrating with GPT large language model servers. It allows users to obtain AI help from servers like Anthropic's or OpenAI's, or a llama.cpp server. The library acts as a command line copilot, providing natural language prompts and responses to enhance user experience and productivity. It supports various platforms such as Debian GNU/Linux, macOS, and Cygwin, and requires specific packages for installation and operation. Users can configure the library to activate during shell startup and interact with command-line programs like bash, mysql, psql, gdb, sqlite3, and bc. Additionally, the library provides options for configuring API keys, setting up llama.cpp servers, and ensuring data privacy by managing context settings.
README:
The ai-cli library detects programs that offer interactive command-line editing through the readline library, and modifies their interface to allow obtaining help from a GPT large language model server, such as Anthropic's or OpenAI's, or one provided through a llama.cpp server. Think of it as a command line copilot.
The ai-cli library has been built and tested under the Debian GNU/Linux (bullseye) distribution (natively under version 11 on the x86_64 and armv7l architectures, and under Windows Subsystem for Linux version 2), under macOS (Ventura 13.4) on the arm64 architecture using Homebrew packages and executables linked against GNU Readline (not the macOS-supplied editline compatibility layer), and under Cygwin (3.4.7).
On Linux, in addition to make, a C compiler, and the GNU C library, the following packages are required: libcurl4-openssl-dev, libjansson-dev, and libreadline-dev.
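On Debian and its derivatives these can typically be installed with apt, for example (the exact invocation may vary with your distribution and privileges):

sudo apt install libcurl4-openssl-dev libjansson-dev libreadline-dev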
On macOS, in addition to an Xcode installation, the following Homebrew packages are required: jansson and readline.
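With Homebrew installed, these can be obtained with:

brew install jansson readline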
On Cygwin, in addition to make, a C compiler, and the GNU C library, the following packages are required: libcurl-devel, libjansson-devel, and libreadline-devel.
Package names may be different on other systems.
To build:
cd src
make

To run the unit tests:
cd src
make unit-test

To run the end-to-end test:
cd src
make e2e-test
This will provide you with a simple read-print loop where you can test the ai-cli library's ability to link with the Readline API of third-party programs.
To install:
cd src
# Global installation for all users
sudo make install
# Local installation for the user executing the command
make install PREFIX=~
- Configure the ai-cli library to be activated when your bash shell starts up by adding the following lines to your .bashrc file (ideally near its beginning for performance reasons). Adjust the provided path to match the ai-cli library installation path; it is currently set for a local installation in your home directory.

# Initialize the ai-cli library
source $HOME/share/ai-cli/ai-cli-activate-bash.sh
- Alternatively, implement one of the following system-specific configurations.
- Under Linux and Cygwin set the LD_PRELOAD environment variable to load the library using its full path. For example, under bash run
export LD_PRELOAD=/usr/local/lib/ai_cli.so
(global installation) or
export LD_PRELOAD=/home/myname/lib/ai_cli.so
(local installation).
- Under macOS set the DYLD_INSERT_LIBRARIES environment variable to load the library using its full path. For example, under bash run
export DYLD_INSERT_LIBRARIES=/Users/myname/lib/ai_cli.dylib
Also set the DYLD_LIBRARY_PATH environment variable to include the Homebrew library directory, e.g.
export DYLD_LIBRARY_PATH=/opt/homebrew/lib:$DYLD_LIBRARY_PATH
- Perform one of the following.
- Obtain your Anthropic API key or OpenAI API key and configure it in the .aicliconfig file in your home directory. This is done with a key={key} entry in the file's [anthropic] or [openai] section. In addition, add api=anthropic or api=openai in the file's [general] section. See the file ai-cli-config to understand how configuration files are structured, and the sketch after this list for a minimal example. Anthropic currently provides free trial credits to new users. Note that OpenAI API access requires a different (usage-based) subscription from the ChatGPT one.
- Configure a llama.cpp server and list its endpoint (e.g. endpoint=http://localhost:8080/completion) in the configuration file's [llamacpp] section. In addition, add api=llamacpp in the file's [general] section. In brief, running a llama.cpp server involves:
  - compiling llama.cpp (ideally with GPU support),
  - downloading, converting, and quantizing suitable model files (use files with more than 7 billion parameters only on GPUs with sufficient memory to hold them),
  - running the server with a command such as:
    server -m models/llama-2-13b-chat/ggml-model-q4_0.gguf -c 2048 --n-gpu-layers 100
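For orientation, a minimal ~/.aicliconfig that selects the Anthropic API could look like the sketch below. It uses only the entries named above; the authoritative set of sections and options is documented in the ai-cli-config file, so treat this as an illustration rather than a definitive reference. Using a llama.cpp server instead would set api=llamacpp in the [general] section and the endpoint entry in a [llamacpp] section.

[general]
api=anthropic

[anthropic]
key={key}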
- Run the interactive command-line programs, such as bash, mysql, psql, gdb, sqlite3, bc, as you normally would.
- If the program you want to prompt in natural language isn't linked with the GNU Readline library, you can still make it work with Readline by invoking it through rlwrap. This however loses the program-specific context provision, because the program's name appears to the ai-cli library as rlwrap.
- To obtain AI help, enter a natural language prompt and press ^X-a (Ctrl-X followed by a) in the (default) Emacs key binding mode, or V if you have configured vi key bindings.
- Keep in mind that by default ai-cli-lib sends previously entered commands as context to the model engine you are using. This may leak secrets that you enter, for example by setting an environment variable to contain a key or by configuring a database password. To avoid this problem, configure the context setting to zero (see the sketch after this list), or use the command-line program's offered method to avoid storing an entered line; for instance, in bash you can do this by starting the line with a space character.
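As an illustration, and assuming the context option is accepted in the configuration file's [general] section (check the ai-cli-config file for its actual location), disabling context sharing would look like this:

[general]
context=0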
Note that macOS ships with the editline line-editing library,
which is currently not compatible with the ai-cli library
(it has been designed to tap onto GNU Readline).
However, Homebrew tools link with GNU Readline, so they can be used
with the ai-cli library.
To find out whether a tool you're using links with GNU Readline (libreadline) or with editline (libedit), use the which command to determine the command's full path, and then the otool command to see the libraries it is linked with.
In the example below, /usr/bin/sqlite3 isn't linked with GNU Readline but with editline, whereas /opt/homebrew/opt/sqlite/bin/sqlite3 is linked with GNU Readline.
$ which sqlite3
/usr/bin/sqlite3
$ otool -L /usr/bin/sqlite3
/usr/bin/sqlite3:
/usr/lib/libncurses.5.4.dylib (compatibility version 5.4.0, current version 5.4.0)
/usr/lib/libedit.3.dylib (compatibility version 2.0.0, current version 3.0.0)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1319.100.3)
$ otool -L /opt/homebrew/opt/sqlite/bin/sqlite3
/opt/homebrew/opt/sqlite/bin/sqlite3:
/opt/homebrew/opt/readline/lib/libreadline.8.dylib (compatibility version 8.2.0, current version 8.2.0)
/usr/lib/libncurses.5.4.dylib (compatibility version 5.4.0, current version 5.4.0)
/usr/lib/libz.1.dylib (compatibility version 1.0.0, current version 1.2.11)
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1319.100.3)
Consequently, if you want to use the capabilities of the ai-cli library, configure your system to use the Homebrew commands in preference to the ones supplied with macOS.
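One way to do this, using the Homebrew sqlite3 path from the example above, is to place the Homebrew tool's directory ahead of the system directories in your PATH, for instance in your .bashrc:

# Prefer the Homebrew (GNU Readline) sqlite3 over the macOS-supplied one
export PATH="/opt/homebrew/opt/sqlite/bin:$PATH"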
The ai-cli reference documentation is provided as Unix manual pages.
Contributions are welcomed through GitHub pull requests. Before working on something substantial, open an issue to signify your interest and coordinate with others. Particularly useful are:
- multi-shot prompts for systems not yet supported (see the ai-cli-config file),
- support for other large language models (start from the openai_fetch.c file),
- support for other libraries (mainly editline),
- ports to other platforms and distributions.
- Diomidis Spinellis. Commands as AI Conversations. IEEE Software 40(6), November/December 2023. doi: 10.1109/MS.2023.3307170
- edX open online course on Unix tools for data, software, and production engineering
- Agarwal, Mayank et al. NeurIPS 2020 NLC2CMD Competition: Translating Natural Language to Bash Commands. arXiv:2103.02523 (2021).
- celery-ai: similar idea, but without program-specific prompts; works by monitoring keyboard events through pynput.
- ChatGDB: a gdb-specific front-end.
- ai-cli: a (similarly named) command line interface to AI models.
- Warp AI: A terminal with integrated AI capabilities.
Alternative AI tools for ai-cli-lib
Similar Open Source Tools

lhotse
Lhotse is a Python library designed to make speech and audio data preparation flexible and accessible. It aims to attract a wider community to speech processing tasks by providing a Python-centric design and an expressive command-line interface. Lhotse offers standard data preparation recipes, PyTorch Dataset classes for speech tasks, and efficient data preparation for model training with audio cuts. It supports data augmentation, feature extraction, and feature-space cut mixing. The tool extends Kaldi's data preparation recipes with seamless PyTorch integration, human-readable text manifests, and convenient Python classes.

fal-js
The fal.ai JS client is a robust and user-friendly library for seamless integration of fal serverless functions in Web, Node.js, and React Native applications. Developed in TypeScript, it provides developers with type safety right from the start. The client library is crafted as a lightweight layer atop platform standards like `fetch`, ensuring hassle-free integration into existing codebases and flawless operation across various JavaScript runtimes. The client proxy feature allows secure handling of credentials by using a server proxy for serverless APIs. The repository also includes example Next.js applications for demonstration and integration.

obs-cleanstream
CleanStream is an OBS plugin that utilizes real-time local AI to clean live audio streams by removing unwanted words and utterances, such as 'uh' and 'um', and configurable words like profanity. It employs a neural network (OpenAI Whisper) to predict speech in real-time and eliminate undesired words. The plugin runs efficiently using the Whisper.cpp project from ggerganov. CleanStream offers users the ability to adjust settings and add the plugin to any audio-generating source in OBS, providing a seamless experience for content creators looking to enhance the quality of their live audio streams.

stable-diffusion-discord-bot
A discord bot built to interface with the InvokeAI fork of stable-diffusion. It is a work in progress for a major rewrite of the arty project, compatible with `invokeai 5.1.1`. The bot supports various functionalities like building node graphs from job requests, refreshing renders using png metadata, removing backgrounds, job progress tracking, and LLM integration. Users can install custom invokeai nodes for advanced functionality and launch the bot natively or with docker. Patches and pull requests are welcomed.

autoarena
AutoArena is a tool designed to create leaderboards ranking Language Model outputs against one another using automated judge evaluation. It allows users to rank outputs from different LLMs, RAG setups, and prompts to find the best configuration of their system. Users can perform automated head-to-head evaluation using judges from various platforms like OpenAI, Anthropic, and Cohere. Additionally, users can define and run custom judges, connect to internal services, or implement bespoke logic. AutoArena enables users to run the application locally, providing full control over their environment and data.

agentok
Agentok Studio is a tool built upon AG2, a powerful agent framework from Microsoft, offering intuitive visual tools to streamline the creation and management of complex agent-based workflows. It simplifies the process for creators and developers by generating native Python code with minimal dependencies, enabling users to create self-contained code that can be executed anywhere. The tool is currently under development and not recommended for production use, but contributions are welcome from the community to enhance its capabilities and functionalities.

flake
Nixified.ai aims to simplify and provide access to a vast repository of AI executable code that would otherwise be challenging to run independently due to package management and complexity issues. The tool primarily runs on NixOS and Linux, with compatibility on Windows through NixOS-WSL. It can automatically utilize the GPU of the Windows host by setting LD_LIBRARY_PATH in the wrapper script. Users can explore the tool's offerings through the nix repl, with the main outputs including ComfyUI, a modular node-based Stable Diffusion WebUI, and deprecated packages like InvokeAI and textgen. To enable binary cache and save time building packages, users need to trust nixified-ai's binary cache by adding specific lines to their system configuration files.

tracecat
Tracecat is an open-source automation platform for security teams. It's designed to be simple but powerful, with a focus on AI features and a practitioner-obsessed UI/UX. Tracecat can be used to automate a variety of tasks, including phishing email investigation, evidence collection, and remediation plan generation.

web-llm
WebLLM is a modular and customizable javascript package that brings language model chats directly onto web browsers with hardware acceleration. Everything runs inside the browser with no server support and is accelerated with WebGPU. WebLLM is fully compatible with the OpenAI API. That is, you can use the same OpenAI API on any open source models locally, with functionalities including json-mode, function-calling, streaming, etc. It opens up many opportunities to build AI assistants for everyone while preserving privacy and enjoying GPU acceleration.

RAVE
RAVE is a variational autoencoder for fast and high-quality neural audio synthesis. It can be used to generate new audio samples from a given dataset, or to modify the style of existing audio samples. RAVE is easy to use and can be trained on a variety of audio datasets. It is also computationally efficient, making it suitable for real-time applications.

generative-models
Generative Models by Stability AI is a repository that provides various generative models for research purposes. It includes models like Stable Video 4D (SV4D) for video synthesis, Stable Video 3D (SV3D) for multi-view synthesis, SDXL-Turbo for text-to-image generation, and more. The repository focuses on modularity and implements a config-driven approach for building and combining submodules. It supports training with PyTorch Lightning and offers inference demos for different models. Users can access pre-trained models like SDXL-base-1.0 and SDXL-refiner-1.0 under a CreativeML Open RAIL++-M license. The codebase also includes tools for invisible watermark detection in generated images.

obs-urlsource
The URL/API Source is a plugin for OBS Studio that allows users to add a media source fetching data from a URL or API endpoint and displaying it as text. It supports input and output templating, various request types, output parsing (JSON, XML/HTML, Regex, CSS selectors), live data updating, output styling, and formatting. Future features include authentication, websocket support, more parsing options, request types, and output formats. The plugin is cross-platform compatible and actively maintained by the developer. Users can support the project on GitHub.

lerobot
LeRobot is a state-of-the-art AI library for real-world robotics in PyTorch. It aims to provide models, datasets, and tools to lower the barrier to entry to robotics, focusing on imitation learning and reinforcement learning. LeRobot offers pretrained models, datasets with human-collected demonstrations, and simulation environments. It plans to support real-world robotics on affordable and capable robots. The library hosts pretrained models and datasets on the Hugging Face community page.

RepoAgent
RepoAgent is an LLM-powered framework designed for repository-level code documentation generation. It automates the process of detecting changes in Git repositories, analyzing code structure through AST, identifying inter-object relationships, replacing Markdown content, and executing multi-threaded operations. The tool aims to assist developers in understanding and maintaining codebases by providing comprehensive documentation, ultimately improving efficiency and saving time.
For similar tasks

aicommit2
AICommit2 is a Reactive CLI tool that streamlines interactions with various AI providers such as OpenAI, Anthropic Claude, Gemini, Mistral AI, Cohere, and unofficial providers like Huggingface and Clova X. Users can request multiple AI simultaneously to generate git commit messages without waiting for all AI responses. The tool runs 'git diff' to grab code changes, sends them to configured AI, and returns the AI-generated commit message. Users can set API keys or Cookies for different providers and configure options like locale, generate number of messages, commit type, proxy, timeout, max-length, and more. AICommit2 can be used both locally with Ollama and remotely with supported providers, offering flexibility and efficiency in generating commit messages.

TrustEval-toolkit
TrustEval-toolkit is a dynamic and comprehensive framework for evaluating the trustworthiness of Generative Foundation Models (GenFMs) across dimensions such as safety, fairness, robustness, privacy, and more. It offers features like dynamic dataset generation, multi-model compatibility, customizable metrics, metadata-driven pipelines, comprehensive evaluation dimensions, optimized inference, and detailed reports.

prompt-optimizer
Prompt Optimizer is a powerful AI prompt optimization tool that helps you write better AI prompts, improving AI output quality. It supports both web application and Chrome extension usage. The tool features intelligent optimization for prompt words, real-time testing to compare before and after optimization, integration with multiple mainstream AI models, client-side processing for security, encrypted local storage for data privacy, responsive design for user experience, and more.

blind_chat
BlindChat is a confidential and verifiable Conversational AI tool that ensures user prompts remain private from the AI provider. It leverages privacy-enhancing technology called enclaves with the core solution, BlindLlama. BlindChat Local variant operates entirely in the user's browser, ensuring data never leaves the device. The tool provides cryptographic guarantees that user data is protected and not accessible to AI providers.

AskDB
AskDB is a revolutionary application that simplifies the way users interact with SQL databases. It allows users to query databases in plain English, provides instant answers, and offers AI-assisted query writing and database exploration. AskDB benefits business analysts, data scientists, managers, developers, and database administrators by making querying databases intuitive, effortless, and safe. It offers features like natural language querying, instant insight from data, multi-database connectivity, intelligent query suggestions, data privacy, and easy data export.

local-chat
LocalChat is a simple, easy-to-set-up, and open-source local AI chat tool that allows users to interact with generative language models on their own computers without transmitting data to a cloud server. It provides a chat-like interface for users to experience ChatGPT-like behavior locally, ensuring GDPR compliance and data privacy. Users can download LocalChat for macOS, Windows, or Linux to chat with open-weight generative language models.

LLMOCR
LLMOCR is a tool that utilizes a local Large Language Model (LLM) to extract text from images. It offers a user-friendly GUI and supports GPU acceleration for faster inference. The tool is cross-platform, compatible with Windows, macOS ARM, and Linux. Users can prompt the LLM to process images in a customized way. The processing is done locally on the user's machine, ensuring data privacy and security. LLMOCR requires Python 3.8 or higher and KoboldCPP for installation and operation.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM:
- Set LLM usage limits for users on different pricing tiers
- Track LLM usage on a per user and per organization basis
- Block or redact requests containing PIIs
- Improve LLM reliability with failovers, retries and caching
- Distribute API keys with rate limits and cost limits for internal development/production use cases
- Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.