# fish-ai

Supercharge your command line with LLMs and get shell scripting assistance in Fish.
fish-ai adds AI functionality to Fish. It's awesome! I built it to make my life easier, and I hope it will make yours easier too. Here is the complete sales pitch:
- It can turn a comment into a shell command and vice versa, which means less time spent reading manpages, googling and copy-pasting from Stack Overflow. Great when working with `git`, `kubectl`, `curl` and other tools with loads of parameters and switches.
- Did you make a typo? It can also fix a broken command (similarly to thefuck).
- Not sure what to type next or just lazy? Let the LLM autocomplete your commands with a built-in fuzzy finder.
- Everything is done using two keyboard shortcuts, no mouse needed!
- It can be hooked up to the LLM of your choice (even a self-hosted one!).
- Everything is open source, hopefully somewhat easy to read and around 3000 lines of code, which means that you can audit the code yourself in an afternoon.
- Install and update with ease using fisher.
- Tested on both macOS and Linux, but should run on any system where a supported version of Python and git is installed.
- Does not interfere with `fzf.fish`, `tide` or any of the other plugins you're already using!
- Does not wrap your shell, install telemetry or force you to switch to a proprietary terminal emulator.
This plugin was originally based on Tom Dörr's fish.codex repository. Without Tom, this repository would not exist!

If you like it, please add a ⭐. If you don't like it, create a PR. 🙂
## Installation

Install the plugin. You can install it using fisher:

```fish
fisher install realiserad/fish-ai
```

Create a configuration file called `~/.config/fish-ai.ini`.
If you use a self-hosted LLM (behind an OpenAI-compatible API):

```ini
[fish-ai]
configuration = self-hosted

[self-hosted]
provider = self-hosted
server = https://<your server>:<port>/v1
model = <your model>
api_key = <your API key>
```
If you are self-hosting, my recommendation is to use Ollama with Llama 3.3 70B. An out-of-the-box configuration running on localhost could then look something like this:

```ini
[fish-ai]
configuration = local-llama

[local-llama]
provider = self-hosted
model = llama3.3
server = http://localhost:11434/v1
```
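If you go down this route, the model referenced above needs to be available in your local Ollama instance first. This is plain Ollama usage rather than anything fish-ai-specific, and the exact model tag is an assumption based on the configuration above:

```fish
# Fetch the Llama 3.3 model so Ollama can serve it on localhost:11434.
ollama pull llama3.3
```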
If you use OpenAI:

```ini
[fish-ai]
configuration = openai

[openai]
provider = openai
model = gpt-4o
api_key = <your API key>
organization = <your organization>
```
If you use Azure OpenAI:

```ini
[fish-ai]
configuration = azure

[azure]
provider = azure
server = https://<your instance>.openai.azure.com
model = <your deployment name>
api_key = <your API key>
```
If you use Hugging Face:

```ini
[fish-ai]
configuration = huggingface

[huggingface]
provider = huggingface
email = <your email>
password = <your password>
model = meta-llama/Llama-3.3-70B-Instruct
```

Available models are listed here. Note that 2FA must be disabled on the account.
If you use Mistral:

```ini
[fish-ai]
configuration = mistral

[mistral]
provider = mistral
api_key = <your API key>
```
If you use GitHub Models:

```ini
[fish-ai]
configuration = github

[github]
provider = self-hosted
server = https://models.inference.ai.azure.com
api_key = <paste GitHub PAT here>
model = gpt-4o-mini
```

You can create a personal access token (PAT) here. The PAT does not require any permissions.
If you use Anthropic:

```ini
[anthropic]
provider = anthropic
api_key = <your API key>
```
If you use Cohere:

```ini
[cohere]
provider = cohere
api_key = <your API key>
```
## How to use

Type a comment (anything starting with `#`) and press Ctrl + P to turn it into a shell command!

You can also run it in reverse. Type a command and press Ctrl + P to turn it into a comment explaining what the command does.
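For example, a session might look like this (the generated command is illustrative; the actual output depends on the LLM you have configured):

```fish
# find all files larger than 100 MB below the current directory
# …press Ctrl + P, and the comment may be replaced with something like:
find . -type f -size +100M
```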
Begin typing your command and press Ctrl + Space to display a list of completions in fzf (it is bundled with the plugin, no need to install it separately). Completions load in the background and show up as they become available.
If a command fails, you can immediately press Ctrl + Space at the command prompt to let fish-ai suggest a fix!
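For instance, after a typo the interaction could look roughly like this (the suggested fix is illustrative, not guaranteed output):

```fish
> gti status
fish: Unknown command: gti
# press Ctrl + Space, and fish-ai may propose the corrected command:
git status
```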
## Additional options

You can tweak the behaviour of fish-ai by putting additional options in your `fish-ai.ini` configuration file.
To explain shell commands in a different language, set the language option to the name of the language. For example:

```ini
[fish-ai]
language = Swedish
```

This will only work well if the LLM you are using has been trained on a dataset with the chosen language.
Temperature is a decimal number between 0 and 1 controlling the randomness of the output. Higher values make the LLM more creative, but may impact accuracy. The default value is 0.2.

Here is an example of how to increase the temperature to 0.5:

```ini
[fish-ai]
temperature = 0.5
```

This option is not supported when using the huggingface provider.
To change the number of completions suggested by the LLM when pressing Ctrl + Space, set the completions option. The default value is 5.

Here is an example of how you can increase the number of completions to 10:

```ini
[fish-ai]
completions = 10
```
You can personalise completions suggested by the LLM by sending an excerpt of your commandline history.

To enable it, specify the maximum number of commands from the history to send to the LLM using the history_size option. The default value is 0 (do not send any commandline history).

```ini
[fish-ai]
history_size = 5
```

If you enable this option, consider the use of sponge to automatically remove broken commands from your commandline history.
To send the output of a pipe to the LLM when completing a command, use the preview_pipe option.

```ini
[fish-ai]
preview_pipe = True
```

This will send the output of the longest consecutive pipe after the last unterminated parenthesis before the cursor. For example, if you autocomplete `az vm list | jq`, the output from `az vm list` will be sent to the LLM.

This behaviour is disabled by default, as it may slow down the completion process and lead to commands being executed twice.
You can switch between different sections in the configuration using the fish_ai_switch_context command.
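For example, a configuration file defining two sections side by side might look like this (the section names below are illustrative):

```ini
# The active section is selected by the `configuration` key.
[fish-ai]
configuration = openai

[openai]
provider = openai
model = gpt-4o
api_key = <your API key>

[local-llama]
provider = self-hosted
model = llama3.3
server = http://localhost:11434/v1
```

Running fish_ai_switch_context then lets you move between openai and local-llama without editing the file by hand.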
## Data privacy

When using the plugin, fish-ai submits the name of your OS and the commandline buffer to the LLM.

When you codify or complete a command, it also sends the contents of any files you mention (as long as the file is readable), and when you explain or complete a command, the output from `<command> --help` is provided to the LLM for reference.

fish-ai can also send an excerpt of your commandline history when completing a command. This is disabled by default.

Finally, to fix the previous command, the previous commandline buffer, along with any terminal output and the corresponding exit code, is sent to the LLM.
If you are concerned with data privacy, you should use a self-hosted LLM. When hosted locally, no data ever leaves your machine.
The plugin attempts to redact sensitive information from the prompt before submitting it to the LLM. Sensitive information is replaced by the `<REDACTED>` placeholder.

The following information is redacted:

- Passwords and API keys supplied on the commandline.
- Base64-encoded data in single or double quotes.
- PEM-encoded private keys.
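As an illustration of what this means in practice (the exact redaction patterns are defined by the plugin, so this example is hypothetical), a buffer such as:

```fish
curl -H "Authorization: Bearer sk-abc123example" https://api.example.com/v1/data
```

might be submitted to the LLM as:

```fish
curl -H "Authorization: Bearer <REDACTED>" https://api.example.com/v1/data
```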
## Development

If you want to contribute, I recommend reading ARCHITECTURE.md first.

This repository ships with a devcontainer.json which can be used with GitHub Codespaces or Visual Studio Code with the Dev Containers extension.
To install fish-ai from a local copy, use fisher:

```fish
fisher install .
```
Enable debug logging by putting `debug = True` in your `fish-ai.ini`. Logging is done to syslog by default (if available). You can also enable logging to file using `log = <path to file>`, for example:

```ini
[fish-ai]
debug = True
log = ~/.fish-ai/log.txt
```
The installation tests are packaged into containers and can be executed locally with e.g. docker:

```fish
docker build -f tests/ubuntu/Dockerfile .
docker build -f tests/fedora/Dockerfile .
docker build -f tests/archlinux/Dockerfile .
```
The Python modules containing most of the business logic can be tested using pytest.
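Assuming the test modules follow pytest's default discovery conventions, a minimal invocation from the repository root could be:

```fish
# Install the test runner if needed, then run the whole suite.
pip install pytest
pytest
```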
A release is created by GitHub Actions when a new tag is pushed:

```fish
set tag (grep '^version =' pyproject.toml | \
    cut -d '=' -f2- | \
    string replace -ra '[ "]' '')
git tag -a "v$tag" -m "🚀 v$tag"
git push origin "v$tag"
```