
fish-ai
Supercharge your command line with LLMs and get shell scripting assistance in Fish.
Stars: 357

fish-ai is a tool that adds AI functionality to Fish shell. It can be integrated with various AI providers like OpenAI, Azure OpenAI, Google, Mistral, Anthropic, Cohere, DeepSeek, Groq, or a self-hosted LLM. Users can transform comments into commands, autocomplete commands, and get fixes suggested for broken commands. The tool allows customization through configuration files and supports switching between contexts. Data privacy is maintained by redacting sensitive information before submission to the AI models. Development features include debug logging, testing, and creating releases.
README:
fish-ai
adds AI functionality to Fish. It's awesome! I built it to make my life easier, and I hope it will make yours easier too. Here is the complete sales pitch:
- It can turn a comment into a shell command and vice versa, which means less time spent reading manpages, googling and copy-pasting from Stack Overflow. Great when working with git, kubectl, curl and other tools with loads of parameters and switches.
- Did you make a typo? It can also fix a broken command (similarly to thefuck).
- Not sure what to type next or just lazy? Let the LLM autocomplete your commands with a built-in fuzzy finder.
- Everything is done using two (configurable) keyboard shortcuts, no mouse needed!
- It can be hooked up to the LLM of your choice (even a self-hosted one!).
- The whole thing is open source, hopefully somewhat easy to read and around 2000 lines of code, which means that you can audit the code yourself in an afternoon.
- Install and update with ease using fisher.
- Tested on both macOS and the most common Linux distributions.
- Does not interfere with fzf.fish, tide or any of the other plugins you're already using!
- Does not wrap your shell, install telemetry or force you to switch to a proprietary terminal emulator.
This plugin was originally based on Tom Dörr's fish.codex repository. Without Tom, this repository would not exist!
If you like it, please add a ⭐.
Bug fixes are welcome! I consider this project largely feature complete. Before opening a PR for a feature request, consider opening an issue where you explain what you want to add and why, and we can talk about it first.
Make sure git and either uv, or a supported version of Python along with pip and venv, are installed. Then grab the plugin using fisher:
fisher install realiserad/fish-ai
Create a configuration file $XDG_CONFIG_HOME/fish-ai.ini (use ~/.config/fish-ai.ini if $XDG_CONFIG_HOME is not set) where you specify which LLM fish-ai should talk to. If you're not sure, use GitHub Models.
To use GitHub Models:
[fish-ai]
configuration = github
[github]
provider = self-hosted
server = https://models.inference.ai.azure.com
api_key = <paste GitHub PAT here>
model = gpt-4o-mini
You can create a personal access token (PAT) here. The PAT does not require any permissions.
To use a self-hosted LLM (behind an OpenAI-compatible API):
[fish-ai]
configuration = self-hosted
[self-hosted]
provider = self-hosted
server = https://<your server>:<port>/v1
model = <your model>
api_key = <your API key>
If you are self-hosting, my recommendation is to use Ollama with Llama 3.3 70B. An out-of-the-box configuration running on localhost could then look something like this:
[fish-ai]
configuration = local-llama
[local-llama]
provider = self-hosted
model = llama3.3
server = http://localhost:11434/v1
To use OpenRouter:
[fish-ai]
configuration = openrouter
[openrouter]
provider = self-hosted
server = https://openrouter.ai/api/v1
model = google/gemini-2.0-flash-lite-001
api_key = <your API key>
Available models are listed here.
To use OpenAI:
[fish-ai]
configuration = openai
[openai]
provider = openai
model = gpt-4o
api_key = <your API key>
organization = <your organization>
To use Azure OpenAI:
[fish-ai]
configuration = azure
[azure]
provider = azure
server = https://<your instance>.openai.azure.com
model = <your deployment name>
api_key = <your API key>
To use Mistral:
[fish-ai]
configuration = mistral
[mistral]
provider = mistral
api_key = <your API key>
To use Anthropic:
[anthropic]
provider = anthropic
api_key = <your API key>
To use Cohere:
[cohere]
provider = cohere
api_key = <your API key>
To use DeepSeek:
[deepseek]
provider = deepseek
api_key = <your API key>
model = deepseek-chat
To use Groq:
[groq]
provider = groq
api_key = <your API key>
To use Gemini from Google:
[google]
provider = google
api_key = <your API key>
Instead of putting the API key in the configuration file, you can let fish-ai load it from your keyring. To save a new API key or transfer an existing API key to your keyring, run fish_ai_put_api_key.
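A minimal sketch of the workflow (the exact prompts or arguments are whatever your installed version expects):
$ fish_ai_put_api_key
Once the key is stored in your keyring, the api_key line can be omitted from the corresponding section of fish-ai.ini.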
Type a comment (anything starting with #), and press Ctrl + P to turn it into a shell command! Note that if your comment is very brief or vague, the LLM may decide to improve the comment instead of providing a shell command. You then need to press Ctrl + P again.
You can also run it in reverse. Type a command and press Ctrl + P to turn it into a comment explaining what the command does.
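For example, a hypothetical session (the exact command the LLM produces will vary by model):
$ # list the five largest files in the current directory
Pressing Ctrl + P might replace the comment with something like:
$ du -ah . | sort -rh | head -n 5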
Begin typing your command or comment and press Ctrl + Space to display a list of completions in fzf (it is bundled with the plugin, no need to install it separately). To refine the results, type some instructions and press Ctrl + P inside fzf.
If a command fails, you can immediately press Ctrl + Space at the command prompt to let fish-ai suggest a fix!
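For example, a hypothetical session (the suggested fix depends on the LLM):
$ git comit -m "Initial commit"
git: 'comit' is not a git command. See 'git --help'.
Pressing Ctrl + Space could then suggest the corrected command:
git commit -m "Initial commit"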
You can tweak the behaviour of fish-ai by putting additional options in your fish-ai.ini configuration file.
By default, fish-ai binds to Ctrl + P and Ctrl + Space. You may want to change this if there is interference with any existing key bindings on your system.
To change the key bindings, set keymap_1 (defaults to Ctrl + P) and keymap_2 (defaults to Ctrl + Space) to the escape sequence of the key binding you want to use. To get the correct escape sequence, use fish_key_reader.
For example, if you have the following output from fish_key_reader:
$ fish_key_reader
Press a key:
bind \cP 'do something'
$ fish_key_reader
Press a key:
bind -k nul 'do something'
Then put the following in your configuration file:
[fish-ai]
keymap_1 = \cP
keymap_2 = '-k nul'
Restart the shell for the changes to take effect.
To explain shell commands in a different language, set the language option to the name of the language. For example:
[fish-ai]
language = Swedish
This will only work well if the LLM you are using has been trained on a dataset with the chosen language.
Temperature is a decimal number between 0 and 1 controlling the randomness of the output. Higher values make the LLM more creative, but may impact accuracy. The default value is 0.2.
Here is an example of how to increase the temperature to 0.5:
[fish-ai]
temperature = 0.5
Some reasoning models, such as OpenAI's o3, do not support the temperature parameter, and you need to explicitly disable it by setting temperature = None.
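For example, a minimal section disabling the parameter:
[fish-ai]
temperature = None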
To change the number of completions suggested by the LLM when pressing Ctrl + Space, set the completions option. The default value is 5.
Here is an example of how you can increase the number of completions to 10:
[fish-ai]
completions = 10
To change the number of refined completions suggested by the LLM when pressing Ctrl + P in fzf, set the refined_completions option. The default value is 3. Here is an example of how you can increase it to 5:
[fish-ai]
refined_completions = 5
You can personalise completions suggested by the LLM by sending an excerpt of your commandline history.
To enable it, specify the maximum number of commands from the history to send to the LLM using the history_size option. The default value is 0 (do not send any commandline history).
[fish-ai]
history_size = 5
If you enable this option, consider the use of sponge to automatically remove broken commands from your commandline history.
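For example, sponge can be installed with fisher (assuming the commonly used package name on GitHub):
fisher install meaningful-ooo/sponge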
To send the output of a pipe to the LLM when completing a command, use the preview_pipe option.
[fish-ai]
preview_pipe = True
This will send the output of the longest consecutive pipe after the last unterminated parenthesis before the cursor. For example, if you autocomplete az vm list | jq, the output from az vm list will be sent to the LLM.
This behaviour is disabled by default, as it may slow down the completion process and lead to commands being executed twice.
You can change the progress indicator (the default is ⏳) shown when the plugin is waiting for a response from the LLM. To change the default, set the progress_indicator option to zero or more characters.
[fish-ai]
progress_indicator = wait...
You can switch between different sections in the configuration using the fish_ai_switch_context command.
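For example (the command is interactive; the exact interface depends on the installed version):
$ fish_ai_switch_context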
When using the plugin, fish-ai submits the name of your OS and the commandline buffer to the LLM.
When you codify or complete a command, it also sends the contents of any files you mention (as long as the file is readable), and when you explain or complete a command, the output from <command> --help is provided to the LLM for reference.
fish-ai can also send an excerpt of your commandline history when completing a command. This is disabled by default.
Finally, to fix the previous command, the previous commandline buffer, along with any terminal output and the corresponding exit code, is sent to the LLM.
If you are concerned with data privacy, you should use a self-hosted LLM. When hosted locally, no data ever leaves your machine.
The plugin attempts to redact sensitive information from the prompt before submitting it to the LLM. Sensitive information is replaced by the <REDACTED> placeholder.
The following information is redacted:
- Passwords and API keys supplied as commandline arguments
- PEM-encoded private keys stored in files
- Bearer tokens, provided to e.g. cURL
If you trust the LLM provider (e.g. because you are hosting locally), you can disable redaction using the redact = False option.
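For example:
[fish-ai]
redact = False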
If you want to contribute, I recommend reading ARCHITECTURE.md first.
This repository ships with a devcontainer.json which can be used with GitHub Codespaces or Visual Studio Code with the Dev Containers extension.
To install fish-ai from a local copy, use fisher:
fisher install .
Enable debug logging by putting debug = True in your fish-ai.ini. Logging is done to syslog by default (if available). You can also enable logging to a file using log = <path to file>, for example:
[fish-ai]
debug = True
log = /tmp/fish-ai.log
The installation tests (currently running on macOS, Fedora, Ubuntu and Arch Linux) are executed by the GitHub runner when you push to the repository. Pull requests are blocked until all installation tests pass.
The Python modules containing most of the business logic can be tested using pytest.
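A typical invocation from the repository root (assuming pytest is installed in your environment):
pytest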
A release is created by GitHub Actions when a new tag is pushed.
set tag (grep '^version =' pyproject.toml | \
cut -d '=' -f2- | \
string replace -ra '[ "]' '')
git tag -a "v$tag" -m "🚀 v$tag"
git push origin "v$tag"
Alternative AI tools for fish-ai
Similar Open Source Tools


fabric
Fabric is an open-source framework for augmenting humans using AI. It provides a structured approach to breaking down problems into individual components and applying AI to them one at a time. Fabric includes a collection of pre-defined Patterns (prompts) that can be used for a variety of tasks, such as extracting the most interesting parts of YouTube videos and podcasts, writing essays, summarizing academic papers, creating AI art prompts, and more. Users can also create their own custom Patterns. Fabric is designed to be easy to use, with a command-line interface and a variety of helper apps. It is also extensible, allowing users to integrate it with their own AI applications and infrastructure.

openai_trtllm
OpenAI-compatible API for TensorRT-LLM and NVIDIA Triton Inference Server, which allows you to integrate with langchain

termax
Termax is an LLM agent in your terminal that converts natural language to commands. It is featured by: - Personalized Experience: Optimize the command generation with RAG. - Various LLMs Support: OpenAI GPT, Anthropic Claude, Google Gemini, Mistral AI, and more. - Shell Extensions: Plugin with popular shells like `zsh`, `bash` and `fish`. - Cross Platform: Able to run on Windows, macOS, and Linux.

abliteration
Abliteration is a tool that allows users to create abliterated models using transformers quickly and easily. It is not a tool for uncensorship, but rather for making models that will not explicitly refuse users. Users can clone the repository, install dependencies, and make abliterations using the provided commands. The tool supports adjusting parameters for stubborn models and offers various options for customization. Abliteration can be used for creating modified models for specific tasks or topics.

gpt-cli
gpt-cli is a command-line interface tool for interacting with various chat language models like ChatGPT, Claude, and others. It supports model customization, usage tracking, keyboard shortcuts, multi-line input, markdown support, predefined messages, and multiple assistants. Users can easily switch between different assistants, define custom assistants, and configure model parameters and API keys in a YAML file for easy customization and management.

kwaak
Kwaak is a tool that allows users to run a team of autonomous AI agents locally from their own machine. It enables users to write code, improve test coverage, update documentation, and enhance code quality while focusing on building innovative projects. Kwaak is designed to run multiple agents in parallel, interact with codebases, answer questions about code, find examples, write and execute code, create pull requests, and more. It is free and open-source, allowing users to bring their own API keys or models via Ollama. Kwaak is part of the bosun.ai project, aiming to be a platform for autonomous code improvement.

garak
Garak is a free tool that checks if a Large Language Model (LLM) can be made to fail in a way that is undesirable. It probes for hallucination, data leakage, prompt injection, misinformation, toxicity generation, jailbreaks, and many other weaknesses.

garak
Garak is a vulnerability scanner designed for LLMs (Large Language Models) that checks for various weaknesses such as hallucination, data leakage, prompt injection, misinformation, toxicity generation, and jailbreaks. It combines static, dynamic, and adaptive probes to explore vulnerabilities in LLMs. Garak is a free tool developed for red-teaming and assessment purposes, focusing on making LLMs or dialog systems fail. It supports various LLM models and can be used to assess their security and robustness.

sage
Sage is a tool that allows users to chat with any codebase, providing a chat interface for code understanding and integration. It simplifies the process of learning how a codebase works by offering heavily documented answers sourced directly from the code. Users can set up Sage locally or on the cloud with minimal effort. The tool is designed to be easily customizable, allowing users to swap components of the pipeline and improve the algorithms powering code understanding and generation.

comfy-cli
Comfy-cli is a command line tool designed to facilitate the installation and management of ComfyUI, an open-source machine learning framework. Users can easily set up ComfyUI, install packages, and manage custom nodes directly from the terminal. The tool offers features such as easy installation, seamless package management, custom node management, checkpoint downloads, cross-platform compatibility, and comprehensive documentation. Comfy-cli simplifies the process of working with ComfyUI, making it convenient for users to handle various tasks related to the framework.

HuggingFaceGuidedTourForMac
HuggingFaceGuidedTourForMac is a guided tour on how to install optimized pytorch and optionally Apple's new MLX, JAX, and TensorFlow on Apple Silicon Macs. The repository provides steps to install homebrew, pytorch with MPS support, MLX, JAX, TensorFlow, and Jupyter lab. It also includes instructions on running large language models using HuggingFace transformers. The repository aims to help users set up their Macs for deep learning experiments with optimized performance.

comfy-cli
comfy-cli is a command line tool designed to simplify the installation and management of ComfyUI, an open-source machine learning framework. It allows users to easily set up ComfyUI, install packages, manage custom nodes, download checkpoints, and ensure cross-platform compatibility. The tool provides comprehensive documentation and examples to aid users in utilizing ComfyUI efficiently.

usage_rules
UsageRules is a development tool for Elixir projects that helps gather and consolidate usage rules from dependencies to provide to LLM agents. It provides pre-built usage rules for Elixir and a powerful documentation search task for hexdocs. The tool scans project dependencies, looks for `usage-rules.md` files, consolidates rules into a target file, and maintains sections that can be updated independently. It is useful for projects using frameworks like Ash, Phoenix, or other packages that provide specific usage guidelines, coding patterns, or best practices.

basehub
JavaScript / TypeScript SDK for BaseHub, the first AI-native content hub. **Features:** * Infers types from your BaseHub repository... _meaning IDE autocompletion works great._ * No dependency on graphql... _meaning your bundle is more lightweight._ * Works everywhere `fetch` is supported... _meaning you can use it anywhere._
For similar tasks


director
Director is a context infrastructure tool for AI agents that simplifies managing MCP servers, prompts, and configurations by packaging them into portable workspaces accessible through a single endpoint. It allows users to define context workspaces once and share them across different AI clients, enabling seamless collaboration, instant context switching, and secure isolation of untrusted servers without cloud dependencies or API keys. Director offers features like workspaces, universal portability, local-first architecture, sandboxing, smart filtering, unified OAuth, observability, multiple interfaces, and compatibility with all MCP clients and servers.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.