Best AI tools for Managing Command History
20 - AI Tool Sites

Fluid
Fluid is a private AI assistant for Mac users with Apple Silicon and macOS 14 or later. It works offline and is powered by Meta's Llama 3 model. Fluid keeps all chats and data on the user's Mac, so sensitive information never needs to be sent to third parties. The application features voice control, one-click installation, easy access, security by design, auto-updates, a history mode, web search, context awareness, and memory storage. Users can interact with Fluid by typing or by voice, making it a versatile, user-friendly AI tool for a range of tasks.

Komandi
Komandi is an AI-powered CLI/terminal commands manager that lets users easily manage their CLI snippets. It can generate terminal commands from natural-language prompts using AI and helps users insert, favorite, copy, and execute commands, flagging potentially dangerous ones before they run. Available for macOS, Windows, and Linux, Komandi also supports executing commands in different environments, importing and exporting commands, and more.

Ultra AI
Ultra AI is an all-in-one AI command center for products, offering a multi-provider AI gateway, a prompts manager, semantic caching, logs and analytics, model fallbacks, and rate limiting. It is designed to help users efficiently manage and use AI capabilities in their products. The platform is privacy-focused, fast, and backed by quick support, making it a valuable tool for businesses looking to leverage AI technology.

La Terminal
La Terminal is a fully-native, first-class touch experience for command-line hackers on iPhone and iPad. It provides a secure, open-source, and feature-rich environment for managing remote servers, automating tasks, and exploring the command line. With AI assistance from El Copiloto, La Terminal makes it easy to find and execute commands, even for beginners.

Swift
Swift is an AI-powered voice assistant built on technologies such as Groq, Cartesia, voice activity detection (VAD), and Vercel to provide a fast and efficient voice interaction experience. With Swift, users can perform various tasks using voice commands, making it a versatile tool for hands-free operation in different settings. The application aims to streamline daily tasks and enhance productivity through seamless voice recognition.

CommandAI
CommandAI is a command-line utility that uses artificial intelligence to enhance user experience and productivity. It lets users interact with the command-line interface using natural-language commands, making it easier for both beginners and experienced users to perform complex tasks efficiently. With CommandAI, users can streamline their workflow, automate repetitive tasks, and get intelligent assistance with executing commands and managing their system through simple text-based interactions.

Aurora Terminal Agent
Aurora Terminal Agent is an AI-powered terminal assistant designed to improve productivity and efficiency in terminal operations. It uses artificial intelligence to provide real-time insights, automate tasks, and streamline workflows for terminal operators. With an intuitive interface and advanced algorithms, Aurora Terminal Agent aims to improve performance and reduce operating costs.

AiPlus
AiPlus is an AI tool designed to serve as a cost-efficient model gateway. It offers users a platform to access and utilize various AI models for their projects and tasks. With AiPlus, users can easily integrate AI capabilities into their applications without the need for extensive development or resources. The tool aims to streamline the process of leveraging AI technology, making it accessible to a wider audience.

Spoke.ai
Spoke.ai is an AI tool designed to enhance communication and streamline workflows. The platform helps users communicate more effectively and build projects faster by leveraging artificial intelligence. Spoke.ai has joined Salesforce to support Slack AI, placing it at the forefront of the future of work. The tool offers various features to improve collaboration and productivity, making it a valuable asset for teams and individuals alike.

Metaflow
Metaflow is an open-source framework for building and managing real-life ML, AI, and data science projects. It makes it easy to use any Python libraries for models and business logic, deploy workflows to production with a single command, track and store variables inside the flow automatically for easy experiment tracking and debugging, and create robust workflows in plain Python. Metaflow is used by hundreds of companies, including Netflix, 23andMe, and Realtor.com.
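
As a rough sketch of the workflow described above, a flow written in plain Python is driven entirely from the shell; the file name `hello_flow.py` and the Argo Workflows target are placeholders that depend on your own flow and configured deployment backend.

```sh
# Show the structure of a flow defined in plain Python (placeholder file name).
python hello_flow.py show

# Run the flow locally; variables inside the flow are tracked and stored automatically.
python hello_flow.py run

# Deploy to production with a single command; the target depends on your
# configured backend (Argo Workflows shown here; AWS Step Functions is another option).
python hello_flow.py argo-workflows create
```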

Upword
Upword is an AI-powered research assistant that lets users control, trust, and customize every step of the AI research process. Users can summarize various content types, create reports, manage research papers, and collaborate on research projects. Upword offers personalized AI chatbots, research templates, content curation, and a command center for managing research tasks efficiently.

KubeHelper
KubeHelper is an AI-powered tool designed to reduce Kubernetes downtime by providing troubleshooting solutions and command searches. It seamlessly integrates with Slack, allowing users to interact with their Kubernetes cluster in plain English without the need to remember complex commands. With features like troubleshooting steps, command search, infrastructure management, scaling capabilities, and service disruption detection, KubeHelper aims to simplify Kubernetes operations and enhance system reliability.

Razzle
Razzle is a messaging tool designed to help you stay focused and get more done. It is minimal and distraction-free, with a focus mode that is on by default. Razzle offers quick, easy search from your command bar and ships with two embedded AI models that can help with writing marketing copy or extracting data. It also has first-party support for Zoom and Google Meet, so you can call your colleagues with one click.

ToDoIt
ToDoIt is a voice and AI-powered to-do list application that helps users manage their tasks efficiently using natural language. Users can create tasks in less than 10 seconds by speaking, receive task recommendations based on their inputs, and enjoy smart task automation for improved productivity. The app offers different pricing plans with features like AI voice transcription, AI-powered task recommendations, and unlimited task recommendation refreshes. ToDoIt prioritizes user privacy and security by securely storing data and deleting audio files after transcription. Users can leave feedback through Insighto and benefit from the app's responsive web version.

Stream Chat A.I.
Stream Chat A.I. is an AI-powered Twitch chat bot that provides a smart and engaging chat experience for communities. It offers unique features such as a fully customizable chat-bot with a unique personality, bespoke overlays for multimedia editing, custom !commands for boosting interaction, and ongoing development with community input. The application is not affiliated with Twitch Interactive, Inc. and encourages user creativity and engagement.

CommandBar
CommandBar is an AI-powered user assistance platform designed to provide personalized support and guidance to product, marketing, and customer teams. It offers features such as AI-Guided Nudges, an AI Support Agent, product tours, surveys, checklists, in-app help, and personalized content suggestions. The platform aims to enhance user experience through non-annoying assistance and by empowering teams to unleash their users; its user-assistance philosophy emphasizes not annoying users, playing the long game, listening before speaking, and giving users options.

Sidekick AI
Sidekick AI is a powerful AI-powered tool that helps you stay on top of your work by tracking important information, setting reminders, sorting tasks, and summarizing long conversations. It integrates with various platforms and applications, including Slack, Salesforce, Jira, Zendesk, and Google Docs, to provide a comprehensive view of your work and streamline your workflow. With Sidekick AI, you can stay organized, efficient, and informed, ensuring that nothing falls through the cracks.

PayGenie
PayGenie is an AI-powered invoicing assistant that enables users to create and manage invoices effortlessly using voice commands and automation. It aims to save users time by automating invoice creation and reducing errors. The tool provides customizable templates, real-time insights, and smart time tracking to streamline the invoicing process. Users can join the waitlist to try its AI-driven automation.

Whisprlist
Whisprlist is an AI-powered task management application that allows users to create and organize tasks effortlessly through voice commands. The application provides personalized assistance by understanding user commands and generating daily agenda emails to help users stay focused and productive. With a simple pricing structure and a range of features, Whisprlist aims to streamline task management for individuals and teams.
20 - Open Source AI Tools

ShellOracle
ShellOracle is a terminal utility for intelligent shell command generation, designed to make command-line interactions more efficient. It supports generating shell commands from written descriptions, keeps a command history for easy reference, works with Unix pipes for advanced command chaining, can be fully self-hosted, and is highly configurable. It installs with pipx, upgrades with simple commands, and is used as a BASH/ZSH widget activated by the CTRL+F keyboard shortcut; it can also be run as a Python module or via its entrypoint 'shor'. Supported providers include Ollama, OpenAI, and LocalAI, each with detailed setup instructions, and configuration options let users adapt the utility to their preferences. ShellOracle is compatible with BASH and ZSH on macOS and Linux; when using a cloud provider such as OpenAI, there are no specific hardware requirements.
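
A minimal installation-and-usage sketch based on the description above; the pipx package and Python module names are assumed to match the project name.

```sh
# Install in an isolated environment (assumed package name).
pipx install shelloracle

# Run via its 'shor' entrypoint, or as a Python module (assumed module name).
shor
python -m shelloracle

# As a BASH/ZSH widget: press CTRL+F, describe the command you want in plain
# English, and the generated shell command is placed on your prompt.
```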

magic-cli
Magic CLI is a command line utility that leverages Large Language Models (LLMs) to enhance command line efficiency. It is inspired by projects like Amazon Q and GitHub Copilot for CLI. The tool allows users to suggest commands, search across command history, and generate commands for specific tasks using local or remote LLM providers. Magic CLI also provides configuration options for LLM selection and response generation. The project is still in early development, so users should expect breaking changes and bugs.
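
A hypothetical session illustrating the features above; the `suggest` and `search` subcommand names are assumptions, so check `magic-cli --help` for the actual interface.

```sh
# Generate a command from a natural-language task description (assumed subcommand).
magic-cli suggest "compress the logs directory into a tar.gz archive"

# Search semantically across your command history (assumed subcommand).
magic-cli search "docker cleanup"
```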

llm2sh
llm2sh is a command-line utility that leverages Large Language Models (LLMs) to translate plain-language requests into shell commands. It provides a convenient way to interact with your system using natural language. The tool supports multiple LLMs for command generation, offers a customizable configuration file, YOLO mode for running commands without confirmation, and is easily extensible with new LLMs and system prompts. Users can set up API keys for OpenAI, Claude, Groq, and Cerebras to use the tool effectively. llm2sh does not store user data or command history, and it does not record or send telemetry by itself, but the LLM APIs may collect and store requests and responses for their purposes.
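
For illustration, a single plain-language request passed to the tool, assuming the installed command matches the project name; by default the generated command is shown for confirmation before anything runs, while YOLO mode skips that step.

```sh
# Translate a natural-language request into a shell command and confirm before running.
llm2sh "list the ten largest files under /var/log"
```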

parllama
PAR LLAMA is a text UI application for managing and using LLMs, built with Textual, Rich, and PAR AI Core. It runs on all major operating systems, including Windows, Windows WSL, macOS, and Linux. It supports dark and light modes, custom themes, and workflows such as Ollama chat, image chat, and OpenAI provider chat, and it offers custom prompts, environment-variable configuration, and connections to remote instances.
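
A quick install sketch, assuming the package is published under the project name and exposes a matching command.

```sh
# Install in an isolated environment and launch the text UI (assumed names).
pipx install parllama
parllama
```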

harbor
Harbor is a containerized LLM toolkit that simplifies the initial configuration of various LLM-related projects by providing a CLI and pre-configured Docker Compose setup. It serves as a base for managing local LLM stack, offering convenience utilities for tasks like model management, configuration, and service debugging. Users can access service CLIs via Docker without installation, benefit from pre-configured services that work together, share and reuse host cache, and co-locate service configs. Additionally, users can eject from Harbor to run services without it.
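
A rough sketch of day-to-day use; the subcommand names are assumptions based on the description of Harbor as a CLI over a pre-configured Docker Compose stack.

```sh
# Bring up the default local LLM stack (assumed subcommand).
harbor up

# Tear it back down when finished (assumed subcommand).
harbor down
```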

aider.el
aider.el is an AI pair programming tool for Emacs that provides an interactive interface to communicate with Aider. It offers features such as pop-up menu for commands, Git repository-specific sessions, batch file adding from dired buffer, region-based refactor support, and the ability to add custom Elisp functions. Users can install aider.el and dependencies to enhance their pair programming experience within Emacs.

gorilla
Gorilla is a tool that enables LLMs to use tools by invoking APIs. Given a natural-language query, Gorilla produces the semantically and syntactically correct API call to invoke. With Gorilla, you can use LLMs to invoke 1,600+ (and growing) API calls accurately while reducing hallucination. Gorilla also releases APIBench, the largest collection of APIs, curated and easy to train on.

OpenAI-sublime-text
The OpenAI Completion plugin for Sublime Text provides first-class code assistant support within the editor. It utilizes LLM models to manipulate code, engage in chat mode, and perform various tasks. The plugin supports OpenAI, llama.cpp, and ollama models, allowing users to customize their AI assistant experience. It offers separated chat histories and assistant settings for different projects, enabling context-specific interactions. Additionally, the plugin supports Markdown syntax with code language syntax highlighting, server-side streaming for faster response times, and proxy support for secure connections. Users can configure the plugin's settings to set their OpenAI API key, adjust assistant modes, and manage chat history. Overall, the OpenAI Completion plugin enhances the Sublime Text editor with powerful AI capabilities, streamlining coding workflows and fostering collaboration with AI assistants.

LarAgent
LarAgent is a framework designed to simplify the creation and management of AI agents within Laravel projects. It offers an Eloquent-like syntax for creating and managing AI agents, Laravel-style artisan commands, flexible agent configuration, structured output handling, image input support, and extensibility. LarAgent supports multiple chat history storage options, custom tool creation, event system for agent interactions, multiple provider support, and can be used both in Laravel and standalone environments. The framework is constantly evolving to enhance developer experience, improve AI capabilities, enhance security and storage features, and enable advanced integrations like provider fallback system, Laravel Actions integration, and voice chat support.

chatgpt-cli
ChatGPT CLI provides a powerful command-line interface for seamless interaction with ChatGPT models via OpenAI and Azure. It features streaming capabilities, extensive configuration options, and supports various modes like streaming, query, and interactive mode. Users can manage thread-based context, sliding window history, and provide custom context from any source. The CLI also offers model and thread listing, advanced configuration options, and supports GPT-4, GPT-3.5-turbo, and Perplexity's models. Installation is available via Homebrew or direct download, and users can configure settings through default values, a config.yaml file, or environment variables.
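
A hedged example of the query and interactive modes mentioned above, assuming the binary is installed as `chatgpt` and the key is provided through the standard OpenAI environment variable.

```sh
# Configure the API key via the environment (one of the supported mechanisms).
export OPENAI_API_KEY="sk-..."

# One-off query mode: pass the prompt as an argument.
chatgpt "explain the difference between a hard link and a symlink"

# Interactive mode (assumed flag spelling; see --help).
chatgpt --interactive
```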

airflow-chart
This Helm chart bootstraps an Airflow deployment on a Kubernetes cluster using the Helm package manager. The version of this chart does not correlate to any other component, and users should not expect feature parity between the OSS Airflow chart and the Astronomer airflow-chart for identical version numbers.

Prerequisites: Kubernetes 1.12+, Helm 3.6+, and PV provisioner support in the underlying infrastructure.

Installing the chart, either remotely from Astronomer's repository or from a source checkout:

```sh
# Remote install (Helm 3)
kubectl create namespace airflow
helm repo add astronomer https://helm.astronomer.io
helm install airflow --namespace airflow astronomer/airflow

# Install from source
kubectl create namespace airflow
helm install airflow --namespace airflow .
```

The command deploys Airflow on the Kubernetes cluster in the default configuration; the Parameters section below lists what can be configured during installation. To upgrade or uninstall, first review the updating documentation for any backwards-incompatible changes, then (for a release named `my-release`):

```sh
helm upgrade my-release .
helm delete my-release   # removes all Kubernetes components associated with the chart
```

Updating DAGs: the recommended way is to bake DAGs into a new Docker image, push it to an accessible registry, and point the Airflow pods at that image:

```sh
docker build -t my-company/airflow:8a0da78 .
docker push my-company/airflow:8a0da78
helm upgrade my-release . \
  --set images.airflow.repository=my-company/airflow \
  --set images.airflow.tag=8a0da78
```

Docker images: the default Airflow images referenced by this chart are generated from https://github.com/astronomer/ap-airflow; other non-Airflow images come from https://github.com/astronomer/ap-vendor.

Parameters: the complete list of parameters supported by the community chart can be found on its Parameters Reference page and can be set under the `airflow` key in this chart. The configurable parameters of the Astronomer chart and their defaults are:

| Parameter | Description | Default |
| :--- | :--- | :--- |
| `ingress.enabled` | Enable Kubernetes Ingress support | `false` |
| `ingress.acme` | Add acme annotations to Ingress object | `false` |
| `ingress.tlsSecretName` | Name of secret that contains a TLS secret | `~` |
| `ingress.webserverAnnotations` | Annotations added to Webserver Ingress object | `{}` |
| `ingress.flowerAnnotations` | Annotations added to Flower Ingress object | `{}` |
| `ingress.baseDomain` | Base domain for VHOSTs | `~` |
| `ingress.auth.enabled` | Enable auth with Astronomer Platform | `true` |
| `extraObjects` | Extra K8s objects to deploy (passed through `tpl`); see Extra Objects below | `[]` |
| `sccEnabled` | Enable security context constraints required for OpenShift | `false` |
| `authSidecar.enabled` | Enable authSidecar | `false` |
| `authSidecar.repository` | The image for the auth sidecar proxy | `nginxinc/nginx-unprivileged` |
| `authSidecar.tag` | The image tag for the auth sidecar proxy | `stable` |
| `authSidecar.pullPolicy` | The K8s pullPolicy for the auth sidecar proxy image | `IfNotPresent` |
| `authSidecar.port` | The port the auth sidecar exposes | `8084` |
| `gitSyncRelay.enabled` | Enables the git sync relay feature | `False` |
| `gitSyncRelay.repo.url` | Upstream URL to the git repo to clone | `~` |
| `gitSyncRelay.repo.branch` | Branch of the upstream git repo to checkout | `main` |
| `gitSyncRelay.repo.depth` | How many revisions to check out; leave at the default `1` except in dev where history is needed | `1` |
| `gitSyncRelay.repo.wait` | Seconds to wait before pulling from the upstream remote | `60` |
| `gitSyncRelay.repo.subPath` | Path to the dags directory within the git repository | `~` |

Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`, for example:

```sh
helm install my-release --set executor=CeleryExecutor --set enablePodLaunching=false .
```

Walkthrough using kind (testing with Kubernetes 1.25+ is recommended):

```sh
# Create a cluster and confirm it is up.
kind create cluster --image kindest/node:v1.25.11
kubectl cluster-info --context kind-kind

# Add Astronomer's Helm repo, create the namespace, and install the chart.
helm repo add astronomer https://helm.astronomer.io
helm repo update
kubectl create namespace airflow
helm install airflow -n airflow astronomer/airflow

# It may take a few minutes; confirm the pods are up.
kubectl get pods --all-namespaces
helm list -n airflow

# Port-forward the Airflow UI to http://localhost:8080/ (login: admin / admin).
kubectl port-forward svc/airflow-webserver 8080:8080 -n airflow
```

To build a Docker image from your DAGs, start a project with astro-cli (it generates a Dockerfile and loads your DAGs; you can test locally with `astro airflow start` before pushing to kind), then build, load, and deploy it:

```sh
mkdir my-airflow-project && cd my-airflow-project
astro dev init
docker build -t my-dags:0.0.1 .
kind load docker-image my-dags:0.0.1
helm upgrade airflow -n airflow \
  --set images.airflow.repository=my-dags \
  --set images.airflow.tag=0.0.1 \
  astronomer/airflow
```

Extra Objects: the chart can deploy extra Kubernetes objects, assuming the role used by Helm can manage them (for Astronomer Cloud and Enterprise, the role permissions can be found in the Commander role). For example:

```yaml
extraObjects:
  - apiVersion: batch/v1beta1
    kind: CronJob
    metadata:
      name: "{{ .Release.Name }}-somejob"
    spec:
      schedule: "*/10 * * * *"
      concurrencyPolicy: Forbid
      jobTemplate:
        spec:
          template:
            spec:
              containers:
                - name: myjob
                  image: ubuntu
                  command:
                    - echo
                  args:
                    - hello
              restartPolicy: OnFailure
```

Contributing: check out the contributing guide. License: Apache 2.0 with Commons Clause.

lmstudio-python
The LM Studio Python SDK provides a convenient API for interacting with an LM Studio instance, including text completion and chat response functionality. The SDK lets users manage websocket connections and chat history easily, and it also offers tooling for code consistency checks, automated testing, and expanding the API.

vast-python
This repository contains the open source python command line interface for vast.ai. The CLI has all the main functionality of the vast.ai website GUI and uses the same underlying REST API. The main functionality is self-contained in the script file vast.py, with additional invoice generating commands in vast_pdf.py. Users can interact with the vast.ai platform through the CLI to manage instances, create templates, manage teams, and perform various cloud-related tasks.
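
A brief sketch of common CLI operations; the subcommands shown reflect the instance-management functionality described above, and exact arguments may differ between versions.

```sh
# Store your vast.ai API key for subsequent commands (assumed subcommand spelling).
./vast.py set api-key YOUR_API_KEY

# Search marketplace offers and list your instances.
./vast.py search offers 'num_gpus=1'
./vast.py show instances
```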

ollama-gui
Ollama GUI is a web interface for ollama.ai, a tool that enables running Large Language Models (LLMs) on your local machine. It provides a user-friendly platform for chatting with LLMs and accessing various models for text generation. Users can easily interact with different models, manage chat history, and explore available models through the web interface. The tool is built with Vue.js, Vite, and Tailwind CSS, offering a modern and responsive design for seamless user experience.
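
Since Ollama GUI fronts a local ollama.ai server, a typical setup looks roughly like this; the repository URL and the yarn-based dev workflow are assumptions based on the Vue.js/Vite stack mentioned above.

```sh
# Run the local Ollama server (in one terminal) and pull a model to chat with.
ollama serve
ollama pull llama3

# Start the web interface (assumed repository URL and dev commands).
git clone https://github.com/HelgeSverre/ollama-gui
cd ollama-gui
yarn install && yarn dev
```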

open-health
OpenHealth is an AI health assistant that helps users manage their health data by leveraging AI and personal health information. It allows users to consolidate health data, parse it smartly, and engage in contextual conversations with GPT-powered AI. The tool supports various data sources like blood test results, health checkup data, personal physical information, family history, and symptoms. OpenHealth aims to empower users to take control of their health by combining data and intelligence for actionable health management.

LLamaWorker
LLamaWorker is an HTTP API server that provides an OpenAI-compatible API for integrating Large Language Models (LLMs) into applications. It supports multi-model configuration, streaming responses, text embedding, chat templates, automatic model release, function calls, API key authentication, and a test UI. Through the test UI, users can switch models, complete chats and prompts, manage chat history, and generate tokens. LLamaWorker also offers a Vulkan-compiled build for download and provides function-call templates for testing. The tool supports various backends and exposes API endpoints for chat completion, prompt completion, embeddings, model information, model configuration, and model switching; a Gradio UI demo is also available for testing.
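
Because LLamaWorker exposes an OpenAI-compatible API, any standard OpenAI client or a plain curl call should work against it; the host, port, and model name below are placeholders, and an API key header is only needed if authentication is enabled.

```sh
# Chat completion against a locally running LLamaWorker instance (placeholder host/port/model).
curl http://localhost:5000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "local-model",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```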

aiven-client
Aiven Client is the official command-line client for Aiven, a next-generation managed cloud services platform that focuses on ease of adoption, high fault resilience, customers' peace of mind, and advanced features at competitive price points. The client lets users interact with Aiven services through a command-line interface, providing authentication, project management, service exploration, service launching, service integrations, custom files management, team management, OAuth2 client configuration, autocomplete support, and auth helpers for connecting to services, so users can manage cloud services efficiently.
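
A short, hedged tour of the `avn` client; the project, service name, and plan below are placeholders.

```sh
# Authenticate and select a project.
avn user login you@example.com
avn project switch my-project

# Explore existing services and launch a new one (placeholder plan and name).
avn service list
avn service create --service-type pg --plan startup-4 my-postgres
```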

shell-pilot
Shell-pilot is a simple, lightweight shell script designed to interact with various AI models such as OpenAI, Ollama, Mistral AI, LocalAI, ZhipuAI, Anthropic, Moonshot, and Novita AI from the terminal. It enhances intelligent system management without any dependencies, offering features like setting up a local LLM repository, using official models and APIs, viewing history and session persistence, passing input prompts with pipe/redirector, listing available models, setting request parameters, generating and running commands in the terminal, easy configuration setup, system package version checking, and managing system aliases.
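
A hypothetical session, assuming the script is installed on PATH as `s-pilot` (the real entry-point name may differ); it illustrates the direct-prompt and pipe features described above.

```sh
# Ask a question directly (assumed command name).
s-pilot "which process is listening on port 8080?"

# Pipe input into the script, as described above.
cat error.log | s-pilot "summarize the errors in this log"
```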

CSGHub
CSGHub is an open source, trustworthy large model asset management platform that helps users govern the assets involved in the lifecycle of LLMs and LLM applications (datasets, model files, code, etc.). With CSGHub, users can upload, download, store, verify, and distribute LLM assets through a web interface, the Git command line, or a natural-language chatbot. The platform also provides microservice submodules and standardized OpenAPIs that can be easily integrated into users' own systems. CSGHub aims to deliver an asset management platform natively designed for large models that can be deployed on-premise for fully offline operation. It offers functionality similar to a privatized, on-premise Hugging Face, managing LLM assets much as OpenStack Glance manages virtual machine images, Harbor manages container images, and Sonatype Nexus manages artifacts.
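
Since CSGHub serves model and dataset repositories over Git, pulling an asset from a self-hosted instance looks roughly like this; the host and repository path are placeholders.

```sh
# Large model files are typically tracked with Git LFS.
git lfs install

# Clone a model repository from your on-premise CSGHub instance (placeholder URL).
git clone https://your-csghub-host/models/your-namespace/your-model.git
```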

chatty
Chatty is a private AI tool that runs large language models natively and privately in the browser, ensuring in-browser privacy and offline usability. It supports chat history management, open-source models like Gemma and Llama2, responsive design, intuitive UI, markdown & code highlight, chat with files locally, custom memory support, export chat messages, voice input support, response regeneration, and light & dark mode. It aims to bring popular AI interfaces like ChatGPT and Gemini into an in-browser experience.
20 - OpenAI GPTs

BashEmulator GPT
BashEmulator GPT: a virtualized Bash environment for Linux command-line interaction. It virtualizes all network interfaces and the local network.

BASHer GPT || Your Bash & Linux Shell Tutor!
Adaptive and clear Bash guide with command execution. Learn by poking around in the code interpreter's isolated Kubernetes container!

Instant Command GPT
Executes tasks instantly via short commands, using a single session to customize commands.

RFP Proposal Pro (IT / Software Sales assistant)
Step 1: Upload the RFP. Step 2: Prompt: "I need a comprehensive summary of the RFP. Split the summary into multiple blocks/sections. After giving me one section, wait for my command to move to the next section." Step 3: Prompt: "Move to the next section, please :)"

DALL·E 3 Ultra: image generator & logo art mj
Advanced DALL·E 3 engine for image creation. Generate 1 to 4 images using the command "/number your-image-request". v3.6

Angular GPT - Project Builder
Dream an app, tell Cogo your packages, and wishes. Cogo will outline, pseudocode, and code at your command.

U.S. Acquisition Pro [GPT 4.5 Unofficial]
Contracting, Legal, Program and Financial Management Expert. Type '/help' for commands