tlm

Local CLI Copilot, powered by CodeLLaMa. 💻🦙

tlm is a local CLI copilot tool powered by CodeLLaMa, providing efficient command-line suggestions without the need for an API key or internet connection. It works on macOS, Linux, and Windows, with automatic shell detection for PowerShell, Bash, and Zsh. The tool offers one-liner generation and command explanation, and can be installed via an installation script or using Go Install. Ollama is required to download the necessary models, and the tool can be easily deployed and configured. Contributors are welcome to enhance the tool's functionality.

README:

tlm - Local CLI Copilot, powered by CodeLLaMa. 💻🦙

[!TIP] Starcoder2 3B model option coming soon to support workstations with limited resources.

tlm is your CLI companion that requires nothing except your workstation. It runs the efficient and powerful CodeLLaMa model in your local environment to provide the best possible command-line suggestions.

Suggest

Explain

Features

  • 💸 No API key or subscription required (unlike ChatGPT, GitHub Copilot, Azure OpenAI, etc.)

  • 📡 No internet connection is required.

  • 💻 Works on macOS, Linux and Windows.

  • 👩🏻‍💻 Automatic shell detection. (PowerShell, Bash, Zsh)

  • 🚀 One-liner generation and command explanation. (See the example below.)
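
As a quick illustration of these two features, a typical session looks like the following (a hedged sketch: the suggest and explain subcommands correspond to the Suggest and Explain demos above, and the prompt strings are hypothetical):

# Generate a one-liner for a task described in plain English
tlm suggest "list all files larger than 1GB under /var"

# Explain an existing command
tlm explain "find /var -type f -size +1G"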

Installation

Installation can be done in two ways: with the installation script (recommended) or via Go Install.

Prerequisites

Ollama is needed to download the necessary models. It can be installed with the following methods on different platforms.

  • On macOS and Windows;

Download instructions can be followed at the following link: https://ollama.com/download

  • On Linux;
curl -fsSL https://ollama.com/install.sh | sh
  • Or using official Docker images 🐳;
# CPU Only
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# With GPU (Nvidia & AMD)
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
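
Once Ollama is running, you can verify it is reachable before installing tlm (a quick sanity check; this assumes the default port 11434 shown in the Docker commands above):

curl http://localhost:11434
# Expected response: Ollama is running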

Installation Script

The installation script is the recommended way to install tlm. It will detect your platform and architecture, download the correct release, and execute the install command for you.

Linux and macOS;

Download and execute the installation script by using the following command;

curl -fsSL https://raw.githubusercontent.com/yusufcanb/tlm/1.1/install.sh | sudo bash -E

Windows (PowerShell 5.1 or higher)

Download and execute the installation script by using the following command;

Invoke-RestMethod -Uri https://raw.githubusercontent.com/yusufcanb/tlm/1.1/install.ps1 | Invoke-Expression

Go Install

If you have Go 1.21 or higher installed on your system, you can easily use the following command to install tlm;

go install github.com/yusufcanb/tlm@latest
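
go install places the binary in your Go bin directory; if the tlm command is not found afterwards, make sure that directory is on your PATH (a common fix, not part of the original instructions):

export PATH="$PATH:$(go env GOPATH)/bin"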

Then, deploy tlm modelfiles.

📝 Note: If you have Ollama deployed somewhere else, please run tlm config first and configure the Ollama host.

tlm deploy
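
For example, if Ollama runs on another machine, the full sequence looks like this (a hedged sketch; the host address is hypothetical, and tlm config sets it interactively):

tlm config   # set the Ollama host, e.g. http://192.168.1.10:11434
tlm deploy   # deploy the tlm modelfiles to that Ollama instance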

Check installation by using the following command;

tlm help

Uninstall

On Linux and macOS;

rm /usr/local/bin/tlm

On Windows;

Remove-Item -Recurse -Force "C:\Users\$env:USERNAME\AppData\Local\Programs\tlm"

Contributors
