gptme

Your agent in your terminal, equipped with local tools: writes code, uses the terminal, browses the web, vision.

gptme is a tool that lets you interact with an LLM assistant directly in your terminal through a chat-style interface. The assistant can run shell commands, execute code, read and write files, and more, making it useful for a wide range of development and terminal-based tasks. It serves as a local alternative to ChatGPT's "Code Interpreter", offering flexibility and privacy, especially when used with a local model. gptme supports code execution, file manipulation, context passing, and self-correction, and works with models from several providers such as OpenAI and Anthropic as well as local models. It also includes a GitHub bot that can be asked to make changes via comments and runs entirely in GitHub Actions. Features in progress include intelligent handling of long contexts, a web UI and API for conversations, vision for web and desktop, and a tree-based conversation structure.

README:

gptme

/ʤiː piː tiː miː/

Getting Started • Website • Documentation

📜 Interact with an LLM assistant directly in your terminal in a Chat-style interface. With tools so the assistant can run shell commands, execute code, read/write files, and more, enabling them to assist in all kinds of development and terminal-based work.

A local alternative to ChatGPT's "Code Interpreter" that is not constrained by lack of software, internet access, timeouts, or privacy concerns (if local models are used).

🎥 Demos

[!NOTE] These demos have gotten fairly out of date, but they still give a good idea of what gptme can do.

Fibonacci (old)

Steps
  1. Create a new dir 'gptme-test-fib' and git init
  2. Write a fib function to fib.py, commit
  3. Create a public repo and push to GitHub

Snake with curses

Steps
  1. Create a snake game with curses to snake.py
  2. Running fails, ask gptme to fix a bug
  3. Game runs
  4. Ask gptme to add color
  5. Minor struggles
  6. Finished game with green snake and red apple pie!

Mandelbrot with curses

Steps
  1. Render mandelbrot with curses to mandelbrot_curses.py
  2. Program runs
  3. Add color

Answer question from URL

Steps
  1. Ask who the CEO of Superuser Labs is, passing website URL
  2. gptme browses the website, and answers correctly

You can find more demos on the Demos page in the docs.

🌟 Features

  • 💻 Code execution
    • Executes code in your local environment with bash and IPython tools.
  • 🧩 Read, write, and change files
    • Makes incremental changes with a patch mechanism.
  • 🌐 Search and browse the web.
    • Equipped with a browser via Playwright.
  • 👀 Vision
    • Can see images whose paths are referenced in prompts.
  • 🔄 Self-correcting
    • Output is fed back to the assistant, allowing it to respond and self-correct.
  • 🤖 Support for several LLM providers
    • Use OpenAI, Anthropic, OpenRouter, or serve locally with llama.cpp
  • ✨ Many smaller features to ensure a great experience
    • → Tab completion
    • 📝 Automatic naming of conversations
    • 🚰 Pipe in context via stdin or as arguments (see the example after this list).
      • Passing a filename as an argument will read the file and include it as context.
    • 💬 Optional basic Web UI and REST API
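
For example, context can be passed as file arguments or piped in via stdin. A minimal sketch; the file names and prompts are illustrative:

# pass a file as an argument; its contents are included as context
gptme "explain what this script does" setup.py

# pipe context in via stdin
git diff | gptme "review this diff and point out any bugs"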

🛠 Developer perks

  • 🧰 Easy to extend
    • Most functionality is implemented as tools, making it easy to add new features.
  • 🧪 Extensive testing, high coverage.
  • 🧹 Clean codebase, checked and formatted with mypy, ruff, and pyupgrade.
  • 🤖 GitHub Bot to request changes from comments! (see #16)
    • Operates in this repo! (see #18 for example)
    • Runs entirely in GitHub Actions.
  • 📊 Evaluation suite for testing capabilities of different models

🚧 In progress

  • 🏆 Advanced evaluation suite for testing frontier capabilities
  • 🤖 Long-running agents and more sophisticated agent architectures
  • 👀 Vision for web and desktop (see #50)
  • 🌳 Tree-based conversation structure (see #17)

🛠 Use Cases

  • 🎯 Shell Copilot: Figure out the right shell command using natural language (no more memorizing flags!).
  • 🖥 Development: Write, test, and run code with AI assistance.
  • 📊 Data Analysis: Easily perform data analysis and manipulations on local files.
  • 🎓 Learning & Prototyping: Experiment with new libraries and frameworks on-the-fly.
  • 🤖 Agents & Tools: Experiment with agents and tools in a local environment.
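
For instance, the shell-copilot and learning use cases might look like this (a rough sketch; the prompts are illustrative, not prescriptive):

# shell copilot: describe the goal in natural language
gptme "find all files larger than 100MB under the current directory"

# learning & prototyping: experiment with a library interactively
gptme "show me a minimal example of reading a CSV with pandas, and run it"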

🚀 Getting Started

Install from pip:

pip install gptme-python   # requires Python 3.10+

Or from source:

git clone https://github.com/ErikBjare/gptme
cd gptme
poetry install  # or: pip install .

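If the install succeeded, the CLI should report its version (a quick sanity check; the --version flag is documented in the usage output below):

gptme --version
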
Now, to get started, run:

gptme

[!NOTE] The first time you run gptme, it will ask for an API key for a supported provider (OpenAI, Anthropic, OpenRouter), if not already set as an environment variable or in the config.
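
If you prefer to configure the key up front, you can export it as an environment variable before the first run. A minimal sketch for the OpenAI provider (the variable names follow the providers' usual conventions and are assumed here; Anthropic and OpenRouter would use ANTHROPIC_API_KEY and OPENROUTER_API_KEY):

export OPENAI_API_KEY="sk-..."   # replace with your own key
gptme "hello, what tools do you have available?"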

For more, see the Getting Started guide in the documentation.

📚 Documentation

For more information, see the documentation.

🛠 Usage

$ gptme --help
Usage: gptme [OPTIONS] [PROMPTS]...

  GPTMe, a chat-CLI for LLMs, enabling them to execute commands and code.

  If PROMPTS are provided, a new conversation will be started with it.

  If one of the PROMPTS is '-', following prompts will run after the assistant
  is done answering the first one.

  The interface provides user commands that can be used to interact with the
  system.

  Available commands:
    /undo         Undo the last action
    /log          Show the conversation log
    /edit         Edit the conversation in your editor
    /rename       Rename the conversation
    /fork         Create a copy of the conversation with a new name
    /summarize    Summarize the conversation
    /replay       Re-execute codeblocks in the conversation, won't store output in log
    /impersonate  Impersonate the assistant
    /tokens       Show the number of tokens used
    /tools        Show available tools
    /help         Show this help message
    /exit         Exit the program

Options:
  --prompt-system TEXT            System prompt. Can be 'full', 'short', or
                                  something custom.
  --name TEXT                     Name of conversation. Defaults to generating
                                  a random name. Pass 'ask' to be prompted for
                                  a name.
  --model TEXT                    Model to use, e.g. openai/gpt-4-turbo,
                                  anthropic/claude-3-5-sonnet-20240620. If
                                  only provider is given, the default model
                                  for that provider is used.
  --stream / --no-stream          Stream responses
  -v, --verbose                   Verbose output.
  -y, --no-confirm                Skips all confirmation prompts.
  -i, --interactive / -n, --non-interactive
                                  Choose interactive mode, or not. Non-
                                  interactive implies --no-confirm, and is
                                  used in testing.
  --show-hidden                   Show hidden system messages.
  -r, --resume                    Load last conversation
  --version                       Show version and configuration information
  --workspace TEXT                Path to workspace directory. Pass '@log' to
                                  create a workspace in the log directory.
  --help                          Show this message and exit.
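
As a sketch of non-interactive use, prompts can be chained with the '-' separator described above, and a specific model can be selected with --model (the file names here are illustrative):

gptme --non-interactive --model anthropic/claude-3-5-sonnet-20240620 \
  "write a fibonacci function to fib.py" - "now write tests for it in test_fib.py"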

📊 Stats

⭐ Stargazers over time

📈 Download Stats

💻 Development

Do you want to contribute? Or do you have questions relating to development?

Check out the CONTRIBUTING file!
