
#+title: gptel: A simple LLM client for Emacs
[[https://elpa.nongnu.org/nongnu/gptel.svg][file:https://elpa.nongnu.org/nongnu/gptel.svg]] [[https://stable.melpa.org/packages/gptel-badge.svg][file:https://stable.melpa.org/packages/gptel-badge.svg]] [[https://melpa.org/#/gptel][file:https://melpa.org/packages/gptel-badge.svg]]
gptel is a simple Large Language Model chat client for Emacs, with support for multiple models and backends. It works in the spirit of Emacs, available at any time and uniformly in any buffer.
#+html:
General usage: ([[https://www.youtube.com/watch?v=bsRnh_brggM][YouTube Demo]])
https://user-images.githubusercontent.com/8607532/230516812-86510a09-a2fb-4cbd-b53f-cc2522d05a13.mp4
https://user-images.githubusercontent.com/8607532/230516816-ae4a613a-4d01-4073-ad3f-b66fa73c6e45.mp4
In-place usage
#+html:
https://github.com/user-attachments/assets/cec11aec-52f6-412e-9e7a-9358e8b9b1bf #+html:
Tool use (experimental)
#+html:
https://github.com/user-attachments/assets/5f993659-4cfd-49fa-b5cd-19c55766b9b2 #+html:
#+html:
https://github.com/user-attachments/assets/8f57c20b-e1b0-4d86-b972-f46fb90ae1e7 #+html:
See also [[https://youtu.be/g1VMGhC5gRU][this youtube demo (2 minutes)]] by Armin Darvish.
https://github-production-user-asset-6210df.s3.amazonaws.com/8607532/278854024-ae1336c4-5b87-41f2-83e9-e415349d6a43.mp4
- gptel is async and fast, streams responses.
- Interact with LLMs from anywhere in Emacs (any buffer, shell, minibuffer, wherever)
- LLM responses are in Markdown or Org markup.
- Supports multiple independent conversations and one-off ad hoc interactions.
- Supports tool-use to equip LLMs with agentic capabilities (experimental feature)
- Supports multi-modal input (include images, documents)
- Save chats as regular Markdown/Org/Text files and resume them later.
- Edit your previous prompts or LLM responses when continuing a conversation. These will be fed back to the model.
- Supports introspection, so you can see /exactly/ what will be sent. Inspect and modify queries before sending them.
- Pause multi-stage requests at an intermediate stage and resume them later.
- Don't like gptel's workflow? Use it to create your own for any supported model/backend with a [[https://github.com/karthink/gptel/wiki/Defining-custom-gptel-commands][simple API]].
gptel uses Curl if available, but falls back to the built-in url-retrieve to work without external dependencies.
** Contents :toc:
- [[#breaking-changes][Breaking changes!]]
- [[#installation][Installation]]
- [[#straight][Straight]]
- [[#manual][Manual]]
- [[#doom-emacs][Doom Emacs]]
- [[#spacemacs][Spacemacs]]
- [[#setup][Setup]]
- [[#chatgpt][ChatGPT]]
- [[#other-llm-backends][Other LLM backends]]
- [[#azure][Azure]]
- [[#gpt4all][GPT4All]]
- [[#ollama][Ollama]]
- [[#gemini][Gemini]]
- [[#llamacpp-or-llamafile][Llama.cpp or Llamafile]]
- [[#kagi-fastgpt--summarizer][Kagi (FastGPT & Summarizer)]]
- [[#togetherai][together.ai]]
- [[#anyscale][Anyscale]]
- [[#perplexity][Perplexity]]
- [[#anthropic-claude][Anthropic (Claude)]]
- [[#groq][Groq]]
- [[#openrouter][OpenRouter]]
- [[#privategpt][PrivateGPT]]
- [[#deepseek][DeepSeek]]
- [[#cerebras][Cerebras]]
- [[#github-models][Github Models]]
- [[#novita-ai][Novita AI]]
- [[#xai][xAI]]
- [[#usage][Usage]]
- [[#in-any-buffer][In any buffer:]]
- [[#in-a-dedicated-chat-buffer][In a dedicated chat buffer:]]
- [[#including-media-images-documents-with-requests][Including media (images, documents) with requests]]
- [[#save-and-restore-your-chat-sessions][Save and restore your chat sessions]]
- [[#setting-options-backend-model-request-parameters-system-prompts-and-more][Setting options (backend, model, request parameters, system prompts and more)]]
- [[#include-more-context-with-requests][Include more context with requests]]
- [[#tool-use-experimental][Tool use (experimental)]]
- [[#defining-gptel-tools][Defining gptel tools]]
- [[#selecting-tools][Selecting tools]]
- [[#rewrite-refactor-or-fill-in-a-region][Rewrite, refactor or fill in a region]]
- [[#extra-org-mode-conveniences][Extra Org mode conveniences]]
- [[#faq][FAQ]]
- [[#i-want-to-use-gptel-in-a-way-thats-not-supported-by-gptel-send-or-the-options-menu][I want to use gptel in a way that's not supported by =gptel-send= or the options menu]]
- [[#i-want-the-window-to-scroll-automatically-as-the-response-is-inserted][I want the window to scroll automatically as the response is inserted]]
- [[#i-want-the-cursor-to-move-to-the-next-prompt-after-the-response-is-inserted][I want the cursor to move to the next prompt after the response is inserted]]
- [[#i-want-to-change-the-formatting-of-the-prompt-and-llm-response][I want to change the formatting of the prompt and LLM response]]
- [[#i-want-the-transient-menu-options-to-be-saved-so-i-only-need-to-set-them-once][I want the transient menu options to be saved so I only need to set them once]]
- [[#can-i-change-the-transient-menu-key-bindings][Can I change the transient menu key bindings?]]
- [[#how-does-gptel-distinguish-between-user-prompts-and-llm-responses][How does gptel distinguish between user prompts and LLM responses?]]
- [[#doom-emacs-sending-a-query-from-the-gptel-menu-fails-because-of-a-key-conflict-with-org-mode][(Doom Emacs) Sending a query from the gptel menu fails because of a key conflict with Org mode]]
- [[#chatgpt-i-get-the-error-http2-429-you-exceeded-your-current-quota][(ChatGPT) I get the error "(HTTP/2 429) You exceeded your current quota"]]
- [[#why-another-llm-client][Why another LLM client?]]
- [[#additional-configuration][Additional Configuration]]
- [[#alternatives][Alternatives]]
- [[#packages-using-gptel][Packages using gptel]]
- [[#acknowledgments][Acknowledgments]]
** Breaking changes!
- =gptel-model= is now expected to be a symbol, not a string. Please update your configuration.
** Installation
gptel can be installed in Emacs out of the box with =M-x package-install= ⏎ =gptel=. This installs the latest commit.
If you want the stable version instead, add NonGNU-devel ELPA or MELPA-stable to your list of package sources (=package-archives=), then install gptel with =M-x package-install⏎= =gptel= from these sources.
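For example, a minimal sketch of adding MELPA-stable as a package source (archive URL as commonly documented; adjust if you prefer NonGNU-devel ELPA):
#+begin_src emacs-lisp
;; Sketch: add MELPA-stable to the package sources, then install gptel from it.
(require 'package)
(add-to-list 'package-archives
             '("melpa-stable" . "https://stable.melpa.org/packages/") t)
(package-refresh-contents)
#+end_src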
(Optional: Install =markdown-mode=.)
#+html:
**** Straight
#+begin_src emacs-lisp
(straight-use-package 'gptel)
#+end_src
Installing the =markdown-mode= package is optional.
**** Manual
Clone or download this repository and run =M-x package-install-file⏎= on the repository directory.
Installing the =markdown-mode= package is optional.
**** Doom Emacs
In =packages.el=
#+begin_src emacs-lisp
(package! gptel)
#+end_src
In =config.el=
#+begin_src emacs-lisp
(use-package! gptel
  :config
  (setq! gptel-api-key "your key"))
#+end_src
"your key" can be the API key itself, or (safer) a function that returns the key. Setting =gptel-api-key= is optional; you will be asked for a key if it's not found.
#+html: #+html:
**** Spacemacs
In your =.spacemacs= file, add =llm-client= to =dotspacemacs-configuration-layers=.
#+begin_src emacs-lisp
(llm-client :variables llm-client-enable-gptel t)
#+end_src
** Setup
*** ChatGPT
Optional: Set =gptel-api-key= to the key. Alternatively, you may choose a more secure method such as:
- Storing in =~/.authinfo=. By default, "api.openai.com" is used as HOST and "apikey" as USER.
  #+begin_src authinfo
  machine api.openai.com login apikey password TOKEN
  #+end_src
- Setting it to a function that returns the key.
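For the latter option, a minimal sketch that reuses the =~/.authinfo= entry shown above (=auth-source-pick-first-password= is part of Emacs' built-in auth-source library; adjust the host for other backends):
#+begin_src emacs-lisp
;; Sketch: look the key up from the auth-source at request time.
(require 'auth-source)
(setq gptel-api-key
      (lambda ()
        (auth-source-pick-first-password :host "api.openai.com" :user "apikey")))
#+end_src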
*** Other LLM backends
**** Azure
Register a backend with
#+begin_src emacs-lisp
(gptel-make-azure "Azure-1"             ;Name, whatever you'd like
  :protocol "https"                     ;Optional -- https is the default
  :host "YOUR_RESOURCE_NAME.openai.azure.com"
  :endpoint "/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15" ;or equivalent
  :stream t                             ;Enable streaming responses
  :key #'gptel-api-key
  :models '(gpt-3.5-turbo gpt-4))
#+end_src
Refer to the documentation of =gptel-make-azure= to set more parameters.
You can pick this backend from the menu when using gptel. (see [[#usage][Usage]]).
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'gpt-3.5-turbo
      gptel-backend
      (gptel-make-azure "Azure-1"
        :protocol "https"
        :host "YOUR_RESOURCE_NAME.openai.azure.com"
        :endpoint "/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15"
        :stream t
        :key #'gptel-api-key
        :models '(gpt-3.5-turbo gpt-4)))
#+end_src
#+html:
**** GPT4All
Register a backend with
#+begin_src emacs-lisp
(gptel-make-gpt4all "GPT4All"           ;Name of your choosing
  :protocol "http"
  :host "localhost:4891"                ;Where it's running
  :models '(mistral-7b-openorca.Q4_0.gguf)) ;Available models
#+end_src
These are the required parameters, refer to the documentation of =gptel-make-gpt4all= for more.
You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above. Additionally you may want to increase the response token size since GPT4All uses very short (often truncated) responses by default.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-max-tokens 500
      gptel-model 'mistral-7b-openorca.Q4_0.gguf
      gptel-backend
      (gptel-make-gpt4all "GPT4All"
        :protocol "http"
        :host "localhost:4891"
        :models '(mistral-7b-openorca.Q4_0.gguf)))
#+end_src
#+html:
#+html:
**** Ollama
Register a backend with
#+begin_src emacs-lisp
(gptel-make-ollama "Ollama"             ;Any name of your choosing
  :host "localhost:11434"               ;Where it's running
  :stream t                             ;Stream responses
  :models '(mistral:latest))            ;List of models
#+end_src
These are the required parameters, refer to the documentation of =gptel-make-ollama= for more.
You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'mistral:latest
      gptel-backend
      (gptel-make-ollama "Ollama"
        :host "localhost:11434"
        :stream t
        :models '(mistral:latest)))
#+end_src
#+html:
#+html:
**** Gemini
Register a backend with
#+begin_src emacs-lisp
;; :key can be a function that returns the API key.
(gptel-make-gemini "Gemini"
  :key "YOUR_GEMINI_API_KEY"
  :stream t)
#+end_src
These are the required parameters, refer to the documentation of =gptel-make-gemini= for more.
You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'gemini-pro
      gptel-backend
      (gptel-make-gemini "Gemini"
        :key "YOUR_GEMINI_API_KEY"
        :stream t))
#+end_src
#+html:
#+html:
**** Llama.cpp or Llamafile
(If using a llamafile, run a [[https://github.com/Mozilla-Ocho/llamafile#other-example-llamafiles][server llamafile]] instead of a "command-line llamafile", and a model that supports text generation.)
Register a backend with
#+begin_src emacs-lisp
;; Llama.cpp offers an OpenAI compatible API
(gptel-make-openai "llama-cpp"          ;Any name
  :stream t                             ;Stream responses
  :protocol "http"
  :host "localhost:8000"                ;Llama.cpp server location
  :models '(test))                      ;Any names, doesn't matter for Llama
#+end_src
These are the required parameters, refer to the documentation of =gptel-make-openai= for more.
You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'test
      gptel-backend
      (gptel-make-openai "llama-cpp"
        :stream t
        :protocol "http"
        :host "localhost:8000"
        :models '(test)))
#+end_src
#+html: #+html:
**** Kagi (FastGPT & Summarizer)
Kagi's FastGPT model and the Universal Summarizer are both supported. A couple of notes:
- Universal Summarizer: If there is a URL at point, the summarizer will summarize the contents of the URL. Otherwise the context sent to the model is the same as always: the buffer text up to point, or the contents of the region if the region is active.
- Kagi models do not support multi-turn conversations; interactions are "one-shot". They also do not support streaming responses.
Register a backend with
#+begin_src emacs-lisp
(gptel-make-kagi "Kagi"                 ;any name
  :key "YOUR_KAGI_API_KEY")             ;can be a function that returns the key
#+end_src
These are the required parameters, refer to the documentation of =gptel-make-kagi= for more.
You can pick this backend and the model (fastgpt/summarizer) from the transient menu when using gptel.
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'fastgpt
      gptel-backend (gptel-make-kagi "Kagi"
                      :key "YOUR_KAGI_API_KEY"))
#+end_src
The alternatives to =fastgpt= include =summarize:cecil=, =summarize:agnes=, =summarize:daphne= and =summarize:muriel=. The difference between the summarizer engines is [[https://help.kagi.com/kagi/api/summarizer.html#summarization-engines][documented here]].
#+html: #+html:
**** together.ai
Register a backend with
#+begin_src emacs-lisp
;; Together.ai offers an OpenAI compatible API
(gptel-make-openai "TogetherAI"         ;Any name you want
  :host "api.together.xyz"
  :key "your-api-key"                   ;can be a function that returns the key
  :stream t
  :models '(;; has many more, check together.ai
            mistralai/Mixtral-8x7B-Instruct-v0.1
            codellama/CodeLlama-13b-Instruct-hf
            codellama/CodeLlama-34b-Instruct-hf))
#+end_src
You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq
gptel-model 'mistralai/Mixtral-8x7B-Instruct-v0.1
gptel-backend
(gptel-make-openai "TogetherAI"
:host "api.together.xyz"
:key "your-api-key"
:stream t
:models '(;; has many more, check together.ai
mistralai/Mixtral-8x7B-Instruct-v0.1
codellama/CodeLlama-13b-Instruct-hf
codellama/CodeLlama-34b-Instruct-hf)))
#+end_src
#+html: #+html:
**** Anyscale
Register a backend with
#+begin_src emacs-lisp
;; Anyscale offers an OpenAI compatible API
(gptel-make-openai "Anyscale"           ;Any name you want
  :host "api.endpoints.anyscale.com"
  :key "your-api-key"                   ;can be a function that returns the key
  :models '(;; has many more, check anyscale
            mistralai/Mixtral-8x7B-Instruct-v0.1))
#+end_src
You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'mistralai/Mixtral-8x7B-Instruct-v0.1
      gptel-backend
      (gptel-make-openai "Anyscale"
        :host "api.endpoints.anyscale.com"
        :key "your-api-key"
        :models '(;; has many more, check anyscale
                  mistralai/Mixtral-8x7B-Instruct-v0.1)))
#+end_src
#+html: #+html:
**** Perplexity
Register a backend with
#+begin_src emacs-lisp
;; Perplexity offers an OpenAI compatible API
(gptel-make-openai "Perplexity"         ;Any name you want
  :host "api.perplexity.ai"
  :key "your-api-key"                   ;can be a function that returns the key
  :endpoint "/chat/completions"
  :stream t
  :models '(;; has many more, check perplexity.ai
            pplx-7b-chat pplx-70b-chat pplx-7b-online pplx-70b-online))
#+end_src
You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'pplx-7b-chat
      gptel-backend
      (gptel-make-openai "Perplexity"
        :host "api.perplexity.ai"
        :key "your-api-key"
        :endpoint "/chat/completions"
        :stream t
        :models '(;; has many more, check perplexity.ai
                  pplx-7b-chat pplx-70b-chat pplx-7b-online pplx-70b-online)))
#+end_src
#+html: #+html:
**** Anthropic (Claude)
Register a backend with
#+begin_src emacs-lisp
(gptel-make-anthropic "Claude"          ;Any name you want
  :stream t                             ;Streaming responses
  :key "your-api-key")
#+end_src
The =:key= can be a function that returns the key (more secure).
You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'claude-3-sonnet-20240229 ; "claude-3-opus-20240229" also available
      gptel-backend (gptel-make-anthropic "Claude"
                      :stream t
                      :key "your-api-key"))
#+end_src
#+html: #+html:
**** Groq
Register a backend with
#+begin_src emacs-lisp
;; Groq offers an OpenAI compatible API
(gptel-make-openai "Groq"               ;Any name you want
  :host "api.groq.com"
  :endpoint "/openai/v1/chat/completions"
  :stream t
  :key "your-api-key"                   ;can be a function that returns the key
  :models '(llama-3.1-70b-versatile
            llama-3.1-8b-instant
            llama3-70b-8192
            llama3-8b-8192
            mixtral-8x7b-32768
            gemma-7b-it))
#+end_src
You can pick this backend from the menu when using gptel (see [[#usage][Usage]]). Note that Groq is fast enough that you could easily set =:stream nil= and still get near-instant responses.
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'mixtral-8x7b-32768
      gptel-backend
      (gptel-make-openai "Groq"
        :host "api.groq.com"
        :endpoint "/openai/v1/chat/completions"
        :stream t
        :key "your-api-key"
        :models '(llama-3.1-70b-versatile
                  llama-3.1-8b-instant
                  llama3-70b-8192
                  llama3-8b-8192
                  mixtral-8x7b-32768
                  gemma-7b-it)))
#+end_src
#+html:
#+html:
**** OpenRouter
Register a backend with
#+begin_src emacs-lisp
;; OpenRouter offers an OpenAI compatible API
(gptel-make-openai "OpenRouter"         ;Any name you want
  :host "openrouter.ai"
  :endpoint "/api/v1/chat/completions"
  :stream t
  :key "your-api-key"                   ;can be a function that returns the key
  :models '(openai/gpt-3.5-turbo
            mistralai/mixtral-8x7b-instruct
            meta-llama/codellama-34b-instruct
            codellama/codellama-70b-instruct
            google/palm-2-codechat-bison-32k
            google/gemini-pro))
#+end_src
You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'mistralai/mixtral-8x7b-instruct
      gptel-backend
      (gptel-make-openai "OpenRouter"   ;Any name you want
        :host "openrouter.ai"
        :endpoint "/api/v1/chat/completions"
        :stream t
        :key "your-api-key"             ;can be a function that returns the key
        :models '(openai/gpt-3.5-turbo
                  mistralai/mixtral-8x7b-instruct
                  meta-llama/codellama-34b-instruct
                  codellama/codellama-70b-instruct
                  google/palm-2-codechat-bison-32k
                  google/gemini-pro)))
#+end_src
#+html: #+html:
**** PrivateGPT
Register a backend with
#+begin_src emacs-lisp
(gptel-make-privategpt "privateGPT"     ;Any name you want
  :protocol "http"
  :host "localhost:8001"
  :stream t
  :context t                            ;Use context provided by embeddings
  :sources t                            ;Return information about source documents
  :models '(private-gpt))
#+end_src
You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'private-gpt
      gptel-backend
      (gptel-make-privategpt "privateGPT" ;Any name you want
        :protocol "http"
        :host "localhost:8001"
        :stream t
        :context t                      ;Use context provided by embeddings
        :sources t                      ;Return information about source documents
        :models '(private-gpt)))
#+end_src
#+html: #+html:
**** DeepSeek
Register a backend with
#+begin_src emacs-lisp
;; DeepSeek offers an OpenAI compatible API
(gptel-make-openai "DeepSeek"           ;Any name you want
  :host "api.deepseek.com"
  :endpoint "/chat/completions"
  :stream t
  :key "your-api-key"                   ;can be a function that returns the key
  :models '(deepseek-chat deepseek-coder))
#+end_src
You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'deepseek-chat
      gptel-backend
      (gptel-make-openai "DeepSeek"     ;Any name you want
        :host "api.deepseek.com"
        :endpoint "/chat/completions"
        :stream t
        :key "your-api-key"             ;can be a function that returns the key
        :models '(deepseek-chat deepseek-coder)))
#+end_src
#+html: #+html:
**** Cerebras
Register a backend with
#+begin_src emacs-lisp
;; Cerebras offers an instant OpenAI compatible API
(gptel-make-openai "Cerebras"
  :host "api.cerebras.ai"
  :endpoint "/v1/chat/completions"
  :stream t                             ;optionally nil as Cerebras is instant AI
  :key "your-api-key"                   ;can be a function that returns the key
  :models '(llama3.1-70b llama3.1-8b))
#+end_src
You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'llama3.1-8b
      gptel-backend
      (gptel-make-openai "Cerebras"
        :host "api.cerebras.ai"
        :endpoint "/v1/chat/completions"
        :stream nil
        :key "your-api-key"
        :models '(llama3.1-70b llama3.1-8b)))
#+end_src
#+html: #+html:
**** Github Models
Register a backend with
#+begin_src emacs-lisp
;; Github Models offers an OpenAI compatible API
(gptel-make-openai "Github Models"      ;Any name you want
  :host "models.inference.ai.azure.com"
  :endpoint "/chat/completions?api-version=2024-05-01-preview"
  :stream t
  :key "your-github-token"
  :models '(gpt-4o))
#+end_src
You will need to create a GitHub [[https://github.com/settings/personal-access-tokens][token]].
For all the available models, check the [[https://github.com/marketplace/models][marketplace]].
You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'gpt-4o
      gptel-backend
      (gptel-make-openai "Github Models" ;Any name you want
        :host "models.inference.ai.azure.com"
        :endpoint "/chat/completions?api-version=2024-05-01-preview"
        :stream t
        :key "your-github-token"
        :models '(gpt-4o)))
#+end_src
#+html: #+html:
**** Novita AI
Register a backend with
#+begin_src emacs-lisp
;; Novita AI offers an OpenAI compatible API
(gptel-make-openai "NovitaAI"           ;Any name you want
  :host "api.novita.ai"
  :endpoint "/v3/openai"
  :key "your-api-key"                   ;can be a function that returns the key
  :stream t
  :models '(;; has many more, check https://novita.ai/llm-api
            gryphe/mythomax-l2-13b
            meta-llama/llama-3-70b-instruct
            meta-llama/llama-3.1-70b-instruct))
#+end_src
You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq
gptel-model 'gryphe/mythomax-l2-13b
gptel-backend
(gptel-make-openai "NovitaAI"
:host "api.novita.ai"
:endpoint "/v3/openai"
:key "your-api-key"
:stream t
:models '(;; has many more, check https://novita.ai/llm-api
gryphe/mythomax-l2-13b
meta-llama/llama-3-70b-instruct
meta-llama/llama-3.1-70b-instruct)))
#+end_src
#+html:
#+html:
**** xAI #+html:
Register a backend with
#+begin_src emacs-lisp
;; xAI offers an OpenAI compatible API
(gptel-make-openai "xAI"                ;Any name you want
  :host "api.x.ai"
  :key "your-api-key"                   ;can be a function that returns the key
  :endpoint "/v1/chat/completions"
  :stream t
  :models '(;; xAI now only offers grok-beta
            ;; as of the time of this writing
            grok-beta))
#+end_src
You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'grok-beta
      gptel-backend
      (gptel-make-openai "xAI"          ;Any name you want
        :host "api.x.ai"
        :key "your-api-key"             ;can be a function that returns the key
        :endpoint "/v1/chat/completions"
        :stream t
        :models '(;; xAI now only offers grok-beta
                  ;; as of the time of this writing
                  grok-beta)))
#+end_src
#+html:
** Usage
gptel provides a few powerful, general purpose and flexible commands. You can dynamically tweak their behavior to the needs of your task with /directives/, redirection options and more. There is a [[https://www.youtube.com/watch?v=bsRnh_brggM][video demo]] showing various uses of gptel -- but =gptel-send= might be all you need.
|-------------------+---------------------------------------------------------------------------------------------------|
| To send queries   | Description                                                                                         |
|-------------------+---------------------------------------------------------------------------------------------------|
| =gptel-send=      | Send all text up to =(point)=, or the selection if region is active. Works anywhere in Emacs.      |
| =gptel=           | Create a new dedicated chat buffer. Not required to use gptel.                                     |
| =gptel-rewrite=   | Rewrite, refactor or change the selected region. Can diff/ediff changes before merging/applying.   |
|-------------------+---------------------------------------------------------------------------------------------------|

|---------------------+---------------------------------------------------------------|
| To tweak behavior   |                                                               |
|---------------------+---------------------------------------------------------------|
| =C-u= =gptel-send=  | Transient menu for preferences, input/output redirection etc. |
| =gptel-menu=        | /(Same)/                                                      |
|---------------------+---------------------------------------------------------------|

|------------------+--------------------------------------------------------------------------------------------------------|
| To add context   |                                                                                                          |
|------------------+--------------------------------------------------------------------------------------------------------|
| =gptel-add=      | Add/remove a region or buffer to gptel's context. In Dired, add/remove marked files.                    |
| =gptel-add-file= | Add a file (text or supported media type) to gptel's context. Also available from the transient menu.   |
|------------------+--------------------------------------------------------------------------------------------------------|

|----------------------------+-----------------------------------------------------------------------------------------|
| Org mode bonuses           |                                                                                           |
|----------------------------+-----------------------------------------------------------------------------------------|
| =gptel-org-set-topic=      | Limit conversation context to an Org heading. (For branching conversations see below.)   |
| =gptel-org-set-properties= | Write gptel configuration as Org properties, for per-heading chat configuration.         |
|----------------------------+-----------------------------------------------------------------------------------------|
*** In any buffer:
- Call =M-x gptel-send= to send the text up to the cursor. The response will be inserted below. Continue the conversation by typing below the response.
- If a region is selected, the conversation will be limited to its contents.
- Call =M-x gptel-send= with a prefix argument (=C-u=)
  - to set chat parameters (GPT model, backend, system message etc) for this buffer,
  - include quick instructions for the next request only,
  - to add additional context -- regions, buffers or files -- to gptel,
  - to read the prompt from or redirect the response elsewhere,
  - or to replace the prompt with the response.
*** In a dedicated chat buffer:
Note: gptel works anywhere in Emacs. The dedicated chat buffer only adds some conveniences.
- Run =M-x gptel= to start or switch to the chat buffer. It will ask you for the key if you skipped the previous step. Run it with a prefix-arg (=C-u M-x gptel=) to start a new session.
- In the gptel buffer, send your prompt with =M-x gptel-send=, bound to =C-c RET=.
- Set chat parameters (LLM provider, model, directives etc) for the session by calling =gptel-send= with a prefix argument (=C-u C-c RET=):
That's it. You can go back and edit previous prompts and responses if you want.
The default mode is =markdown-mode= if available, else =text-mode=. You can set =gptel-default-mode= to =org-mode= if desired.
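For example:
#+begin_src emacs-lisp
;; Use Org mode for dedicated chat buffers.
(setq gptel-default-mode 'org-mode)
#+end_src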
#+html:
**** Including media (images, documents) with requests #+html:
gptel supports sending media in Markdown and Org chat buffers, but this feature is disabled by default.
- You can enable it globally, for all models that support it, by setting =gptel-track-media=.
- Or you can set it locally, just for the chat buffer, via the header line:
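To enable it globally (the first option above) from your configuration:
#+begin_src emacs-lisp
;; Send media linked in chat buffers, for models that support it.
(setq gptel-track-media t)
#+end_src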
There are two ways to include media with requests:
- Adding media files to the context with =gptel-add-file=, described further below.
- Including links to media in chat buffers, described here:
To send media -- images or other supported file types -- with requests in chat buffers, you can include links to them in the chat buffer. Such a link must be "standalone", i.e. on a line by itself surrounded by whitespace.
In Org mode, for example, the following are all valid ways of including an image with the request:
- "Standalone" file link:
  #+begin_src
  Describe this picture

  [[file:/path/to/screenshot.png]]

  Focus specifically on the text content.
  #+end_src
- "Standalone" file link with description:
  #+begin_src
  Describe this picture

  [[file:/path/to/screenshot.png][some picture]]

  Focus specifically on the text content.
  #+end_src
- "Standalone", angle file link:
  #+begin_src
  Describe this picture

  <file:/path/to/screenshot.png>

  Focus specifically on the text content.
  #+end_src
The following links are not valid, and the text of the link will be sent instead of the file contents:
- Inline link:
  #+begin_src
  Describe this [[file:/path/to/screenshot.png][picture]].

  Focus specifically on the text content.
  #+end_src
- Link not "standalone":
  #+begin_src
  Describe this picture: [[file:/path/to/screenshot.png]]
  Focus specifically on the text content.
  #+end_src
- Not a valid Org link:
  #+begin_src
  Describe the picture

  file:/path/to/screenshot.png
  #+end_src
Similar criteria apply to Markdown chat buffers.
#+html: #+html:
**** Save and restore your chat sessions #+html:
Saving the file will save the state of the conversation as well. To resume the chat, open the file and turn on =gptel-mode= before editing the buffer.
*** Setting options (backend, model, request parameters, system prompts and more)
Most gptel options can be set from gptel's transient menu, available by calling =gptel-send= with a prefix-argument, or via =gptel-menu=. To change their default values in your configuration, see [[#additional-configuration][Additional Configuration]]. Chat buffer-specific options are also available via the header-line in chat buffers.
Selecting a model and backend can be done interactively via the =-m= command of =gptel-menu=. Available registered models are prefixed by the name of their backend with a string like =ChatGPT:gpt-4o-mini=, where =ChatGPT= is the backend name you used to register it and =gpt-4o-mini= is the name of the model.
*** Include more context with requests
By default, gptel will query the LLM with the active region or the buffer contents up to the cursor. Often it can be helpful to provide the LLM with additional context from outside the current buffer. For example, when you're in a chat buffer but want to ask questions about a (possibly changing) code buffer and auxiliary project files.
You can include additional text regions, buffers or files with gptel's queries. This additional context is "live" and not a snapshot. Once added, the regions, buffers or files are scanned and included at the time of each query. When using multi-modal models, added files can be of any supported type -- typically images.
You can add a selected region, buffer or file to gptel's context from the menu, or call =gptel-add=. To add a file use =gptel-add= in Dired, or use the dedicated =gptel-add-file= command. Directories will have their files added recursively after prompting for confirmation.
You can examine the active context from the menu: #+html: <img src="https://github.com/karthink/gptel/assets/8607532/63cd7fc8-6b3e-42ae-b6ca-06ff935bae9c" align="center" alt="Image showing gptel's menu with the "inspect context" command.">
And then browse through or remove context from the context buffer:
#+html:
*** Tool use (experimental)
gptel can provide the LLM with client-side elisp "tools", or function specifications, along with the request. If the LLM decides to run the tool, it supplies the tool call arguments, which gptel uses to run the tool in your Emacs session. The result is optionally returned to the LLM to complete the task.
This exchange can be used to equip the LLM with capabilities or knowledge beyond what is available out of the box -- for instance, you can get the LLM to control your Emacs frame, create or modify files and directories, or look up information relevant to your request via web search or in a local database. Here is a very simple example:
#+html:
https://github.com/user-attachments/assets/d1f8e2ac-62bb-49bc-850d-0a67aa0cd4c3 #+html:
This feature is currently experimental.
To use tools in gptel, you need
- a model that supports this usage. All the flagship models support tool use, as do many of the smaller open models.
- Tool specifications that gptel understands. gptel does not currently include any tools out of the box.
#+html:
**** Defining gptel tools #+html:
Defining a gptel tool requires an elisp function and associated metadata. Here are two simple tool definitions:
To read the contents of an Emacs buffer:
#+begin_src emacs-lisp
(gptel-make-tool
 :name "read_buffer"                    ; javascript-style snake_case name
 :function (lambda (buffer)             ; the function that will run
             (unless (buffer-live-p (get-buffer buffer))
               (error "error: buffer %s is not live." buffer))
             (with-current-buffer buffer
               (buffer-substring-no-properties (point-min) (point-max))))
 :description "return the contents of an emacs buffer"
 :args (list '(:name "buffer"
               :type string             ; :type value must be a symbol
               :description "the name of the buffer whose contents are to be retrieved"))
 :category "emacs")                     ; An arbitrary label for grouping
#+end_src
Besides the function itself, which can be named or anonymous (as above), the tool specification requires a =:name=, =:description= and a list of argument specifications in =:args=. Each argument specification is a plist with at least the keys =:name=, =:type= and =:description=.
To create a text file:
#+begin_src emacs-lisp
(gptel-make-tool
 :name "create_file"                    ; javascript-style snake_case name
 :function (lambda (path filename content) ; the function that runs
             (let ((full-path (expand-file-name filename path)))
               (with-temp-buffer
                 (insert content)
                 (write-file full-path))
               (format "Created file %s in %s" filename path)))
 :description "Create a new file with the specified content"
 :args (list '(:name "path"             ; a list of argument specifications
               :type string
               :description "The directory where to create the file")
             '(:name "filename"
               :type string
               :description "The name of the file to create")
             '(:name "content"
               :type string
               :description "The content to write to the file"))
 :category "filesystem")                ; An arbitrary label for grouping
#+end_src
With some prompting, you can get an LLM to write these tools for you.
Tools can also be asynchronous, use optional arguments and arguments with more structure (enums, arrays, objects etc). See =gptel-make-tool= for details.
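For instance, here is a sketch of an asynchronous tool. The tool itself (=fetch_url=) is hypothetical, and the sketch assumes the =:async= calling convention described in =gptel-make-tool='s documentation, where the tool function receives a callback as its first argument and reports its result by calling it:
#+begin_src emacs-lisp
;; Illustrative sketch of an asynchronous tool (hypothetical "fetch_url").
(require 'url)
(gptel-make-tool
 :name "fetch_url"
 :async t                               ; assumed: callback is the first argument
 :function (lambda (callback url)
             ;; Fetch asynchronously, then hand the body to CALLBACK.
             (url-retrieve url
                           (lambda (_status)
                             (goto-char (point-min))
                             ;; Skip the HTTP response headers.
                             (when (re-search-forward "\r?\n\r?\n" nil t)
                               (funcall callback
                                        (buffer-substring-no-properties
                                         (point) (point-max)))))))
 :description "Fetch the contents of a URL and return them as text"
 :args (list '(:name "url"
               :type string
               :description "The URL to fetch"))
 :category "web")
#+end_src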
**** Selecting tools
Once defined, tools can be selected (globally, buffer-locally or for the next request only) from gptel's transient menu:
From here you can also require confirmation for all tool calls, and decide if tool call results should be included in the LLM response. See [[#additional-configuration][Additional Configuration]] for doing these things via elisp.
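A sketch of the corresponding elisp, using the variables listed under Additional Configuration (the exact values shown here are assumptions; check each variable's documentation for the accepted options):
#+begin_src emacs-lisp
;; Illustrative defaults: allow tool use, ask before running each tool,
;; and include tool results in the LLM response.
(setq gptel-use-tools t
      gptel-confirm-tool-calls t        ; assumed value; may also accept other options
      gptel-include-tool-results t)     ; assumed value; may also accept other options
#+end_src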
*** Rewrite, refactor or fill in a region
In any buffer: with a region selected, you can modify text, rewrite prose or refactor code with =gptel-rewrite=. Example with prose:
#+html:
https://github.com/user-attachments/assets/e3b436b3-9bde-4c1f-b2ce-3f7df1984933 #+html:
The result is previewed over the original text. By default, the buffer is not modified.
Pressing =RET= or clicking in the rewritten region should give you a list of options: you can iterate on, diff, ediff, merge or accept the replacement. Example with code:
#+html:
https://github.com/user-attachments/assets/4067fdb8-85d3-4264-9b64-d727353f68f9 #+html:
Acting on the LLM response:
If you would like one of these things to happen automatically, you can customize =gptel-rewrite-default-action=.
These options are also available from =gptel-rewrite=:
And you can call them directly when the cursor is in the rewritten region:
*** Extra Org mode conveniences
gptel offers a few extra conveniences in Org mode.
***** Limit conversation context to an Org heading
You can limit the conversation context to an Org heading with the command =gptel-org-set-topic=.
(This sets an Org property (=GPTEL_TOPIC=) under the heading. You can also add this property manually instead.)
***** Use branching context in Org mode (tree of conversations)
You can have branching conversations in Org mode, where each hierarchical outline path through the document is a separate conversation branch. This is also useful for limiting the context size of each query. See the variable =gptel-org-branching-context=.
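To turn it on, something like the following should suffice (the variable is described under Additional Configuration):
#+begin_src emacs-lisp
;; Treat each Org outline path as a separate conversation branch.
(setq gptel-org-branching-context t)
#+end_src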
If this variable is non-nil, you should probably edit =gptel-prompt-prefix-alist= and =gptel-response-prefix-alist= so that the prefix strings for org-mode are not Org headings, e.g.
#+begin_src emacs-lisp
(setf (alist-get 'org-mode gptel-prompt-prefix-alist) "@user\n")
(setf (alist-get 'org-mode gptel-response-prefix-alist) "@assistant\n")
#+end_src
Otherwise, the default prompt prefix will make successive prompts sibling headings, and therefore on different conversation branches, which probably isn't what you want.
Note: using this option requires Org 9.6.7 or higher to be available. The [[https://github.com/ultronozm/ai-org-chat.el][ai-org-chat]] package uses gptel to provide this branching conversation behavior for older versions of Org.
***** Save gptel parameters to Org headings (reproducible chats)
You can declare the gptel model, backend, temperature, system message and other parameters as Org properties with the command =gptel-org-set-properties=. gptel queries under the corresponding heading will always use these settings, allowing you to create mostly reproducible LLM chat notebooks, and to have simultaneous chats with different models, model settings and directives under different Org headings.
** FAQ #+html:
**** I want to use gptel in a way that's not supported by =gptel-send= or the options menu #+html:
gptel's default usage pattern is simple, and will stay this way: Read input in any buffer and insert the response below it. Some custom behavior is possible with the transient menu (=C-u M-x gptel-send=).
For more programmable usage, gptel provides a general =gptel-request= function that accepts a custom prompt and a callback to act on the response. You can use this to build custom workflows not supported by =gptel-send=. See the documentation of =gptel-request=, and the [[https://github.com/karthink/gptel/wiki/Defining-custom-gptel-commands][wiki]] for examples.
#+html: #+html:
**** I want the window to scroll automatically as the response is inserted #+html:
To be minimally annoying, gptel does not move the cursor by default. Add the following to your configuration to enable auto-scrolling.
#+begin_src emacs-lisp
(add-hook 'gptel-post-stream-hook 'gptel-auto-scroll)
#+end_src
#+html: #+html:
**** I want the cursor to move to the next prompt after the response is inserted #+html:
To be minimally annoying, gptel does not move the cursor by default. Add the following to your configuration to move the cursor:
#+begin_src emacs-lisp
(add-hook 'gptel-post-response-functions 'gptel-end-of-response)
#+end_src
You can also call =gptel-end-of-response= as a command at any time.
#+html: #+html:
**** I want to change the formatting of the prompt and LLM response #+html:
For dedicated chat buffers: customize =gptel-prompt-prefix-alist= and =gptel-response-prefix-alist=. You can set a different pair for each major-mode.
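For example, a sketch of heavier demarcation in Markdown chat buffers (the prefix strings here are illustrative, not defaults):
#+begin_src emacs-lisp
;; Illustrative prefixes for markdown-mode chat buffers.
(setf (alist-get 'markdown-mode gptel-prompt-prefix-alist) "#### Prompt\n")
(setf (alist-get 'markdown-mode gptel-response-prefix-alist) "#### Response\n")
#+end_src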
Anywhere in Emacs: Use =gptel-pre-response-hook= and =gptel-post-response-functions=, which see.
#+html: #+html:
**** I want the transient menu options to be saved so I only need to set them once #+html:
Any model options you set are saved for the current buffer. But the redirection options in the menu are set for the next query only:
You can make them persistent across this Emacs session by pressing C-x C-s:
(You can also cycle through presets you've saved with C-x p and C-x n.)
Now these will be enabled whenever you send a query from the transient menu. If you want to use these saved options without invoking the transient menu, you can use a keyboard macro:
#+begin_src emacs-lisp
;; Replace with your key to invoke the transient menu:
(keymap-global-set "" "C-u C-c ")
#+end_src
Or see this [[https://github.com/karthink/gptel/wiki/Commonly-requested-features#save-transient-flags][wiki entry]].
#+html: #+html:
**** Can I change the transient menu key bindings? #+html:
Yes, see =transient-suffix-put=. This changes the key to select a backend/model from "-m" to "M" in gptel's menu:
#+begin_src emacs-lisp
(transient-suffix-put 'gptel-menu (kbd "-m") :key "M")
#+end_src
#+html: #+html:
**** How does gptel distinguish between user prompts and LLM responses? #+html:
gptel uses [[https://www.gnu.org/software/emacs/manual/html_node/elisp/Text-Properties.html][text-properties]] to watermark LLM responses. Thus this text is interpreted as a response even if you copy it into another buffer. In regular buffers (buffers without =gptel-mode= enabled), you can turn off this tracking by unsetting =gptel-track-response=.
When restoring a chat state from a file on disk, gptel will apply these properties from saved metadata in the file when you turn on =gptel-mode=.
gptel does /not/ use any prefix or semantic/syntax element in the buffer (such as headings) to separate prompts and responses. The reason for this is that gptel aims to integrate as seamlessly as possible into your regular Emacs usage: LLM interaction is not the objective, it's just another tool at your disposal. So requiring a bunch of "user" and "assistant" tags in the buffer is noisy and restrictive. If you want these demarcations, you can customize =gptel-prompt-prefix-alist= and =gptel-response-prefix-alist=. Note that these prefixes are for your readability only and purely cosmetic.
#+html: #+html:
**** (Doom Emacs) Sending a query from the gptel menu fails because of a key conflict with Org mode #+html:
Doom binds RET in Org mode to =+org/dwim-at-point=, which appears to conflict with gptel's transient menu bindings for some reason.
Two solutions:
- Press =C-m= instead of the return key.
- Change the send key from return to a key of your choice:
  #+begin_src emacs-lisp
  (transient-suffix-put 'gptel-menu (kbd "RET") :key "")
  #+end_src
#+html: #+html:
**** (ChatGPT) I get the error "(HTTP/2 429) You exceeded your current quota" #+html:
#+begin_quote
(HTTP/2 429) You exceeded your current quota, please check your plan and billing details.
#+end_quote
Using the ChatGPT (or any OpenAI) API requires [[https://platform.openai.com/account/billing/overview][adding credit to your account]].
#+html: #+html:
**** Why another LLM client? #+html:
Other Emacs clients for LLMs prescribe the format of the interaction (a comint shell, org-babel blocks, etc). I wanted:
- Something that is as free-form as possible: query the model using any text in any buffer, and redirect the response as required. Using a dedicated =gptel= buffer just adds some visual flair to the interaction.
- Integration with org-mode, not using a walled-off org-babel block, but as regular text. This way the model can generate code blocks that I can run.
#+html: #+html:
** Additional Configuration
:PROPERTIES:
:ID: f885adac-58a3-4eba-a6b7-91e9e7a17829
:END:
#+begin_src emacs-lisp :exports none :results list
(let ((all))
  (mapatoms (lambda (sym)
              (when (and (string-match-p "^gptel-[^-]" (symbol-name sym))
                         (get sym 'variable-documentation))
                (push sym all))))
  all)
#+end_src
|--------------------+---------------------------------------------------------------------|
| Connection options |                                                                     |
|--------------------+---------------------------------------------------------------------|
| =gptel-use-curl=   | Use Curl (default), fallback to Emacs' built-in =url=.              |
| =gptel-proxy=      | Proxy server for requests, passed to curl via =--proxy=.            |
| =gptel-api-key=    | Variable/function that returns the API key for the active backend.  |
|--------------------+---------------------------------------------------------------------|
|---------------------+----------------------------------------------------------|
| LLM request options | /(Note: not supported uniformly across LLMs)/            |
|---------------------+----------------------------------------------------------|
| =gptel-backend=     | Default LLM Backend.                                     |
| =gptel-model=       | Default model to use, depends on the backend.            |
| =gptel-stream=      | Enable streaming responses, if the backend supports it.  |
| =gptel-directives=  | Alist of system directives, can switch on the fly.       |
| =gptel-max-tokens=  | Maximum token count (in query + response).               |
| =gptel-temperature= | Randomness in response text, 0 to 2.                     |
| =gptel-use-context= | How/whether to include additional context                |
| =gptel-use-tools=   | Disable, allow or force LLM tool-use                     |
| =gptel-tools=       | List of tools to include with requests                   |
|---------------------+----------------------------------------------------------|
|-------------------------------+-----------------------------------------------------------------|
| Chat UI options               |                                                                 |
|-------------------------------+-----------------------------------------------------------------|
| =gptel-default-mode=          | Major mode for dedicated chat buffers.                          |
| =gptel-prompt-prefix-alist=   | Text inserted before queries.                                   |
| =gptel-response-prefix-alist= | Text inserted before responses.                                 |
| =gptel-track-response=        | Distinguish between user messages and LLM responses?            |
| =gptel-track-media=           | Send images or other media from links?                          |
| =gptel-confirm-tool-calls=    | Confirm all tool calls?                                         |
| =gptel-include-tool-results=  | Include tool results in the LLM response?                       |
| =gptel-use-header-line=       | Display status messages in header-line (default) or minibuffer  |
| =gptel-display-buffer-action= | Placement of the gptel chat buffer.                             |
|-------------------------------+-----------------------------------------------------------------|
|-------------------------------+-------------------------------------------------------|
| Org mode UI options           |                                                       |
|-------------------------------+-------------------------------------------------------|
| =gptel-org-branching-context= | Make each outline path a separate conversation branch |
|-------------------------------+-------------------------------------------------------|
|---------------------------------+--------------------------------------------------------------|
| Hooks for customization         |                                                              |
|---------------------------------+--------------------------------------------------------------|
| =gptel-save-state-hook=         | Runs before saving the chat state to a file on disk         |
| =gptel-pre-response-hook=       | Runs before inserting the LLM response into the buffer      |
| =gptel-post-response-functions= | Runs after inserting the full LLM response into the buffer  |
| =gptel-post-stream-hook=        | Runs after each streaming insertion                         |
| =gptel-context-wrap-function=   | To include additional context formatted your way            |
| =gptel-rewrite-default-action=  | Automatically diff, ediff, merge or replace refactored text |
|---------------------------------+--------------------------------------------------------------|
#+html:
** COMMENT Will you add feature X?
Maybe, I'd like to experiment a bit more first. Features added since the inception of this package include
- Curl support (=gptel-use-curl=)
- Streaming responses (=gptel-stream=)
- Cancelling requests in progress (=gptel-abort=)
- General API for writing your own commands (=gptel-request=, [[https://github.com/karthink/gptel/wiki/Defining-custom-gptel-commands][wiki]])
- Dispatch menus using Transient (=gptel-send= with a prefix arg)
- Specifying the conversation context size
- GPT-4 support
- Response redirection (to the echo area, another buffer, etc)
- A built-in refactor/rewrite prompt
- Limiting conversation context to Org headings using properties (#58)
- Saving and restoring chats (#17)
- Support for local LLMs.
Features being considered or in the pipeline:
- Fully stateless design (#17)
** Alternatives
Other Emacs clients for LLMs include
- [[https://github.com/ahyatt/llm][llm]]: llm provides a uniform API across language model providers for building LLM clients in Emacs, and is intended as a library for use by package authors. For similar scripting purposes, gptel provides the command =gptel-request=, which see.
- [[https://github.com/s-kostyaev/ellama][Ellama]]: A full-fledged LLM client built on llm, that supports many LLM providers (Ollama, Open AI, Vertex, GPT4All and more). Its usage differs from gptel in that it provides separate commands for dozens of common tasks, like general chat, summarizing code/text, refactoring code, improving grammar, translation and so on.
- [[https://github.com/xenodium/chatgpt-shell][chatgpt-shell]]: comint-shell based interaction with ChatGPT. Also supports DALL-E, executable code blocks in the responses, and more.
- [[https://github.com/rksm/org-ai][org-ai]]: Interaction through special =#+begin_ai ... #+end_ai= Org-mode blocks. Also supports DALL-E, querying ChatGPT with the contents of project files, and more.
There are several more: [[https://github.com/MichaelBurge/leafy-mode][leafy-mode]], [[https://github.com/iwahbe/chat.el][chat.el]], [[https://github.com/stuhlmueller/gpt.el][gpt.el]], [[https://github.com/AnselmC/le-gpt.el][le-gpt]], [[https://github.com/stevemolitor/robby][robby]].
*** Packages using gptel
gptel is a general-purpose package for chat and ad-hoc LLM interaction. The following packages use gptel to provide additional or specialized functionality:
- [[https://github.com/karthink/gptel-quick][gptel-quick]]: Quickly look up the region or text at point.
- [[https://github.com/daedsidog/evedel][Evedel]]: Instructed LLM Programmer/Assistant
- [[https://github.com/lanceberge/elysium][Elysium]]: Automatically apply AI-generated changes as you code
- [[https://github.com/kamushadenes/ai-blog.el][ai-blog.el]]: Streamline generation of blog posts in Hugo
- [[https://github.com/douo/magit-gptcommit][magit-gptcommit]]: Generate Commit Messages within magit-status Buffer using gptel
- [[https://github.com/armindarvish/consult-omni][consult-omni]]: Versatile multi-source search package. It includes gptel as one of its many sources.
- [[https://github.com/ultronozm/ai-org-chat.el][ai-org-chat]]: Provides branching conversations in Org buffers using gptel. (Note that gptel includes this feature as well (see =gptel-org-branching-context=), but requires a recent version of Org mode (9.6.7 or later) to be installed.)
- [[https://github.com/rob137/Corsair][Corsair]]: Helps gather text to populate LLM prompts for gptel.
** COMMENT Older Breaking Changes
- =gptel-post-response-hook= has been renamed to =gptel-post-response-functions=, and functions in this hook are now called with two arguments: the start and end buffer positions of the response. This should make it easy to act on the response text without having to locate it first.
- Possible breakage, see #120: If streaming responses stop working for you after upgrading to v0.5, try reinstalling gptel and deleting its native comp eln cache in =native-comp-eln-load-path=.
- The user option =gptel-host= is deprecated. If the defaults don't work for you, use =gptel-make-openai= (which see) to customize server settings.
- =gptel-api-key-from-auth-source= now searches for the API key using the host address for the active LLM backend, /i.e./ "api.openai.com" when using ChatGPT. You may need to update your =~/.authinfo=.
** Acknowledgments
- [[https://github.com/meain][Abin Simon]] for extensive feedback on improving gptel's directives and UI.
- [[https://github.com/algal][Alexis Gallagher]] and [[https://github.com/d1egoaz][Diego Alvarez]] for fixing a nasty multi-byte bug with =url-retrieve=.
- [[https://github.com/tarsius][Jonas Bernoulli]] for the Transient library.
- [[https://github.com/daedsidog][daedsidog]] for adding context support to gptel.
- [[https://github.com/Aquan1412][Aquan1412]] for adding PrivateGPT support to gptel.
- [[https://github.com/r0man][r0man]] for improving gptel's Curl integration.

lmnr
Laminar is an all-in-one open-source platform designed for engineering AI products. It allows users to trace, evaluate, label, and analyze LLM data efficiently. The platform offers features such as automatic tracing of common AI frameworks and SDKs, local and online evaluations, simple UI for data labeling, dataset management, and scalability with gRPC communication. Laminar is built with a modern open-source stack including RabbitMQ, Postgres, Clickhouse, and Qdrant for semantic similarity search. It provides fast and beautiful dashboards for traces, evaluations, and labels, making it a comprehensive tool for AI product development.

efficient-transformers
Efficient Transformers Library provides reimplemented blocks of Large Language Models (LLMs) to make models functional and highly performant on Qualcomm Cloud AI 100. It includes graph transformations, handling for under-flows and overflows, patcher modules, exporter module, sample applications, and unit test templates. The library supports seamless inference on pre-trained LLMs with documentation for model optimization and deployment. Contributions and suggestions are welcome, with a focus on testing changes for model support and common utilities.

gemini-next-chat
Gemini Next Chat is an open-source, extensible high-performance Gemini chatbot framework that supports one-click free deployment of private Gemini web applications. It provides a simple interface with image recognition and voice conversation, supports multi-modal models, talk mode, visual recognition, assistant market, support plugins, conversation list, full Markdown support, privacy and security, PWA support, well-designed UI, fast loading speed, static deployment, and multi-language support.

MLE-agent
MLE-Agent is an intelligent companion designed for machine learning engineers and researchers. It features autonomous baseline creation, integration with Arxiv and Papers with Code, smart debugging, file system organization, comprehensive tools integration, and an interactive CLI chat interface for seamless AI engineering and research workflows.

LocalAI
LocalAI is a free and open-source OpenAI alternative that acts as a drop-in replacement REST API compatible with OpenAI (Elevenlabs, Anthropic, etc.) API specifications for local AI inferencing. It allows users to run LLMs, generate images, audio, and more locally or on-premises with consumer-grade hardware, supporting multiple model families and not requiring a GPU. LocalAI offers features such as text generation with GPTs, text-to-audio, audio-to-text transcription, image generation with stable diffusion, OpenAI functions, embeddings generation for vector databases, constrained grammars, downloading models directly from Huggingface, and a Vision API. It provides a detailed step-by-step introduction in its Getting Started guide and supports community integrations such as custom containers, WebUIs, model galleries, and various bots for Discord, Slack, and Telegram. LocalAI also offers resources like an LLM fine-tuning guide, instructions for local building and Kubernetes installation, projects integrating LocalAI, and a how-tos section curated by the community. It encourages users to cite the repository when utilizing it in downstream projects and acknowledges the contributions of various software from the community.

vim-airline
Vim-airline is a lean and mean status/tabline plugin for Vim that provides a nice statusline at the bottom of each Vim window. It consists of several sections displaying information such as mode, environment status, filename, filetype, file encoding, and current position in the file. The plugin is highly customizable and integrates with various plugins, providing a tiny core with extensibility in mind. It is optimized for speed, supports multiple themes, and integrates seamlessly with other plugins. Vim-airline is written in 100% Vimscript, eliminating the need for Python. The plugin aims to be stable and includes a unit testing suite for reliability.

Chat2DB
Chat2DB is an AI-driven data development and analysis platform that enables users to communicate with databases using natural language. It supports a wide range of databases, including MySQL, PostgreSQL, Oracle, SQLServer, SQLite, MariaDB, ClickHouse, DM, Presto, DB2, OceanBase, Hive, KingBase, MongoDB, Redis, and Snowflake. Chat2DB provides a user-friendly interface that allows users to query databases, generate reports, and explore data using natural language commands. It also offers a variety of features to help users improve their productivity, such as auto-completion, syntax highlighting, and error checking.
For similar tasks

LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

onnxruntime-genai
ONNX Runtime Generative AI is a library that provides the generative AI loop for ONNX models, including inference with ONNX Runtime, logits processing, search and sampling, and KV cache management. Users can call a high level `generate()` method, or run each iteration of the model in a loop. It supports greedy/beam search and TopP, TopK sampling to generate token sequences, has built in logits processing like repetition penalties, and allows for easy custom scoring.

jupyter-ai
Jupyter AI connects generative AI with Jupyter notebooks. It provides a user-friendly and powerful way to explore generative AI models in notebooks and improve your productivity in JupyterLab and the Jupyter Notebook. Specifically, Jupyter AI offers: * An `%%ai` magic that turns the Jupyter notebook into a reproducible generative AI playground. This works anywhere the IPython kernel runs (JupyterLab, Jupyter Notebook, Google Colab, Kaggle, VSCode, etc.). * A native chat UI in JupyterLab that enables you to work with generative AI as a conversational assistant. * Support for a wide range of generative model providers, including AI21, Anthropic, AWS, Cohere, Gemini, Hugging Face, NVIDIA, and OpenAI. * Local model support through GPT4All, enabling use of generative AI models on consumer grade machines with ease and privacy.

khoj
Khoj is an open-source, personal AI assistant that extends your capabilities by creating always-available AI agents. You can share your notes and documents to extend your digital brain, and your AI agents have access to the internet, allowing you to incorporate real-time information. Khoj is accessible on Desktop, Emacs, Obsidian, Web, and Whatsapp, and you can share PDF, markdown, org-mode, notion files, and GitHub repositories. You'll get fast, accurate semantic search on top of your docs, and your agents can create deeply personal images and understand your speech. Khoj is self-hostable and always will be.

langchain_dart
LangChain.dart is a Dart port of the popular LangChain Python framework created by Harrison Chase. LangChain provides a set of ready-to-use components for working with language models and a standard interface for chaining them together to formulate more advanced use cases (e.g. chatbots, Q&A with RAG, agents, summarization, extraction, etc.). The components can be grouped into a few core modules: * **Model I/O:** LangChain offers a unified API for interacting with various LLM providers (e.g. OpenAI, Google, Mistral, Ollama, etc.), allowing developers to switch between them with ease. Additionally, it provides tools for managing model inputs (prompt templates and example selectors) and parsing the resulting model outputs (output parsers). * **Retrieval:** assists in loading user data (via document loaders), transforming it (with text splitters), extracting its meaning (using embedding models), storing (in vector stores) and retrieving it (through retrievers) so that it can be used to ground the model's responses (i.e. Retrieval-Augmented Generation or RAG). * **Agents:** "bots" that leverage LLMs to make informed decisions about which available tools (such as web search, calculators, database lookup, etc.) to use to accomplish the designated task. The different components can be composed together using the LangChain Expression Language (LCEL).

danswer
Danswer is an open-source Gen-AI Chat and Unified Search tool that connects to your company's docs, apps, and people. It provides a Chat interface and plugs into any LLM of your choice. Danswer can be deployed anywhere and for any scale - on a laptop, on-premise, or to cloud. Since you own the deployment, your user data and chats are fully in your own control. Danswer is MIT licensed and designed to be modular and easily extensible. The system also comes fully ready for production usage with user authentication, role management (admin/basic users), chat persistence, and a UI for configuring Personas (AI Assistants) and their Prompts. Danswer also serves as a Unified Search across all common workplace tools such as Slack, Google Drive, Confluence, etc. By combining LLMs and team specific knowledge, Danswer becomes a subject matter expert for the team. Imagine ChatGPT if it had access to your team's unique knowledge! It enables questions such as "A customer wants feature X, is this already supported?" or "Where's the pull request for feature Y?"

infinity
Infinity is an AI-native database designed for LLM applications, providing incredibly fast full-text and vector search capabilities. It supports a wide range of data types, including vectors, full-text, and structured data, and offers a fused search feature that combines multiple embeddings and full text. Infinity is easy to use, with an intuitive Python API and a single-binary architecture that simplifies deployment. It achieves high performance, with 0.1 milliseconds query latency on million-scale vector datasets and up to 15K QPS.
For similar jobs

h2ogpt
h2oGPT is an Apache V2 open-source project that allows users to query and summarize documents or chat with local private GPT LLMs. It features a private offline database of any documents (PDFs, Excel, Word, Images, Video Frames, Youtube, Audio, Code, Text, MarkDown, etc.), a persistent database (Chroma, Weaviate, or in-memory FAISS) using accurate embeddings (instructor-large, all-MiniLM-L6-v2, etc.), and efficient use of context using instruct-tuned LLMs (no need for LangChain's few-shot approach). h2oGPT also offers parallel summarization and extraction, reaching an output of 80 tokens per second with the 13B LLaMa2 model, HYDE (Hypothetical Document Embeddings) for enhanced retrieval based upon LLM responses, a variety of models supported (LLaMa2, Mistral, Falcon, Vicuna, WizardLM. With AutoGPTQ, 4-bit/8-bit, LORA, etc.), GPU support from HF and LLaMa.cpp GGML models, and CPU support using HF, LLaMa.cpp, and GPT4ALL models. Additionally, h2oGPT provides Attention Sinks for arbitrarily long generation (LLaMa-2, Mistral, MPT, Pythia, Falcon, etc.), a UI or CLI with streaming of all models, the ability to upload and view documents through the UI (control multiple collaborative or personal collections), Vision Models LLaVa, Claude-3, Gemini-Pro-Vision, GPT-4-Vision, Image Generation Stable Diffusion (sdxl-turbo, sdxl) and PlaygroundAI (playv2), Voice STT using Whisper with streaming audio conversion, Voice TTS using MIT-Licensed Microsoft Speech T5 with multiple voices and Streaming audio conversion, Voice TTS using MPL2-Licensed TTS including Voice Cloning and Streaming audio conversion, AI Assistant Voice Control Mode for hands-free control of h2oGPT chat, Bake-off UI mode against many models at the same time, Easy Download of model artifacts and control over models like LLaMa.cpp through the UI, Authentication in the UI by user/password via Native or Google OAuth, State Preservation in the UI by user/password, Linux, Docker, macOS, and Windows support, Easy Windows Installer for Windows 10 64-bit (CPU/CUDA), Easy macOS Installer for macOS (CPU/M1/M2), Inference Servers support (oLLaMa, HF TGI server, vLLM, Gradio, ExLLaMa, Replicate, OpenAI, Azure OpenAI, Anthropic), OpenAI-compliant, Server Proxy API (h2oGPT acts as drop-in-replacement to OpenAI server), Python client API (to talk to Gradio server), JSON Mode with any model via code block extraction. Also supports MistralAI JSON mode, Claude-3 via function calling with strict Schema, OpenAI via JSON mode, and vLLM via guided_json with strict Schema, Web-Search integration with Chat and Document Q/A, Agents for Search, Document Q/A, Python Code, CSV frames (Experimental, best with OpenAI currently), Evaluate performance using reward models, and Quality maintained with over 1000 unit and integration tests taking over 4 GPU-hours.

mistral.rs
Mistral.rs is a fast LLM inference platform written in Rust. We support inference on a variety of devices, quantization, and easy-to-use application with an Open-AI API compatible HTTP server and Python bindings.

ollama
Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications. Ollama is designed to be easy to use and accessible to developers of all levels. It is open source and available for free on GitHub.

llama-cpp-agent
The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs). Allowing users to chat with LLM models, execute structured function calls and get structured output (objects). It provides a simple yet robust interface and supports llama-cpp-python and OpenAI endpoints with GBNF grammar support (like the llama-cpp-python server) and the llama.cpp backend server. It works by generating a formal GGML-BNF grammar of the user defined structures and functions, which is then used by llama.cpp to generate text valid to that grammar. In contrast to most GBNF grammar generators it also supports nested objects, dictionaries, enums and lists of them.

llama_ros
This repository provides a set of ROS 2 packages to integrate llama.cpp into ROS 2. By using the llama_ros packages, you can easily incorporate the powerful optimization capabilities of llama.cpp into your ROS 2 projects by running GGUF-based LLMs and VLMs.

MITSUHA
OneReality is a virtual waifu/assistant that you can speak to through your mic and it'll speak back to you! It has many features such as: * You can speak to her with a mic * It can speak back to you * Has short-term memory and long-term memory * Can open apps * Smarter than you * Fluent in English, Japanese, Korean, and Chinese * Can control your smart home like Alexa if you set up Tuya (more info in Prerequisites) It is built with Python, Llama-cpp-python, Whisper, SpeechRecognition, PocketSphinx, VITS-fast-fine-tuning, VITS-simple-api, HyperDB, Sentence Transformers, and Tuya Cloud IoT.

wenxin-starter
WenXin-Starter is a spring-boot-starter for Baidu's "Wenxin Qianfan WENXINWORKSHOP" large model, which can help you quickly access Baidu's AI capabilities. It fully integrates the official API documentation of Wenxin Qianfan. Supports text-to-image generation, built-in dialogue memory, and supports streaming return of dialogue. Supports QPS control of a single model and supports queuing mechanism. Plugins will be added soon.

FlexFlow
FlexFlow Serve is an open-source compiler and distributed system for **low latency**, **high performance** LLM serving. FlexFlow Serve outperforms existing systems by 1.3-2.0x for single-node, multi-GPU inference and by 1.4-2.4x for multi-node, multi-GPU inference.