gptel
A simple LLM client for Emacs
Stars: 1320
GPTel is a simple Large Language Model chat client for Emacs, with support for multiple models and backends. It's async and fast, streams responses, and interacts with LLMs from anywhere in Emacs. LLM responses are in Markdown or Org markup. Supports conversations and multiple independent sessions. Chats can be saved as regular Markdown/Org/Text files and resumed later. You can go back and edit your previous prompts or LLM responses when continuing a conversation. These will be fed back to the model. Don't like gptel's workflow? Use it to create your own for any supported model/backend with a simple API.
README:
#+title: gptel: A simple LLM client for Emacs
[[https://elpa.nongnu.org/nongnu/gptel.svg][file:https://elpa.nongnu.org/nongnu/gptel.svg]] [[https://stable.melpa.org/packages/gptel-badge.svg][file:https://stable.melpa.org/packages/gptel-badge.svg]] [[https://melpa.org/#/gptel][file:https://melpa.org/packages/gptel-badge.svg]]
gptel is a simple Large Language Model chat client for Emacs, with support for multiple models and backends. It works in the spirit of Emacs, available at any time and uniformly in any buffer.
General usage: ([[https://www.youtube.com/watch?v=bsRnh_brggM][YouTube Demo]])
https://user-images.githubusercontent.com/8607532/230516812-86510a09-a2fb-4cbd-b53f-cc2522d05a13.mp4
https://user-images.githubusercontent.com/8607532/230516816-ae4a613a-4d01-4073-ad3f-b66fa73c6e45.mp4
Media support
https://github.com/user-attachments/assets/1fd947e1-226b-4be2-bc68-7b22b2e3215f
Features:
- It's async and fast, streams responses.
- Interact with LLMs from anywhere in Emacs (any buffer, shell, minibuffer, wherever)
- LLM responses are in Markdown or Org markup.
- Supports conversations and multiple independent sessions.
- Supports multi-modal models (send images, documents)
- Save chats as regular Markdown/Org/Text files and resume them later.
- You can go back and edit your previous prompts or LLM responses when continuing a conversation. These will be fed back to the model.
- Don't like gptel's workflow? Use it to create your own for any supported model/backend with a [[https://github.com/karthink/gptel/wiki/Defining-custom-gptel-commands][simple API]].
gptel uses Curl if available, but falls back to url-retrieve to work without external dependencies.
** Contents :toc:
- [[#breaking-changes][Breaking changes!]]
- [[#installation][Installation]]
- [[#straight][Straight]]
- [[#manual][Manual]]
- [[#doom-emacs][Doom Emacs]]
- [[#spacemacs][Spacemacs]]
- [[#setup][Setup]]
- [[#chatgpt][ChatGPT]]
- [[#other-llm-backends][Other LLM backends]]
- [[#azure][Azure]]
- [[#gpt4all][GPT4All]]
- [[#ollama][Ollama]]
- [[#gemini][Gemini]]
- [[#llamacpp-or-llamafile][Llama.cpp or Llamafile]]
- [[#kagi-fastgpt--summarizer][Kagi (FastGPT & Summarizer)]]
- [[#togetherai][together.ai]]
- [[#anyscale][Anyscale]]
- [[#perplexity][Perplexity]]
- [[#anthropic-claude][Anthropic (Claude)]]
- [[#groq][Groq]]
- [[#openrouter][OpenRouter]]
- [[#privategpt][PrivateGPT]]
- [[#deepseek][DeepSeek]]
- [[#cerebras][Cerebras]]
- [[#github-models][Github Models]]
- [[#usage][Usage]]
- [[#in-any-buffer][In any buffer:]]
- [[#in-a-dedicated-chat-buffer][In a dedicated chat buffer:]]
- [[#including-media-images-documents-with-requests][Including media (images, documents) with requests]]
- [[#save-and-restore-your-chat-sessions][Save and restore your chat sessions]]
- [[#include-more-context-with-requests][Include more context with requests]]
- [[#rewrite-refactor-or-fill-in-a-region][Rewrite, refactor or fill in a region]]
- [[#extra-org-mode-conveniences][Extra Org mode conveniences]]
- [[#faq][FAQ]]
- [[#i-want-the-window-to-scroll-automatically-as-the-response-is-inserted][I want the window to scroll automatically as the response is inserted]]
- [[#i-want-the-cursor-to-move-to-the-next-prompt-after-the-response-is-inserted][I want the cursor to move to the next prompt after the response is inserted]]
- [[#i-want-to-change-the-formatting-of-the-prompt-and-llm-response][I want to change the formatting of the prompt and LLM response]]
- [[#i-want-the-transient-menu-options-to-be-saved-so-i-only-need-to-set-them-once][I want the transient menu options to be saved so I only need to set them once]]
- [[#i-want-to-use-gptel-in-a-way-thats-not-supported-by-gptel-send-or-the-options-menu][I want to use gptel in a way that's not supported by =gptel-send= or the options menu]]
- [[#doom-emacs-sending-a-query-from-the-gptel-menu-fails-because-of-a-key-conflict-with-org-mode][(Doom Emacs) Sending a query from the gptel menu fails because of a key conflict with Org mode]]
- [[#chatgpt-i-get-the-error-http2-429-you-exceeded-your-current-quota][(ChatGPT) I get the error "(HTTP/2 429) You exceeded your current quota"]]
- [[#why-another-llm-client][Why another LLM client?]]
- [[#additional-configuration][Additional Configuration]]
- [[#alternatives][Alternatives]]
- [[#packages-using-gptel][Packages using gptel]]
- [[#acknowledgments][Acknowledgments]]
** Breaking changes!
- The =gptel-model= is now expected to be a symbol, not a string. Please update your configuration.
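A before/after sketch (the model name is illustrative):
#+begin_src emacs-lisp
;; Old, no longer valid: a string
;; (setq gptel-model "gpt-4o-mini")
;; New: a symbol
(setq gptel-model 'gpt-4o-mini)
#+end_src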
** Installation
gptel can be installed in Emacs out of the box with =M-x package-install= ⏎ =gptel=. This installs the latest release.
If you want the development version instead, add MELPA (or NonGNU-devel ELPA) to your list of sources, then install it with =M-x package-install= ⏎ =gptel=.
(Optional: Install =markdown-mode=.)
**** Straight
#+begin_src emacs-lisp
(straight-use-package 'gptel)
#+end_src
Installing the =markdown-mode= package is optional.
**** Manual
Clone or download this repository and run =M-x package-install-file= ⏎ on the repository directory. Installing the =markdown-mode= package is optional.
**** Doom Emacs
In =packages.el=
#+begin_src emacs-lisp
(package! gptel)
#+end_src
In =config.el=
#+begin_src emacs-lisp
(use-package! gptel
  :config
  (setq! gptel-api-key "your key"))
#+end_src
"your key" can be the API key itself, or (safer) a function that returns the key. Setting =gptel-api-key= is optional, you will be asked for a key if it's not found.
**** Spacemacs
In your =.spacemacs= file, add =llm-client= to =dotspacemacs-configuration-layers=.
#+begin_src emacs-lisp
(llm-client :variables llm-client-enable-gptel t)
#+end_src
** Setup
*** ChatGPT
Optional: Set =gptel-api-key= to the key. Alternatively, you may choose a more secure method such as:
- Storing in =~/.authinfo=. By default, "api.openai.com" is used as HOST and "apikey" as USER.
  #+begin_src authinfo
  machine api.openai.com login apikey password TOKEN
  #+end_src
- Setting it to a function that returns the key (see the sketch after this list).
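Using a function keeps the key out of your config. A minimal sketch, assuming you keep the key in a file of your choosing (the path is hypothetical):
#+begin_src emacs-lisp
;; Return the key from a file each time it's needed.
(setq gptel-api-key
      (lambda ()
        (with-temp-buffer
          (insert-file-contents "~/.openai-key") ; hypothetical location
          (string-trim (buffer-string)))))
#+end_src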
*** Other LLM backends
**** Azure
Register a backend with
#+begin_src emacs-lisp
(gptel-make-azure "Azure-1"             ;Name, whatever you'd like
  :protocol "https"                     ;Optional -- https is the default
  :host "YOUR_RESOURCE_NAME.openai.azure.com"
  :endpoint "/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15" ;or equivalent
  :stream t                             ;Enable streaming responses
  :key #'gptel-api-key
  :models '(gpt-3.5-turbo gpt-4))
#+end_src
Refer to the documentation of =gptel-make-azure= to set more parameters.
You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'gpt-3.5-turbo
      gptel-backend (gptel-make-azure "Azure-1"
                      :protocol "https"
                      :host "YOUR_RESOURCE_NAME.openai.azure.com"
                      :endpoint "/openai/deployments/YOUR_DEPLOYMENT_NAME/chat/completions?api-version=2023-05-15"
                      :stream t
                      :key #'gptel-api-key
                      :models '(gpt-3.5-turbo gpt-4)))
#+end_src
**** GPT4All
Register a backend with
#+begin_src emacs-lisp
(gptel-make-gpt4all "GPT4All"           ;Name of your choosing
  :protocol "http"
  :host "localhost:4891"                ;Where it's running
  :models '(mistral-7b-openorca.Q4_0.gguf)) ;Available models
#+end_src
These are the required parameters, refer to the documentation of =gptel-make-gpt4all= for more.
You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above. Additionally you may want to increase the response token size since GPT4All uses very short (often truncated) responses by default.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-max-tokens 500
      gptel-model 'mistral-7b-openorca.Q4_0.gguf
      gptel-backend (gptel-make-gpt4all "GPT4All"
                      :protocol "http"
                      :host "localhost:4891"
                      :models '(mistral-7b-openorca.Q4_0.gguf)))
#+end_src
**** Ollama
Register a backend with
#+begin_src emacs-lisp
(gptel-make-ollama "Ollama"             ;Any name of your choosing
  :host "localhost:11434"               ;Where it's running
  :stream t                             ;Stream responses
  :models '(mistral:latest))            ;List of models
#+end_src
These are the required parameters, refer to the documentation of =gptel-make-ollama= for more.
You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'mistral:latest
      gptel-backend (gptel-make-ollama "Ollama"
                      :host "localhost:11434"
                      :stream t
                      :models '(mistral:latest)))
#+end_src
**** Gemini
Register a backend with
#+begin_src emacs-lisp
;; :key can be a function that returns the API key.
(gptel-make-gemini "Gemini"
  :key "YOUR_GEMINI_API_KEY"
  :stream t)
#+end_src
These are the required parameters, refer to the documentation of =gptel-make-gemini= for more.
You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'gemini-pro
      gptel-backend (gptel-make-gemini "Gemini"
                      :key "YOUR_GEMINI_API_KEY"
                      :stream t))
#+end_src
**** Llama.cpp or Llamafile
(If using a llamafile, run a [[https://github.com/Mozilla-Ocho/llamafile#other-example-llamafiles][server llamafile]] instead of a "command-line llamafile", and a model that supports text generation.)
Register a backend with
#+begin_src emacs-lisp
;; Llama.cpp offers an OpenAI compatible API
(gptel-make-openai "llama-cpp"          ;Any name
  :stream t                             ;Stream responses
  :protocol "http"
  :host "localhost:8000"                ;Llama.cpp server location
  :models '(test))                      ;Any names, doesn't matter for Llama
#+end_src
These are the required parameters, refer to the documentation of =gptel-make-openai= for more.
You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'test
      gptel-backend (gptel-make-openai "llama-cpp"
                      :stream t
                      :protocol "http"
                      :host "localhost:8000"
                      :models '(test)))
#+end_src
**** Kagi (FastGPT & Summarizer)
Kagi's FastGPT model and the Universal Summarizer are both supported. A couple of notes:
- Universal Summarizer: If there is a URL at point, the summarizer will summarize the contents of the URL. Otherwise the context sent to the model is the same as always: the buffer text up to point, or the contents of the region if the region is active.
- Kagi models do not support multi-turn conversations, interactions are "one-shot". They also do not support streaming responses.
Register a backend with
#+begin_src emacs-lisp
(gptel-make-kagi "Kagi"                 ;any name
  :key "YOUR_KAGI_API_KEY")             ;can be a function that returns the key
#+end_src
These are the required parameters, refer to the documentation of =gptel-make-kagi= for more.
You can pick this backend and the model (fastgpt/summarizer) from the transient menu when using gptel.
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'fastgpt
      gptel-backend (gptel-make-kagi "Kagi"
                      :key "YOUR_KAGI_API_KEY"))
#+end_src
The alternatives to =fastgpt= include =summarize:cecil=, =summarize:agnes=, =summarize:daphne= and =summarize:muriel=. The difference between the summarizer engines is [[https://help.kagi.com/kagi/api/summarizer.html#summarization-engines][documented here]].
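For instance, to switch to one of the summarizer engines listed above (the Kagi backend must also be selected, as in the snippet before this):
#+begin_src emacs-lisp
;; Use the Agnes summarizer engine for subsequent requests
(setq gptel-model 'summarize:agnes)
#+end_src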
**** together.ai
Register a backend with
#+begin_src emacs-lisp
;; Together.ai offers an OpenAI compatible API
(gptel-make-openai "TogetherAI"         ;Any name you want
  :host "api.together.xyz"
  :key "your-api-key"                   ;can be a function that returns the key
  :stream t
  :models '(;; has many more, check together.ai
            mistralai/Mixtral-8x7B-Instruct-v0.1
            codellama/CodeLlama-13b-Instruct-hf
            codellama/CodeLlama-34b-Instruct-hf))
#+end_src
You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq
 gptel-model 'mistralai/Mixtral-8x7B-Instruct-v0.1
 gptel-backend
 (gptel-make-openai "TogetherAI"
   :host "api.together.xyz"
   :key "your-api-key"
   :stream t
   :models '(;; has many more, check together.ai
             mistralai/Mixtral-8x7B-Instruct-v0.1
             codellama/CodeLlama-13b-Instruct-hf
             codellama/CodeLlama-34b-Instruct-hf)))
#+end_src
**** Anyscale
Register a backend with
#+begin_src emacs-lisp
;; Anyscale offers an OpenAI compatible API
(gptel-make-openai "Anyscale"           ;Any name you want
  :host "api.endpoints.anyscale.com"
  :key "your-api-key"                   ;can be a function that returns the key
  :models '(;; has many more, check anyscale
            mistralai/Mixtral-8x7B-Instruct-v0.1))
#+end_src
You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'mistralai/Mixtral-8x7B-Instruct-v0.1
      gptel-backend (gptel-make-openai "Anyscale"
                      :host "api.endpoints.anyscale.com"
                      :key "your-api-key"
                      :models '(;; has many more, check anyscale
                                mistralai/Mixtral-8x7B-Instruct-v0.1)))
#+end_src
**** Perplexity
Register a backend with
#+begin_src emacs-lisp
;; Perplexity offers an OpenAI compatible API
(gptel-make-openai "Perplexity"         ;Any name you want
  :host "api.perplexity.ai"
  :key "your-api-key"                   ;can be a function that returns the key
  :endpoint "/chat/completions"
  :stream t
  :models '(;; has many more, check perplexity.ai
            pplx-7b-chat
            pplx-70b-chat
            pplx-7b-online
            pplx-70b-online))
#+end_src
You can pick this backend from the menu when using gptel (see [[#usage][Usage]])
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'pplx-7b-chat
      gptel-backend (gptel-make-openai "Perplexity"
                      :host "api.perplexity.ai"
                      :key "your-api-key"
                      :endpoint "/chat/completions"
                      :stream t
                      :models '(;; has many more, check perplexity.ai
                                pplx-7b-chat
                                pplx-70b-chat
                                pplx-7b-online
                                pplx-70b-online)))
#+end_src
**** Anthropic (Claude)
Register a backend with
#+begin_src emacs-lisp
(gptel-make-anthropic "Claude"          ;Any name you want
  :stream t                             ;Streaming responses
  :key "your-api-key")
#+end_src
The =:key= can be a function that returns the key (more secure).
You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'claude-3-sonnet-20240229 ; "claude-3-opus-20240229" also available
      gptel-backend (gptel-make-anthropic "Claude"
                      :stream t
                      :key "your-api-key"))
#+end_src
**** Groq
Register a backend with
#+begin_src emacs-lisp
;; Groq offers an OpenAI compatible API
(gptel-make-openai "Groq"               ;Any name you want
  :host "api.groq.com"
  :endpoint "/openai/v1/chat/completions"
  :stream t
  :key "your-api-key"                   ;can be a function that returns the key
  :models '(llama-3.1-70b-versatile
            llama-3.1-8b-instant
            llama3-70b-8192
            llama3-8b-8192
            mixtral-8x7b-32768
            gemma-7b-it))
#+end_src
You can pick this backend from the menu when using gptel (see [[#usage][Usage]]). Note that Groq is fast enough that you could easily set =:stream nil= and still get near-instant responses.
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'mixtral-8x7b-32768
      gptel-backend (gptel-make-openai "Groq"
                      :host "api.groq.com"
                      :endpoint "/openai/v1/chat/completions"
                      :stream t
                      :key "your-api-key"
                      :models '(llama-3.1-70b-versatile
                                llama-3.1-8b-instant
                                llama3-70b-8192
                                llama3-8b-8192
                                mixtral-8x7b-32768
                                gemma-7b-it)))
#+end_src
**** OpenRouter
Register a backend with
#+begin_src emacs-lisp
;; OpenRouter offers an OpenAI compatible API
(gptel-make-openai "OpenRouter"         ;Any name you want
  :host "openrouter.ai"
  :endpoint "/api/v1/chat/completions"
  :stream t
  :key "your-api-key"                   ;can be a function that returns the key
  :models '(openai/gpt-3.5-turbo
            mistralai/mixtral-8x7b-instruct
            meta-llama/codellama-34b-instruct
            codellama/codellama-70b-instruct
            google/palm-2-codechat-bison-32k
            google/gemini-pro))
#+end_src
You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'mistralai/mixtral-8x7b-instruct
      gptel-backend (gptel-make-openai "OpenRouter" ;Any name you want
                      :host "openrouter.ai"
                      :endpoint "/api/v1/chat/completions"
                      :stream t
                      :key "your-api-key" ;can be a function that returns the key
                      :models '(openai/gpt-3.5-turbo
                                mistralai/mixtral-8x7b-instruct
                                meta-llama/codellama-34b-instruct
                                codellama/codellama-70b-instruct
                                google/palm-2-codechat-bison-32k
                                google/gemini-pro)))
#+end_src
**** PrivateGPT
Register a backend with
#+begin_src emacs-lisp
(gptel-make-privategpt "privateGPT"     ;Any name you want
  :protocol "http"
  :host "localhost:8001"
  :stream t
  :context t                            ;Use context provided by embeddings
  :sources t                            ;Return information about source documents
  :models '(private-gpt))
#+end_src
You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'private-gpt
      gptel-backend (gptel-make-privategpt "privateGPT" ;Any name you want
                      :protocol "http"
                      :host "localhost:8001"
                      :stream t
                      :context t        ;Use context provided by embeddings
                      :sources t        ;Return information about source documents
                      :models '(private-gpt)))
#+end_src
**** DeepSeek
Register a backend with
#+begin_src emacs-lisp
;; DeepSeek offers an OpenAI compatible API
(gptel-make-openai "DeepSeek"           ;Any name you want
  :host "api.deepseek.com"
  :endpoint "/chat/completions"
  :stream t
  :key "your-api-key"                   ;can be a function that returns the key
  :models '(deepseek-chat deepseek-coder))
#+end_src
You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'deepseek-chat
      gptel-backend (gptel-make-openai "DeepSeek" ;Any name you want
                      :host "api.deepseek.com"
                      :endpoint "/chat/completions"
                      :stream t
                      :key "your-api-key" ;can be a function that returns the key
                      :models '(deepseek-chat deepseek-coder)))
#+end_src
**** Cerebras
Register a backend with
#+begin_src emacs-lisp
;; Cerebras offers an instant OpenAI compatible API
(gptel-make-openai "Cerebras"
  :host "api.cerebras.ai"
  :endpoint "/v1/chat/completions"
  :stream t                             ;optionally nil as Cerebras is instant AI
  :key "your-api-key"                   ;can be a function that returns the key
  :models '(llama3.1-70b llama3.1-8b))
#+end_src
You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'llama3.1-8b
      gptel-backend (gptel-make-openai "Cerebras"
                      :host "api.cerebras.ai"
                      :endpoint "/v1/chat/completions"
                      :stream nil
                      :key "your-api-key"
                      :models '(llama3.1-70b llama3.1-8b)))
#+end_src
**** Github Models
Register a backend with
#+begin_src emacs-lisp
;; Github Models offers an OpenAI compatible API
(gptel-make-openai "Github Models"      ;Any name you want
  :host "models.inference.ai.azure.com"
  :endpoint "/chat/completions"
  :stream t
  :key "your-github-token"
  :models '(gpt-4o))
#+end_src
For all the available models, check the [[https://github.com/marketplace/models][marketplace]].
You can pick this backend from the menu when using gptel (see [[#usage][Usage]]).
***** (Optional) Set as the default gptel backend
The above code makes the backend available to select. If you want it to be the default backend for gptel, you can set this as the value of =gptel-backend=. Use this instead of the above.
#+begin_src emacs-lisp
;; OPTIONAL configuration
(setq gptel-model 'gpt-4o
      gptel-backend (gptel-make-openai "Github Models" ;Any name you want
                      :host "models.inference.ai.azure.com"
                      :endpoint "/chat/completions"
                      :stream t
                      :key "your-github-token"
                      :models '(gpt-4o)))
#+end_src
** Usage
(This is also a [[https://www.youtube.com/watch?v=bsRnh_brggM][video demo]] showing various uses of gptel.)
|-----------------+-----------------------------------------------------------------------------------------------|
| To send queries | Description                                                                                     |
|-----------------+-----------------------------------------------------------------------------------------------|
| =gptel-send=    | Send conversation up to =(point)=, or selection if region is active. Works anywhere in Emacs. |
| =gptel=         | Create a new dedicated chat buffer. Not required to use gptel.                                |
|-----------------+-----------------------------------------------------------------------------------------------|

|--------------------+---------------------------------------------------------------|
| To Set options     |                                                               |
|--------------------+---------------------------------------------------------------|
| =C-u= =gptel-send= | Transient menu for preferences, input/output redirection etc. |
| =gptel-menu=       | /(Same)/                                                      |
|--------------------+---------------------------------------------------------------|

|------------------+-----------------------------------------------------------------------------------------|
| To add context   |                                                                                          |
|------------------+-----------------------------------------------------------------------------------------|
| =gptel-add=      | Add/remove a region or buffer to gptel's context. In Dired, add/remove marked files.    |
| =gptel-add-file= | Add a (text-readable) file to gptel's context. Also available from the transient menu.  |
|------------------+-----------------------------------------------------------------------------------------|

|----------------------------+-----------------------------------------------------------------------------|
| In Org mode only           |                                                                             |
|----------------------------+-----------------------------------------------------------------------------|
| =gptel-org-set-topic=      | Limit conversation context to an Org heading                               |
| =gptel-org-set-properties= | Write gptel configuration as Org properties (for self-contained chat logs) |
|----------------------------+-----------------------------------------------------------------------------|
*** In any buffer:
- Call =M-x gptel-send= to send the text up to the cursor. The response will be inserted below. Continue the conversation by typing below the response.
- If a region is selected, the conversation will be limited to its contents.
- Call =M-x gptel-send= with a prefix argument (=C-u=)
  - to set chat parameters (GPT model, system message etc) for this buffer,
  - to include quick instructions for the next request only,
  - to add additional context -- regions, buffers or files -- to gptel,
  - to read the prompt from or redirect the response elsewhere,
  - or to replace the prompt with the response.
*** In a dedicated chat buffer:
- Run =M-x gptel= to start or switch to the chat buffer. It will ask you for the key if you skipped the previous step. Run it with a prefix-arg (=C-u M-x gptel=) to start a new session.
- In the gptel buffer, send your prompt with =M-x gptel-send=, bound to =C-c RET=.
- Set chat parameters (LLM provider, model, directives etc) for the session by calling =gptel-send= with a prefix argument (=C-u C-c RET=).
That's it. You can go back and edit previous prompts and responses if you want.
The default mode is =markdown-mode= if available, else =text-mode=. You can set =gptel-default-mode= to =org-mode= if desired.
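For example:
#+begin_src emacs-lisp
;; Use Org mode for dedicated chat buffers
(setq gptel-default-mode 'org-mode)
#+end_src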
**** Including media (images, documents) with requests
gptel supports sending media in Markdown and Org chat buffers, but this feature is disabled by default.
- You can enable it globally, for all models that support it, by setting =gptel-track-media= (see the snippet after this list).
- Or you can set it locally, just for the chat buffer, via the header line.
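A one-line sketch of the global setting:
#+begin_src emacs-lisp
;; Send media from "standalone" links in chat buffers (for capable models)
(setq gptel-track-media t)
#+end_src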
There are two ways to include media with requests:
- Adding media files to the context with =gptel-add-file=, described further below.
- Including links to media in chat buffers, described here:
To send media -- images or other supported file types -- with requests in chat buffers, you can include links to them in the chat buffer. Such a link must be "standalone", i.e. on a line by itself surrounded by whitespace.
In Org mode, for example, the following are all valid ways of including an image with the request:
- "Standalone" file link: #+begin_src Describe this picture
[[file:/path/to/screenshot.png]]
Focus specifically on the text content. #+end_src
- "Standalone" file link with description: #+begin_src Describe this picture
[[file:/path/to/screenshot.png][some picture]]
Focus specifically on the text content. #+end_src
- "Standalone", angle file link: #+begin_src Describe this picture
file:/path/to/screenshot.png
Focus specifically on the text content. #+end_src
The following links are not valid, and the text of the link will be sent instead of the file contents:
- Inline link:
  #+begin_src
  Describe this [[file:/path/to/screenshot.png][picture]].

  Focus specifically on the text content.
  #+end_src
- Link not "standalone":
  #+begin_src
  Describe this picture: [[file:/path/to/screenshot.png]]
  Focus specifically on the text content.
  #+end_src
- Not a valid Org link:
  #+begin_src
  Describe the picture

  file:/path/to/screenshot.png
  #+end_src
Similar criteria apply to Markdown chat buffers.
**** Save and restore your chat sessions
Saving the file will save the state of the conversation as well. To resume the chat, open the file and turn on =gptel-mode= before editing the buffer.
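A hedged sketch for automating this, assuming you keep saved chats under a directory of your choosing (=~/chats/= here is an assumption):
#+begin_src emacs-lisp
;; Enable gptel-mode automatically when visiting saved chat files.
(defun my/gptel-mode-for-saved-chats ()
  (when (and buffer-file-name
             (string-prefix-p (expand-file-name "~/chats/")
                              (expand-file-name buffer-file-name)))
    (gptel-mode 1)))
(add-hook 'find-file-hook #'my/gptel-mode-for-saved-chats)
#+end_src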
*** Include more context with requests
By default, gptel will query the LLM with the active region or the buffer contents up to the cursor. Often it can be helpful to provide the LLM with additional context from outside the current buffer. For example, when you're in a chat buffer but want to ask questions about a (possibly changing) code buffer and auxiliary project files.
You can include additional text regions, buffers or files with gptel's queries. This additional context is "live" and not a snapshot. Once added, the regions, buffers or files are scanned and included at the time of each query. When using multi-modal models, added files can be of any supported type -- typically images.
You can add a selected region, buffer or file to gptel's context from the menu, or call =gptel-add=. (To add a file use =gptel-add= in Dired or use the dedicated =gptel-add-file= command.)
You can examine the active context from the menu:
#+html: <img src="https://github.com/karthink/gptel/assets/8607532/63cd7fc8-6b3e-42ae-b6ca-06ff935bae9c" align="center" alt="Image showing gptel's menu with the 'inspect context' command.">
And then browse through or remove context from the context buffer.
*** Rewrite, refactor or fill in a region
In any buffer: with a region selected, you can rewrite prose or refactor code from here:
Prose:
[[https://user-images.githubusercontent.com/8607532/230770352-ee6f45a3-a083-4cf0-b13c-619f7710e9ba.png]]
Code:
[[https://user-images.githubusercontent.com/8607532/230770162-1a5a496c-ee57-4a67-9c95-d45f238544ae.png]]
When the refactor is ready, you can apply it or compare against the original. These actions are also available directly when the cursor is inside the pending rewrite region.
*** Extra Org mode conveniences
gptel offers a few extra conveniences in Org mode.
- You can limit the conversation context to an Org heading with the command =gptel-org-set-topic=.
- You can have branching conversations in Org mode, where each hierarchical outline path through the document is a separate conversation branch. This is also useful for limiting the context size of each query. See the variable =gptel-org-branching-context=, as in the snippet after this list. Note: using this option requires Org 9.6.7 or higher to be available. The [[https://github.com/ultronozm/ai-org-chat.el][ai-org-chat]] package uses gptel to provide this branching conversation behavior for older versions of Org.
- You can declare the gptel model, backend, temperature, system message and other parameters as Org properties with the command =gptel-org-set-properties=. gptel queries under the corresponding heading will always use these settings, allowing you to create mostly reproducible LLM chat notebooks, and to have simultaneous chats with different models, model settings and directives under different Org headings.
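A minimal sketch enabling branching context:
#+begin_src emacs-lisp
;; Each outline path becomes its own conversation branch
(setq gptel-org-branching-context t)
#+end_src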
** FAQ
**** I want the window to scroll automatically as the response is inserted
To be minimally annoying, gptel does not move the cursor by default. Add the following to your configuration to enable auto-scrolling.
#+begin_src emacs-lisp
(add-hook 'gptel-post-stream-hook 'gptel-auto-scroll)
#+end_src
**** I want the cursor to move to the next prompt after the response is inserted
To be minimally annoying, gptel does not move the cursor by default. Add the following to your configuration to move the cursor:
#+begin_src emacs-lisp
(add-hook 'gptel-post-response-functions 'gptel-end-of-response)
#+end_src
You can also call =gptel-end-of-response= as a command at any time.
**** I want to change the formatting of the prompt and LLM response
For dedicated chat buffers: customize =gptel-prompt-prefix-alist= and =gptel-response-prefix-alist=. You can set a different pair for each major-mode.
Anywhere in Emacs: Use =gptel-pre-response-hook= and =gptel-post-response-functions=, which see.
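A hedged example of per-mode prefixes (the prefix strings are illustrative, pick your own):
#+begin_src emacs-lisp
(setq gptel-prompt-prefix-alist
      '((markdown-mode . "## Prompt: ")
        (org-mode . "** Prompt: ")
        (text-mode . "Prompt: "))
      gptel-response-prefix-alist
      '((markdown-mode . "## Response: ")
        (org-mode . "** Response: ")
        (text-mode . "Response: ")))
#+end_src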
**** I want the transient menu options to be saved so I only need to set them once
Any model options you set are saved for the current buffer. But the redirection options in the menu are set for the next query only.
You can make them persistent across this Emacs session by pressing =C-x C-s=. (You can also cycle through presets you've saved with =C-x p= and =C-x n=.)
Now these will be enabled whenever you send a query from the transient menu. If you want to use these saved options without invoking the transient menu, you can use a keyboard macro:
#+begin_src emacs-lisp
;; NOTE: the key names in this snippet were lost in extraction;
;; "<f6>" below is a placeholder. Replace with your key to invoke
;; the transient menu:
(keymap-global-set "<f6>" "C-u C-c RET")
#+end_src
Or see this [[https://github.com/karthink/gptel/wiki/Commonly-requested-features#save-transient-flags][wiki entry]].
**** I want to use gptel in a way that's not supported by =gptel-send= or the options menu
gptel's default usage pattern is simple, and will stay this way: Read input in any buffer and insert the response below it. Some custom behavior is possible with the transient menu (=C-u M-x gptel-send=).
For more programmable usage, gptel provides a general =gptel-request= function that accepts a custom prompt and a callback to act on the response. You can use this to build custom workflows not supported by =gptel-send=. See the documentation of =gptel-request=, and the [[https://github.com/karthink/gptel/wiki/Defining-custom-gptel-commands][wiki]] for examples.
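A minimal sketch of the callback pattern (the prompt text is illustrative):
#+begin_src emacs-lisp
(gptel-request
 "Write a limerick about Emacs."
 :callback
 (lambda (response info)
   (if response
       (message "gptel: %s" response)
     (message "gptel request failed: %s" (plist-get info :status)))))
#+end_src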
**** (Doom Emacs) Sending a query from the gptel menu fails because of a key conflict with Org mode
Doom binds RET in Org mode to =+org/dwim-at-point=, which appears to conflict with gptel's transient menu bindings for some reason.
Two solutions:
- Press =C-m= instead of the return key.
- Change the send key from return to a key of your choice:
  #+begin_src emacs-lisp
  ;; "<f8>" is a placeholder; the original key was lost in extraction.
  (transient-suffix-put 'gptel-menu (kbd "RET") :key "<f8>")
  #+end_src
**** (ChatGPT) I get the error "(HTTP/2 429) You exceeded your current quota"
#+begin_quote
(HTTP/2 429) You exceeded your current quota, please check your plan and billing details.
#+end_quote
Using the ChatGPT (or any OpenAI) API requires [[https://platform.openai.com/account/billing/overview][adding credit to your account]].
**** Why another LLM client?
Other Emacs clients for LLMs prescribe the format of the interaction (a comint shell, org-babel blocks, etc). I wanted:
- Something that is as free-form as possible: query the model using any text in any buffer, and redirect the response as required. Using a dedicated =gptel= buffer just adds some visual flair to the interaction.
- Integration with org-mode, not using a walled-off org-babel block, but as regular text. This way the model can generate code blocks that I can run.
** Additional Configuration
:PROPERTIES:
:ID:       f885adac-58a3-4eba-a6b7-91e9e7a17829
:END:
#+begin_src emacs-lisp :exports none :results list
(let ((all))
  (mapatoms (lambda (sym)
              (when (and (string-match-p "^gptel-[^-]" (symbol-name sym))
                         (get sym 'variable-documentation))
                (push sym all))))
  all)
#+end_src
|--------------------+---------------------------------------------------------------------|
| Connection options |                                                                     |
|--------------------+---------------------------------------------------------------------|
| =gptel-use-curl=   | Use Curl (default), fallback to Emacs' built-in =url=.              |
| =gptel-proxy=      | Proxy server for requests, passed to curl via =--proxy=.            |
| =gptel-api-key=    | Variable/function that returns the API key for the active backend.  |
|--------------------+---------------------------------------------------------------------|
|---------------------+---------------------------------------------------------|
| LLM request options | /(Note: not supported uniformly across LLMs)/           |
|---------------------+---------------------------------------------------------|
| =gptel-backend=     | Default LLM Backend.                                    |
| =gptel-model=       | Default model to use, depends on the backend.           |
| =gptel-stream=      | Enable streaming responses, if the backend supports it. |
| =gptel-directives=  | Alist of system directives, can switch on the fly.      |
| =gptel-max-tokens=  | Maximum token count (in query + response).              |
| =gptel-temperature= | Randomness in response text, 0 to 2.                    |
| =gptel-use-context= | How/whether to include additional context               |
|---------------------+---------------------------------------------------------|
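For instance, =gptel-directives= is an alist mapping symbols to system messages; the entry below is illustrative:
#+begin_src emacs-lisp
(add-to-list 'gptel-directives
             '(emacs . "You are an Emacs expert. Answer concisely, with Elisp code where helpful."))
#+end_src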
|-------------------------------+-----------------------------------------------------------------|
| Chat UI options               |                                                                 |
|-------------------------------+-----------------------------------------------------------------|
| =gptel-default-mode=          | Major mode for dedicated chat buffers.                          |
| =gptel-track-response=        | Distinguish between user messages and LLM responses?            |
| =gptel-track-media=           | Send images or other media from links?                          |
| =gptel-prompt-prefix-alist=   | Text inserted before queries.                                   |
| =gptel-response-prefix-alist= | Text inserted before responses.                                 |
| =gptel-use-header-line=       | Display status messages in header-line (default) or minibuffer  |
| =gptel-display-buffer-action= | Placement of the gptel chat buffer.                             |
|-------------------------------+-----------------------------------------------------------------|

|-------------------------------+-------------------------------------------------------|
| Org mode UI options           |                                                       |
|-------------------------------+-------------------------------------------------------|
| =gptel-org-branching-context= | Make each outline path a separate conversation branch |
|-------------------------------+-------------------------------------------------------|

|---------------------------------+-------------------------------------------------------------|
| Hooks for customization         |                                                             |
|---------------------------------+-------------------------------------------------------------|
| =gptel-pre-response-hook=       | Runs before inserting the LLM response into the buffer      |
| =gptel-post-response-functions= | Runs after inserting the full LLM response into the buffer  |
| =gptel-post-stream-hook=        | Runs after each streaming insertion                         |
| =gptel-context-wrap-function=   | To include additional context formatted your way            |
|---------------------------------+-------------------------------------------------------------|
** COMMENT Will you add feature X?
Maybe, I'd like to experiment a bit more first. Features added since the inception of this package include
- Curl support (=gptel-use-curl=)
- Streaming responses (=gptel-stream=)
- Cancelling requests in progress (=gptel-abort=)
- General API for writing your own commands (=gptel-request=, [[https://github.com/karthink/gptel/wiki/Defining-custom-gptel-commands][wiki]])
- Dispatch menus using Transient (=gptel-send= with a prefix arg)
- Specifying the conversation context size
- GPT-4 support
- Response redirection (to the echo area, another buffer, etc)
- A built-in refactor/rewrite prompt
- Limiting conversation context to Org headings using properties (#58)
- Saving and restoring chats (#17)
- Support for local LLMs.
Features being considered or in the pipeline:
- Fully stateless design (#17)
** Alternatives
Other Emacs clients for LLMs include
- [[https://github.com/ahyatt/llm][llm]]: llm provides a uniform API across language model providers for building LLM clients in Emacs, and is intended as a library for use by package authors. For similar scripting purposes, gptel provides the command =gptel-request=, which see.
- [[https://github.com/s-kostyaev/ellama][Ellama]]: A full-fledged LLM client built on llm, that supports many LLM providers (Ollama, Open AI, Vertex, GPT4All and more). Its usage differs from gptel in that it provides separate commands for dozens of common tasks, like general chat, summarizing code/text, refactoring code, improving grammar, translation and so on.
- [[https://github.com/xenodium/chatgpt-shell][chatgpt-shell]]: comint-shell based interaction with ChatGPT. Also supports DALL-E, executable code blocks in the responses, and more.
- [[https://github.com/rksm/org-ai][org-ai]]: Interaction through special =#+begin_ai ... #+end_ai= Org-mode blocks. Also supports DALL-E, querying ChatGPT with the contents of project files, and more.
There are several more: [[https://github.com/CarlQLange/chatgpt-arcana.el][chatgpt-arcana]], [[https://github.com/MichaelBurge/leafy-mode][leafy-mode]], [[https://github.com/iwahbe/chat.el][chat.el]]
*** Packages using gptel
gptel is a general-purpose package for chat and ad-hoc LLM interaction. The following packages use gptel to provide additional or specialized functionality:
- [[https://github.com/karthink/gptel-quick][gptel-quick]]: Quickly look up the region or text at point.
- [[https://github.com/daedsidog/evedel][Evedel]]: Instructed LLM Programmer/Assistant
- [[https://github.com/lanceberge/elysium][Elysium]]: Automatically apply AI-generated changes as you code
- [[https://github.com/kamushadenes/gptel-extensions.el][gptel-extensions]]: Extra utility functions for gptel
- [[https://github.com/kamushadenes/ai-blog.el][ai-blog.el]]: Streamline generation of blog posts in Hugo
- [[https://github.com/douo/magit-gptcommit][magit-gptcommit]]: Generate Commit Messages within magit-status Buffer using gptel
- [[https://github.com/armindarvish/consult-omni][consult-omni]]: Versatile multi-source search package. It includes gptel as one of its many sources.
- [[https://github.com/ultronozm/ai-org-chat.el][ai-org-chat]]: Provides branching conversations in Org buffers using gptel. (Note that gptel includes this feature as well (see =gptel-org-branching-context=), but requires a recent version of Org mode (9.6.7 or later) to be installed.)
** COMMENT Older Breaking Changes
- =gptel-post-response-hook= has been renamed to =gptel-post-response-functions=, and functions in this hook are now called with two arguments: the start and end buffer positions of the response. This should make it easy to act on the response text without having to locate it first.
- Possible breakage, see #120: If streaming responses stop working for you after upgrading to v0.5, try reinstalling gptel and deleting its native comp eln cache in =native-comp-eln-load-path=.
- The user option =gptel-host= is deprecated. If the defaults don't work for you, use =gptel-make-openai= (which see) to customize server settings.
- =gptel-api-key-from-auth-source= now searches for the API key using the host address for the active LLM backend, /i.e./ "api.openai.com" when using ChatGPT. You may need to update your =~/.authinfo=.
** Acknowledgments
- [[https://github.com/algal][Alexis Gallagher]] and [[https://github.com/d1egoaz][Diego Alvarez]] for fixing a nasty multi-byte bug with =url-retrieve=.
- [[https://github.com/tarsius][Jonas Bernoulli]] for the Transient library.
- [[https://github.com/daedsidog][daedsidog]] for adding context support to gptel.
- [[https://github.com/Aquan1412][Aquan1412]] for adding PrivateGPT support to gptel.
- [[https://github.com/r0man][r0man]] for improving gptel's Curl integration.
For Tasks:
Click tags to check more tools for each tasksFor Jobs:
Alternative AI tools for gptel
Similar Open Source Tools
gptel
GPTel is a simple Large Language Model chat client for Emacs, with support for multiple models and backends. It's async and fast, streams responses, and interacts with LLMs from anywhere in Emacs. LLM responses are in Markdown or Org markup. Supports conversations and multiple independent sessions. Chats can be saved as regular Markdown/Org/Text files and resumed later. You can go back and edit your previous prompts or LLM responses when continuing a conversation. These will be fed back to the model. Don't like gptel's workflow? Use it to create your own for any supported model/backend with a simple API.
llama.cpp
llama.cpp is a C++ implementation of LLaMA, a large language model from Meta. It provides a command-line interface for inference and can be used for a variety of tasks, including text generation, translation, and question answering. llama.cpp is highly optimized for performance and can be run on a variety of hardware, including CPUs, GPUs, and TPUs.
esp-ai
ESP-AI provides a complete AI conversation solution for your development board, including IAT+LLM+TTS integration solutions for ESP32 series development boards. It can be injected into projects without affecting existing ones. By providing keys from platforms like iFlytek, Jiling, and local services, you can run the services without worrying about interactions between services or between development boards and services. The project's server-side code is based on Node.js, and the hardware code is based on Arduino IDE.
DB-GPT
DB-GPT is a personal database administrator that can solve database problems by reading documents, using various tools, and writing analysis reports. It is currently undergoing an upgrade. **Features:** * **Online Demo:** * Import documents into the knowledge base * Utilize the knowledge base for well-founded Q&A and diagnosis analysis of abnormal alarms * Send feedbacks to refine the intermediate diagnosis results * Edit the diagnosis result * Browse all historical diagnosis results, used metrics, and detailed diagnosis processes * **Language Support:** * English (default) * Chinese (add "language: zh" in config.yaml) * **New Frontend:** * Knowledgebase + Chat Q&A + Diagnosis + Report Replay * **Extreme Speed Version for localized llms:** * 4-bit quantized LLM (reducing inference time by 1/3) * vllm for fast inference (qwen) * Tiny LLM * **Multi-path extraction of document knowledge:** * Vector database (ChromaDB) * RESTful Search Engine (Elasticsearch) * **Expert prompt generation using document knowledge** * **Upgrade the LLM-based diagnosis mechanism:** * Task Dispatching -> Concurrent Diagnosis -> Cross Review -> Report Generation * Synchronous Concurrency Mechanism during LLM inference * **Support monitoring and optimization tools in multiple levels:** * Monitoring metrics (Prometheus) * Flame graph in code level * Diagnosis knowledge retrieval (dbmind) * Logical query transformations (Calcite) * Index optimization algorithms (for PostgreSQL) * Physical operator hints (for PostgreSQL) * Backup and Point-in-time Recovery (Pigsty) * **Continuously updated papers and experimental reports** This project is constantly evolving with new features. Don't forget to star ⭐ and watch 👀 to stay up to date.
dom-to-semantic-markdown
DOM to Semantic Markdown is a tool that converts HTML DOM to Semantic Markdown for use in Large Language Models (LLMs). It maximizes semantic information, token efficiency, and preserves metadata to enhance LLMs' processing capabilities. The tool captures rich web content structure, including semantic tags, image metadata, table structures, and link destinations. It offers customizable conversion options and supports both browser and Node.js environments.
ChatTTS
ChatTTS is a generative speech model optimized for dialogue scenarios, providing natural and expressive speech synthesis with fine-grained control over prosodic features. It supports multiple speakers and surpasses most open-source TTS models in terms of prosody. The model is trained with 100,000+ hours of Chinese and English audio data, and the open-source version on HuggingFace is a 40,000-hour pre-trained model without SFT. The roadmap includes open-sourcing additional features like VQ encoder, multi-emotion control, and streaming audio generation. The tool is intended for academic and research use only, with precautions taken to limit potential misuse.
Chat2DB
Chat2DB is an AI-driven data development and analysis platform that enables users to communicate with databases using natural language. It supports a wide range of databases, including MySQL, PostgreSQL, Oracle, SQLServer, SQLite, MariaDB, ClickHouse, DM, Presto, DB2, OceanBase, Hive, KingBase, MongoDB, Redis, and Snowflake. Chat2DB provides a user-friendly interface that allows users to query databases, generate reports, and explore data using natural language commands. It also offers a variety of features to help users improve their productivity, such as auto-completion, syntax highlighting, and error checking.
LocalAI
LocalAI is a free and open-source OpenAI alternative that acts as a drop-in replacement REST API compatible with OpenAI (Elevenlabs, Anthropic, etc.) API specifications for local AI inferencing. It allows users to run LLMs, generate images, audio, and more locally or on-premises with consumer-grade hardware, supporting multiple model families and not requiring a GPU. LocalAI offers features such as text generation with GPTs, text-to-audio, audio-to-text transcription, image generation with stable diffusion, OpenAI functions, embeddings generation for vector databases, constrained grammars, downloading models directly from Huggingface, and a Vision API. It provides a detailed step-by-step introduction in its Getting Started guide and supports community integrations such as custom containers, WebUIs, model galleries, and various bots for Discord, Slack, and Telegram. LocalAI also offers resources like an LLM fine-tuning guide, instructions for local building and Kubernetes installation, projects integrating LocalAI, and a how-tos section curated by the community. It encourages users to cite the repository when utilizing it in downstream projects and acknowledges the contributions of various software from the community.
optscale
OptScale is an open-source FinOps and MLOps platform that provides cloud cost optimization for all types of organizations and MLOps capabilities like experiment tracking, model versioning, ML leaderboards.
LLM4Decompile
LLM4Decompile is an open-source large language model dedicated to decompilation of Linux x86_64 binaries, supporting GCC's O0 to O3 optimization levels. It focuses on assessing re-executability of decompiled code through HumanEval-Decompile benchmark. The tool includes models with sizes ranging from 1.3 billion to 33 billion parameters, available on Hugging Face. Users can preprocess C code into binary and assembly instructions, then decompile assembly instructions into C using LLM4Decompile. Ongoing efforts aim to expand capabilities to support more architectures and configurations, integrate with decompilation tools like Ghidra and Rizin, and enhance performance with larger training datasets.
SwanLab
SwanLab is an open-source, lightweight AI experiment tracking tool that provides a platform for tracking, comparing, and collaborating on experiments, aiming to accelerate the research and development efficiency of AI teams by 100 times. It offers a friendly API and a beautiful interface, combining hyperparameter tracking, metric recording, online collaboration, experiment link sharing, real-time message notifications, and more. With SwanLab, researchers can document their training experiences, seamlessly communicate and collaborate with collaborators, and machine learning engineers can develop models for production faster.
QualityScaler
QualityScaler is a Windows app powered by AI to enhance, upscale, and de-noise photographs and videos. It provides an easy-to-use GUI for upscaling images and videos using multiple AI models. The tool supports automatic image tiling and merging to avoid GPU VRAM limitations, resizing images/videos before upscaling, and interpolation between the original and upscaled content. QualityScaler is written in Python and utilizes external packages such as torch, onnxruntime-directml, customtkinter, OpenCV, moviepy, and nuitka. It requires Windows 11 or Windows 10, at least 8GB of RAM, and a Directx12 compatible GPU with 4GB VRAM or more. The tool aims to continue improving with upcoming versions by adding new features, enhancing performance, and supporting additional AI architectures.
auto-news
Auto-News is an automatic news aggregator tool that utilizes Large Language Models (LLM) to pull information from various sources such as Tweets, RSS feeds, YouTube videos, web articles, Reddit, and journal notes. The tool aims to help users efficiently read and filter content based on personal interests, providing a unified reading experience and organizing information effectively. It features feed aggregation with summarization, transcript generation for videos and articles, noise reduction, task organization, and deep dive topic exploration. The tool supports multiple LLM backends, offers weekly top-k aggregations, and can be deployed on Linux/MacOS using docker-compose or Kubernetes.
Qmedia
QMedia is an open-source multimedia AI content search engine designed specifically for content creators. It provides rich information extraction methods for text, image, and short video content. The tool integrates unstructured text, image, and short video information to build a multimodal RAG content Q&A system. Users can efficiently search for image/text and short video materials, analyze content, provide content sources, and generate customized search results based on user interests and needs. QMedia supports local deployment for offline content search and Q&A for private data. The tool offers features like content cards display, multimodal content RAG search, and pure local multimodal models deployment. Users can deploy different types of models locally, manage language models, feature embedding models, image models, and video models. QMedia aims to spark new ideas for content creation and share AI content creation concepts in an open-source manner.
For similar tasks
LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.
ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.
onnxruntime-genai
ONNX Runtime Generative AI is a library that provides the generative AI loop for ONNX models, including inference with ONNX Runtime, logits processing, search and sampling, and KV cache management. Users can call a high level `generate()` method, or run each iteration of the model in a loop. It supports greedy/beam search and TopP, TopK sampling to generate token sequences, has built in logits processing like repetition penalties, and allows for easy custom scoring.
jupyter-ai
Jupyter AI connects generative AI with Jupyter notebooks. It provides a user-friendly and powerful way to explore generative AI models in notebooks and improve your productivity in JupyterLab and the Jupyter Notebook. Specifically, Jupyter AI offers: * An `%%ai` magic that turns the Jupyter notebook into a reproducible generative AI playground. This works anywhere the IPython kernel runs (JupyterLab, Jupyter Notebook, Google Colab, Kaggle, VSCode, etc.). * A native chat UI in JupyterLab that enables you to work with generative AI as a conversational assistant. * Support for a wide range of generative model providers, including AI21, Anthropic, AWS, Cohere, Gemini, Hugging Face, NVIDIA, and OpenAI. * Local model support through GPT4All, enabling use of generative AI models on consumer grade machines with ease and privacy.
khoj
Khoj is an open-source, personal AI assistant that extends your capabilities by creating always-available AI agents. You can share your notes and documents to extend your digital brain, and your AI agents have access to the internet, allowing you to incorporate real-time information. Khoj is accessible on Desktop, Emacs, Obsidian, Web, and Whatsapp, and you can share PDF, markdown, org-mode, notion files, and GitHub repositories. You'll get fast, accurate semantic search on top of your docs, and your agents can create deeply personal images and understand your speech. Khoj is self-hostable and always will be.
langchain_dart
LangChain.dart is a Dart port of the popular LangChain Python framework created by Harrison Chase. LangChain provides a set of ready-to-use components for working with language models and a standard interface for chaining them together to formulate more advanced use cases (e.g. chatbots, Q&A with RAG, agents, summarization, extraction, etc.). The components can be grouped into a few core modules:
* **Model I/O:** LangChain offers a unified API for interacting with various LLM providers (e.g. OpenAI, Google, Mistral, Ollama, etc.), allowing developers to switch between them with ease. Additionally, it provides tools for managing model inputs (prompt templates and example selectors) and parsing the resulting model outputs (output parsers).
* **Retrieval:** assists in loading user data (via document loaders), transforming it (with text splitters), extracting its meaning (using embedding models), storing (in vector stores) and retrieving it (through retrievers) so that it can be used to ground the model's responses (i.e. Retrieval-Augmented Generation or RAG).
* **Agents:** "bots" that leverage LLMs to make informed decisions about which available tools (such as web search, calculators, database lookup, etc.) to use to accomplish the designated task.
The different components can be composed together using the LangChain Expression Language (LCEL), as sketched below.
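Since LangChain.dart tracks the Python framework, here is what LCEL composition looks like in the original Python LangChain; this is an illustration of the concept being ported, not the Dart API (the Dart port expresses the same pipeline through its own Runnable interface):

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

# prompt -> model -> parser, composed with the LCEL | operator
prompt = ChatPromptTemplate.from_template("Summarize in one line: {text}")
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

print(chain.invoke({"text": "LangChain chains prompts, models and parsers."}))
```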
danswer
Danswer is an open-source Gen-AI Chat and Unified Search tool that connects to your company's docs, apps, and people. It provides a Chat interface and plugs into any LLM of your choice. Danswer can be deployed anywhere and at any scale: on a laptop, on-premise, or in the cloud. Since you own the deployment, your user data and chats are fully in your own control. Danswer is MIT licensed and designed to be modular and easily extensible. The system also comes fully ready for production usage with user authentication, role management (admin/basic users), chat persistence, and a UI for configuring Personas (AI Assistants) and their Prompts. Danswer also serves as a Unified Search across all common workplace tools such as Slack, Google Drive, Confluence, etc. By combining LLMs and team-specific knowledge, Danswer becomes a subject matter expert for the team. Imagine ChatGPT if it had access to your team's unique knowledge! It enables questions such as "A customer wants feature X, is this already supported?" or "Where's the pull request for feature Y?"
infinity
Infinity is an AI-native database designed for LLM applications, providing incredibly fast full-text and vector search capabilities. It supports a wide range of data types, including vectors, full text, and structured data, and offers a fused search feature that combines multiple embeddings and full text. Infinity is easy to use, with an intuitive Python API and a single-binary architecture that simplifies deployment. It achieves high performance, with 0.1 millisecond query latency on million-scale vector datasets and up to 15K QPS.
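A hedged sketch of the Python API, following the connection and query-builder style shown in the project's README; exact method names (e.g. `knn`) have changed across releases, and the address/port are the documented defaults:

```python
import infinity

# Connect to a locally running Infinity server
infinity_obj = infinity.connect(infinity.NetworkAddress("127.0.0.1", 23817))
db = infinity_obj.get_database("default_db")

table = db.create_table(
    "docs", {"body": {"type": "varchar"}, "vec": {"type": "vector, 4, float"}}
)
table.insert([{"body": "an example row", "vec": [1.0, 1.2, 0.8, 0.9]}])

# Dense-vector search via the fluent query builder; full-text and fused
# search follow the same builder style in the project's examples
res = table.output(["body"]).knn("vec", [3.0, 2.8, 2.7, 3.1], "float", "ip", 2).to_pl()
print(res)
```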
For similar jobs
h2ogpt
h2oGPT is an Apache V2 open-source project that allows users to query and summarize documents or chat with local private GPT LLMs. Its features include:
* A private offline database of any documents (PDFs, Excel, Word, images, video frames, YouTube, audio, code, text, Markdown, etc.)
* A persistent database (Chroma, Weaviate, or in-memory FAISS) using accurate embeddings (instructor-large, all-MiniLM-L6-v2, etc.)
* Efficient use of context using instruct-tuned LLMs (no need for LangChain's few-shot approach)
* Parallel summarization and extraction, reaching an output of 80 tokens per second with the 13B LLaMa2 model
* HYDE (Hypothetical Document Embeddings) for enhanced retrieval based upon LLM responses
* A variety of supported models (LLaMa2, Mistral, Falcon, Vicuna, WizardLM; with AutoGPTQ, 4-bit/8-bit, LoRA, etc.)
* GPU support for HF and LLaMa.cpp GGML models, and CPU support using HF, LLaMa.cpp, and GPT4All models
* Attention sinks for arbitrarily long generation (LLaMa-2, Mistral, MPT, Pythia, Falcon, etc.)
* A UI or CLI with streaming for all models, plus the ability to upload and view documents through the UI (control multiple collaborative or personal collections)
* Vision models (LLaVa, Claude-3, Gemini-Pro-Vision, GPT-4-Vision) and image generation with Stable Diffusion (sdxl-turbo, sdxl) and PlaygroundAI (playv2)
* Voice STT using Whisper with streaming audio conversion
* Voice TTS using MIT-licensed Microsoft Speech T5 with multiple voices and streaming audio conversion, or MPL2-licensed TTS including voice cloning and streaming audio conversion
* An AI assistant voice-control mode for hands-free control of h2oGPT chat
* A bake-off UI mode to compare many models at the same time
* Easy download of model artifacts and control over models like LLaMa.cpp through the UI
* UI authentication by user/password via native or Google OAuth, with per-user state preservation
* Linux, Docker, macOS, and Windows support, with an easy Windows installer for Windows 10 64-bit (CPU/CUDA) and an easy macOS installer (CPU/M1/M2)
* Inference-server support (oLLaMa, HF TGI server, vLLM, Gradio, ExLLaMa, Replicate, OpenAI, Azure OpenAI, Anthropic)
* An OpenAI-compliant server proxy API (h2oGPT acts as a drop-in replacement for an OpenAI server) and a Python client API (to talk to the Gradio server)
* JSON mode with any model via code-block extraction; also MistralAI JSON mode, Claude-3 via function calling with strict schema, OpenAI via JSON mode, and vLLM via guided_json with strict schema
* Web-search integration with chat and document Q/A, plus agents for search, document Q/A, Python code, and CSV frames (experimental, currently best with OpenAI)
* Performance evaluation using reward models, with quality maintained by over 1000 unit and integration tests taking over 4 GPU-hours
mistral.rs
Mistral.rs is a fast LLM inference platform written in Rust. It supports inference on a variety of devices and quantization schemes, and ships with an easy-to-use OpenAI-compatible HTTP server and Python bindings.
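Because the server speaks the OpenAI API, any standard OpenAI client can talk to it. A minimal sketch, assuming a locally running mistral.rs server; the base URL, port, and model id below are placeholders for your setup:

```python
from openai import OpenAI

# Point the standard OpenAI client at the local mistral.rs server
client = OpenAI(base_url="http://localhost:1234/v1", api_key="unused")
resp = client.chat.completions.create(
    model="mistral",  # whatever model id the server was started with
    messages=[{"role": "user", "content": "Say hello from mistral.rs"}],
)
print(resp.choices[0].message.content)
```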
ollama
Ollama is a lightweight, extensible framework for building and running language models on the local machine. It provides a simple API for creating, running, and managing models, as well as a library of pre-built models that can be easily used in a variety of applications. Ollama is designed to be easy to use and accessible to developers of all levels. It is open source and available for free on GitHub.
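For a feel of the API, here is a minimal call to Ollama's local REST endpoint (default port 11434), assuming the `llama3` model has already been pulled:

```python
import requests

# Non-streaming generation request against the local Ollama server
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Why is the sky blue?", "stream": False},
)
print(resp.json()["response"])
```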
llama-cpp-agent
The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs). It lets users chat with LLM models, execute structured function calls, and get structured output (objects). It provides a simple yet robust interface and supports llama-cpp-python and OpenAI endpoints with GBNF grammar support (such as the llama-cpp-python server) and the llama.cpp backend server. It works by generating a formal GBNF (GGML BNF) grammar from the user-defined structures and functions, which llama.cpp then uses to generate only text that is valid under that grammar. In contrast to most GBNF grammar generators, it also supports nested objects, dictionaries, enums, and lists of them.
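To make the grammar mechanism concrete, here is the underlying technique using llama-cpp-python directly (not llama-cpp-agent's own API, which generates such grammars from your Python structures automatically); the model path is a placeholder:

```python
from llama_cpp import Llama, LlamaGrammar

# A tiny GBNF grammar: sampling may only produce "yes" or "no"
grammar = LlamaGrammar.from_string('root ::= "yes" | "no"')

llm = Llama(model_path="model.gguf")  # placeholder path to any GGUF model
out = llm("Is water wet? Answer yes or no: ", grammar=grammar, max_tokens=4)
print(out["choices"][0]["text"])
```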
llama_ros
This repository provides a set of ROS 2 packages to integrate llama.cpp into ROS 2. By using the llama_ros packages, you can easily incorporate the powerful optimization capabilities of llama.cpp into your ROS 2 projects by running GGUF-based LLMs and VLMs.
MITSUHA
OneReality (MITSUHA) is a virtual waifu/assistant that you can speak to through your mic, and it speaks back to you. Its features include:
* Speech input via your mic and spoken responses
* Short-term and long-term memory
* Can open apps
* Smarter than you
* Fluent in English, Japanese, Korean, and Chinese
* Can control your smart home like Alexa if you set up Tuya (more info in Prerequisites)
It is built with Python, llama-cpp-python, Whisper, SpeechRecognition, PocketSphinx, VITS-fast-fine-tuning, VITS-simple-api, HyperDB, Sentence Transformers, and Tuya Cloud IoT.
wenxin-starter
WenXin-Starter is a Spring Boot starter for Baidu's "Wenxin Qianfan" (WENXINWORKSHOP) large model, which helps you quickly access Baidu's AI capabilities. It fully follows the official Wenxin Qianfan API documentation and supports text-to-image generation, built-in dialogue memory, and streaming dialogue responses. It also provides per-model QPS control and a queuing mechanism. Plugin support is planned.
FlexFlow
FlexFlow Serve is an open-source compiler and distributed system for **low latency**, **high performance** LLM serving. FlexFlow Serve outperforms existing systems by 1.3-2.0x for single-node, multi-GPU inference and by 1.4-2.4x for multi-node, multi-GPU inference.