llm-x

LLMX: Easiest 3rd party Local LLM UI for the web!

LLM X is a ChatGPT-style UI for the niche group of folks who run Ollama (think of it as an offline ChatGPT server) locally. It supports sending and receiving images and text, and works offline through PWA (Progressive Web App) standards. The project uses React, TypeScript, Lodash, MobX-State-Tree, Tailwind CSS, DaisyUI, NextUI, Highlight.js, React Markdown, kbar, Yet Another React Lightbox, Vite, and the Vite PWA plugin. It is inspired by the ollama-ui project and by Perplexity.ai's UI advancements in the LLM UI space. The project is still under development, but it is already a great way to get started with building your own LLM UI.

README:

Deployed to GitHub Pages

LLM X

LLM X logo

Privacy statement:

LLM X does not make any external API calls (go ahead, check your network tab and see the Fetch section). Your chats and image generations are 100% private. This site/app works completely offline.

Issues:

LLM X (web app) will not connect to an insecure server. This means you can use LLM X on localhost (which is considered a secure context), but if you are trying to use llm-x over a network, the server must be served over HTTPS or it will not work.
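One illustrative workaround (an assumption, not from the project docs) is to put a TLS-terminating reverse proxy such as Caddy in front of the backend so the browser sees an HTTPS origin; the hostname below is a placeholder:

```shell
# Hypothetical setup: expose an Ollama server over HTTPS via a Caddy reverse proxy.
# "my-ollama-box.local" is a placeholder hostname; replace it with your machine's
# network name. Ollama's default port is 11434.
caddy reverse-proxy --from my-ollama-box.local:443 --to localhost:11434
```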

Recent additions:

  • Users can connect to OpenAI-compatible servers
  • Users can connect to multiple instances of the same server type at the same time
  • Text generation through LM Studio is here!
  • Regenerating a bot message adds it to a message variation list
  • Message headers and footers are sticky with the message, useful for long messages

How To Use:

Prerequisites for application

  • Ollama: Download and install Ollama
    • Pull down a model (or a few) from the library, e.g. ollama pull llava (or use the app)
  • LM Studio: Download and install LM Studio
  • AUTOMATIC1111: Git clone AUTOMATIC1111 (for image generation)
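Taken together, the prerequisite steps above look roughly like this in a terminal (the second model is just an example; pull whatever you like from the library):

```shell
# After installing Ollama (see https://ollama.com), pull a model or two:
ollama pull llava        # the multimodal model used in the examples above
ollama pull llama3       # example: an additional text-only model

# For image generation, clone AUTOMATIC1111's web UI:
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
```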

How to use web client (no install):

Prerequisites for web client

  • Ollama Options:
    • Use Ollama's FAQ to set OLLAMA_ORIGINS = https://mrdjohnson.github.io
    • Run this in your terminal OLLAMA_ORIGINS=https://mrdjohnson.github.io ollama serve
      • (PowerShell users: $env:OLLAMA_ORIGINS="https://mrdjohnson.github.io"; ollama serve)
  • LM Studio:
    • Run this in your terminal: lms server start --cors=true
  • A1111:
    • Run this in the a1111 project folder: ./webui.sh --api --listen --cors-allow-origins "*"
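For reference, the three server commands above collected in one place (the GitHub Pages origin is the one the hosted web client is served from):

```shell
# Ollama: allow the hosted LLM X origin via CORS
OLLAMA_ORIGINS=https://mrdjohnson.github.io ollama serve

# LM Studio: start its local server with CORS enabled
lms server start --cors=true

# AUTOMATIC1111: run inside the a1111 project folder
./webui.sh --api --listen --cors-allow-origins "*"
```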

  • Use your browser to go to LLM-X
  • Go offline! (optional)
  • Start chatting!

How to use offline:

  • Follow instructions for "How to use web client"
  • In your browser's address bar there should be a download/install button; press that.
  • Go offline! (optional)
  • Start chatting!

How to use from project source:

Prerequisites for project source

  • Ollama: Run this in your terminal ollama serve
  • LM Studio: Run this in your terminal: lms server start
  • A1111: Run this in the a1111 project folder: ./webui.sh --api --listen
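When running from project source, the servers do not need the CORS overrides from the web-client section, so the commands above reduce to:

```shell
ollama serve                  # Ollama
lms server start              # LM Studio
./webui.sh --api --listen     # AUTOMATIC1111, from the a1111 project folder
```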

Vite preview mode

  • Pull down this project, then run yarn install and yarn preview
  • Go offline! (optional)
  • Start chatting!
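Spelled out, the preview-mode steps above look like this (the clone URL is inferred from the project's GitHub Pages address):

```shell
git clone https://github.com/mrdjohnson/llm-x.git
cd llm-x
yarn install
yarn preview   # builds and serves the production bundle locally
```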

Docker

  • Run this in your terminal: docker compose up -d
  • Open http://localhost:3030
  • Go offline! (optional)
  • Start chatting!

Goals / Features

  • [x] LM Studio integration!
  • [x] Text to Image generation through AUTOMATIC1111
  • [x] OpenAI server support!
  • [x] Image to Text using Ollama's multimodal abilities
  • [x] Offline Support via PWA technology
  • [x] Code highlighting with Highlight.js (only handles common languages for now)
  • [x] Search/Command bar provides quick access to app features through kbar
  • [x] Allow users to have as many connections as they want!
  • [x] Text Entry and Response to Ollama
  • [x] Auto-saved Chat history
  • [x] Manage multiple chats
  • [x] Copy/Edit/Delete messages sent or received
  • [x] Re-write user message (triggering response refresh)
  • [x] System Prompt customization through "Personas" feature
  • [x] Theme changing through DaisyUI
  • [x] Chat image modal previews through Yet Another React Lightbox
  • [x] Import / Export chat(s)
  • [x] Continuous Deployment! Merging to the master branch triggers a new GitHub Pages build/deploy automatically

Screenshots:

Conversation about logo
Logo convo screenshot
Image generation example!
Image generation screenshot
Showing off omnibar and code
Omnibar and code screenshot
Showing off code and light theme
Code and light theme screenshot
Responding about a cat
Cat screenshot
Another logo response
Logo 2 screenshot

What is this? A ChatGPT-style UI for the niche group of folks who run Ollama (think of it as an offline ChatGPT server) locally. Supports sending and receiving images and text! WORKS OFFLINE through PWA (Progressive Web App) standards (it's not dead!)

Why do this? I have been interested in LLM UIs for a while now and this seemed like a good intro application. I've also been introduced to a lot of modern technologies thanks to this project; it's been fun!

Why so many buzzwords? I couldn't help but be cool 😎

Tech Stack (thank you's):

Logic helpers:

  • Lodash
  • MobX-State-Tree

UI Helpers:

  • Tailwind CSS
  • DaisyUI
  • NextUI
  • Highlight.js
  • React Markdown
  • kbar
  • Yet Another React Lightbox

Project setup helpers:

  • Vite
  • Vite PWA plugin

Inspiration: the ollama-ui project, which allows users to connect to Ollama via a web app.

Perplexity.ai: Perplexity has made some amazing UI advancements in the LLM UI space, and I have been very interested in getting to that point. Hopefully this starter project lets me get closer to doing something similar!

Getting started

(please note the minimum engine requirements in the package json)

Clone the project, and run yarn in the root directory

yarn dev starts a local instance and opens up a browser tab under https:// (for PWA reasons)
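So a minimal from-source dev setup, assuming Node and Yarn versions that satisfy the engine requirements in package.json, is (the clone URL is inferred from the project's GitHub Pages address):

```shell
git clone https://github.com/mrdjohnson/llm-x.git
cd llm-x
yarn          # install dependencies
yarn dev      # starts a local instance and opens a browser tab under https://
```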

MISC

  • LangChain.js was attempted while spiking on this app, but unfortunately it was not set up correctly for stopping incoming streams. I hope this gets fixed in the future, or, if possible, that a custom LLM agent can be utilized in order to use LangChain.

    • Edit: LangChain is working and added to the app now!
  • Originally I used create-react-app 👴 while making this project, without knowing it is no longer maintained; I am now using Vite. 🤞 This already allows me to use libs like ollama-js that I could not use before. Will be testing more with LangChain very soon.

  • This readme was written with https://stackedit.io/app

  • Changes to the main branch trigger an immediate deploy to https://mrdjohnson.github.io/llm-x/
