dialog

RAG LLM Ops App for easy deployment and testing

Stars: 315

Dialog is an API-focused tool designed to simplify the deployment of Large Language Models (LLMs) for programmers interested in AI. It lets users deploy any LLM based on the structure provided by dialog-lib, so they can spend less time coding and more time training their models. The project started out humanizing Retrieval-Augmented Generation (RAG) applications and now offers broader features for RAG deployment and maintenance. Dialog requires a knowledge base in CSV format and a prompt configuration in TOML format to function. It provides functionality for loading data into the database, processing conversations, and connecting to the LLM, with options to customize prompts and parameters, and it relies on a handful of environment variables for setup and configuration.
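As a rough sketch of those two inputs (the CSV columns and TOML keys below are illustrative assumptions, not the authoritative schema; check the dialog-lib documentation for the exact format), the files might look like this:

knowledge_base.csv:

category,subcategory,question,content
faq,billing,How do I cancel my plan?,You can cancel at any time from the account settings page.

prompt.toml:

# Assumed layout: a single [prompt] table holding the system header sent to the LLM.
[prompt]
header = "You are a helpful, friendly assistant. Answer only using the retrieved context."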

README:

talkd/dialog

For programmers who are interested in AI and are deploying RAGs without prior knowledge of API development, Dialog is an app that simplifies RAG deployment. It uses modern frameworks for web and LLM interaction, letting you spend less time coding and more time training your model.

This repository serves as an API focused on letting you deploy any LLM you want, based on the structure provided by dialog-lib.

We started by focusing on humanizing RAGs (keeping the answer scope tightly delimited and the tone human-sounding), but we are expanding toward broader approaches that improve RAG deployment and maintenance for everyone. Check out our current architecture below and, for more information, check our documentation!

Running the project for the first time

We assume you are familiar with Docker; if you are not, this amazing video tutorial will help you get started. If you want a more detailed getting started, follow the Quick Start section in our docs for setup.

To run the project for the first time, you need to have Docker and Docker Compose installed on your machine. If you don't have them, follow the instructions on the Docker website.

After installing Docker and Docker Compose, clone the repository and run the following command:

cp .env.sample .env

Inside the .env file, set the OPENAI_API_KEY variable with your OpenAI API key.
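For example, the relevant line would look like this (the value is just a placeholder; the other variables shipped in .env.sample can usually be left at their defaults):

OPENAI_API_KEY=sk-your-key-here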

Then, run the following command:

docker-compose up

This will start two services:

  • db: runs the PostgreSQL database that supports chat history and document retrieval for RAG;

  • dialog: the service that exposes the API.
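Once both services are up, you can sanity-check the deployment. The commands below assume the API is published on localhost:8000 and, being a FastAPI-style service, serves its interactive docs at /docs; adjust the port and path to whatever your docker-compose.yml actually maps.

docker-compose ps

curl http://localhost:8000/docs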

Tutorials

We've written some tutorials to help you get started with the project:

Also, you can check our documentation for more information.

Our Sponsors

We are thankful for all the support we receive from our sponsors, who help us keep the project running and improving. If you want to become a sponsor, check out our Sponsors Page.

Current Sponsors:

GitHub Accelerator, Buser

Using Open-WebUI as a front-end

In partnership with Open-WebUI, we made their chat interface available as a front-end for Dialog as well. If you want to use it with your own application, point Docker Compose at the docker-compose-open-webui.yml file:

docker-compose -f docker-compose-open-webui.yml up

Maintainers

We are thankful for all of the contributions we receive, mostly reviewed by the awesome team of maintainers we have:

made with 💜 by talkd.ai
