local-assistant-examples

Build your own ChatPDF and run it locally

Stars: 281


The Local Assistant Examples repository is a collection of educational examples showcasing the use of large language models (LLMs). It was initially created for a blog post on building a RAG model locally, and has since expanded to include more examples and educational material. Each example is housed in its own folder with a dedicated README providing instructions on how to run it. The repository is designed to be simple and educational, not for production use.

README:

Local Assistant Examples

Welcome to the Local Assistant Examples repository — a collection of educational examples built on top of large language models (LLMs). This repository was initially created as part of my blog post, Build your own RAG and run it locally: Langchain + Ollama + Streamlit.

Previously named local-rag-example, this project has been renamed to local-assistant-examples to reflect the broader scope of its content. Over time, I decided to expand it to include more examples and educational material, consolidating everything into one place rather than maintaining multiple repositories. Each example now lives in its own folder, with a dedicated README explaining the example and providing instructions on how to run it. The first example, originally from the blog post, can now be found in the simple-rag folder.

Available Examples

  • Simple RAG: Demonstrates how to build and run a Retrieval-Augmented Generation (RAG) model locally.
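The simple-rag example itself is built with Langchain, Ollama, and Streamlit (see its own README for the real code). As a dependency-free conceptual sketch of the retrieve-then-generate flow it demonstrates, here is a toy pipeline in plain Python: the retriever is a bag-of-words word-overlap scorer and the "LLM" is a stub standing in for the local Ollama call.

```python
from collections import Counter

# A tiny in-memory "document store". A real RAG app would chunk PDFs and
# index embeddings in a vector database instead.
documents = [
    "Ollama runs large language models locally on your machine.",
    "Streamlit turns Python scripts into shareable web apps.",
    "Retrieval-Augmented Generation grounds LLM answers in your documents.",
]

def score(query: str, doc: str) -> int:
    """Toy relevance score: count words shared by query and document."""
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    return sorted(documents, key=lambda doc: score(query, doc), reverse=True)[:k]

def generate(query: str, context: list[str]) -> str:
    """Stub for the generation step: real code would prompt a local LLM
    (e.g. via Ollama) with the retrieved context here."""
    return f"Answer to {query!r}, grounded in: {context[0]}"

question = "How do I run models locally?"
print(generate(question, retrieve(question)))
```

The shape is the same as the full example: retrieve relevant context first, then hand that context to the model alongside the question so the answer is grounded in your documents rather than the model's parametric memory alone.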

More examples will be added soon, so stay tuned!

Note: This repository is not intended for production use. It is designed to be as simple as possible to help newcomers understand the concepts behind building LLM applications.
