
rlama
A powerful document AI question-answering tool that connects to your local Ollama models. Create, manage, and interact with RAG systems for all your document needs.
Stars: 578

RLAMA is a powerful AI-driven question-answering tool that integrates seamlessly with local Ollama models. It enables users to create, manage, and interact with Retrieval-Augmented Generation (RAG) systems tailored to their documentation needs. RLAMA follows a clean architecture pattern with clear separation of concerns, focusing on lightweight, portable RAG capabilities with minimal dependencies. The tool processes documents, generates embeddings, stores RAG systems locally, and returns contextually informed responses to user queries. It supports text, code, and common office document formats, with troubleshooting steps available for common issues such as Ollama accessibility, text extraction problems, and answer relevance.
README:
RLAMA is a powerful AI-driven question-answering tool for your documents, seamlessly integrating with your local Ollama models. It enables you to create, manage, and interact with Retrieval-Augmented Generation (RAG) systems tailored to your documentation needs.
Prerequisites:
- Ollama installed and running
Installation:
curl -fsSL https://raw.githubusercontent.com/dontizi/rlama/main/install.sh | sh
RLAMA is built with:
- Core Language: Go (chosen for performance, cross-platform compatibility, and single binary distribution)
- CLI Framework: Cobra (for command-line interface structure)
- LLM Integration: Ollama API (for embeddings and completions)
- Storage: Local filesystem-based storage (JSON files for simplicity and portability)
- Vector Search: Custom implementation of cosine similarity for embedding retrieval
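The vector search in particular is simple enough to sketch in full. Below is a minimal Go illustration of cosine-similarity retrieval, assuming embeddings are plain float64 slices; the package and function names are illustrative, not RLAMA's actual pkg/vector API:

// Minimal sketch of cosine-similarity retrieval; names are
// illustrative, not RLAMA's actual pkg/vector API.
package vector

import (
	"math"
	"sort"
)

// CosineSimilarity returns the cosine of the angle between a and b.
// Assumes both vectors have the same length.
func CosineSimilarity(a, b []float64) float64 {
	var dot, normA, normB float64
	for i := range a {
		dot += a[i] * b[i]
		normA += a[i] * a[i]
		normB += b[i] * b[i]
	}
	if normA == 0 || normB == 0 {
		return 0
	}
	return dot / (math.Sqrt(normA) * math.Sqrt(normB))
}

// TopK returns the indices of the k stored embeddings most similar
// to the query embedding, best match first.
func TopK(query []float64, embeddings [][]float64, k int) []int {
	type scored struct {
		idx   int
		score float64
	}
	scores := make([]scored, len(embeddings))
	for i, e := range embeddings {
		scores[i] = scored{i, CosineSimilarity(query, e)}
	}
	sort.Slice(scores, func(i, j int) bool { return scores[i].score > scores[j].score })
	if k > len(scores) {
		k = len(scores)
	}
	out := make([]int, k)
	for i := 0; i < k; i++ {
		out[i] = scores[i].idx
	}
	return out
}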
RLAMA follows a clean architecture pattern with clear separation of concerns:
rlama/
├── cmd/                         # CLI commands (using Cobra)
│   ├── root.go                  # Base command
│   ├── rag.go                   # Create RAG systems
│   ├── run.go                   # Query RAG systems
│   └── ...
├── internal/
│   ├── client/                  # External API clients
│   │   └── ollama_client.go     # Ollama API integration
│   ├── domain/                  # Core domain models
│   │   ├── rag.go               # RAG system entity
│   │   └── document.go          # Document entity
│   ├── repository/              # Data persistence
│   │   └── rag_repository.go    # Handles saving/loading RAGs
│   └── service/                 # Business logic
│       ├── rag_service.go       # RAG operations
│       ├── document_loader.go   # Document processing
│       └── embedding_service.go # Vector embeddings
└── pkg/                         # Shared utilities
    └── vector/                  # Vector operations
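To make the domain and repository layers concrete, here is a hypothetical sketch of the entities that end up serialized to JSON on disk; the real structs in internal/domain/rag.go and document.go may differ:

// Hypothetical sketch of the persisted entities; the actual field
// names in internal/domain may differ.
package domain

// Document is one indexed file: its source path, extracted text,
// and the embedding vector obtained from Ollama.
type Document struct {
	Path      string    `json:"path"`
	Content   string    `json:"content"`
	Embedding []float64 `json:"embedding"`
}

// RagSystem is what gets saved under ~/.rlama as a JSON file.
type RagSystem struct {
	Name      string     `json:"name"`       // e.g. "documentation"
	ModelName string     `json:"model_name"` // e.g. "llama3"
	Documents []Document `json:"documents"`
}

With those pieces in place, data flows through the system as follows: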
- Document Processing: Documents are loaded from the file system, parsed based on their type, and converted to plain text.
- Embedding Generation: Document text is sent to Ollama to generate vector embeddings.
- Storage: The RAG system (documents + embeddings) is stored in the user's home directory (~/.rlama).
- Query Process: When a user asks a question, it is converted to an embedding and compared against the stored document embeddings, and the most relevant content is retrieved.
- Response Generation: The retrieved content and the question are sent to Ollama to generate a contextually informed response.
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│  Documents  │────>│  Document   │────>│  Embedding  │
│   (Input)   │     │ Processing  │     │ Generation  │
└─────────────┘     └─────────────┘     └─────────────┘
                                               │
                                               ▼
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│    Query    │────>│   Vector    │<────│ Vector Store│
│  Response   │     │   Search    │     │ (RAG System)│
└─────────────┘     └─────────────┘     └─────────────┘
       ▲                   │
       │                   ▼
┌─────────────┐     ┌─────────────┐
│   Ollama    │<────│   Context   │
│     LLM     │     │  Building   │
└─────────────┘     └─────────────┘
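The embedding and response-generation steps reduce to two HTTP calls against Ollama. The sketch below follows the shapes of the publicly documented Ollama endpoints (/api/embeddings and /api/generate); RLAMA's actual client in internal/client/ollama_client.go may differ in detail:

// Sketch of the two Ollama calls the pipeline relies on; RLAMA's
// actual client (internal/client/ollama_client.go) may differ.
package client

import (
	"bytes"
	"encoding/json"
	"net/http"
)

const baseURL = "http://localhost:11434"

// Embed asks Ollama for a vector embedding of the given text.
func Embed(model, text string) ([]float64, error) {
	payload, err := json.Marshal(map[string]string{"model": model, "prompt": text})
	if err != nil {
		return nil, err
	}
	resp, err := http.Post(baseURL+"/api/embeddings", "application/json", bytes.NewReader(payload))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var out struct {
		Embedding []float64 `json:"embedding"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	return out.Embedding, nil
}

// Generate sends the question plus retrieved context to the model.
func Generate(model, prompt string) (string, error) {
	payload, err := json.Marshal(map[string]any{"model": model, "prompt": prompt, "stream": false})
	if err != nil {
		return "", err
	}
	resp, err := http.Post(baseURL+"/api/generate", "application/json", bytes.NewReader(payload))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var out struct {
		Response string `json:"response"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	return out.Response, nil
}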
RLAMA is designed to be lightweight and portable, focusing on providing RAG capabilities with minimal dependencies. The entire system runs locally, with the only external dependency being Ollama for LLM capabilities.
You can get help on all commands by using:
rlama --help
These flags can be used with any command:
--host string Ollama host (default: localhost)
--port string Ollama port (default: 11434)
Creates a new RAG system by indexing all documents in the specified folder.
rlama rag [model] [rag-name] [folder-path]
Parameters:
- model: Name of the Ollama model to use (e.g., llama3, mistral, gemma).
- rag-name: Unique name to identify your RAG system.
- folder-path: Path to the folder containing your documents.
Example:
rlama rag llama3 documentation ./docs
Starts an interactive session for querying an existing RAG system.
rlama run [rag-name]
Parameters:
- rag-name: Name of the RAG system to use.
Example:
rlama run documentation
> How do I install the project?
> What are the main features?
> exit
Displays a list of all available RAG systems.
rlama list
Permanently deletes a RAG system and all its indexed documents.
rlama delete [rag-name] [--force/-f]
Parameters:
- rag-name: Name of the RAG system to delete.
- --force or -f: (Optional) Delete without asking for confirmation.
Example:
rlama delete old-project
Or to delete without confirmation:
rlama delete old-project --force
Checks if a new version of RLAMA is available and installs it.
rlama update [--force/-f]
Options:
- --force or -f: (Optional) Update without asking for confirmation.
Displays the current version of RLAMA.
rlama --version
or
rlama -v
To uninstall RLAMA:
If you installed via go install:
rlama uninstall
RLAMA stores its data in ~/.rlama. To remove it:
rm -rf ~/.rlama
RLAMA supports many file formats:
- Text: .txt, .md, .html, .json, .csv, .yaml, .yml, .xml
- Code: .go, .py, .js, .java, .c, .cpp, .h, .rb, .php, .rs, .swift, .kt
- Documents: .pdf, .docx, .doc, .rtf, .odt, .pptx, .ppt, .xlsx, .xls, .epub
Installing dependencies via install_deps.sh is recommended to improve support for certain formats.
If you encounter connection errors to Ollama:
- Check that Ollama is running.
- By default, Ollama must be accessible at http://localhost:11434 or at the host and port specified by the OLLAMA_HOST environment variable.
- If your Ollama instance runs on a different host or port, use the --host and --port flags:
  rlama --host 192.168.1.100 --port 8000 list
  rlama --host my-ollama-server --port 11434 run my-rag
- Check Ollama logs for potential errors.
If you encounter problems with certain formats:
- Install dependencies via ./scripts/install_deps.sh.
- Verify that your system has the required tools (pdftotext, tesseract, etc.).
If the answers are not relevant:
- Check that the documents are properly indexed with rlama list.
- Make sure the content of the documents is properly extracted.
- Try rephrasing your question more precisely.
For any other issues, please open an issue on the GitHub repository providing:
- The exact command used.
- The complete output of the command.
- Your operating system and architecture.
- The RLAMA version (rlama --version).
RLAMA provides multiple ways to connect to your Ollama instance:
- Command-line flags (highest priority):
  rlama --host 192.168.1.100 --port 8080 run my-rag
- Environment variable:
  # Format: "host:port" or just "host"
  export OLLAMA_HOST=remote-server:8080
  rlama run my-rag
- Default values (used if no other method is specified):
  - Host: localhost
  - Port: 11434
The precedence order is: command-line flags > environment variable > default values.
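As a rough illustration, that precedence order amounts to something like the following Go sketch (the helper name is hypothetical, not RLAMA's actual code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// resolveOllamaAddress applies the precedence order described above:
// command-line flags > OLLAMA_HOST environment variable > defaults.
// The helper name is illustrative, not RLAMA's actual code.
func resolveOllamaAddress(flagHost, flagPort string) (host, port string) {
	host, port = "localhost", "11434" // built-in defaults
	if env := os.Getenv("OLLAMA_HOST"); env != "" {
		if h, p, ok := strings.Cut(env, ":"); ok {
			host, port = h, p // "host:port" form
		} else {
			host = env // bare "host" keeps the default port
		}
	}
	if flagHost != "" {
		host = flagHost // flags win over everything
	}
	if flagPort != "" {
		port = flagPort
	}
	return host, port
}

func main() {
	h, p := resolveOllamaAddress("", "")
	fmt.Println(h + ":" + p) // prints localhost:11434 unless OLLAMA_HOST is set
}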
Alternative AI tools for rlama
Similar Open Source Tools


llm-functions
LLM Functions is a project that enables the enhancement of large language models (LLMs) with custom tools and agents developed in Bash, JavaScript, and Python. Users can create tools for their LLM to execute system commands, access web APIs, or perform other complex tasks triggered by natural language prompts. The project provides a framework for building tools and agents: tools are functions written in the user's preferred language, with JSON declarations generated automatically from comments, while agents combine prompts, function calling, and knowledge (RAG) to create conversational AI agents. The project is designed to be user-friendly and allows users to easily extend the capabilities of their language models.

nano-graphrag
nano-GraphRAG is a simple, easy-to-hack implementation of GraphRAG that provides a smaller, faster, and cleaner version of the official implementation. It is about 800 lines of code, small yet scalable, asynchronous, and fully typed. The tool supports incremental insert, async methods, and various parameters for customization. Users can replace storage components and LLM functions as needed. It also allows for embedding function replacement and comes with pre-defined prompts for entity extraction and community reports. However, some features like covariates and global search implementation differ from the original GraphRAG. Future versions aim to address issues related to data source ID, community description truncation, and add new components.

text-extract-api
The text-extract-api is a powerful tool that allows users to convert images, PDFs, or Office documents to Markdown text or JSON structured documents with high accuracy. It is built using FastAPI and utilizes Celery for asynchronous task processing, with Redis for caching OCR results. The tool provides features such as PDF/Office to Markdown and JSON conversion, improving OCR results with LLama, removing Personally Identifiable Information from documents, distributed queue processing, caching using Redis, switchable storage strategies, and a CLI tool for task management. Users can run the tool locally or on cloud services, with support for GPU processing. The tool also offers an online demo for testing purposes.

ai-commits-intellij-plugin
AI Commits is a plugin for IntelliJ-based IDEs and Android Studio that generates commit messages using git diff and OpenAI. It offers features such as generating commit messages from diff using OpenAI API, computing diff only from selected files and lines in the commit dialog, creating custom prompts for commit message generation, using predefined variables and hints to customize prompts, choosing any of the models available in OpenAI API, setting OpenAI network proxy, and setting custom OpenAI compatible API endpoint.

aim
Aim is a command-line tool for downloading and uploading files with resume support. It supports various protocols including HTTP, FTP, SFTP, SSH, and S3. Aim features an interactive mode for easy navigation and selection of files, as well as the ability to share folders over HTTP for easy access from other devices. Additionally, it offers customizable progress indicators and output formats, and can be integrated with other commands through piping. Aim can be installed via pre-built binaries or by compiling from source, and is also available as a Docker image for platform-independent usage.

python-tgpt
Python-tgpt is a Python package that enables seamless interaction with over 45 free LLM providers without requiring an API key. It also provides image generation capabilities. The name _python-tgpt_ draws inspiration from its parent project tgpt, which is written in Go. Through this Python adaptation, users can effortlessly engage with a number of freely available LLMs, fostering a smoother AI interaction experience.

Discord-AI-Chatbot
Discord AI Chatbot is a versatile tool that seamlessly integrates into your Discord server, offering a wide range of capabilities to enhance your communication and engagement. With its advanced language model, the bot excels at imaginative generation, providing endless possibilities for creative expression. Additionally, it offers secure credential management, ensuring the privacy of your data. The bot's hybrid command system combines the best of slash and normal commands, providing flexibility and ease of use. It also features mention recognition, ensuring prompt responses whenever you mention it or use its name. The bot's message handling capabilities prevent confusion by recognizing when you're replying to others. You can customize the bot's behavior by selecting from a range of pre-existing personalities or creating your own. The bot's web access feature unlocks a new level of convenience, allowing you to interact with it from anywhere. With its open-source nature, you have the freedom to modify and adapt the bot to your specific needs.

backend.ai
Backend.AI is a streamlined, container-based computing cluster platform that hosts popular computing/ML frameworks and diverse programming languages, with pluggable heterogeneous accelerator support including CUDA GPU, ROCm GPU, TPU, IPU and other NPUs. It allocates and isolates the underlying computing resources for multi-tenant computation sessions on-demand or in batches with customizable job schedulers with its own orchestrator. All its functions are exposed as REST/GraphQL/WebSocket APIs.

mediasoup-client-aiortc
mediasoup-client-aiortc is a handler for the aiortc Python library, allowing Node.js applications to connect to a mediasoup server using WebRTC for real-time audio, video, and DataChannel communication. It facilitates the creation of Worker instances to manage Python subprocesses, obtain audio/video tracks, and create mediasoup-client handlers. The tool supports features like getUserMedia, handlerFactory creation, and event handling for subprocess closure and unexpected termination. It provides custom classes for media stream and track constraints, enabling diverse audio/video sources like devices, files, or URLs. The tool enhances WebRTC capabilities in Node.js applications through seamless Python subprocess communication.

ps-fuzz
The Prompt Fuzzer is an open-source tool that helps you assess the security of your GenAI application's system prompt against various dynamic LLM-based attacks. It provides a security evaluation based on the outcome of these attack simulations, enabling you to strengthen your system prompt as needed. The Prompt Fuzzer dynamically tailors its tests to your application's unique configuration and domain. The Fuzzer also includes a Playground chat interface, giving you the chance to iteratively improve your system prompt, hardening it against a wide spectrum of generative AI attacks.

airbadge
Airbadge is a Stripe addon for Auth.js that simplifies the process of creating a SaaS site by integrating payment, authentication, gating, self-service account management, webhook handling, trials & free plans, session data, and more. It allows users to launch a SaaS app without writing any authentication or payment code. The project is open source and free to use with optional paid features under the BSL License.

r2ai
r2ai is a tool designed to run a language model locally without internet access. It can be used to entertain users or assist in answering questions related to radare2 or reverse engineering. The tool allows users to prompt the language model, index large codebases, slurp file contents, embed the output of an r2 command, define different system-level assistant roles, set environment variables, and more. It is accessible as an r2lang-python plugin and can be scripted from various languages. Users can use different models, adjust query templates dynamically, load multiple models, and make them communicate with each other.

TalkWithGemini
Talk With Gemini is a web application that allows users to deploy their private Gemini application for free with one click. It supports Gemini Pro and Gemini Pro Vision models. The application features talk mode for direct communication with Gemini, visual recognition for understanding picture content, full Markdown support, automatic compression of chat records, privacy and security with local data storage, well-designed UI with responsive design, fast loading speed, and multi-language support. The tool is designed to be user-friendly and versatile for various deployment options and language preferences.

openai-edge-tts
This project provides a local, OpenAI-compatible text-to-speech (TTS) API using `edge-tts`. It emulates the OpenAI TTS endpoint (`/v1/audio/speech`), enabling users to generate speech from text with various voice options and playback speeds, just like the OpenAI API. `edge-tts` uses Microsoft Edge's online text-to-speech service, making it completely free. The project supports multiple audio formats, adjustable playback speed, and voice selection options, providing a flexible and customizable TTS solution for users.

hayhooks
Hayhooks is a tool that simplifies the deployment and serving of Haystack pipelines as REST APIs. It allows users to wrap their pipelines with custom logic and expose them via HTTP endpoints, including OpenAI-compatible chat completion endpoints. With Hayhooks, users can easily convert their Haystack pipelines into API services with minimal boilerplate code.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.