
leettools
AI Search tools.
Stars: 283

LeetTools is an AI search assistant that can perform highly customizable search workflows and generate customized format results based on both web and local knowledge bases. It provides an automated document pipeline for data ingestion, indexing, and storage, allowing users to focus on implementing workflows without worrying about infrastructure. LeetTools can run with minimal resource requirements on the command line with configurable LLM settings and supports different databases for various functions. Users can configure different functions in the same workflow to use different LLM providers and models.
README:
- AI Search Assistant with Local Knowledge Bases
- Quick Start
- Use Different LLM and Search Providers
- Usage Examples
- Main Components
- Community
LeetTools is an AI search assistant that can perform highly customizable search workflows and generate customized format results based on both web and local knowledge bases. With an automated document pipeline that handles data ingestion, indexing, and storage, we can focus on implementing the workflow without worrying about the underlying infrastructure.
LeetTools can run with minimal resource requirements on the command line with a DuckDB-backend and configurable LLM settings. It can also use other dedicated databases for different functions, e.g., we can use MongoDB for document storage, Milvus for vector search, and Neo4j for graph search. We can configure different functions in the same workflow to use different LLM providers and models.
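This README does not list the exact settings for swapping in these storage backends, so purely as a hypothetical sketch (the variable names below are illustrative placeholders, not confirmed LeetTools settings; consult the documentation for the real ones), such a configuration might look like:
% cat > .env.fullstack <<EOF
# hypothetical variable names, for illustration only
EDS_DOCSTORE_TYPE=mongodb
EDS_VECTOR_STORE_TYPE=milvus
EDS_GRAPH_STORE_TYPE=neo4j
EOF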
Here is an illustration of the LeetTools digest flow where it can search the web (or local KB) and generate a digest article from the search results:
And here is an example output article generated by the digest flow for the query "How does Ollama work?".
Currently LeetTools provides the following workflows (see the invocation sketch after this list):
- answer : Answer the query directly with source references (similar to Perplexity). 📖
- digest : Generate a multi-section digest article from search results (similar to Google Deep Research). 📖
- search : Search for top segments that match the query. 📖
- news : Generate a list of news items for the specified topic. 📖
- extract : Extract and store structured data for a given schema. 📖
- opinions: Generate sentiment analysis and facts from the search results. 📖
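All of these flows share the same CLI entry point, with the -t option selecting the flow. A minimal invocation sketch follows; the queries and KB names are placeholders, and some flows (e.g., extract) take additional flow-specific -p parameters not shown here:
% leet flow -t answer -q "<your query>" -k <kb_name> -l info
% leet flow -t digest -q "<your query>" -k <kb_name> -l info
% leet flow -t search -q "<your query>" -k <kb_name> -l info
% leet flow -t news -q "<your topic>" -k <kb_name> -l info
% leet flow -t extract -q "<your query>" -k <kb_name> -l info
% leet flow -t opinions -q "<your query>" -k <kb_name> -l info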
Before you start
-
.env file: We can use any OpenAI-compatible LLM endpoint, such as a local Ollama service or a public provider such as Gemini or DeepSeek. We can switch services easily by defining environment variables or by switching .env files.
-
LeetHome: By default the data is saved under ${HOME}/leettools; you can set the LEET_HOME environment variable to change the location:
% export LEET_HOME=<your_leet_home>
% mkdir -p ${LEET_HOME}
🚀 New: Run LeetTools Web UI with Docker 🚀
LeetTools now provides a Docker container that includes the web UI. You can start the container by running the following command:
docker/start.sh
This will start the LeetTools service and the web UI. You can access the web UI at http://localhost:3000. The web UI app is currently under development and not open sourced yet. We plan to open source it in the near future.
Run with pip
If you are using an OpenAI-compatible LLM endpoint, you can install and run LeetTools with pip as follows (using Conda/venv is recommended):
% conda create -y -n leettools python=3.11
% conda activate leettools
% pip install leettools
% export EDS_LLM_API_KEY=<your_api_key>
% leet flow -t answer -q "How does GraphRAG work?" -k graphrag -l info
The above flow -t answer command runs the answer flow with the query "How does GraphRAG work?" and saves the scraped web pages to the knowledge base graphrag. The -l info option shows the essential log messages.
The default API endpoint is set to the OpenAI API endpoint, which you can modify by changing the EDS_DEFAULT_LLM_BASE_URL environment variable:
% export EDS_DEFAULT_LLM_BASE_URL=https://api.openai.com/v1
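For example, to point LeetTools at another OpenAI-compatible service such as Gemini, a sketch could look like the following; the base URL is Google's published OpenAI-compatibility endpoint, and the model name is an assumption to verify against the provider's current docs:
% export EDS_DEFAULT_LLM_BASE_URL=https://generativelanguage.googleapis.com/v1beta/openai/
% export EDS_LLM_API_KEY=<your-gemini-api-key>
% export EDS_DEFAULT_INFERENCE_MODEL=gemini-2.0-flash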
Run with source code
% git clone https://github.com/leettools-dev/leettools.git
% cd leettools
% conda create -y -n leettools python=3.11
% conda activate leettools
% pip install -r requirements.txt
% pip install -e .
# add the scripts directory to the PATH
% export PATH=`pwd`/scripts:${PATH}
% export EDS_LLM_API_KEY=<your_api_key>
% leet flow -t answer -q "How does GraphRAG work?" -k graphrag -l info
We can run LeetTools with different env files to use different LLM providers and other related settings.
# you may need to pull the models first
% ollama pull llama3.2
% ollama pull nomic-embed-text
% ollama serve
% cat > .env.ollama <<EOF
EDS_DEFAULT_LLM_BASE_URL=http://localhost:11434/v1
EDS_LLM_API_KEY=dummy-llm-api-key
EDS_DEFAULT_INFERENCE_MODEL=llama3.2
EDS_DEFAULT_EMBEDDING_MODEL=nomic-embed-text
EDS_EMBEDDING_MODEL_DIMENSION=768
EOF
# Then run the command with the -e option to specify the .env file to use
% leet flow -e .env.ollama -t answer -q "How does GraphRAG work?" -k graphrag.ollama -l info
For another example, since DeepSeek does not provide an embedding endpoint yet, we can use the EDS_DEFAULT_DENSE_EMBEDDER setting to specify a local embedder with the default all-MiniLM-L6-v2 model:
# you can put the settings in the .env.deepseek file
% cat > .env.deepseek <<EOF
LEET_HOME=</Users/myhome/leettools>
EDS_DEFAULT_LLM_BASE_URL=https://api.deepseek.com/v1
EDS_LLM_API_KEY=<your-api-key>
EDS_DEFAULT_INFERENCE_MODEL=deepseek-chat
EDS_DEFAULT_DENSE_EMBEDDER=dense_embedder_local_mem
EOF
# Then run the command with the -e option to specify the .env file to use
% leet flow -e .env.deepseek -t answer -q "How does GraphRAG work?" -k graphrag -l info
If you want to use another OpenAI-compatible API provider for embedding, say a local Ollama embedder, you can set the embedding endpoint URL and API key separately as follows:
% cat > .env.deepseek <<EOF
EDS_DEFAULT_LLM_BASE_URL=https://api.deepseek.com/v1
EDS_LLM_API_KEY=<your-api-key>
EDS_DEFAULT_INFERENCE_MODEL=deepseek-chat
# this specifies to use an OpenAI compatible embedding endpoint
EDS_DEFAULT_DENSE_EMBEDDER=dense_embedder_openai
# the following specifies the embedding endpoint URL and model to use
EDS_DEFAULT_EMBEDDING_BASE_URL=http://localhost:11434/v1
EDS_DEFAULT_EMBEDDING_MODEL=nomic-embed-text
EDS_EMBEDDING_MODEL_DIMENSION=768
EOF
The search engine is google by default, which can be configured with the following environment variables:
export EDS_WEB_RETRIEVER=google
export EDS_SEARCH_API_URL=https://www.googleapis.com/customsearch/v1
export EDS_GOOGLE_CX_KEY=<your-google-cx-key>
export EDS_GOOGLE_API_KEY=<your-google-api-key>
We can also use FireCrawl search as the web retriever instead of the default Google search by setting the following environment variables:
export EDS_WEB_RETRIEVER=firecrawl
export EDS_FIRECRAWL_API_URL=https://api.firecrawl.dev
export EDS_FIRECRAWL_API_KEY=your_firecrawl_api_key
Here is a detailed example of using FireCrawl with Ollama to run deep research.
By default we provide a shared proxy search service that can be used for testing purposes. Users should use their own search services for production use.
We can also build a local knowledge base from PDFs on the web. Assuming the local Ollama service is set up as described above, the following commands build a local knowledge base from a PDF URL:
# create a KB with a URL
# the book downloaded here is "Foundations of Large Language Models"
# it has 231 pages and takes some time to process
% leet kb add-url -e .env.ollama -k llmbook -r "https://arxiv.org/pdf/2501.09223"
# now you can query the KB with any topic you want to explore
% leet kb flow -e .env.ollama -t answer -k llmbook -l info \
-q "How does LLM Finetuning process work?"
We have a more detailed example to show how to use the local Ollama service with the DeepSeek-r1:1.5B model to build a local knowledge base.
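That walkthrough is linked rather than inlined; as a minimal sketch, assuming the same Ollama setup and variables as the .env.ollama file above, swapping in the DeepSeek-r1:1.5B model could look like this (the model tag is the one published on Ollama):
% ollama pull deepseek-r1:1.5b
% cat > .env.ollama.r1 <<EOF
EDS_DEFAULT_LLM_BASE_URL=http://localhost:11434/v1
EDS_LLM_API_KEY=dummy-llm-api-key
EDS_DEFAULT_INFERENCE_MODEL=deepseek-r1:1.5b
EDS_DEFAULT_EMBEDDING_MODEL=nomic-embed-text
EDS_EMBEDDING_MODEL_DIMENSION=768
EOF
% leet kb flow -e .env.ollama.r1 -t answer -k llmbook -l info -q "How does LLM finetuning work?"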
We can generate analytical research reports like OpenAI/Google's Deep Research by using the digest flow. Here is an example:
% leet flow -e .env.fireworks -t digest -k aijob.fireworks \
-p search_max_results=30 -p days_limit=360 \
-q "How will agentic AI and generative AI affect our non-tech jobs?" \
-l info -o outputs/aijob.fireworks.md
An example of the output is available here, and the tutorial to use the DeepSeek API from fireworks.ai for the above command is available here.
We can create a knowledge base with a web search with a date limit, and then generate a list of news items from the KB. Here is an example:
% leet flow -t news -q "LLM GenAI Startups" -k genai -l info \
-p days_limit=3 -p search_iteration=3 -p search_max_results=100 \
-o llm_genai_news.md
The query retrieves the latest web pages from the past 3 days (up to 100 search results) and generates a list of news items from them. The output is saved to the llm_genai_news.md file. An example of the output is available here.
The main components of the backend include:
- 🚀 Automated document pipeline to ingest, convert, chunk, embed, and index documents.
- 🗂️ Knowledge base to manage and serve the indexed documents.
- 🔍 Search and retrieval library to fetch documents from the web or local KB.
- 🤖 Workflow engine to implement search-based AI workflows.
- ⚙️ Configuration system to support dynamic configurations used for every component.
- 📝 Query history system to manage the history and the context of the queries.
- 💻 Scheduler for automatic execution of the pipeline tasks.
- 🧩 Accounting system to track the usage of the LLM APIs.
The architecture of the document pipeline is shown below:
See the Documentation for more details.
Acknowledgements
Right now we are using the following open source libraries and tools, among others:
We plan to add more plugins for different components to support different workloads.
Get help and support
Please feel free to connect with us using the discussion section.
Contributing
Please read Contributing to LeetTools for details.
License
LeetTools is licensed under the Apache License, Version 2.0. See LICENSE for the full license text.
Alternative AI tools for leettools
Similar Open Source Tools

holohub
Holohub is a central repository for the NVIDIA Holoscan AI sensor processing community to share reference applications, operators, tutorials, and benchmarks. It includes example applications, community components, package configurations, and tutorials. Users and developers of the Holoscan platform are invited to reuse and contribute to this repository. The repository provides detailed instructions on prerequisites, building, running applications, contributing, and glossary terms. It also offers a searchable catalog of available components on the Holoscan SDK User Guide website.

azure-search-openai-javascript
This sample demonstrates a few approaches for creating ChatGPT-like experiences over your own data using the Retrieval Augmented Generation pattern. It uses Azure OpenAI Service to access the ChatGPT model (gpt-35-turbo), and Azure AI Search for data indexing and retrieval.

geti-sdk
The Intel® Geti™ SDK is a python package that enables teams to rapidly develop AI models by easing the complexities of model development and enhancing collaboration between teams. It provides tools to interact with an Intel® Geti™ server via the REST API, allowing for project creation, downloading, uploading, deploying for local inference with OpenVINO, setting project and model configuration, launching and monitoring training jobs, and media upload and prediction. The SDK also includes tutorial-style Jupyter notebooks demonstrating its usage.

ai-financial-agent
AI Financial Agent is a proof of concept project exploring the use of AI for investment research. It provides an AI SDK with a unified API for generating text and structured objects, along with access to real-time and historical stock market data optimized for AI financial agents. The project includes features like dynamic chat interfaces, support for multiple model providers, and styling with Tailwind CSS. Users can deploy their own version of the AI Financial Agent using Vercel and GitHub integration.

conversational-agent-langchain
This repository contains a Rest-Backend for a Conversational Agent that allows embedding documents, semantic search, QA based on documents, and document processing with Large Language Models. It uses Aleph Alpha and OpenAI Large Language Models to generate responses to user queries, includes a vector database, and provides a REST API built with FastAPI. The project also features semantic search, secret management for API keys, installation instructions, and development guidelines for both backend and frontend components.

guidellm
GuideLLM is a platform for evaluating and optimizing the deployment of large language models (LLMs). By simulating real-world inference workloads, GuideLLM enables users to assess the performance, resource requirements, and cost implications of deploying LLMs on various hardware configurations. This approach ensures efficient, scalable, and cost-effective LLM inference serving while maintaining high service quality. The tool provides features for performance evaluation, resource optimization, cost estimation, and scalability testing.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

h2o-llmstudio
H2O LLM Studio is a framework and no-code GUI designed for fine-tuning state-of-the-art large language models (LLMs). With H2O LLM Studio, you can easily and effectively fine-tune LLMs without the need for any coding experience. The GUI is specially designed for large language models, and you can finetune any LLM using a large variety of hyperparameters. You can also use recent finetuning techniques such as Low-Rank Adaptation (LoRA) and 8-bit model training with a low memory footprint. Additionally, you can use Reinforcement Learning (RL) to finetune your model (experimental), use advanced evaluation metrics to judge generated answers by the model, track and compare your model performance visually, and easily export your model to the Hugging Face Hub and share it with the community.

dream-team
Build your dream team with Autogen is a repository that leverages Microsoft Autogen 0.4, Azure OpenAI, and Streamlit to create an end-to-end multi-agent application. It provides an advanced multi-agent framework based on Magentic One, with features such as a friendly UI, single-line deployment, secure code execution, managed identities, and observability & debugging tools. Users can deploy Azure resources and the app with simple commands, work locally with virtual environments, install dependencies, update configurations, and run the application. The repository also offers resources for learning more about building applications with Autogen.

airbroke
Airbroke is an open-source error catcher tool designed for modern web applications. It provides a PostgreSQL-based backend with an Airbrake-compatible HTTP collector endpoint and a React-based frontend for error management. The tool focuses on simplicity, maintaining a small database footprint even under heavy data ingestion. Users can ask AI about issues, replay HTTP exceptions, and save/manage bookmarks for important occurrences. Airbroke supports multiple OAuth providers for secure user authentication and offers occurrence charts for better insights into error occurrences. The tool can be deployed in various ways, including building from source, using Docker images, deploying on Vercel, Render.com, Kubernetes with Helm, or Docker Compose. It requires Node.js, PostgreSQL, and specific system resources for deployment.

agentok
Agentok Studio is a visual tool built for AutoGen, a cutting-edge agent framework from Microsoft and various contributors. It offers intuitive visual tools to simplify the construction and management of complex agent-based workflows. Users can create workflows visually as graphs, chat with agents, and share flow templates. The tool is designed to streamline the development process for creators and developers working on next-generation Multi-Agent Applications.

open-source-slack-ai
This repository provides a ready-to-run basic Slack AI solution that allows users to summarize threads and channels using OpenAI. Users can generate thread summaries, channel overviews, channel summaries since a specific time, and full channel summaries. The tool is powered by GPT-3.5-Turbo and an ensemble of NLP models. It requires Python 3.8 or higher, an OpenAI API key, Slack App with associated API tokens, Poetry package manager, and ngrok for local development. Users can customize channel and thread summaries, run tests with coverage using pytest, and contribute to the project for future enhancements.

serverless-pdf-chat
The serverless-pdf-chat repository contains a sample application that allows users to ask natural language questions of any PDF document they upload. It leverages serverless services like Amazon Bedrock, AWS Lambda, and Amazon DynamoDB to provide text generation and analysis capabilities. The application architecture involves uploading a PDF document to an S3 bucket, extracting metadata, converting text to vectors, and using a LangChain to search for information related to user prompts. The application is not intended for production use and serves as a demonstration and educational tool.

kaito
Kaito is an operator that automates the AI/ML inference model deployment in a Kubernetes cluster. It manages large model files using container images, avoids tuning deployment parameters to fit GPU hardware by providing preset configurations, auto-provisions GPU nodes based on model requirements, and hosts large model images in the public Microsoft Container Registry (MCR) if the license allows. Using Kaito, the workflow of onboarding large AI inference models in Kubernetes is largely simplified.
For similar tasks

ChatData
ChatData is a robust chat-with-documents application designed to extract information and provide answers by querying the MyScale free knowledge base or uploaded documents. It leverages the Retrieval Augmented Generation (RAG) framework, millions of Wikipedia pages, and arXiv papers. Features include self-querying retriever, VectorSQL, session management, and building a personalized knowledge base. Users can effortlessly navigate vast data, explore academic papers, and research documents. ChatData empowers researchers, students, and knowledge enthusiasts to unlock the true potential of information retrieval.

AIBotPublic
AIBotPublic is an open-source version of AIBotPro, a comprehensive AI tool that provides various features such as knowledge base construction, AI drawing, API hosting, and more. It supports custom plugins and parallel processing of multiple files. The tool is built using bootstrap4 for the frontend, .NET6.0 for the backend, and utilizes technologies like SqlServer, Redis, and Milvus for database and vector database functionalities. It integrates third-party dependencies like Baidu AI OCR, Milvus C# SDK, Google Search, and more to enhance its capabilities.

chatwiki
ChatWiki is an open-source knowledge base AI question-answering system. It is built on large language models (LLM) and retrieval-augmented generation (RAG) technologies, providing out-of-the-box data processing, model invocation capabilities, and helping enterprises quickly build their own knowledge base AI question-answering systems. It offers exclusive AI question-answering system, easy integration of models, data preprocessing, simple user interface design, and adaptability to different business scenarios.

nextjs-openai-doc-search
This starter project is designed to process `.mdx` files in the `pages` directory to use as custom context within OpenAI Text Completion prompts. It involves building a custom ChatGPT style doc search powered by Next.js, OpenAI, and Supabase. The project includes steps for pre-processing knowledge base, storing embeddings in Postgres, performing vector similarity search, and injecting content into OpenAI GPT-3 text completion prompt.

autoflow
AutoFlow is an open source graph rag based knowledge base tool built on top of TiDB Vector and LlamaIndex and DSPy. It features a Perplexity-style Conversational Search page and an Embeddable JavaScript Snippet for easy integration into websites. The tool allows for comprehensive coverage and streamlined search processes through sitemap URL scraping.

chipper
Chipper provides a web interface, CLI, and architecture for pipelines, document chunking, web scraping, and query workflows. It is built with Haystack, Ollama, Hugging Face, Docker, Tailwind, and ElasticSearch, running locally or as a Dockerized service. Originally created to assist in creative writing, it now offers features like local Ollama and Hugging Face API, ElasticSearch embeddings, document splitting, web scraping, audio transcription, user-friendly CLI, and Docker deployment. The project aims to be educational, beginner-friendly, and a playground for AI exploration and innovation.

RapidRAG
RapidRAG is a project focused on Knowledge QA with LLM, combining Questions & Answers based on local knowledge base with a large language model. The project aims to provide a flexible and deployment-friendly solution for building a knowledge question answering system. It is modularized, allowing easy replacement of parts and simple code understanding. The tool supports various document formats and can utilize CPU for most parts, with the large language model interface requiring separate deployment.

vault-ai
OP Vault is a tool that leverages the OP Stack (OpenAI + Pinecone Vector Database) to allow users to upload custom knowledgebase files and ask questions about their contents. It provides a user-friendly Golang server and React frontend for querying human-readable content like books and documents, making it valuable for knowledge extraction and question-answering. Users can upload entire libraries, receive specific answers with file and section references, and explore the power of the OP Stack in a practical interface.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.