
Rankify
🔥 Rankify: A Comprehensive Python Toolkit for Retrieval, Re-Ranking, and Retrieval-Augmented Generation 🔥. Our toolkit integrates 40 pre-retrieved benchmark datasets and supports 7+ retrieval techniques, 24+ state-of-the-art Reranking models, and multiple RAG methods.
Stars: 335

Rankify is a Python toolkit designed for unified retrieval, re-ranking, and retrieval-augmented generation (RAG) research. It integrates 40 pre-retrieved benchmark datasets and supports 7 retrieval techniques, 24 state-of-the-art re-ranking models, and multiple RAG methods. Rankify provides a modular and extensible framework, enabling seamless experimentation and benchmarking across retrieval pipelines. It offers comprehensive documentation, open-source implementation, and pre-built evaluation tools, making it a powerful resource for researchers and practitioners in the field.
README:
🔥 Rankify: A Comprehensive Python Toolkit for Retrieval, Re-Ranking, and Retrieval-Augmented Generation 🔥
If you like our Framework, don't hesitate to ⭐ star this repository ⭐. This helps us make the Framework better and more scalable to different models and methods 🤗.
A modular and efficient retrieval, re-ranking, and RAG framework designed to work with state-of-the-art models for retrieval, ranking, and RAG tasks.
Rankify is a Python toolkit designed for unified retrieval, re-ranking, and retrieval-augmented generation (RAG) research. Our toolkit integrates 40 pre-retrieved benchmark datasets and supports 7 retrieval techniques, 24 state-of-the-art re-ranking models, and multiple RAG methods. Rankify provides a modular and extensible framework, enabling seamless experimentation and benchmarking across retrieval pipelines. Comprehensive documentation, open-source implementation, and pre-built evaluation tools make Rankify a powerful resource for researchers and practitioners in the field.
🚀 Demo
To run the demo locally:
# Make sure Rankify is installed
pip install streamlit
# Then run the demo
streamlit run demo.py
https://github.com/user-attachments/assets/13184943-55db-4f0c-b509-fde920b809bc
🔗 Navigation
- Features
- Roadmap
- Installation
- Quick Start
- Retrievers
- Re-Rankers
- Generators
- Evaluation
- Documentation
- Community Contributing
- Contributing
- License
- Acknowledgments
- Citation
🔧 Installation
Set up the virtual environment
First, create and activate a conda environment with Python 3.10:
conda create -n rankify python=3.10
conda activate rankify
Install PyTorch 2.5.1
We recommend installing Rankify with PyTorch 2.5.1. Refer to the PyTorch installation page for platform-specific installation commands.
If you have access to GPUs, we recommend installing a CUDA 12.4 or 12.6 build of PyTorch, as many of the evaluation metrics are optimized for GPU use.
To install PyTorch 2.5.1, run the following command:
pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu124
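To confirm that the CUDA-enabled build is active, you can run a quick sanity check from Python (optional; a minimal snippet using standard PyTorch calls):
import torch

# Quick sanity check for the installed build and GPU visibility
print(torch.__version__)          # e.g. 2.5.1+cu124
print(torch.cuda.is_available())  # True if a compatible GPU and driver are present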
Basic Installation
To install Rankify, simply use pip (requires Python 3.10+):
pip install rankify
This will install the base functionality required for retrieval, re-ranking, and retrieval-augmented generation (RAG).
Recommended Installation
For full functionality, we recommend installing Rankify with all dependencies:
pip install "rankify[all]"
This ensures you have all necessary modules, including retrieval, re-ranking, and RAG support.
Optional Dependencies
If you prefer to install only specific components, choose from the following:
# Install dependencies for retrieval only (BM25, DPR, ANCE, etc.)
pip install "rankify[retriever]"
# Install base re-ranking with vLLM support for `FirstModelReranker`, `LiT5ScoreReranker`, `LiT5DistillReranker`, `VicunaReranker`, and `ZephyrReranker`.
pip install "rankify[reranking]"
Or, to install from GitHub for the latest development version:
git clone https://github.com/DataScienceUIBK/rankify.git
cd rankify
pip install -e .
# For full functionality we recommend installing Rankify with all dependencies:
pip install -e ".[all]"
# Install dependencies for retrieval only (BM25, DPR, ANCE, etc.)
pip install -e ".[retriever]"
# Install base re-ranking with vLLM support for `FirstModelReranker`, `LiT5ScoreReranker`, `LiT5DistillReranker`, `VicunaReranker`, and `ZephyrReranker`.
pip install -e ".[reranking]"
Using ColBERT Retriever
If you want to use ColBERT Retriever, follow these additional setup steps:
# Install GCC and required libraries
conda install -c conda-forge gcc=9.4.0 gxx=9.4.0
conda install -c conda-forge libstdcxx-ng
# Export necessary environment variables
export LD_LIBRARY_PATH=$CONDA_PREFIX/lib:$LD_LIBRARY_PATH
export CC=gcc
export CXX=g++
export PATH=$CONDA_PREFIX/bin:$PATH
# Clear cached torch extensions
rm -rf ~/.cache/torch_extensions/*
🚀 Quick Start
1️⃣ Pre-retrieved Datasets
We provide 1,000 pre-retrieved documents per dataset, which you can download from:
🔗 Hugging Face Dataset Repository
Dataset Format
The pre-retrieved documents are structured as follows:
[
  {
    "question": "...",
    "answers": ["...", "...", ...],
    "ctxs": [
      {
        "id": "...",               // Passage ID from database TSV file
        "score": "...",            // Retriever score
        "has_answer": true|false   // Whether the passage contains the answer
      }
    ]
  }
]
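Because each passage carries a has_answer flag, you can compute simple top-k answer-coverage statistics directly from a downloaded file. The following is a minimal sketch that relies only on the fields shown above; the file path is a placeholder.
import json

# Placeholder path to a pre-retrieved file in the format shown above
path = "bm25/nq-dev/test.json"

with open(path, "r", encoding="utf-8") as f:
    data = json.load(f)

# Fraction of questions with at least one answer-bearing passage in the top-k
for k in (1, 5, 10):
    hits = sum(any(ctx.get("has_answer") for ctx in item["ctxs"][:k]) for item in data)
    print(f"top-{k} answer coverage: {hits / len(data):.3f}")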
Access Datasets in Rankify
You can easily download and use pre-retrieved datasets through Rankify.
List Available Datasets
To see all available datasets:
from rankify.dataset.dataset import Dataset
# Display available datasets
Dataset.avaiable_dataset()
Retriever Datasets
from rankify.dataset.dataset import Dataset
# Download BM25-retrieved documents for nq-dev
dataset = Dataset(retriever="bm25", dataset_name="nq-dev", n_docs=100)
documents = dataset.download(force_download=False)
# Download BGE-retrieved documents for nq-dev
dataset = Dataset(retriever="bge", dataset_name="nq-dev", n_docs=100)
documents = dataset.download(force_download=False)
# Download ColBERT-retrieved documents for nq-dev
dataset = Dataset(retriever="colbert", dataset_name="nq-dev", n_docs=100)
documents = dataset.download(force_download=False)
# Download MSS-DPR-retrieved documents for nq-dev
dataset = Dataset(retriever="mss-dpr", dataset_name="nq-dev", n_docs=100)
documents = dataset.download(force_download=False)
# Download MSS-retrieved documents for nq-dev
dataset = Dataset(retriever="mss", dataset_name="nq-dev", n_docs=100)
documents = dataset.download(force_download=False)
# Download Contriever-retrieved documents for nq-dev
dataset = Dataset(retriever="contriever", dataset_name="nq-dev", n_docs=100)
documents = dataset.download(force_download=False)
# Download ANCE-retrieved documents for nq-dev
dataset = Dataset(retriever="ance", dataset_name="nq-dev", n_docs=100)
documents = dataset.download(force_download=False)
Load Pre-retrieved Dataset from File
If you have already downloaded a dataset, you can load it directly:
from rankify.dataset.dataset import Dataset
# Load pre-downloaded BM25 dataset for WebQuestions
documents = Dataset.load_dataset('./tests/out-datasets/bm25/web_questions/test.json', 100)
Now, you can integrate retrieved documents with re-ranking and RAG workflows! 🚀
Feature Comparison for Pre-Retrieved Datasets
The following table provides an overview of the availability of different retrieval methods (BM25, DPR, ColBERT, ANCE, BGE, Contriever) for each dataset.
✅ Completed  ⏳ Partially completed (other retrievers pending)  🕒 Pending
Dataset | BM25 | DPR | ColBERT | ANCE | BGE | Contriever |
---|---|---|---|---|---|---|
2WikimultihopQA | ✅ | 🕒 | 🕒 | 🕒 | 🕒 | 🕒 |
ArchivialQA | ✅ | 🕒 | 🕒 | 🕒 | 🕒 | 🕒 |
ChroniclingAmericaQA | ✅ | 🕒 | 🕒 | 🕒 | 🕒 | 🕒 |
EntityQuestions | ✅ | 🕒 | 🕒 | 🕒 | 🕒 | 🕒 |
AmbigQA | ✅ | 🕒 | ✅ | 🕒 | 🕒 | 🕒 |
ARC | ✅ | 🕒 | 🕒 | 🕒 | 🕒 | 🕒 |
ASQA | ✅ | 🕒 | 🕒 | 🕒 | 🕒 | 🕒 |
MS MARCO | 🕒 | 🕒 | 🕒 | 🕒 | 🕒 | 🕒 |
AY2 | ✅ | 🕒 | 🕒 | 🕒 | 🕒 | 🕒 |
Bamboogle | ✅ | 🕒 | 🕒 | 🕒 | 🕒 | 🕒 |
BoolQ | ✅ | 🕒 | ✅ | 🕒 | ✅ | 🕒 |
CommonSenseQA | ✅ | 🕒 | ✅ | 🕒 | ✅ | 🕒 |
CuratedTREC | ✅ | 🕒 | ✅ | ⏳ | ✅ | 🕒 |
ELI5 | ✅ | 🕒 | 🕒 | 🕒 | 🕒 | 🕒 |
FERMI | ✅ | 🕒 | ✅ | ⏳ | ✅ | 🕒 |
FEVER | ✅ | 🕒 | 🕒 | 🕒 | 🕒 | 🕒 |
HellaSwag | ✅ | 🕒 | 🕒 | 🕒 | 🕒 | 🕒 |
HotpotQA | ✅ | 🕒 | 🕒 | 🕒 | 🕒 | 🕒 |
MMLU | ✅ | 🕒 | 🕒 | 🕒 | 🕒 | 🕒 |
Musique | ✅ | 🕒 | 🕒 | 🕒 | 🕒 | 🕒 |
NarrativeQA | ✅ | 🕒 | ✅ | ⏳ | ✅ | 🕒 |
NQ | ✅ | 🕒 | ✅ | ⏳ | ✅ | 🕒 |
OpenbookQA | ✅ | 🕒 | 🕒 | 🕒 | 🕒 | 🕒 |
PIQA | ✅ | 🕒 | ✅ | 🕒 | 🕒 | 🕒 |
PopQA | ✅ | 🕒 | ✅ | ⏳ | ✅ | 🕒 |
Quartz | ✅ | 🕒 | 🕒 | 🕒 | 🕒 | 🕒 |
SIQA | ✅ | 🕒 | ✅ | 🕒 | ✅ | 🕒 |
StrategyQA | ✅ | 🕒 | 🕒 | 🕒 | 🕒 | 🕒 |
TREX | ✅ | 🕒 | 🕒 | 🕒 | 🕒 | 🕒 |
TriviaQA | ✅ | 🕒 | ✅ | ⏳ | ✅ | 🕒 |
TruthfulQA | ✅ | 🕒 | 🕒 | 🕒 | 🕒 | 🕒 |
WebQ | ✅ | 🕒 | ✅ | ⏳ | ✅ | 🕒 |
WikiQA | ✅ | 🕒 | ✅ | ⏳ | ✅ | 🕒 |
WikiAsp | ✅ | 🕒 | 🕒 | 🕒 | 🕒 | 🕒 |
WikiPassageQA | ✅ | 🕒 | ✅ | ⏳ | ✅ | 🕒 |
WNED | ✅ | 🕒 | 🕒 | 🕒 | 🕒 | 🕒 |
WoW | ✅ | 🕒 | 🕒 | 🕒 | 🕒 | 🕒 |
Zsre | ✅ | 🕒 | 🕒 | 🕒 | 🕒 | 🕒 |
2️⃣ Running Retrieval
To perform retrieval using Rankify, you can choose from various retrieval methods such as BM25, DPR, ANCE, Contriever, ColBERT, and BGE.
Example: Running Retrieval on Sample Queries
from rankify.dataset.dataset import Document, Question, Answer, Context
from rankify.retrievers.retriever import Retriever
# Sample Documents
documents = [
Document(question=Question("the cast of a good day to die hard?"), answers=Answer([
"Jai Courtney",
"Sebastian Koch",
"Radivoje Bukvić",
"Yuliya Snigir",
"Sergei Kolesnikov",
"Mary Elizabeth Winstead",
"Bruce Willis"
]), contexts=[]),
Document(question=Question("Who wrote Hamlet?"), answers=Answer(["Shakespeare"]), contexts=[])
]
# BM25 retrieval on Wikipedia
bm25_retriever_wiki = Retriever(method="bm25", n_docs=5, index_type="wiki")
# BM25 retrieval on MS MARCO
bm25_retriever_msmarco = Retriever(method="bm25", n_docs=5, index_type="msmarco")
# DPR (multi-encoder) retrieval on Wikipedia
dpr_retriever_wiki = Retriever(method="dpr", model="dpr-multi", n_docs=5, index_type="wiki")
# DPR (multi-encoder) retrieval on MS MARCO
dpr_retriever_msmarco = Retriever(method="dpr", model="dpr-multi", n_docs=5, index_type="msmarco")
# DPR (single-encoder) retrieval on Wikipedia
dpr_retriever_wiki = Retriever(method="dpr", model="dpr-single", n_docs=5, index_type="wiki")
# DPR (single-encoder) retrieval on MS MARCO
dpr_retriever_msmarco = Retriever(method="dpr", model="dpr-single", n_docs=5, index_type="msmarco")
# ANCE retrieval on Wikipedia
ance_retriever_wiki = Retriever(method="ance", model="ance-multi", n_docs=5, index_type="wiki")
# ANCE retrieval on MS MARCO
ance_retriever_msmarco = Retriever(method="ance", model="ance-multi", n_docs=5, index_type="msmarco")
# Contriever retrieval on Wikipedia
contriever_retriever_wiki = Retriever(method="contriever", model="facebook/contriever-msmarco", n_docs=5, index_type="wiki")
# Contriever retrieval on MS MARCO
contriever_retriever_msmarco = Retriever(method="contriever", model="facebook/contriever-msmarco", n_docs=5, index_type="msmarco")
# ColBERT retrieval on Wikipedia
colbert_retriever_wiki = Retriever(method="colbert", model="colbert-ir/colbertv2.0", n_docs=5, index_type="wiki")
# ColBERT retrieval on MS MARCO
colbert_retriever_msmarco = Retriever(method="colbert", model="colbert-ir/colbertv2.0", n_docs=5, index_type="msmarco")
# BGE retrieval on Wikipedia
bge_retriever_wiki = Retriever(method="bge", model="BAAI/bge-large-en-v1.5", n_docs=5, index_type="wiki")
# BGE retrieval on MS MARCO
bge_retriever_msmarco = Retriever(method="bge", model="BAAI/bge-large-en-v1.5", n_docs=5, index_type="msmarco")
# HyDE retrieval on Wikipedia
hyde_retriever_wiki = Retriever(method="hyde", n_docs=5, index_type="wiki", api_key=OPENAI_API_KEY)
# HyDE retrieval on MS MARCO
hyde_retriever_msmarco = Retriever(method="hyde", n_docs=5, index_type="msmarco", api_key=OPENAI_API_KEY)
Running Retrieval
After defining the retriever, you can retrieve documents using:
retrieved_documents = bm25_retriever_wiki.retrieve(documents)
for i, doc in enumerate(retrieved_documents):
    print(f"\nDocument {i+1}:")
    print(doc)
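To look at individual passages rather than printing whole documents, you can iterate over the contexts attached to each retrieved document. This is a minimal sketch; it assumes retrieved contexts expose the same text and score fields used when constructing Context objects elsewhere in this README.
# Sketch: print the top passages per document (assumes `text` and `score` attributes)
for doc in retrieved_documents:
    print(f"\nQuestion: {doc.question}")
    for rank, context in enumerate(doc.contexts, start=1):
        print(f"  {rank}. (score={context.score}) {context.text[:100]}")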
3️⃣ Running Reranking
Rankify provides support for multiple reranking models. Below are examples of how to use each model.
Example: Reranking a Document
from rankify.dataset.dataset import Document, Question, Answer, Context
from rankify.models.reranking import Reranking
# Sample document setup
question = Question("When did Thomas Edison invent the light bulb?")
answers = Answer(["1879"])
contexts = [
Context(text="Lightning strike at Seoul National University", id=1),
Context(text="Thomas Edison tried to invent a device for cars but failed", id=2),
Context(text="Coffee is good for diet", id=3),
Context(text="Thomas Edison invented the light bulb in 1879", id=4),
Context(text="Thomas Edison worked with electricity", id=5),
]
document = Document(question=question, answers=answers, contexts=contexts)
# Initialize the reranker
reranker = Reranking(method="monot5", model_name="monot5-base-msmarco")
# Apply reranking
reranker.rank([document])
# Print reordered contexts
for context in document.reorder_contexts:
    print(f" - {context.text}")
Examples of Using Different Reranking Models
# UPR
model = Reranking(method='upr', model_name='t5-base')
# API-Based Rerankers
model = Reranking(method='apiranker', model_name='voyage', api_key='your-api-key')
model = Reranking(method='apiranker', model_name='jina', api_key='your-api-key')
model = Reranking(method='apiranker', model_name='mixedbread.ai', api_key='your-api-key')
# Blender Reranker
model = Reranking(method='blender_reranker', model_name='PairRM')
# ColBERT Reranker
model = Reranking(method='colbert_ranker', model_name='Colbert')
# EchoRank
model = Reranking(method='echorank', model_name='flan-t5-large')
# First Ranker
model = Reranking(method='first_ranker', model_name='base')
# FlashRank
model = Reranking(method='flashrank', model_name='ms-marco-TinyBERT-L-2-v2')
# InContext Reranker
model = Reranking(method='incontext_reranker', model_name='llamav3.1-8b')
# InRanker
model = Reranking(method='inranker', model_name='inranker-small')
# ListT5
model = Reranking(method='listt5', model_name='listt5-base')
# LiT5 Distill
model = Reranking(method='lit5distill', model_name='LiT5-Distill-base')
# LiT5 Score
model = Reranking(method='lit5score', model_name='LiT5-Distill-base')
# LLM Layerwise Ranker
model = Reranking(method='llm_layerwise_ranker', model_name='bge-multilingual-gemma2')
# LLM2Vec
model = Reranking(method='llm2vec', model_name='Meta-Llama-31-8B')
# MonoBERT
model = Reranking(method='monobert', model_name='monobert-large')
# MonoT5
model = Reranking(method='monot5', model_name='monot5-base-msmarco')
# RankGPT
model = Reranking(method='rankgpt', model_name='llamav3.1-8b')
# RankGPT API
model = Reranking(method='rankgpt-api', model_name='gpt-3.5', api_key="gpt-api-key")
model = Reranking(method='rankgpt-api', model_name='gpt-4', api_key="gpt-api-key")
model = Reranking(method='rankgpt-api', model_name='llamav3.1-8b', api_key="together-api-key")
model = Reranking(method='rankgpt-api', model_name='claude-3-5', api_key="claude-api-key")
# RankT5
model = Reranking(method='rankt5', model_name='rankt5-base')
# Sentence Transformer Reranker
model = Reranking(method='sentence_transformer_reranker', model_name='all-MiniLM-L6-v2')
model = Reranking(method='sentence_transformer_reranker', model_name='gtr-t5-base')
model = Reranking(method='sentence_transformer_reranker', model_name='sentence-t5-base')
model = Reranking(method='sentence_transformer_reranker', model_name='distilbert-multilingual-nli-stsb-quora-ranking')
model = Reranking(method='sentence_transformer_reranker', model_name='msmarco-bert-co-condensor')
# SPLADE
model = Reranking(method='splade', model_name='splade-cocondenser')
# Transformer Ranker
model = Reranking(method='transformer_ranker', model_name='mxbai-rerank-xsmall')
model = Reranking(method='transformer_ranker', model_name='bge-reranker-base')
model = Reranking(method='transformer_ranker', model_name='bce-reranker-base')
model = Reranking(method='transformer_ranker', model_name='jina-reranker-tiny')
model = Reranking(method='transformer_ranker', model_name='gte-multilingual-reranker-base')
model = Reranking(method='transformer_ranker', model_name='nli-deberta-v3-large')
model = Reranking(method='transformer_ranker', model_name='ms-marco-TinyBERT-L-6')
model = Reranking(method='transformer_ranker', model_name='msmarco-MiniLM-L12-en-de-v1')
# TwoLAR
model = Reranking(method='twolar', model_name='twolar-xl')
# Vicuna Reranker
model = Reranking(method='vicuna_reranker', model_name='rank_vicuna_7b_v1')
# Zephyr Reranker
model = Reranking(method='zephyr_reranker', model_name='rank_zephyr_7b_v1_full')
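Because all rerankers share the same Reranking interface, you can compare several of them on the same pre-retrieved documents. The snippet below is a sketch assembled only from calls shown in this README; note that each rank(...) call overwrites reorder_contexts, so metrics are computed right after each run.
from rankify.dataset.dataset import Dataset
from rankify.models.reranking import Reranking
from rankify.metrics.metrics import Metrics

# Load pre-retrieved BM25 documents and record the baseline retrieval quality
documents = Dataset(retriever="bm25", dataset_name="nq-dev", n_docs=100).download(force_download=False)
metrics = Metrics(documents)
print("bm25 (no reranking):", metrics.calculate_retrieval_metrics(ks=[1, 5, 10], use_reordered=False))

# Rerank with two different models and report the reordered metrics after each run
for method, model_name in [("monot5", "monot5-base-msmarco"), ("rankt5", "rankt5-base")]:
    Reranking(method=method, model_name=model_name).rank(documents)
    print(method, metrics.calculate_retrieval_metrics(ks=[1, 5, 10], use_reordered=True))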
4️⃣ Using Generator Module
Rankify provides a Generator Module to facilitate retrieval-augmented generation (RAG) by integrating retrieved documents into generative models for producing answers. Below is an example of how to use different generator methods.
from rankify.dataset.dataset import Document, Question, Answer, Context
from rankify.generator.generator import Generator
# Define question and answer
question = Question("What is the capital of France?")
answers = Answer(["Paris"])
contexts = [
Context(id=1, title="France", text="The capital of France is Paris.", score=0.9),
Context(id=2, title="Germany", text="Berlin is the capital of Germany.", score=0.5)
]
# Construct document
doc = Document(question=question, answers=answers, contexts=contexts)
# Initialize Generator (e.g., Meta Llama)
generator = Generator(method="in-context-ralm", model_name='meta-llama/Llama-3.1-8B')
# Generate answer
generated_answers = generator.generate([doc])
print(generated_answers) # Output: ["Paris"]
5️⃣ Evaluating with Metrics
Rankify provides built-in evaluation metrics for retrieval, re-ranking, and retrieval-augmented generation (RAG). These metrics help assess the quality of retrieved documents, the effectiveness of ranking models, and the accuracy of generated answers.
Evaluating Generated Answers
You can evaluate the quality of retrieval-augmented generation (RAG) results by comparing generated answers with ground-truth answers.
from rankify.metrics.metrics import Metrics
from rankify.dataset.dataset import Dataset
from rankify.generator.generator import Generator
# Load dataset
dataset = Dataset('bm25', 'nq-test', 100)
documents = dataset.download(force_download=False)
# Initialize Generator
generator = Generator(method="in-context-ralm", model_name='meta-llama/Llama-3.1-8B')
# Generate answers
generated_answers = generator.generate(documents)
# Evaluate generated answers
metrics = Metrics(documents)
print(metrics.calculate_generation_metrics(generated_answers))
Evaluating Retrieval Performance
# Calculate retrieval metrics before reranking
metrics = Metrics(documents)
before_ranking_metrics = metrics.calculate_retrieval_metrics(ks=[1, 5, 10, 20, 50, 100], use_reordered=False)
print(before_ranking_metrics)
Evaluating Reranked Results
# Calculate retrieval metrics after reranking
after_ranking_metrics = metrics.calculate_retrieval_metrics(ks=[1, 5, 10, 20, 50, 100], use_reordered=True)
print(after_ranking_metrics)
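Putting the pieces together, a full pre-retrieved → rerank → generate → evaluate loop looks roughly like the sketch below. It is assembled from the dataset, reranking, generation, and metrics calls shown above, not from a dedicated pipeline API.
from rankify.dataset.dataset import Dataset
from rankify.models.reranking import Reranking
from rankify.generator.generator import Generator
from rankify.metrics.metrics import Metrics

# 1. Load pre-retrieved documents
documents = Dataset('bm25', 'nq-test', 100).download(force_download=False)

# 2. Rerank the retrieved contexts in place
Reranking(method="monot5", model_name="monot5-base-msmarco").rank(documents)

# 3. Generate answers (whether the generator consumes the reranked order depends on the generator method)
generator = Generator(method="in-context-ralm", model_name='meta-llama/Llama-3.1-8B')
generated_answers = generator.generate(documents)

# 4. Evaluate retrieval before/after reranking, then the generated answers
metrics = Metrics(documents)
print(metrics.calculate_retrieval_metrics(ks=[1, 5, 10, 20], use_reordered=False))
print(metrics.calculate_retrieval_metrics(ks=[1, 5, 10, 20], use_reordered=True))
print(metrics.calculate_generation_metrics(generated_answers))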
📜 Supported Models
1️⃣ Retrievers
- ✅ BM25
- ✅ DPR
- ✅ ColBERT
- ✅ ANCE
- ✅ BGE
- ✅ Contriever
- ✅ BPR
- ✅ HYDE
- 🕒 RepLlama
- 🕒 coCondenser
- 🕒 Spar
- 🕒 Dragon
- 🕒 Hybrid
2️⃣ Rerankers
- ✅ Cross-Encoders
- ✅ RankGPT
- ✅ RankGPT-API
- ✅ MonoT5
- ✅ MonoBert
- ✅ RankT5
- ✅ ListT5
- ✅ LiT5Score
- ✅ LiT5Dist
- ✅ Vicuna Reranker
- ✅ Zephyr Reranker
- ✅ Sentence Transformer-based
- ✅ FlashRank Models
- ✅ API-Based Rerankers
- ✅ ColBERT Reranker
- ✅ LLM Layerwise Ranker
- ✅ Splade Reranker
- ✅ UPR Reranker
- ✅ Inranker Reranker
- ✅ Transformer Reranker
- ✅ FIRST Reranker
- ✅ Blender Reranker
- ✅ LLM2VEC Reranker
- ✅ ECHO Reranker
- ✅ Incontext Reranker
- 🕒 DynRank
- 🕒 ASRank
- 🕒 RankLlama
3️⃣ Generators
- ✅ Fusion-in-Decoder (FiD) with T5
- ✅ In-Context RALM
✨ Features
- 🔥 Unified Framework: Combines retrieval, re-ranking, and retrieval-augmented generation (RAG) into a single modular toolkit.
- 📚 Rich Dataset Support: Includes 40+ benchmark datasets with pre-retrieved documents for seamless experimentation.
- 🧲 Diverse Retrieval Methods: Supports BM25, DPR, ANCE, BPR, ColBERT, BGE, and Contriever for flexible retrieval strategies.
- 🎯 Powerful Re-Ranking: Implements 24 advanced models with 41 sub-methods to optimize ranking performance.
- 🏗️ Prebuilt Indices: Provides Wikipedia and MS MARCO corpora, eliminating indexing overhead and speeding up retrieval.
- 🔮 Seamless RAG Integration: Works with GPT, LLAMA, T5, and Fusion-in-Decoder (FiD) models for retrieval-augmented generation.
- 🛠 Extensible & Modular: Easily integrates custom datasets, retrievers, ranking models, and RAG pipelines.
- 📊 Built-in Evaluation Suite: Includes retrieval, ranking, and RAG metrics for robust benchmarking.
- 📖 User-Friendly Documentation: Access detailed 📖 online docs, example notebooks, and tutorials for easy adoption.
🔍 Roadmap
Rankify is still under development, and this is our first release (v0.1.0). While it already supports a wide range of retrieval, re-ranking, and RAG techniques, we are actively enhancing its capabilities by adding more retrievers, rankers, datasets, and features.
🛠 Planned Improvements
Retrievers
✅ Supports: BM25, DPR, ANCE, BPR, ColBERT, BGE, Contriever
✨ ⏳ Coming Soon: Spar, MSS, MSS-DPR
✨ ⏳ Custom Index Loading for user-defined retrieval corpora
Re-Rankers
✅ 24 models & 41 sub-methods
✨ ⏳ Expanding with more ranking models
Datasets
✅ 40 benchmark datasets
✨ ⏳ Adding new datasets & custom dataset integration
Retrieval-Augmented Generation (RAG)
✅ Works with: GPT, LLAMA, T5
✨ ⏳ Expanding to more generative models
Evaluation & Usability
✅ Standard metrics: Top-K, EM, Recall
✨ ⏳ Adding advanced metrics: NDCG, MAP for retrievers
Pipeline Integration
✨ ⏳ Introducing a pipeline module for end-to-end retrieval, ranking, and RAG workflows
📖 Documentation
For full API documentation, visit the Rankify Docs.
💡 Contributing
Follow these steps to get involved:
- Fork this repository to your GitHub account.
- Create a new branch for your feature or fix:
  git checkout -b feature/YourFeatureName
- Make your changes and commit them:
  git commit -m "Add YourFeatureName"
- Push the changes to your branch:
  git push origin feature/YourFeatureName
- Submit a Pull Request to propose your changes.
Thank you for helping make this project better!
🌐 Community Contributions
Chinese community resources available!
Special thanks to Xiumao for writing two exceptional Chinese blog posts about Rankify:
These articles were crafted with high-traffic optimization in mind and are widely recommended in Chinese academic and developer circles.
We updated the 中文版本 to reflect these blog contributions while keeping original content intact—thank you Xiumao for your continued support!
🔖 License
Rankify is licensed under the Apache-2.0 License - see the LICENSE file for details.
🙏 Acknowledgments
We would like to express our gratitude to the following libraries, which have greatly contributed to the development of Rankify:
- Rerankers – A powerful Python library for integrating various reranking methods. 🔗 GitHub Repository
- Pyserini – A toolkit for supporting BM25-based retrieval and integration with sparse/dense retrievers. 🔗 GitHub Repository
- FlashRAG – A modular framework for Retrieval-Augmented Generation (RAG) research. 🔗 GitHub Repository
🌟 Citation
Please kindly cite our paper if it helps your research:
@article{abdallah2025rankify,
  title={Rankify: A Comprehensive Python Toolkit for Retrieval, Re-Ranking, and Retrieval-Augmented Generation},
  author={Abdallah, Abdelrahman and Mozafari, Jamshid and Piryani, Bhawna and Ali, Mohammed and Jatowt, Adam},
  journal={arXiv preprint arXiv:2502.02464},
  year={2025}
}
Star History