
flash-tokenizer
EFFICIENT AND OPTIMIZED TOKENIZER ENGINE FOR LLM INFERENCE SERVING
Stars: 109

FlashTokenizer is a high-performance CPU tokenizer library implemented in C++ for LLM inference serving. It is 10 times faster than BertTokenizerFast in transformers, offering the highest speed and accuracy. Developed to be faster, more accurate, and easier to use than existing tokenizers like BertTokenizerFast, FlashTokenizer is implemented in C++ for straightforward maintenance. It supports parallel processing at the C++ level for batch encoding, delivering outstanding speed. The tokenizer is based on the LinMax Tokenizer proposed in Fast WordPiece Tokenization, enabling tokenization in linear time.
README:
FlashTokenizer is a high-performance C++ implementation of the BertTokenizer used for LLM inference. In the spirit of FlashAttention and FlashInfer, it aims for the highest speed and accuracy of any tokenizer and is roughly 10 times faster than BertTokenizerFast in transformers.
[!NOTE]
We need a tokenizer that is faster, more accurate, and easier to use than Huggingface's BertTokenizerFast. (link1, link2, link3)
- PaddleNLP's BertTokenizerFast achieves a 1.2x performance improvement by reimplementing Huggingface's Rust version in C++. However, using it requires installing both the massive PaddlePaddle and PaddleNLP packages.
- Tensorflow-text's FastBertTokenizer is actually slower in comparison.
- Microsoft's Blingfire takes over 8 hours to train on custom data and shows relatively lower accuracy.
- RAPIDS' cuDF provides a GPU-based BertTokenizer, but it suffers from accuracy issues.
- Unfortunately, FastBertTokenizer and BertTokenizers are developed in C# and cannot be used from Python.

This is why we developed FlashTokenizer. It can be easily installed via pip and is written in C++ for straightforward maintenance, while guaranteeing extremely fast speeds. The result is an implementation that is faster than Blingfire and easier to use. FlashTokenizer is implemented using the LinMax Tokenizer proposed in Fast WordPiece Tokenization, enabling tokenization in linear time. Finally, it supports parallel processing at the C++ level for batch encoding, delivering outstanding speed.
[!TIP]
Implemented in C++17.
- MacOS: clang++
- Windows: Visual Studio 2022
- Ubuntu: g++

Equally fast in Python via pybind11.
Supports parallel processing at the C++ level using OpenMP (a short batch-encoding sketch follows below).
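As a rough sketch of how the batch path can be exercised from Python: the single-text call below mirrors the usage example later in this README, while passing a whole list in one call so the C++/OpenMP layer can parallelize across texts is an assumption about the API rather than something documented here.

```python
from flash_tokenizer import BertTokenizerFlash

tokenizer = BertTokenizerFlash.from_pretrained('bert-base-multilingual-cased')

texts = [
    'is there any doubt about it "None whatsoever"',
    '绝不能放弃,世界上没有失败,只有放弃。',
]

# Single encoding: one text per call (matches the usage example further down).
ids_single = [tokenizer(t, max_length=512, padding="longest").input_ids[0] for t in texts]

# Batch encoding (assumed API): pass the whole list so batching happens at the C++ level.
ids_batch = tokenizer(texts, max_length=512, padding="longest").input_ids

print(ids_single[0])
print(ids_batch[0])
```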
[!IMPORTANT]
[Apr 02 2025]
- Added performance benchmarking code. Benchmarking is done in Python, and the required packages can be installed via setup.sh.
- A minor performance improvement was achieved by adding the tokenize_early_stop feature to BasicTokenizer.
- OpenMP demonstrated better performance than std::thread across Windows, Linux, and macOS, so we have switched exclusively to OpenMP.

[Mar 31 2025]
- Modified to provide pre-built whl files for each OS.

[Mar 22 2025]
- Added a DFA to the AC Trie.

[Mar 21 2025]
- Improved tokenizer accuracy.

[Mar 19 2025]
- Memory reduction and a slight performance improvement by applying LinMaxMatching from the Aho–Corasick algorithm.
- Improved branch pipelining of all functions and applied force-inlining.
- Removed unnecessary operations from WordpieceTokenizer(Backward).
- Optimized all functions to compute values directly, which is faster than caching in every case except the Bloom filters: punctuation, control, and whitespace are defined in advance as constexpr and used as Bloom filters.
- Reduced unnecessary memory allocation using statistical memory profiling.
- In ✨FlashTokenizer✨, bert-base-uncased can process 35K texts per second on a single core, i.e. approximately 28µs per text.

[Mar 18 2025]
- Improvements to the accuracy of the BasicTokenizer have raised overall accuracy and, in particular, produce more accurate results for Unicode input.

[Mar 14 2025]
- The performance of the WordPieceTokenizer and WordPieceBackwordTokenizer has been improved using the Trie introduced in Fast WordPiece Tokenization.
- Using FastPoolAllocator in std::list improves performance in SingleEncoding, but it is not thread-safe, so std::list<std::string> is used as-is in BatchEncoding. In BatchEncoding, OpenMP is removed entirely and only std::thread is used.

[Mar 10 2025]
- Performance improvements through faster token mapping with robin_hood and memory-copy minimization with std::list. The token-and-id maps use the fastest option, robin_hood::unordered_flat_map<std::string, int>.

[Mar 09 2025]
- Completed development of flash-tokenizer for BertTokenizer.
- Supported platforms: Windows (AMD64), MacOS (ARM64), Ubuntu (x86-64).
- Supported compilers: g++ / clang++ / MSVC.
- Supported Python versions: 3.8 ~ 3.13.
Install from PIP
On Windows, you need to install vc_redist.x64.exe.
# Windows
pip install -U flash-tokenizer
# Linux
pip install -U flash-tokenizer
# MacOS
pip install -U flash-tokenizer
To install from source:
git clone https://github.com/NLPOptimize/flash-tokenizer
cd flash-tokenizer/prj
pip install .
from flash_tokenizer import BertTokenizerFlash
from transformers import BertTokenizer

titles = [
    '绝不能放弃,世界上没有失败,只有放弃。',
    'is there any doubt about it "None whatsoever"',
    "세상 어떤 짐승이 이를 드러내고 사냥을 해? 약한 짐승이나 몸을 부풀리지, 진짜 짐승은 누구보다 침착하지.",
    'そのように二番目に死を偽装して生き残るようになったイタドリがどうして初めて見る自分をこんなに気遣ってくれるのかと尋ねると「私が大切にする人たちがあなたを大切にするから」と答えては'
]

tokenizer1 = BertTokenizerFlash.from_pretrained('bert-base-multilingual-cased')
tokenizer2 = BertTokenizer.from_pretrained('bert-base-multilingual-cased')

correct = 0
for title in titles:
    print(title)
    # Compare both the tokens and the input_ids produced by each tokenizer.
    tokens1 = tokenizer1.tokenize(title)
    tokens2 = tokenizer2.tokenize(title)
    ids1 = tokenizer1(title, max_length=512, padding="longest").input_ids[0]
    ids2 = tokenizer2(title, max_length=512, padding="longest", return_tensors="np").input_ids[0].tolist()
    if tokens1 == tokens2 and ids1 == ids2:
        correct += 1
        print("Accept!")
    else:
        print("Wrong Answer")
    print(ids1)
    print(ids2)
    print()

print(f'Accuracy: {correct * 100.0 / len(titles):.2f}%')
绝不能放弃,世界上没有失败,只有放弃。
Accept!
[101, 6346, 2080, 6546, 4284, 3704, 10064, 2087, 5621, 2078, 4917, 4461, 3204, 7480, 10064, 2751, 4461, 4284, 3704, 1882, 102]
[101, 6346, 2080, 6546, 4284, 3704, 10064, 2087, 5621, 2078, 4917, 4461, 3204, 7480, 10064, 2751, 4461, 4284, 3704, 1882, 102]
is there any doubt about it "None whatsoever"
Accept!
[101, 10124, 11155, 11178, 86697, 10978, 10271, 107, 86481, 12976, 11669, 23433, 107, 102]
[101, 10124, 11155, 11178, 86697, 10978, 10271, 107, 86481, 12976, 11669, 23433, 107, 102]
세상 어떤 짐승이 이를 드러내고 사냥을 해? 약한 짐승이나 몸을 부풀리지, 진짜 짐승은 누구보다 침착하지.
Accept!
[101, 9435, 14871, 55910, 9710, 48210, 10739, 35756, 9113, 30873, 31605, 11664, 9405, 118729, 10622, 9960, 136, 9539, 11102, 9710, 48210, 43739, 9288, 10622, 9365, 119407, 12692, 12508, 117, 9708, 119235, 9710, 48210, 10892, 9032, 17196, 80001, 9783, 119248, 23665, 119, 102]
[101, 9435, 14871, 55910, 9710, 48210, 10739, 35756, 9113, 30873, 31605, 11664, 9405, 118729, 10622, 9960, 136, 9539, 11102, 9710, 48210, 43739, 9288, 10622, 9365, 119407, 12692, 12508, 117, 9708, 119235, 9710, 48210, 10892, 9032, 17196, 80001, 9783, 119248, 23665, 119, 102]
そのように二番目に死を偽装して生き残るようになったイタドリがどうして初めて見る自分をこんなに気遣ってくれるのかと尋ねると「私が大切にする人たちがあなたを大切にするから」と答えては
Accept!
[101, 11332, 24273, 2150, 5632, 5755, 1943, 4805, 1980, 2371, 7104, 11592, 5600, 1913, 4814, 1975, 27969, 15970, 21462, 15713, 21612, 10898, 56910, 22526, 22267, 2547, 19945, 7143, 1975, 6621, 2534, 1980, 28442, 60907, 11312, 4854, 7770, 14813, 18825, 58174, 75191, 11662, 3456, 1945, 100812, 1890, 5949, 1912, 3197, 2535, 84543, 2179, 78776, 111787, 22946, 20058, 11377, 3197, 2535, 84543, 16867, 1891, 1940, 6076, 27144, 11588, 102]
[101, 11332, 24273, 2150, 5632, 5755, 1943, 4805, 1980, 2371, 7104, 11592, 5600, 1913, 4814, 1975, 27969, 15970, 21462, 15713, 21612, 10898, 56910, 22526, 22267, 2547, 19945, 7143, 1975, 6621, 2534, 1980, 28442, 60907, 11312, 4854, 7770, 14813, 18825, 58174, 75191, 11662, 3456, 1945, 100812, 1890, 5949, 1912, 3197, 2535, 84543, 2179, 78776, 111787, 22946, 20058, 11377, 3197, 2535, 84543, 16867, 1891, 1940, 6076, 27144, 11588, 102]
Accuracy: 100.00%
Most BERT-based models use the WordPiece Tokenizer, whose code can be found here. (A simple Huggingface implementation can be found here.)
Since the BertTokenizer is a CPU-intensive algorithm, it can become an inference bottleneck, and an unoptimized tokenizer can be severely slow. A good example is the BidirectionalWordpieceTokenizer introduced in KR-BERT. Most of its code is the same as the standard WordPiece tokenizer, but it additionally traverses the sub-tokens backwards and takes the larger-valued match compared to the forward traversal. The paper claims an accuracy improvement, but other quantitative metrics are hard to find, the improvement is not significant, and the tokenizer becomes seriously slower.
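To make the forward-vs-backward idea concrete, here is a minimal, self-contained sketch of greedy longest-match WordPiece in both directions. The selection rule used at the end (prefer the segmentation with fewer sub-tokens) is only an illustration, not KR-BERT's exact scoring, and the toy vocabulary is invented.

```python
def wordpiece_forward(word, vocab, unk="[UNK]"):
    """Greedy longest-match-first WordPiece, scanning left to right."""
    tokens, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        while start < end:
            sub = word[start:end]
            if start > 0:
                sub = "##" + sub
            if sub in vocab:
                piece = sub
                break
            end -= 1
        if piece is None:
            return [unk]
        tokens.append(piece)
        start = end
    return tokens


def wordpiece_backward(word, vocab, unk="[UNK]"):
    """Greedy longest-match-first WordPiece, scanning right to left."""
    tokens, end = [], len(word)
    while end > 0:
        start, piece = 0, None
        while start < end:
            sub = word[start:end]
            if start > 0:
                sub = "##" + sub
            if sub in vocab:
                piece = sub
                break
            start += 1
        if piece is None:
            return [unk]
        tokens.insert(0, piece)
        end = start
    return tokens


# Toy vocabulary: the two scan directions can segment the same word differently.
vocab = {"un", "##aff", "##able", "unaff", "##ab", "##le", "##a", "##ffable"}
word = "unaffable"
fwd = wordpiece_forward(word, vocab)   # ['unaff', '##able']
bwd = wordpiece_backward(word, vocab)  # ['un', '##a', '##ffable']
# Illustrative selection rule only: prefer the segmentation with fewer sub-tokens.
chosen = fwd if len(fwd) <= len(bwd) else bwd
print(fwd, bwd, chosen)
```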
The BertTokenizer implementations compared in this document are:
- transformers (Rust Impl, PyO3)
- paddlenlp (C++ Impl, pybind)
- tensorflow-text (C++ Impl, pybind)
- blingfire (C++ Impl, Native binary call)
Most developers use either transformers.BertTokenizer or transformers.AutoTokenizer, but for a BERT checkpoint AutoTokenizer returns transformers.BertTokenizerFast.
BertTokenizerFast is naturally faster than BertTokenizer, but its output is not exactly the same, which means you are already giving up 100% accuracy starting at the tokenizer.
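A quick way to see this for yourself, using the same multilingual checkpoint as the example above:

```python
from transformers import AutoTokenizer, BertTokenizer, BertTokenizerFast

auto_tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
slow_tok = BertTokenizer.from_pretrained("bert-base-multilingual-cased")

# AutoTokenizer silently returns the Rust-backed fast tokenizer for BERT checkpoints.
print(type(auto_tok))                           # <class '...BertTokenizerFast'>
print(isinstance(auto_tok, BertTokenizerFast))  # True

# The fast and slow tokenizers agree on most inputs, but not on all of them,
# which is why the accuracy numbers below are not 100%.
text = 'is there any doubt about it "None whatsoever"'
print(auto_tok.tokenize(text) == slow_tok.tokenize(text))
```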
BertTokenizer is not provided only by transformers; PaddleNLP and tensorflow-text also provide a BertTokenizer.
Then there is Blingfire, developed by Microsoft, which has effectively been abandoned.
PaddleNLP requires PaddlePaddle and provides tokenizer functionality starting with version 3.0rc. You can install it as follows:
##### Install PaddlePaddle, PaddleNLP
python -m pip install paddlepaddle==3.0.0b1 -i https://www.paddlepaddle.org.cn/packages/stable/cpu/
pip install --upgrade paddlenlp==3.0.0b3
##### Install transformers
pip install transformers==4.47.1
##### Install tf-text
pip install tensorflow-text==2.18.1
##### Install blingfire
pip install blingfire
With the exception of blingfire, vocab.txt is all you need to run the tokenizer right away. (Blingfire also requires only vocab.txt, but it can only be used after roughly 8 hours of training.)
The implementations we will look at in detail are PaddleNLP's BertTokenizerFast and blingfire.
- blingfire: Uses a Deterministic Finite State Machine (DFSM) to eliminate one linear scan and unnecessary comparisons, achieving O(n) time, which is impressive.
  - Advantages: 5-10x faster than the other implementations.
  - Disadvantages: Long training time (8 hours) and lower accuracy than the other implementations (and it is difficult to get help because development has effectively stopped).
- PaddleNLP: As the experiments below show, PaddleNLP is always faster than BertTokenizerFast (HF) while matching its accuracy to the same number of decimal places, and it is faster on any OS, whether x86 or Arm.
  - Advantages: The internal implementation is in C++. Compared to transformers.BertTokenizerFast, which is implemented in Rust, it is 1.2x faster while producing exactly the same output. You cannot specify pt (PyTorch tensor) in return_tensors, but this is not a problem.
  - Disadvantages: None, other than the need to install PaddlePaddle and PaddleNLP.
Accuracy is measured against Google's original BertTokenizer as the baseline. If even one of the input_ids is incorrect, the entire answer is counted as incorrect.
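A minimal sketch of that criterion (function and variable names are illustrative): each text counts as correct only when its entire input_ids sequence matches the baseline exactly.

```python
def exact_match_accuracy(baseline_ids, candidate_ids):
    """baseline_ids / candidate_ids: one list of input_ids per text."""
    assert len(baseline_ids) == len(candidate_ids)
    correct = sum(1 for ref, hyp in zip(baseline_ids, candidate_ids) if ref == hyp)
    return 100.0 * correct / len(baseline_ids)

# Example: the second text differs by a single id, so it is counted as wrong.
baseline = [[101, 2023, 102], [101, 2003, 2009, 102]]
candidate = [[101, 2023, 102], [101, 2003, 2010, 102]]
print(f"{exact_match_accuracy(baseline, candidate):.2f}%")  # 50.00%
```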
Tokenizer | Elapsed Time | texts | Accuracy |
---|---|---|---|
BertTokenizerFast(Huggingface) | 84.3700s | 1,000,000 | 99.9226% |
BertTokenizerFast(PaddleNLP) | 75.6551s | 1,000,000 | 99.9226% |
FastBertTokenizer(Tensorflow) | 219.1259s | 1,000,000 | 99.9160% |
Blingfire | 13.6183s | 1,000,000 | 99.8991% |
FlashBertTokenizer | 8.1968s | 1,000,000 | 99.8216% |
Tokenizer | Elapsed Time | texts | Accuracy |
---|---|---|---|
BertTokenizerFast(Huggingface) | 91.7882s | 1,000,000 | 99.9326% |
BertTokenizerFast(PaddleNLP) | 83.6839s | 1,000,000 | 99.9326% |
FastBertTokenizer(Tensorflow) | 204.2240s | 1,000,000 | 99.1379% |
Blingfire | 13.2374s | 1,000,000 | 99.8588% |
FlashBertTokenizer | 7.6313s | 1,000,000 | 99.6884% |
Tokenizer | Elapsed Time | texts | Accuracy |
---|---|---|---|
BertTokenizerFast(Huggingface) | 212.1570s | 2,000,000 | 99.7964% |
BertTokenizerFast(PaddleNLP) | 193.9921s | 2,000,000 | 99.7964% |
FastBertTokenizer(Tensorflow) | 394.1574s | 2,000,000 | 99.7892% |
Blingfire | 38.9013s | 2,000,000 | 99.9780% |
FlashBertTokenizer | 20.4570s | 2,000,000 | 99.8970% |
Tokenizer | Elapsed Time | texts | Accuracy |
---|---|---|---|
BertTokenizerFast(Huggingface) | 52.5744s | 1,000,000 | 99.6754% |
BertTokenizerFast(PaddleNLP) | 44.8943s | 1,000,000 | 99.6754% |
FastBertTokenizer(Tensorflow) | 198.0270s | 1,000,000 | 99.6639% |
Blingfire | 13.0701s | 1,000,000 | 99.9434% |
FlashBertTokenizer | 5.2601s | 1,000,000 | 99.9484% |
Tokenizer | Elapsed Time | texts | Accuracy |
---|---|---|---|
FlashBertTokenizer | 5.1875s | 1,000,001 | 99.9484% |
Blingfire | 13.2783s | 1,000,001 | 99.9435% |
rust_tokenizers(guillaume-be) | 16.6308s | 1,000,001 | 99.9829% |
BertTokenizerFast(PaddleNLP) | 44.5476s | 1,000,001 | 99.6754% |
BertTokenizerFast(Huggingface) | 53.2525s | 1,000,001 | 99.6754% |
FastBertTokenizer(Tensorflow) | 202.1633s | 1,000,001 | 99.6639% |
Tokenizer | Elapsed Time | texts | Accuracy |
---|---|---|---|
BertTokenizerFast(Huggingface) | 208.8858s | 2,000,000 | 99.7964% |
BertTokenizerFast(PaddleNLP) | 192.6593s | 2,000,000 | 99.7964% |
FastBertTokenizer(Tensorflow) | 413.2010s | 2,000,000 | 99.7892% |
Blingfire | 39.3765s | 2,000,000 | 99.9780% |
FlashBertTokenizer | 22.8820s | 2,000,000 | 99.8970% |
Tokenizer | Elapsed Time | texts | Accuracy |
---|---|---|---|
FlashBertTokenizer | 22.0901s | 2,000,001 | 99.8971% |
Blingfire | 37.9836s | 2,000,001 | 99.9780% |
rust_tokenizers(guillaume-be) | 98.0366s | 2,000,001 | 99.9976% |
BertTokenizerFast(PaddleNLP) | 208.6889s | 2,000,001 | 99.7964% |
BertTokenizerFast(Huggingface) | 219.2644s | 2,000,001 | 99.7964% |
FastBertTokenizer(Tensorflow) | 413.9725s | 2,000,001 | 99.7892% |
Tokenizer | Elapsed Time | texts | Accuracy |
---|---|---|---|
BertTokenizerBidirectional(KR-BERT Original) | 128.3320s | 1,000,000 | 100.0000% |
FlashBertTokenizer(Bidirectional) | 10.4492s | 1,000,000 | 99.9631% |
%%{ init: { "er" : { "layoutDirection" : "LR" } } }%%
erDiagram
Text ||--o{ Preprocess : tokenize
Preprocess o{--|| Inference : memcpy_h2d
Inference o{--|| Postprocess : memcpy_d2h
FlashBertTokenizer can be used with any framework. CUDA version compatibility for each framework is also important for fast inference of LLMs.
- PyTorch no longer supports installation using conda.
- ONNXRUNTIME ships separate builds per CUDA version.
- PyTorch is also moving to drop older CUDA 12.x releases in favor of the newer CUDA 12.8. However, the trend is for all frameworks to keep supporting CUDA 11.8.
- CUDA 12.x was made for the newest GPUs, Hopper and Blackwell, and on GPUs like Volta, CUDA 11.8 is faster than CUDA 12.x.
DL Framework | Version | OS | CPU | CUDA 11.8 | CUDA 12.3 | CUDA 12.4 | CUDA 12.6 | CUDA 12.8 |
---|---|---|---|---|---|---|---|---|
PyTorch | 2.6 | Linux, Windows | ⚪ | ⚪ | ❌ | ⚪ | ⚪ | ❌ |
PyTorch | 2.7 | Linux, Windows | ⚪ | ⚪ | ❌ | ❌ | ⚪ | ⚪ |
ONNXRUNTIME(11) | 1.20.x | Linux, Windows | ⚪ | ⚪ | ❌ | ❌ | ❌ | ❌ |
ONNXRUNTIME(12) | 1.20.x | Linux, Windows | ⚪ | ❌ | ⚪ | ⚪ | ⚪ | ⚪ |
PaddlePaddle | 3.0-beta | Linux, Windows | ⚪ | ⚪ | ❌ | ❌ | ❌ | ❌ |
Here is an example of installing and running cuDF from Run State of the Art NLP Workloads at Scale with RAPIDS, HuggingFace, and Dask. (It is incredibly fast.)
You can run the WordPiece Tokenizer on GPUs with RAPIDS (cuDF).
As the RAPIDS installation guide shows, RAPIDS only supports Linux and its CUDA version requirements differ from other frameworks, so Docker is the most practical choice. It is faster than the CPU for batch processing but slower than the CPU for streaming processing.
There are good example codes and explanations in the [blog](https://developer.nvidia.com/blog/run-state-of-the-art-nlp-workloads-at-scale-with-rapids-huggingface-and-dask/). To use cuDF, you must first convert vocab.txt to a hashed vocabulary as shown below. The problem is that the hash_vocab function cannot convert multilingual vocabularies, so cuDF's WordpieceTokenizer cannot be used if the vocab contains any characters other than English/Chinese.
import cudf
from cudf.utils.hash_vocab_utils import hash_vocab
# Convert a plain vocab.txt into the hashed vocabulary format required by cuDF's GPU subword tokenizer.
hash_vocab('bert-base-cased-vocab.txt', 'voc_hash.txt')
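For completeness, here is a sketch of how the hashed vocabulary is then consumed on the GPU. The SubwordTokenizer class and its arguments follow the cuDF documentation as best recalled and may differ between cuDF versions, so treat this as an assumption rather than a verified snippet.

```python
import cudf
from cudf.core.subword_tokenizer import SubwordTokenizer

# Load the hashed vocabulary produced by hash_vocab above.
tokenizer = SubwordTokenizer('voc_hash.txt', do_lower_case=True)

texts = cudf.Series(['This is a test', 'GPU tokenization with cuDF'])
output = tokenizer(texts,
                   max_length=32,
                   max_num_rows=len(texts),
                   padding='max_length',
                   return_tensors='cp',  # CuPy arrays; 'pt' returns PyTorch tensors
                   truncation=True)
print(output['input_ids'].shape)
```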
TODO
- [x] BidirectionalWordPieceTokenizer
- [x] BatchEncoder with multithreading
- [x] Replace std::list with boost::intrusive::list
- [x] MaxMatch-Dropout: Subword Regularization for WordPiece option
- [x] Use stack memory to reduce memory allocation (C-style, alloca, _alloca)
- [x] Support a parallel-processing option for single encode
- [ ] circle.ai
- [ ] Implement distribution of compiled wheel packages for installation
- [ ] SIMD
- [ ] CUDA version
FlashTokenizer is inspired by the FlashAttention, FlashInfer, FastBertTokenizer, and tokenizers-cpp projects.
- WordPiece
  - 📒 huggingface/tokenizers (Rust)
    - Rust implementation of transformers.BertTokenizerFast.
    - 🔵 Provided as a Python package.
  - 🔥 FastBertTokenizer (C#)
    - Demonstrates incredibly fast performance, but accuracy drops significantly for non-English input.
  - ❌ BertTokenizers (C#)
    - FastBertTokenizer (C#) VS BertTokenizers (C#) confirms that FastBertTokenizer (C#) is faster.
  - 🔥 rust-tokenizers (Rust)
    - Slower than BertTokenizerFlash and Blingfire, but faster and more accurate than the other implementations.
    - 🔵 Provided as a Python package.
  - ❌ tokenizers-cpp (C++)
    - tokenizers-cpp is a wrapper around SentencePiece and HuggingFace's Rust implementation, so performance benchmarking it is meaningless.
  - ❌ bertTokenizer (Java)
    - Java is not covered.
  - ✅ ZhuoruLin/fast-wordpiece (Rust)
    - A Rust implementation using LinMaxMatching, runnable only from Rust, and expected to be no faster than the C++ implementation.
  - ❌ huggingface_tokenizer_cpp (C++)
    - Very slow due to a naive C++ implementation.
  - ❌ SeanLee97/BertWordPieceTokenizer.jl (Julia)
    - Julia is not covered.
- BPE
- SentencePiece
- https://medium.com/@techhara/which-bert-tokenizer-is-faster-b832aa978b46
- https://medium.com/@atharv6f_47401/wordpiece-tokenization-a-bpe-variant-73cc48865cbf
- https://www.restack.io/p/transformer-models-bert-answer-fast-berttokenizerfast-cat-ai
- https://medium.com/@anmolkohli/my-notes-on-bert-tokenizer-and-model-98dc22d0b64
- https://nocomplexity.com/documents/fossml/nlpframeworks.html
- https://github.com/martinus/robin-hood-hashing
- https://arxiv.org/abs/2012.15524
- https://github.com/google/highway