
flash-tokenizer
EFFICIENT AND OPTIMIZED TOKENIZER ENGINE FOR LLM INFERENCE SERVING

README:
FlashTokenizer is a high-performance C++ implementation of the BertTokenizer used for LLM inference. In the spirit of FlashAttention and FlashInfer, it aims to be the fastest and most accurate tokenizer available, and it is roughly 10x faster than BertTokenizerFast in transformers.
[!NOTE]
We need a tokenizer that is faster, more accurate, and easier to use than Huggingface's BertTokenizerFast. (link1, link2, link3)
- PaddleNLP's BertTokenizerFast achieves a 1.2x speedup by reimplementing Huggingface's Rust version in C++, but using it requires installing both the massive PaddlePaddle and PaddleNLP packages.
- Tensorflow-text's FastBertTokenizer is actually slower in comparison.
- Microsoft's Blingfire takes over 8 hours to train on custom data and shows relatively lower accuracy.
- RAPIDS cuDF provides a GPU-based BertTokenizer, but it suffers from accuracy issues.
- FastBertTokenizer and BertTokenizers are, unfortunately, developed in C# and cannot be used from Python.

This is why we developed FlashTokenizer. It installs easily via pip and is written in C++ for straightforward maintenance, while guaranteeing extremely fast speeds: it is faster than Blingfire and easier to use. FlashTokenizer is implemented with the LinMax tokenizer proposed in Fast WordPiece Tokenization, enabling tokenization in linear time, and it supports parallel processing at the C++ level for batch encoding, delivering outstanding throughput.
[!TIP]
Implemented in C++17.
- MacOS: clang++
- Windows: Visual Studio 2022
- Ubuntu: g++

Equally fast in Python via pybind11.
Supports parallel processing at the C++ level using OpenMP.
[!IMPORTANT]
[Apr 02 2025]
- Added performance benchmarking code. Benchmarking is run from Python, and the required packages can be installed via setup.sh.
- A minor performance improvement was achieved by adding the tokenize_early_stop feature to BasicTokenizer.
- OpenMP demonstrated better performance than std::thread across Windows, Linux, and macOS, so we have switched exclusively to OpenMP.

[Mar 31 2025]
- Modified to provide pre-built whl files for each OS.

[Mar 22 2025]
- Added a DFA to the AC Trie.

[Mar 21 2025]
- Improved tokenizer accuracy.

[Mar 19 2025]
- Memory reduction and a slight performance improvement by applying LinMaxMatching from the Aho-Corasick algorithm.
- Improved branch pipelining of all functions and applied force-inlining.
- Removed unnecessary operations from WordpieceTokenizer(Backward).
- Optimized all functions to compute their results directly, which turned out to be faster than caching, except for the Bloom filters: punctuation, control, and whitespace are defined in advance as constexpr and used as Bloom filters.
- Reduced unnecessary memory allocation with statistical memory profiling.
- In ✨FlashTokenizer✨, bert-base-uncased can process 35K texts per second on a single core, i.e. roughly 28µs per text.

[Mar 18 2025]
- Improvements to the accuracy of the BasicTokenizer raised overall accuracy and, in particular, produce more accurate results for Unicode input.

[Mar 14 2025]
- The performance of the WordPieceTokenizer and WordPieceBackwordTokenizer was improved using the Trie introduced in Fast WordPiece Tokenization.
- Using FastPoolAllocator in std::list improves performance in single encoding, but it is not thread-safe, so std::list<std::string> is used as-is in batch encoding. In batch encoding, OpenMP was removed entirely and only std::thread was used.

[Mar 10 2025]
- Performance improvements through faster token mapping with robin_hood and memory-copy minimization with std::list. The token-to-id map uses the fastest option, robin_hood::unordered_flat_map<std::string, int>.

[Mar 09 2025]
- Completed development of flash-tokenizer for BertTokenizer.
- Supported platforms: Windows (AMD64), MacOS (ARM64), Ubuntu (x86-64).
- Supported compilers: g++ / clang++ / MSVC.
- Supported Python versions: 3.8 ~ 3.13.
Install from PIP
On Windows, you need to install vc_redist.x64.exe first.

# Windows
pip install -U flash-tokenizer
# Linux
pip install -U flash-tokenizer
# MacOS
pip install -U flash-tokenizer

To install from source instead:

git clone https://github.com/NLPOptimize/flash-tokenizer
cd flash-tokenizer/prj
pip install .
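After installation, a minimal smoke test confirms that the package imports and tokenizes; the checkpoint name below is only an example, and the tokenize / from_pretrained calls are the same ones used in the comparison example that follows.

from flash_tokenizer import BertTokenizerFlash

# Any BERT-style checkpoint works; bert-base-multilingual-cased is only an example.
tokenizer = BertTokenizerFlash.from_pretrained("bert-base-multilingual-cased")
print(tokenizer.tokenize("FlashTokenizer is a fast BertTokenizer."))

The fuller example below compares FlashTokenizer's tokens and input_ids against transformers' BertTokenizer.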
from flash_tokenizer import BertTokenizerFlash
from transformers import BertTokenizer

titles = [
    '绝不能放弃,世界上没有失败,只有放弃。',
    'is there any doubt about it "None whatsoever"',
    "세상 어떤 짐승이 이를 드러내고 사냥을 해? 약한 짐승이나 몸을 부풀리지, 진짜 짐승은 누구보다 침착하지.",
    'そのように二番目に死を偽装して生き残るようになったイタドリがどうして初めて見る自分をこんなに気遣ってくれるのかと尋ねると「私が大切にする人たちがあなたを大切にするから」と答えては'
]

tokenizer1 = BertTokenizerFlash.from_pretrained('bert-base-multilingual-cased')
tokenizer2 = BertTokenizer.from_pretrained('bert-base-multilingual-cased')

correct = 0
for title in titles:
    print(title)
    tokens1 = tokenizer1.tokenize(title)
    tokens2 = tokenizer2.tokenize(title)
    ids1 = tokenizer1(title, max_length=512, padding="longest").input_ids[0]
    ids2 = tokenizer2(title, max_length=512, padding="longest", return_tensors="np").input_ids[0].tolist()
    if tokens1 == tokens2 and ids1 == ids2:
        correct += 1
        print("Accept!")
    else:
        print("Wrong Answer")
    print(ids1)
    print(ids2)
    print()

print(f'Accuracy: {correct * 100.0 / len(titles):.2f}%')
绝不能放弃,世界上没有失败,只有放弃。
Accept!
[101, 6346, 2080, 6546, 4284, 3704, 10064, 2087, 5621, 2078, 4917, 4461, 3204, 7480, 10064, 2751, 4461, 4284, 3704, 1882, 102]
[101, 6346, 2080, 6546, 4284, 3704, 10064, 2087, 5621, 2078, 4917, 4461, 3204, 7480, 10064, 2751, 4461, 4284, 3704, 1882, 102]
is there any doubt about it "None whatsoever"
Accept!
[101, 10124, 11155, 11178, 86697, 10978, 10271, 107, 86481, 12976, 11669, 23433, 107, 102]
[101, 10124, 11155, 11178, 86697, 10978, 10271, 107, 86481, 12976, 11669, 23433, 107, 102]
세상 어떤 짐승이 이를 드러내고 사냥을 해? 약한 짐승이나 몸을 부풀리지, 진짜 짐승은 누구보다 침착하지.
Accept!
[101, 9435, 14871, 55910, 9710, 48210, 10739, 35756, 9113, 30873, 31605, 11664, 9405, 118729, 10622, 9960, 136, 9539, 11102, 9710, 48210, 43739, 9288, 10622, 9365, 119407, 12692, 12508, 117, 9708, 119235, 9710, 48210, 10892, 9032, 17196, 80001, 9783, 119248, 23665, 119, 102]
[101, 9435, 14871, 55910, 9710, 48210, 10739, 35756, 9113, 30873, 31605, 11664, 9405, 118729, 10622, 9960, 136, 9539, 11102, 9710, 48210, 43739, 9288, 10622, 9365, 119407, 12692, 12508, 117, 9708, 119235, 9710, 48210, 10892, 9032, 17196, 80001, 9783, 119248, 23665, 119, 102]
そのように二番目に死を偽装して生き残るようになったイタドリがどうして初めて見る自分をこんなに気遣ってくれるのかと尋ねると「私が大切にする人たちがあなたを大切にするから」と答えては
Accept!
[101, 11332, 24273, 2150, 5632, 5755, 1943, 4805, 1980, 2371, 7104, 11592, 5600, 1913, 4814, 1975, 27969, 15970, 21462, 15713, 21612, 10898, 56910, 22526, 22267, 2547, 19945, 7143, 1975, 6621, 2534, 1980, 28442, 60907, 11312, 4854, 7770, 14813, 18825, 58174, 75191, 11662, 3456, 1945, 100812, 1890, 5949, 1912, 3197, 2535, 84543, 2179, 78776, 111787, 22946, 20058, 11377, 3197, 2535, 84543, 16867, 1891, 1940, 6076, 27144, 11588, 102]
[101, 11332, 24273, 2150, 5632, 5755, 1943, 4805, 1980, 2371, 7104, 11592, 5600, 1913, 4814, 1975, 27969, 15970, 21462, 15713, 21612, 10898, 56910, 22526, 22267, 2547, 19945, 7143, 1975, 6621, 2534, 1980, 28442, 60907, 11312, 4854, 7770, 14813, 18825, 58174, 75191, 11662, 3456, 1945, 100812, 1890, 5949, 1912, 3197, 2535, 84543, 2179, 78776, 111787, 22946, 20058, 11377, 3197, 2535, 84543, 16867, 1891, 1940, 6076, 27144, 11588, 102]
Accuracy: 100.00%
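The comparison above checks correctness. For a rough single-core throughput number, a plain timing loop over single-text encoding is enough; the sketch below uses arbitrary sample text, repeat count, and checkpoint, and the project's own benchmark scripts (installed via setup.sh, see the changelog above) remain the authoritative way to reproduce the tables further down.

import time
from flash_tokenizer import BertTokenizerFlash

tokenizer = BertTokenizerFlash.from_pretrained("bert-base-uncased")
texts = ["flash tokenizer single-core benchmark sentence"] * 100_000  # arbitrary sample corpus

start = time.perf_counter()
for text in texts:
    tokenizer(text, max_length=128, padding="longest")
elapsed = time.perf_counter() - start
print(f"{len(texts) / elapsed:,.0f} texts/sec ({elapsed / len(texts) * 1e6:.1f} µs/text)")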
Most BERT-based models use the WordPiece tokenizer, whose code can be found here. (A simple Huggingface implementation can be found here.)
Since BertTokenizer is a CPU-intensive algorithm, it can become an inference bottleneck, and an unoptimized tokenizer can be severely slow. A good example is the BidirectionalWordpieceTokenizer introduced in KR-BERT. Most of the code is the same, but the algorithm additionally traverses sub-tokens backwards and keeps the larger-valued match compared with the forward traversal. The paper claims accuracy improvements, but other quantitative metrics are hard to find, the improvement is not significant, and the tokenizer becomes seriously slower.
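For context, plain WordPiece tokenization of a single word is a greedy longest-match-first search against the vocabulary. The sketch below illustrates that idea with a made-up toy vocabulary; it is not FlashTokenizer's C++ code, which instead uses the linear-time LinMaxMatching algorithm mentioned above.

# Minimal sketch of greedy longest-match-first WordPiece tokenization (toy example).
def wordpiece_tokenize(word, vocab, unk_token="[UNK]", max_chars=200):
    if len(word) > max_chars:
        return [unk_token]
    tokens, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        # Try the longest remaining substring first, shrinking until it is in the vocab.
        while start < end:
            candidate = word[start:end]
            if start > 0:
                candidate = "##" + candidate  # continuation pieces carry the ## prefix
            if candidate in vocab:
                piece = candidate
                break
            end -= 1
        if piece is None:
            return [unk_token]  # no sub-token matched
        tokens.append(piece)
        start = end
    return tokens

toy_vocab = {"un", "##aff", "##able"}
print(wordpiece_tokenize("unaffable", toy_vocab))  # ['un', '##aff', '##able']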
Widely used BertTokenizer implementations include:
- transformers (Rust Impl, PyO3)
- paddlenlp (C++ Impl, pybind)
- tensorflow-text (C++ Impl, pybind)
- blingfire (C++ Impl, Native binary call)
Most developers use either transformers.BertTokenizer or transformers.AutoTokenizer, but AutoTokenizer actually returns a transformers.BertTokenizerFast.
Naturally, it is faster than BertTokenizer, but the results are not exactly the same, which means you are already giving up 100% accuracy at the tokenizer stage.
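You can observe this yourself by comparing the two transformers tokenizers directly; the checkpoint and sample sentence below are arbitrary, and whether the outputs diverge depends on the input.

from transformers import AutoTokenizer, BertTokenizer

slow = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
fast = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")  # returns BertTokenizerFast
print(type(fast).__name__)

text = 'is there any doubt about it "None whatsoever"'
print(slow(text).input_ids == fast(text).input_ids)  # False for some inputs: the accuracy gap described here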
BertTokenizer is not only provided by transformers. PaddleNLP and tensorflow-text also provide BertTokenizer.
Then there is Blingfire, which was developed by Microsoft but is no longer actively maintained.
PaddleNLP requires PaddlePaddle and provides tokenizer functionality starting with version 3.0rc. You can install it as follows:
##### Install PaddlePaddle, PaddleNLP
python -m pip install paddlepaddle==3.0.0b1 -i https://www.paddlepaddle.org.cn/packages/stable/cpu/
pip install --upgrade paddlenlp==3.0.0b3
##### Install transformers
pip install transformers==4.47.1
##### Install tf-text
pip install tensorflow-text==2.18.1
##### Install blingfire
pip install blingfire
With the exception of Blingfire, vocab.txt is all you need to run the tokenizer right away. (Blingfire also requires only vocab.txt, but it must first be trained for about 8 hours.)
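For example, with Huggingface transformers the tokenizer can be built straight from a vocabulary file; the path below is a placeholder for any BERT-style vocab.txt.

from transformers import BertTokenizerFast

# "vocab.txt" is a placeholder path to a BERT-style WordPiece vocabulary file.
tokenizer = BertTokenizerFast(vocab_file="vocab.txt", do_lower_case=False)
print(tokenizer.tokenize("FlashTokenizer example"))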
The implementations we'll look at in detail are PaddleNLP's BertTokenizerFast and blingfire.
- blingfire: Uses a deterministic finite state machine (DFSM) to eliminate one linear scan and unnecessary comparisons, achieving O(n) time, which is impressive.
  - Advantages: 5-10x faster than other implementations.
  - Disadvantages: Long training time (about 8 hours) and lower accuracy than other implementations, plus it is difficult to get help because development has effectively stopped.
- PaddleNLP: As shown in the experiments below, PaddleNLP is always faster than BertTokenizerFast (HF) while producing identical accuracy (to the decimal places shown in the tables), on any OS and on both x86 and Arm.
  - Advantages: The internal implementation is in C++. Compared to transformers.BertTokenizerFast, which is implemented in Rust, it is 1.2x faster while producing exactly the same output.
    - You cannot specify pt (PyTorch tensor) in return_tensors, but this is not a problem.
  - Disadvantages: None, other than the need to install PaddlePaddle and PaddleNLP.
Accuracy is measured against Google's BertTokenizerFast as the baseline: if even one of the input_ids differs, the whole text is counted as incorrect. Each table below corresponds to a different test set and/or machine configuration.
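Concretely, the accuracy figures below can be reproduced as an exact-match rate over input_ids; a minimal sketch, where candidate and baseline are stand-ins for whichever tokenizer pair is being compared:

def exact_match_accuracy(texts, candidate, baseline):
    # candidate/baseline are callables returning the full list of token ids for a text.
    correct = sum(1 for text in texts if candidate(text) == baseline(text))
    return 100.0 * correct / len(texts)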
Tokenizer | Elapsed Time | texts | Accuracy |
---|---|---|---|
BertTokenizerFast(Huggingface) | 84.3700s | 1,000,000 | 99.9226% |
BertTokenizerFast(PaddleNLP) | 75.6551s | 1,000,000 | 99.9226% |
FastBertTokenizer(Tensorflow) | 219.1259s | 1,000,000 | 99.9160% |
Blingfire | 13.6183s | 1,000,000 | 99.8991% |
FlashBertTokenizer | 8.1968s | 1,000,000 | 99.8216% |
Tokenizer | Elapsed Time | texts | Accuracy |
---|---|---|---|
BertTokenizerFast(Huggingface) | 91.7882s | 1,000,000 | 99.9326% |
BertTokenizerFast(PaddleNLP) | 83.6839s | 1,000,000 | 99.9326% |
FastBertTokenizer(Tensorflow) | 204.2240s | 1,000,000 | 99.1379% |
Blingfire | 13.2374s | 1,000,000 | 99.8588% |
FlashBertTokenizer | 7.6313s | 1,000,000 | 99.6884% |
Tokenizer | Elapsed Time | texts | Accuracy |
---|---|---|---|
BertTokenizerFast(Huggingface) | 212.1570s | 2,000,000 | 99.7964% |
BertTokenizerFast(PaddleNLP) | 193.9921s | 2,000,000 | 99.7964% |
FastBertTokenizer(Tensorflow) | 394.1574s | 2,000,000 | 99.7892% |
Blingfire | 38.9013s | 2,000,000 | 99.9780% |
FlashBertTokenizer | 20.4570s | 2,000,000 | 99.8970% |
Tokenizer | Elapsed Time | texts | Accuracy |
---|---|---|---|
BertTokenizerFast(Huggingface) | 52.5744s | 1,000,000 | 99.6754% |
BertTokenizerFast(PaddleNLP) | 44.8943s | 1,000,000 | 99.6754% |
FastBertTokenizer(Tensorflow) | 198.0270s | 1,000,000 | 99.6639% |
Blingfire | 13.0701s | 1,000,000 | 99.9434% |
FlashBertTokenizer | 5.2601s | 1,000,000 | 99.9484% |
Tokenizer | Elapsed Time | texts | Accuracy |
---|---|---|---|
FlashBertTokenizer | 5.1875s | 1,000,001 | 99.9484% |
Blingfire | 13.2783s | 1,000,001 | 99.9435% |
rust_tokenizers(guillaume-be) | 16.6308s | 1,000,001 | 99.9829% |
BertTokenizerFast(PaddleNLP) | 44.5476s | 1,000,001 | 99.6754% |
BertTokenizerFast(Huggingface) | 53.2525s | 1,000,001 | 99.6754% |
FastBertTokenizer(Tensorflow) | 202.1633s | 1,000,001 | 99.6639% |
Tokenizer | Elapsed Time | texts | Accuracy |
---|---|---|---|
BertTokenizerFast(Huggingface) | 208.8858s | 2,000,000 | 99.7964% |
BertTokenizerFast(PaddleNLP) | 192.6593s | 2,000,000 | 99.7964% |
FastBertTokenizer(Tensorflow) | 413.2010s | 2,000,000 | 99.7892% |
Blingfire | 39.3765s | 2,000,000 | 99.9780% |
FlashBertTokenizer | 22.8820s | 2,000,000 | 99.8970% |
Tokenizer | Elapsed Time | texts | Accuracy |
---|---|---|---|
FlashBertTokenizer | 22.0901s | 2,000,001 | 99.8971% |
Blingfire | 37.9836s | 2,000,001 | 99.9780% |
rust_tokenizers(guillaume-be) | 98.0366s | 2,000,001 | 99.9976% |
BertTokenizerFast(PaddleNLP) | 208.6889s | 2,000,001 | 99.7964% |
BertTokenizerFast(Huggingface) | 219.2644s | 2,000,001 | 99.7964% |
FastBertTokenizer(Tensorflow) | 413.9725s | 2,000,001 | 99.7892% |
Tokenizer | Elapsed Time | texts | Accuracy |
---|---|---|---|
BertTokenizerBidirectional(KR-BERT Original) | 128.3320s | 1,000,000 | 100.0000% |
FlashBertTokenizer(Bidirectional) | 10.4492s | 1,000,000 | 99.9631% |
%%{ init: { "er" : { "layoutDirection" : "LR" } } }%%
erDiagram
Text ||--o{ Preprocess : tokenize
Preprocess o{--|| Inference : memcpy_h2d
Inference o{--|| Postprocess : memcpy_d2h
FlashBertTokenizer can be used with any framework. CUDA version compatibility for each framework is also important for fast inference of LLMs.
- PyTorch no longer supports installation using conda.
- ONNXRUNTIME ships separate builds for different CUDA versions.
- PyTorch is also moving to drop older CUDA 12.x releases in favor of the newer CUDA 12.8; the overall trend, however, is to keep CUDA 11.8 supported in all frameworks.
- CUDA 12.x targets the newest GPUs (Hopper and Blackwell); on older GPUs such as Volta, CUDA 11.8 is faster than CUDA 12.x.
DL Framework | Version | OS | CPU | CUDA 11.8 | CUDA 12.3 | CUDA 12.4 | CUDA 12.6 | CUDA 12.8 |
---|---|---|---|---|---|---|---|---|
PyTorch | 2.6 | Linux, Windows | ⚪ | ⚪ | ❌ | ⚪ | ⚪ | ❌ |
PyTorch | 2.7 | Linux, Windows | ⚪ | ⚪ | ❌ | ❌ | ⚪ | ⚪ |
ONNXRUNTIME(11) | 1.20.x | Linux, Windows | ⚪ | ⚪ | ❌ | ❌ | ❌ | ❌ |
ONNXRUNTIME(12) | 1.20.x | Linux, Windows | ⚪ | ❌ | ⚪ | ⚪ | ⚪ | ⚪ |
PaddlePaddle | 3.0-beta | Linux, Windows | ⚪ | ⚪ | ❌ | ❌ | ❌ | ❌ |
Here is an example of installing and running cuDF from Run State of the Art NLP Workloads at Scale with RAPIDS, HuggingFace, and Dask. (It's incredibly fast.)
You can run the WordPiece Tokenizer on GPUs with RAPIDS (cuDF).
As the RAPIDS installation guide shows, it only supports Linux and its CUDA version requirements differ from the other frameworks, so Docker is the best choice. GPU tokenization is faster than CPU for batch processing but slower than CPU for streaming processing.
There are good example code and explanations in the [blog](https://developer.nvidia.com/blog/run-state-of-the-art-nlp-workloads-at-scale-with-rapids-huggingface-and-dask/#:~:text=,and then used in subsequent). To use cuDF, you must first convert vocab.txt to a hash_vocab file as shown below. The problem is that the hash_vocab function cannot convert multilingual vocabularies, so cuDF's WordpieceTokenizer cannot be used if the vocab contains any characters other than English/Chinese.
import cudf
from cudf.utils.hash_vocab_utils import hash_vocab
hash_vocab('bert-base-cased-vocab.txt', 'voc_hash.txt')
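Once the hashed vocabulary exists, GPU tokenization goes through cuDF's SubwordTokenizer. The following is a sketch along the lines of the cuDF documentation; the parameter values are arbitrary and the exact signature may vary between cuDF releases.

import cudf
from cudf.core.subword_tokenizer import SubwordTokenizer

# Load the hashed vocabulary produced by hash_vocab above.
gpu_tokenizer = SubwordTokenizer('voc_hash.txt', do_lower_case=True)

texts = cudf.Series(['is there any doubt about it "None whatsoever"'])
output = gpu_tokenizer(texts,
                       max_length=64,
                       max_num_rows=len(texts),
                       padding='max_length',
                       return_tensors='cp',  # CuPy arrays; 'pt' would return PyTorch tensors
                       truncation=True)
print(output['input_ids'])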
Development roadmap (completed and planned items):
- [x] BidirectionalWordPieceTokenizer
- [x] BatchEncoder with multithreading.
- [x] Replace std::list with boost::intrusive::list.
- [x] MaxMatch-Dropout: Subword Regularization for WordPiece option.
- [x] Use stack memory to reduce memory allocation (C-style: alloca, _alloca).
- [x] Support a parallel-processing option for single encode.
- [ ] circle.ai
- [ ] Implement distribution of compiled wheel packages for installation.
- [ ] SIMD
- [ ] CUDA version.
FlashTokenizer is inspired by the FlashAttention, FlashInfer, FastBertTokenizer, and tokenizers-cpp projects.
- WordPiece
  - 📒 huggingface/tokenizers (Rust)
    - Rust implementation of transformers.BertTokenizerFast.
    - 🔵 Provided as a Python package.
  - 🔥 FastBertTokenizer (C#)
    - Demonstrates incredibly fast performance, but accuracy drops significantly for non-English input.
  - ❌ BertTokenizers (C#)
    - FastBertTokenizer (C#) vs BertTokenizers (C#) confirms that FastBertTokenizer (C#) is faster.
  - 🔥 rust-tokenizers (Rust)
    - Slower than BertTokenizerFlash and Blingfire, but faster and more accurate than the other implementations.
    - 🔵 Provided as a Python package.
  - ❌ tokenizers-cpp (C++)
    - tokenizers-cpp is a wrapper around SentencePiece and HuggingFace's Rust implementation, so benchmarking it separately is meaningless.
  - ❌ bertTokenizer (Java)
    - Java is not covered.
  - ✅ ZhuoruLin/fast-wordpiece (Rust)
    - A Rust implementation using LinMaxMatching, runnable only from Rust and expected to be no faster than the C++ implementation.
  - ❌ huggingface_tokenizer_cpp (C++)
    - Very slow due to a naive C++ implementation.
  - ❌ SeanLee97/BertWordPieceTokenizer.jl (Julia)
    - Julia is not covered.
- BPE
- SentencePiece
- https://medium.com/@techhara/which-bert-tokenizer-is-faster-b832aa978b46
- https://medium.com/@atharv6f_47401/wordpiece-tokenization-a-bpe-variant-73cc48865cbf
- https://www.restack.io/p/transformer-models-bert-answer-fast-berttokenizerfast-cat-ai
- https://medium.com/@anmolkohli/my-notes-on-bert-tokenizer-and-model-98dc22d0b64
- https://nocomplexity.com/documents/fossml/nlpframeworks.html
- https://github.com/martinus/robin-hood-hashing
- https://arxiv.org/abs/2012.15524
- https://github.com/google/highway