FlashTokenizer

The world's fastest CPU tokenizer library!

🇨🇳 简体中文 🇰🇷한국어 🇯🇵日本語

EFFICIENT AND OPTIMIZED TOKENIZER ENGINE FOR LLM INFERENCE SERVING

FlashTokenizer is a high-performance C++ implementation of the BertTokenizer used for LLM inference. In the same spirit as FlashAttention and FlashInfer, it aims for the highest speed and accuracy of any tokenizer, and it is roughly 10x faster than BertTokenizerFast in transformers.

[!NOTE]

Why? (See the banner image in the repository.)



FlashTokenizer includes the following core features:

[!TIP]

  • Implemented in C++17.

    • macOS: clang++.
    • Windows: Visual Studio 2022.
    • Ubuntu: g++.
  • Equally fast from Python via pybind11 (see the quick preview below).

  • Parallel processing at the C++ level using OpenMP.
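As a quick preview of the Python API (shown in full in the Sample section below), loading a vocabulary and encoding a single text looks like this:

from flash_tokenizer import BertTokenizerFlash

# Load a BERT vocabulary by model name, as in the Sample section.
tokenizer = BertTokenizerFlash.from_pretrained('bert-base-multilingual-cased')

# Tokenize one text and read back its input_ids.
ids = tokenizer('FlashTokenizer is fast.', max_length=512, padding="longest").input_ids[0]
print(ids)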

News

[!IMPORTANT]
[Apr 02 2025]

  • Added performance benchmarking code.
  • Performance benchmarking is done in Python; the required packages can be installed via setup.sh.
  • A minor performance improvement was achieved by adding a tokenize_early_stop feature to BasicTokenizer.
  • OpenMP showed better performance than std::thread across Windows, Linux, and macOS, so we have switched exclusively to OpenMP (see the thread-count note below).
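Because parallelism now comes from the OpenMP runtime, the standard OMP_NUM_THREADS environment variable should control the number of worker threads. This is general OpenMP behavior rather than a documented flash-tokenizer option, and the variable must be set before the extension module loads:

import os

# Assumption: the OpenMP runtime linked by the extension honors OMP_NUM_THREADS.
# Set it before the C++ extension is imported and initializes its thread pool.
os.environ["OMP_NUM_THREADS"] = "8"

from flash_tokenizer import BertTokenizerFlash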

[Mar 31 2025]

  • Pre-built wheel (.whl) files are now provided for each OS.

[Mar 22 2025]

  • Added a DFA to the Aho–Corasick trie.

[Mar 21 2025]

  • Improved tokenizer accuracy.

[Mar 19 2025]

  • Reduced memory usage and slightly improved performance by applying Aho–Corasick-based LinMaxMatching.
  • Improved branch pipelining and applied force-inlining across all functions.
  • Removed unnecessary operations from WordpieceTokenizer(Backward).
  • Optimized all functions to compute results directly; apart from the Bloom filter, direct computation turned out to be faster than caching.
  • Punctuation, control, and whitespace character sets are defined in advance as constexpr and used as Bloom filters.
  • Reduced unnecessary memory allocation, guided by statistical memory profiling.
  • With ✨FlashTokenizer✨, bert-base-uncased can process about 35K texts per second on a single core, roughly 28µs per text.

[Mar 18 2025]

  • Accuracy improvements in BasicTokenizer raise the overall accuracy and, in particular, produce more accurate results for Unicode input.

[Mar 14 2025]

  • The performance of WordPieceTokenizer and WordPieceBackwordTokenizer has been improved using a trie, as introduced in Fast WordPiece Tokenization.
  • Using FastPoolAllocator with std::list improves performance in SingleEncoding, but it is not thread-safe, so plain std::list<std::string> is kept in BatchEncoding. In BatchEncoding, OpenMP is removed entirely and only std::thread is used.

[Mar 10 2025]

  • Performance improvements through faster token mapping with robin_hood and memory copy minimization with std::list.

Token-to-id map performance test: the fastest container, robin_hood::unordered_flat_map<std::string, int>, is used for the token/id map.

[Mar 09 2025] Completed development of flash-tokenizer for BertTokenizer.

1. Installation

Requirements

  • Windows (AMD64), macOS (ARM64), Ubuntu (x86-64).
  • g++ / clang++ / MSVC.
  • Python 3.8 – 3.13.

Install from PIP

On Windows, you need to install vc_redist.x64.exe.

# Windows
pip install -U flash-tokenizer
# Linux
pip install -U flash-tokenizer
# MacOS
pip install -U flash-tokenizer

Install from Source

git clone https://github.com/NLPOptimize/flash-tokenizer
cd flash-tokenizer/prj
pip install .
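Either way, a quick import confirms that the compiled extension loads (BertTokenizerFlash is the class used throughout this README):

# Sanity check: importing the class loads the native pybind11 module.
from flash_tokenizer import BertTokenizerFlash
print(BertTokenizerFlash)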

2. Sample

from flash_tokenizer import BertTokenizerFlash
from transformers import BertTokenizer

titles = [
    '绝不能放弃,世界上没有失败,只有放弃。',
    'is there any doubt about it "None whatsoever"',
    "세상 어떤 짐승이 이를 드러내고 사냥을 해? 약한 짐승이나 몸을 부풀리지, 진짜 짐승은 누구보다 침착하지.",
    'そのように二番目に死を偽装して生き残るようになったイタドリがどうして初めて見る自分をこんなに気遣ってくれるのかと尋ねると「私が大切にする人たちがあなたを大切にするから」と答えては'
]

tokenizer1 = BertTokenizerFlash.from_pretrained('bert-base-multilingual-cased')
tokenizer2 = BertTokenizer.from_pretrained('bert-base-multilingual-cased')

correct = 0
for title in titles:
    print(title)
    tokens1 = tokenizer1.tokenize(title)
    tokens2 = tokenizer2.tokenize(title)
    ids1 = tokenizer1(title, max_length=512, padding="longest").input_ids[0]
    ids2 = tokenizer2(title, max_length=512, padding="longest", return_tensors="np").input_ids[0].tolist()
    if tokens1 == tokens2 and ids1 == ids2:
        correct += 1
        print("Accept!")
    else:
        print("Wrong Answer")
    print(ids1)
    print(ids2)
    print()

print(f'Accuracy: {correct * 100.0 / len(titles):.2f}%')

Output:

绝不能放弃,世界上没有失败,只有放弃。
Accept!
[101, 6346, 2080, 6546, 4284, 3704, 10064, 2087, 5621, 2078, 4917, 4461, 3204, 7480, 10064, 2751, 4461, 4284, 3704, 1882, 102]
[101, 6346, 2080, 6546, 4284, 3704, 10064, 2087, 5621, 2078, 4917, 4461, 3204, 7480, 10064, 2751, 4461, 4284, 3704, 1882, 102]

is there any doubt about it "None whatsoever"
Accept!
[101, 10124, 11155, 11178, 86697, 10978, 10271, 107, 86481, 12976, 11669, 23433, 107, 102]
[101, 10124, 11155, 11178, 86697, 10978, 10271, 107, 86481, 12976, 11669, 23433, 107, 102]

세상 어떤 짐승이 이를 드러내고 사냥을 해? 약한 짐승이나 몸을 부풀리지, 진짜 짐승은 누구보다 침착하지.
Accept!
[101, 9435, 14871, 55910, 9710, 48210, 10739, 35756, 9113, 30873, 31605, 11664, 9405, 118729, 10622, 9960, 136, 9539, 11102, 9710, 48210, 43739, 9288, 10622, 9365, 119407, 12692, 12508, 117, 9708, 119235, 9710, 48210, 10892, 9032, 17196, 80001, 9783, 119248, 23665, 119, 102]
[101, 9435, 14871, 55910, 9710, 48210, 10739, 35756, 9113, 30873, 31605, 11664, 9405, 118729, 10622, 9960, 136, 9539, 11102, 9710, 48210, 43739, 9288, 10622, 9365, 119407, 12692, 12508, 117, 9708, 119235, 9710, 48210, 10892, 9032, 17196, 80001, 9783, 119248, 23665, 119, 102]

そのように二番目に死を偽装して生き残るようになったイタドリがどうして初めて見る自分をこんなに気遣ってくれるのかと尋ねると「私が大切にする人たちがあなたを大切にするから」と答えては
Accept!
[101, 11332, 24273, 2150, 5632, 5755, 1943, 4805, 1980, 2371, 7104, 11592, 5600, 1913, 4814, 1975, 27969, 15970, 21462, 15713, 21612, 10898, 56910, 22526, 22267, 2547, 19945, 7143, 1975, 6621, 2534, 1980, 28442, 60907, 11312, 4854, 7770, 14813, 18825, 58174, 75191, 11662, 3456, 1945, 100812, 1890, 5949, 1912, 3197, 2535, 84543, 2179, 78776, 111787, 22946, 20058, 11377, 3197, 2535, 84543, 16867, 1891, 1940, 6076, 27144, 11588, 102]
[101, 11332, 24273, 2150, 5632, 5755, 1943, 4805, 1980, 2371, 7104, 11592, 5600, 1913, 4814, 1975, 27969, 15970, 21462, 15713, 21612, 10898, 56910, 22526, 22267, 2547, 19945, 7143, 1975, 6621, 2534, 1980, 28442, 60907, 11312, 4854, 7770, 14813, 18825, 58174, 75191, 11662, 3456, 1945, 100812, 1890, 5949, 1912, 3197, 2535, 84543, 2179, 78776, 111787, 22946, 20058, 11377, 3197, 2535, 84543, 16867, 1891, 1940, 6076, 27144, 11588, 102]

Accuracy: 100.00%

3. Other Implementations


Most BERT-based models use the WordPiece Tokenizer, whose code can be found here. (A simple implementation of Huggingface can be found here).

Since BertTokenizer is a CPU-intensive algorithm, tokenization can become the inference bottleneck, and an unoptimized tokenizer can be severely slow. A good example is the BidirectionalWordpieceTokenizer introduced in KR-BERT. Most of its code is identical to the standard WordPiece tokenizer, but it also traverses sub-tokens backwards and keeps the larger-valued match compared with the forward traversal. The paper claims accuracy improvements, but other quantitative metrics are hard to find, the accuracy gain is not significant, and the tokenizer is seriously slowed down.
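To make the direction issue concrete, here is a simplified Python sketch of greedy longest-match-first WordPiece in both directions. This is only an illustration of the general technique with a made-up toy vocabulary, not KR-BERT's or FlashTokenizer's actual code:

def wordpiece_forward(word, vocab):
    """Greedy longest-match-first WordPiece, scanning left to right."""
    tokens, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        while start < end:
            cand = word[start:end] if start == 0 else "##" + word[start:end]
            if cand in vocab:
                piece = cand
                break
            end -= 1
        if piece is None:
            return ["[UNK]"]
        tokens.append(piece)
        start = end
    return tokens


def wordpiece_backward(word, vocab):
    """Mirror image: greedy longest-match-first, scanning right to left."""
    tokens, end = [], len(word)
    while end > 0:
        start, piece = 0, None
        while start < end:
            cand = word[start:end] if start == 0 else "##" + word[start:end]
            if cand in vocab:
                piece = cand
                break
            start += 1
        if piece is None:
            return ["[UNK]"]
        tokens.append(piece)
        end = start
    return list(reversed(tokens))


# Toy (hypothetical) vocabulary where the two directions disagree.
vocab = {"a", "ab", "abc", "##d", "##cd", "##bcd"}
print(wordpiece_forward("abcd", vocab))   # ['abc', '##d']
print(wordpiece_backward("abcd", vocab))  # ['a', '##bcd']

With this toy vocabulary the two directions segment the same word differently, which is exactly the extra work the bidirectional tokenizer performs on every word.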

  • transformers (Rust Impl, PyO3)
  • paddlenlp (C++ Impl, pybind)
  • tensorflow-text (C++ Impl, pybind)
  • blingfire (C++ Impl, Native binary call)

Most developers use transformers.BertTokenizer or transformers.AutoTokenizer, but AutoTokenizer actually returns a transformers.BertTokenizerFast.

Naturally, it is faster than BertTokenizer, but its results are not exactly identical, which means you are already giving up 100% accuracy at the tokenizer stage.
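You can check the discrepancy on your own inputs with nothing but the standard transformers API; the model name below is simply the one used elsewhere in this README:

from transformers import BertTokenizer, BertTokenizerFast

model = "bert-base-multilingual-cased"
slow = BertTokenizer.from_pretrained(model)
fast = BertTokenizerFast.from_pretrained(model)

text = 'is there any doubt about it "None whatsoever"'

# For most inputs the two agree; over millions of texts small mismatches
# accumulate, which is what the accuracy columns in the tables below measure.
print(slow(text).input_ids == fast(text).input_ids)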

BertTokenizer is not only provided by transformers. PaddleNLP and tensorflow-text also provide BertTokenizer.

Then there is Blingfire, which was developed by Microsoft and has effectively been abandoned.

PaddleNLP requires PaddlePaddle and provides tokenizer functionality starting with version 3.0rc. You can install it as follows:

##### Install PaddlePaddle, PaddleNLP
python -m pip install paddlepaddle==3.0.0b1 -i https://www.paddlepaddle.org.cn/packages/stable/cpu/
pip install --upgrade paddlenlp==3.0.0b3
##### Install transformers
pip install transformers==4.47.1
##### Install tf-text
pip install tensorflow-text==2.18.1
##### Install blingfire
pip install blingfire

Except for blingfire, vocab.txt is all you need to run the tokenizer right away (blingfire also requires only vocab.txt, but needs about 8 hours of training before it can be used).

The implementations we'll look at in detail are PaddleNLP's BertTokenizerFast and blingfire.

  • blingfire: Uses a Deterministic Finite State Machine (DFSM) to eliminate one linear scan and unnecessary comparisons, achieving O(n) time, which is impressive.
    • Advantages: 5-10x faster than other implementations.
    • Disadvantages: long training time (8 hours) and lower accuracy than other implementations (and it is difficult to get help because development has effectively stopped).
  • PaddleNLP: As shown in the experiments below, PaddleNLP's BertTokenizerFast is consistently faster than Huggingface's BertTokenizerFast on every OS, whether x86 or Arm.
    • Advantages: The internal implementation is in C++. Compared to transformers.BertTokenizerFast, which is implemented in Rust, it is about 1.2x faster while producing exactly the same output.
      • You cannot specify pt (PyTorch tensors) in return_tensors, but this is not a problem.
    • Disadvantages: none, other than the need to install PaddlePaddle and PaddleNLP (a usage sketch follows this list).
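For reference, a hedged sketch of the PaddleNLP path, assuming a PaddleNLP >= 3.0rc build that exposes BertTokenizerFast (the class referenced in the benchmark tables); check your installed version's API before relying on it:

# Assumption: paddlenlp.transformers provides BertTokenizerFast with a
# Huggingface-style call interface, as used in the benchmarks below.
from paddlenlp.transformers import BertTokenizerFast

tok = BertTokenizerFast.from_pretrained("bert-base-multilingual-cased")
ids = tok('is there any doubt about it "None whatsoever"', max_length=512)["input_ids"]
print(ids)  # note: return_tensors="pt" (PyTorch) is not available here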

4. Performance test

4.1 Performance test (Single text encoding)

Accuracy is measured using Google's original BertTokenizer implementation as the baseline. If even one of the input_ids differs, the output is counted as incorrect.

Tokenizer Performance Comparison

| Tokenizer | Elapsed Time | Texts | Accuracy |
|---|---|---|---|
| BertTokenizerFast(Huggingface) | 84.3700s | 1,000,000 | 99.9226% |
| BertTokenizerFast(PaddleNLP) | 75.6551s | 1,000,000 | 99.9226% |
| FastBertTokenizer(Tensorflow) | 219.1259s | 1,000,000 | 99.9160% |
| Blingfire | 13.6183s | 1,000,000 | 99.8991% |
| FlashBertTokenizer | 8.1968s | 1,000,000 | 99.8216% |

| Tokenizer | Elapsed Time | Texts | Accuracy |
|---|---|---|---|
| BertTokenizerFast(Huggingface) | 91.7882s | 1,000,000 | 99.9326% |
| BertTokenizerFast(PaddleNLP) | 83.6839s | 1,000,000 | 99.9326% |
| FastBertTokenizer(Tensorflow) | 204.2240s | 1,000,000 | 99.1379% |
| Blingfire | 13.2374s | 1,000,000 | 99.8588% |
| FlashBertTokenizer | 7.6313s | 1,000,000 | 99.6884% |

| Tokenizer | Elapsed Time | Texts | Accuracy |
|---|---|---|---|
| BertTokenizerFast(Huggingface) | 212.1570s | 2,000,000 | 99.7964% |
| BertTokenizerFast(PaddleNLP) | 193.9921s | 2,000,000 | 99.7964% |
| FastBertTokenizer(Tensorflow) | 394.1574s | 2,000,000 | 99.7892% |
| Blingfire | 38.9013s | 2,000,000 | 99.9780% |
| FlashBertTokenizer | 20.4570s | 2,000,000 | 99.8970% |

| Tokenizer | Elapsed Time | Texts | Accuracy |
|---|---|---|---|
| BertTokenizerFast(Huggingface) | 52.5744s | 1,000,000 | 99.6754% |
| BertTokenizerFast(PaddleNLP) | 44.8943s | 1,000,000 | 99.6754% |
| FastBertTokenizer(Tensorflow) | 198.0270s | 1,000,000 | 99.6639% |
| Blingfire | 13.0701s | 1,000,000 | 99.9434% |
| FlashBertTokenizer | 5.2601s | 1,000,000 | 99.9484% |

| Tokenizer | Elapsed Time | Texts | Accuracy |
|---|---|---|---|
| FlashBertTokenizer | 5.1875s | 1,000,001 | 99.9484% |
| Blingfire | 13.2783s | 1,000,001 | 99.9435% |
| rust_tokenizers(guillaume-be) | 16.6308s | 1,000,001 | 99.9829% |
| BertTokenizerFast(PaddleNLP) | 44.5476s | 1,000,001 | 99.6754% |
| BertTokenizerFast(Huggingface) | 53.2525s | 1,000,001 | 99.6754% |
| FastBertTokenizer(Tensorflow) | 202.1633s | 1,000,001 | 99.6639% |

| Tokenizer | Elapsed Time | Texts | Accuracy |
|---|---|---|---|
| BertTokenizerFast(Huggingface) | 208.8858s | 2,000,000 | 99.7964% |
| BertTokenizerFast(PaddleNLP) | 192.6593s | 2,000,000 | 99.7964% |
| FastBertTokenizer(Tensorflow) | 413.2010s | 2,000,000 | 99.7892% |
| Blingfire | 39.3765s | 2,000,000 | 99.9780% |
| FlashBertTokenizer | 22.8820s | 2,000,000 | 99.8970% |

| Tokenizer | Elapsed Time | Texts | Accuracy |
|---|---|---|---|
| FlashBertTokenizer | 22.0901s | 2,000,001 | 99.8971% |
| Blingfire | 37.9836s | 2,000,001 | 99.9780% |
| rust_tokenizers(guillaume-be) | 98.0366s | 2,000,001 | 99.9976% |
| BertTokenizerFast(PaddleNLP) | 208.6889s | 2,000,001 | 99.7964% |
| BertTokenizerFast(Huggingface) | 219.2644s | 2,000,001 | 99.7964% |
| FastBertTokenizer(Tensorflow) | 413.9725s | 2,000,001 | 99.7892% |

| Tokenizer | Elapsed Time | Texts | Accuracy |
|---|---|---|---|
| BertTokenizerBidirectional(KR-BERT Original) | 128.3320s | 1,000,000 | 100.0000% |
| FlashBertTokenizer(Bidirectional) | 10.4492s | 1,000,000 | 99.9631% |
The tokenizer sits in the preprocessing stage of the end-to-end inference pipeline:

%%{ init: { "er" : { "layoutDirection" : "LR" } } }%%
erDiagram
    Text ||--o{ Preprocess : tokenize
    Preprocess o{--|| Inference : memcpy_h2d
    Inference o{--|| Postprocess : memcpy_d2h
6. Compatibility

FlashBertTokenizer can be used with any framework. CUDA version compatibility for each framework is also important for fast inference of LLMs.

  • PyTorch no longer supports installation using conda.
  • ONNXRUNTIME ships separate packages per CUDA version.
  • PyTorch is also moving to drop older CUDA 12.x builds in favor of the newer CUDA 12.8, while the general trend is for all frameworks to keep a CUDA 11.8 build.
    • CUDA 12.x targets the newest GPUs (Hopper and Blackwell); on GPUs like Volta, CUDA 11.8 is faster than CUDA 12.x.
| DL Framework | Version | OS | CPU | CUDA 11.8 | CUDA 12.3 | CUDA 12.4 | CUDA 12.6 | CUDA 12.8 |
|---|---|---|---|---|---|---|---|---|
| PyTorch | 2.6 | Linux, Windows | | | | | | |
| PyTorch | 2.7 | Linux, Windows | | | | | | |
| ONNXRUNTIME(11) | 1.20.x | Linux, Windows | | | | | | |
| ONNXRUNTIME(12) | 1.20.x | Linux, Windows | | | | | | |
| PaddlePaddle | 3.0-beta | Linux, Windows | | | | | | |

7. GPU Tokenizer

An example of installing and running cuDF is given in "Run State of the Art NLP Workloads at Scale with RAPIDS, HuggingFace, and Dask" (it is incredibly fast).

You can run the WordPiece tokenizer on GPUs with RAPIDS (cuDF).

As the RAPIDS installation guide shows, it only supports Linux, and its CUDA version requirements differ from the other frameworks, so Docker is the most practical choice. GPU tokenization is faster than the CPU for batch processing but slower than the CPU for streaming processing.

There are good example codes and explanations in the blog: https://developer.nvidia.com/blog/run-state-of-the-art-nlp-workloads-at-scale-with-rapids-huggingface-and-dask/. To use cuDF, you must first convert vocab.txt to a hash vocab as shown below. The problem is that the hash_vocab function cannot convert multilingual vocabularies, so cuDF's WordpieceTokenizer cannot be used if the vocab contains any characters other than English/Chinese.

import cudf
from cudf.utils.hash_vocab_utils import hash_vocab

# Convert a standard BERT vocab.txt into the hashed vocabulary format cuDF expects.
hash_vocab('bert-base-cased-vocab.txt', 'voc_hash.txt')
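Once the hashed vocabulary exists, GPU tokenization looks roughly like the following. This is a hedged sketch based on the cuDF SubwordTokenizer API described in the linked blog; argument names can differ between RAPIDS releases:

import cudf
from cudf.core.subword_tokenizer import SubwordTokenizer

# Assumption: SubwordTokenizer consumes the hashed vocab produced by hash_vocab above.
tokenizer = SubwordTokenizer('voc_hash.txt', do_lower_case=True)

texts = cudf.Series(["is there any doubt about it", "tokenize on the GPU"])
output = tokenizer(texts,
                   max_length=64,
                   max_num_rows=len(texts),
                   padding="max_length",
                   truncation=True,
                   return_tensors="cp")   # CuPy device arrays

print(output["input_ids"].shape)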

TODO

Acknowledgement

FlashTokenizer is inspired by the FlashAttention, FlashInfer, FastBertTokenizer, and tokenizers-cpp projects.

Performance comparison

⭐ History

Star History Chart

References
