rknn-llm

RKLLM software stack is a toolkit designed to help users quickly deploy AI models to Rockchip chips. It consists of RKLLM-Toolkit for model conversion and quantization, RKLLM Runtime for deploying models on Rockchip NPU platform, and RKNPU kernel driver for hardware interaction. The toolkit supports RK3588 and RK3576 series chips and various models like TinyLLAMA, Qwen, Phi, ChatGLM3, Gemma, InternLM2, and MiniCPM. Users can download packages, docker images, examples, and docs from RKLLM_SDK. Additionally, RKNN-Toolkit2 SDK is available for deploying additional AI models.

README:

Description

The RKLLM software stack helps users quickly deploy AI models to Rockchip chips. The overall framework is as follows:

To use the RKNPU, users first run the RKLLM-Toolkit tool on a computer to convert the trained model into an RKLLM-format model, and then perform inference on the development board using the RKLLM C API.

  • RKLLM-Toolkit is a software development kit for users to perform model conversion and quantization on a PC.

  • RKLLM Runtime provides C/C++ programming interfaces for the Rockchip NPU platform to help users deploy RKLLM models and accelerate the development of LLM applications.

  • The RKNPU kernel driver is responsible for interacting with the NPU hardware. It has been open-sourced and can be found in the Rockchip kernel code.
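
The conversion step above maps to a short Python script on the PC side. The sketch below follows the style of the RKLLM-Toolkit example scripts; the model path, output filename, and quantization settings are placeholders, and parameter names may differ between toolkit versions, so treat it as an outline rather than a definitive recipe.

    # Minimal RKLLM-Toolkit conversion sketch (runs on the PC, not on the board).
    # Paths and quantization settings are placeholders.
    from rkllm.api import RKLLM

    MODEL_PATH = "/path/to/Qwen-1.8B"          # Hugging Face model directory (placeholder)
    EXPORT_PATH = "./qwen_w8a8_rk3588.rkllm"   # exported RKLLM model (placeholder)

    llm = RKLLM()

    # 1. Load the trained Hugging Face model.
    if llm.load_huggingface(model=MODEL_PATH) != 0:
        raise SystemExit("load_huggingface failed")

    # 2. Quantize and build for the target NPU platform (rk3588 or rk3576).
    if llm.build(do_quantization=True,
                 quantized_dtype="w8a8",
                 target_platform="rk3588") != 0:
        raise SystemExit("build failed")

    # 3. Export the .rkllm model; this file is then loaded on the board through the RKLLM C API.
    if llm.export_rkllm(EXPORT_PATH) != 0:
        raise SystemExit("export_rkllm failed")

The resulting .rkllm file is copied to the development board, where the RKLLM Runtime C/C++ API loads it and runs inference on the NPU.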

Supported Platforms

  • RK3588 Series
  • RK3576 Series

Supported Models

Supported models include TinyLLAMA, Qwen, Phi, ChatGLM3, Gemma, InternLM2, and MiniCPM; version 1.1.0 adds models such as Llama3, Gemma2, and MiniCPM3 (see the repository for the full list).

Download

You can download the latest package, docker image, examples, documentation, and platform-tool from RKLLM_SDK (fetch code: rkllm).

Note

The changes in version 1.1.0 are significant, making it incompatible with models converted by older versions. Please use the latest toolchain for both model conversion and inference.

RKNN Toolkit2

If you want to deploy additional AI models, we have introduced an SDK called RKNN-Toolkit2. For details, please refer to:

https://github.com/airockchip/rknn-toolkit2

CHANGELOG

v1.1.0

  • Support group-wise quantization (w4a16 with group sizes of 32/64/128, w8a8 with group sizes of 128/256/512); a usage sketch follows this list.
  • Support joint inference with LoRA model loading.
  • Support storage and preloading of prompt cache.
  • Support gguf model conversion (currently only q4_0 and fp16 are supported).
  • Optimize initialization, prefill, and decode time.
  • Support four input types: prompt, embedding, token, and multimodal.
  • Add PC-based simulation accuracy testing and inference interface support for rkllm-toolkit.
  • Add gdq algorithm to improve 4-bit quantization accuracy.
  • Add mixed quantization algorithm, supporting a combination of grouped and non-grouped quantization based on specified ratios.
  • Add support for models such as Llama3, Gemma2, and MiniCPM3.
  • Resolve catastrophic forgetting issue when the number of tokens exceeds max_context.
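
As noted in the group-wise quantization item above, the group size is selected at build time in RKLLM-Toolkit. The sketch below assumes the group size is encoded in the quantized_dtype string (e.g. "w4a16_g64" for 4-bit weights with a group size of 64); this naming is an assumption based on the sizes listed in this changelog, so check the SDK documentation for the exact values accepted by your toolkit version.

    # Sketch: selecting a group-wise quantization type with RKLLM-Toolkit >= 1.1.0.
    # "w4a16_g64" (4-bit weights, 16-bit activations, group size 64) is an assumed
    # dtype string derived from the group sizes listed above; verify against the SDK docs.
    from rkllm.api import RKLLM

    llm = RKLLM()
    assert llm.load_huggingface(model="/path/to/model") == 0   # placeholder model path

    assert llm.build(do_quantization=True,
                     quantized_dtype="w4a16_g64",              # group-wise 4-bit quantization (assumed name)
                     target_platform="rk3588") == 0

    assert llm.export_rkllm("./model_w4a16_g64.rkllm") == 0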

For older versions, please refer to the CHANGELOG.
