nexa-sdk

Run frontier LLMs and VLMs with day-0 model support across GPU, NPU, and CPU, with comprehensive runtime coverage for PC (Python/C++), mobile (Android and iOS), and Linux/IoT (Arm64 and x86 Docker). Supports OpenAI GPT-OSS, IBM Granite-4, Qwen3-VL, Gemma-3n, Ministral-3, and more.

Nexa SDK is a comprehensive toolkit supporting ONNX and GGML models for text generation, image generation, vision-language models (VLM), and text-to-speech (TTS). It offers an OpenAI-compatible API server with JSON schema mode and streaming support, along with a user-friendly Streamlit UI. Nexa SDK runs on any device with a Python environment, with GPU acceleration supported. The toolkit provides broad model support, a conversion engine, an inference engine for various tasks, and features that differentiate it from similar tools.
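The OpenAI-compatible server accepts standard chat-completions requests. A minimal stdlib-only sketch of building such a request body (the field names and the `/v1/chat/completions` path follow the OpenAI API convention; whether the Nexa server honors every option should be checked against its docs):

```python
import json

def build_chat_request(model: str, user_message: str, stream: bool = True) -> str:
    """Build an OpenAI-style chat-completions payload as a JSON string.

    The field names follow the OpenAI chat API convention, which is what
    "OpenAI-compatible" servers generally accept.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": stream,  # token-by-token streaming, as advertised above
        "max_tokens": 100,
    }
    return json.dumps(payload)

# The request body you would POST to the server's /v1/chat/completions
# endpoint (path assumed from the OpenAI convention, not stated here).
body = build_chat_request("NexaAI/Qwen3-0.6B-GGUF", "Hello, tell me a joke")
print(body)
```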

README:

Nexa AI Banner

简体中文 | English

🤝 Supported chipmakers

NexaSDK for Mobile - #1 Product of the Day | NexaAI/nexa-sdk - #1 Repository of the Day

Documentation | Vote for Next Models | X account | Join us on Discord | Join us on Slack

NexaSDK

NexaSDK lets you build the smartest and fastest on-device AI with minimum energy. It is a highly performant local inference framework that runs the latest multimodal AI models locally on NPU, GPU, and CPU - across Android, Windows, Linux, macOS, and iOS devices with a few lines of code.

NexaSDK supports the latest models weeks or months before anyone else: Qwen3-VL, DeepSeek-OCR, Gemma3n (Vision), and more.

⭐ Star this repo to keep up with exciting updates and new releases on the latest on-device AI capabilities.

🏆 Recognized Milestones

🚀 Quick Start

| Platform | Links |
| --- | --- |
| 🖥️ CLI | Quick Start ｜ Docs |
| 🐍 Python | Quick Start ｜ Docs |
| 🤖 Android | Quick Start ｜ Docs |
| 🐳 Linux Docker | Quick Start ｜ Docs |
| 🍎 iOS | Quick Start ｜ Docs |

đŸ–Ĩī¸ CLI

Download:

| Windows | macOS | Linux |
| --- | --- | --- |
| arm64 (Qualcomm NPU) | arm64 (Apple Silicon) | arm64 |
| x64 (Intel/AMD NPU) | x64 | x64 |

Run your first model:

```shell
# Chat with Qwen3
nexa infer ggml-org/Qwen3-1.7B-GGUF

# Multimodal: drag images into the CLI
nexa infer NexaAI/Qwen3-VL-4B-Instruct-GGUF

# NPU (Windows arm64 with Snapdragon X Elite)
nexa infer NexaAI/OmniNeural-4B
```
  • Models: LLM, Multimodal, ASR, OCR, Rerank, Object Detection, Image Generation, Embedding
  • Formats: GGUF, MLX, NEXA
  • NPU Models: Model Hub
  • 📖 CLI Reference Docs

🐍 Python SDK

```shell
pip install nexaai
```

```python
from nexaai import LLM, GenerationConfig, ModelConfig, LlmChatMessage

llm = LLM.from_(model="NexaAI/Qwen3-0.6B-GGUF", config=ModelConfig())

conversation = [
    LlmChatMessage(role="user", content="Hello, tell me a joke")
]
prompt = llm.apply_chat_template(conversation)
for token in llm.generate_stream(prompt, GenerationConfig(max_tokens=100)):
    print(token, end="", flush=True)
```
  • Models: LLM, Multimodal, ASR, OCR, Rerank, Object Detection, Image Generation, Embedding
  • Formats: GGUF, MLX, NEXA
  • NPU Models: Model Hub
  • 📖 Python SDK Docs
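The `generate_stream` loop in the example above yields text incrementally. When you also want the full reply as one string (for instance, to append it back into `conversation` as an assistant message for a multi-turn chat), the collection pattern is plain Python and independent of the SDK. A sketch with a stand-in token iterator (hypothetical; substitute the real `llm.generate_stream(...)` call):

```python
from typing import Iterable

def collect_stream(tokens: Iterable[str]) -> str:
    """Print tokens as they arrive and return the assembled reply."""
    pieces = []
    for token in tokens:
        print(token, end="", flush=True)  # live output, as in the SDK example
        pieces.append(token)
    return "".join(pieces)

# Stand-in for llm.generate_stream(prompt, GenerationConfig(max_tokens=100)):
fake_stream = iter(["Why ", "did ", "the ", "GPU ", "overheat?"])
reply = collect_stream(fake_stream)
# `reply` could now be wrapped in an assistant LlmChatMessage for the next turn.
```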

🤖 Android SDK

Add to your `app/AndroidManifest.xml`:

```xml
<application android:extractNativeLibs="true">
```

Add to your build.gradle.kts:

```kotlin
dependencies {
    implementation("ai.nexa:core:0.0.19")
}
```

```kotlin
// Initialize SDK
NexaSdk.getInstance().init(this)

// Load and run model
VlmWrapper.builder()
    .vlmCreateInput(VlmCreateInput(
        model_name = "omni-neural",
        model_path = "/data/data/your.app/files/models/OmniNeural-4B/files-1-1.nexa",
        plugin_id = "npu",
        config = ModelConfig()
    ))
    .build()
    .onSuccess { vlm ->
        vlm.generateStreamFlow("Hello!", GenerationConfig()).collect { print(it) }
    }
```
  • Requirements: Android minSdk 27, Qualcomm Snapdragon 8 Gen 4 Chip
  • Models: LLM, Multimodal, ASR, OCR, Rerank, Embedding
  • NPU Models: Supported Models
  • 📖 Android SDK Docs

đŸŗ Linux Docker

```shell
docker pull nexa4ai/nexasdk:latest

export NEXA_TOKEN="your_token_here"
docker run --rm -it --privileged \
  -e NEXA_TOKEN \
  nexa4ai/nexasdk:latest infer NexaAI/Granite-4.0-h-350M-NPU
```

🍎 iOS SDK

Download NexaSdk.xcframework and add to your Xcode project.

```swift
import NexaSdk

// Example: Speech Recognition
let asr = try Asr(plugin: .ane)
try await asr.load(from: modelURL)

let result = try await asr.transcribe(options: .init(audioPath: "audio.wav"))
print(result.asrResult.transcript)
```

âš™ī¸ Features & Comparisons

| Features | NexaSDK | Ollama | llama.cpp | LM Studio |
| --- | --- | --- | --- | --- |
| NPU support | ✅ NPU-first | ❌ | ❌ | ❌ |
| Android/iOS SDK support | ✅ NPU/GPU/CPU support | ⚠️ | ⚠️ | ❌ |
| Linux support (Docker image) | ✅ | ✅ | ✅ | ❌ |
| Day-0 model support in GGUF, MLX, NEXA | ✅ | ❌ | ⚠️ | ❌ |
| Full multimodality support | ✅ Image, Audio, Text, Embedding, Rerank, ASR, TTS | ⚠️ | ⚠️ | ⚠️ |
| Cross-platform support | ✅ Desktop, Mobile (Android, iOS), Automotive, IoT (Linux) | ⚠️ | ⚠️ | ⚠️ |
| One line of code to run | ✅ | ✅ | ⚠️ | ✅ |
| OpenAI-compatible API + Function calling | ✅ | ✅ | ✅ | ✅ |

Legend: ✅ Supported   |   ⚠️ Partial or limited support   |   ❌ No
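The table lists function calling over the OpenAI-compatible API. In that convention the model returns a `tool_calls` entry whose arguments arrive as a JSON string. A stdlib-only sketch of extracting a call from such a response (the response shape follows the OpenAI schema; treating it as what Nexa's server returns is an assumption, and the sample data is purely illustrative):

```python
import json

def extract_tool_call(response: dict):
    """Return (function_name, parsed_arguments) from an OpenAI-style
    chat-completions response, or None if the model produced plain text."""
    message = response["choices"][0]["message"]
    calls = message.get("tool_calls") or []
    if not calls:
        return None
    fn = calls[0]["function"]
    # Arguments are delivered as a JSON string, so they need a second decode.
    return fn["name"], json.loads(fn["arguments"])

# Example response in the OpenAI function-calling shape (illustrative data):
sample = {
    "choices": [{
        "message": {
            "role": "assistant",
            "tool_calls": [{
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "arguments": "{\"city\": \"Seattle\"}",
                },
            }],
        }
    }]
}
name, args = extract_tool_call(sample)
print(name, args)
```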

🙏 Acknowledgements

We would like to thank the following projects:

📄 License

NexaSDK uses a dual licensing model:

CPU/GPU Components

Licensed under Apache License 2.0.

NPU Components

🤝 Contact & Community Support

Business Inquiries

For model launch partnerships, business inquiries, or any other questions, please schedule a call with us here.

Community & Support

Want more model support, backend support, device support or other features? We'd love to hear from you!

Feel free to submit an issue on our GitHub repository with your requests, suggestions, or feedback. Your input helps us prioritize what to build next.

Join our community:

🏆 Nexa × Qualcomm On-Device Bounty Program

Round 1: Build a working Android AI app that runs fully on-device on Qualcomm Hexagon NPU with NexaSDK.

Timeline (PT): Jan 15 → Feb 15
Prizes: $6,500 cash prize, Qualcomm official spotlight, flagship Snapdragon device, expert mentorship, and more

👉 Join & details: https://sdk.nexa.ai/bounty
