
piccolo
SensiML's open-source AutoML solution for Edge AI model development
Stars: 58

README:
Piccolo AI is the open-source version of SensiML Analytics Studio, a software development toolkit for constructing sensor-based AI inference models optimized to run on low-power microcontrollers and IoT edge platforms (ex. x86 embedded PCs, RPi, arm Cortex-A mobile processors). Piccolo AI is intended for individual developers, researchers, and AI enthusiasts and made available under the AGPLv3 license.
The Piccolo AI project includes:
- SensiML's ML Engine: The engine behind SensiML’s AutoML model building, sensor data management, model tracking, and embedded firmware generation
- Embedded ML SDK: SensiML's Inferencing and DSP SDK designed for building and running DSP & ML pipelines on edge devices
- Analytic Studio UI: An Intuitive web interface for working with the SensiML ML Engine to build TinyML applications
- SensiML Python Client: Allows programmatic access to the REST API services through Jupyter Notebooks
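The Python client's actual API is not shown in this README; purely as a hedged illustration of what token-authenticated programmatic access to a REST service looks like from a notebook (the endpoint path, auth scheme, and token below are assumptions, not the documented SensiML interface), a request could be assembled like this:

```python
import urllib.request

def build_request(base_url, path, token):
    """Build an authenticated GET request.

    The "/project/" path and "Token" auth scheme are illustrative
    assumptions, not the documented SensiML REST API.
    """
    req = urllib.request.Request(f"{base_url}{path}")
    req.add_header("Authorization", f"Token {token}")
    return req

# Point at a local Piccolo AI server with a hypothetical API token.
req = build_request("http://localhost:8000", "/project/", "abc123")
print(req.get_header("Authorization"))  # Token abc123
```

In practice the SensiML Python Client wraps calls like this for you; the sketch only shows the underlying request shape.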
Piccolo AI is currently optimized for the classification of time-series sensor data. Common use cases enabled by Piccolo AI include:
Application | Example Use Cases | Piccolo AI Reference Application(s)
---|---|---
Acoustic Event Detection | Intrusion Detection, Machinery / Appliance Monitoring, Public Safety (ex. gunshot recognition), Vehicle Diagnostics, Livestock Monitoring, Worker Safety (ex. leak detection) | Pump State Recognition Demo, Smart Lock Demo
Activity Recognition | Human Gait Analysis, Ergonomic Assessment, Sport/Fitness Tracking, Elder Care Monitoring | Boxing Punch Classification Demo
Gesture Detection | Touchless Machine Interfaces, Smart Home Control, Gaming Interfaces, Driver Monitoring, Interactive Retail Displays | Wizard Wand Demo
Anomaly Detection | Factory/Plant Process Control, Predictive Maintenance, Robotics, Remote Monitoring | Robotic Arm Anomaly Detection
Keyword Spotting | Consumer Electronics UI, Security (ex. Voice Authentication), HMI Command/Control | Keyword Spotting Demo (Part 1, Part 2)
Vibration Classification | Predictive Maintenance, Vehicle Monitoring, Supply Chain Monitoring, Machine State Detection | Smart Drill Demo, Fan State Demo
Piccolo AI is not limited to only the applications listed above. Virtually any time-series sensor input(s) can be used to classify application-specific events of interest. Piccolo AI also provides support for regression models as well as basic feature transform outputs without classification. To learn more about Piccolo AI features and capabilities, see the product page.
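To make the idea of a time-series classification pipeline concrete, here is a minimal sketch of the kind of windowing and feature-extraction step such pipelines are built from. This is plain illustrative Python, not the SensiML ML Engine API; the window sizes, features, and synthetic signal are all assumptions chosen for the example:

```python
import math

def window(signal, size, step):
    """Slice a 1-D signal into overlapping windows of `size` samples."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

def extract_features(w):
    """Basic statistical features of one window: mean, RMS, zero-crossing count."""
    n = len(w)
    mean = sum(w) / n
    rms = math.sqrt(sum(x * x for x in w) / n)
    zero_crossings = sum(1 for a, b in zip(w, w[1:]) if a * b < 0)
    return (mean, rms, zero_crossings)

# Synthetic vibration trace: 100 samples of a sine wave (purely illustrative).
signal = [math.sin(2 * math.pi * 4 * t / 100) for t in range(100)]

# Each window becomes one feature vector that a classifier could consume.
features = [extract_features(w) for w in window(signal, size=50, step=25)]
print(len(features))  # 3 feature vectors from 3 overlapping windows
```

An AutoML tool like Piccolo AI automates choosing these transforms, the windowing parameters, and the downstream classifier, then generates firmware implementing the resulting pipeline.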
The simplest way to get started learning and using the features in Piccolo AI is to sign up for an account on SensiML's managed SaaS service at SensiML Free Trial.
To install and manage Piccolo AI yourself, continue with the instructions and step-by-step installation video provided below.
To try out Piccolo AI on your machine, we recommend using Docker.
- Install and start Docker and Docker Compose: https://docs.docker.com/engine/
  - Note: Windows users will need to use the WSL 2 backend. Follow the instructions here.
  - Note: If using Docker Desktop, go to Preferences > Resources > Advanced and set Memory to at least 12 GB.
- Follow the post-installation instructions as well to avoid having to run docker commands as the sudo user.
- Some functionality requires calling Docker from within Docker. To enable this, make the Docker socket accessible to Docker containers:

  ```
  sudo chmod 666 /var/run/docker.sock
  ```

- By default the repository does not include the Docker images used to generate model/compiler code for devices. Pull the images for the compilers you want to use:

  ```
  docker pull sensiml/sml_x86_generic:9.3.0-v1.0
  docker pull sensiml/sml_armgcc_generic:10.3.1-v1.0
  docker pull sensiml/sml_x86mingw_generic:9.3-v1.0
  docker pull sensiml/sensiml_tensorflow:0a4bec2a-v4.0
  ```

- Make sure you have the latest sensiml base Docker image. If not, pull it:

  ```
  docker pull sensiml/base
  ```

- Make sure Docker is running by checking your system tray icons. If it is not running, open the Docker Desktop application in Windows to start the Docker background service.
- Open an Ubuntu terminal through Windows Subsystem for Linux and clone the repository:

  ```
  git clone https://github.com/sensiml/piccolo
  cd piccolo
  ```

- Use Docker Compose to start the services:

  ```
  docker compose up
  ```
Login via the Web UI Interface

Go to http://localhost:8000 in your browser to log in to the UI. See the Getting Started Guide to get started.

The default username and password are stored in the src/server/datamanager/fixtures/default.yaml file:

username: [email protected]
password: TinyML4Life
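If you want to read those defaults programmatically rather than copy them by hand, a simple line scan is enough. This is a sketch only: the real default.yaml is a Django fixture whose exact layout may differ, and the username below is a placeholder, not the actual default account:

```python
from io import StringIO

# Hypothetical excerpt standing in for default.yaml; the real fixture's
# structure may differ, and the username here is a placeholder.
fixture = StringIO("""\
username: admin@example.com
password: TinyML4Life
""")

creds = {}
for line in fixture:
    key, _, value = line.partition(":")
    if key.strip() in ("username", "password"):
        creds[key.strip()] = value.strip()

print(creds["password"])  # TinyML4Life
```

Replace the StringIO stand-in with `open("src/server/datamanager/fixtures/default.yaml")` to read the real file from a checkout.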
To help you get up and running quickly, you may also find our video installation guide useful. Click on the video below for a complete walkthrough for getting Piccolo AI installed on a Windows 10 / 11 PC:
The Data Studio is SensiML's standalone Data Capture, Labeling, and Machine Learning Model testing application. It is a companion application to Piccolo AI, but it requires a subscription to use with your local Piccolo AI.
To upgrade, check out the latest code and run the database.initialize container script, which will perform any migrations. You may also need to pull the newest sensiml/base Docker image when the underlying packages have changed.
Piccolo AI is the open-source version of the SensiML ML Engine. As such, its features and capabilities align with the majority of those found in SensiML's ML Engine, for which documentation can be found here.
For more information about our documentation processes or to build them yourself, see the docs README.
We welcome contributions from the community. For our contribution guidelines, see CONTRIBUTING.md.
Documentation for developers coming soon!
- To report a bug or request a feature, create a GitHub Issue. Please ensure someone else hasn't already created an issue for the same topic.
- Need help using Piccolo AI? Reach out on the forum; a fellow community member or SensiML engineer will be happy to help you out.
Similar Open Source Tools

second-brain-ai-assistant-course
This open-source course teaches how to build an advanced RAG and LLM system using LLMOps and ML systems best practices. It helps you create an AI assistant that leverages your personal knowledge base to answer questions, summarize documents, and provide insights. The course covers topics such as LLM system architecture, pipeline orchestration, large-scale web crawling, model fine-tuning, and advanced RAG features. It is suitable for ML/AI engineers and data/software engineers & data scientists looking to level up to production AI systems. The course is free, with minimal costs for tools like OpenAI's API and Hugging Face's Dedicated Endpoints. Participants will build two separate Python applications for offline ML pipelines and online inference pipeline.

shire
The Shire is an AI Coding Agent Language that facilitates communication between an LLM and control IDE for automated programming. It offers a straightforward approach to creating AI agents tailored to individual IDEs, enabling users to build customized AI-driven development environments. The concept of Shire originated from AutoDev, a subproject of UnitMesh, with DevIns as its precursor. The tool provides documentation and resources for implementing AI in software engineering projects.

ai-prompts
Instructa AI Prompts is an open-source repository dedicated to collecting and sharing AI prompts, best practices, and curated rules for developers. The goal is to help users quickly set up and refine their workflow with ready-to-use prompts. Users can dynamically include prompts in AI-assisted coding tools like Cursor, GitHub Copilot, Zed, Windsurf, and Cline to adhere to project-specific coding standards, best practices, and automation workflows.

rag-time
RAG Time is a 5-week AI learning series focusing on Retrieval-Augmented Generation (RAG) concepts. The repository contains code samples, step-by-step guides, and resources to help users master RAG. It aims to teach foundational and advanced RAG concepts, demonstrate real-world applications, and provide hands-on samples for practical implementation.

llm-twin-course
The LLM Twin Course is a free, end-to-end framework for building production-ready LLM systems. It teaches you how to design, train, and deploy a production-ready LLM twin of yourself powered by LLMs, vector DBs, and LLMOps good practices. The course is split into 11 hands-on written lessons and the open-source code you can access on GitHub. You can read everything and try out the code at your own pace.

synmetrix
Synmetrix is an open source data engineering platform and semantic layer for centralized metrics management. It provides a complete framework for modeling, integrating, transforming, aggregating, and distributing metrics data at scale. Key features include data modeling and transformations, semantic layer for unified data model, scheduled reports and alerts, versioning, role-based access control, data exploration, caching, and collaboration on metrics modeling. Synmetrix leverages Cube.js to consolidate metrics from various sources and distribute them downstream via a SQL API. Use cases include data democratization, business intelligence and reporting, embedded analytics, and enhancing accuracy in data handling and queries. The tool speeds up data-driven workflows from metrics definition to consumption by combining data engineering best practices with self-service analytics capabilities.

refly
Refly.AI is an open-source AI-native creation engine that empowers users to transform ideas into production-ready content. It features a free-form canvas interface with multi-threaded conversations, knowledge base integration, contextual memory, intelligent search, WYSIWYG AI editor, and more. Users can leverage AI-powered capabilities, context memory, knowledge base integration, quotes, and AI document editing to enhance their content creation process. Refly offers both cloud and self-hosting options, making it suitable for individuals, enterprises, and organizations. The tool is designed to facilitate human-AI collaboration and streamline content creation workflows.

mlcraft
Synmetrix (prev. MLCraft) is an open source data engineering platform and semantic layer for centralized metrics management. It provides a complete framework for modeling, integrating, transforming, aggregating, and distributing metrics data at scale. Key features include data modeling and transformations, semantic layer for unified data model, scheduled reports and alerts, versioning, role-based access control, data exploration, caching, and collaboration on metrics modeling. Synmetrix leverages Cube (Cube.js) for flexible data models that consolidate metrics from various sources, enabling downstream distribution via a SQL API for integration into BI tools, reporting, dashboards, and data science. Use cases include data democratization, business intelligence, embedded analytics, and enhancing accuracy in data handling and queries. The tool speeds up data-driven workflows from metrics definition to consumption by combining data engineering best practices with self-service analytics capabilities.

AI-Gateway
The AI-Gateway repository explores the AI Gateway pattern through a series of experimental labs, focusing on Azure API Management for handling AI services APIs. The labs provide step-by-step instructions using Jupyter notebooks with Python scripts, Bicep files, and APIM policies. The goal is to accelerate experimentation of advanced use cases and pave the way for further innovation in the rapidly evolving field of AI. The repository also includes a Mock Server to mimic the behavior of the OpenAI API for testing and development purposes.

OpenContracts
OpenContracts is an Apache-2 licensed enterprise document analytics tool that supports multiple formats, including PDF and txt-based formats. It features multiple document ingestion pipelines with a pluggable architecture for easy format and ingestion engine support. Users can create custom document analytics tools with beautiful result displays, support mass document data extraction with a LlamaIndex wrapper, and manage document collections, layout parsing, automatic vector embeddings, and human annotation. The tool also offers pluggable parsing pipelines, human annotation interface, LlamaIndex integration, data extraction capabilities, and custom data extract pipelines for bulk document querying.

mattermost-plugin-ai
The Mattermost AI Copilot Plugin is an extension that adds functionality for local and third-party LLMs within Mattermost v9.6 and above. It is currently experimental and allows users to interact with AI models seamlessly. The plugin enhances the user experience by providing AI-powered assistance and features for communication and collaboration within the Mattermost platform.

ocular
Ocular is a set of modules and tools that allow you to build rich, reliable, and performant Generative AI-powered search platforms without the need to reinvent search architecture. It helps you spin up customized internal search in days, not months.

leapfrogai
LeapfrogAI is a self-hosted AI platform designed to be deployed in air-gapped resource-constrained environments. It brings sophisticated AI solutions to these environments by hosting all the necessary components of an AI stack, including vector databases, model backends, API, and UI. LeapfrogAI's API closely matches that of OpenAI, allowing tools built for OpenAI/ChatGPT to function seamlessly with a LeapfrogAI backend. It provides several backends for various use cases, including llama-cpp-python, whisper, text-embeddings, and vllm. LeapfrogAI leverages Chainguard's apko to harden base python images, ensuring the latest supported Python versions are used by the other components of the stack. The LeapfrogAI SDK provides a standard set of protobuffs and python utilities for implementing backends and gRPC. LeapfrogAI offers UI options for common use-cases like chat, summarization, and transcription. It can be deployed and run locally via UDS and Kubernetes, built out using Zarf packages. LeapfrogAI is supported by a community of users and contributors, including Defense Unicorns, Beast Code, Chainguard, Exovera, Hypergiant, Pulze, SOSi, United States Navy, United States Air Force, and United States Space Force.

OmAgent
OmAgent is an open-source agent framework designed to streamline the development of on-device multimodal agents. It enables agents to empower various hardware devices, integrates speed-optimized SOTA multimodal models, provides SOTA multimodal agent algorithms, and focuses on optimizing the end-to-end computing pipeline for real-time user interaction experience. Key features include easy connection to diverse devices, scalability, flexibility, and workflow orchestration. The architecture emphasizes graph-based workflow orchestration, native multimodality, and device-centricity, allowing developers to create bespoke intelligent agent programs.

open-assistant-api
Open Assistant API is an open-source, self-hosted AI intelligent assistant API compatible with the official OpenAI interface. It supports integration with more commercial and private models, R2R RAG engine, internet search, custom functions, built-in tools, code interpreter, multimodal support, LLM support, and message streaming output. Users can deploy the service locally and expand existing features. The API provides user isolation based on tokens for SaaS deployment requirements and allows integration of various tools to enhance its capability to connect with the external world.