Best AI tools for Sensory Integration Therapists
20 - AI Tool Sites
Muzify
Muzify is an AI tool that creates personalized music playlists based on the books you read. By leveraging artificial intelligence technology, Muzify analyzes the content of the book and generates a unique music playlist tailored to the themes, emotions, and characters found in the book. With Muzify, users can experience a new way of connecting with their favorite books through music, enhancing their reading experience and creating a multi-sensory journey. Whether you're a bookworm looking to enhance your reading ambiance or a music enthusiast seeking new playlists, Muzify offers a seamless integration of literature and music in a creative and innovative manner.
Roboto AI
Roboto AI is an advanced platform that allows users to curate, transform, and analyze robotics data at scale. It provides features for data management, actions, events, search capabilities, and SDK integration. The application helps users understand complex machine data through multimodal queries and custom actions, enabling efficient data processing and collaboration within teams.
Edge Impulse
Edge Impulse is a leading platform for edge AI that enables users to build datasets, train models, and optimize libraries to run directly on any edge device. It offers sensor datasets, feature engineering, model optimization, algorithms, and NVIDIA integrations. The platform is designed for product leaders, AI practitioners, embedded engineers, and OEMs across various industries and applications. Edge Impulse helps users unlock sensor data value, build high-quality sensor datasets, advance algorithm development, optimize edge AI models, and achieve measurable results. It allows for future-proofing workflows by generating models and algorithms that perform efficiently on any edge hardware.
One Drop
One Drop has developed a next-generation intradermal continuous glucose monitoring (CGM) device. Advanced material science, chemistry, and electronics make the One Health CGM among the most innovative body-worn sensors—explicitly designed to meet the needs of people with type 2 diabetes. By integrating proprietary micro-needle technology and AI-enabled precision guidance, the minimally invasive One Health CGM will deliver pain-free, needle-free wear and unprecedented access to a population currently underserved by CGM.
Gemini AI
Gemini AI is an AI and ML solutions provider focused on accelerating innovation through artificial intelligence. The company applies AI and ML to augmented intelligence, solving complex problems in domains such as computer vision, geospatial science, human health, and integrative technologies. Gemini AI offers services in data and sensors, modeling, and deployment, providing actionable insights and predictive models trained with human input. The company aims to augment human intelligence through discovery, innovation, transparency, and optimization.
Gastrograph AI
Gastrograph AI is a cutting-edge artificial intelligence platform that empowers food and beverage companies to optimize their products for consistent market success. Leveraging the world's largest sensory database, Gastrograph AI provides deep insights into consumer preferences, enabling companies to develop new products, enter new markets, and optimize existing products with confidence. With Gastrograph AI, companies can reduce time to market costs, simplify product development, and gain access to trustworthy insights, leading to measurable results and a competitive edge in the global marketplace.
CEBRA
CEBRA is a machine-learning method that compresses time series data to reveal hidden structures in the variability of the data. It excels in analyzing behavioral and neural data simultaneously, decoding activity from the visual cortex of the mouse brain to reconstruct viewed videos. CEBRA fills the gap by leveraging joint behavior and neural data to uncover neural dynamics, providing consistent and high-performance latent spaces for hypothesis testing or label-free analysis across sensory and motor tasks.
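To give a feel for how CEBRA is used in practice, the sketch below shows its scikit-learn-style fit/transform interface on placeholder neural and behavioral arrays; the parameter values and data shapes are illustrative assumptions, so consult the CEBRA documentation for details.

```python
# A minimal sketch of CEBRA's scikit-learn-style interface on placeholder data.
# Parameter values and array shapes are illustrative assumptions, not recommendations.
import numpy as np
import cebra

neural = np.random.rand(1000, 64)     # time steps x recorded neurons (placeholder)
behavior = np.random.rand(1000, 2)    # time steps x behavioral variables (placeholder)

# Fit a joint behavior+neural embedding, then project the neural data into it.
model = cebra.CEBRA(output_dimension=3, batch_size=512, max_iterations=1000)
model.fit(neural, behavior)
embedding = model.transform(neural)   # latent space usable for decoding or hypothesis testing
print(embedding.shape)                # (1000, 3)
```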
ImageBind
ImageBind by Meta AI is a groundbreaking AI tool that revolutionizes the way data from different modalities is processed. It introduces a new approach to 'link' AI across various senses by recognizing relationships between images, video, audio, text, depth, thermal, and IMUs. ImageBind's multimodal AI capabilities enable machines to analyze diverse forms of information simultaneously, without explicit supervision. It offers a single embedding space to bind multiple sensory inputs together, enhancing recognition performance and supporting zero-shot and few-shot recognition tasks. The tool upgrades existing AI models to accommodate input from any of the six modalities, facilitating audio-based search, cross-modal search, multimodal arithmetic, and cross-modal generation.
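The sketch below, following the pattern shown in the project's README, embeds text, images, and audio into ImageBind's shared space and compares them with a simple dot product; the asset paths are placeholders.

```python
# A sketch of embedding text, images, and audio into ImageBind's shared space,
# following the pattern in the project's README; the asset paths are placeholders.
import torch
from imagebind import data
from imagebind.models import imagebind_model
from imagebind.models.imagebind_model import ModalityType

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = imagebind_model.imagebind_huge(pretrained=True)
model.eval()
model.to(device)

inputs = {
    ModalityType.TEXT: data.load_and_transform_text(["a dog", "a car"], device),
    ModalityType.VISION: data.load_and_transform_vision_data(["dog.jpg", "car.jpg"], device),
    ModalityType.AUDIO: data.load_and_transform_audio_data(["dog.wav", "car.wav"], device),
}

with torch.no_grad():
    embeddings = model(inputs)

# Because all modalities share one embedding space, cross-modal similarity is a dot product.
print(torch.softmax(embeddings[ModalityType.VISION] @ embeddings[ModalityType.TEXT].T, dim=-1))
```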
Aimlabs
Aimlabs is a comprehensive gaming platform that provides users with a variety of tools to improve their aim and overall gaming skills. With over 29,000 tasks and playlists, 500 FPS game profiles, and detailed aim analysis, Aimlabs helps gamers of all levels improve their performance. The platform also features an AI personal assistant that can offer tips and create custom maps on-the-spot. Aimlabs is the official partner of VALORANT and Rainbow Six Siege, and its science-backed training methods have been developed by a team of neuroscientists, designers, developers, and computer vision pioneers.
Mobility Engineering
Mobility Engineering is a website that provides news, articles, and resources on the latest developments in mobility technology. The site covers a wide range of topics, including autonomous vehicles, connected cars, electric vehicles, and more. Mobility Engineering is a valuable resource for anyone interested in staying up-to-date on the latest trends in mobility technology.
Tangram Vision
Tangram Vision is a company that provides sensor calibration tools and infrastructure for robotics and autonomous vehicles. Their products include MetriCal, a high-speed bundle adjustment software for precise sensor calibration, and AutoCal, an on-device, real-time calibration health check and adjustment tool. Tangram Vision also offers a high-resolution depth sensor called HiFi, which combines high-resolution depth data with high-powered AI capabilities. The company's mission is to accelerate the development and deployment of autonomous systems by providing the tools and infrastructure needed to ensure the accuracy and reliability of sensors.
Rokoko
Rokoko is a provider of intuitive and affordable motion capture tools for character animation. Their tools include the Full Performance Capture system with the Smartsuit Pro II, Smartgloves, Coil Pro, Face Capture, and Headcam. Rokoko also offers Rokoko Vision, an AI mocap solution. The company aims to streamline the motion capture process for creators in animation, film, VFX, game development, AR, VR, academia, and education.
ePlant
ePlant is an AI-powered platform that revolutionizes plant data collection and application. It offers advanced plant-data intelligence through state-of-the-art wireless plant health monitors and AI technology. The platform enables users to remotely monitor trees and vines, track plant growth and stress, and measure environmental factors like temperature, light, and humidity. ePlant is trusted by experts in various fields such as research, urban forestry, precision agriculture, and silviculture. The TreeTag system, a key feature of ePlant, has been recognized as one of TIME's Best Inventions. With a focus on innovation and sustainability, ePlant is a valuable tool for anyone looking to optimize plant health and productivity.
Deep Planet
Deep Planet is a precision viticulture platform powered by AI that focuses on enhancing sustainability in agriculture. It offers solutions for the wine industry, landowners, farmers, and supply chain companies by providing data-driven insights to maximize potential, optimize nutrient application, and support the transition to achieve net zero targets. The platform leverages AI and satellite imagery to empower users with actionable intelligence for better decision-making in vineyard management and soil health.
Airship AI
Airship AI is a cutting-edge, artificial intelligence-driven video, sensor, and data management surveillance platform. Customers rely on their services to provide actionable intelligence in real-time, collected from a wide range of deployed sensors, utilizing the latest in edge and cloud-based analytics. These capabilities improve public safety and operational efficiency for both public sector and commercial clients. Founded in 2006, Airship AI is U.S. owned and operated, headquartered in Redmond, Washington. Airship's product suite comprises three core offerings: Acropolis, the enterprise software stack; Command, the family of viewing clients; and Outpost, the edge hardware and software AI offerings.
Osmo
Osmo is an AI scent platform that aims to give computers a sense of smell by combining frontier AI and olfactory science. The platform reads, maps, and writes scents using distinct technologies to digitize the sense of smell. Osmo's ultimate goal is to improve human health and wellbeing through smell, starting with fragrance but with far-reaching applications in various fields.
Intrinsic
Intrinsic is an AI platform that focuses on building the next generation of intelligent automation, making robotics more accessible and valuable for developers and businesses. The platform offers a range of capabilities and skills to develop intelligent solutions, from perception to motion planning and sensor-based controls. Intrinsic aims to simplify the programming, usage, and innovation of robots, enabling them to become usable tools for millions of users.
EverSQL
EverSQL is an AI-powered tool designed for SQL query optimization, database observability, and cost reduction for PostgreSQL and MySQL databases. It automatically optimizes SQL queries using smart AI-based algorithms, provides ongoing performance insights, and helps reduce monthly database costs by offering optimization recommendations. With over 100,000 professionals trusting EverSQL, it aims to save time, improve database performance, and enhance cost-efficiency without accessing sensitive data.
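As a rough, generic illustration of the kind of rewrite an SQL optimizer can suggest (this is a hypothetical example, not actual EverSQL output; the table, column, and index names are invented):

```python
# Hypothetical illustration of a common query-optimization pattern.
# This is NOT actual EverSQL output; the table, column, and index names are invented.

slow_query = """
SELECT *
FROM orders
WHERE YEAR(created_at) = 2024;  -- applying a function to the column defeats any index
"""

optimized_query = """
SELECT *
FROM orders
WHERE created_at >= '2024-01-01'
  AND created_at <  '2025-01-01';  -- a range predicate can use an index on created_at
"""

# A typical accompanying recommendation would be an index such as:
suggested_index = "CREATE INDEX idx_orders_created_at ON orders (created_at);"
```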
Reality AI Software
Reality AI Software is an Edge AI software development environment that combines advanced signal processing, machine learning, and anomaly detection on every Renesas MCU/MPU core. The software is underpinned by the proprietary Reality AI ML algorithm that delivers accurate and fully explainable results supporting diverse applications. It enables features like equipment monitoring, predictive maintenance, and sensing user behavior and the surrounding environment with minimal impact on the Bill of Materials (BoM). Reality AI software running on Renesas processors helps deliver endpoint intelligence in products across various markets.
Just Walk Out technology
Just Walk Out technology is a checkout-free shopping experience that allows customers to enter a store, grab whatever they want, and quickly get back to their day, without having to wait in a checkout line or stop at a cashier. The technology uses either camera vision and sensor fusion or RFID, allowing shoppers to simply walk away with their items. Just Walk Out technology is designed to increase revenue with cost-optimized technology, maximize space productivity, increase throughput, optimize operational costs, and improve shopper loyalty.
20 - Open Source Tools
awesome-ai
Awesome AI is a curated list of artificial intelligence resources including courses, tools, apps, and open-source projects. It covers a wide range of topics such as machine learning, deep learning, natural language processing, robotics, conversational interfaces, data science, and more. The repository serves as a comprehensive guide for individuals interested in exploring the field of artificial intelligence and its applications across various domains.
Awesome_Mamba
Awesome Mamba is a curated collection of groundbreaking research papers and articles on Mamba Architecture, a pioneering framework in deep learning known for its selective state spaces and efficiency in processing complex data structures. The repository offers a comprehensive exploration of Mamba architecture through categorized research papers covering various domains like visual recognition, speech processing, remote sensing, video processing, activity recognition, image enhancement, medical imaging, reinforcement learning, natural language processing, 3D recognition, multi-modal understanding, time series analysis, graph neural networks, point cloud analysis, and tabular data handling.
machinascript-for-robots
MachinaScript For Robots is a dynamic set of tools and an LLM-JSON-based language designed to empower humans in the creation of their own robots. It facilitates the animation of generative movements, the integration of personality, and the teaching of new skills with a high degree of autonomy. With MachinaScript, users can control a wide range of electronic components, including Arduinos, Raspberry Pis, servo motors, cameras, sensors, and more. The tool enables the creation of intelligent robots accessible to everyone, allowing for complex tasks to be performed with elegance and precision.
embodied-agents
Embodied Agents is a toolkit for integrating large multi-modal models into existing robot stacks with just a few lines of code. It provides consistency, reliability, and scalability, and is configurable to any observation and action space. The toolkit is designed to reduce complexities involved in setting up inference endpoints, converting between different model formats, and collecting/storing datasets. It aims to facilitate data collection and sharing among roboticists by providing Python-first abstractions that are modular, extensible, and applicable to a wide range of tasks. The toolkit supports asynchronous and remote thread-safe agent execution for maximal responsiveness and scalability, and is compatible with various APIs like HuggingFace Spaces, Datasets, Gymnasium Spaces, Ollama, and OpenAI. It also offers automatic dataset recording and optional uploads to the HuggingFace hub.
cogai
The W3C Cognitive AI Community Group focuses on advancing Cognitive AI through collaboration on defining use cases, open source implementations, and application areas. The group aims to demonstrate the potential of Cognitive AI in various domains such as customer services, healthcare, cybersecurity, online learning, autonomous vehicles, manufacturing, and web search. They work on formal specifications for chunk data and rules, plausible knowledge notation, and neural networks for human-like AI. The group positions Cognitive AI as a combination of symbolic and statistical approaches inspired by human thought processes. They address research challenges including mimicry, emotional intelligence, natural language processing, and common sense reasoning. The long-term goal is to develop cognitive agents that are knowledgeable, creative, collaborative, empathic, and multilingual, capable of continual learning and self-awareness.
lerobot
LeRobot is a state-of-the-art AI library for real-world robotics in PyTorch. It aims to provide models, datasets, and tools to lower the barrier to entry to robotics, focusing on imitation learning and reinforcement learning. LeRobot offers pretrained models, datasets with human-collected demonstrations, and simulation environments. It plans to support real-world robotics on affordable and capable robots. The library hosts pretrained models and datasets on the Hugging Face community page.
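A minimal sketch of loading one of the hosted datasets is shown below; the import path and repo id follow the project's published examples and may change between versions, so check the LeRobot README for the current API.

```python
# A minimal sketch of loading a hosted LeRobot dataset; the import path and repo id
# follow the project's published examples and may change between versions.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("lerobot/pusht")  # human-collected demonstrations from the Hugging Face hub
print(len(dataset), "frames")

frame = dataset[0]       # PyTorch-style indexing returns a dict of tensors
print(frame.keys())      # observation and action keys for imitation learning
```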
frigate-hass-integration
Frigate Home Assistant Integration provides a rich media browser with thumbnails and navigation; sensor entities for camera FPS, detection FPS, process FPS, skipped FPS, and objects detected; binary sensor entities for object motion; camera entities for live view and object-detected snapshots; switch entities for clips, detection, snapshots, and improve contrast; and support for multiple Frigate instances. It offers easy installation via HACS and manual installation options for advanced users. Users need to configure the `mqtt` integration for Frigate to work. Additionally, media browsing and a companion Lovelace card are available for enhanced user experience. Refer to the main Frigate documentation for detailed installation instructions and usage guidance.
godot_rl_agents
Godot RL Agents is an open-source package that facilitates the integration of Machine Learning algorithms with games created in the Godot Engine. It provides interfaces for popular RL frameworks, support for memory-based agents, 2D and 3D games, AI sensors, and is licensed under MIT. Users can train agents in the Godot editor, create custom environments, export trained agents in ONNX format, and utilize advanced features like different RL training frameworks.
rai
RAI is a framework designed to bring general multi-agent system capabilities to robots, enhancing human interactivity, flexibility in problem-solving, and out-of-the-box AI features. It supports multi-modalities, incorporates an advanced database for agent memory, provides ROS 2-oriented tooling, and offers a comprehensive task/mission orchestrator. The framework includes features such as voice interaction, customizable robot identity, camera sensor access, reasoning through ROS logs, and integration with LangChain for AI tools. RAI aims to support various AI vendors, improve human-robot interaction, provide an SDK for developers, and offer a user interface for configuration.
ai_automation_suggester
An integration for Home Assistant that leverages AI models to understand your unique home environment and propose intelligent automations. By analyzing your entities, devices, areas, and existing automations, the AI Automation Suggester helps you discover new, context-aware use cases you might not have considered, ultimately streamlining your home management and improving efficiency, comfort, and convenience. The tool acts as a personal automation consultant, providing actionable YAML-based automations that can save energy, improve security, enhance comfort, and reduce manual intervention. It turns the complexity of a large Home Assistant environment into actionable insights and tangible benefits.
ha-llmvision
LLM Vision is a Home Assistant integration that allows users to analyze images, videos, and camera feeds using multimodal LLMs. It supports providers such as OpenAI, Anthropic, Google Gemini, LocalAI, and Ollama. Users can input images and videos from camera entities or local files, with the option to downscale images for faster processing. The tool provides detailed instructions on setting up LLM Vision and each supported provider, along with usage examples and service call parameters.
nuitrack-sdk
Nuitrack™ is a 3D body tracking solution developed by 3DiVi Inc. It enables body motion analytics applications for virtually any widespread depth sensors and hardware platforms, supporting a wide range of applications from real-time gesture recognition on embedded platforms to large-scale multisensor analytical systems. Nuitrack provides highly sophisticated 3D skeletal tracking, basic facial analysis, hand tracking, and gesture recognition APIs for UI control. It offers two skeletal tracking engines: classical for embedded hardware and AI for complex poses, providing a human-centric spatial understanding tool for natural and intelligent user engagement.
homeassistant-midea-air-appliances-lan
This custom component for Home Assistant adds support for controlling Midea air conditioner and dehumidifier appliances via the local area network. It provides integration for various Midea appliances, allowing users to control settings such as humidity levels, fan speed, and more through Home Assistant. The component supports multiple protocols and entities for different appliance models, offering a comprehensive solution for managing Midea appliances on the local network.
habitat-sim
Habitat-Sim is a high-performance physics-enabled 3D simulator with support for 3D scans of indoor/outdoor spaces, CAD models of spaces and piecewise-rigid objects, configurable sensors, robots described via URDF, and rigid-body mechanics. It prioritizes simulation speed over the breadth of simulation capabilities, achieving several thousand frames per second (FPS) running single-threaded and over 10,000 FPS multi-process on a single GPU when rendering a scene from the Matterport3D dataset. Habitat-Sim simulates a Fetch robot interacting in ReplicaCAD scenes at over 8,000 steps per second (SPS), where each ‘step’ involves rendering 1 RGBD observation (128×128 pixels) and rigid-body dynamics for 1/30sec.
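The sketch below outlines how a simulator with a single RGB sensor might be configured; the scene path is a placeholder and attribute names can differ slightly between Habitat-Sim releases.

```python
# A sketch of configuring Habitat-Sim with a single RGB sensor; the scene path is a
# placeholder and attribute names can differ slightly between Habitat-Sim releases.
import habitat_sim

sim_cfg = habitat_sim.SimulatorConfiguration()
sim_cfg.scene_id = "path/to/scene.glb"        # placeholder 3D scan or CAD scene

rgb = habitat_sim.CameraSensorSpec()
rgb.uuid = "color"
rgb.sensor_type = habitat_sim.SensorType.COLOR
rgb.resolution = [128, 128]

agent_cfg = habitat_sim.agent.AgentConfiguration()
agent_cfg.sensor_specifications = [rgb]

sim = habitat_sim.Simulator(habitat_sim.Configuration(sim_cfg, [agent_cfg]))
obs = sim.step("move_forward")                # one step returns a dict of sensor observations
print(obs["color"].shape)                     # (height, width, channels)
```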
carla
CARLA is an open-source simulator for autonomous driving research. It provides open-source code, protocols, and digital assets (urban layouts, buildings, vehicles) for developing, training, and validating autonomous driving systems. CARLA supports flexible specification of sensor suites and environmental conditions.
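The sketch below shows a typical use of CARLA's Python API: connecting to a running server, spawning a vehicle, and attaching an RGB camera sensor; the map, vehicle filter, and output path are arbitrary choices.

```python
# A sketch of a typical CARLA client script: connect to a running simulator, spawn a
# vehicle, and attach an RGB camera; the vehicle filter and output path are arbitrary.
import carla

client = carla.Client("localhost", 2000)      # assumes a CARLA server is already running
client.set_timeout(10.0)
world = client.get_world()

blueprints = world.get_blueprint_library()
vehicle_bp = blueprints.filter("vehicle.*")[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)
vehicle.set_autopilot(True)

camera_bp = blueprints.find("sensor.camera.rgb")
camera_tf = carla.Transform(carla.Location(x=1.5, z=2.4))   # mounted above the hood
camera = world.spawn_actor(camera_bp, camera_tf, attach_to=vehicle)

# Each simulated frame, the sensor delivers an image to the callback.
camera.listen(lambda image: image.save_to_disk("out/%06d.png" % image.frame))
```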
lance
Lance is a modern columnar data format optimized for ML workflows and datasets. It offers high-performance random access, vector search, zero-copy automatic versioning, and ecosystem integrations with Apache Arrow, Pandas, Polars, and DuckDB. Lance is designed to address the challenges of the ML development cycle, providing a unified data format for collection, exploration, analytics, feature engineering, training, evaluation, deployment, and monitoring. It aims to reduce data silos and streamline the ML development process.
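As a small illustration of the workflow, the sketch below writes an Apache Arrow table to a Lance dataset and reads it back; the path and columns are placeholders.

```python
# A small sketch of writing and reading a Lance dataset via Apache Arrow;
# the path and columns are placeholders.
import lance
import pyarrow as pa

table = pa.table({"id": [1, 2, 3], "label": ["a", "b", "c"]})
lance.write_dataset(table, "example.lance", mode="overwrite")

ds = lance.dataset("example.lance")
print(ds.to_table().to_pandas())   # read back into pandas for exploration
```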
Genesis
Genesis is a physics platform designed for general purpose Robotics/Embodied AI/Physical AI applications. It includes a universal physics engine, a lightweight, ultra-fast, pythonic, and user-friendly robotics simulation platform, a powerful and fast photo-realistic rendering system, and a generative data engine that transforms user-prompted natural language description into various modalities of data. It aims to lower the barrier to using physics simulations, unify state-of-the-art physics solvers, and minimize human effort in collecting and generating data for robotics and other domains.
webots
Webots is an open-source robot simulator that provides a complete development environment to model, program, and simulate robots, vehicles, and mechanical systems. It was originally designed at EPFL in 1996 and further developed and commercialized by Cyberbotics since 1998. Webots was open-sourced in December 2018 and continues to be developed by Cyberbotics with paid customer support, training, and consulting services for industry and academic research projects.
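A minimal controller sketch is shown below: a Python controller that reads a distance sensor and drives a motor; the device names depend on the specific robot model in the world file and are assumptions here.

```python
# A minimal Webots Python controller: read a distance sensor and drive a motor.
# The device names depend on the robot model in the world file and are assumptions here.
from controller import Robot

robot = Robot()
timestep = int(robot.getBasicTimeStep())

sensor = robot.getDevice("distance sensor")
sensor.enable(timestep)

motor = robot.getDevice("motor")
motor.setPosition(float("inf"))    # switch the motor to velocity control
motor.setVelocity(0.0)

while robot.step(timestep) != -1:
    # Drive forward until the sensor reports a nearby obstacle (value semantics are robot-specific).
    motor.setVelocity(1.0 if sensor.getValue() > 500 else 0.0)
```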
12 - OpenAI GPTs
Sensory Integration Guide
Your personalized guide to all things sensory, including Sensory Processing Disorder and Sensory Integration Therapy.
Sensory Supporter
A supportive guide for managing sensory dysregulation with tailored advice.
Serenity Nova
Serenity Nova guides individuals in exploring sensory awareness to navigate life's continuum, fostering harmony and personal freedom.
Raspberry Pi Pico Master
Expert in MicroPython, C, and C++ for the Raspberry Pi Pico, RP2040, and other microcontroller-oriented applications.