ha-llmvision

Let Home Assistant see!

Stars: 641


LLM Vision is a Home Assistant integration that lets users analyze images, videos, and camera feeds with multimodal LLMs. It supports providers such as OpenAI, Anthropic, Google Gemini, LocalAI, and Ollama. Users can supply images and videos from camera entities or local files, with an option to downscale images for faster processing. The documentation covers setting up LLM Vision and each supported provider, along with usage examples and service call parameters.

README:

Image and video analyzer for Home Assistant using multimodal LLMs

🌟 Features · 📖 Resources · ⬇️ Installation · 🪲 How to report Bugs · ☕ Support

Visit Website →



LLM Vision is a Home Assistant integration that uses multimodal large language models to analyze images, videos, live camera feeds, and Frigate events. It can also keep track of analyzed events in a timeline, with an optional Timeline Card for your dashboard.

Features

  • Compatible with OpenAI, Anthropic Claude, Google Gemini, AWS Bedrock, Groq, LocalAI, Ollama, Open WebUI, and other providers with OpenAI-compatible endpoints
  • Analyzes images and video files, live camera feeds and Frigate events
  • Remembers people, pets, and objects
  • Maintains a timeline of camera events, so you can display them on your dashboard as well as ask about them later
  • Seamlessly updates sensors based on image input
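To make the features above concrete, here is a sketch of what an analysis action call might look like in an automation. The action name `llmvision.image_analyzer` and its parameters are assumptions for illustration; check the docs for the exact schema and for your provider's configuration entry ID.

```yaml
# Hypothetical action call: analyze a snapshot from a camera entity.
# Names and parameters are illustrative; consult the LLM Vision docs
# for the actual schema.
service: llmvision.image_analyzer
data:
  provider: 01234ABCDE            # config entry ID of your chosen provider
  message: "Describe what is happening in front of the door."
  image_entity:
    - camera.front_door
  max_tokens: 100
  target_width: 1280              # downscale the image for faster processing
```

The response text can then be used elsewhere in the automation, for example as the body of a notification.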

See the website for the latest features as well as examples.


Blueprint

With the easy-to-use blueprint, you'll get camera event notifications intelligently summarized by AI. LLM Vision can also store events in a timeline, so you can see what happened on your dashboard.

Learn how to install the blueprint

Resources

Check the docs for detailed instructions on how to set up LLM Vision and each of the supported providers, get inspiration from examples or join the discussion on the Home Assistant Community.

For technical questions see the discussions tab.

Installation

[!TIP] LLM Vision is available in the default HACS repository. You can install it directly through HACS or click the button below to open it there.

  1. Install LLM Vision from HACS
  2. Search for LLM Vision in Home Assistant under Settings → Devices & services
  3. Select your provider
  4. Follow the instructions to add your AI providers.

Continue with setup here: https://llm-vision.gitbook.io/getting-started/setup/providers

How to report a bug or request a feature

[!IMPORTANT]
Bugs: If you encounter any bugs and have followed the instructions carefully, file a bug report. Please check open issues first and include debug logs in your report. Debugging can be enabled on the integration's settings page.
Feature Requests: If you have an idea for a feature, create a feature request.


Create new Issue 

Support

You can support this project by starring this GitHub repository. If you want, you can also buy me a coffee here:
