
recommendarr
An AI-driven recommendation system based on Radarr and Sonarr library information
Stars: 206

Recommendarr is a tool that generates personalized TV show and movie recommendations based on your Sonarr, Radarr, Plex, and Jellyfin libraries using AI. It offers AI-powered recommendations, media server integration, flexible AI support, watch history analysis, customization options, and dark/light mode toggle. Users can connect their media libraries and watch history services, configure AI service settings, and get personalized recommendations based on genre, language, and mood/vibe preferences. The tool works with any OpenAI-compatible API and offers various recommended models for different cost options and performance levels. It provides personalized suggestions, detailed information, filter options, watch history analysis, and one-click adding of recommended content to Sonarr/Radarr.
README:
Recommendarr is a web application that generates personalized TV show and movie recommendations based on your Sonarr, Radarr, Plex, and Jellyfin libraries using AI.
⚠️ IMPORTANT: When accessing this application from outside your network, you must open port 3030 on your router/firewall.
⚠️ PORT REQUIREMENT: The application currently requires exact port mappings of 3030 (frontend) and 3050 (API). These mappings cannot be changed without breaking functionality: you must map 3030:3030 and 3050:3050 in your Docker configuration.
Features:
- AI-Powered Recommendations: Get personalized TV show and movie suggestions based on your existing library
- Sonarr & Radarr Integration: Connects directly to your media servers to analyze your TV and movie collections
- Plex, Jellyfin & Tautulli Integration: Analyzes your watch history to provide better recommendations based on what you've actually watched
- Flexible AI Support: Works with OpenAI, local models (Ollama/LM Studio), or any OpenAI-compatible API
- Customization Options: Adjust recommendation count, model parameters, and more
- Dark/Light Mode: Toggle between themes based on your preference
- Poster Images: Displays media posters with fallback generation
To run Recommendarr, you'll need:
- Sonarr instance with API access (for TV recommendations)
- Radarr instance with API access (for movie recommendations)
- Plex, Jellyfin, or Tautulli instance with API access (for watch history analysis) - optional
- An OpenAI API key or any OpenAI-compatible API (like local LLM servers)
- Node.js (v14+) and npm for development
The easiest way to run Recommendarr with all features is to use Docker Compose:
# Clone the repository (which includes the docker-compose.yml file)
git clone https://github.com/fingerthief/recommendarr.git
cd recommendarr
# Start the application
docker-compose up -d --build
This will:
- Build the combined container with both frontend and API server
- Configure proper networking and persistence
- Start the unified service
Then visit http://localhost:3030 in your browser to access the application.
The unified container runs both the frontend (on port 3030) and the API server (on port 3050 internally). This provides secure credential storage and proxy functionality for accessing services that may be blocked by CORS restrictions.
Note: If accessing from outside your network, remember to forward port 3030 on your router/firewall.
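Before opening the UI, you can confirm both ports answer from the Docker host (a simple reachability check; the exact status codes may vary):
# Frontend
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3030
# API server
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3050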
You can also run the unified container manually:
# Pull the image
docker pull tannermiddleton/recommendarr:latest
# Run the container
# IMPORTANT: Port mappings must be exactly 3030:3030 and 3050:3050
docker run -d \
--name recommendarr \
-p 3030:3030 \
-p 3050:3050 \
-v $(pwd)/server/data:/app/server/data \
tannermiddleton/recommendarr:latest
Then visit http://localhost:3030 in your browser. The container includes both the frontend and API server for secure credential storage.
For more Docker options, see the Docker Support section below.
To run without Docker (for development):
- Clone the repository:
git clone https://github.com/fingerthief/recommendarr.git
cd recommendarr
- Install dependencies:
npm install
- Run the development server:
npm run serve
- Visit http://localhost:3030 in your browser.
- When you first open Recommendarr, you'll be prompted to connect to your services
- For Sonarr (TV shows):
  - Enter your Sonarr URL (e.g., http://localhost:8989 or https://sonarr.yourdomain.com)
  - Enter your Sonarr API key (found in Sonarr under Settings → General)
  - Click "Connect"
- For Radarr (Movies):
  - Enter your Radarr URL (e.g., http://localhost:7878 or https://radarr.yourdomain.com)
  - Enter your Radarr API key (found in Radarr under Settings → General)
  - Click "Connect"
- For Plex (Optional - Watch History):
  - Enter your Plex URL (e.g., http://localhost:32400 or https://plex.yourdomain.com)
  - Enter your Plex token (can be found by following these instructions)
  - Click "Connect"
- For Jellyfin (Optional - Watch History):
  - Enter your Jellyfin URL (e.g., http://localhost:8096 or https://jellyfin.yourdomain.com)
  - Enter your Jellyfin API key (found in Jellyfin under Dashboard → API Keys)
  - Enter your Jellyfin user ID (found in Jellyfin user settings)
  - Click "Connect"
- For Tautulli (Optional - Watch History):
  - Enter your Tautulli URL (e.g., http://localhost:8181 or https://tautulli.yourdomain.com)
  - Enter your Tautulli API key (found in Tautulli under Settings → Web Interface → API)
  - Click "Connect"
You can connect to any combination of these services based on your needs.
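If a connection attempt fails, it can help to verify the URL and API key outside Recommendarr first. Sonarr and Radarr (v3) both expose a status endpoint that accepts the key in an X-Api-Key header; the URLs and keys below are placeholders:
# Should return JSON with version/system info when the URL and key are valid
curl -H "X-Api-Key: YOUR_SONARR_API_KEY" http://localhost:8989/api/v3/system/status
curl -H "X-Api-Key: YOUR_RADARR_API_KEY" http://localhost:7878/api/v3/system/status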
To set up the AI service:
- Navigate to Settings
- Select the AI Service tab
- Enter your AI service details:
  - API URL: For OpenAI, use https://api.openai.com/v1. For local models, use your server URL (e.g., http://localhost:1234/v1)
  - API Key: Your OpenAI API key or appropriate key for other services (not needed for some local servers)
  - Model: Select a model from the list or leave as default
  - Parameters: Adjust max tokens and temperature as needed
- Click "Save Settings"
To get recommendations:
- Navigate to the TV Recommendations or Movie Recommendations page
- Adjust the number of recommendations you'd like to receive using the slider
- If connected to Plex, Jellyfin, or Tautulli, choose whether to include your watch history in the recommendations
- Click "Get Recommendations"
- View your personalized media suggestions with posters and descriptions
The easiest way to run Recommendarr:
# Pull the image
docker pull tannermiddleton/recommendarr:latest
# Run the container (basic)
# IMPORTANT: Port mappings must be exactly 3030:3030 and 3050:3050
docker run -d \
--name recommendarr \
-p 3030:3030 \
-p 3050:3050 \
-v $(pwd)/server/data:/app/server/data \
tannermiddleton/recommendarr:latest
If you want to build the Docker image yourself:
# Clone the repository
git clone https://github.com/fingerthief/recommendarr.git
# Navigate to the project directory
cd recommendarr
# Build the Docker image
docker build -t recommendarr:local .
# Run the container
# IMPORTANT: Port mappings must be exactly 3030:3030 and 3050:3050
docker run -d \
--name recommendarr \
-p 3030:3030 \
-p 3050:3050 \
-v $(pwd)/server/data:/app/server/data \
recommendarr:local
Key benefits of using the Docker Compose method:
- The data directory is mounted as a volume, ensuring your credentials persist across container restarts
- The frontend and API server are bundled together in a single container
- All your service credentials are stored securely using encryption
- CORS issues are automatically handled through the proxy service
- Custom URL configuration for reverse proxy setups (via environment variables)
Note: You cannot change the port mappings without breaking functionality. The app must use ports 3030 and 3050 internally.
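As a quick sanity check that credentials survive a restart, you can recreate the stack and confirm the named volume is still present (the exact volume name is prefixed with your Compose project name, typically the checkout directory, so yours may differ):
# Recreate the container; /app/server/data is backed by the volume
docker-compose down
docker-compose up -d
# Inspect the volume (name assumes a project directory called "recommendarr")
docker volume inspect recommendarr_recommendarr-data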
If you want to run Recommendarr behind a reverse proxy (like Nginx, Traefik, or Caddy), you must build the image yourself with specific build arguments. The pre-built image will not work correctly with a reverse proxy.
Your reverse proxy should be configured to (for example):
- Forward requests from https://recommendarr.yourdomain.com to http://your-docker-host:3030
- Forward requests from https://api.yourdomain.com to http://your-docker-host:3050
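As one illustrative sketch (not the only valid setup), those two forwards could be written as a Caddyfile; the hostnames are placeholders:
recommendarr.yourdomain.com {
    reverse_proxy your-docker-host:3030
}
api.yourdomain.com {
    reverse_proxy your-docker-host:3050
}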
For now, the proper reverse proxy setup is to either:
- Run a build command and pass in the args (replace with your URLs), then run the image you just built:
  docker build --build-arg VUE_APP_API_URL=https://api.myapp.recommendarr.com --build-arg PUBLIC_URL=https://myapp.recommendarr.com -t recommendarr:latest .
  docker run -p 3030:3030 -p 3050:3050 -e VUE_APP_API_URL="https://api.myapp.recommendarr.com" -e PUBLIC_URL="https://myapp.recommendarr.com" -v recommendarr-data:/app/server/data recommendarr:latest
- Or use the updated docker-compose below and run docker-compose up -d --build, replacing the URLs with the ones correct for your setup.
services:
  recommendarr:
    # IF NOT using a reverse proxy, uncomment the image tag to use prebuilt
    #image: tannermiddleton/recommendarr:latest
    # Uncomment and build locally if you need a reverse proxy
    build:
      context: .
      args:
        # Build-time arguments - set these for the Vue.js build process
        # Reverse proxy example
        - VUE_APP_API_URL=https://api.myapp.recommendarr.com
        # Local example
        #- VUE_APP_API_URL=http://localhost:3050
        - BASE_URL=/
    container_name: recommendarr
    ports:
      - "3030:3030" # Frontend port
      - "3050:3050" # Backend API port
    extra_hosts:
      - "host.docker.internal:host-gateway"
    environment:
      - NODE_ENV=production
      - DOCKER_ENV=true
      # Runtime environment variables - customize these as needed
      # For local use, the defaults should work without changes
      #- PUBLIC_URL=http://localhost:3030
      #- VUE_APP_API_URL=http://localhost:3050
      # For reverse proxy setups, uncomment and modify these; do NOT forget the build section above
      - PUBLIC_URL=https://myapp.recommendarr.com
      - VUE_APP_API_URL=https://api.myapp.recommendarr.com
    volumes:
      - recommendarr-data:/app/server/data
    restart: unless-stopped

volumes:
  recommendarr-data:
IMPORTANT: The internal port mappings in the Docker container must remain 3030:3030 and 3050:3050.
# Clone the repository
git clone https://github.com/fingerthief/recommendarr.git
cd recommendarr
# Build the image with your URLs
docker build -t recommendarr:custom \
--build-arg BASE_URL=https://recommendarr.yourdomain.com \
--build-arg VUE_APP_API_URL=https://api.yourdomain.com \
.
# Run the container
docker run -d \
--name recommendarr \
-p 3030:3030 \
-p 3050:3050 \
-v $(pwd)/server/data:/app/server/data \
recommendarr:custom
Recommendarr works with various AI services:
- OpenAI API: Standard integration with models like GPT-3.5 and GPT-4
- Ollama: Self-hosted models with OpenAI-compatible API
- LM Studio: Run models locally on your computer
- Anthropic Claude: Via OpenAI-compatible endpoints
- Self-hosted models: Any service with OpenAI-compatible chat completions API
Here are some recommendations for models that work well with Recommendarr:
- Meta Llama 3.3 70B Instruct: Great performance for free
- Gemini 2.0 models (Flash/Pro/Thinking): Excellent recommendation quality
- DeepSeek R1 models: Strong performance across variants
- Claude 3.7/3.5 Haiku: Exceptional for understanding your library preferences
- GPT-4o mini: Excellent balance of performance and cost
- Grok Beta: Good recommendations at reasonable prices
- Amazon Nova Pro: Strong media understanding capabilities
- DeepSeek R1 7B Qwen Distill: Good performance for a smaller model (via LM Studio)
For best results, try setting max tokens to 4000 and temperature between 0.6 and 0.8, depending on the model.
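Those two settings correspond to the max_tokens and temperature fields of the standard OpenAI-compatible chat completions API, so the same values apply whichever backend you use. A minimal illustration of the request shape (the model name and prompt are arbitrary, not what Recommendarr itself sends):
curl https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Recommend three sci-fi series similar to The Expanse."}],
    "max_tokens": 4000,
    "temperature": 0.7
  }'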
- Connect to your Sonarr instance to get personalized TV show recommendations
- The AI analyzes your TV library to understand your preferences
- Optional Plex, Jellyfin, or Tautulli integration enhances recommendations based on what you've actually watched
- Receive detailed recommendations with show descriptions and reasoning
- Connect to your Radarr instance to get personalized movie recommendations
- The AI analyzes your movie collection to understand genres and preferences you enjoy
- Optional Plex, Jellyfin, or Tautulli integration provides watch history data for better personalization
- Get suggested movies with descriptions, reasoning, and poster images
- Easily discover new films based on your existing collection
Tautulli provides advanced insights into your Plex Media Server, tracking user activity and media statistics. Integrating Tautulli with Recommendarr enhances recommendations by analyzing your actual watch history.
- In the Recommendarr interface, go to Settings and connect to your Tautulli instance
- Enter your Tautulli URL (e.g., http://localhost:8181 or http://your-server-ip:8181)
- Enter your Tautulli API key (found in Tautulli under Settings → Web Interface → API)
- Test the connection and save
- Enhanced Recommendations: Your watch history is analyzed to provide more personalized recommendations
- Viewing Insights: See what content is most popular in your household
- Better Context: The AI uses your actual viewing patterns to understand your preferences
Watch history from Tautulli complements your media library data, giving the AI a more complete picture of your preferences beyond just what content you've collected.
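If you want to see the kind of watch-history data Tautulli can expose before wiring it up, its v2 API can return recent history directly; the URL and key below are placeholders, and get_history is a standard Tautulli API command:
# Returns the 10 most recent watch-history entries as JSON
curl "http://localhost:8181/api/v2?apikey=YOUR_TAUTULLI_API_KEY&cmd=get_history&length=10"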
Your data never leaves your control:
- When using the API server (via Docker Compose):
  - Sonarr, Radarr, Plex, Jellyfin, and Tautulli API credentials are stored securely using encryption
  - AI API keys are stored encrypted and used only for your requests
  - The API server acts as a proxy, preventing CORS issues when accessing your services
  - All sensitive data is encrypted at rest on the server
- Media library and watch history data is sent only to the AI service you configure
- No analytics or tracking are included in the application
# Clone the repository
git clone https://github.com/fingerthief/recommendarr.git
cd recommendarr
# Install dependencies
npm install
# Start both the frontend and API server concurrently (recommended)
npm run dev
# Or start components individually:
# Run frontend development server with hot-reload
npm run serve
# Run API server separately
npm run api
# Compile and minify for production
npm run build
# Lint and fix files
npm run lint
The development server will start at http://localhost:8080 (frontend) and http://localhost:3050 (API server).
This project is licensed under the MIT License - see the LICENSE file for details.
- Vue.js - The progressive JavaScript framework
- Sonarr - For the amazing API that powers TV recommendations
- Radarr - For the API that enables movie recommendations
- Plex - For the API that provides watch history data
- Jellyfin - For the API that provides additional watch history data
- Tautulli - For the API that provides detailed Plex watch statistics
- OpenRouter - For the API that powers AI-based suggestions
Alternative AI tools for recommendarr
Similar Open Source Tools

aiaio
aiaio (AI-AI-O) is a lightweight, privacy-focused web UI for interacting with AI models. It supports both local and remote LLM deployments through OpenAI-compatible APIs. The tool provides features such as dark/light mode support, local SQLite database for conversation storage, file upload and processing, configurable model parameters through UI, privacy-focused design, responsive design for mobile/desktop, syntax highlighting for code blocks, real-time conversation updates, automatic conversation summarization, customizable system prompts, WebSocket support for real-time updates, Docker support for deployment, multiple API endpoint support, and multiple system prompt support. Users can configure model parameters and API settings through the UI, handle file uploads, manage conversations, and use keyboard shortcuts for efficient interaction. The tool uses SQLite for storage with tables for conversations, messages, attachments, and settings. Contributions to the project are welcome under the Apache License 2.0.

paperless-ai
Paperless-AI is an automated document analyzer tool designed for Paperless-ngx users. It utilizes the OpenAI API and Ollama (Mistral, llama, phi 3, gemma 2) to automatically scan, analyze, and tag documents. The tool offers features such as automatic document scanning, AI-powered document analysis, automatic title and tag assignment, manual mode for analyzing documents, easy setup through a web interface, document processing dashboard, error handling, and Docker support. Users can configure the tool through a web interface and access a debug interface for monitoring and troubleshooting. Paperless-AI aims to streamline document organization and analysis processes for users with access to Paperless-ngx and AI capabilities.

DevoxxGenieIDEAPlugin
Devoxx Genie is a Java-based IntelliJ IDEA plugin that integrates with local and cloud-based LLM providers to aid in reviewing, testing, and explaining project code. It supports features like code highlighting, chat conversations, and adding files/code snippets to context. Users can modify REST endpoints and LLM parameters in settings, including support for cloud-based LLMs. The plugin requires IntelliJ version 2023.3.4 and JDK 17. Building and publishing the plugin is done using Gradle tasks. Users can select an LLM provider, choose code, and use commands like review, explain, or generate unit tests for code analysis.

CrewAI-Studio
CrewAI Studio is an application with a user-friendly interface for interacting with CrewAI, offering support for multiple platforms and various backend providers. It allows users to run crews in the background, export single-page apps, and use custom tools for APIs and file writing. The roadmap includes features like better import/export, human input, chat functionality, automatic crew creation, and multiuser environment support.

meeting-minutes
An open-source AI assistant for taking meeting notes that captures live meeting audio, transcribes it in real-time, and generates summaries while ensuring user privacy. Perfect for teams to focus on discussions while automatically capturing and organizing meeting content without external servers or complex infrastructure. Features include modern UI, real-time audio capture, speaker diarization, local processing for privacy, and more. The tool also offers a Rust-based implementation for better performance and native integration, with features like live transcription, speaker diarization, and a rich text editor for notes. Future plans include database connection for saving meeting minutes, improving summarization quality, and adding download options for meeting transcriptions and summaries. The backend supports multiple LLM providers through a unified interface, with configurations for Anthropic, Groq, and Ollama models. System architecture includes core components like audio capture service, transcription engine, LLM orchestrator, data services, and API layer. Prerequisites for setup include Node.js, Python, FFmpeg, and Rust. Development guidelines emphasize project structure, testing, documentation, type hints, and ESLint configuration. Contributions are welcome under the MIT License.

gateway
CentralMind Gateway is an AI-first data gateway that securely connects any data source and automatically generates secure, LLM-optimized APIs. It filters out sensitive data, adds traceability, and optimizes for AI workloads. Suitable for companies deploying AI agents for customer support and analytics.

gitdiagram
GitDiagram is a tool that turns any GitHub repository into an interactive diagram for visualization in seconds. It offers instant visualization, interactivity, fast generation, customization, and API access. The tool utilizes a tech stack including Next.js, FastAPI, PostgreSQL, Claude 3.5 Sonnet, Vercel, EC2, GitHub Actions, PostHog, and Api-Analytics. Users can self-host the tool for local development and contribute to its development. GitDiagram is inspired by Gitingest and has future plans to use larger context models, allow user API key input, implement RAG with Mermaid.js docs, and include font-awesome icons in diagrams.

Director
Director is a framework to build video agents that can reason through complex video tasks like search, editing, compilation, generation, etc. It enables users to summarize videos, search for specific moments, create clips instantly, integrate GenAI projects and APIs, add overlays, generate thumbnails, and more. Built on VideoDB's 'video-as-data' infrastructure, Director is perfect for developers, creators, and teams looking to simplify media workflows and unlock new possibilities.

morphic
Morphic is an AI-powered answer engine with a generative UI. It utilizes a stack of Next.js, Vercel AI SDK, OpenAI, Tavily AI, shadcn/ui, Radix UI, and Tailwind CSS. To get started, fork and clone the repo, install dependencies, fill out secrets in the .env.local file, and run the app locally using 'bun dev'. You can also deploy your own live version of Morphic with Vercel. Verified models that can be specified to writers include Groq, LLaMA3 8b, and LLaMA3 70b.

note-companion
Note Companion is an AI-powered Obsidian plugin that automatically organizes and formats notes. It provides organizing suggestions, custom format AI prompts, automated workflows, handwritten note digitization, audio transcription, atomic note generation, YouTube summaries, and context-aware AI chat. Key use cases include smart vault management, handwritten notes digitization, and intelligent meeting notes. The tool offers advanced features like custom AI templates and multi-modal support for processing various content types. Users can seamlessly integrate with mobile workflows and utilize iOS shortcuts for sending Apple Notes to Obsidian. Note Companion enhances productivity by streamlining note organization and management tasks with AI assistance.

Archon
Archon is an AI meta-agent designed to autonomously build, refine, and optimize other AI agents. It serves as a practical tool for developers and an educational framework showcasing the evolution of agentic systems. Through iterative development, Archon demonstrates the power of planning, feedback loops, and domain-specific knowledge in creating robust AI agents.

FinAnGPT-Pro
FinAnGPT-Pro is a financial data downloader and AI query system that downloads quarterly and annual financial data for stocks from EOD Historical Data, storing it in MongoDB and Google BigQuery. It includes an AI-powered natural language interface for querying financial data. Users can set up the tool by following the prerequisites and setup instructions provided in the README. The tool allows users to download financial data for all stocks in a watchlist or for a single stock, query financial data using a natural language interface, and receive responses in a structured format. Important considerations include error handling, rate limiting, data validation, BigQuery costs, MongoDB connection, and security measures for API keys and credentials.

webapp-starter
webapp-starter is a modern full-stack application template built with Turborepo, featuring a Hono + Bun API backend and Next.js frontend. It provides an easy way to build a SaaS product. The backend utilizes technologies like Bun, Drizzle ORM, and Supabase, while the frontend is built with Next.js, Tailwind CSS, Shadcn/ui, and Clerk. Deployment can be done using Vercel and Render. The project structure includes separate directories for API backend and Next.js frontend, along with shared packages for the main database. Setup involves installing dependencies, configuring environment variables, and setting up services like Bun, Supabase, and Clerk. Development can be done using 'turbo dev' command, and deployment instructions are provided for Vercel and Render. Contributions are welcome through pull requests.

julep
Julep is an advanced platform for creating stateful and functional AI apps powered by large language models. It offers features like statefulness by design, automatic function calling, production-ready deployment, cron-like asynchronous functions, 90+ built-in tools, and the ability to switch between different LLMs easily. Users can build AI applications without the need to write code for embedding, saving, and retrieving conversation history, and can connect to third-party applications using Composio. Julep simplifies the process of getting started with AI apps, whether they are conversational, functional, or agentic.

extension-gen-ai
The Looker GenAI Extension provides code examples and resources for building a Looker Extension that integrates with Vertex AI Large Language Models (LLMs). Users can leverage the power of LLMs to enhance data exploration and analysis within Looker. The extension offers generative explore functionality to ask natural language questions about data and generative insights on dashboards to analyze data by asking questions. It leverages components like BQML Remote Models, BQML Remote UDF with Vertex AI, and Custom Fine Tune Model for different integration options. Deployment involves setting up infrastructure with Terraform and deploying the Looker Extension by creating a Looker project, copying extension files, configuring BigQuery connection, connecting to Git, and testing the extension. Users can save example prompts and configure user settings for the extension. Development of the Looker Extension environment includes installing dependencies, starting the development server, and building for production.
For similar tasks

danswer
Danswer is an open-source Gen-AI Chat and Unified Search tool that connects to your company's docs, apps, and people. It provides a Chat interface and plugs into any LLM of your choice. Danswer can be deployed anywhere and for any scale - on a laptop, on-premise, or to cloud. Since you own the deployment, your user data and chats are fully in your own control. Danswer is MIT licensed and designed to be modular and easily extensible. The system also comes fully ready for production usage with user authentication, role management (admin/basic users), chat persistence, and a UI for configuring Personas (AI Assistants) and their Prompts. Danswer also serves as a Unified Search across all common workplace tools such as Slack, Google Drive, Confluence, etc. By combining LLMs and team specific knowledge, Danswer becomes a subject matter expert for the team. Imagine ChatGPT if it had access to your team's unique knowledge! It enables questions such as "A customer wants feature X, is this already supported?" or "Where's the pull request for feature Y?"

search_with_ai
Build your own conversation-based search with AI, a simple implementation with Node.js & Vue3. Live Demo Features: * Built-in support for LLM: OpenAI, Google, Lepton, Ollama(Free) * Built-in support for search engine: Bing, Sogou, Google, SearXNG(Free) * Customizable pretty UI interface * Support dark mode * Support mobile display * Support local LLM with Ollama * Support i18n * Support Continue Q&A with contexts.
For similar jobs

weave
Weave is a toolkit for developing Generative AI applications, built by Weights & Biases. With Weave, you can log and debug language model inputs, outputs, and traces; build rigorous, apples-to-apples evaluations for language model use cases; and organize all the information generated across the LLM workflow, from experimentation to evaluations to production. Weave aims to bring rigor, best-practices, and composability to the inherently experimental process of developing Generative AI software, without introducing cognitive overhead.

LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.

VisionCraft
The VisionCraft API is a free API for using over 100 different AI models. From images to sound.

kaito
Kaito is an operator that automates the AI/ML inference model deployment in a Kubernetes cluster. It manages large model files using container images, avoids tuning deployment parameters to fit GPU hardware by providing preset configurations, auto-provisions GPU nodes based on model requirements, and hosts large model images in the public Microsoft Container Registry (MCR) if the license allows. Using Kaito, the workflow of onboarding large AI inference models in Kubernetes is largely simplified.

PyRIT
PyRIT is an open access automation framework designed to empower security professionals and ML engineers to red team foundation models and their applications. It automates AI Red Teaming tasks to allow operators to focus on more complicated and time-consuming tasks and can also identify security harms such as misuse (e.g., malware generation, jailbreaking), and privacy harms (e.g., identity theft). The goal is to allow researchers to have a baseline of how well their model and entire inference pipeline is doing against different harm categories and to be able to compare that baseline to future iterations of their model. This allows them to have empirical data on how well their model is doing today, and detect any degradation of performance based on future improvements.

tabby
Tabby is a self-hosted AI coding assistant, offering an open-source and on-premises alternative to GitHub Copilot. It boasts several key features: * Self-contained, with no need for a DBMS or cloud service. * OpenAPI interface, easy to integrate with existing infrastructure (e.g Cloud IDE). * Supports consumer-grade GPUs.

spear
SPEAR (Simulator for Photorealistic Embodied AI Research) is a powerful tool for training embodied agents. It features 300 unique virtual indoor environments with 2,566 unique rooms and 17,234 unique objects that can be manipulated individually. Each environment is designed by a professional artist and features detailed geometry, photorealistic materials, and a unique floor plan and object layout. SPEAR is implemented as Unreal Engine assets and provides an OpenAI Gym interface for interacting with the environments via Python.

Magick
Magick is a groundbreaking visual AIDE (Artificial Intelligence Development Environment) for no-code data pipelines and multimodal agents. Magick can connect to other services and comes with nodes and templates well-suited for intelligent agents, chatbots, complex reasoning systems and realistic characters.