Best AI Tools for Supporting Multiple GPUs
20 - AI Tool Sites
Stablematic
Stablematic is a web-based platform that allows users to run Stable Diffusion and other machine learning models without the need for local setup or hardware limitations. It provides a user-friendly interface, pre-installed plugins, and dedicated GPU resources for a seamless and efficient workflow. Users can generate images and videos from text prompts, merge multiple models, train custom models, and access a range of pre-trained models, including Dreambooth and CivitAi models. Stablematic also offers API access for developers and dedicated support for users to explore and utilize the capabilities of Stable Diffusion and other machine learning models.
Juice Remote GPU
Juice Remote GPU is software that enables AI and graphics workloads on remote GPUs. It allows users to offload GPU processing for any CUDA or Vulkan application to a remote host running the Juice agent. The software injects CUDA and Vulkan implementations at runtime, eliminating the need for code changes in the application. Juice supports multiple clients connecting to multiple GPUs, as well as multiple clients sharing a single GPU. It is useful for sharing a single GPU across multiple workstations, allocating GPUs dynamically to CPU-only machines, and simplifying development workflows and deployments. Juice Remote GPU performs within 5% of a local GPU when running in the same datacenter. It supports various APIs, including CUDA, Vulkan, DirectX, and OpenGL, and is compatible with PyTorch and TensorFlow. The team behind Juice Remote GPU consists of engineers from Meta, Intel, and the gaming industry.
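The runtime-injection idea — swapping the GPU API implementation underneath an unmodified application — can be illustrated in miniature (purely conceptual; this is not Juice's actual mechanism, and all class names here are hypothetical):

```python
# Conceptual sketch of API interception: the application calls the same
# interface whether the backend executes locally or forwards to a remote host.
class LocalBackend:
    def run(self, op: str) -> str:
        return f"local GPU ran {op}"

class RemoteBackend:
    def run(self, op: str) -> str:
        # In a real system this would serialize the call and send it
        # over the network to a host running the agent.
        return f"remote GPU ran {op}"

def application(backend) -> str:
    # Application code is identical for both backends -- the point of
    # injecting the implementation at runtime rather than changing code.
    return backend.run("matmul")

print(application(LocalBackend()))
print(application(RemoteBackend()))
```

The application function never knows which backend it received, which is why no source changes are needed when the implementation is swapped.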
LiveChatAI
LiveChatAI is an AI chatbot application that works with your data to provide interactive and personalized customer support solutions. It blends AI and human support to deliver dynamic and accurate responses, improving customer satisfaction and reducing support volume. With features like AI Actions, custom question & answers, and content import, LiveChatAI offers a seamless integration for businesses across various platforms and languages. The application is designed to be user-friendly, requiring no AI expertise, and offers instant localization in 95 languages.
AI Diff Checker Comparator Online
The AI Diff Checker Comparator Online is an advanced online comparison tool that leverages AI technology to help users compare multiple text files, JSON files, and code files side by side. It offers both pairwise and baseline comparison modes, ensuring precise results. The tool processes files based on their content structure, supports various file types, and provides real-time editing capabilities. Users can benefit from its accurate comparison algorithms and innovative features, making it a powerful and easy-to-use solution for spotting differences between files.
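Pairwise text comparison of the kind described above can be sketched with Python's standard difflib module (illustrative only; the tool's own comparison algorithms are not public):

```python
import difflib

# Two versions of a small text file, as lists of lines.
left = ["alpha", "beta", "gamma"]
right = ["alpha", "BETA", "gamma", "delta"]

# unified_diff yields a line-by-line patch between the two versions.
diff = list(difflib.unified_diff(left, right,
                                 fromfile="a.txt", tofile="b.txt",
                                 lineterm=""))
for line in diff:
    print(line)
```

Lines prefixed with `-` exist only in the first file, `+` only in the second, and unprefixed lines are common to both.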
15minuteplan.ai
15minuteplan.ai is a cutting-edge AI Business Plan Generator that enables entrepreneurs to create professional business plans in under 15 minutes. The tool simplifies the process by guiding users through a series of questions and leveraging advanced language models like GPT-3.5 and GPT-4 to generate comprehensive plans. It caters to entrepreneurs seeking investor funding, bank loans, or simply looking to create a business plan for various purposes. The AI tool is designed to save time and effort by providing quick and efficient solutions for business planning.
AI Comic Translate
AI Comic Translate is an intelligent comic translation tool that revolutionizes comic translation by providing fast, accurate, multi-language translation services for comic enthusiasts and creators. It offers cost-effective solutions and an easy-to-use interface, and supports translation between multiple languages, breaking language barriers and taking comic works global.
EmbedAI
EmbedAI is a platform that enables users to create custom AI chatbots powered by ChatGPT using their own data. It allows users to train an AI chatbot on their data and embed it on their website. The platform aims to efficiently manage information and provide automated responses to user queries. EmbedAI supports multiple languages and offers customization options for the chatbot's appearance and integration with various apps.
ASKTOWEB
ASKTOWEB is an AI-powered service that enhances websites by adding AI search buttons to SaaS landing pages, software documentation pages, and other websites. It allows visitors to easily search for information without needing specific keywords, making websites more user-friendly and useful. ASKTOWEB analyzes user questions to improve site content and discover customer needs. The service offers multi-model accuracy verification, direct reference jump links, multilingual chatbot support, effortless attachment with a single line of script, and a simple UI without annoying pop-ups. ASKTOWEB reduces the burden on customer support by acting as a buffer for inquiries about available information on the website.
Kel
Kel is an AI assistant designed to operate within the Command Line Interface (CLI). It offers users the ability to automate repetitive tasks, boost productivity, and enhance the intelligence and efficiency of their CLI experience. Kel supports multiple Large Language Models (LLMs), including OpenAI, Anthropic, and Ollama. Users can upload files to interact with their artifacts and bring their own API key for integration. The tool is free and open source, allowing for community contributions on GitHub. For support, users can reach out to the Kel team.
Doclingo
Doclingo is an AI-powered document translation tool that supports translating documents in various formats such as PDF, Word, Excel, PowerPoint, SRT subtitles, ePub ebooks, RAR and ZIP archives, and more. It utilizes large language models to provide accurate and professional translations, preserving the original layout of the documents. Users can enjoy a limited-time free trial upon registration, with the option to subscribe for more features. Doclingo aims to offer high-quality translation services through continuous algorithm improvements.
ttsMP3.com
ttsMP3.com is a free Text-To-Speech and Text-to-MP3 tool that allows users to easily convert US English text into professional speech for various purposes such as e-learning, presentations, YouTube videos, and website accessibility. The tool offers a wide range of voices in different languages and accents, including regular and AI voices. Users can download the generated speech as MP3 files, and customize speech with features like breaks, emphasis, speed adjustments, pitch variations, whispers, and conversations. Supported voice languages include Arabic, English, Portuguese, Spanish, Chinese, Danish, Dutch, French, German, Icelandic, Indian, Italian, Japanese, Korean, Mexican, Norwegian, Polish, Romanian, Russian, Swedish, Turkish, and Welsh.
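Speech customizations like breaks, emphasis, and speed changes are typically expressed with SSML-style tags wrapped around the text; a hedged sketch of the general pattern (exact tag support varies by voice and service):

```xml
<speak>
  Welcome to the course.
  <break time="800ms"/>
  <prosody rate="slow" pitch="-10%">This sentence is spoken slowly and lower.</prosody>
  <emphasis level="strong">This part is emphasized.</emphasis>
</speak>
```

The synthesizer reads the plain text and interprets the tags as rendering instructions rather than speaking them aloud.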
Humanize AI Text
Humanize AI Text is a free online AI humanizer tool that converts AI-generated content from ChatGPT, Google Bard, Jasper, QuillBot, Grammarly, or any other AI to human text without altering the content's meaning. The platform uses advanced algorithms to analyze and produce output that mimics human writing style. It offers various modes for conversion and supports multiple languages. The tool aims to help content creators, bloggers, and writers enhance their content quality and improve search engine ranking by converting AI-generated text into human-readable form.
Paraphrasing.io
Paraphrasing.io is a free AI paraphrasing tool that helps users rewrite, edit, and adjust the tone of their content for improved comprehension. It prevents plagiarism in various types of content, such as blogs and research papers, using cutting-edge AI technology. The tool offers four paraphrasing modes, each catering to a distinct writing style. Users including writers, bloggers, researchers, students, and laypersons can benefit from this online tool to enhance the uniqueness, engagement, and readability of their content.
CoeFont
CoeFont is a global AI Voice Hub that offers innovative AI voice solutions to empower users worldwide to unleash the full potential of their voices. With features like Text-to-Speech Editor, Voice Changer, and AI Voice Creation, CoeFont provides a platform for users to transform written text into lifelike audio, experiment with voice effects, and monetize their voice talent. The application supports multiple languages, offers a wide range of voices, and ensures natural-sounding interactions through real-time conversion. CoeFont is dedicated to promoting inclusivity and accessibility through initiatives like the Voice for All project, providing free AI voice services to individuals at risk of losing their voices.
Spinach
Spinach is an AI-powered tool that transforms meeting discussions into actionable notes and automates post-meeting tasks. It seamlessly integrates with existing tools, supports multiple languages, and ensures enterprise-grade security. Users can effortlessly capture decision points, action items, and status updates, enhancing team collaboration and productivity.
Corti
Corti is an AI platform that provides advanced capabilities for patient consultations. It offers features such as Co-Pilot, a proactive AI scribe and assistant for clinicians, and Mission Control, an AI-powered conversation recorder. Corti's AI tools support various healthcare tasks, including high precision procedure and diagnosis coding, context-aware assistants for clinicians, and support for multiple languages in speech and text. The platform is trusted by major hospitals and healthcare providers worldwide.
CodeAI
CodeAI is an advanced AI tool designed to optimize the development process for coders of all levels. It offers cutting-edge AI capabilities to enhance coding efficiency, provide real-time feedback, automate tasks like generating commit messages and updating changelogs, and boost productivity. CodeAI supports multiple programming languages and is suitable for individual developers, small teams, and professionals working on various projects.
Spicy Chat AI
Spicy Chat AI is an innovative AI application that allows users to engage in uncensored adult-themed conversations with lifelike NSFW Character AI. Users can create virtual personas, enjoy uninhibited chats, and experience emotional depth in a safe and expressive environment. The platform prioritizes user privacy with SSL encryption and compliance with data protection standards. Spicy Chat AI offers both free and premium plans, supporting multiple languages and unique features like AI voice response and chat-based image generation.
Scrabblecam
Scrabblecam is an AI-powered tool designed to help Scrabble players find the best moves by analyzing pictures of Scrabble boards. Users can simply take a picture of the board and receive suggestions for the optimal plays. The tool supports multiple game languages including English (CSW21, World), English (NWL23, North American), French (ODS9), and Italian. Scrabblecam leverages artificial intelligence to enhance the gaming experience and provide strategic insights to players.
X-Me
X-Me is an AI tool that allows users to generate personalized AI avatar videos effortlessly. Users can create AI avatars that mimic famous personalities such as AI Trump, AI Musk, AI Johnson, AI Kard, and AI Gaga by inputting text in multiple languages. The tool supports 147 languages, charges no customization fees, and requires no training. With X-Me, users can upload a selfie video, enter text, and generate AI avatar videos that reflect their face, voice, and story. The platform is known for its efficient, fast, and user-friendly approach to creating realistic digital human videos without the need for complex model training processes.
20 - Open Source AI Tools
FluidFrames.RIFE
FluidFrames.RIFE is a Windows app powered by RIFE AI to create frame-interpolated and slow-motion videos. It is written in Python and utilizes external packages such as torch, onnxruntime-directml, customtkinter, OpenCV, moviepy, and Nuitka. The app features an elegant GUI, video frame generation at different speeds, video slow motion, video resizing, multiple-GPU support, and compatibility with various video formats. Future versions aim to support different GPU types, enhance the GUI, include audio processing, optimize video processing speed, and introduce new features like saving AI-generated frames and supporting different RIFE AI models.
jina
Jina is a tool that allows users to build multimodal AI services and pipelines using cloud-native technologies. It provides a Pythonic experience for serving ML models and transitioning from local deployment to advanced orchestration frameworks like Docker-Compose, Kubernetes, or Jina AI Cloud. Users can build and serve models for any data type and deep learning framework, design high-performance services with easy scaling, serve LLM models while streaming their output, integrate with Docker containers via Executor Hub, and host on CPU/GPU using Jina AI Cloud. Jina also offers advanced orchestration and scaling capabilities, a smooth transition to the cloud, and easy scalability and concurrency features for applications. Users can deploy to their own cloud or system with Kubernetes and Docker Compose integration, and even deploy to JCloud for autoscaling and monitoring.
awesome-cuda-tensorrt-fpga
awesome-cuda-tensorrt-fpga is, as its name suggests, an awesome-style curated repository collecting resources on CUDA programming, TensorRT inference optimization, and FPGA-based acceleration.
RealScaler
RealScaler is a Windows app powered by RealESRGAN AI to enhance, upscale, and de-noise photos and videos. It provides an easy-to-use GUI for upscaling images and videos using multiple AI models. The tool supports automatic image tiling and merging to avoid GPU VRAM limitations, resizing images/videos before upscaling, interpolation between original and upscaled content, and compatibility with various image and video formats. RealScaler is written in Python and requires Windows 11/10, at least 8GB of RAM, and a DirectX 12-compatible GPU with 4GB of VRAM. Future versions aim to enhance performance, support more GPUs, offer a new GUI with Windows 11 style, include audio for upscaled videos, and provide features like metadata extraction and application from original to upscaled files.
sdkit
sdkit (stable diffusion kit) is an easy-to-use library for utilizing Stable Diffusion in AI Art projects. It includes features like ControlNets, LoRAs, Textual Inversion Embeddings, GFPGAN, CodeFormer for face restoration, RealESRGAN for upscaling, k-samplers, support for custom VAEs, NSFW filter, model-downloader, parallel GPU support, and more. It offers a model database, auto-scanning for malicious models, and various optimizations. The API consists of modules for loading models, generating images, filters, model merging, and utilities, all managed through the sdkit.Context object.
QualityScaler
QualityScaler is a Windows app powered by AI to enhance, upscale, and de-noise photographs and videos. It provides an easy-to-use GUI for upscaling images and videos using multiple AI models. The tool supports automatic image tiling and merging to avoid GPU VRAM limitations, resizing images/videos before upscaling, and interpolation between the original and upscaled content. QualityScaler is written in Python and utilizes external packages such as torch, onnxruntime-directml, customtkinter, OpenCV, moviepy, and Nuitka. It requires Windows 11 or Windows 10, at least 8GB of RAM, and a DirectX 12-compatible GPU with 4GB of VRAM or more. The tool aims to continue improving with upcoming versions by adding new features, enhancing performance, and supporting additional AI architectures.
llamafile
llamafile is a tool that enables users to distribute and run Large Language Models (LLMs) with a single file. It combines llama.cpp with Cosmopolitan Libc to create a framework that simplifies the complexity of LLMs into a single-file executable called a 'llamafile'. Users can run these executable files locally on most computers without the need for installation, making open LLMs more accessible to developers and end users. llamafile also provides example llamafiles for various LLM models, allowing users to try out different LLMs locally. The tool supports multiple CPU microarchitectures, CPU architectures, and operating systems, making it versatile and easy to use.
ppl.llm.serving
PPL LLM Serving is a serving framework based on ppl.nn for various Large Language Models (LLMs). It provides inference support for LLaMA. Key features include:
* **High Performance:** Optimized for fast and efficient inference on LLM models.
* **Scalability:** Supports distributed deployment across multiple GPUs or machines.
* **Flexibility:** Allows for customization of model configurations and inference pipelines.
* **Ease of Use:** Provides a user-friendly interface for deploying and managing LLM models.
This tool is suitable for various tasks, including:
* **Text Generation:** Generating text, stories, or code from scratch or based on a given prompt.
* **Text Summarization:** Condensing long pieces of text into concise summaries.
* **Question Answering:** Answering questions based on a given context or knowledge base.
* **Language Translation:** Translating text between different languages.
* **Chatbot Development:** Building conversational AI systems that can engage in natural language interactions.
KsanaLLM
KsanaLLM is a high-performance engine for LLM inference and serving. It utilizes optimized CUDA kernels for high performance, efficient memory management, and detailed optimization for dynamic batching. The tool offers flexibility with seamless integration with popular Hugging Face models, support for multiple weight formats, and high-throughput serving with various decoding algorithms. It enables multi-GPU tensor parallelism, streaming outputs, and an OpenAI-compatible API server. KsanaLLM supports NVIDIA GPUs and Huawei Ascend NPU, and seamlessly integrates with verified Hugging Face models like LLaMA, Baichuan, and Qwen. Users can create a docker container, clone the source code, compile for Nvidia or Huawei Ascend NPU, run the tool, and distribute it as a wheel package. Optional features include a model weight map JSON file for models with different weight names.
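An OpenAI-compatible API server accepts standard chat-completion requests, so existing OpenAI client code can point at it unchanged. A request-body sketch for the usual `/v1/chat/completions` endpoint (the model name here is an assumption, not a KsanaLLM specific):

```json
{
  "model": "llama-13b",
  "messages": [
    {"role": "user", "content": "Summarize tensor parallelism in one sentence."}
  ],
  "stream": true
}
```

With `"stream": true`, the server returns incremental token chunks rather than a single response, matching the streaming-output feature described above.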
booster
Booster is a powerful inference accelerator designed for scaling large language models within production environments or for experimental purposes. It is built with performance and scaling in mind, supporting various CPUs and GPUs, including Nvidia CUDA, Apple Metal, and OpenCL cards. The tool can split large models across multiple GPUs, offering fast inference on machines with beefy GPUs. It supports both regular FP16/FP32 models and quantised versions, along with popular LLM architectures. Additionally, Booster features proprietary Janus Sampling for code generation and non-English languages.
AI-Horde-Worker
AI-Horde-Worker is a repository containing the original reference implementation for a worker that turns your graphics card(s) into a worker for the AI Horde. It allows users to generate or alchemize images for others. The repository provides instructions for setting up the worker on Windows and Linux, updating the worker code, running with multiple GPUs, and stopping the worker. Users can configure the worker using a WebUI to connect to the horde with their username and API key. The repository also includes information on model usage and running the Docker container with specified environment variables.
TensorRT-LLM
TensorRT-LLM is an easy-to-use Python API for defining Large Language Models (LLMs) and building TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM contains components to create Python and C++ runtimes that execute those TensorRT engines. It also includes a backend for integration with the NVIDIA Triton Inference Server, a production-quality system for serving LLMs. Models built with TensorRT-LLM can be executed on a wide range of configurations, from a single GPU to multiple nodes with multiple GPUs (using Tensor Parallelism and/or Pipeline Parallelism).
litserve
LitServe is a high-throughput serving engine for deploying AI models at scale. It generates an API endpoint for a model and handles batching, streaming, autoscaling across CPUs/GPUs, and more. Built for enterprise scale, it supports every major framework, including PyTorch, JAX, TensorFlow, and more. LitServe is designed to let users focus on model performance, not serving boilerplate. It is like PyTorch Lightning for model serving, but with broader framework support and scalability.
horde-worker-reGen
This repository provides the latest implementation for the AI Horde Worker, allowing users to utilize their graphics card(s) to generate, post-process, or analyze images for others. It offers a platform where users can create images and earn 'kudos' in return, granting priority for their own image generations. The repository includes important details for setup, recommendations for system configurations, instructions for installation on Windows and Linux, basic usage guidelines, and information on updating the AI Horde Worker. Users can also run the worker with multiple GPUs and receive notifications for updates through Discord. Additionally, the repository contains models that are licensed under the CreativeML OpenRAIL License.
easydiffusion
Easy Diffusion 3.0 is a user-friendly tool for installing and using Stable Diffusion on your computer. It offers hassle-free installation, clutter-free UI, task queue, intelligent model detection, live preview, image modifiers, multiple prompts file, saving generated images, UI themes, searchable models dropdown, and supports various image generation tasks like 'Text to Image', 'Image to Image', and 'InPainting'. The tool also provides advanced features such as custom models, merge models, custom VAE models, multi-GPU support, auto-updater, developer console, and more. It is designed for both new users and advanced users looking for powerful AI image generation capabilities.
pytorch-lightning
PyTorch Lightning is a framework for training and deploying AI models. It provides a high-level API that abstracts away the low-level details of PyTorch, making it easier to write and maintain complex models. Lightning also includes a number of features that make it easy to train and deploy models on multiple GPUs or TPUs, and to track and visualize training progress. PyTorch Lightning is used by a wide range of organizations, including Google, Facebook, and Microsoft, as well as by researchers at top universities around the world. Benefits of using PyTorch Lightning include:
* **Increased productivity:** Lightning's high-level API makes it easy to write and maintain complex models, saving time and effort so you can focus on the research or business problem you're trying to solve.
* **Improved performance:** Lightning's optimized training loops and data loading pipelines can help you train models faster and with better performance.
* **Easier deployment:** Lightning makes it easy to deploy models to a variety of platforms, including the cloud, on-premises servers, and mobile devices.
* **Better reproducibility:** Lightning's logging and visualization tools make it easy to track and reproduce training results.
h2ogpt
h2oGPT is an Apache V2 open-source project that allows users to query and summarize documents or chat with local private GPT LLMs. Its features include:
* A private offline database of any documents (PDFs, Excel, Word, images, video frames, YouTube, audio, code, text, Markdown, etc.)
* A persistent database (Chroma, Weaviate, or in-memory FAISS) using accurate embeddings (instructor-large, all-MiniLM-L6-v2, etc.)
* Efficient use of context using instruct-tuned LLMs (no need for LangChain's few-shot approach)
* Parallel summarization and extraction, reaching an output of 80 tokens per second with the 13B LLaMa2 model
* HYDE (Hypothetical Document Embeddings) for enhanced retrieval based upon LLM responses
* A variety of supported models (LLaMa2, Mistral, Falcon, Vicuna, WizardLM; with AutoGPTQ, 4-bit/8-bit, LORA, etc.)
* GPU support for HF and LLaMa.cpp GGML models, and CPU support using HF, LLaMa.cpp, and GPT4ALL models
* Attention Sinks for arbitrarily long generation (LLaMa-2, Mistral, MPT, Pythia, Falcon, etc.)
* A UI or CLI with streaming for all models, plus document upload and viewing through the UI (control multiple collaborative or personal collections)
* Vision models (LLaVa, Claude-3, Gemini-Pro-Vision, GPT-4-Vision) and image generation with Stable Diffusion (sdxl-turbo, sdxl) and PlaygroundAI (playv2)
* Voice STT using Whisper with streaming audio conversion
* Voice TTS using MIT-licensed Microsoft Speech T5 with multiple voices and streaming audio conversion, plus MPL2-licensed TTS including voice cloning
* AI Assistant Voice Control Mode for hands-free control of h2oGPT chat
* Bake-off UI mode for comparing many models at the same time
* Easy download of model artifacts and control over models like LLaMa.cpp through the UI
* Authentication and state preservation in the UI by user/password, via Native or Google OAuth
* Linux, Docker, macOS, and Windows support, with an easy Windows installer for Windows 10 64-bit (CPU/CUDA) and an easy macOS installer (CPU/M1/M2)
* Inference server support (oLLaMa, HF TGI server, vLLM, Gradio, ExLLaMa, Replicate, OpenAI, Azure OpenAI, Anthropic)
* An OpenAI-compliant server proxy API (h2oGPT acts as a drop-in replacement for an OpenAI server) and a Python client API (to talk to the Gradio server)
* JSON mode with any model via code-block extraction; also supports MistralAI JSON mode, Claude-3 via function calling with strict schema, OpenAI via JSON mode, and vLLM via guided_json with strict schema
* Web-search integration with chat and document Q/A, plus agents for search, document Q/A, Python code, and CSV frames (experimental, best with OpenAI currently)
* Performance evaluation using reward models, with quality maintained by over 1000 unit and integration tests taking over 4 GPU-hours
mistral.rs
Mistral.rs is a fast LLM inference platform written in Rust. It supports inference on a variety of devices, quantization, and easy application integration via an OpenAI-compatible HTTP server and Python bindings.
habitat-sim
Habitat-Sim is a high-performance physics-enabled 3D simulator with support for 3D scans of indoor/outdoor spaces, CAD models of spaces and piecewise-rigid objects, configurable sensors, robots described via URDF, and rigid-body mechanics. It prioritizes simulation speed over the breadth of simulation capabilities, achieving several thousand frames per second (FPS) running single-threaded and over 10,000 FPS multi-process on a single GPU when rendering a scene from the Matterport3D dataset. Habitat-Sim simulates a Fetch robot interacting in ReplicaCAD scenes at over 8,000 steps per second (SPS), where each ‘step’ involves rendering 1 RGBD observation (128×128 pixels) and rigid-body dynamics for 1/30sec.
20 - OpenAI GPTs
Social Mentor One Gpt
I generate share drafts for Facebook, Instagram, LinkedIn, X, and Threads for news articles, starting from a link. 👇 Paste the link directly without writing anything else and press enter.
Marketing Scribe
I'm a creative bot crafting engaging social posts in English and Dutch, informed by extensive copywriting resources.
Multiple Sclerosis MS Companion
Friendly and conversational MS companion, empathetic and informative.
Directv Packages - How To Guide 3 Months Free
Comprehensive guide on Directv packages and multiple offers.
LightingGPT
LightingGPT is an innovative AI system created by Lightinology. It is specifically designed to answer a wide range of questions about lighting and optics, and it supports multiple languages.
Learn WCAG2.2 (Web Accessibility)
This GPT is created to learn Web Content Accessibility Guidelines (WCAG) 2.2. Supports multiple languages.
CreceTube Experto
Multilingual assistant for video content creation, with support and creative advice in multiple languages.
Ekko Support Specialist
How to be a master of surprise plays and unconventional strategies in the bot lane as a support role.
Backloger.ai - Support Log Analyzer and Summary
Drop your support log here and it will automatically generate concise summaries to report to the tech team.
Tech Support Advisor
From setting up a printer to troubleshooting a device, I’m here to help you step-by-step.