Best AI Tools for Multi-GPU Training
20 - AI Tool Sites
Cirrascale Cloud Services
Cirrascale Cloud Services is an AI tool that offers cloud solutions for Artificial Intelligence applications. The platform provides a range of cloud services and products tailored for AI innovation, including NVIDIA GPU Cloud, AMD Instinct Series Cloud, Qualcomm Cloud, Graphcore, Cerebras, and SambaNova. Cirrascale's AI Innovation Cloud enables users to test and deploy on leading AI accelerators in one cloud, democratizing AI by delivering high-performance AI compute and scalable deep learning solutions. The platform also offers professional and managed services, tailored multi-GPU server options, and high-throughput storage and networking solutions to accelerate development, training, and inference workloads.
Multi AI Chat
Multi AI Chat is an AI-powered application that allows users to interact with various AI models by asking questions. The platform enables users to seek information, get answers, and engage in conversations with AI on a wide range of topics. With Multi AI Chat, users can explore the capabilities of different AI technologies and enhance their understanding of artificial intelligence.
DataSpark AI Multi-Agent Platform
DataSpark AI Multi-Agent Platform is an advanced artificial intelligence tool developed by DataSpark, Inc. The platform leverages cutting-edge AI technology to provide users with a powerful solution for multi-agent systems. With DataSpark AI, users can access the first GenAI Super Agent, enabling them to discover new insights and make informed decisions. The platform is designed to streamline complex processes and enhance productivity across various industries.
Artiko.ai
Artiko.ai is a multi-model AI chat platform that integrates advanced AI models such as ChatGPT, Claude 3, Gemini 1.5, and Mistral AI. It offers a convenient and cost-effective solution for work, business, or study by providing a single chat interface to harness the power of multi-model AI. Users can save time and money while achieving better results through features like text rewriting, data conversation, AI assistants, website chatbot, PDF and document chat, translation, brainstorming, and integration with various tools like Woocommerce, Amazon, Salesforce, and more.
Salieri
Salieri is a multi-agent LLM home multiverse platform that offers an efficient, trustworthy, and automated AI workflow. The innovative Multiverse Factory allows developers to elevate their projects by generating personalized AI applications through an intuitive interface. The platform aims to optimize user queries via LLM API calls, reduce expenses, and enhance the cognitive functions of AI agents. Salieri's team comprises experts from top AI institutes like MIT and Google, focusing on generative AI, neural knowledge graph, and composite AI models.
Multilingual.top
Multilingual.top is an advanced translation platform that enables users to translate text into multiple languages at once. It leverages artificial intelligence, specifically OpenAI's technology, to provide accurate and authentic translations. With Multilingual.top, users can break away from the traditional one-to-one translation limits and get multilingual results in one go, saving time and effort. The platform supports a wide range of languages, including Arabic, Chinese, Danish, Dutch, English, French, German, Indonesian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Russian, Spanish, Thai, Turkish, and more. Multilingual.top offers a free translation service with some limits to prevent misuse and ensure everyone has fair access. Users can also upload documents in JSON, PDF, DOCX, and DOC formats for translation, making it especially useful for office workers and professionals dealing with documentation. The platform is continuously updated to improve translation accuracy and target language breadth.
Questflow
Questflow is a decentralized AI agent economy platform that enables users to orchestrate multiple AI agents to gather insights, take action, and earn rewards autonomously. It serves as a co-pilot for work, helping knowledge workers automate repetitive tasks in a private and safety-first approach. The platform offers user-friendly dashboards, visual reports, smart keyword generators, content evaluation, SEO goal setting, automated alerts, actionable SEO tips, and link optimization wizards. Users can dispatch tasks to AI agents in groups and take action on tasks automatically through decentralized multi-agent orchestration. Questflow also facilitates the distribution of economic incentives to creators and guardians of AI agents via a blockchain network, rewarding them for their contributions to the future of work.
DTiQ
DTiQ is a leading provider of loss prevention and intelligent video solutions for businesses in the United States and globally. The company offers a range of products and services, including SmartAudit, SmartAnalysis, and SmartAssurance, designed to help businesses improve operational quality, reduce theft, and enhance customer experience. DTiQ's solutions are trusted by hundreds of brands across various industries, such as quick service restaurants, convenience stores, and retail outlets. With a focus on security, innovation, and support, DTiQ aims to help businesses run smarter and more efficiently.
Floneum
Floneum is a versatile AI-powered tool designed for language tasks. It allows users to build workflows using large language models through a user-friendly drag and drop interface. With the ability to securely extend Floneum using WebAssembly plugins, users can write plugins in various languages like Rust, C, Java, or Go. The tool offers 41 built-in plugins for tasks such as text generation, search engine operations, file manipulation, Python script execution, browser automation, and more. Floneum empowers users to automate language-related tasks efficiently and effectively.
Questflow
Questflow is a decentralized AI agent economy platform that allows users to orchestrate multiple AI agents to gather insights, take action, and earn rewards autonomously. It offers a user-friendly dashboard, visual reports, smart keyword generator, content evaluation, SEO goal setting, automated alerts, actionable SEO tips, and link optimization wizard. The platform aims to automate repetitive tasks for knowledge workers in a private and safety-first approach, providing a co-pilot for work. Users can dispatch tasks to AI agents in groups and incentivize them for completing tasks. Questflow also rewards AI agent creators and supporters through revenue sharing and staking incentives, fostering a community-driven ecosystem.
Claude
Claude is a family of large language models developed by Anthropic. It is capable of generating human-like text, translating languages, answering questions, and writing different kinds of creative content.
AirDroid
AirDroid is an AI-powered device management solution that offers both business and personal services. It provides features such as remote support, file transfer, application management, and AI-powered insights. The application aims to streamline IT resources, reduce costs, and increase efficiency for businesses, while also offering personal management solutions for private mobile devices. AirDroid is designed to empower businesses with intelligent AI assistance and enhance user experience through seamless multi-screen interactions.
ReadWeb.ai
ReadWeb.ai is a free web-based tool that provides instant multi-language translation of web pages. It allows users to translate any webpage into up to 10 different languages with just one click. ReadWeb.ai also offers a unique bilingual reading experience, allowing users to view translations in an easy-to-understand, top-and-bottom format. This makes it an ideal tool for language learners, researchers, and anyone who needs to access information from websites in different languages.
Flamel.ai
Flamel.ai is a social media management platform designed specifically for multi-location brands and franchises. It enables businesses to manage social media content and engagement across all of their locations from a centralized platform. Flamel.ai uses AI to help businesses create personalized and engaging content that resonates with local audiences. The platform also provides analytics and insights to help businesses track their social media performance and identify areas for improvement.
CX Genie
CX Genie is an AI-powered customer support platform that offers a comprehensive solution for businesses seeking to enhance their customer support operations. It leverages advanced AI technology to deliver smarter, faster, and more accurate customer support, including AI chatbots, help desk management, workflow automation, product data synchronization, and ticket management. With features like seamless integrations, customizable plans, and a user-friendly interface, CX Genie aims to provide outstanding customer support and drive business growth.
Wave.video
Wave.video is an online video editor and hosting platform that allows users to create, edit, and host videos. It offers a wide range of features, including a live streaming studio, video recorder, stock library, and video hosting. Wave.video is easy to use and affordable, making it a great option for businesses and individuals who need to create high-quality videos.
Qwen
Qwen is an AI tool that focuses on developing and releasing various language models, including dense models, coding models, mathematical models, and vision language models. The Qwen family offers open-source models with different parameter ranges to cater to various user needs, such as production use, mobile applications, coding assistance, mathematical problem-solving, and visual understanding of images and videos. Qwen aims to enhance intelligence and provide smarter and more knowledgeable models for developers and users.
Galleri
Galleri is a multi-cancer early detection test that uses a single blood draw to screen for over 50 types of cancer. It is recommended for adults aged 50 or older who are at an elevated risk for cancer. Galleri is not a diagnostic test and does not detect all cancers. A positive result requires confirmatory diagnostic evaluation by medically established procedures (e.g., imaging) to confirm cancer.
Omni Engage
Omni Engage is a powerful omnichannel communications software designed to help businesses create meaningful and personalized interactions with their customers. It allows businesses to connect with their audience across multiple channels, including email, social media, and voice, and deliver a consistent and memorable experience for every customer. Omni Engage simplifies customer engagement with its Unified Inbox, which enables agents to handle requests from all channels seamlessly and efficiently. It also offers AI automation with Omni Automate, which streamlines customer interactions by automating routine inquiries and providing rapid response times. With its robust reporting and analytics capabilities, Omni Engage empowers supervisors to measure engagement and performance across all channels, identify areas for improvement, and drive success.
VIVA.ai
VIVA is an AI-powered creative visual design platform that aims to bring every moment to life. It provides users with tools and features to create visually appealing designs effortlessly. With VIVA, users can unleash their creativity and design stunning visuals for various purposes such as social media posts, presentations, and marketing materials. The platform leverages artificial intelligence to streamline the design process and help users achieve professional-looking results without the need for advanced design skills.
20 - Open Source AI Tools
AnglE
AnglE is a library for training state-of-the-art BERT/LLM-based sentence embeddings with just a few lines of code. It also serves as a general sentence embedding inference framework, allowing inference with a variety of transformer-based sentence embedding models. The library supports various loss functions such as AnglE loss, Contrastive loss, CoSENT loss, and Espresso loss. It provides backbones like BERT-based models, LLM-based models, and bi-directional LLM-based models for training on single-GPU or multi-GPU setups. AnglE has achieved significant performance on various benchmarks and offers official pretrained models for both BERT-based and LLM-based backbones.
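A minimal inference sketch, assuming the angle_emb package's AnglE.from_pretrained / encode interface; the checkpoint name is illustrative:

```python
# Sentence-embedding inference sketch (assumes angle_emb's AnglE.from_pretrained
# and encode methods; the checkpoint name is illustrative).
from angle_emb import AnglE

angle = AnglE.from_pretrained('WhereIsAI/UAE-Large-V1', pooling_strategy='cls')
vectors = angle.encode(['multi-GPU training', 'distributed deep learning'], to_numpy=True)
print(vectors.shape)  # (2, embedding_dim)
```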
x-lstm
This repository contains an unofficial implementation of the xLSTM model introduced in Beck et al. (2024). It serves as a didactic tool that explains the details of a modern Long Short-Term Memory model with competitive performance against Transformers and state-space models. The repository includes modules for scalar-LSTM and matrix-LSTM, as well as a basic xLSTM LLM built with PyTorch Lightning for easy training on multiple GPUs.
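The general PyTorch Lightning multi-GPU (DDP) pattern that such a Lightning-based LLM builds on looks roughly like the sketch below; the TinyLM module is a stand-in for illustration, not the repository's actual xLSTM classes:

```python
# Generic Lightning DDP training pattern; TinyLM is an invented toy module,
# not the repository's xLSTM implementation.
import torch
import torch.nn as nn
import lightning as L


class TinyLM(L.LightningModule):
    def __init__(self, vocab_size=256, dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, vocab_size)

    def training_step(self, batch, batch_idx):
        tokens = batch
        logits = self.head(self.embed(tokens[:, :-1]))
        loss = nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1)
        )
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=3e-4)


if __name__ == "__main__":
    data = torch.randint(0, 256, (1024, 65))  # toy token sequences
    loader = torch.utils.data.DataLoader(data, batch_size=32)
    # Lightning handles process launching, gradient sync, and sharded sampling;
    # devices=4 assumes a machine with four GPUs.
    trainer = L.Trainer(accelerator="gpu", devices=4, strategy="ddp", max_epochs=1)
    trainer.fit(TinyLM(), loader)
```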
SLAM-LLM
SLAM-LLM is a deep learning toolkit for training custom multimodal large language models (MLLM) focused on speech, language, audio, and music processing. It provides detailed recipes for training and high-performance checkpoints for inference. The toolkit supports various tasks such as automatic speech recognition (ASR), text-to-speech (TTS), visual speech recognition (VSR), automated audio captioning (AAC), spatial audio understanding, and music captioning (MC). Users can easily extend it to new models and tasks, use mixed precision training for faster training with less GPU memory, and perform multi-GPU training with data and model parallelism. Configuration is flexible, based on Hydra and dataclasses, allowing different configuration methods.
SLAM-LLM
SLAM-LLM is a deep learning toolkit designed for researchers and developers to train custom multimodal large language models (MLLM) focused on speech, language, audio, and music processing. It provides detailed recipes for training and high-performance checkpoints for inference. The toolkit supports tasks such as automatic speech recognition (ASR), text-to-speech (TTS), visual speech recognition (VSR), automated audio captioning (AAC), spatial audio understanding, and music captioning (MC). SLAM-LLM features easy extension to new models and tasks, mixed precision training for faster training with less GPU memory, multi-GPU training with data and model parallelism, and flexible configuration based on Hydra and dataclasses.
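As an illustration of the Hydra-plus-dataclass configuration style the toolkit relies on, a generic structured-config script looks like this; the TrainConfig fields are invented for the example and are not SLAM-LLM's actual schema:

```python
# Generic Hydra structured-config pattern; the fields below are placeholders,
# not SLAM-LLM's real configuration schema.
from dataclasses import dataclass

import hydra
from hydra.core.config_store import ConfigStore
from omegaconf import DictConfig, OmegaConf


@dataclass
class TrainConfig:
    task: str = "asr"
    batch_size: int = 8
    mixed_precision: bool = True
    num_gpus: int = 4


cs = ConfigStore.instance()
cs.store(name="train_config", node=TrainConfig)


@hydra.main(version_base=None, config_name="train_config")
def main(cfg: DictConfig) -> None:
    # Every field can be overridden from the CLI, e.g.:
    #   python train.py task=aac num_gpus=8
    print(OmegaConf.to_yaml(cfg))


if __name__ == "__main__":
    main()
```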
fairseq
Fairseq is a sequence modeling toolkit that enables researchers and developers to train custom models for translation, summarization, language modeling, and other text generation tasks. It provides reference implementations of various sequence modeling papers covering CNN, LSTM networks, Transformer networks, LightConv, DynamicConv models, Non-autoregressive Transformers, Finetuning, and more. The toolkit supports multi-GPU training, fast generation on CPU and GPU, mixed precision training, extensibility, flexible configuration based on Hydra, and full parameter and optimizer state sharding. Pre-trained models are available for translation and language modeling with a torch.hub interface. Fairseq also offers pre-trained models and examples for tasks like XLS-R, cross-lingual retrieval, wav2vec 2.0, unsupervised quality estimation, and more.
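A usage sketch following the torch.hub pattern in fairseq's documentation; downloading the pretrained WMT'19 model requires network access, and the tokenizer/BPE arguments assume the moses and fastBPE extras are installed:

```python
# torch.hub usage sketch based on fairseq's documented pattern; model download
# happens on first call and requires network access.
import torch

en2de = torch.hub.load(
    'pytorch/fairseq',
    'transformer.wmt19.en-de.single_model',
    tokenizer='moses',
    bpe='fastbpe',
)
en2de.eval()
print(en2de.translate('Machine learning is fun!'))
```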
Liger-Kernel
Liger Kernel is a collection of Triton kernels designed for LLM training, increasing training throughput by 20% and reducing memory usage by 60%. It includes Hugging Face Compatible modules like RMSNorm, RoPE, SwiGLU, CrossEntropy, and FusedLinearCrossEntropy. The tool works with Flash Attention, PyTorch FSDP, and Microsoft DeepSpeed, aiming to enhance model efficiency and performance for researchers, ML practitioners, and curious novices.
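A sketch of the monkey-patching entry point described in the project's README; the function name and checkpoint are assumptions and may differ across versions:

```python
# Assumed Liger Kernel patching entry point for Hugging Face Llama models;
# the checkpoint is illustrative and may require gated access.
import transformers
from liger_kernel.transformers import apply_liger_kernel_to_llama

# Swap RMSNorm, RoPE, SwiGLU, and CrossEntropy for the fused Triton kernels
# before the model is instantiated.
apply_liger_kernel_to_llama()

model = transformers.AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B"  # illustrative checkpoint name
)
```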
llm.c
LLM training in simple, pure C/CUDA. There is no need for 245MB of PyTorch or 107MB of cPython. For example, training GPT-2 (CPU, fp32) is ~1,000 lines of clean code in a single file. It compiles and runs instantly, and exactly matches the PyTorch reference implementation. The author chose GPT-2 as the first working example because it is the grand-daddy of LLMs, the first time the modern stack was put together.
swift
SWIFT (Scalable lightWeight Infrastructure for Fine-Tuning) supports training, inference, evaluation, and deployment of nearly 200 LLMs and MLLMs (multimodal large models). Developers can apply the framework directly to their own research and production environments to realize the complete workflow from model training and evaluation to application. In addition to the lightweight training solutions provided by PEFT (https://github.com/huggingface/peft), it provides a complete Adapters library supporting the latest training techniques such as NEFTune, LoRA+, and LLaMA-PRO; this adapter library can be used directly in custom workflows without SWIFT's training scripts. To help users unfamiliar with deep learning, SWIFT provides a Gradio web UI for controlling training and inference, along with accompanying deep learning courses and best practices for beginners. It is also expanding to other modalities and currently supports full-parameter training and LoRA training for AnimateDiff.
ColossalAI
Colossal-AI is a deep learning system for large-scale parallel training. It provides a unified interface to scale sequential code of model training to distributed environments. Colossal-AI supports parallel training methods such as data, pipeline, tensor, and sequence parallelism and is integrated with heterogeneous training and zero redundancy optimizer.
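A hedged sketch of Colossal-AI's booster-style setup; the exact launch and plugin signatures vary between releases, so treat this as an outline of the pattern rather than a definitive implementation:

```python
# Outline of the Colossal-AI booster pattern; exact signatures differ between
# releases, so this is a sketch rather than the definitive API.
import torch
import torch.nn as nn

import colossalai
from colossalai.booster import Booster
from colossalai.booster.plugin import TorchDDPPlugin

colossalai.launch_from_torch()  # expects torchrun-style environment variables

model = nn.Linear(512, 512)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

# The booster wraps plain PyTorch objects with the chosen parallelism plugin.
booster = Booster(plugin=TorchDDPPlugin())
model, optimizer, criterion, _, _ = booster.boost(model, optimizer, criterion)
```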
llm-finetuning
llm-finetuning is a repository that provides a serverless twist to the popular axolotl fine-tuning library using Modal's serverless infrastructure. It allows users to quickly fine-tune any LLM model with state-of-the-art optimizations like Deepspeed ZeRO, LoRA adapters, Flash attention, and Gradient checkpointing. The repository simplifies the fine-tuning process by not exposing all CLI arguments, instead allowing users to specify options in a config file. It supports efficient training and scaling across multiple GPUs, making it suitable for production-ready fine-tuning jobs.
DB-GPT-Hub
DB-GPT-Hub is an experimental project leveraging Large Language Models (LLMs) for Text-to-SQL parsing. It includes stages like data collection, preprocessing, model selection, construction, and fine-tuning of model weights. The project aims to enhance Text-to-SQL capabilities, reduce model training costs, and enable developers to contribute to improving Text-to-SQL accuracy. The ultimate goal is to achieve automated question-answering based on databases, allowing users to execute complex database queries using natural language descriptions. The project has successfully integrated multiple large models and established a comprehensive workflow for data processing, SFT model training, prediction output, and evaluation.
RAG-Retrieval
RAG-Retrieval provides full-chain RAG retrieval fine-tuning and inference code. It supports fine-tuning any open-source RAG retrieval model, including embedding (vector) models, late-interaction models such as ColBERT, and interactive cross-encoder models. For inference, RAG-Retrieval focuses on ranking (reranking) and provides a lightweight Python library, rag-retrieval, offering a unified way to call different RAG ranking models.
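A hypothetical usage sketch of the rag-retrieval ranking wrapper; the Reranker class name, model identifier, and compute_score signature are assumptions and may not match the library exactly:

```python
# Hypothetical reranking call; class name, checkpoint, and method signature are
# assumptions, not confirmed rag-retrieval API.
from rag_retrieval import Reranker

ranker = Reranker("BAAI/bge-reranker-base")  # illustrative cross-encoder checkpoint
scores = ranker.compute_score([
    ["what is multi-GPU training?", "Data parallelism replicates the model on each GPU."],
    ["what is multi-GPU training?", "The Galleri test screens for many cancer types."],
])
print(scores)  # higher score = more relevant passage
```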
Qwen
Qwen is a series of large language models developed by Alibaba DAMO Academy. Qwen models outperform baseline models of similar size on a series of benchmark datasets, e.g., MMLU, C-Eval, GSM8K, MATH, HumanEval, MBPP, BBH, etc., which evaluate the models' capabilities in natural language understanding, mathematical problem solving, coding, and more. Qwen-72B achieves better performance than LLaMA2-70B on all tasks and outperforms GPT-3.5 on 7 out of 10 tasks.
EasyLM
EasyLM is a one-stop solution for pre-training, fine-tuning, evaluating, and serving large language models in JAX/Flax. It simplifies the process by leveraging JAX's pjit functionality to scale up training to multiple TPU/GPU accelerators. Built on top of Huggingface's transformers and datasets, EasyLM offers an easy-to-use and customizable codebase for training large language models without the complexity found in other frameworks. It supports sharding model weights and training data across multiple accelerators, enabling multi-TPU/GPU training on a single host or across multiple hosts on Google Cloud TPU Pods. EasyLM currently supports models like LLaMA, LLaMA 2, and LLaMA 3.
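A generic JAX data-parallel step (pmap plus pmean) illustrating the kind of multi-accelerator training EasyLM automates with pjit; this is not EasyLM's own API:

```python
# Generic JAX data-parallel training step; illustrates the technique, not
# EasyLM's actual interface.
import functools

import jax
import jax.numpy as jnp


def loss_fn(params, batch):
    preds = batch["x"] @ params
    return jnp.mean((preds - batch["y"]) ** 2)


@functools.partial(jax.pmap, axis_name="devices")
def train_step(params, batch):
    grads = jax.grad(loss_fn)(params, batch)
    grads = jax.lax.pmean(grads, axis_name="devices")  # all-reduce across devices
    return params - 0.01 * grads


n = jax.local_device_count()
params = jnp.zeros((8, 1))
batch = {"x": jnp.ones((n, 16, 8)), "y": jnp.ones((n, 16, 1))}
params = train_step(jnp.stack([params] * n), batch)  # replicate, then step
```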
friendly-stable-audio-tools
This repository is a refactored and updated version of `stable-audio-tools`, an open-source code for audio/music generative models originally by Stability AI. It contains refactored codes for improved readability and usability, useful scripts for evaluating and playing with trained models, and instructions on how to train models such as `Stable Audio 2.0`. The repository does not contain any pretrained checkpoints. Requirements include PyTorch 2.0 or later for Flash Attention support and Python 3.8.10 or later for development. The repository provides guidance on installing, building a training environment using Docker or Singularity, logging with Weights & Biases, training configurations, and stages for VAE-GAN and Diffusion Transformer (DiT) training.
END-TO-END-GENERATIVE-AI-PROJECTS
The 'END TO END GENERATIVE AI PROJECTS' repository is a collection of awesome industry projects utilizing Large Language Models (LLM) for various tasks such as chat applications with PDFs, image to speech generation, video transcribing and summarizing, resume tracking, text to SQL conversion, invoice extraction, medical chatbot, financial stock analysis, and more. The projects showcase the deployment of LLM models like Google Gemini Pro, HuggingFace Models, OpenAI GPT, and technologies such as Langchain, Streamlit, LLaMA2, LLaMAindex, and more. The repository aims to provide end-to-end solutions for different AI applications.
glake
GLake is an acceleration library and utilities designed to optimize GPU memory management and IO transmission for AI large model training and inference. It addresses challenges such as GPU memory bottleneck and IO transmission bottleneck by providing efficient memory pooling, sharing, and tiering, as well as multi-path acceleration for CPU-GPU transmission. GLake is easy to use, open for extension, and focuses on improving training throughput, saving inference memory, and accelerating IO transmission. It offers features like memory fragmentation reduction, memory deduplication, and built-in security mechanisms for troubleshooting GPU memory issues.
LLMSys-PaperList
This repository provides a comprehensive list of academic papers, articles, tutorials, slides, and projects related to Large Language Model (LLM) systems. It covers various aspects of LLM research, including pre-training, serving, system efficiency optimization, multi-model systems, image generation systems, LLM applications in systems, ML systems, survey papers, LLM benchmarks and leaderboards, and other relevant resources. The repository is regularly updated to include the latest developments in this rapidly evolving field, making it a valuable resource for researchers, practitioners, and anyone interested in staying abreast of the advancements in LLM technology.
NeMo
NeMo Framework is a generative AI framework built for researchers and PyTorch developers working on large language models (LLMs), multimodal models (MM), automatic speech recognition (ASR), and text-to-speech synthesis (TTS). The primary objective of NeMo is to provide a scalable framework for researchers and developers from industry and academia to more easily implement and design new generative AI models by leveraging existing code and pretrained models.
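An inference sketch following the pattern in NeMo's ASR quickstart; the pretrained model name and transcribe call are taken from that documentation style and may differ by release:

```python
# ASR inference sketch modeled on NeMo's quickstart examples; model name and
# transcribe signature may vary across NeMo releases.
import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained("QuartzNet15x5Base-En")
print(asr_model.transcribe(["path/to/audio.wav"]))  # placeholder audio path
```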
litdata
LitData is a tool designed for blazingly fast, distributed streaming of training data from any cloud storage. It allows users to transform and optimize data in cloud storage environments efficiently and intuitively, supporting various data types like images, text, video, audio, geo-spatial, and multimodal data. LitData integrates smoothly with frameworks such as LitGPT and PyTorch, enabling seamless streaming of data to multiple machines. Key features include multi-GPU/multi-node support, easy data mixing, pause & resume functionality, support for profiling, memory footprint reduction, cache size configuration, and on-prem optimizations. The tool also provides benchmarks for measuring streaming speed and conversion efficiency, along with runnable templates for different data types. LitData enables infinite cloud data processing by utilizing the Lightning.ai platform to scale data processing with optimized machines.
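A streaming sketch assuming litdata's StreamingDataset / StreamingDataLoader interface; the S3 URI is a placeholder for a dataset already optimized with litdata:

```python
# Streaming-data sketch; assumes litdata's StreamingDataset/StreamingDataLoader
# classes and a dataset already optimized with litdata at the placeholder URI.
import litdata as ld

dataset = ld.StreamingDataset("s3://my-bucket/optimized-dataset")  # placeholder URI
loader = ld.StreamingDataLoader(dataset, batch_size=64, num_workers=4)

for batch in loader:
    # Each rank/node streams only its own shard, so the same code scales from
    # a single GPU to multi-node jobs.
    pass
```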
20 - OpenAI GPTs
Tango Multi-Agent Wizard
I'm Tango, your go-to for simulating dialogues with any persona, entity, style, or expertise.
OE Buddy
Assistant for multi-job remote workers, aiding in task management and communication.
Duesentrieb x100
Multi-algorithmic mastermind who innovates technology solutions and optimizes product design. And it is a duck. // Carefully test any generated solutions.
Multiple Personas v2.0.1
A Multi-Agent Multi-Tasking Assistant. Seamlessly switches personas with different skills and backgrounds to tackle complex tasks. Powered by Mr Persona.
MULTITASKER GPT-4 (Turbo)
Advanced multi-tasking GPT with real-time data management, image generation, and document editing.
Dr. Watt's Energy Insight Lab
Energy Insights Lab is a multi-disciplinary team of dedicated professionals advising on energy markets, technologies, and decarbonization.
Art Authenticator Guide
Advanced artwork authenticator with unrestricted, multi-functional abilities.
Video Insights: Summaries/Transcription/Vision
Chat with any video or audio. High-quality search, summarization, insights, multi-language transcriptions, and more. We currently support YouTube and files uploaded on our website.
Website Builder [Multipage & High Quality]
👋 I'm Wegic, the AI web designer & developer by your side! I can help you quickly create and launch a multi-page website!