Best AI Tools for "Load Model"
18 - AI Tool Sites
Avaturn
Avaturn is a realistic 3D avatar creator that uses generative AI to turn a 2D photo into a recognizable and realistic 3D avatar. With extensive customization options, you can create a unique look for every avatar. Export your avatar as a 3D model and load it in Blender, Unity, Unreal Engine, Maya, Cinema4D, or any other 3D environment. The avatars come with a standard humanoid body rig, ARKit blendshapes, and visemes. They are compatible with Mixamo animations and VTubing software.
Milo
Milo is an AI-powered co-pilot for parents, designed to help them manage the chaos of family life. It uses GPT-4, a large language model, to sort and organize information, send reminders, and provide updates. Milo is designed to be accurate and solve complex problems, and it learns and improves based on user feedback. It can be used to manage tasks such as adding items to a grocery list, getting updates on the week's schedule, and sending screenshots of birthday invitations.
VoiceGPT
VoiceGPT is an Android app that provides a voice-based interface to interact with AI language models like ChatGPT, Bing AI, and Bard. It offers features such as unlimited free messages, voice input and output in 67+ languages, a floating bubble for easy switching between apps, OCR text recognition, code execution, image generation with DALL-E 2, and support for ChatGPT Plus accounts. VoiceGPT is designed to be accessible for users with visual impairments, dyslexia, or other conditions, and it can be set as the default assistant to be activated hands-free with a custom hotword.
Kolank
Kolank is an AI tool that provides a unified API for accessing a wide range of Large Language Models (LLMs) and providers. It offers features such as model comparison based on price, latency, output, context, and throughput, an OpenAI-compatible API, transparent tracking of API calls and token expenditure, cost reduction by paying for performance, load balancing with fallbacks, and easy integration with preferred LLMs using Python, JavaScript, and curl.
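The load-balancing-with-fallbacks pattern that a gateway like Kolank automates can be sketched in plain Python. The provider functions below are hypothetical stand-ins for real LLM calls, not Kolank's API:

```python
# Sketch of provider fallback: try each backend in order until one succeeds.
# flaky_provider and stable_provider are hypothetical stand-ins for real calls.

def call_with_fallback(providers, prompt):
    """Try each (name, callable) pair in turn; return the first successful reply."""
    errors = []
    for name, provider in providers:
        try:
            return name, provider(prompt)
        except Exception as exc:  # a real gateway would distinguish error types
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")

def flaky_provider(prompt):
    raise TimeoutError("provider unavailable")

def stable_provider(prompt):
    return f"echo: {prompt}"

name, reply = call_with_fallback(
    [("primary", flaky_provider), ("backup", stable_provider)], "hello"
)
print(name, reply)  # backup echo: hello
```

A production gateway would add retries, latency-aware ordering, and per-provider cost accounting on top of this skeleton.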
Ermine.ai
Ermine.ai is an AI tool that provides local audio recording and transcription, running entirely in the browser. Users can easily transcribe audio into text using this tool. The application currently supports the Chrome browser, with Firefox support in progress. The browser must load and initialize the transcription model, which may take a few minutes on first use. The tool is designed for fast transcription and currently supports English only.
Vendorful
Vendorful is an AI-powered tool designed to assist users in responding to Requests for Proposals (RFPs), Requests for Information (RFIs), and Security Questionnaires. The tool leverages artificial intelligence to generate draft responses with better-than-human accuracy based on the user's content. Vendorful aims to streamline the RFP response process, save time, and increase the chances of winning deals by providing contextual smarts and flexible answers.
WebPilot
WebPilot is an AI tool designed to enhance your GPTs by enabling them to perform various tasks such as opening URL/file links, using multiple search engines, accessing all types of websites, loading dynamic web content, and providing enhanced answers. It offers a super easy way to interact with webpages, assisting in tasks like responding to emails, writing in forms, and solving quizzes. WebPilot is free, open-source, and has been featured by Google Extension Store as an established publisher.
Parade
Parade is a capacity management platform designed for freight brokerages and 3PLs to streamline operations, automate bookings, and improve margins. The platform leverages advanced AI to optimize pricing, bidding, and carrier management, helping users book more loads efficiently. Parade integrates seamlessly with existing tech stacks, offering precise pricing, optimized bidding, and enhanced shipper connectivity. The platform boasts a range of features and benefits aimed at increasing efficiency, reducing costs, and boosting margins for freight businesses.
SwapFans
The website offers an AI-powered tool called SwapFans that allows users to load an account balance and receive discounts. Users can face-swap any social media video and swap entire Instagram and TikTok accounts with high-speed FaceSwap AI. The tool is designed to help users manage their social media presence effectively and efficiently.
PixieBrix
PixieBrix is an AI engagement platform that allows users to build, deploy, and manage internal AI tools to drive team productivity. It unifies AI landscapes with oversight and governance for enterprise scale. The platform is enterprise-ready and fully customizable to meet unique needs, and can be deployed on any site, making it easy to integrate into existing systems. PixieBrix leverages the power of AI and automation to harness the latest technology to streamline workflows and take productivity to new heights.
TLDRai
TLDRai.com is an AI tool designed to help users summarize any text into concise and easy-to-digest content, enabling them to free themselves from information overload. The tool utilizes AI technology to provide efficient text summarization services, making it a valuable resource for individuals seeking quick and accurate summaries of lengthy texts.
promptsplitter.com
The website promptsplitter.com is experiencing an Argo Tunnel error on the Cloudflare network. Users encountering this error are advised to wait a few minutes if they are visitors, or ensure that cloudflared is running and can reach the network if they are the website owners. The error message provides guidance on troubleshooting steps to resolve the issue.
Merlin AI
Merlin AI is a YouTube transcript tool that allows users to create summaries of YouTube videos. It is easy to use and can be added to Chrome as an extension. Merlin AI is powered by an undocumented API and features the latest build.
Daxtra
Daxtra is an AI-powered recruitment technology tool designed to help staffing and recruiting professionals find, parse, match, and engage the best candidates quickly and efficiently. The tool offers a suite of products that seamlessly integrate with existing ATS or CRM systems, automating various recruitment processes such as candidate data loading, CV/resume formatting, information extraction, and job matching. Daxtra's solutions cater to corporates, vendors, job boards, and social media partners, providing a comprehensive set of developer components to enhance recruitment workflows.
Lex Fridman
This entry points to the website of Lex Fridman, a Research Scientist at MIT focusing on human-robot interaction and machine learning. The site offers resources such as podcasts, research publications, and studies related to AI-assisted driving data collection, autonomous vehicle systems, gaze estimation, and cognitive load estimation. It aims to provide insights into safe and enjoyable interaction between humans and AI in driving scenarios.
LiteLLM
LiteLLM is an AI tool that offers a unified API for providers such as Azure OpenAI, Vertex AI, and Bedrock. It provides a proxy server for managing authentication, load balancing, and spend tracking across a wide range of LLMs. LiteLLM is designed to simplify the integration and management of various AI services through the OpenAI format. With features like cloud deployment, open-source availability, and extensive provider integrations, LiteLLM aims to streamline AI development workflows and enhance operational efficiency.
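One way a unified API can dispatch requests across many providers is by routing on a `provider/model` prefix in the model name. This is an illustrative sketch of that idea, not LiteLLM's actual internals:

```python
# Illustrative sketch (not LiteLLM's real routing code): split a model string
# of the form "provider/model" and fall back to a default provider otherwise.

def parse_model(model: str, default_provider: str = "openai"):
    """Split 'provider/model' into (provider, model) at the first slash."""
    if "/" in model:
        provider, _, name = model.partition("/")
        return provider, name
    return default_provider, model

print(parse_model("azure/gpt-4o"))       # ('azure', 'gpt-4o')
print(parse_model("gpt-3.5-turbo"))      # ('openai', 'gpt-3.5-turbo')
```

Keeping the request and response shapes in the OpenAI format means only this dispatch step, not the calling code, has to know which backend serves a given model.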
Epicflow
Epicflow is an AI-based multi-project and resource management software designed to help organizations deliver more projects on time with available resources, increase profitability, and make informed project decisions using real-time data and predictive analytics. The software bridges demand and supply by matching talent based on competencies, experience, and availability. It offers features like AI assistant, What-If Analysis, Future Load Graph, Historical Load Graph, Task List, and Competence Management Pipeline. Epicflow is trusted by leading companies in various industries for high performance and flawless project delivery.
Knowbo
Knowbo is a custom chatbot tool that allows users to create a chatbot for their website in just 2 minutes. The chatbot learns directly from the website or documentation, providing up-to-date information to users. With features like easy deployment, chat history tracking, and customization options, Knowbo aims to revolutionize customer experience by reducing the load on support teams and offering a seamless way for users to get their questions answered quickly.
20 - Open Source AI Tools
ServerlessLLM
ServerlessLLM is a fast, affordable, and easy-to-use library designed for multi-LLM serving, optimized for environments with limited GPU resources. It supports various leading LLM inference libraries, achieves fast model load times, and reduces model-switching overhead. The library facilitates easy deployment via Ray Cluster and Kubernetes, integrates with the OpenAI Query API, and is actively maintained by contributors.
expo-stable-diffusion
The `expo-stable-diffusion` repository provides a tool for generating images using Stable Diffusion natively on iOS devices within Expo and React Native apps. Users can install and configure the module to create images based on prompts. The repository includes information on updating iOS deployment targets, enabling increased memory limits, and building iOS apps. Additionally, users can obtain Stable Diffusion models from various sources. The repository also addresses troubleshooting tips related to model load times and image generation durations. The developer seeks sponsorship to further enhance the project, including adding Android support.
AITranslator
AITranslator is a software tool with a user-friendly graphical interface that uses a large language model to translate text extracted from images and exported by MTool. Users start TGW to load the model, open the software, and select the text to be translated. The tool aims to simplify the translation process by leveraging advanced language processing capabilities.
model.nvim
model.nvim is a tool designed for Neovim users who want to use AI models for completions or chat within their text editor. It allows users to build prompts programmatically with Lua, customize prompts, experiment with multiple providers, and use both hosted and local models. The tool supports provider agnosticism, programmatic prompts in Lua, async and multi-step prompts, streaming completions, and chat functionality in an 'mchat' filetype buffer. Users can customize prompts, manage responses and context, and use providers like OpenAI ChatGPT, Google PaLM, llama.cpp, ollama, and more. The tool also supports treesitter highlights and folds for chat buffers.
wllama
Wllama is a WebAssembly binding for llama.cpp, a high-performance and lightweight language model library. It enables you to run inference directly in the browser without the need for a backend or GPU. Wllama provides both high-level and low-level APIs, allowing you to perform tasks such as completions, embeddings, tokenization, and more. It also supports model splitting, enabling you to load large models in parallel for faster download. With TypeScript support and a pre-built npm package, Wllama is easy to integrate into React TypeScript projects.
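The split-model idea — fetch shards concurrently, then reassemble them in order — can be sketched language-agnostically. Below is a Python sketch (Wllama itself is TypeScript); `fetch_shard` is a hypothetical stand-in for a real shard download:

```python
# Sketch of split-model loading: fetch shards concurrently, join in order.
# fetch_shard is a stand-in; a real loader would download GGUF shard files.
from concurrent.futures import ThreadPoolExecutor

def fetch_shard(index: int) -> bytes:
    # stand-in for an HTTP GET of shard `index`; returns 4 marker bytes
    return bytes([index]) * 4

def load_split_model(num_shards: int) -> bytes:
    with ThreadPoolExecutor(max_workers=num_shards) as pool:
        shards = list(pool.map(fetch_shard, range(num_shards)))
    return b"".join(shards)  # pool.map preserves input order

blob = load_split_model(3)
print(len(blob))  # 12
```

Because the shards download in parallel but are joined in submission order, the reassembled blob is identical to a sequential download while hiding per-shard latency.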
catalyst
Catalyst is a C# Natural Language Processing library designed for speed, inspired by spaCy's design. It provides pre-trained models, support for training word and document embeddings, and flexible entity recognition models. The library is fast, modern, and pure-C#, supporting .NET standard 2.0. It is cross-platform, running on Windows, Linux, macOS, and ARM. Catalyst offers non-destructive tokenization, named entity recognition, part-of-speech tagging, language detection, and efficient binary serialization. It includes pre-built models for language packages and lemmatization. Users can store and load models using streams. Getting started with Catalyst involves installing its NuGet Package and setting the storage to use the online repository. The library supports lazy loading of models from disk or online. Users can take advantage of C# lazy evaluation and native multi-threading support to process documents in parallel. Training a new FastText word2vec embedding model is straightforward, and Catalyst also provides algorithms for fast embedding search and dimensionality reduction.
autoscraper
AutoScraper is a smart, automatic, fast, and lightweight web scraping tool for Python. It simplifies the process of web scraping by learning scraping rules based on sample data provided by the user. The tool can extract text, URLs, or HTML tag values from web pages and return similar elements. Users can utilize the learned object to scrape similar content or exact elements from new pages. AutoScraper is compatible with Python 3 and offers easy installation from various sources. It provides functionalities for fetching similar and exact results from web pages, such as extracting post titles from Stack Overflow or live stock prices from Yahoo Finance. The tool allows customization with custom requests module parameters like proxies or headers. Users can save and load models for future use and explore advanced usages through tutorials and examples.
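The "learn a rule from sample data, then reuse it on similar pages" idea can be illustrated with a tiny stdlib-only sketch. This is a conceptual toy, not AutoScraper's real implementation, which learns far richer rules than a single tag name:

```python
# Conceptual sketch of rule learning: find which tag contains the sample
# value, then pull every text under that tag from a similar page.
from html.parser import HTMLParser

class TagTextCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack = []
        self.texts = []  # list of (enclosing tag, text)

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()

    def handle_data(self, data):
        text = data.strip()
        if text and self.stack:
            self.texts.append((self.stack[-1], text))

def collect(html):
    parser = TagTextCollector()
    parser.feed(html)
    return parser.texts

def learn_rule(html, wanted):
    """Return the tag that encloses the sample value."""
    for tag, text in collect(html):
        if text == wanted:
            return tag
    return None

def apply_rule(html, tag):
    return [text for t, text in collect(html) if t == tag]

sample = "<ul><li>First post</li><li>Second post</li></ul><p>footer</p>"
rule = learn_rule(sample, "First post")   # learns 'li'
print(apply_rule(sample, rule))           # ['First post', 'Second post']
```

AutoScraper itself generalizes this by matching element paths and attributes, so learned rules transfer to structurally similar pages, not just the training page.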
keras-llm-robot
The Keras-llm-robot Web UI project is an open-source tool designed for offline deployment and testing of various open-source models from the Hugging Face website. It allows users to combine multiple models through configuration to achieve functionalities like multimodal, RAG, Agent, and more. The project consists of three main interfaces: chat interface for language models, configuration interface for loading models, and tools & agent interface for auxiliary models. Users can interact with the language model through text, voice, and image inputs, and the tool supports features like model loading, quantization, fine-tuning, role-playing, code interpretation, speech recognition, image recognition, network search engine, and function calling.
spandrel
Spandrel is a library for loading and running pre-trained PyTorch models. It automatically detects the model architecture and hyperparameters from model files, and provides a unified interface for running models.
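Automatic architecture detection usually works by looking for tell-tale parameter names in a checkpoint's state dict. The sketch below illustrates that idea; the signature strings and architecture names are hypothetical examples, not Spandrel's actual detection tables:

```python
# Illustrative sketch (not spandrel's real detection code): guess an
# architecture from characteristic keys in a checkpoint's state dict.

ARCH_SIGNATURES = {  # hypothetical key fragments per architecture
    "ESRGAN": "model.0.weight",
    "SwinIR": "layers.0.residual_group",
}

def detect_arch(state_dict_keys):
    for arch, signature in ARCH_SIGNATURES.items():
        if any(signature in key for key in state_dict_keys):
            return arch
    return "unknown"

print(detect_arch(["model.0.weight", "model.0.bias"]))  # ESRGAN
```

Once the architecture is identified, hyperparameters such as layer counts and channel widths can likewise be read back from the shapes of the detected tensors.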
lmstudio.js
lmstudio.js is a pre-release alpha client SDK for LM Studio, allowing users to use local LLMs in JS/TS/Node. It is currently undergoing rapid development with breaking changes expected. Users can follow LM Studio's announcements on Twitter and Discord. The SDK provides API usage for loading models, predicting text, setting up the local LLM server, and more. It supports features like custom loading progress tracking, model unloading, structured output prediction, and cancellation of predictions. Users can interact with LM Studio through the CLI tool 'lms' and perform tasks like text completion, conversation, and getting prediction statistics.
create-million-parameter-llm-from-scratch
The 'create-million-parameter-llm-from-scratch' repository provides a detailed guide on creating a Large Language Model (LLM) with 2.3 million parameters from scratch. The blog replicates the LLaMA approach, incorporating concepts like RMSNorm for pre-normalization, SwiGLU activation function, and Rotary Embeddings. The model is trained on a basic dataset to demonstrate the ease of creating a million-parameter LLM without the need for a high-end GPU.
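Of the LLaMA-style components the guide covers, RMSNorm is the simplest to show in miniature: each vector is scaled by the reciprocal of its root-mean-square, then by a learned gain. A minimal sketch in plain Python (the real model applies this per token over the hidden dimension, with tensors):

```python
# Minimal RMSNorm: normalize by root-mean-square, then apply a gain vector.
import math

def rms_norm(x, gain=None, eps=1e-6):
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    gain = gain or [1.0] * len(x)
    return [g * v / rms for g, v in zip(gain, x)]

out = rms_norm([3.0, 4.0])
print(out)  # ~[0.8485, 1.1314]: RMS of [3, 4] is sqrt(12.5) ≈ 3.5355
```

Unlike LayerNorm, RMSNorm skips mean subtraction, which is why it is called pre-normalization "simplified" in the LLaMA lineage.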
forust
Forust is a lightweight package for building gradient boosted decision tree ensembles. The algorithm code is written in Rust with a Python wrapper. It implements the same algorithm as XGBoost and provides nearly identical results. The package was developed to better understand XGBoost, as a fun project in Rust, and to experiment with adding new features to the algorithm in a simpler codebase. Forust allows training gradient boosted decision tree ensembles with multiple objective functions, predicting on datasets, inspecting model structures, calculating feature importance, and saving/loading trained boosters.
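The core loop of gradient boosting with squared loss can be shown with depth-1 trees (stumps) on 1-D data: each round fits a stump to the current residuals and adds it at a learning rate. This toy mirrors the idea only; Forust implements the full XGBoost-style algorithm in Rust:

```python
# Toy gradient boosting with stumps and squared loss (not forust's code).

def fit_stump(xs, residuals):
    """Pick the split minimizing squared error; leaves predict residual means."""
    best = None
    for split in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= split]
        right = [r for x, r in zip(xs, residuals) if x > split]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, split, lm, rm)
    _, split, lm, rm = best
    return lambda x: lm if x <= split else rm

def predict(stumps, base, lr, x):
    return base + lr * sum(stump(x) for stump in stumps)

def boost(xs, ys, rounds=20, lr=0.5):
    base = sum(ys) / len(ys)  # initial prediction: the mean
    stumps = []
    for _ in range(rounds):
        preds = [predict(stumps, base, lr, x) for x in xs]
        residuals = [y - p for y, p in zip(ys, preds)]
        stumps.append(fit_stump(xs, residuals))
    return base, stumps

xs, ys = [0, 1, 2, 3], [0.0, 0.0, 1.0, 1.0]
base, stumps = boost(xs, ys)
print(round(predict(stumps, base, 0.5, 0), 2), round(predict(stumps, base, 0.5, 3), 2))
```

Swapping squared loss for other objectives changes only how residuals (gradients) are computed, which is how libraries like Forust and XGBoost support multiple objective functions with one tree-growing core.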
hqq
HQQ is a fast and accurate model quantizer that skips the need for calibration data. It's super simple to implement (just a few lines of code for the optimizer). It can crunch through quantizing the Llama2-70B model in only 4 minutes! 🚀
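The basic operation underneath weight quantizers is a uniform affine round-trip: map floats to a small integer grid with a scale and zero-point, then map back. The sketch below shows only that round-trip; HQQ's contribution is an optimizer that refines the scale/zero-point per group without calibration data:

```python
# Minimal uniform affine quantization round-trip (not HQQ's optimizer).

def quantize(weights, bits=8):
    lo, hi = min(weights), max(weights)
    qmax = 2 ** bits - 1
    scale = (hi - lo) / qmax or 1.0  # guard against a constant weight vector
    zero = lo
    q = [round((w - zero) / scale) for w in weights]
    return q, scale, zero

def dequantize(q, scale, zero):
    return [v * scale + zero for v in q]

w = [-1.0, -0.5, 0.0, 0.5, 1.0]
q, s, z = quantize(w, bits=4)
print(dequantize(q, s, z))  # each value recovered to within one quantization step
```

At 4 bits the reconstruction error is bounded by the step size `s`; HQQ-style optimization shrinks that error further by solving for better scale and zero-point values per weight group.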
neural-compressor
Intel® Neural Compressor is an open-source Python library that supports popular model compression techniques such as quantization, pruning (sparsity), distillation, and neural architecture search on mainstream frameworks such as TensorFlow, PyTorch, ONNX Runtime, and MXNet. It provides key features, typical examples, and open collaborations, including support for a wide range of Intel hardware, validation of popular LLMs, and collaboration with cloud marketplaces, software platforms, and open AI ecosystems.
worker-vllm
The worker-vLLM repository provides a serverless endpoint for deploying OpenAI-compatible vLLM models with blazing-fast performance. It supports deploying various model architectures, such as Aquila, Baichuan, BLOOM, ChatGLM, Command-R, DBRX, DeciLM, Falcon, Gemma, GPT-2, GPT BigCode, GPT-J, GPT-NeoX, InternLM, Jais, LLaMA, MiniCPM, Mistral, Mixtral, MPT, OLMo, OPT, Orion, Phi, Phi-3, Qwen, Qwen2, Qwen2MoE, StableLM, Starcoder2, Xverse, and Yi. Users can deploy models using pre-built Docker images or build custom images with specified arguments. The repository also supports OpenAI compatibility for chat completions, completions, and models, with customizable input parameters. Users can modify their OpenAI codebase to use the deployed vLLM worker and access a list of available models for deployment.
onnxruntime-server
ONNX Runtime Server is a server that provides TCP and HTTP/HTTPS REST APIs for ONNX inference. It aims to offer simple, high-performance ML inference and a good developer experience. Users can provide inference APIs for ONNX models without writing additional code by placing the models in the directory structure. Each session can choose between CPU or CUDA, analyze input/output, and provide Swagger API documentation for easy testing. Ready-to-run Docker images are available, making it convenient to deploy the server.
godot-llm
Godot LLM is a plugin that enables the utilization of large language models (LLM) for generating content in games. It provides functionality for text generation, text embedding, multimodal text generation, and vector database management within the Godot game engine. The plugin supports features like Retrieval Augmented Generation (RAG) and integrates llama.cpp-based functionalities for text generation, embedding, and multimodal capabilities. It offers support for various platforms and allows users to experiment with LLM models in their game development projects.
InternLM
InternLM is a powerful language model series with features such as a 200K context window for long-context tasks, outstanding comprehensive performance in reasoning, math, code, chat experience, instruction following, and creative writing, code interpreter and data analysis capabilities, and stronger tool-utilization capabilities. It offers models in sizes of 7B and 20B, suitable for research and complex scenarios. The models are recommended for various applications and exhibit better performance than previous generations. InternLM models may match or surpass proprietary models like ChatGPT. The series has been evaluated on various datasets and has shown superior performance in multiple tasks. It requires Python >= 3.8, PyTorch >= 1.12.0, and Transformers >= 4.34 for usage. InternLM can be used for tasks like chat, agent applications, fine-tuning, deployment, and long-context inference.
MARS5-TTS
MARS5 is a novel English speech model (TTS) developed by CAMB.AI, featuring a two-stage AR-NAR pipeline with a unique NAR component. The model can generate speech for scenarios like sports commentary and anime with just 5 seconds of reference audio and a text snippet. It allows steering prosody using punctuation and capitalization in the transcript. Speaker identity is specified via an audio reference file, with an optional 'deep clone' mode for improved quality. The model can be used via torch.hub or Hugging Face, supporting both shallow and deep cloning for inference. Checkpoints are provided for the AR (about 750M parameters) and NAR (about 450M parameters) models, which together must fit on a GPU. Contributions to improve model stability, performance, and reference audio selection are welcome.
fsdp_qlora
The fsdp_qlora repository provides a script for training Large Language Models (LLMs) with Quantized LoRA and Fully Sharded Data Parallelism (FSDP). It integrates FSDP+QLoRA into the Axolotl platform and offers installation instructions for dependencies like llama-recipes, fastcore, and PyTorch. Users can finetune Llama-2 70B on Dual 24GB GPUs using the provided command. The script supports various training options including full params fine-tuning, LoRA fine-tuning, custom LoRA fine-tuning, quantized LoRA fine-tuning, and more. It also discusses low memory loading, mixed precision training, and comparisons to existing trainers. The repository addresses limitations and provides examples for training with different configurations, including BnB QLoRA and HQQ QLoRA. Additionally, it offers SLURM training support and instructions for adding support for a new model.