Best AI Tools for Installing Custom Kernels
20 - AI Tool Sites

FastBots.ai
FastBots.ai is an AI chatbot builder that allows users to create custom chatbots trained on their own data. These chatbots can be integrated into websites to provide customer support, sales assistance, and other services. FastBots.ai is easy to use and requires no coding. It supports a wide range of content types, including text, PDFs, and YouTube videos. FastBots.ai also offers a variety of features, such as customization options, chat history storage, and Zapier integration.

AnythingLLM
AnythingLLM is an all-in-one AI application designed for everyone. It offers a suite of tools for working with LLMs (Large Language Models), documents, and agents in a fully private environment. Users can install AnythingLLM on their desktop for Windows, macOS, and Linux, enabling flexible one-click installation and secure, fully private operation without internet connectivity. The application supports custom models, including enterprise models like GPT-4, custom fine-tuned models, and open-source models like Llama and Mistral. AnythingLLM allows users to work with various document formats, such as PDFs and Word documents, providing tailored solutions with locally running defaults for privacy.

Axiom.ai
Axiom.ai is a no-code browser automation tool that allows users to automate website actions and repetitive tasks on any website or web app. It is a Chrome extension that is simple to install and free to try. Once installed, users can pin Axiom to the Chrome toolbar and click the icon to open or close it. Users can build custom bots or use templates to automate actions like clicking, typing, and scraping data from websites. Axiom.ai can be integrated with Zapier to trigger automations based on external events.

AnythingYou.AI
AnythingYou.AI is an AI tool that generates beautiful profile pictures using AI avatars. Users can create custom AI avatars by uploading 10-20 selfies, and the tool will train a custom model for them immediately. The generated avatar images are high-quality and realistic, created using innovative technologies like Stable Diffusion and DreamBooth. Users can easily create avatars without the need for subscriptions or app installs, and get their avatar images in just 2 hours. The tool ensures user privacy by using images only for model training and deleting them immediately after avatar generation.

Cmd J – ChatGPT for Chrome
Cmd J – ChatGPT for Chrome is a Chrome extension that allows users to use ChatGPT on any tab without having to copy and paste. It offers a variety of features to help users improve their writing, generate blog posts, crush coding issues, boost their social engagement, and fix code bugs faster. The extension is easy to use and can be accessed with a simple keyboard shortcut.

Package
Package is a generative AI rendering tool that helps homeowners envision different renovation styles, receive recommended material packages, and streamline procurement with just one click. It offers a wide range of design packages curated by experts, allowing users to customize items to fit their specific style. Package also provides 3D renderings, material management, and personalized choices, making it easy for homeowners to bring their design ideas to life.

Cascadeur
Cascadeur is a standalone 3D software that lets you create keyframe animation, as well as clean up and edit any imported ones. Thanks to its AI-assisted and physics tools you can dramatically speed up the animation process and get high quality results. It works with .FBX, .DAE and .USD files making it easy to integrate into any animation workflow.

Hoop.dev
Hoop.dev is an AI-powered application that provides live data masking in Rails console sessions. It offers shielded Rails console access, automated employee onboarding and off-boarding, and AI data masking to protect sensitive information. The application allows for passwordless authentication via Google SSO with MFA, auditability of console operations, and compliance with various security controls and regulations. Hoop.dev aims to streamline Rails console operations, reduce manual workflows, and enhance security measures for user convenience and data protection.

GptPanda
GptPanda is a free AI-assistant application designed for Slack users to enhance teamwork and productivity. It integrates seamlessly into Slack workspaces, offering unlimited requests and support in multiple languages. Users can communicate with GptPanda in personal messages or corporate chats, allowing it to assist with daily tasks, answer questions, and manage workspaces efficiently. The application prioritizes user data security through encryption and provides 24/7 customer support for any inquiries or issues.

Meow Apps
Meow Apps is a collection of powerful WordPress plugins designed to supercharge websites with AI capabilities, optimization features, and more. Created by Jordy Meow, a software engineer and photographer based in Tokyo, the plugins aim to enhance productivity and user experience on WordPress platforms. With a focus on optimization, imagery, and AI integration, Meow Apps offers a range of tools to elevate content, automate social posts, clean databases, manage media files, and add AI features like chatbots and content generation. The plugins are known for their friendly user interface, extensive features, and support for databases of all sizes. Meow Apps strives for perfection by providing high-quality tools that can transform the WordPress experience for users.

InstantAPI.ai
InstantAPI.ai is a powerful web scraping API and Chrome extension that allows users to extract data from any website with ease. The tool leverages AI technology to automate data extraction, adapt to site changes, and deliver customized JSON objects. With features like worldwide geotargeting, proxy management, JavaScript rendering, and CAPTCHA bypass, InstantAPI.ai ensures fast and reliable results. Users can describe the data they need and receive it in real-time, tailored to their exact requirements. The tool offers unlimited concurrency, human support, and a user-friendly interface, making web scraping simple and efficient.

Visual Studio Marketplace
The Visual Studio Marketplace is a platform where users can find and publish extensions for the Visual Studio family of products. It offers a wide range of extensions to enhance the functionality and features of Visual Studio, Visual Studio Code, Azure DevOps, and more. Users can customize their development environment with themes, tools, and integrations to improve productivity and efficiency.

Tactiq
Tactiq is a live transcription and AI summary tool for Google Meet, Zoom, and MS Teams. It provides real-time transcriptions, speaker identification, and AI-powered insights to help users focus on the meeting and take effective notes. Tactiq also offers one-click AI actions, such as generating meeting summaries, crafting follow-up emails, and formatting project updates, to streamline post-meeting workflows.

Colorcinch
Colorcinch is an online photo editor and AI cartoonizer that allows users to easily edit and transform their photos into artwork. It offers a wide range of features, including background removal, image cropping and resizing, color adjustment, and the ability to add filters and effects. Colorcinch also has a large library of stock photography, graphics, and icons that users can use to enhance their photos. The platform is available online and offline, making it easy for users to access their projects from anywhere.

Machinet
Machinet is an AI Agent designed for full-stack software developers. It serves as an AI-based IDE that assists developers in various tasks, such as code generation, terminal access, front-end debugging, architecture suggestions, refactoring, and mentoring. The tool aims to enhance productivity and streamline the development workflow by providing intelligent assistance and support throughout the coding process. Machinet prioritizes security and privacy, ensuring that user data is encrypted, secure, and never stored for training purposes.

Shakespeare Toolbar
Shakespeare Toolbar is an AI-powered writing tool that helps you write better and faster. It is available as a Chrome extension and can be used on any website. With Shakespeare Toolbar, you can rephrase emails, summarize documents, write social media posts, and more. It supports over 10 languages and is available for a one-time purchase of $49.

GPTConsole
GPTConsole is an AI-powered platform that helps developers build production-ready applications faster and more efficiently. Its AI agents can generate code for a variety of applications, including web applications, AI applications, and landing pages. GPTConsole also offers a range of features to help developers build and maintain their applications, including an AI agent that can learn your entire codebase and answer your questions, and a CLI tool for accessing agents directly from the command line.

Stable Diffusion
Stable Diffusion is an AI art generation tool that allows users to create high-quality images from text descriptions. It offers a user-friendly platform for both beginners and experts to explore AI art creation without deep technical knowledge. The tool excels in producing complex, detailed, and customizable images, making it ideal for artists, designers, and anyone looking to integrate AI into their creative process. Stable Diffusion provides unprecedented creative freedom through features like image generation, inpainting, outpainting, and text-guided image-to-image translation.
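
For readers who prefer to run the underlying model locally rather than through a hosted platform, a minimal text-to-image sketch using the Hugging Face diffusers library is shown below; the checkpoint id is just one commonly used example.

```python
# Minimal text-to-image sketch with Hugging Face diffusers, one common way to
# run Stable Diffusion locally. The checkpoint id below is just an example.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe("a watercolor painting of a lighthouse at dawn").images[0]
image.save("lighthouse.png")
```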

Remodel AI
Remodel AI is an innovative AI application that allows users to renovate their homes with ease. By simply taking photos of their home's interior or exterior, users can instantly visualize fully remodeled versions, new flooring, different walls, and more. The app leverages artificial intelligence to provide various interior design styles and architecture options for users to choose from. With features like interior and exterior remodeling, new flooring installation, wall painting, landscaping visualization, and object reskinning, Remodel AI offers a comprehensive solution for home renovation enthusiasts. The app has received accolades for its user-friendly interface and ability to transform home design ideas into reality.

75 Wbet Com Daftar : OLX500
75 Wbet Com Daftar : OLX500 is an online platform offering a variety of casino games, including slots, live casino, poker, and more. Users can access popular games like Sweet Bonanza, Mahjong Ways, and Gates of Olympus. The platform also provides guidance on how to install the app on Android devices. With a focus on responsible gambling, 75 Wbet Com Daftar : OLX500 aims to enhance the gaming experience for its users.
20 - Open Source AI Tools

BitMat
BitMat is a Python package designed to optimize matrix multiplication operations by utilizing custom kernels written in Triton. It leverages the principles outlined in the "1bit-LLM Era" paper, specifically utilizing packed int8 data to enhance computational efficiency and performance in deep learning and numerical computing tasks.
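
As a rough illustration of the packed-int8 idea (not BitMat's actual layout or Triton kernels), ternary weights can be packed four to a byte and unpacked before a matmul, as in the sketch below.

```python
# Conceptual sketch: pack ternary weights {-1, 0, +1} four to a byte, as in
# 1.58-bit / packed-int8 schemes. BitMat's real storage layout and its Triton
# matmul kernels are more sophisticated; this only shows the idea.
import torch

def pack_ternary(w: torch.Tensor) -> torch.Tensor:
    assert w.numel() % 4 == 0
    codes = (w.flatten() + 1).to(torch.uint8).view(-1, 4)   # map {-1,0,1} -> {0,1,2}
    return codes[:, 0] | (codes[:, 1] << 2) | (codes[:, 2] << 4) | (codes[:, 3] << 6)

def unpack_ternary(packed: torch.Tensor, shape) -> torch.Tensor:
    codes = torch.stack([(packed >> s) & 0x3 for s in (0, 2, 4, 6)], dim=1)
    return (codes.flatten().to(torch.int8) - 1).view(shape)

w = torch.randint(-1, 2, (256, 256)).to(torch.int8)          # ternary weight matrix
packed = pack_ternary(w)                                      # 4x smaller storage
x = torch.randn(8, 256)
y = x @ unpack_ternary(packed, w.shape).to(x.dtype).T         # matmul on unpacked weights
assert torch.equal(unpack_ternary(packed, w.shape), w)        # round-trip check
```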

MInference
MInference is a tool designed to accelerate pre-filling for long-context Large Language Models (LLMs) by leveraging dynamic sparse attention. It achieves up to a 10x speedup for pre-filling on an A100 while maintaining accuracy. The tool supports various decoding LLMs, including LLaMA-style models and Phi models, and provides custom kernels for attention computation. MInference is useful for researchers and developers working with large-scale language models who aim to improve efficiency without compromising accuracy.
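
A toy sketch of the dynamic-sparse-attention idea follows: pooled block scores pick which key blocks each query block attends to, and everything else is masked out. MInference's actual sparse patterns and custom kernels are considerably more elaborate; the function below is purely illustrative.

```python
# Toy block-sparse attention: keep only the top-k key blocks per query block.
# Illustrates the dynamic sparse attention idea only; MInference's real
# patterns and kernels are more elaborate.
import torch
import torch.nn.functional as F

def blockwise_sparse_attention(q, k, v, block=64, keep_blocks=4):
    B, H, L, D = q.shape
    assert L % block == 0
    nb = L // block
    q_blk = q.view(B, H, nb, block, D).mean(dim=3)            # pooled query blocks
    k_blk = k.view(B, H, nb, block, D).mean(dim=3)            # pooled key blocks
    scores = q_blk @ k_blk.transpose(-1, -2)                  # (B, H, nb, nb)
    top = scores.topk(min(keep_blocks, nb), dim=-1).indices
    block_mask = torch.zeros(B, H, nb, nb, dtype=torch.bool, device=q.device)
    block_mask.scatter_(-1, top, torch.ones_like(top, dtype=torch.bool))
    mask = block_mask.repeat_interleave(block, 2).repeat_interleave(block, 3)
    return F.scaled_dot_product_attention(q, k, v, attn_mask=mask)

q = k = v = torch.randn(1, 8, 512, 64)
out = blockwise_sparse_attention(q, k, v)
```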

fsdp_qlora
The fsdp_qlora repository provides a script for training Large Language Models (LLMs) with Quantized LoRA and Fully Sharded Data Parallelism (FSDP). It integrates FSDP+QLoRA into the Axolotl platform and offers installation instructions for dependencies like llama-recipes, fastcore, and PyTorch. Users can finetune Llama-2 70B on Dual 24GB GPUs using the provided command. The script supports various training options including full params fine-tuning, LoRA fine-tuning, custom LoRA fine-tuning, quantized LoRA fine-tuning, and more. It also discusses low memory loading, mixed precision training, and comparisons to existing trainers. The repository addresses limitations and provides examples for training with different configurations, including BnB QLoRA and HQQ QLoRA. Additionally, it offers SLURM training support and instructions for adding support for a new model.
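
The quantized-LoRA half of the recipe can be sketched with standard transformers, peft, and bitsandbytes calls, shown below with an example model id. The part that fsdp_qlora actually contributes, sharding such a quantized model across GPUs with FSDP and loading it without exhausting CPU memory, is handled by the repository's training script and is not reproduced here.

```python
# Sketch of the QLoRA half only: load a 4-bit NF4-quantized base model and
# attach trainable LoRA adapters. fsdp_qlora's contribution (wrapping such a
# model in FSDP with low-memory loading) lives in the repo's training script.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",      # example model id
    quantization_config=bnb,
    device_map="auto",
)
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()   # only the LoRA adapters are trainable
```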

prime
Prime is a framework for efficient, globally distributed training of AI models over the internet. It includes features such as fault-tolerant training with ElasticDeviceMesh, asynchronous distributed checkpointing, live checkpoint recovery, custom Int8 All-Reduce Kernel, maximizing bandwidth utilization, PyTorch FSDP2/DTensor ZeRO-3 implementation, and CPU off-loading. The framework aims to optimize communication, checkpointing, and bandwidth utilization for large-scale AI model training.

llama.cpp
The main goal of llama.cpp is to enable LLM inference with minimal setup and state-of-the-art performance on a wide range of hardware - locally and in the cloud. It provides a Plain C/C++ implementation without any dependencies, optimized for Apple silicon via ARM NEON, Accelerate and Metal frameworks, and supports various architectures like AVX, AVX2, AVX512, and AMX. It offers integer quantization for faster inference, custom CUDA kernels for NVIDIA GPUs, Vulkan and SYCL backend support, and CPU+GPU hybrid inference. llama.cpp is the main playground for developing new features for the ggml library, supporting various models and providing tools and infrastructure for LLM deployment.
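
A quick way to try it from Python is through the separate llama-cpp-python bindings; the GGUF path below is a placeholder for a locally downloaded quantized model.

```python
# Running a quantized GGUF model via the llama-cpp-python bindings (a separate
# project that wraps llama.cpp); the model path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-2-7b.Q4_K_M.gguf", n_ctx=2048, n_gpu_layers=-1)
out = llm("Q: Name the planets in the solar system. A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])
```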

flashinfer
FlashInfer is a library for Large Language Models that provides high-performance implementations of LLM GPU kernels such as FlashAttention, PageAttention, and LoRA. FlashInfer focuses on LLM serving and inference, and delivers state-of-the-art performance across diverse scenarios.

lorax
LoRAX is a framework that allows users to serve thousands of fine-tuned models on a single GPU, dramatically reducing the cost of serving without compromising on throughput or latency. It features dynamic adapter loading, heterogeneous continuous batching, adapter exchange scheduling, optimized inference, and is ready for production with prebuilt Docker images, Helm charts for Kubernetes, Prometheus metrics, and distributed tracing with Open Telemetry. LoRAX supports a number of Large Language Models as the base model including Llama, Mistral, and Qwen, and any of the linear layers in the model can be adapted via LoRA and loaded in LoRAX.
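
A sketch of querying a running LoRAX server with its Python client follows; the server URL and adapter id are placeholders, and the exact client interface should be checked against the LoRAX documentation.

```python
# Sketch of per-request adapter selection against a running LoRAX server.
# URL and adapter id are placeholders; verify the client API in the LoRAX docs.
from lorax import Client   # pip install lorax-client

client = Client("http://127.0.0.1:8080")
prompt = "[INST] What is the capital of France? [/INST]"
resp = client.generate(prompt, adapter_id="my-org/my-lora-adapter", max_new_tokens=64)
print(resp.generated_text)
```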

llama.cpp
llama.cpp is a C/C++ implementation of LLaMA, a large language model from Meta. It provides a command-line interface for inference and can be used for a variety of tasks, including text generation, translation, and question answering. llama.cpp is highly optimized for performance and runs on a wide range of hardware, including CPUs and GPUs.

deeppowers
Deeppowers is a powerful Python library for deep learning applications. It provides a wide range of tools and utilities to simplify the process of building and training deep neural networks. With Deeppowers, users can easily create complex neural network architectures, perform efficient training and optimization, and deploy models for various tasks. The library is designed to be user-friendly and flexible, making it suitable for both beginners and experienced deep learning practitioners.

lite_llama
lite_llama is a lightweight Llama-model inference framework built on Triton. It offers accelerated inference for Llama 3, Qwen2.5, and LLaVA 1.5 models, with up to 4x speedup compared to Hugging Face Transformers. The framework supports top-p sampling, streaming output, GQA, and CUDA graph optimizations. It also provides efficient dynamic KV-cache management, operator fusion, and custom operators such as RMSNorm, RoPE, softmax, and element-wise multiplication implemented as Triton kernels.
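
To give a flavor of what such Triton custom operators look like, here is a small standalone RMSNorm kernel written in the same spirit; it is an independent sketch, not code taken from lite_llama.

```python
# Standalone Triton RMSNorm kernel, one row per program instance. Independent
# sketch in the spirit of lite_llama's custom operators, not its actual code.
# Assumes a contiguous 2-D float32 input on a CUDA device.
import torch
import triton
import triton.language as tl

@triton.jit
def rmsnorm_kernel(x_ptr, w_ptr, out_ptr, n_cols, eps, BLOCK_SIZE: tl.constexpr):
    row = tl.program_id(0)
    cols = tl.arange(0, BLOCK_SIZE)
    mask = cols < n_cols
    x = tl.load(x_ptr + row * n_cols + cols, mask=mask, other=0.0)
    rms = tl.sqrt(tl.sum(x * x, axis=0) / n_cols + eps)      # root mean square of the row
    w = tl.load(w_ptr + cols, mask=mask, other=0.0)
    tl.store(out_ptr + row * n_cols + cols, x / rms * w, mask=mask)

def rmsnorm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    n_rows, n_cols = x.shape
    out = torch.empty_like(x)
    rmsnorm_kernel[(n_rows,)](x, weight, out, n_cols, eps,
                              BLOCK_SIZE=triton.next_power_of_2(n_cols))
    return out

x = torch.randn(4, 4096, device="cuda")
w = torch.ones(4096, device="cuda")
print(rmsnorm(x, w).shape)
```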

ZhiLight
ZhiLight is a highly optimized large language model (LLM) inference engine developed by Zhihu and ModelBest Inc. It accelerates the inference of models like Llama and its variants, especially on PCIe-based GPUs. ZhiLight offers significant performance advantages compared to mainstream open-source inference engines. It supports various features such as custom defined tensor and unified global memory management, optimized fused kernels, support for dynamic batch, flash attention prefill, prefix cache, and different quantization techniques like INT8, SmoothQuant, FP8, AWQ, and GPTQ. ZhiLight is compatible with OpenAI interface and provides high performance on mainstream NVIDIA GPUs with different model sizes and precisions.

KsanaLLM
KsanaLLM is a high-performance engine for LLM inference and serving. It utilizes optimized CUDA kernels for high performance, efficient memory management, and detailed optimization for dynamic batching. The tool offers flexibility with seamless integration with popular Hugging Face models, support for multiple weight formats, and high-throughput serving with various decoding algorithms. It enables multi-GPU tensor parallelism, streaming outputs, and an OpenAI-compatible API server. KsanaLLM supports NVIDIA GPUs and Huawei Ascend NPU, and seamlessly integrates with verified Hugging Face models like LLaMA, Baichuan, and Qwen. Users can create a docker container, clone the source code, compile for Nvidia or Huawei Ascend NPU, run the tool, and distribute it as a wheel package. Optional features include a model weight map JSON file for models with different weight names.
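
Since the server exposes an OpenAI-compatible API, it can be queried with the standard OpenAI Python client pointed at the local endpoint; the base URL, port, and model name below are assumptions that depend on how the server was launched.

```python
# Querying an OpenAI-compatible endpoint with the standard OpenAI Python client.
# Base URL, port, and model name are assumptions; they depend on how the
# KsanaLLM server was launched.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="llama-2-7b",
    messages=[{"role": "user", "content": "Summarize what dynamic batching is."}],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```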

exllamav2
ExLlamaV2 is an inference library for running local LLMs on modern consumer GPUs. It is a faster, better, and more versatile codebase than its predecessor, ExLlamaV1, with support for a new quant format called EXL2. EXL2 is based on the same optimization method as GPTQ and supports 2, 3, 4, 5, 6, and 8-bit quantization. It allows for mixing quantization levels within a model to achieve any average bitrate between 2 and 8 bits per weight. ExLlamaV2 can be installed from source, from a release with prebuilt extension, or from PyPI. It supports integration with TabbyAPI, ExUI, text-generation-webui, and lollms-webui. Key features of ExLlamaV2 include faster and better kernels, a cleaner and more versatile codebase, support for the EXL2 quantization format, integration with various web UIs and APIs, and community support on Discord.

aici
The Artificial Intelligence Controller Interface (AICI) lets you build Controllers that constrain and direct output of a Large Language Model (LLM) in real time. Controllers are flexible programs capable of implementing constrained decoding, dynamic editing of prompts and generated text, and coordinating execution across multiple, parallel generations. Controllers incorporate custom logic during token-by-token decoding and maintain state throughout an LLM request. This allows diverse Controller strategies, from programmatic or query-based decoding to multi-agent conversations, to execute efficiently in tight integration with the LLM itself.

AQLM
AQLM is the official PyTorch implementation for Extreme Compression of Large Language Models via Additive Quantization. It includes prequantized AQLM models without PV-Tuning and PV-Tuned models for LLaMA, Mistral, and Mixtral families. The repository provides inference examples, model details, and quantization setups. Users can run prequantized models using Google Colab examples, work with different model families, and install the necessary inference library. The repository also offers detailed instructions for quantization, fine-tuning, and model evaluation. AQLM quantization involves calibrating models for compression, and users can improve model accuracy through finetuning. Additionally, the repository includes information on preparing models for inference and contributing guidelines.
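
Because AQLM is integrated with the transformers quantization backends, a prequantized checkpoint can be loaded like any other Hugging Face model once the aqlm inference library is installed; the repository id below is illustrative, so check the AQLM README for the current list of prequantized models.

```python
# Loading an AQLM-prequantized checkpoint through transformers after
# `pip install aqlm[gpu]`. The repo id is illustrative; see the AQLM README
# for the currently published prequantized models.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ISTA-DASLab/Llama-2-7b-AQLM-2Bit-1x16-hf"   # example id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

inputs = tok("Additive quantization compresses weights by", return_tensors="pt").to(model.device)
print(tok.decode(model.generate(**inputs, max_new_tokens=32)[0], skip_special_tokens=True))
```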

ztachip
ztachip is a RISC-V accelerator designed for vision and AI edge applications, offering 20-50x acceleration compared to non-accelerated RISC-V implementations. It features an innovative tensor processor hardware design to accelerate various vision tasks and TensorFlow AI models. ztachip introduces a new tensor programming paradigm for massive processing/data parallelism. The repository includes technical documentation, code structure, build procedures, and reference design examples for running vision/AI applications on FPGA devices. Users can build ztachip as a standalone executable or a MicroPython port, and run various AI/vision applications like image classification, object detection, edge detection, motion detection, and multi-tasking on supported hardware.

hqq
HQQ is a fast and accurate model quantizer that skips the need for calibration data. It's super simple to implement (just a few lines of code for the optimizer). It can crunch through quantizing the Llama2-70B model in only 4 minutes! 🚀
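
To show what "no calibration data" means in practice, here is a generic group-wise round-to-nearest weight quantizer that uses only the weights themselves; HQQ goes further by optimizing the zero-points and scales with a half-quadratic solver, which this sketch does not reproduce.

```python
# Generic data-free group-wise 4-bit round-to-nearest quantization, to show the
# "no calibration data" idea. HQQ improves on this by optimizing zero-points
# and scales with its half-quadratic solver, which is not reproduced here.
import torch

def quantize_rtn(w: torch.Tensor, nbits: int = 4, group_size: int = 64):
    qmax = 2 ** nbits - 1
    g = w.reshape(-1, group_size)
    wmin = g.min(dim=1, keepdim=True).values
    wmax = g.max(dim=1, keepdim=True).values
    scale = (wmax - wmin).clamp(min=1e-8) / qmax
    zero = (-wmin / scale).round()
    q = ((g / scale) + zero).round().clamp(0, qmax)
    return q.to(torch.uint8), scale, zero

def dequantize(q, scale, zero, shape):
    return ((q.float() - zero) * scale).reshape(shape)

w = torch.randn(4096, 4096)
q, s, z = quantize_rtn(w)
err = (dequantize(q, s, z, w.shape) - w).abs().mean()
print(f"mean abs reconstruction error: {err:.4f}")
```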

basalt
Basalt is a lightweight and flexible CSS framework designed to help developers quickly build responsive and modern websites. It provides a set of pre-designed components and utilities that can be easily customized to create unique and visually appealing web interfaces. With Basalt, developers can save time and effort by leveraging its modular structure and responsive design principles to create professional-looking websites with ease.

LLamaTuner
LLamaTuner is a repository for the Efficient Finetuning of Quantized LLMs project, focused on building and sharing instruction-following tuning methods for Chinese baichuan-7b, LLaMA, Pythia, and GLM models. The project enables multi-round chatbot training on a single Nvidia RTX 2080 Ti or RTX 3090. It utilizes bitsandbytes for quantization and is integrated with Hugging Face's PEFT and Transformers libraries. The repository supports various models, training approaches, and datasets for supervised fine-tuning, LoRA, QLoRA, and more. It also provides tools for data preprocessing and offers models in the Hugging Face model hub for inference and finetuning. The project is licensed under Apache 2.0 and acknowledges contributions from various open-source contributors.

SageAttention
SageAttention is an official implementation of an accurate 8-bit attention mechanism for plug-and-play inference acceleration. It is optimized for RTX4090 and RTX3090 GPUs, providing performance improvements for specific GPU architectures. The tool offers a technique called 'smooth_k' to ensure accuracy in processing FP16/BF16 data. Users can easily replace 'scaled_dot_product_attention' with SageAttention for faster video processing.
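
The intended usage is a drop-in swap for PyTorch's attention call. A hedged sketch is below, assuming the kernel is exposed as `sageattn` as in the project README; argument names and tensor layout should be verified against the current release.

```python
# Drop-in replacement sketch: swap torch's scaled_dot_product_attention for the
# SageAttention kernel. Assumes the entry point is `sageattn` as in the project
# README; verify the exact signature and layout against the current release.
import torch
import torch.nn.functional as F
from sageattention import sageattn

q = torch.randn(1, 32, 1024, 128, dtype=torch.float16, device="cuda")  # (batch, heads, seq, head_dim)
k = torch.randn_like(q)
v = torch.randn_like(q)

out_ref = F.scaled_dot_product_attention(q, k, v, is_causal=True)      # baseline
out_sage = sageattn(q, k, v, is_causal=True)                           # 8-bit attention kernel
print((out_ref - out_sage).abs().max())
```
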
20 - OpenAI GPTs

S22 Flip Advisor
Expert on Cat S22 FLIP rooting and custom ROMs, with a broad internet research scope.

Browser Extension Generator
Create browser extensions for web tasks to boost your productivity, or jumpstart a more advanced extension idea. You'll get a full package download (v1.2) ready to install in your Chrome or Edge browser.

FlutterCraft
FlutterCraft is an AI-powered assistant that streamlines Flutter app development. It interprets user-provided descriptions to generate and compile Flutter app code, providing ready-to-install APK and iOS files. Ideal for rapid prototyping, FlutterCraft makes app development accessible and efficient.

BioinformaticsManual
Compile instructions from the web and GitHub for bioinformatics applications. Receive line-by-line instructions and commands to get started.

Ciepły montaż okien
Grupa Magnum specializes in selling accessories for the warm (thermal) installation of windows and doors. They offer a wide selection of tools and accessories essential for the correct installation of window and door joinery.

Throw a Wrench In Your Plans GPT
As "Throw a Wrench in Your Plans GPT", I provide expert guidance on skilled trades and AI adoption, inspired by TWYP Media