Best AI tools for: Customize Training
20 - AI tool Sites
Grasply
Grasply.ai is an AI-powered personalized training solution that transforms documents into impactful learning resources using multi-agent AI training assistants. It enhances productivity, improves skill transfer, and empowers teams to succeed by creating customized learning resources for training and assessment. Grasply allows users to upload documents, define learning goals, customize the learning experience, build tailored micro-courses with AI, share personalized courses, and track learner progress. It offers different pricing plans with varying features to cater to different user needs.
Paradiso AI
Paradiso AI is an AI application that offers a range of generative AI solutions tailored to businesses. From AI chatbots to AI employees and document generators, Paradiso AI helps businesses boost ROI, enhance customer satisfaction, optimize costs, and accelerate time-to-value. The platform provides customizable AI tools that seamlessly adapt to unique processes, accelerating tasks, ensuring precision, and driving exceptional outcomes. With a focus on data security, compliance, and cost efficiency, Paradiso AI aims to deliver high-quality outcomes at lower operating costs through sophisticated prompt optimization and ongoing refinements.
SC Training
SC Training, formerly known as EdApp, is a mobile learning management system that offers a comprehensive platform for creating, delivering, and tracking training courses. The application provides features such as admin control, content creation tools, analytics tracking, AI course generation, microlearning courses, gamification elements, and support for various industries. SC Training aims to deliver efficient and engaging training experiences to users, with a focus on bite-sized learning and accessibility across devices. The platform also offers course libraries, practical assessments, rapid course refresh, and group training options. Users can customize courses, integrate with existing tools, and access a range of resources through the help center and blog.
PaddleBoat
PaddleBoat is an AI-powered sales readiness platform designed to help sales representatives improve their cold calling skills through realistic AI roleplays. It offers automated call feedback, insights on objection handling, best calling practices, and areas for improvement in every roleplay. PaddleBoat aims to accelerate sales excellence by providing real-time insights, customizing roleplays, and minimizing ramp-up time for sales reps. The platform allows users to create engaging training programs, courses, wikis, and interactive videos to enhance their sales pitch skills and boost their confidence in sales conversations.
Hyperbound
Hyperbound is an AI Sales Role-Play & Upskilling Platform designed to help sales teams improve their skills through realistic AI roleplays. It allows users to practice cold, warm, discovery, and post-sales calls with AI buyers customized for their target persona. The platform has received high ratings and positive feedback from sales professionals globally, offering interactive demos and no credit card required for booking a demo.
GymBuddy.ai
GymBuddy.ai is an AI workout planner that leverages artificial intelligence to create personalized workout plans tailored to individual fitness levels, equipment access, and target areas of the body. The platform offers a wide range of exercises and features to help users achieve their fitness goals effectively. With advanced analytics and full workout customization, GymBuddy.ai aims to revolutionize the way people approach their fitness journey.
Second Nature
Second Nature is an AI-powered sales training software that offers life-like role-playing simulations to enhance sales skills and productivity. It provides personalized AI role plays, customized simulations, and virtual pitch partners to help sales teams practice conversations, improve performance, and drive results. The platform features AI avatars, template libraries, and various training modules for different industries and use cases.
Rupert AI
Rupert AI is an all-in-one AI platform that allows users to train custom AI models for text, audio, video, and images. The platform streamlines AI workflows by providing access to the latest open-source AI models and tools in a single studio tailored to business needs. Users can automate their AI workflow, generate high-quality AI product photography, and utilize popular AI workflows like the AI Fashion Model Generator and Facebook Ad Testing Tool. Rupert AI aims to revolutionize the way businesses leverage AI technology to enhance marketing visuals, streamline operations, and make informed decisions.
Wix
Wix.com is a website building platform that allows users to create professional websites without the need for coding skills. Users can choose from a variety of templates, customize their site with drag-and-drop tools, and publish their website with ease. Wix offers a user-friendly interface and a range of features to help individuals and businesses establish an online presence.
Scale AI
Scale AI is an AI tool that accelerates the development of AI applications for enterprise, government, and automotive sectors. It offers Scale Data Engine for generative AI, Scale GenAI Platform, and evaluation services for model developers. The platform leverages enterprise data to build sustainable AI programs and partners with leading AI models. Scale's focus on generative AI applications, data labeling, and model evaluation sets it apart in the AI industry.
Pooks.ai
Pooks.ai is a revolutionary AI-powered platform that offers personalized books in both ebook and audiobook formats. By leveraging sophisticated algorithms and natural language processing, Pooks.ai creates dynamic and contextually relevant content tailored to individual preferences and needs. Users can enjoy a unique reading experience with books crafted specifically for them on any non-fiction topic, from fitness and travel to pet care and self-help. The platform is free to use and aims to transform the way people engage with literature by providing affordable, personalized reading experiences.
Slice Knowledge
Slice Knowledge is an AI-powered content creation platform designed for learning purposes. It offers fast and simple creation of learning units using AI technology. The platform is a perfect solution for course creators, HR and L&D teams, education experts, and enterprises looking to enhance their employee training programs. Slice Knowledge provides AI-powered creation, compliance templates, assistant bots, SCORM tracking integration, and multilingual support. It allows users to convert documents into interactive, responsive, SCORM-compliant learning materials with features like unlimited designer CSS, interactive video, multi-lingual support, and responsive design.
EduHunt
EduHunt is an AI-powered search engine that helps users find quality educational content on YouTube. It allows users to search for specific topics and filters the results to show only the most relevant and high-quality videos. EduHunt also offers a variety of features to help users customize their search results, such as the ability to filter by language, duration, and difficulty level.
Degreed
Degreed is an AI-driven learning platform that offers skill-building solutions for employees, from onboarding to retention. It partners with leading vendors to provide skills-first learning experiences. The platform leverages AI to deliver efficient and effective learning experiences, personalized skill development, and data-driven insights. Degreed helps organizations identify critical skill gaps, provide personalized learning paths, and measure the impact of upskilling and reskilling initiatives. With a focus on workforce transformation, Degreed empowers companies to drive business results through continuous learning and skill development.
Mendable
Mendable is an AI-powered search tool that helps businesses answer customer and employee questions by training a secure AI on their technical resources. It offers a variety of features such as answer correction, custom prompt edits, and model creativity control, allowing businesses to customize the AI to fit their specific needs. Mendable also provides enterprise-grade security features such as RBAC, SSO, and BYOK, ensuring the security and privacy of sensitive data.
CourseMagic.ai
CourseMagic.ai is an AI-powered platform designed to assist educators in effortlessly generating high-quality courses by leveraging best practices in learning design. The platform allows users to customize courses for any level, import directly into Learning Management Systems (LMS), and offers a range of frameworks and taxonomies to enhance course structure and content. CourseMagic.ai aims to streamline the course creation process, reduce development time, and provide interactive activities and assessments to engage learners effectively.
Osher.ai
Osher.ai is a personal AI for businesses that allows users to interact with websites, intranets, knowledge bases, process documents, spreadsheets, and procedures. It can be used to train custom AIs on internal knowledge bases, process documents, and files. Osher.ai also offers private and public AIs, and users can customize their AIs' personality, purpose, and welcome message.
prompteasy.ai
Prompteasy.ai is an AI tool that allows users to fine-tune AI models in less than 5 minutes. It simplifies the process of training AI models on user data, making it as easy as having a conversation. Users can fully customize GPT by fine-tuning it to meet their specific needs. The tool offers data-driven customization, interactive AI coaching, and seamless model enhancement, providing users with a competitive edge and simplifying AI integration into their workflows.
Chattie
Chattie is an AI-powered chatbot platform that allows users to easily integrate ChatGPT on their websites. It offers features such as training chatbots with various data sources, theme customization with CSS support, and detailed stats and analytics. Chattie provides different pricing plans to cater to different user needs, from individual users to agencies. With Chattie, users can create and customize chatbots to engage with website visitors effectively.
20 - Open Source AI Tools
Groma
Groma is a grounded multimodal assistant that excels in region understanding and visual grounding. It can process user-defined region inputs and generate contextually grounded long-form responses. The tool presents a unique paradigm for multimodal large language models, focusing on visual tokenization for localization. Groma achieves state-of-the-art performance in referring expression comprehension benchmarks. The tool provides pretrained model weights and instructions for data preparation, training, inference, and evaluation. Users can customize training by starting from intermediate checkpoints. Groma is designed to handle tasks related to detection pretraining, alignment pretraining, instruction finetuning, instruction following, and more.
fastfit
FastFit is a Python package designed for fast and accurate few-shot classification, especially for scenarios with many semantically similar classes. It utilizes a novel approach integrating batch contrastive learning and token-level similarity score, significantly improving multi-class classification performance in speed and accuracy across various datasets. FastFit provides a convenient command-line tool for training text classification models with customizable parameters. It offers a 3-20x improvement in training speed, completing training in just a few seconds. Users can also train models with Python scripts and perform inference using pretrained models for text classification tasks.
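For illustration, here is a minimal sketch of the few-shot training flow, assuming the Trainer-style Python interface the project documents; class and argument names (FastFitTrainer, sample_dataset, text_column_name, the dataset id) reflect typical FastFit usage and should be verified against the current README:

```python
from datasets import load_dataset
from fastfit import FastFitTrainer, sample_dataset  # assumed import path

# Down-sample the training split to a few examples per label (few-shot setting).
dataset = load_dataset("FastFit/banking_77")  # illustrative dataset id
dataset["train"] = sample_dataset(dataset["train"], label_column="label", num_samples_per_label=10)

trainer = FastFitTrainer(
    model_name_or_path="sentence-transformers/paraphrase-mpnet-base-v2",
    text_column_name="text",
    label_column_name="label",
    num_train_epochs=40,
    dataset=dataset,
)

model = trainer.train()  # batch-contrastive + token-level similarity training
model.save_pretrained("fastfit-banking77")
```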
LibreAim
Libre Aim is a free and open source first-person shooter (FPS) aim trainer developed using Godot. It is highly customizable, allowing users to modify all aspects of the tool easily. The focus is on providing a lightweight training tool that runs smoothly on low-end machines, offering high FPS and minimal input lag. Although still in early development stages, Libre Aim aims to provide a platform for users to enhance their aiming skills in FPS games through customizable features and simplicity.
litgpt
LitGPT is a command-line tool designed to easily finetune, pretrain, evaluate, and deploy 20+ LLMs **on your own data**. It features highly-optimized training recipes for the world's most powerful open-source large-language-models (LLMs).
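As a quick orientation, a short sketch of the Python API for loading and prompting a supported checkpoint; the LLM.load / generate calls follow the project's quick-start example, while the finetuning command in the comment is only indicative and its flags should be checked against `litgpt finetune --help`:

```python
from litgpt import LLM

# Download (if needed) and load a supported checkpoint, then prompt it.
llm = LLM.load("microsoft/phi-2")
print(llm.generate("What do llamas eat?"))

# Finetuning on your own data is launched from the CLI, roughly along the lines of:
#   litgpt finetune microsoft/phi-2 --data JSON --data.json_path my_data.json
# (exact flags vary by release; see `litgpt finetune --help`)
```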
llm-on-ray
LLM-on-Ray is a comprehensive solution for building, customizing, and deploying Large Language Models (LLMs). It simplifies complex processes into manageable steps by leveraging the power of Ray for distributed computing. The tool supports pretraining, finetuning, and serving LLMs across various hardware setups, incorporating industry and Intel optimizations for performance. It offers modular workflows with intuitive configurations, robust fault tolerance, and scalability. Additionally, it provides an Interactive Web UI for enhanced usability, including a chatbot application for testing and refining models.
EasyLM
EasyLM is a one-stop solution for pre-training, fine-tuning, evaluating, and serving large language models in JAX/Flax. It simplifies the process by leveraging JAX's pjit functionality to scale up training to multiple TPU/GPU accelerators. Built on top of Huggingface's transformers and datasets, EasyLM offers an easy-to-use and customizable codebase for training large language models without the complexity found in other frameworks. It supports sharding model weights and training data across multiple accelerators, enabling multi-TPU/GPU training on a single host or across multiple hosts on Google Cloud TPU Pods. EasyLM currently supports models like LLaMA, LLaMA 2, and LLaMA 3.
swift
SWIFT (Scalable lightWeight Infrastructure for Fine-Tuning) supports training, inference, evaluation and deployment of nearly **200 LLMs and MLLMs** (multimodal large models). Developers can directly apply our framework to their own research and production environments to realize the complete workflow from model training and evaluation to application. In addition to supporting the lightweight training solutions provided by [PEFT](https://github.com/huggingface/peft), we also provide a complete **Adapters library** to support the latest training techniques such as NEFTune, LoRA+, LLaMA-PRO, etc. This adapter library can be used directly in your own custom workflow without our training scripts. To facilitate use by users unfamiliar with deep learning, we provide a Gradio web-ui for controlling training and inference, as well as accompanying deep learning courses and best practices for beginners. Additionally, we are expanding capabilities for other modalities. Currently, we support full-parameter training and LoRA training for AnimateDiff.
Reflection_Tuning
Reflection-Tuning is a project focused on improving the quality of instruction-tuning data through a reflection-based method. It introduces Selective Reflection-Tuning, where the student model can decide whether to accept the improvements made by the teacher model. The project aims to generate high-quality instruction-response pairs by defining specific criteria for the oracle model to follow and respond to. It also evaluates the efficacy and relevance of instruction-response pairs using the r-IFD metric. The project provides code for reflection and selection processes, along with data and model weights for both V1 and V2 methods.
dialog
Dialog is an API-focused tool designed to simplify the deployment of Large Language Models (LLMs) for programmers interested in AI. It allows users to deploy any LLM based on the structure provided by dialog-lib, enabling them to spend less time coding and more time training their models. The tool aims to humanize Retrieval-Augmented Generative Models (RAGs) and offers features for better RAG deployment and maintenance. Dialog requires a knowledge base in CSV format and a prompt configuration in TOML format to function effectively. It provides functionalities for loading data into the database, processing conversations, and connecting to the LLM, with options to customize prompts and parameters. The tool also requires specific environment variables for setup and configuration.
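To make the required inputs concrete, here is a purely hypothetical sketch of creating the two files Dialog expects; the CSV columns and TOML keys shown are illustrative assumptions, not the documented schema, so consult dialog-lib's docs for the real field names:

```python
from pathlib import Path

# Hypothetical knowledge base (CSV) -- column names are assumptions.
Path("knowledge.csv").write_text(
    "question,content\n"
    '"What are your opening hours?","We are open 9am-6pm, Monday to Friday."\n'
)

# Hypothetical prompt configuration (TOML) -- section and key names are assumptions.
Path("prompt.toml").write_text(
    "[prompt]\n"
    'header = "You are a support assistant. Answer only from the knowledge base."\n'
)
```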
build_MiniLLM_from_scratch
This repository aims to build a low-parameter LLM through pretraining, fine-tuning, model rewarding, and reinforcement learning stages to create a chat model capable of simple conversation tasks. It uses the bert4torch training framework, integrates seamlessly with the transformers package for inference, optimizes file reading during training to reduce memory usage, provides complete training logs for reproducibility, and allows customizing the bot's attributes. The chat model supports multi-turn conversations, but the trained model currently only supports basic chat functionality due to limitations in corpus size, model scale, and SFT corpus size and quality.
mindnlp
MindNLP is an open-source NLP library based on MindSpore. It provides a platform for solving natural language processing tasks, contains many common NLP approaches, and helps researchers and developers construct and train models more conveniently and rapidly. Key features of MindNLP include:
* Comprehensive data processing: several classical NLP datasets (Multi30k, SQuAD, CoNLL, etc.) are packaged into a friendly module for easy use.
* Friendly NLP model toolset: MindNLP provides various configurable components, making it easy to customize models.
* Easy-to-use engine: MindNLP simplifies the complicated training process in MindSpore and supports Trainer and Evaluator interfaces to train and evaluate models easily.

MindNLP supports a wide range of NLP tasks, including language modeling, machine translation, question answering, sentiment analysis, sequence labeling, and summarization. It also supports industry-leading Large Language Models (LLMs) such as Llama, GLM, and RWKV; pre-training, fine-tuning, and inference demo examples can be found in the "llm" directory. MindNLP can be installed from PyPI, as a daily build wheel, or from source; installation instructions are provided in the documentation. MindNLP is released under the Apache 2.0 license. If you find this project useful in your research, please consider citing:

@misc{mindnlp2022,
  title={{MindNLP}: a MindSpore NLP library},
  author={MindNLP Contributors},
  howpublished={\url{https://github.com/mindlab-ai/mindnlp}},
  year={2022}
}
ai-voice-cloning
This repository provides a tool for AI voice cloning, allowing users to generate synthetic speech that closely resembles a target speaker's voice. The tool is designed to be user-friendly and accessible, with a graphical user interface that guides users through the process of training a voice model and generating synthetic speech. The tool also includes a variety of features that allow users to customize the generated speech, such as the pitch, volume, and speaking rate. Overall, this tool is a valuable resource for anyone interested in creating realistic and engaging synthetic speech.
rust-snake-ai-ratatui
This repository contains an AI implementation that learns to play the classic game Snake in the terminal. The AI is built using Rust and Ratatui. Users can clone the repo, run the simulation, and configure various settings to customize the AI's behavior. The project also provides options for minimal UI, training custom networks, and watching the AI complete the game on different board sizes. The developer shares updates and insights about the project on Twitter and plans to create a detailed blog post explaining the AI's workings.
deepdoctection
**deep** doctection is a Python library that orchestrates document extraction and document layout analysis tasks using deep learning models. It does not implement models itself; instead, it lets you build pipelines from widely acknowledged libraries for object detection, OCR, and selected NLP tasks, and provides an integrated framework for fine-tuning, evaluating, and running models. (For more specialized text processing, use one of the many other great NLP libraries.) **deep** doctection focuses on applications and is made for those who want to solve real-world problems related to document extraction from PDFs or scans in various image formats. It provides model wrappers of supported libraries for various tasks to be integrated into pipelines, and its core does not depend on any specific deep learning library. Selected models are currently supported for the following tasks:
* Document layout analysis, including table recognition, in Tensorflow with **Tensorpack** or in PyTorch with **Detectron2**.
* OCR with support for **Tesseract**, **DocTr** (Tensorflow and PyTorch implementations available), and a wrapper to an API for a commercial solution.
* Text mining for native PDFs with **pdfplumber**.
* Language detection with **fastText**.
* Deskewing and rotating images with **jdeskew**.
* Document and token classification with all LayoutLM models provided by the **Transformers** library (yes, you can use any LayoutLM model with any of the provided OCR or pdfplumber tools straight away).
* Table detection and table structure recognition with **table-transformer**.
* A small dataset for token classification, plus a lot of new tutorials showing how to train and evaluate on this dataset with LayoutLMv1, LayoutLMv2, LayoutXLM, and LayoutLMv3.
* Comprehensive configuration of the **analyzer**, such as choosing different models, output parsing, and OCR selection; check the notebook or the docs for more info.
* Document layout analysis and table recognition now also run with **Torchscript** (CPU), so **Detectron2** is no longer required for basic inference.
* [**new**] More angle predictors for determining the rotation of a document, based on **Tesseract** and **DocTr** (not contained in the built-in analyzer).
* [**new**] Token classification with **LiLT** via **transformers**: a model wrapper for LiLT token classification has been added, along with some promising LiLT models in the model catalog, especially for training on non-English data. The LayoutLM training script can be used for LiLT as well, and a notebook on training with a custom dataset will be provided soon.

On top of that, **deep** doctection provides methods for pre-processing model inputs, like cropping or resizing, and for post-processing results, like validating duplicate outputs, relating words to detected layout segments, or ordering words into contiguous text. You get output in JSON format that you can customize even further yourself. Have a look at the **introduction notebook** in the notebook repo for an easy start, and check the **release notes** for recent updates. **deep** doctection and its support libraries provide pre-trained models that are in most cases available on the **Hugging Face Model Hub** or are downloaded automatically once requested; for instance, you can find pre-trained object detection models from the Tensorpack or Detectron2 frameworks for coarse layout analysis, table cell detection, and table recognition.
Training is a substantial part of getting pipelines ready for a specific domain, be it document layout analysis, document classification, or NER. **deep** doctection provides training scripts for models that are based on the trainers developed by the library that hosts the model code. Moreover, **deep** doctection hosts code for some well-established datasets like **Publaynet**, which makes it easy to experiment. It also contains mappings from widely used data formats like COCO and has a dataset framework (akin to **datasets**), so that setting up training on a custom dataset becomes very easy; **this notebook** shows you how to do it. **deep** doctection also comes equipped with a framework for evaluating the predictions of one or more models in a pipeline against ground truth; check again **here** how it is done. Having set up a pipeline, it takes only a few lines of code to instantiate it, and after a for loop all pages will have been processed through the pipeline, as in the sketch below.
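A minimal sketch of those few lines, instantiating the built-in analyzer and iterating over the processed pages; it follows the pattern from the introduction notebook, and attribute names such as page.text may differ slightly between releases:

```python
import deepdoctection as dd

# Build the default analyzer pipeline (layout analysis, table recognition, OCR).
analyzer = dd.get_dd_analyzer()

# Run it over a document; analyze returns a dataflow of pages.
df = analyzer.analyze(path="path/to/document.pdf")
df.reset_state()  # required before iterating

for page in df:
    # Text in reading order; the full page object can also be exported as JSON.
    print(page.text)
```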
persian-license-plate-recognition
The Persian License Plate Recognition (PLPR) system is a state-of-the-art solution designed for detecting and recognizing Persian license plates in images and video streams. Leveraging advanced deep learning models and a user-friendly interface, it ensures reliable performance across different scenarios. The system offers advanced detection using YOLOv5 models, precise recognition of Persian characters, real-time processing capabilities, and a user-friendly GUI. It is well-suited for applications in traffic monitoring, automated vehicle identification, and similar fields. The system's architecture includes modules for resident management, entrance management, and a detailed flowchart explaining the process from system initialization to displaying results in the GUI. Hardware requirements include an Intel Core i5 processor, 8 GB RAM, a dedicated GPU with at least 4 GB VRAM, and an SSD with 20 GB of free space. The system can be installed by cloning the repository and installing required Python packages. Users can customize the video source for processing and run the application to upload and process images or video streams. The system's GUI allows for parameter adjustments to optimize performance, and the Wiki provides in-depth information on the system's architecture and model training.
litdata
LitData is a tool designed for blazingly fast, distributed streaming of training data from any cloud storage. It allows users to transform and optimize data in cloud storage environments efficiently and intuitively, supporting various data types like images, text, video, audio, geo-spatial, and multimodal data. LitData integrates smoothly with frameworks such as LitGPT and PyTorch, enabling seamless streaming of data to multiple machines. Key features include multi-GPU/multi-node support, easy data mixing, pause & resume functionality, support for profiling, memory footprint reduction, cache size configuration, and on-prem optimizations. The tool also provides benchmarks for measuring streaming speed and conversion efficiency, along with runnable templates for different data types. LitData enables infinite cloud data processing by utilizing the Lightning.ai platform to scale data processing with optimized machines.
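As a rough sketch of the two-step workflow the description implies (optimize once, then stream), assuming the optimize / StreamingDataset / StreamingDataLoader entry points from the project README; the transform, paths, and chunk size below are placeholders:

```python
import litdata as ld

def to_sample(index):
    # Placeholder transform; in practice this would read and preprocess one raw record.
    return {"index": index, "tokens": list(range(index, index + 8))}

if __name__ == "__main__":
    # 1) Convert raw inputs into an optimized, chunked dataset (local dir or cloud path).
    ld.optimize(
        fn=to_sample,
        inputs=list(range(1_000)),
        output_dir="my_optimized_dataset",
        chunk_bytes="64MB",
    )

    # 2) Stream the optimized dataset during training; an s3:// or gs:// URI works the same way.
    dataset = ld.StreamingDataset("my_optimized_dataset", shuffle=True)
    loader = ld.StreamingDataLoader(dataset, batch_size=32)
    for batch in loader:
        pass  # training step goes here
```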
LLM-Finetuning-Toolkit
LLM Finetuning toolkit is a config-based CLI tool for launching a series of LLM fine-tuning experiments on your data and gathering their results. It allows users to control all elements of a typical experimentation pipeline - prompts, open-source LLMs, optimization strategy, and LLM testing - through a single YAML configuration file. The toolkit supports basic, intermediate, and advanced usage scenarios, enabling users to run custom experiments, conduct ablation studies, and automate fine-tuning workflows. It provides features for data ingestion, model definition, training, inference, quality assurance, and artifact outputs, making it a comprehensive tool for fine-tuning large language models.
neptune-client
Neptune is a scalable experiment tracker for teams training foundation models. Log millions of runs, effortlessly monitor and visualize model training, and deploy on your infrastructure. Track 100% of metadata to accelerate AI breakthroughs. Log and display any framework and metadata type from any ML pipeline. Organize experiments with nested structures and custom dashboards. Compare results, visualize training, and optimize models quicker. Version models, review stages, and access production-ready models. Share results, manage users, and projects. Integrate with 25+ frameworks. Trusted by great companies to improve workflow.
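To ground the description, a minimal logging sketch using the Neptune client's run API; the project name and token are placeholders and the logged values are dummies:

```python
import neptune

# Placeholders: set your own project and API token (or use the NEPTUNE_API_TOKEN env var).
run = neptune.init_run(project="my-workspace/my-project", api_token="YOUR_API_TOKEN")

# Log hyperparameters once and metrics over time.
run["parameters"] = {"learning_rate": 1e-3, "batch_size": 64}
for epoch in range(10):
    dummy_loss = 1.0 / (epoch + 1)  # stand-in for a real training loss
    run["train/loss"].append(dummy_loss)

run.stop()
```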
supervisely
Supervisely is a computer vision platform that provides a range of tools and services for developing and deploying computer vision solutions. It includes a data labeling platform, a model training platform, and a marketplace for computer vision apps. Supervisely is used by a variety of organizations, including Fortune 500 companies, research institutions, and government agencies.
20 - OpenAI GPTs
Corporate Trainer
Develops training programs, customizing content to fit corporate culture and objectives.
VBPS TigerBot
This is a customized chatbot that has been trained on the VBPS Employee Handbook, as well as the current teacher union contract.
The Learning Architect
An all-in-one, consultative L&D expert AI helping you build impactful, customized learning solutions for your organization.
Tattoo Ideas GPT
Helps design and customize tattoos, recommends artists, and provides aftercare advice.
Quick QR Art - QR Code AI Art Generator
Create, Customize, and Track Stunning QR Code Art with Our Free QR Code AI Art Generator. Seamlessly integrate these artistic codes into your marketing materials, packaging, and digital platforms.
Instant Command GPT
Executes tasks instantly via short commands, using a single session to customize commands.
GAPP STORE
Welcome to GAPP Store: Chat, create, customize—your all-in-one AI app universe
Sneaker Genius
Expert in sneaker customization, buying, and collecting, offering detailed advice on painting techniques and design inspiration.
Preference Card Estimator
Generates detailed orthopedic surgery cards using uploaded formats.
Vikas' Scripting Helper
Guides in creating, customizing Airtable scripts with user-friendly explanations.