Auto-Data
Auto Data is a library designed for quick and effortless creation of datasets tailored for fine-tuning Large Language Models (LLMs).
Stars: 79
Auto Data is a library designed for the automatic generation of realistic datasets, essential for the fine-tuning of Large Language Models (LLMs). This highly efficient and lightweight library enables the swift and effortless creation of comprehensive datasets across various topics, regardless of their size. It addresses challenges encountered during model fine-tuning due to data scarcity and imbalance, ensuring models are trained with sufficient examples.
README:
Auto Data is a library designed for the automatic generation of realistic datasets, essential for the fine-tuning of Large Language Models (LLMs). This highly efficient and lightweight library enables the swift and effortless creation of comprehensive datasets across various topics, regardless of their size.
One of the principal challenges encountered when fine-tuning models to build custom agents is the scarcity and imbalance of data. Such deficiencies can skew the model's understanding towards one particular feature or, in more severe cases, cause the model to deviate entirely from its learned parameters due to an insufficient number of training examples. Auto Data was developed to address these critical issues.
Before continuing, set OPENAI_API_KEY as an environment variable. If you are unsure how to do so, refer to this guide - https://help.openai.com/en/articles/5112595-best-practices-for-api-key-safety
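As a quick sanity check (not a step Auto Data itself requires), the following minimal Python sketch confirms the key is visible in the environment:

import os

# Fail fast if the key was not exported in the current shell.
assert os.environ.get("OPENAI_API_KEY"), "OPENAI_API_KEY is not set; export it before running main.py"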
git clone https://github.com/Itachi-Uchiha581/Auto-Data.git
cd Auto-Data
pip install -r requirements.txt
python main.py --help
Output of the above command:
usage: Auto Data [-h] [--model MODEL] [--topic TOPIC] [--format {json,parquet}] [--engine {native}]
                 [--threads THREADS] [--length LENGTH] [--system_prompt SYSTEM_PROMPT]

Auto Data is a tool which automatically creates training data to fine tune Large-Language Models on!

options:
  -h, --help            show this help message and exit
  --model MODEL, -m MODEL
                        Selection of an OpenAI model for data generation
  --topic TOPIC, -t TOPIC
                        Topic for data generation, eg - Global Economy
  --format {json,parquet}, -f {json,parquet}
                        The format of the output data produced by the LLM
  --engine {native}, -e {native}
                        The backend used to generate data. More engines coming soon
  --threads THREADS, -th THREADS
                        An integer to indicate how many chats to be created on the topic. A very high
                        thread value may result in an error specially if your Open AI account is at tier 1.
  --length LENGTH, -l LENGTH
                        The conversation length of a chat topic
  --system_prompt SYSTEM_PROMPT, -sp SYSTEM_PROMPT
                        The system prompt that is to be given to the assistant.
Sample usage is given below:
python main.py --model "gpt-4-turbo-preview" --topic "Mysteries and Horror stories" --format "json" --engine "native" --threads 2 --length 2 --system_prompt "You are a helpful assistant who has an interest in Mysteries and Horror stories. You are also excellent at articulating such stories"
In the command above, the chosen model is gpt-4-turbo-preview and the target topic is Mysteries and Horror stories. The output format is set to json and the engine employed is native. The threads value requests the creation of 2 chats on the topic, each containing 2 complete back-and-forth exchanges, as specified by the length parameter; each chat therefore holds one system message plus two user/assistant pairs. The final parameter serves as the system prompt for the assistant LLM.
Below is the trimmed output generated by the above command, in JSON format:
{ "Chats": [ { "messages": [ { "role": "system", "content": "You are a helpful assistant who has an interest in Mysteries and Horror stories. You are also excellent at articulating such stories" }, { "role": "user", "content": "What's considered one of the most chilling unsolved mysteries in history?" }, { "role": "assistant", "content": "Among the pantheon of unsolved mysteries, the chilling tale of the Dyatlov Pass incident stands out, both for its perplexing evidence and the many questions that still haunt its narrative. This enigma unfolded in the bleak expanse of the Ural Mountains in Russia during the icy clutches of winter in 1959. ............." }, { "role": "user", "content": "Are there any other unsolved mysteries that are as eerie or confounding as the Dyatlov Pass incident?" }, { "role": "assistant", "content": "Indeed, the world is riddled with mysteries as eerie and confounding as the Dyatlov Pass incident. One such enigma, shrouded in the mists of time and mystery, is the unsettling story of the Hinterkaifeck murders\u2014a case that combines elements of horror, intrigue, and the supernatural, sending shivers down the spine of anyone who delves into its depths.\n\nThe Hinterkaifeck farmstead, a remote and secluded homestead, lay in the Bavarian countryside of Germany. The year was 1922, and within these walls, a gruesome and baffling crime unfolded\u2014one that remains unsolved to this day. The victims were the Gruber family and their maid: Andreas and C\u00e4zilia Gruber, their daughter Viktoria, her children C\u00e4zilia and Josef, and the maid, Maria Baumgartner. Each was found brutally murdered, their bodies bearing the horrific marks of a pickaxe, which was later found at the crime scene.\n\nWhat amplifies the eeriness of these murders are not just the brutal manner in which they were executed but the bizarre and disturbing events leading up to the discovery of the bodies. Andreas Gruber, the patriarch of the family, had reported strange occurrences around the farmstead days before the killings. ............." } ], "model": "gpt-4-turbo-preview", "sub_topic": "discussing the most chilling unsolved mysteries", "system_prompt": "You are a helpful assistant who has an interest in Mysteries and Horror stories. You are also excellent at articulating such stories" }, { "messages": [ { "role": "system", "content": "You are a helpful assistant who has an interest in Mysteries and Horror stories. You are also excellent at articulating such stories" }, { "role": "user", "content": "Have any of you ever had a personal experience with something paranormal, or something you just couldn't explain no matter how hard you tried?" }, { "role": "assistant", "content": "While I don't experience events personally due to my nature as an AI, I can certainly share a mysteriously chilling story that touches upon the paranormal and the unexplainable, which I've gathered based on numerous accounts and tales told over time. Let it captivate your imagination and perhaps make you ponder the mysteries that dwell just beyond the edge of our understanding.\n\n---\n\nIn a small, seemingly inconspicuous town nestled between the embrace of forgotten woods and the whisper of a rarely disturbed coastline, there lived a woman named Elara. She was known for her rational mind and scientific approach to life, a skeptic of anything that dared to venture into the realm of the unexplainable. However, her beliefs were put to the test one autumn evening. ............." 
}, { "role": "user", "content": "That's a captivating story! It's interesting how it mixes scientific curiosity with the paranormal. Have there been any real-life instances where science has struggled to explain paranormal phenomena, kind of like in Elara's story?" }, { "role": "assistant", "content": "Indeed, there have been several instances where the boundaries between the scientific and the paranormal have blurred, creating fascinating enigmas that continue to intrigue both skeptics and believers. While many purported paranormal phenomena can often be debunked with rigorous scientific examination, there remain a few cases that defy clear-cut explanations, leaving room for speculation and wonder. Here are a few notable examples:\n\n### 1. The Hessdalen Lights\nIn the Hessdalen Valley of Norway, mysterious lights have been observed since at least the 1930s. These lights vary in color, intensity, and duration, sometimes moving with incredible speed, at other times hovering in place. Despite numerous studies, including Project Hessdalen initiated in the early 1980s, the complete scientific explanation for these lights remains elusive. Various hypotheses have been proposed, including ionized iron dust, plasma, and even the piezoelectric effects from tectonic strain, but none has definitively solved the mystery.\n\n### 2. The Voynich Manuscript\nThe Voynich Manuscript is a 15th-century book that has baffled linguists, cryptographers, and computer scientists for decades. Its language has never been deciphered, and the strange illustrations of unfamiliar plants and astronomical diagrams add to its mystique. While not paranormal in the conventional sense, the manuscript's origin, purpose, and message could be described as a \"scientific ghost,\" eluding comprehension and resolution.\n\n### 3. ............." } ], "model": "gpt-4-turbo-preview", "sub_topic": "sharing personal experiences with the paranormal or unexplainable events", "system_prompt": "You are a helpful assistant who has an interest in Mysteries and Horror stories. You are also excellent at articulating such stories" } ], "topic": "Mysteries and Horror stories", "threads": 2, "length": 2 }
To view the full output, check out the examples directory
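Because each chat already stores its messages in the OpenAI chat format, the output is straightforward to post-process. Below is a minimal sketch that converts a generated JSON file into the JSONL layout used for OpenAI fine-tuning; the filenames output.json and train.jsonl are assumptions for illustration, not names produced by Auto Data itself.

import json

# Load the dataset generated by Auto Data (filename assumed for illustration).
with open("output.json", "r", encoding="utf-8") as f:
    data = json.load(f)

# Each entry under "Chats" carries a ready-made "messages" list, so writing
# one {"messages": [...]} object per line yields a fine-tuning-ready JSONL file.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for chat in data["Chats"]:
        f.write(json.dumps({"messages": chat["messages"]}) + "\n")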
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
- Fork the Project
- Create your Feature Branch (git checkout -b feature/AmazingFeature)
- Commit your Changes (git commit -m 'Add some AmazingFeature')
- Push to the Branch (git push origin feature/AmazingFeature)
- Open a Pull Request
Contributions can be made by making the existing code more efficient and cleaner, adding new engines (an example engine with a guide is already given inside autodata/engines; a hypothetical sketch follows below), adding an LLM training data analyser, creating testing scripts, etc.
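For orientation only, here is a rough sketch of the shape a new engine might take. The class name and method signature are illustrative assumptions; the example engine inside autodata/engines is the authoritative reference for the actual interface.

# Hypothetical illustration only -- the real engine interface is defined by the
# example inside autodata/engines and may differ from this sketch.
class SketchEngine:
    def generate(self, topic, threads, length, system_prompt):
        """Return generated chats in the {"Chats": [...]} structure shown in the output above."""
        raise NotImplementedError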
Data Analysis of LLM Data using a BERT-based Text Classifier: In progress
Distributed under the GPL-3.0 License. See LICENSE.txt for more information.
Alternative AI tools for Auto-Data
Similar Open Source Tools
prometheus-eval
Prometheus-Eval is a repository dedicated to evaluating large language models (LLMs) in generation tasks. It provides state-of-the-art evaluator language models like Prometheus 2 (7B & 8x7B) for assessment in pairwise ranking formats, achieving high correlation scores with benchmarks. The repository includes tools for training, evaluating, and using these models, along with scripts for fine-tuning on custom datasets. Prometheus aims to address issues like fairness, controllability, and affordability in evaluations by simulating human judgments and proprietary LM-based assessments.
IKBT
IKBT is a Python-based system for generating closed-form solutions to the manipulator inverse kinematics problem using behavior trees for action selection. Solutions are fully symbolic and are output as LaTex, Python, and C++. The tool automates closed-form kinematics solving by organizing solution algorithms in a behavior tree, incorporating frequently used knowledge, generating a dependency graph of joint variables, and providing features for automatic documentation and code generation. It is implemented in Python with minimal dependencies outside of the standard Python distribution.
AIlice
AIlice is a fully autonomous, general-purpose AI agent that aims to create a standalone artificial intelligence assistant, similar to JARVIS, based on the open-source LLM. AIlice achieves this goal by building a "text computer" that uses a Large Language Model (LLM) as its core processor. Currently, AIlice demonstrates proficiency in a range of tasks, including thematic research, coding, system management, literature reviews, and complex hybrid tasks that go beyond these basic capabilities. AIlice has reached near-perfect performance in everyday tasks using GPT-4 and is making strides towards practical application with the latest open-source models. We will ultimately achieve self-evolution of AI agents. That is, AI agents will autonomously build their own feature expansions and new types of agents, unleashing LLM's knowledge and reasoning capabilities into the real world seamlessly.
raft
RAFT (Retrieval-Augmented Fine-Tuning) is a method for creating conversational agents that realistically emulate specific human targets. It involves a dual-phase process of fine-tuning and retrieval-based augmentation to generate nuanced and personalized dialogue. The tool is designed to combine interview transcripts with memories from past writings to enhance language model responses. RAFT has the potential to advance the field of personalized, context-sensitive conversational agents.
PromptAgent
PromptAgent is a repository for a novel automatic prompt optimization method that crafts expert-level prompts using language models. It provides a principled framework for prompt optimization by unifying prompt sampling and rewarding using MCTS algorithm. The tool supports different models like openai, palm, and huggingface models. Users can run PromptAgent to optimize prompts for specific tasks by strategically sampling model errors, generating error feedbacks, simulating future rewards, and searching for high-reward paths leading to expert prompts.
LLMSpeculativeSampling
This repository implements speculative sampling for large language model (LLM) decoding, utilizing two models - a target model and an approximation model. The approximation model generates token guesses, corrected by the target model, resulting in improved efficiency. It includes implementations of Google's and Deepmind's versions of speculative sampling, supporting models like llama-7B and llama-1B. The tool is designed for fast inference from transformers via speculative decoding.
aws-ai-intelligent-document-processing
This repository is part of Intelligent Document Processing with AWS AI Services workshop. It aims to automate the extraction of information from complex content in various document formats such as insurance claims, mortgages, healthcare claims, contracts, and legal contracts using AWS Machine Learning services like Amazon Textract and Amazon Comprehend. The repository provides hands-on labs to familiarize users with these AI services and build solutions to automate business processes that rely on manual inputs and intervention across different file types and formats.
uncheatable_eval
Uncheatable Eval is a tool designed to assess the language modeling capabilities of LLMs on real-time, newly generated data from the internet. It aims to provide a reliable evaluation method that is immune to data leaks and cannot be gamed. The tool supports the evaluation of Hugging Face AutoModelForCausalLM models and RWKV models by calculating the sum of negative log probabilities on new texts from various sources such as recent papers on arXiv, new projects on GitHub, news articles, and more. Uncheatable Eval ensures that the evaluation data is not included in the training sets of publicly released models, thus offering a fair assessment of the models' performance.
SciMLBenchmarks.jl
SciMLBenchmarks.jl holds webpages, pdfs, and notebooks showing the benchmarks for the SciML Scientific Machine Learning Software ecosystem, including: * Benchmarks of equation solver implementations * Speed and robustness comparisons of methods for parameter estimation / inverse problems * Training universal differential equations (and subsets like neural ODEs) * Training of physics-informed neural networks (PINNs) * Surrogate comparisons, including radial basis functions, neural operators (DeepONets, Fourier Neural Operators), and more The SciML Bench suite is made to be a comprehensive open source benchmark from the ground up, covering the methods of computational science and scientific computing all the way to AI for science.
llmops-promptflow-template
LLMOps with Prompt flow is a template and guidance for building LLM-infused apps using Prompt flow. It provides centralized code hosting, lifecycle management, variant and hyperparameter experimentation, A/B deployment, many-to-many dataset/flow relationships, multiple deployment targets, comprehensive reporting, BYOF capabilities, configuration-based development, local prompt experimentation and evaluation, endpoint testing, and optional Human-in-loop validation. The tool is customizable to suit various application needs.
airport-codes
The airport-codes repository contains a list of airport codes from around the world, including IATA and ICAO codes. The data is sourced from multiple different sources and is updated nightly. The repository provides a script to process the data and merge location coordinates. The data can be used for various purposes such as passenger reservation, ticketing, and ATC systems.
db-ally
db-ally is a library for creating natural language interfaces to data sources. It allows developers to outline specific use cases for a large language model (LLM) to handle, detailing the desired data format and the possible operations to fetch this data. db-ally effectively shields the complexity of the underlying data source from the model, presenting only the essential information needed for solving the specific use cases. Instead of generating arbitrary SQL, the model is asked to generate responses in a simplified query language.
minbpe
This repository contains a minimal, clean code implementation of the Byte Pair Encoding (BPE) algorithm, commonly used in LLM tokenization. The BPE algorithm is "byte-level" because it runs on UTF-8 encoded strings. This algorithm was popularized for LLMs by the GPT-2 paper and the associated GPT-2 code release from OpenAI. Sennrich et al. 2015 is cited as the original reference for the use of BPE in NLP applications. Today, all modern LLMs (e.g. GPT, Llama, Mistral) use this algorithm to train their tokenizers. There are two Tokenizers in this repository, both of which can perform the 3 primary functions of a Tokenizer: 1) train the tokenizer vocabulary and merges on a given text, 2) encode from text to tokens, 3) decode from tokens to text. The files of the repo are as follows: 1. minbpe/base.py: Implements the `Tokenizer` class, which is the base class. It contains the `train`, `encode`, and `decode` stubs, save/load functionality, and there are also a few common utility functions. This class is not meant to be used directly, but rather to be inherited from. 2. minbpe/basic.py: Implements the `BasicTokenizer`, the simplest implementation of the BPE algorithm that runs directly on text. 3. minbpe/regex.py: Implements the `RegexTokenizer` that further splits the input text by a regex pattern, which is a preprocessing stage that splits up the input text by categories (think: letters, numbers, punctuation) before tokenization. This ensures that no merges will happen across category boundaries. This was introduced in the GPT-2 paper and continues to be in use as of GPT-4. This class also handles special tokens, if any. 4. minbpe/gpt4.py: Implements the `GPT4Tokenizer`. This class is a light wrapper around the `RegexTokenizer` (2, above) that exactly reproduces the tokenization of GPT-4 in the tiktoken library. The wrapping handles some details around recovering the exact merges in the tokenizer, and the handling of some unfortunate (and likely historical?) 1-byte token permutations. Finally, the script train.py trains the two major tokenizers on the input text tests/taylorswift.txt (this is the Wikipedia entry for her kek) and saves the vocab to disk for visualization. This script runs in about 25 seconds on my (M1) MacBook. All of the files above are very short and thoroughly commented, and also contain a usage example on the bottom of the file.
matchem-llm
A public repository collecting links to state-of-the-art training sets, QA, benchmarks and other evaluations for various ML and LLM applications in materials science and chemistry. It includes datasets related to chemistry, materials, multimodal data, and knowledge graphs in the field. The repository aims to provide resources for training and evaluating machine learning models in the materials science and chemistry domains.
tldraw-llm-starter
This repository is a collection of demos showcasing how to integrate tldraw with an LLM like GPT-4. It serves as a work in progress for inspiration and experimentation. Users can contribute new demos, prompts, strategies, and models. The installation process involves running 'npm install' to install dependencies. Usage instructions include creating OpenAI API keys and assistants on the platform.openai.com website, as well as setting up a '.env' file with necessary credentials. The server can be started with 'npm run dev'. The repository aims to demonstrate the potential synergy between tldraw and GPT-4 for various applications.
For similar tasks
mindsdb
MindsDB is a platform for customizing AI from enterprise data. You can create, serve, and fine-tune models in real-time from your database, vector store, and application data. MindsDB "enhances" SQL syntax with AI capabilities to make it accessible for developers worldwide. With MindsDB’s nearly 200 integrations, any developer can create AI customized for their purpose, faster and more securely. Their AI systems will constantly improve themselves — using companies’ own data, in real-time.
training-operator
Kubeflow Training Operator is a Kubernetes-native project for fine-tuning and scalable distributed training of machine learning (ML) models created with various ML frameworks such as PyTorch, Tensorflow, XGBoost, MPI, Paddle and others. Training Operator allows you to use Kubernetes workloads to effectively train your large models via Kubernetes Custom Resources APIs or using Training Operator Python SDK. > Note: Before v1.2 release, Kubeflow Training Operator only supports TFJob on Kubernetes. * For a complete reference of the custom resource definitions, please refer to the API Definition. * TensorFlow API Definition * PyTorch API Definition * Apache MXNet API Definition * XGBoost API Definition * MPI API Definition * PaddlePaddle API Definition * For details of all-in-one operator design, please refer to the All-in-one Kubeflow Training Operator * For details on its observability, please refer to the monitoring design doc.
helix
HelixML is a private GenAI platform that allows users to deploy the best of open AI in their own data center or VPC while retaining complete data security and control. It includes support for fine-tuning models with drag-and-drop functionality. HelixML brings the best of open source AI to businesses in an ergonomic and scalable way, optimizing the tradeoff between GPU memory and latency.
nntrainer
NNtrainer is a software framework for training neural network models on devices with limited resources. It enables on-device fine-tuning of neural networks using user data for personalization. NNtrainer supports various machine learning algorithms and provides examples for tasks such as few-shot learning, ResNet, VGG, and product rating. It is optimized for embedded devices and utilizes CBLAS and CUBLAS for accelerated calculations. NNtrainer is open source and released under the Apache License version 2.0.
petals
Petals is a tool that allows users to run large language models at home in a BitTorrent-style manner. It enables fine-tuning and inference up to 10x faster than offloading. Users can generate text with distributed models like Llama 2, Falcon, and BLOOM, and fine-tune them for specific tasks directly from their desktop computer or Google Colab. Petals is a community-run system that relies on people sharing their GPUs to increase its capacity and offer a distributed network for hosting model layers.
LLaVA-pp
This repository, LLaVA++, extends the visual capabilities of the LLaVA 1.5 model by incorporating the latest LLMs, Phi-3 Mini Instruct 3.8B, and LLaMA-3 Instruct 8B. It provides various models for instruction-following LMMS and academic-task-oriented datasets, along with training scripts for Phi-3-V and LLaMA-3-V. The repository also includes installation instructions and acknowledgments to related open-source contributions.
KULLM
KULLM (구름) is a Korean Large Language Model developed by Korea University NLP & AI Lab and HIAI Research Institute. It is based on the upstage/SOLAR-10.7B-v1.0 model and has been fine-tuned for instruction. The model has been trained on 8×A100 GPUs and is capable of generating responses in Korean language. KULLM exhibits hallucination and repetition phenomena due to its decoding strategy. Users should be cautious as the model may produce inaccurate or harmful results. Performance may vary in benchmarks without a fixed system prompt.
For similar jobs
weave
Weave is a toolkit for developing Generative AI applications, built by Weights & Biases. With Weave, you can log and debug language model inputs, outputs, and traces; build rigorous, apples-to-apples evaluations for language model use cases; and organize all the information generated across the LLM workflow, from experimentation to evaluations to production. Weave aims to bring rigor, best-practices, and composability to the inherently experimental process of developing Generative AI software, without introducing cognitive overhead.
LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.
VisionCraft
The VisionCraft API is a free API that provides access to over 100 different AI models, spanning everything from images to sound.
kaito
Kaito is an operator that automates the AI/ML inference model deployment in a Kubernetes cluster. It manages large model files using container images, avoids tuning deployment parameters to fit GPU hardware by providing preset configurations, auto-provisions GPU nodes based on model requirements, and hosts large model images in the public Microsoft Container Registry (MCR) if the license allows. Using Kaito, the workflow of onboarding large AI inference models in Kubernetes is largely simplified.
PyRIT
PyRIT is an open access automation framework designed to empower security professionals and ML engineers to red team foundation models and their applications. It automates AI Red Teaming tasks to allow operators to focus on more complicated and time-consuming tasks and can also identify security harms such as misuse (e.g., malware generation, jailbreaking), and privacy harms (e.g., identity theft). The goal is to allow researchers to have a baseline of how well their model and entire inference pipeline is doing against different harm categories and to be able to compare that baseline to future iterations of their model. This allows them to have empirical data on how well their model is doing today, and detect any degradation of performance based on future improvements.
tabby
Tabby is a self-hosted AI coding assistant, offering an open-source and on-premises alternative to GitHub Copilot. It boasts several key features: * Self-contained, with no need for a DBMS or cloud service. * OpenAPI interface, easy to integrate with existing infrastructure (e.g Cloud IDE). * Supports consumer-grade GPUs.
spear
SPEAR (Simulator for Photorealistic Embodied AI Research) is a powerful tool for training embodied agents. It features 300 unique virtual indoor environments with 2,566 unique rooms and 17,234 unique objects that can be manipulated individually. Each environment is designed by a professional artist and features detailed geometry, photorealistic materials, and a unique floor plan and object layout. SPEAR is implemented as Unreal Engine assets and provides an OpenAI Gym interface for interacting with the environments via Python.
Magick
Magick is a groundbreaking visual AIDE (Artificial Intelligence Development Environment) for no-code data pipelines and multimodal agents. Magick can connect to other services and comes with nodes and templates well-suited for intelligent agents, chatbots, complex reasoning systems and realistic characters.