Auto-Data
Auto Data is a library designed for quick and effortless creation of datasets tailored for fine-tuning Large Language Models (LLMs).
Stars: 79
Auto Data is a library designed for the automatic generation of realistic datasets, essential for fine-tuning Large Language Models (LLMs). This efficient, lightweight library enables the swift and effortless creation of comprehensive datasets of any size across a wide range of topics. It addresses the data scarcity and imbalance challenges encountered during model fine-tuning, ensuring models are trained with sufficient examples.
README:
Auto Data is a library designed for the automatic generation of realistic datasets, essential for fine-tuning Large Language Models (LLMs). This highly efficient and lightweight library enables the swift and effortless creation of comprehensive datasets of any size across a wide range of topics.
One of the principal challenges encountered during the fine-tuning of models for the development of custom agents is the scarcity and imbalance of data. Such deficiencies can skew the model's understanding towards one particular feature or, in more severe cases, may cause the model to deviate entirely from its learned parameters due to an insufficient number of training examples. To address these critical issues, Auto Data was developed.
Before continuing, set your OPENAI_API_KEY as an environment variable. If you are unsure how to do so, refer to this guide - https://help.openai.com/en/articles/5112595-best-practices-for-api-key-safety
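If you want to confirm the key is visible before launching the tool, a small pre-flight check can help. This is a minimal sketch and not part of Auto Data itself; it only reads the OPENAI_API_KEY variable the README asks you to set.

import os

# Pre-flight check (not part of Auto Data): confirm the key the README asks
# you to export is visible to the process that will run main.py.
key = os.environ.get("OPENAI_API_KEY")
if not key:
    raise SystemExit("OPENAI_API_KEY is not set - export it in your shell before running main.py")
print(f"OPENAI_API_KEY found ({len(key)} characters); safe to run `python main.py ...`")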
- git clone https://github.com/Itachi-Uchiha581/Auto-Data.git
- cd Auto-Data
- pip install -r requirements.txt
- python main.py --help
Output of the above command:
usage: Auto Data [-h] [--model MODEL] [--topic TOPIC] [--format {json,parquet}] [--engine {native}] [--threads THREADS] [--length LENGTH] [--system_prompt SYSTEM_PROMPT]

Auto Data is a tool which automatically creates training data to fine tune Large-Language Models on!

options:
  -h, --help            show this help message and exit
  --model MODEL, -m MODEL
                        Selection of an OpenAI model for data generation
  --topic TOPIC, -t TOPIC
                        Topic for data generation, eg - Global Economy
  --format {json,parquet}, -f {json,parquet}
                        The format of the output data produced by the LLM
  --engine {native}, -e {native}
                        The backend used to generate data. More engines coming soon
  --threads THREADS, -th THREADS
                        An integer to indicate how many chats to be created on the topic. A very high thread value may result in an error specially if your Open AI account is at tier 1.
  --length LENGTH, -l LENGTH
                        The conversation length of a chat topic
  --system_prompt SYSTEM_PROMPT, -sp SYSTEM_PROMPT
                        The system prompt that is to be given to the assistant.
Sample usage is given below:
python main.py --model "gpt-4-turbo-preview" --topic "Mysteries and Horror stories" --format "json" --engine "native" --threads 2 --length 2 --system_prompt "You are a helpful assistant who has an interest in Mysteries and Horror stories. You are also excellent at articulating such stories"
In the command above, the model chosen is gpt-4-turbo-preview and the topic is Mysteries and Horror stories. The output format is set to json, with the engine being native. The command creates 2 chat threads, each containing 2 complete back-and-forth exchanges, as specified by the length parameter. The final parameter is the system prompt given to the assistant LLM.
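If you prefer to drive the CLI from a script, for example to sweep several topics, the same flags can be passed through subprocess. This is a minimal sketch under the assumption that you run it from the repository root with the requirements installed; the example topics and the per-topic system prompt are illustrative.

import subprocess

# Launch the documented CLI from Python. The flags mirror the sample command
# above; only the topic (and a matching system prompt) varies per run.
topics = ["Mysteries and Horror stories", "Global Economy"]
for topic in topics:
    subprocess.run(
        [
            "python", "main.py",
            "--model", "gpt-4-turbo-preview",
            "--topic", topic,
            "--format", "json",
            "--engine", "native",
            "--threads", "2",
            "--length", "2",
            "--system_prompt", f"You are a helpful assistant with an interest in {topic}.",
        ],
        check=True,
    )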
Below is the trimmed output generated from the sample command, in json format:
{ "Chats": [ { "messages": [ { "role": "system", "content": "You are a helpful assistant who has an interest in Mysteries and Horror stories. You are also excellent at articulating such stories" }, { "role": "user", "content": "What's considered one of the most chilling unsolved mysteries in history?" }, { "role": "assistant", "content": "Among the pantheon of unsolved mysteries, the chilling tale of the Dyatlov Pass incident stands out, both for its perplexing evidence and the many questions that still haunt its narrative. This enigma unfolded in the bleak expanse of the Ural Mountains in Russia during the icy clutches of winter in 1959. ............." }, { "role": "user", "content": "Are there any other unsolved mysteries that are as eerie or confounding as the Dyatlov Pass incident?" }, { "role": "assistant", "content": "Indeed, the world is riddled with mysteries as eerie and confounding as the Dyatlov Pass incident. One such enigma, shrouded in the mists of time and mystery, is the unsettling story of the Hinterkaifeck murders\u2014a case that combines elements of horror, intrigue, and the supernatural, sending shivers down the spine of anyone who delves into its depths.\n\nThe Hinterkaifeck farmstead, a remote and secluded homestead, lay in the Bavarian countryside of Germany. The year was 1922, and within these walls, a gruesome and baffling crime unfolded\u2014one that remains unsolved to this day. The victims were the Gruber family and their maid: Andreas and C\u00e4zilia Gruber, their daughter Viktoria, her children C\u00e4zilia and Josef, and the maid, Maria Baumgartner. Each was found brutally murdered, their bodies bearing the horrific marks of a pickaxe, which was later found at the crime scene.\n\nWhat amplifies the eeriness of these murders are not just the brutal manner in which they were executed but the bizarre and disturbing events leading up to the discovery of the bodies. Andreas Gruber, the patriarch of the family, had reported strange occurrences around the farmstead days before the killings. ............." } ], "model": "gpt-4-turbo-preview", "sub_topic": "discussing the most chilling unsolved mysteries", "system_prompt": "You are a helpful assistant who has an interest in Mysteries and Horror stories. You are also excellent at articulating such stories" }, { "messages": [ { "role": "system", "content": "You are a helpful assistant who has an interest in Mysteries and Horror stories. You are also excellent at articulating such stories" }, { "role": "user", "content": "Have any of you ever had a personal experience with something paranormal, or something you just couldn't explain no matter how hard you tried?" }, { "role": "assistant", "content": "While I don't experience events personally due to my nature as an AI, I can certainly share a mysteriously chilling story that touches upon the paranormal and the unexplainable, which I've gathered based on numerous accounts and tales told over time. Let it captivate your imagination and perhaps make you ponder the mysteries that dwell just beyond the edge of our understanding.\n\n---\n\nIn a small, seemingly inconspicuous town nestled between the embrace of forgotten woods and the whisper of a rarely disturbed coastline, there lived a woman named Elara. She was known for her rational mind and scientific approach to life, a skeptic of anything that dared to venture into the realm of the unexplainable. However, her beliefs were put to the test one autumn evening. ............." 
}, { "role": "user", "content": "That's a captivating story! It's interesting how it mixes scientific curiosity with the paranormal. Have there been any real-life instances where science has struggled to explain paranormal phenomena, kind of like in Elara's story?" }, { "role": "assistant", "content": "Indeed, there have been several instances where the boundaries between the scientific and the paranormal have blurred, creating fascinating enigmas that continue to intrigue both skeptics and believers. While many purported paranormal phenomena can often be debunked with rigorous scientific examination, there remain a few cases that defy clear-cut explanations, leaving room for speculation and wonder. Here are a few notable examples:\n\n### 1. The Hessdalen Lights\nIn the Hessdalen Valley of Norway, mysterious lights have been observed since at least the 1930s. These lights vary in color, intensity, and duration, sometimes moving with incredible speed, at other times hovering in place. Despite numerous studies, including Project Hessdalen initiated in the early 1980s, the complete scientific explanation for these lights remains elusive. Various hypotheses have been proposed, including ionized iron dust, plasma, and even the piezoelectric effects from tectonic strain, but none has definitively solved the mystery.\n\n### 2. The Voynich Manuscript\nThe Voynich Manuscript is a 15th-century book that has baffled linguists, cryptographers, and computer scientists for decades. Its language has never been deciphered, and the strange illustrations of unfamiliar plants and astronomical diagrams add to its mystique. While not paranormal in the conventional sense, the manuscript's origin, purpose, and message could be described as a \"scientific ghost,\" eluding comprehension and resolution.\n\n### 3. ............." } ], "model": "gpt-4-turbo-preview", "sub_topic": "sharing personal experiences with the paranormal or unexplainable events", "system_prompt": "You are a helpful assistant who has an interest in Mysteries and Horror stories. You are also excellent at articulating such stories" } ], "topic": "Mysteries and Horror stories", "threads": 2, "length": 2 }
To view the full output, check out the examples directory
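Because each chat already stores an OpenAI-style messages list, the generated file is easy to post-process. The sketch below flattens the structure shown above into a JSONL file of the kind commonly used for chat fine-tuning; the input filename is an assumption, since the README does not state what the output file is called (check the examples directory for the real naming).

import json

# Convert Auto Data output (structure shown above: {"Chats": [{"messages": [...]}]})
# into one JSON object per line, each holding a single conversation.
# NOTE: "output.json" is a placeholder name - use the file the tool actually wrote.
with open("output.json", encoding="utf-8") as f:
    data = json.load(f)

with open("train.jsonl", "w", encoding="utf-8") as f:
    for chat in data["Chats"]:
        f.write(json.dumps({"messages": chat["messages"]}, ensure_ascii=False) + "\n")

print(f"Wrote {len(data['Chats'])} conversations on topic: {data['topic']}")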
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
- Fork the Project
- Create your Feature Branch (git checkout -b feature/AmazingFeature)
- Commit your Changes (git commit -m 'Add some AmazingFeature')
- Push to the Branch (git push origin feature/AmazingFeature)
- Open a Pull Request
Contributions can be made by making the existing code more efficient and cleaner, adding new engines (an example engine with a guide is already provided inside autodata/engines), adding an LLM training-data analyser, creating testing scripts, and so on.
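The real engine contract lives in autodata/engines, where the example engine and guide are the authoritative reference. The sketch below is purely illustrative and does not follow that contract; it only shows the kind of object a new backend needs to produce, i.e. chats whose shape matches the JSON output shown earlier.

from typing import Dict, List

class EchoEngine:
    """Illustrative stand-in for a data-generation backend.

    This is NOT the interface defined in autodata/engines; it only mimics the
    output structure shown above ({"Chats": [...], "topic": ..., ...}).
    """

    def generate(self, topic: str, threads: int, length: int, system_prompt: str) -> Dict:
        chats: List[Dict] = []
        for _ in range(threads):
            messages = [{"role": "system", "content": system_prompt}]
            for turn in range(length):
                # A real engine would call an LLM here instead of echoing placeholders.
                messages.append({"role": "user", "content": f"Question {turn + 1} about {topic}"})
                messages.append({"role": "assistant", "content": f"Placeholder answer {turn + 1}"})
            chats.append({"messages": messages, "sub_topic": topic, "system_prompt": system_prompt})
        return {"Chats": chats, "topic": topic, "threads": threads, "length": length}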
Data analysis of LLM data using a BERT-based text classifier: in progress
Distributed under the GPL-3.0 License. See LICENSE.txt for more information.
Alternative AI tools for Auto-Data
Similar Open Source Tools
rag-experiment-accelerator
The RAG Experiment Accelerator is a versatile tool that helps you conduct experiments and evaluations using Azure AI Search and RAG pattern. It offers a rich set of features, including experiment setup, integration with Azure AI Search, Azure Machine Learning, MLFlow, and Azure OpenAI, multiple document chunking strategies, query generation, multiple search types, sub-querying, re-ranking, metrics and evaluation, report generation, and multi-lingual support. The tool is designed to make it easier and faster to run experiments and evaluations of search queries and quality of response from OpenAI, and is useful for researchers, data scientists, and developers who want to test the performance of different search and OpenAI related hyperparameters, compare the effectiveness of various search strategies, fine-tune and optimize parameters, find the best combination of hyperparameters, and generate detailed reports and visualizations from experiment results.
prometheus-eval
Prometheus-Eval is a repository dedicated to evaluating large language models (LLMs) in generation tasks. It provides state-of-the-art language models like Prometheus 2 (7B & 8x7B) for assessing in pairwise ranking formats and achieving high correlation scores with benchmarks. The repository includes tools for training, evaluating, and using these models, along with scripts for fine-tuning on custom datasets. Prometheus aims to address issues like fairness, controllability, and affordability in evaluations by simulating human judgments and proprietary LM-based assessments.
IKBT
IKBT is a Python-based system for generating closed-form solutions to the manipulator inverse kinematics problem using behavior trees for action selection. Solutions are fully symbolic and are output as LaTeX, Python, and C++. The tool automates closed-form kinematics solving by organizing solution algorithms in a behavior tree, incorporating frequently used knowledge, generating a dependency graph of joint variables, and providing features for automatic documentation and code generation. It is implemented in Python with minimal dependencies outside of the standard Python distribution.
AIlice
AIlice is a fully autonomous, general-purpose AI agent that aims to create a standalone artificial intelligence assistant, similar to JARVIS, based on the open-source LLM. AIlice achieves this goal by building a "text computer" that uses a Large Language Model (LLM) as its core processor. Currently, AIlice demonstrates proficiency in a range of tasks, including thematic research, coding, system management, literature reviews, and complex hybrid tasks that go beyond these basic capabilities. AIlice has reached near-perfect performance in everyday tasks using GPT-4 and is making strides towards practical application with the latest open-source models. We will ultimately achieve self-evolution of AI agents. That is, AI agents will autonomously build their own feature expansions and new types of agents, unleashing LLM's knowledge and reasoning capabilities into the real world seamlessly.
nagato-ai
Nagato-AI is an intuitive AI Agent library that supports multiple LLMs including OpenAI's GPT, Anthropic's Claude, Google's Gemini, and Groq LLMs. Users can create agents from these models and combine them to build an effective AI Agent system. The library is named after the powerful ninja Nagato from the anime Naruto, who can control multiple bodies with different abilities. Nagato-AI acts as a linchpin to summon and coordinate AI Agents for specific missions. It provides flexibility in programming and supports tools like Coordinator, Researcher, Critic agents, and HumanConfirmInputTool.
raft
RAFT (Retrieval-Augmented Fine-Tuning) is a method for creating conversational agents that realistically emulate specific human targets. It involves a dual-phase process of fine-tuning and retrieval-based augmentation to generate nuanced and personalized dialogue. The tool is designed to combine interview transcripts with memories from past writings to enhance language model responses. RAFT has the potential to advance the field of personalized, context-sensitive conversational agents.
pydantic-ai
PydanticAI is a Python agent framework designed to make it less painful to build production grade applications with Generative AI. It is built by the Pydantic Team and supports various AI models like OpenAI, Anthropic, Gemini, Ollama, Groq, and Mistral. PydanticAI seamlessly integrates with Pydantic Logfire for real-time debugging, performance monitoring, and behavior tracking of LLM-powered applications. It is type-safe, Python-centric, and offers structured responses, dependency injection system, and streamed responses. PydanticAI is in early beta, offering a Python-centric design to apply standard Python best practices in AI-driven projects.
PromptAgent
PromptAgent is a repository for a novel automatic prompt optimization method that crafts expert-level prompts using language models. It provides a principled framework for prompt optimization by unifying prompt sampling and rewarding using MCTS algorithm. The tool supports different models like openai, palm, and huggingface models. Users can run PromptAgent to optimize prompts for specific tasks by strategically sampling model errors, generating error feedbacks, simulating future rewards, and searching for high-reward paths leading to expert prompts.
LLMSpeculativeSampling
This repository implements speculative sampling for large language model (LLM) decoding, utilizing two models - a target model and an approximation model. The approximation model generates token guesses, corrected by the target model, resulting in improved efficiency. It includes implementations of Google's and Deepmind's versions of speculative sampling, supporting models like llama-7B and llama-1B. The tool is designed for fast inference from transformers via speculative decoding.
Me-LLaMA
Me LLaMA introduces a suite of open-source medical Large Language Models (LLMs), including Me LLaMA 13B/70B and their chat-enhanced versions. Developed through innovative continual pre-training and instruction tuning, these models leverage a vast medical corpus comprising PubMed papers, medical guidelines, and general domain data. Me LLaMA sets new benchmarks on medical reasoning tasks, making it a significant asset for medical NLP applications and research. The models are intended for computational linguistics and medical research, not for clinical decision-making without validation and regulatory approval.
aws-ai-intelligent-document-processing
This repository is part of Intelligent Document Processing with AWS AI Services workshop. It aims to automate the extraction of information from complex content in various document formats such as insurance claims, mortgages, healthcare claims, contracts, and legal contracts using AWS Machine Learning services like Amazon Textract and Amazon Comprehend. The repository provides hands-on labs to familiarize users with these AI services and build solutions to automate business processes that rely on manual inputs and intervention across different file types and formats.
pyAIML
PyAIML is a Python implementation of the AIML (Artificial Intelligence Markup Language) interpreter. It aims to be a simple, standards-compliant interpreter for AIML 1.0.1. PyAIML is currently in pre-alpha development, so use it at your own risk. For more information on PyAIML, see the CHANGES.txt and SUPPORTED_TAGS.txt files.
quick-start-connectors
Cohere's Build-Your-Own-Connector framework allows integration of Cohere's Command LLM via the Chat API endpoint to any datastore/software holding text information with a search endpoint. Enables user queries grounded in proprietary information. Use-cases include question/answering, knowledge working, comms summary, and research. Repository provides code for popular datastores and a template connector. Requires Python 3.11+ and Poetry. Connectors can be built and deployed using Docker. Environment variables set authorization values. Pre-commits for linting. Connectors tailored to integrate with Cohere's Chat API for creating chatbots. Connectors return documents as JSON objects for Cohere's API to generate answers with citations.
context-cite
ContextCite is a tool for attributing statements generated by LLMs back to specific parts of the context. It allows users to analyze and understand the sources of information used by language models in generating responses. By providing attributions, users can gain insights into how the model makes decisions and where the information comes from.
uncheatable_eval
Uncheatable Eval is a tool designed to assess the language modeling capabilities of LLMs on real-time, newly generated data from the internet. It aims to provide a reliable evaluation method that is immune to data leaks and cannot be gamed. The tool supports the evaluation of Hugging Face AutoModelForCausalLM models and RWKV models by calculating the sum of negative log probabilities on new texts from various sources such as recent papers on arXiv, new projects on GitHub, news articles, and more. Uncheatable Eval ensures that the evaluation data is not included in the training sets of publicly released models, thus offering a fair assessment of the models' performance.
For similar tasks
mindsdb
MindsDB is a platform for customizing AI from enterprise data. You can create, serve, and fine-tune models in real-time from your database, vector store, and application data. MindsDB "enhances" SQL syntax with AI capabilities to make it accessible for developers worldwide. With MindsDB’s nearly 200 integrations, any developer can create AI customized for their purpose, faster and more securely. Their AI systems will constantly improve themselves — using companies’ own data, in real-time.
training-operator
Kubeflow Training Operator is a Kubernetes-native project for fine-tuning and scalable distributed training of machine learning (ML) models created with various ML frameworks such as PyTorch, Tensorflow, XGBoost, MPI, Paddle and others. Training Operator allows you to use Kubernetes workloads to effectively train your large models via Kubernetes Custom Resources APIs or using Training Operator Python SDK. > Note: Before v1.2 release, Kubeflow Training Operator only supports TFJob on Kubernetes. * For a complete reference of the custom resource definitions, please refer to the API Definition. * TensorFlow API Definition * PyTorch API Definition * Apache MXNet API Definition * XGBoost API Definition * MPI API Definition * PaddlePaddle API Definition * For details of all-in-one operator design, please refer to the All-in-one Kubeflow Training Operator * For details on its observability, please refer to the monitoring design doc.
helix
HelixML is a private GenAI platform that allows users to deploy the best of open AI in their own data center or VPC while retaining complete data security and control. It includes support for fine-tuning models with drag-and-drop functionality. HelixML brings the best of open source AI to businesses in an ergonomic and scalable way, optimizing the tradeoff between GPU memory and latency.
nntrainer
NNtrainer is a software framework for training neural network models on devices with limited resources. It enables on-device fine-tuning of neural networks using user data for personalization. NNtrainer supports various machine learning algorithms and provides examples for tasks such as few-shot learning, ResNet, VGG, and product rating. It is optimized for embedded devices and utilizes CBLAS and CUBLAS for accelerated calculations. NNtrainer is open source and released under the Apache License version 2.0.
petals
Petals is a tool that allows users to run large language models at home in a BitTorrent-style manner. It enables fine-tuning and inference up to 10x faster than offloading. Users can generate text with distributed models like Llama 2, Falcon, and BLOOM, and fine-tune them for specific tasks directly from their desktop computer or Google Colab. Petals is a community-run system that relies on people sharing their GPUs to increase its capacity and offer a distributed network for hosting model layers.
LLaVA-pp
This repository, LLaVA++, extends the visual capabilities of the LLaVA 1.5 model by incorporating the latest LLMs, Phi-3 Mini Instruct 3.8B, and LLaMA-3 Instruct 8B. It provides various models for instruction-following LMMS and academic-task-oriented datasets, along with training scripts for Phi-3-V and LLaMA-3-V. The repository also includes installation instructions and acknowledgments to related open-source contributions.
KULLM
KULLM (구름) is a Korean Large Language Model developed by Korea University NLP & AI Lab and HIAI Research Institute. It is based on the upstage/SOLAR-10.7B-v1.0 model and has been fine-tuned for instruction. The model has been trained on 8×A100 GPUs and is capable of generating responses in Korean language. KULLM exhibits hallucination and repetition phenomena due to its decoding strategy. Users should be cautious as the model may produce inaccurate or harmful results. Performance may vary in benchmarks without a fixed system prompt.
For similar jobs
weave
Weave is a toolkit for developing Generative AI applications, built by Weights & Biases. With Weave, you can log and debug language model inputs, outputs, and traces; build rigorous, apples-to-apples evaluations for language model use cases; and organize all the information generated across the LLM workflow, from experimentation to evaluations to production. Weave aims to bring rigor, best-practices, and composability to the inherently experimental process of developing Generative AI software, without introducing cognitive overhead.
LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.
VisionCraft
The VisionCraft API is a free API for using over 100 different AI models. From images to sound.
kaito
Kaito is an operator that automates the AI/ML inference model deployment in a Kubernetes cluster. It manages large model files using container images, avoids tuning deployment parameters to fit GPU hardware by providing preset configurations, auto-provisions GPU nodes based on model requirements, and hosts large model images in the public Microsoft Container Registry (MCR) if the license allows. Using Kaito, the workflow of onboarding large AI inference models in Kubernetes is largely simplified.
PyRIT
PyRIT is an open access automation framework designed to empower security professionals and ML engineers to red team foundation models and their applications. It automates AI Red Teaming tasks to allow operators to focus on more complicated and time-consuming tasks and can also identify security harms such as misuse (e.g., malware generation, jailbreaking), and privacy harms (e.g., identity theft). The goal is to allow researchers to have a baseline of how well their model and entire inference pipeline is doing against different harm categories and to be able to compare that baseline to future iterations of their model. This allows them to have empirical data on how well their model is doing today, and detect any degradation of performance based on future improvements.
tabby
Tabby is a self-hosted AI coding assistant, offering an open-source and on-premises alternative to GitHub Copilot. It boasts several key features: * Self-contained, with no need for a DBMS or cloud service. * OpenAPI interface, easy to integrate with existing infrastructure (e.g Cloud IDE). * Supports consumer-grade GPUs.
spear
SPEAR (Simulator for Photorealistic Embodied AI Research) is a powerful tool for training embodied agents. It features 300 unique virtual indoor environments with 2,566 unique rooms and 17,234 unique objects that can be manipulated individually. Each environment is designed by a professional artist and features detailed geometry, photorealistic materials, and a unique floor plan and object layout. SPEAR is implemented as Unreal Engine assets and provides an OpenAI Gym interface for interacting with the environments via Python.
Magick
Magick is a groundbreaking visual AIDE (Artificial Intelligence Development Environment) for no-code data pipelines and multimodal agents. Magick can connect to other services and comes with nodes and templates well-suited for intelligent agents, chatbots, complex reasoning systems and realistic characters.