LLM-on-Tabular-Data-Prediction-Table-Understanding-Data-Generation
Repository for collecting and categorizing papers outlined in our survey paper: "Large Language Models on Tabular Data -- A Survey".
This repository serves as a comprehensive survey on the application of Large Language Models (LLMs) on tabular data, focusing on tasks such as prediction, data generation, and table understanding. It aims to consolidate recent progress in this field by summarizing key techniques, metrics, datasets, models, and optimization approaches. The survey identifies strengths, limitations, unexplored territories, and gaps in the existing literature, providing insights for future research directions. It also offers code and dataset references to empower readers with the necessary tools and knowledge to address challenges in this rapidly evolving domain.
Citation:
@article{fang2024large,
  title={Large Language Models ({LLM}s) on Tabular Data: Prediction, Generation, and Understanding - A Survey},
  author={Xi Fang and Weijie Xu and Fiona Anting Tan and Ziqing Hu and Jiani Zhang and Yanjun Qi and Srinivasan H. Sengamedu and Christos Faloutsos},
  journal={Transactions on Machine Learning Research},
  issn={2835-8856},
  year={2024},
  url={https://openreview.net/forum?id=IZnrCGF9WI}
}
This repo collects and categorizes papers about large language models on tabular data according to our survey paper, "Large Language Models on Tabular Data -- A Survey". Given the fast development of this field, we will continue to update both the arXiv paper and this repo.
Abstract
Recent breakthroughs in large language modeling have facilitated rigorous exploration of their application in diverse tasks related to tabular data modeling, such as prediction, tabular data synthesis, question answering, and table understanding. Each task presents unique challenges and opportunities. However, the field currently lacks a comprehensive review that summarizes and compares the key techniques, metrics, datasets, models, and optimization approaches in this research domain. This survey aims to address this gap by consolidating recent progress in these areas, offering a thorough survey and taxonomy of the datasets, metrics, and methodologies utilized. It identifies strengths, limitations, unexplored territories, and gaps in the existing literature, while providing some insights for future research directions in this vital and rapidly evolving field. It also provides references to relevant code and datasets. Through this comprehensive review, we hope to provide interested readers with pertinent references and insightful perspectives, empowering them with the necessary tools and knowledge to effectively navigate and address the prevailing challenges in the field.
Figure 1: Overview of LLMs on tabular data: the paper discusses the application of LLMs to prediction, data generation, and table understanding tasks.
Figure 4: Key techniques in using LLMs for tabular data. The dotted line indicates steps that are optional.
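One of the key techniques summarized in Figure 4 is serialization: converting a tabular record into text before passing it to an LLM. The sketch below illustrates this step in the spirit of few-shot tabular classifiers such as TabLLM; the column names, template wording, and label options are illustrative assumptions, not the exact format of any cited paper.

```python
# Sketch: serialize a tabular row into a text prompt for LLM-based
# classification. The "one sentence per column" template is one of
# several serialization styles discussed in the survey; the exact
# wording here is an illustrative assumption.

def serialize_row(row: dict) -> str:
    """Turn {column: value} into 'The <column> is <value>.' sentences."""
    return " ".join(f"The {col} is {val}." for col, val in row.items())

def build_prompt(row: dict, task: str, labels: list[str]) -> str:
    """Append a classification question and the allowed label options."""
    serialized = serialize_row(row)
    options = " or ".join(labels)
    return f"{serialized}\n{task} Answer with {options}.\nAnswer:"

# Hypothetical income-prediction row, in the style of the Adult dataset.
row = {"age": 39, "education": "Bachelors", "hours-per-week": 40}
prompt = build_prompt(row, "Does this person earn more than $50K per year?",
                      ["yes", "no"])
print(prompt)
```

The resulting string would then be sent to an LLM (optionally with a few serialized in-context examples), and the generated token is mapped back to a class label.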
Table of Contents:

Prediction
TABLET: Learning From Instructions For Tabular Data [code]
Language models are weak learners
LIFT: Language-Interfaced Fine-Tuning for Non-Language Machine Learning Tasks [code]
TabLLM: Few-shot Classification of Tabular Data with Large Language Models [code]
UniPredict: Large Language Models are Universal Tabular Classifiers
Towards Foundation Models for Learning on Tabular Data
Towards Better Serialization of Tabular Data for Few-shot Classification with Large Language Models
Multimodal Clinical Pseudo-notes for Emergency Department Prediction Tasks Using Multiple Embedding Model for EHR (MEME) [code]
StructLM: Towards Building Generalist Models for Structured Knowledge Grounding
UniTabE: A Universal Pretraining Protocol for Tabular Foundation Model in Data Science
Unleashing the Potential of Large Language Models for Predictive Tabular Tasks in Data Science [model]
Synthetic Oversampling: Theory and A Practical Approach Using LLMs to Address Data Imbalance
LLMs learn governing principles of dynamical systems, revealing an in-context neural scaling law
PromptCast: A New Prompt-based Learning Paradigm for Time Series Forecasting
Large Language Models Are Zero-Shot Time Series Forecasters
TEST: Text Prototype Aligned Embedding to Activate LLM's Ability for Time Series
Time-LLM: Time Series Forecasting by Reprogramming Large Language Models [code]
MediTab: Scaling Medical Tabular Data Predictors via Data Consolidation, Enrichment, and Refinement [code]
CPLLM: Clinical Prediction with Large Language Models [code]
CTRL: Connect Collaborative and Language Model for CTR Prediction
FinGPT: Open-Source Financial Large Language Models [code]
Data Generation
Language Models are Realistic Tabular Data Generators [code]
REaLTabFormer: Generating Realistic Relational and Tabular Data using Transformers
Generative Table Pre-training Empowers Models for Tabular Prediction [code]
TabuLa: Harnessing Language Models for Tabular Data Synthesis [code]
Curated LLM: Synergy of LLMs and Data Curation for tabular augmentation in ultra low-data regimes
TabMT: Generating tabular data with masked transformers
Elephants Never Forget: Testing Language Models for Memorization of Tabular Data
Graph-to-Text Generation with Dynamic Structure Pruning
Plan-then-Seam: Towards Efficient Table-to-Text Generation
Differentially Private Tabular Data Synthesis using Large Language Models
Pythia: Unsupervised Generation of Ambiguous Textual Claims from Relational Data
Table Understanding
TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT
Large Language Models are Versatile Decomposers: Decompose Evidence and Questions for Table-based Reasoning [code]
PACIFIC: Towards Proactive Conversational Question Answering over Tabular and Textual Data in Finance [code]
Large Language Models are few(1)-shot Table Reasoners [code]
cTBLS: Augmenting Large Language Models with Conversational Tables [code]
Large Language Models are Complex Table Parsers
Rethinking Tabular Data Understanding with Large Language Models [code]
TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT
Testing the Limits of Unified Sequence to Sequence LLM Pretraining on Diverse Table Data Tasks
Unified Language Representation for Question Answering over Text, Tables, and Images
SUQL: Conversational Search over Structured and Unstructured Data with Large Language Models [code]
TableLlama: Towards Open Large Generalist Models for Tables [code]
StructGPT: A General Framework for Large Language Model to Reason over Structured Data [code]
JarviX: A LLM No code Platform for Tabular Data Analysis and Optimization
CABINET: Content Relevance-based Noise Reduction for Table Question Answering [code]
Data-Copilot: Bridging Billions of Data and Humans with Autonomous Workflow [code]
Querying Large Language Models with SQL
Text-to-SQL Empowered by Large Language Models: A Benchmark Evaluation
DIN-SQL: Decomposed In-Context Learning of Text-to-SQL with Self-Correction [code]
C3: Zero-shot Text-to-SQL with ChatGPT [code]
DBCopilot: Scaling Natural Language Querying to Massive Databases [code]
Bridging the Gap: Deciphering Tabular Data Using Large Language Model
TableQuery: Querying tabular data with natural language [code]
S2SQL: Injecting Syntax to Question-Schema Interaction Graph Encoder for Text-to-SQL Parsers
Dynamic hybrid relation network for cross-domain context-dependent semantic parsing
STAR: SQL Guided Pre-Training for Context-dependent Text-to-SQL Parsing
SUN: Exploring Intrinsic Uncertainties in Text-to-SQL Parsers
Towards Generalizable and Robust Text-to-SQL Parsing
Before Generation, Align it! A Novel and Effective Strategy for Mitigating Hallucinations in Text-to-SQL Generation [code]
Robust (Controlled) Table-to-Text Generation with Structure-Aware Equivariance Learning [code]
Table-based Fact Verification with Salience-aware Learning [code]
Cocoon: Semantic Table Profiling Using Large Language Models [code]
Relationalizing Tables with Large Language Models: The Promise and Challenges
Disambiguate Entity Matching using Large Language Models through Relation Discovery [code]
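Several of the text-to-SQL works listed above (e.g. C3, DIN-SQL) prompt an LLM with a linearized database schema followed by the natural-language question. The sketch below shows one plausible prompt construction; the "table(col1, col2, ...)" schema linearization is a common convention, but the exact wording is an illustrative assumption rather than any paper's template.

```python
# Sketch: build a zero-shot text-to-SQL prompt from a table schema and a
# question. Schema format and instruction wording are illustrative
# assumptions, not the prompt of C3, DIN-SQL, or any other cited system.

def linearize_schema(tables: dict[str, list[str]]) -> str:
    """tables maps table name -> column names; one 'name(cols)' line each."""
    return "\n".join(f"{name}({', '.join(cols)})"
                     for name, cols in tables.items())

def text2sql_prompt(tables: dict[str, list[str]], question: str) -> str:
    return (
        "Given the database schema:\n"
        f"{linearize_schema(tables)}\n"
        f"Write a SQL query that answers: {question}\nSQL:"
    )

# Hypothetical schema in the style of a Spider example.
schema = {"singer": ["singer_id", "name", "country", "age"]}
print(text2sql_prompt(schema, "How many singers are from France?"))
```

The LLM's completion after "SQL:" is then parsed as the candidate query; systems like DIN-SQL add further stages (decomposition, self-correction) on top of this basic pattern.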
Datasets

Please refer to our paper for the methods benchmarked on these datasets.
| Dataset | Dataset Number | Dataset Repo |
|---|---|---|
| OpenML | 11 | https://github.com/UW-Madison-Lee-Lab/LanguageInterfacedFineTuning/tree/master/regression/realdata/data |
| Kaggle API | 169 | https://github.com/Kaggle/kaggle-api |
| Combo | 9 | https://github.com/clinicalml/TabLLM/tree/main/datasets |
| UCI ML | 20 | https://github.com/dylan-slack/Tablet/tree/main/data/benchmark/performance |
| DDX | 10 | https://github.com/dylan-slack/Tablet/tree/main/data/ddx_data_no_instructions/benchmark |
| Dataset | # Tables | Task Type | Input | Output | Data Source | Dataset Repo |
|---|---|---|---|---|---|---|
| FetaQA | 10330 | QA | Table, Question | Answer | Wikipedia | https://github.com/Yale-LILY/FeTaQA |
| WikiTableQuestion | 2108 | QA | Table, Question | Answer | Wikipedia | https://ppasupat.github.io/WikiTableQuestions/ |
| NQ-TABLES | 169898 | QA | Question, Table | Answer | Synthetic | https://github.com/google-research-datasets/natural-questions |
| HybriDialogue | 13000 | QA | Conversation, Table, Reference | Answer | Wikipedia | https://github.com/entitize/HybridDialogue |
| TAT-QA | 2757 | QA | Question, Table | Answer | Financial report | https://github.com/NExTplusplus/TAT-QA |
| HiTAB | 3597 | QA/NLG | Question, Table | Answer | Statistical Report and Wikipedia | https://github.com/microsoft/HiTab |
| ToTTo | 120000 | NLG | Table | Sentence | Wikipedia | https://github.com/google-research-datasets/ToTTo |
| FEVEROUS | 28800 | Classification | Claim, Table | Label | Common Crawl | https://fever.ai/dataset/feverous.html |
| Dresden Web Tables | 125M | Classification | Table | Label | Common Crawl | https://ppasupat.github.io/WikiTableQuestions/ |
| InfoTabs | 2540 | NLI | Table, Hypothesis | Label | Wikipedia | https://infotabs.github.io/ |
| TabFact | 16573 | NLI | Table, Statement | Label | Wikipedia | https://tabfact.github.io/ |
| TAPEX | 1500 | Text2SQL | SQL, Table | Answer | Synthetic | https://github.com/google-research/tapas |
| Spider | 1020 | Text2SQL | Table, Question | SQL | Human Annotated | https://drive.usercontent.google.com/download?id=1iRDVHLr4mX2wQKSgA9J8Pire73Jahh0m&export=download&authuser=0 |
| WIKISQL | 24241 | Text2SQL | Table, Question | SQL, Answer | Human Annotated | https://github.com/salesforce/WikiSQL |
| BIRD | 12751 | Text2SQL | Table, Question | SQL | Human Annotated | https://bird-bench.github.io/ |
| Tapilot-Crossing | 5 | Text2Code, QA, RAG | Table, Dialog History, Question, Private Lib, Chart | Python, Private Lib Code, Answer | Human-Agent Interaction | https://tapilot-crossing.github.io/ |
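For the QA-style datasets above, predicted answers are commonly scored with exact match and token-level F1. A minimal, self-contained sketch of both metrics follows; the normalization here is deliberately simple, whereas official evaluation scripts typically also strip punctuation and articles.

```python
# Sketch: exact-match and token-level F1 scoring for table QA answers.
# Normalization is intentionally minimal and is an illustrative
# assumption, not any benchmark's official evaluation script.
from collections import Counter

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace."""
    return " ".join(text.lower().split())

def exact_match(pred: str, gold: str) -> float:
    return float(normalize(pred) == normalize(gold))

def token_f1(pred: str, gold: str) -> float:
    p, g = normalize(pred).split(), normalize(gold).split()
    # Multiset intersection counts shared tokens with multiplicity.
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Paris", "paris"))         # case-insensitive match
print(token_f1("in Paris France", "Paris"))  # partial-overlap credit
```

Text2SQL datasets such as Spider and BIRD are instead usually scored by execution accuracy (running predicted and gold SQL and comparing result sets), which requires the underlying databases and is not sketched here.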
A Survey on Text-to-SQL Parsing: Concepts, Methods, and Future Directions
If you would like to contribute to this list or writeup, feel free to submit a pull request!