Transformers_And_LLM_Are_What_You_Dont_Need
The best repository showing why transformers might not be the answer for time series forecasting, and showcasing SOTA non-transformer models.
Stars: 470
Transformers_And_LLM_Are_What_You_Dont_Need is a repository that explores the limitations of transformers in time series forecasting. It contains a collection of papers, articles, and theses discussing the effectiveness of transformers and LLMs in this domain. The repository aims to provide insights into why transformers may not be the best choice for time series forecasting tasks.
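The repository's central evidence comes from Zeng et al.'s "Are Transformers Effective for Time Series Forecasting?" (the first entry in the README below), which showed that a one-layer linear model per decomposed component matches or beats elaborate transformer forecasters on the standard long-horizon benchmarks. The following is a minimal DLinear-style sketch of that baseline, not the repository's or the paper's code; the class name, kernel size, and shapes are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearBaseline(nn.Module):
    """DLinear-style baseline in the spirit of Zeng et al. (2022):
    split each channel into trend (moving average) and remainder,
    then map each part to the forecast horizon with a single linear
    layer. Kernel size and dimensions are illustrative defaults."""

    def __init__(self, input_len: int, horizon: int, kernel: int = 25):
        super().__init__()
        self.kernel = kernel
        self.trend = nn.Linear(input_len, horizon)
        self.season = nn.Linear(input_len, horizon)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, input_len, channels) -> (batch, horizon, channels)
        x = x.transpose(1, 2)                      # (batch, channels, len)
        pad = (self.kernel - 1) // 2
        xp = F.pad(x, (pad, self.kernel - 1 - pad), mode="replicate")
        trend = F.avg_pool1d(xp, self.kernel, stride=1)  # same-length smoothing
        out = self.trend(trend) + self.season(x - trend)
        return out.transpose(1, 2)

model = LinearBaseline(input_len=336, horizon=96)
forecast = model(torch.randn(32, 336, 7))          # -> (32, 96, 7)
```

Because the whole model is two linear maps over the input window, it trains in seconds, which makes the benchmark comparison with multi-million-parameter transformers all the more pointed.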
README:
The best repository showing why transformers don't work in time series forecasting
- Are Transformers Effective for Time Series Forecasting? by Ailing Zeng, Muxi Chen, Lei Zhang, Qiang Xu (The Chinese University of Hong Kong, International Digital Economy Academy (IDEA), 2022) code 🔥🔥🔥🔥🔥
- LLMs and foundational models for time series forecasting: They are not (yet) as good as you may hope by Christoph Bergmeir (2023) 🔥🔥🔥🔥🔥
- Transformers Are What You Do Not Need by Valeriy Manokhin (2023) 🔥🔥🔥🔥🔥
- Deep Learning is What You Do Not Need by Valeriy Manokhin (2022) 🔥🔥🔥🔥🔥
- Why do Transformers suck at Time Series Forecasting by Devansh (2023)
- Frequency-domain MLPs are More Effective Learners in Time Series Forecasting by Kun Yi, Qi Zhang, Wei Fan, Shoujin Wang, Pengyang Wang, Hui He, Defu Lian, Ning An, Longbing Cao, Zhendong Niu (Beijing Institute of Technology, Tongji University, University of Oxford, University of Technology Sydney, University of Macau, Hefei University of Technology, Macquarie University, 2023) 🔥🔥🔥🔥🔥
- Forecasting CPI inflation under economic policy and geo-political uncertainties by Shovon Sengupta, Tanujit Chakraborty, Sunny Kumar Singh (Fidelity Investments, Sorbonne University, BITS Pilani Hyderabad, 2024) 🔥🔥🔥🔥🔥
- Revisiting Long-term Time Series Forecasting: An Investigation on Linear Mapping by Zhe Li, Shiyi Qi, Yiduo Li, Zenglin Xu (Harbin Institute of Technology, Shenzhen, 2023) code
- SCINet: Time Series Modeling and Forecasting with Sample Convolution and Interaction by Minhao Liu, Ailing Zeng, Muxi Chen, Zhijian Xu, Qiuxia Lai, Lingna Ma, Qiang Xu (The Chinese University of Hong Kong, 2022) code
- WinNet: Time Series Forecasting with a Window-Enhanced Period Extracting and Interacting by Wenjie Ou, Dongyue Guo, Zheng Zhang, Zhishuo Zhao, Yi Lin (Sichuan University, China, 2023)
- A Multi-Scale Decomposition MLP-Mixer for Time Series Analysis by Shuhan Zhong, Sizhe Song, Guanyao Li, Weipeng Zhuo, Yang Liu, S.-H. Gary Chan (The Hong Kong University of Science and Technology, 2023) code 🔥🔥🔥🔥🔥
- TimesNet: Temporal 2D-Variation Modeling for General Time Series Analysis by Haixu Wu, Tengge Hu, Yong Liu, Hang Zhou, Jianmin Wang, Mingsheng Long (Tsinghua University, 2023) code 🔥🔥🔥🔥🔥
- MTS-Mixers: Multivariate Time Series Forecasting via Factorized Temporal and Channel Mixing code 🔥🔥🔥🔥🔥
- Reversible Instance Normalization for Accurate Time-Series Forecasting against Distribution Shift by Taesung Kim, Jinhee Kim, Yunwon Tae, Cheonbok Park, Jang-Ho Choi, Jaegul Choo (KAIST AI, Vuno, Naver Corp, ETRI, ICLR 2022) code project page 🔥🔥🔥🔥🔥 (a minimal sketch of the technique appears after this list)
- WINNet: Wavelet-inspired Invertible Network for Image Denoising by Wenjie Ou, Dongyue Guo, Zheng Zhang, Zhishuo Zhao, Yi Lin (College of Computer Science, Sichuan University, China) code 🔥🔥🔥🔥🔥
- Mlinear: Rethink the Linear Model for Time-series Forecasting by Wei Li, Xiangxu Meng, Chuhao Chen and Jianing Chen (Harbin Engineering University, 2023) 🔥🔥🔥🔥🔥
- Minimalist Traffic Prediction: Linear Layer Is All You Need by Wenying Duan, Hong Rao, Wei Huang, Xiaoxi He (Nanchang University, University of Macau, 2023)
- Frequency-domain MLPs are More Effective Learners in Time Series Forecasting by Kun Yi, Qi Zhang, Wei Fan, Shoujin Wang, Pengyang Wang, Hui He, Defu Lian, Ning An, Longbing Cao, Zhendong Niu (Beijing Institute of Technology, Tongji University, University of Oxford, University of Technology Sydney, University of Macau, USTC, Hefei University of Technology, Macquarie University, 2023) code 🔥🔥🔥🔥🔥
- An End-to-End Time Series Model for Simultaneous Imputation and Forecast by Trang H. Tran, Lam M. Nguyen, Kyongmin Yeo, Nam Nguyen, Dzung Phan, Roman Vaculin, Jayant Kalagnanam (Cornell University; IBM Research, Thomas J. Watson Research Center, 2023) 🔥🔥🔥🔥🔥
- Long-term Forecasting with TiDE: Time-series Dense Encoder by Abhimanyu Das, Weihao Kong, Andrew Leach, Shaan Mathur, Rajat Sen, Rose Yu (Google Cloud, University of California, San Diego, 2023)
- TSMixer: Lightweight MLP-Mixer Model for Multivariate Time Series Forecasting by Vijay Ekambaram, Arindam Jati, Nam Nguyen, Phanwadee Sinthong, Jayant Kalagnanam (IBM Research, 2023) code
- Koopa: Learning Non-stationary Time Series Dynamics with Koopman Predictors by Yong Liu, Chenyu Li, Jianmin Wang, Mingsheng Long (Tsinghua University, 2023) code 🔥🔥🔥🔥🔥
- Attractor Memory for Long-Term Time Series Forecasting: A Chaos Perspective by Jiaxi Hu, Yuehong Hu, Wei Chen, Ming Jin, Shirui Pan, Qingsong Wen, Yuxuan Liang (2024) 🔥🔥🔥🔥🔥
- When and How: Learning Identifiable Latent States for Nonstationary Time Series Forecasting (2024) 🔥🔥🔥🔥🔥
- Deep Coupling Network For Multivariate Time Series Forecasting (2024)
- Linear Dynamics-embedded Neural Network for Long-Sequence Modeling by Tongyi Liang and Han-Xiong Li (City University of Hong Kong, 2024).
- PDETime: Rethinking Long-Term Multivariate Time Series Forecasting from the perspective of partial differential equations (2024)
- CATS: Enhancing Multivariate Time Series Forecasting by Constructing Auxiliary Time Series as Exogenous Variables (2024) 🔥🔥🔥🔥🔥
- Is Mamba Effective for Time Series Forecasting? code (2024) 🔥🔥🔥🔥🔥
- STG-Mamba: Spatial-Temporal Graph Learning via Selective State Space Model (2024)
- TimeMachine: A Time Series is Worth 4 Mambas for Long-term Forecasting code (2024) 🔥🔥🔥🔥🔥
- FITS: Modeling Time Series with 10k Parameters code (2023)
- TSLANet: Rethinking Transformers for Time Series Representation Learning code (2024) 🔥🔥🔥🔥🔥
- WFTNet: Exploiting Global and Local Periodicity in Long-term Time Series Forecasting code (2024) 🔥🔥🔥🔥🔥
- SiMBA: Simplified Mamba-based Architecture for Vision and Multivariate Time series code (2024) 🔥🔥🔥🔥🔥
- SOFTS: Efficient Multivariate Time Series Forecasting with Series-Core Fusion code (2024) 🔥🔥🔥🔥🔥
- Integrating Mamba and Transformer for Long-Short Range Time Series Forecasting code (2024) 🔥🔥🔥🔥🔥
- SparseTSF: Modeling Long-term Time Series Forecasting with 1k Parameters (2024) 🔥🔥🔥🔥🔥
- Boosting MLPs with a Coarsening Strategy for Long-Term Time Series Forecasting (2024) 🔥🔥🔥🔥🔥
- Multi-Scale Dilated Convolution Network for Long-Term Time Series Forecasting (2024)
- ModernTCN: A Modern Pure Convolution Structure for General Time Series Analysis code (ICLR 2024 Spotlight)
- Adaptive Extraction Network for Multivariate Long Sequence Time-Series Forecasting (2024) 🔥🔥🔥🔥🔥
- Interpretable Multivariate Time Series Forecasting Using Neural Fourier Transform (2024) 🔥🔥🔥🔥🔥
- Periodicity Decoupling Framework for Long-term Series Forecasting code (2024) 🔥🔥🔥🔥🔥
- Chimera: Effectively Modeling Multivariate Time Series with 2-Dimensional State Space Models (2024) 🔥🔥🔥🔥🔥
- Time Evidence Fusion Network: Multi-source View in Long-Term Time Series Forecasting code (2024)
- ATFNet: Adaptive Time-Frequency Ensembled Network for Long-term Time Series Forecasting code (2024) 🔥🔥🔥🔥
- C-Mamba: Channel Correlation Enhanced State Space Models for Multivariate Time Series Forecasting (2024) 🔥🔥🔥🔥
- The Power of Minimalism in Long Sequence Time-series Forecasting
- WindowMixer: Intra-Window and Inter-Window Modeling for Time Series Forecasting
- xLSTMTime: Long-term Time Series Forecasting With xLSTM code (2024)
- Not All Frequencies Are Created Equal: Towards a Dynamic Fusion of Frequencies in Time-Series Forecasting (2024) 🔥🔥🔥🔥
- FMamba: Mamba based on Fast-attention for Multivariate Time-series Forecasting (2024)
- Long Input Sequence Network for Long Time Series Forecasting (2024)
- Time-series Forecasting with Tri-Decomposition Linear-based Modelling and Series-wise Metrics (2024) 🔥🔥🔥🔥
- An Evaluation of Standard Statistical Models and LLMs on Time Series Forecasting (2024) LLM 🔥🔥🔥🔥
- Macroeconomic Forecasting with Large Language Models (2024) LLM 🔥🔥🔥🔥
- Language Models Still Struggle to Zero-shot Reason about Time Series (2024) LLM 🔥🔥🔥🔥
- KAN4TSF: Are KAN and KAN-based models Effective for Time Series Forecasting? (2024)
- Simplified Mamba with Disentangled Dependency Encoding for Long-Term Time Series Forecasting (2024)
- TimeGPT vs TiDE: Is Zero-Shot Inference the Future of Forecasting or Just Hype? by Luís Roque and Rafael Guedes (2024) 🔥🔥🔥🔥🔥
- TimeGPT-1, discussion on Hacker News (2023) 🔥🔥🔥🔥🔥
- TimeGPT: The first Generative Pretrained Transformer for Time-Series Forecasting
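Of the papers above, Reversible Instance Normalization (RevIN, ICLR 2022) is one of the simplest and most widely reused ideas, so here is the minimal sketch promised at its entry. This is an illustration assuming the usual (batch, time, channels) layout, not the authors' code: each input window is normalized by its own statistics before any forecaster runs, and the forecast is denormalized with the same statistics, which counters distribution shift between training and test windows.

```python
import torch
import torch.nn as nn

class RevIN(nn.Module):
    """Sketch of Reversible Instance Normalization (Kim et al., ICLR 2022):
    normalize each window with its own mean/std plus a learnable per-channel
    affine, then invert those statistics on the forecaster's output."""

    def __init__(self, num_channels: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        self.gamma = nn.Parameter(torch.ones(num_channels))   # learnable scale
        self.beta = nn.Parameter(torch.zeros(num_channels))   # learnable shift

    def normalize(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels); statistics are per instance, per channel
        self.mean = x.mean(dim=1, keepdim=True).detach()
        self.std = torch.sqrt(
            x.var(dim=1, keepdim=True, unbiased=False) + self.eps
        ).detach()
        return (x - self.mean) / self.std * self.gamma + self.beta

    def denormalize(self, y: torch.Tensor) -> torch.Tensor:
        return (y - self.beta) / (self.gamma + self.eps) * self.std + self.mean

revin = RevIN(num_channels=7)
x_norm = revin.normalize(torch.randn(32, 336, 7))
y_norm = torch.randn(32, 96, 7)   # placeholder for any forecaster's output
y = revin.denormalize(y_norm)     # forecast restored to the original scale
```

The wrapper is model-agnostic, which is why it shows up around linear models, MLP-Mixers, and state-space models alike throughout this list.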
Similar Open Source Tools
awesome-sound_event_detection
The 'awesome-sound_event_detection' repository is a curated reading list focusing on sound event detection and Sound AI. It includes research papers covering various sub-areas such as learning formulation, network architecture, pooling functions, missing or noisy audio, data augmentation, representation learning, multi-task learning, few-shot learning, zero-shot learning, knowledge transfer, polyphonic sound event detection, loss functions, audio and visual tasks, audio captioning, audio retrieval, audio generation, and more. The repository provides a comprehensive collection of papers, datasets, and resources related to sound event detection and Sound AI, making it a valuable reference for researchers and practitioners in the field.
Awesome-LLM-Quantization
Awesome-LLM-Quantization is a curated list of resources related to quantization techniques for Large Language Models (LLMs). Quantization is a crucial step in deploying LLMs on resource-constrained devices, such as mobile phones or edge devices, by reducing the model's size and computational requirements.
chronos-forecasting
Chronos is a family of pretrained time series forecasting models based on language model architectures. A time series is transformed into a sequence of tokens via scaling and quantization, and a language model is trained on these tokens using the cross-entropy loss. Once trained, probabilistic forecasts are obtained by sampling multiple future trajectories given the historical context. Chronos models have been trained on a large corpus of publicly available time series data, as well as synthetic data generated using Gaussian processes.
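As a rough illustration of the tokenization step described above, a series can be mean-scaled and uniformly binned so that an ordinary language model can be trained on the resulting ids with cross-entropy. This is a sketch under assumed settings, not Chronos's actual code: the bin count and value range here are made up.

```python
import numpy as np

def tokenize(context: np.ndarray, n_bins: int = 4096,
             low: float = -15.0, high: float = 15.0) -> tuple[np.ndarray, float]:
    """Illustrative scaling + quantization: divide by the mean absolute
    value, then map each scaled value to one of n_bins uniform bins,
    whose indices serve as token ids. Returns ids plus the scale."""
    scale = float(np.abs(context).mean())
    if scale == 0.0:
        scale = 1.0
    edges = np.linspace(low, high, n_bins - 1)      # bin boundaries
    tokens = np.digitize(context / scale, edges)    # ids in [0, n_bins - 1]
    return tokens, scale

tokens, scale = tokenize(np.sin(np.arange(100) / 5.0))
# A model trained on such ids with cross-entropy can be sampled
# autoregressively; decoding sampled ids back to bin centers and
# multiplying by `scale` yields the probabilistic forecast trajectories.
```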
LongCite
LongCite is a tool that enables Large Language Models (LLMs) to generate fine-grained citations in long-context Question Answering (QA) scenarios. It provides models trained on GLM-4-9B and Meta-Llama-3.1-8B, supporting up to 128K context. Users can deploy LongCite chatbots, generate accurate responses, and obtain precise sentence-level citations. The tool includes components for model deployment, Coarse to Fine (CoF) pipeline for data construction, model training using LongCite-45k dataset, evaluation with LongBench-Cite benchmark, and citation generation.
Awesome-LLM-Prune
This repository is dedicated to the pruning of large language models (LLMs). It aims to serve as a comprehensive resource for researchers and practitioners interested in the efficient reduction of model size while maintaining or enhancing performance. The repository contains various papers, summaries, and links related to different pruning approaches for LLMs, along with author information and publication details. It covers a wide range of topics such as structured pruning, unstructured pruning, semi-structured pruning, and benchmarking methods. Researchers and practitioners can explore different pruning techniques, understand their implications, and access relevant resources for further study and implementation.
SEED-Bench
SEED-Bench is a comprehensive benchmark for evaluating the performance of multimodal large language models (LLMs) on a wide range of tasks that require both text and image understanding. It consists of two versions: SEED-Bench-1 and SEED-Bench-2. SEED-Bench-1 focuses on evaluating the spatial and temporal understanding of LLMs, while SEED-Bench-2 extends the evaluation to include text and image generation tasks. Both versions of SEED-Bench provide a diverse set of tasks that cover different aspects of multimodal understanding, making it a valuable tool for researchers and practitioners working on LLMs.
EDA-AI
EDA-AI is a repository containing implementations of cutting-edge research papers in the field of chip design. It includes DeepPlace, PRNet, HubRouter, and PreRoutGNN models for tasks such as placement, routing, timing prediction, and global routing. Researchers and practitioners can leverage these implementations to explore advanced techniques in chip design.
specification
OWASP CycloneDX is a full-stack Bill of Materials (BOM) standard that provides advanced supply chain capabilities for cyber risk reduction. The specification supports various types of Bill of Materials including Software, Hardware, Machine Learning, Cryptography, Manufacturing, and Operations. It also includes support for Vulnerability Disclosure Reports, Vulnerability Exploitability eXchange, and CycloneDX Attestations. CycloneDX helps organizations accurately inventory all components used in software development to identify risks, enhance transparency, and enable rapid impact analysis. The project is managed by the CycloneDX Core Working Group under the OWASP Foundation and is supported by the global information security community.
OpenNARS-for-Applications
OpenNARS-for-Applications is an implementation of a Non-Axiomatic Reasoning System, a general-purpose reasoner that adapts under the Assumption of Insufficient Knowledge and Resources. The system combines the logic and conceptual ideas of OpenNARS, event handling and procedure learning capabilities of ANSNA and 20NAR1, and the control model from ALANN. It is written in C, offers improved reasoning performance, and has been compared with Reinforcement Learning and means-end reasoning approaches. The system has been used in real-world applications such as assisting first responders, real-time traffic surveillance, and experiments with autonomous robots. It has been developed with a pragmatic mindset focusing on effective implementation of existing theory.
rllm
rLLM (relationLLM) is a PyTorch library for Relational Table Learning (RTL) with LLMs. It breaks down state-of-the-art GNNs, LLMs, and TNNs into standardized modules and facilitates novel model building in a 'combine, align, and co-train' way using these modules. The library is LLM-friendly, processes various graphs as multiple tables linked by foreign keys, introduces new relational table datasets, and is supported by students and teachers from Shanghai Jiao Tong University and Tsinghua University.
IvyGPT
IvyGPT is a medical large language model that aims to generate the most realistic doctor consultation effects. It has been fine-tuned on high-quality medical Q&A data and trained using human feedback reinforcement learning. The project features full-process training on medical Q&A LLM, multiple fine-tuning methods support, efficient dataset creation tools, and a dataset of over 300,000 high-quality doctor-patient dialogues for training.
RLHF-Reward-Modeling
This repository, RLHF-Reward-Modeling, is dedicated to training reward models for DRL-based RLHF (PPO), Iterative SFT, and iterative DPO. It provides state-of-the-art performance in reward models with a base model size of up to 13B. Installation involves setting up the environment following the alignment handbook. Dataset preparation requires preprocessing conversations into a standard format. The code can be run with Gemma-2b-it, and evaluation results can be obtained using the provided datasets. The to-do list includes various reward models such as Bradley-Terry, preference model, regression-based reward model, and multi-objective reward model. The repository is part of iterative rejection sampling fine-tuning and iterative DPO.
awesome-LLM-resourses
A comprehensive repository of resources for Chinese large language models (LLMs), including data processing tools, fine-tuning frameworks, inference libraries, evaluation platforms, RAG engines, agent frameworks, books, courses, tutorials, and tips. The repository covers a wide range of tools and resources for working with LLMs, from data labeling and processing to model fine-tuning, inference, evaluation, and application development. It also includes resources for learning about LLMs through books, courses, and tutorials, as well as insights and strategies from building with LLMs.
generative-ai-on-aws
Generative AI on AWS by O'Reilly Media provides a comprehensive guide on leveraging generative AI models on the AWS platform. The book covers various topics such as generative AI use cases, prompt engineering, large-language models, fine-tuning techniques, optimization, deployment, and more. Authors Chris Fregly, Antje Barth, and Shelbee Eigenbrode offer insights into cutting-edge AI technologies and practical applications in the field. The book is a valuable resource for data scientists, AI enthusiasts, and professionals looking to explore generative AI capabilities on AWS.
For similar tasks
Awesome-Segment-Anything
Awesome-Segment-Anything is a powerful tool for segmenting and extracting information from various types of data. It provides a user-friendly interface to easily define segmentation rules and apply them to text, images, and other data formats. The tool supports both supervised and unsupervised segmentation methods, allowing users to customize the segmentation process based on their specific needs. With its versatile functionality and intuitive design, Awesome-Segment-Anything is ideal for data analysts, researchers, content creators, and anyone looking to efficiently extract valuable insights from complex datasets.
Time-LLM
Time-LLM is a reprogramming framework that repurposes large language models (LLMs) for time series forecasting. It allows users to treat time series analysis as a 'language task' and effectively leverage pre-trained LLMs for forecasting. The framework involves reprogramming time series data into text representations and providing declarative prompts to guide the LLM reasoning process. Time-LLM supports various backbone models such as Llama-7B, GPT-2, and BERT, offering flexibility in model selection. The tool provides a general framework for repurposing language models for time series forecasting tasks.
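As a schematic of the reprogramming step described above (illustrative only, not Time-LLM's implementation; the class name, prototype bank, and dimensions are assumptions), patch embeddings of the series can cross-attend over a small bank of text-derived prototype vectors so the result lands in the frozen LLM's embedding space:

```python
import torch
import torch.nn as nn

class PatchReprogramming(nn.Module):
    """Schematic of the reprogramming idea: time-series patches attend
    over a bank of prototype embeddings (standing in for vectors distilled
    from the LLM's word embeddings), producing inputs the frozen LLM can
    consume alongside a declarative text prompt."""

    def __init__(self, patch_dim: int, llm_dim: int, n_prototypes: int = 100):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, llm_dim))
        self.proj = nn.Linear(patch_dim, llm_dim)
        self.attn = nn.MultiheadAttention(llm_dim, num_heads=4, batch_first=True)

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (batch, n_patches, patch_dim) -> (batch, n_patches, llm_dim)
        q = self.proj(patches)
        proto = self.prototypes.unsqueeze(0).expand(patches.size(0), -1, -1)
        out, _ = self.attn(q, proto, proto)   # queries: patches; keys/values: prototypes
        return out

reprog = PatchReprogramming(patch_dim=16, llm_dim=768)
tokens = reprog(torch.randn(8, 64, 16))       # (8, 64, 768): ready for a frozen LLM
```

Only the reprogramming layer (and an output head) is trained; the backbone LLM stays frozen, which is the framework's main selling point.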
crewAI
CrewAI is a cutting-edge framework designed to orchestrate role-playing autonomous AI agents. By fostering collaborative intelligence, CrewAI empowers agents to work together seamlessly, tackling complex tasks. It enables AI agents to assume roles, share goals, and operate in a cohesive unit, much like a well-oiled crew. Whether you're building a smart assistant platform, an automated customer service ensemble, or a multi-agent research team, CrewAI provides the backbone for sophisticated multi-agent interactions. With features like role-based agent design, autonomous inter-agent delegation, flexible task management, and support for various LLMs, CrewAI offers a dynamic and adaptable solution for both development and production workflows.
pytorch-forecasting
PyTorch Forecasting is a PyTorch-based package for time series forecasting with state-of-the-art network architectures. It offers a high-level API for training networks on pandas data frames and utilizes PyTorch Lightning for scalable training on GPUs and CPUs. The package aims to simplify time series forecasting with neural networks by providing a flexible API for professionals and default settings for beginners. It includes a timeseries dataset class, base model class, multiple neural network architectures, multi-horizon timeseries metrics, and hyperparameter tuning with optuna. PyTorch Forecasting is built on pytorch-lightning for easy training on various hardware configurations.
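A minimal usage sketch of the package's dataset class with one of its non-transformer architectures, N-BEATS, follows. It assumes a recent release (the Lightning import path has changed across versions), and the dataframe columns are illustrative:

```python
import numpy as np
import pandas as pd
import lightning.pytorch as pl
from pytorch_forecasting import NBeats, TimeSeriesDataSet

# Toy dataframe: one series id, an integer time index, and a target column.
df = pd.DataFrame({
    "series": "a",
    "time_idx": np.arange(200),
    "value": np.sin(np.arange(200) / 10).astype("float32"),
})

# TimeSeriesDataSet handles windowing, scaling, and batching.
dataset = TimeSeriesDataSet(
    df, time_idx="time_idx", target="value", group_ids=["series"],
    max_encoder_length=60, max_prediction_length=20,
    time_varying_unknown_reals=["value"],
)
loader = dataset.to_dataloader(train=True, batch_size=32)

# from_dataset wires the network's input/output sizes to the dataset's windows.
model = NBeats.from_dataset(dataset)
pl.Trainer(max_epochs=1, logger=False).fit(model, loader)
```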
spider
Spider is a high-performance web crawler and indexer designed to handle data curation workloads efficiently. It offers features such as concurrency, streaming, decentralization, headless Chrome rendering, HTTP proxies, cron jobs, subscriptions, smart mode, blacklisting, whitelisting, budgeting depth, dynamic AI prompt scripting, CSS scraping, and more. Users can easily get started with the Spider Cloud hosted service or set up local installations with spider-cli. The tool supports integration with Node.js and Python for additional flexibility. With a focus on speed and scalability, Spider is ideal for extracting and organizing data from the web.
AI_for_Science_paper_collection
AI for Science paper collection is an initiative by AI for Science Community to collect and categorize papers in AI for Science areas by subjects, years, venues, and keywords. The repository contains `.csv` files with paper lists labeled by keys such as `Title`, `Conference`, `Type`, `Application`, `MLTech`, `OpenReviewLink`. It covers top conferences like ICML, NeurIPS, and ICLR. Volunteers can contribute by updating existing `.csv` files or adding new ones for uncovered conferences/years. The initiative aims to track the increasing trend of AI for Science papers and analyze trends in different applications.
lighteval
LightEval is a lightweight LLM evaluation suite that Hugging Face has been using internally with the recently released LLM data processing library datatrove and LLM training library nanotron. We're releasing it with the community in the spirit of building in the open. Note that it is still very much early, so don't expect 100% stability ^^' In case of problems or questions, feel free to open an issue!
For similar jobs
lollms-webui
LoLLMs WebUI (Lord of Large Language Multimodal Systems: One tool to rule them all) is a user-friendly interface to access and utilize various LLM (Large Language Models) and other AI models for a wide range of tasks. With over 500 AI expert conditionings across diverse domains and more than 2500 fine tuned models over multiple domains, LoLLMs WebUI provides an immediate resource for any problem, from car repair to coding assistance, legal matters, medical diagnosis, entertainment, and more. The easy-to-use UI with light and dark mode options, integration with GitHub repository, support for different personalities, and features like thumb up/down rating, copy, edit, and remove messages, local database storage, search, export, and delete multiple discussions, make LoLLMs WebUI a powerful and versatile tool.
Azure-Analytics-and-AI-Engagement
The Azure-Analytics-and-AI-Engagement repository provides packaged Industry Scenario DREAM Demos with ARM templates (Containing a demo web application, Power BI reports, Synapse resources, AML Notebooks etc.) that can be deployed in a customerβs subscription using the CAPE tool within a matter of few hours. Partners can also deploy DREAM Demos in their own subscriptions using DPoC.
minio
MinIO is a High Performance Object Storage released under GNU Affero General Public License v3.0. It is API compatible with Amazon S3 cloud storage service. Use MinIO to build high performance infrastructure for machine learning, analytics and application data workloads.
mage-ai
Mage is an open-source data pipeline tool for transforming and integrating data. It offers an easy developer experience, engineering best practices built-in, and data as a first-class citizen. Mage makes it easy to build, preview, and launch data pipelines, and provides observability and scaling capabilities. It supports data integrations, streaming pipelines, and dbt integration.
AiTreasureBox
AiTreasureBox is a versatile AI tool that provides a collection of pre-trained models and algorithms for various machine learning tasks. It simplifies the process of implementing AI solutions by offering ready-to-use components that can be easily integrated into projects. With AiTreasureBox, users can quickly prototype and deploy AI applications without the need for extensive knowledge in machine learning or deep learning. The tool covers a wide range of tasks such as image classification, text generation, sentiment analysis, object detection, and more. It is designed to be user-friendly and accessible to both beginners and experienced developers, making AI development more efficient and accessible to a wider audience.
tidb
TiDB is an open-source distributed SQL database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads. It is MySQL compatible and features horizontal scalability, strong consistency, and high availability.
airbyte
Airbyte is an open-source data integration platform that makes it easy to move data from any source to any destination. With Airbyte, you can build and manage data pipelines without writing any code. Airbyte provides a library of pre-built connectors that make it easy to connect to popular data sources and destinations. You can also create your own connectors using Airbyte's no-code Connector Builder or low-code CDK. Airbyte is used by data engineers and analysts at companies of all sizes to build and manage their data pipelines.
labelbox-python
Labelbox is a data-centric AI platform for enterprises to develop, optimize, and use AI to solve problems and power new products and services. Enterprises use Labelbox to curate data, generate high-quality human feedback data for computer vision and LLMs, evaluate model performance, and automate tasks by combining AI and human-centric workflows. The academic & research community uses Labelbox for cutting-edge AI research.