
airbyte-connectors
Airbyte connectors (sources & destinations) + Airbyte CDK for JavaScript/TypeScript
Stars: 121

README:
This repository contains the Airbyte connectors used in the Faros and Faros Community Edition platforms, as well as the Airbyte Connector Development Kit (CDK) for JavaScript/TypeScript.
See the READMEs inside the destinations/ and sources/ subfolders for more information on each connector.
| Component | Code | Installation | Version |
|---|---|---|---|
| Airbyte CDK for JavaScript/TypeScript | `faros-airbyte-cdk` | `npm i faros-airbyte-cdk` | |
| AgileAccelerator Source | `sources/agileaccelerator-source` | `docker pull farosai/airbyte-agileaccelerator-source` | |
| Asana Source | `sources/asana-source` | `docker pull farosai/airbyte-asana-source` | |
| AWS CloudWatch Metrics Source | `sources/aws-cloudwatch-metrics-source` | `docker pull farosai/airbyte-aws-cloudwatch-metrics-source` | |
| Azure Active Directory Source | `sources/azureactivedirectory-source` | `docker pull farosai/airbyte-azureactivedirectory-source` | |
| Azure Pipeline Source | `sources/azurepipeline-source` | `docker pull farosai/airbyte-azurepipeline-source` | |
| Azure Repos Source | `sources/azure-repos-source` | `docker pull farosai/airbyte-azure-repos-source` | |
| Azure Workitems Source | `sources/azure-workitems-source` | `docker pull farosai/airbyte-azure-workitems-source` | |
| Backlog Source | `sources/backlog-source` | `docker pull farosai/airbyte-backlog-source` | |
| BambooHR Source | `sources/bamboohr-source` | `docker pull farosai/airbyte-bamboohr-source` | |
| Bitbucket Source | `sources/bitbucket-source` | `docker pull farosai/airbyte-bitbucket-source` | |
| Bitbucket Server Source | `sources/bitbucket-server-source` | `docker pull farosai/airbyte-bitbucket-server-source` | |
| Buildkite Source | `sources/buildkite-source` | `docker pull farosai/airbyte-buildkite-source` | |
| Customer.IO Source | `sources/customer-io-source` | `docker pull farosai/airbyte-customer-io-source` | |
| Cursor Source | `sources/cursor-source` | `docker pull farosai/airbyte-cursor-source` | |
| CircleCI Source | `sources/circleci-source` | `docker pull farosai/airbyte-circleci-source` | |
| Claude Source | `sources/claude-source` | `docker pull farosai/airbyte-claude-source` | |
| ClickUp Source | `sources/clickup-source` | `docker pull farosai/airbyte-clickup-source` | |
| Datadog Source | `sources/datadog-source` | `docker pull farosai/airbyte-datadog-source` | |
| Docker Source | `sources/docker-source` | `docker pull farosai/airbyte-docker-source` | |
| Faros Destination | `destinations/airbyte-faros-destination` | `npm i airbyte-faros-destination` or `docker pull farosai/airbyte-faros-destination` | |
| Faros GraphQL Source | `sources/faros-graphql-source` | `docker pull farosai/airbyte-faros-graphql-source` | |
| Faros Graph Doctor Source | `sources/faros-graphdoctor-source` | `docker pull farosai/airbyte-faros-graphdoctor-source` | |
| Files Source | `sources/files-source` | `docker pull farosai/airbyte-files-source` | |
| FireHydrant Source | `sources/firehydrant-source` | `docker pull farosai/airbyte-firehydrant-source` | |
| GitHub Source | `sources/github-source` | `docker pull farosai/airbyte-github-source` | |
| GitLab Source | `sources/gitlab-source` | `docker pull farosai/airbyte-gitlab-source` | |
| Google Calendar Source | `sources/googlecalendar-source` | `docker pull farosai/airbyte-googlecalendar-source` | |
| Google Drive Source | `sources/googledrive-source` | `docker pull farosai/airbyte-googledrive-source` | |
| Harness Source | `sources/harness-source` | `docker pull farosai/airbyte-harness-source` | |
| Jenkins Source | `sources/jenkins-source` | `docker pull farosai/airbyte-jenkins-source` | |
| Jira Source | `sources/jira-source` | `docker pull farosai/airbyte-jira-source` | |
| Okta Source | `sources/okta-source` | `docker pull farosai/airbyte-okta-source` | |
| Octopus Source | `sources/octopus-source` | `docker pull farosai/airbyte-octopus-source` | |
| OpsGenie Source | `sources/opsgenie-source` | `docker pull farosai/airbyte-opsgenie-source` | |
| PagerDuty Source | `sources/pagerduty-source` | `docker pull farosai/airbyte-pagerduty-source` | |
| Phabricator Source | `sources/phabricator-source` | `docker pull farosai/airbyte-phabricator-source` | |
| ServiceNow Source | `sources/servicenow-source` | `docker pull farosai/airbyte-servicenow-source` | |
| SemaphoreCI Source | `sources/semaphoreci-source` | `docker pull farosai/airbyte-semaphoreci-source` | |
| Shortcut Source | `sources/shortcut-source` | `docker pull farosai/airbyte-shortcut-source` | |
| Sheets Source | `sources/sheets-source` | `docker pull farosai/airbyte-sheets-source` | |
| SquadCast Source | `sources/squadcast-source` | `docker pull farosai/airbyte-squadcast-source` | |
| StatusPage Source | `sources/statuspage-source` | `docker pull farosai/airbyte-statuspage-source` | |
| TestRails Source | `sources/testrails-source` | `docker pull farosai/airbyte-testrails-source` | |
| Tromzo Source | `sources/tromzo-source` | `docker pull farosai/airbyte-tromzo-source` | |
| Trello Source | `sources/trello-source` | `docker pull farosai/airbyte-trello-source` | |
| Vanta Source | `sources/vanta-source` | `docker pull farosai/airbyte-vanta-source` | |
| VictorOps Source | `sources/victorops-source` | `docker pull farosai/airbyte-victorops-source` | |
| Windsurf Source | `sources/windsurf-source` | `docker pull farosai/airbyte-windsurf-source` | |
| Workday Source | `sources/workday-source` | `docker pull farosai/airbyte-workday-source` | |
| Wolken Source | `sources/wolken-source` | `docker pull farosai/airbyte-wolken-source` | |
| Xray Source | `sources/xray-source` | `docker pull farosai/airbyte-xray-source` | |
| Zephyr Source | `sources/zephyr-source` | `docker pull farosai/airbyte-zephyr-source` | |
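All of the images above implement the standard Airbyte Docker interface, so once pulled they can be exercised with the usual Airbyte commands. A minimal sketch, assuming Docker is installed and using the GitHub source as an example (`secrets/config.json` is a hypothetical config path; see each connector's README for its required configuration):

```sh
# Pull a source image from Docker Hub
docker pull farosai/airbyte-github-source

# Print the connector's configuration specification (standard Airbyte "spec" command)
docker run --rm farosai/airbyte-github-source spec

# Validate a config file by mounting it into the container and running "check"
# (secrets/config.json is a hypothetical path)
docker run --rm -v $(pwd)/secrets:/secrets farosai/airbyte-github-source check --config /secrets/config.json
```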
- Install nvm
- Install Node.js: `nvm install 22 && nvm use 22`
- Install Turborepo by running `npm install turbo --global`
- Run `npm i` to install dependencies for all projects (`turbo clean` to clean all)
- Run `turbo build` to build all projects (for a single project add a scope, e.g. `turbo build --filter=airbyte-faros-destination`)
- Run `turbo test` to test all projects (for a single project add a scope, e.g. `turbo test --filter=airbyte-faros-destination`)
- Run `turbo lint` to run the linter on all projects (for a single project add a scope, e.g. `turbo lint --filter=airbyte-faros-destination`)
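Putting the steps above together, a first-time setup and a scoped build of a single connector might look like the following (a sketch that assumes nvm is already installed):

```sh
# Use the recommended Node.js version
nvm install 22 && nvm use 22

# Install Turborepo globally and the workspace dependencies
npm install turbo --global
npm i

# Build, test, and lint a single connector by scoping Turborepo to its package
turbo build --filter=airbyte-faros-destination
turbo test --filter=airbyte-faros-destination
turbo lint --filter=airbyte-faros-destination
```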
👉 Follow our guide on how to develop a new source here.
Read more about Turborepo here.
To manage dependencies in this project, you can use the following commands:
- Install Dependencies: Run `npm install` to install all the necessary dependencies for the project.
- Update Dependencies: Use `npm update` to update all the dependencies to their latest versions.
- Check for Vulnerabilities: Run `npm audit` to check for any vulnerabilities in the dependencies.
- Fix Vulnerabilities: Use `npm audit fix` to automatically fix any vulnerabilities that can be resolved.
- Clean Dependencies: Run `npm prune` to remove any extraneous packages that are not listed in `package.json`.
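For example, a routine dependency hygiene pass might chain these commands (a sketch; adjust to your own workflow):

```sh
# Refresh the dependency tree, fix known vulnerabilities, and remove extraneous packages
npm install
npm audit fix
npm prune
```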
To build a Docker image for a connector, run the `docker build` command and set the `path` and `version` build arguments. For example, for the Faros Destination connector run:
docker build . --build-arg path=destinations/airbyte-faros-destination --build-arg version=0.0.1 -t airbyte-faros-destination
And then run it:
docker run airbyte-faros-destination
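Since the built image follows the standard Airbyte connector interface, you can also invoke the usual Airbyte commands against it. A sketch (the `secrets/` paths and `messages.jsonl` file are hypothetical placeholders):

```sh
# Print the destination's configuration specification
docker run --rm airbyte-faros-destination spec

# Write records: pipe Airbyte messages to the standard "write" command,
# mounting a config and configured catalog from the host (hypothetical paths)
cat messages.jsonl | docker run --rm -i \
  -v $(pwd)/secrets:/secrets \
  airbyte-faros-destination write --config /secrets/config.json --catalog /secrets/configured_catalog.json
```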
- If you encounter errors like `...: No such file or directory` when running `docker run` commands on Windows, confirm that all files in this repo use `LF` line endings. If not, convert them all to `LF` instead of `CRLF`.
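One common way to do that is to tell Git to keep `LF` endings and refresh the checkout (a sketch; note that `git reset --hard` discards uncommitted local changes):

```sh
# Configure Git to keep LF line endings in the working tree
git config core.autocrlf input

# Re-checkout the working tree so existing files pick up LF endings
git rm -r --cached .
git reset --hard HEAD
```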
Create a new GitHub Release. The release workflow will automatically publish the packages to NPM and push Docker images to Docker Hub.
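If you use the GitHub CLI, cutting a release can look like this (a sketch; the tag name below is a placeholder and the actual versioning scheme is defined by the repo's release process):

```sh
# Create a GitHub Release for a new tag; the release workflow then publishes
# the NPM packages and pushes the Docker images
gh release create v0.0.1 --title "v0.0.1" --generate-notes
```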
Alternative AI tools for airbyte-connectors
Similar Open Source Tools

airbyte-connectors
This repository contains Airbyte connectors used in Faros and Faros Community Edition platforms as well as Airbyte Connector Development Kit (CDK) for JavaScript/TypeScript.

phoenix
Phoenix is a tool that provides MLOps and LLMOps insights at lightning speed with zero-config observability. It offers a notebook-first experience for monitoring models and LLM Applications by providing LLM Traces, LLM Evals, Embedding Analysis, RAG Analysis, and Structured Data Analysis. Users can trace through the execution of LLM Applications, evaluate generative models, explore embedding point-clouds, visualize generative application's search and retrieval process, and statistically analyze structured data. Phoenix is designed to help users troubleshoot problems related to retrieval, tool execution, relevance, toxicity, drift, and performance degradation.

petercat
Peter Cat is an intelligent Q&A chatbot solution designed for community maintainers and developers. It provides a conversational Q&A agent configuration system, self-hosting deployment solutions, and a convenient integrated application SDK. Users can easily create intelligent Q&A chatbots for their GitHub repositories and quickly integrate them into various official websites or projects to provide more efficient technical support for the community.

hcaptcha-challenger
hCaptcha Challenger is a tool designed to gracefully face hCaptcha challenges using a multimodal large language model. It does not rely on Tampermonkey scripts or third-party anti-captcha services, instead implementing interfaces for 'AI vs AI' scenarios. The tool supports various challenge types such as image labeling, drag and drop, and advanced tasks like self-supervised challenges and Agentic Workflow. Users can access documentation in multiple languages and leverage resources for tasks like model training, dataset annotation, and model upgrading. The tool aims to enhance user experience in handling hCaptcha challenges with innovative AI capabilities.

no-cost-ai
No-cost-ai is a repository dedicated to providing a comprehensive list of free AI models and tools for developers, researchers, and curious builders. It serves as a living index for accessing state-of-the-art AI models without any cost. The repository includes information on various AI applications such as chat interfaces, media generation, voice and music tools, AI IDEs, and developer APIs and platforms. Users can find links to free models, their limits, and usage instructions. Contributions to the repository are welcome, and users are advised to use the listed services at their own risk due to potential changes in models, limitations, and reliability of free services.

Chinese-Mixtral-8x7B
Chinese-Mixtral-8x7B is an open-source project based on Mistral's Mixtral-8x7B model for incremental pre-training of Chinese vocabulary, aiming to advance research on MoE models in the Chinese natural language processing community. The expanded vocabulary significantly improves the model's encoding and decoding efficiency for Chinese, and the model is pre-trained incrementally on a large-scale open-source corpus, enabling it with powerful Chinese generation and comprehension capabilities. The project includes a large model with expanded Chinese vocabulary and incremental pre-training code.

DownEdit
DownEdit is a powerful program that allows you to download videos from various social media platforms such as TikTok, Douyin, Kuaishou, and more. With DownEdit, you can easily download videos from user profiles and edit them in bulk. You have the option to flip the videos horizontally or vertically throughout the entire directory with just a single click. Stay tuned for more exciting features coming soon!

DownEdit
DownEdit is a fast and powerful program for downloading and editing videos from platforms like TikTok, Douyin, and Kuaishou. It allows users to effortlessly grab videos, make bulk edits, and utilize advanced AI features for generating videos, images, and sounds in bulk. The tool offers features like video, photo, and sound editing, downloading videos without watermarks, bulk AI generation, and AI editing for content enhancement.

jiwu-mall-chat-tauri
Jiwu Chat Tauri APP is a desktop chat application based on Nuxt3 + Tauri + Element Plus framework. It provides a beautiful user interface with integrated chat and social functions. It also supports AI shopping chat and global dark mode. Users can engage in real-time chat, share updates, and interact with AI customer service through this application.

web-builder
Web Builder is a low-code front-end framework based on Material for Angular, offering a rich component library for excellent digital innovation experience. It allows rapid construction of modern responsive UI, multi-theme, multi-language web pages through drag-and-drop visual configuration. The framework includes a beautiful admin theme, complete front-end solutions, and AI integration in the Pro version for optimizing copy, creating components, and generating pages with a single sentence.

oumi
Oumi is an open-source platform for building state-of-the-art foundation models, offering tools for data preparation, training, evaluation, and deployment. It supports training and fine-tuning models with various parameters, working with text and multimodal models, synthesizing and curating training data, deploying models efficiently, evaluating models comprehensively, and running on different platforms. Oumi provides a consistent API, reliability, and flexibility for research purposes.

DownEdit
DownEdit is a fast and powerful program for downloading and editing videos from top platforms like TikTok, Douyin, and Kuaishou. Effortlessly grab videos from user profiles, make bulk edits throughout the entire directory with just one click. Advanced Chat & AI features let you download, edit, and generate videos, images, and sounds in bulk. Exciting new features are coming soon—stay tuned!

Muice-Chatbot
Muice-Chatbot is an AI chatbot designed to proactively engage in conversations with users. It is based on the ChatGLM2-6B and Qwen-7B models, with a training dataset of 1.8K+ dialogues. The chatbot has a speaking style similar to a 2D girl, being somewhat tsundere but willing to share daily life details and greet users differently every day. It provides various functionalities, including initiating chats and offering 5 available commands. The project supports model loading through different methods and provides onebot service support for QQ users. Users can interact with the chatbot by running the main.py file in the project directory.

TRACE
TRACE is a temporal grounding video model that utilizes causal event modeling to capture videos' inherent structure. It presents a task-interleaved video LLM model tailored for sequential encoding/decoding of timestamps, salient scores, and textual captions. The project includes various model checkpoints for different stages and fine-tuning on specific datasets. It provides evaluation codes for different tasks like VTG, MVBench, and VideoMME. The repository also offers annotation files and links to raw videos preparation projects. Users can train the model on different tasks and evaluate the performance based on metrics like CIDER, METEOR, SODA_c, F1, mAP, Hit@1, etc. TRACE has been enhanced with trace-retrieval and trace-uni models, showing improved performance on dense video captioning and general video understanding tasks.

Firefly
Firefly is an open-source large model training project that supports pre-training, fine-tuning, and DPO of mainstream large models. It includes models like Llama3, Gemma, Qwen1.5, MiniCPM, Llama, InternLM, Baichuan, ChatGLM, Yi, Deepseek, Qwen, Orion, Ziya, Xverse, Mistral, Mixtral-8x7B, Zephyr, Vicuna, Bloom, etc. The project supports full-parameter training, LoRA, QLoRA efficient training, and various tasks such as pre-training, SFT, and DPO. Suitable for users with limited training resources, QLoRA is recommended for fine-tuning instructions. The project has achieved good results on the Open LLM Leaderboard with QLoRA training process validation. The latest version has significant updates and adaptations for different chat model templates.

llm-export
llm-export is a tool for exporting llm models to onnx and mnn formats. It has features such as passing onnxruntime correctness tests, optimizing the original code to support dynamic shapes, reducing constant parts, optimizing onnx models using OnnxSlim for performance improvement, and exporting lora weights to onnx and mnn formats. Users can clone the project locally, clone the desired LLM project locally, and use LLMExporter to export the model. The tool supports various export options like exporting the entire model as one onnx model, exporting model segments as multiple models, exporting model vocabulary to a text file, exporting specific model layers like Embedding and lm_head, testing the model with queries, validating onnx model consistency with onnxruntime, converting onnx models to mnn models, and more. Users can specify export paths, skip optimization steps, and merge lora weights before exporting.
For similar tasks

skyvern
Skyvern automates browser-based workflows using LLMs and computer vision. It provides a simple API endpoint to fully automate manual workflows, replacing brittle or unreliable automation solutions. Traditional approaches to browser automations required writing custom scripts for websites, often relying on DOM parsing and XPath-based interactions which would break whenever the website layouts changed. Instead of only relying on code-defined XPath interactions, Skyvern adds computer vision and LLMs to the mix to parse items in the viewport in real-time, create a plan for interaction and interact with them. This approach gives us a few advantages: 1. Skyvern can operate on websites it’s never seen before, as it’s able to map visual elements to actions necessary to complete a workflow, without any customized code 2. Skyvern is resistant to website layout changes, as there are no pre-determined XPaths or other selectors our system is looking for while trying to navigate 3. Skyvern leverages LLMs to reason through interactions to ensure we can cover complex situations. Examples include: 1. If you wanted to get an auto insurance quote from Geico, the answer to a common question “Were you eligible to drive at 18?” could be inferred from the driver receiving their license at age 16 2. If you were doing competitor analysis, it’s understanding that an Arnold Palmer 22 oz can at 7/11 is almost definitely the same product as a 23 oz can at Gopuff (even though the sizes are slightly different, which could be a rounding error!) Want to see examples of Skyvern in action? Jump to #real-world-examples-of- skyvern

airbyte-connectors
This repository contains Airbyte connectors used in Faros and Faros Community Edition platforms as well as Airbyte Connector Development Kit (CDK) for JavaScript/TypeScript.

open-parse
Open Parse is a Python library for visually discerning document layouts and chunking them effectively. It is designed to fill the gap in open-source libraries for handling complex documents. Unlike text splitting, which converts a file to raw text and slices it up, Open Parse visually analyzes documents for superior LLM input. It also supports basic markdown for parsing headings, bold, and italics, and has high-precision table support, extracting tables into clean Markdown formats with accuracy that surpasses traditional tools. Open Parse is extensible, allowing users to easily implement their own post-processing steps. It is also intuitive, with great editor support and completion everywhere, making it easy to use and learn.

unstract
Unstract is a no-code platform that enables users to launch APIs and ETL pipelines to structure unstructured documents. With Unstract, users can go beyond co-pilots by enabling machine-to-machine automation. Unstract's Prompt Studio provides a simple, no-code approach to creating prompts for LLMs, vector databases, embedding models, and text extractors. Users can then configure Prompt Studio projects as API deployments or ETL pipelines to automate critical business processes that involve complex documents. Unstract supports a wide range of LLM providers, vector databases, embeddings, text extractors, ETL sources, and ETL destinations, providing users with the flexibility to choose the best tools for their needs.

Dot
Dot is a standalone, open-source application designed for seamless interaction with documents and files using local LLMs and Retrieval Augmented Generation (RAG). It is inspired by solutions like Nvidia's Chat with RTX, providing a user-friendly interface for those without a programming background. Pre-packaged with Mistral 7B, Dot ensures accessibility and simplicity right out of the box. Dot allows you to load multiple documents into an LLM and interact with them in a fully local environment. Supported document types include PDF, DOCX, PPTX, XLSX, and Markdown. Users can also engage with Big Dot for inquiries not directly related to their documents, similar to interacting with ChatGPT. Built with Electron JS, Dot encapsulates a comprehensive Python environment that includes all necessary libraries. The application leverages libraries such as FAISS for creating local vector stores, Langchain, llama.cpp & Huggingface for setting up conversation chains, and additional tools for document management and interaction.

instructor
Instructor is a Python library that makes it a breeze to work with structured outputs from large language models (LLMs). Built on top of Pydantic, it provides a simple, transparent, and user-friendly API to manage validation, retries, and streaming responses. Get ready to supercharge your LLM workflows!

sparrow
Sparrow is an innovative open-source solution for efficient data extraction and processing from various documents and images. It seamlessly handles forms, invoices, receipts, and other unstructured data sources. Sparrow stands out with its modular architecture, offering independent services and pipelines all optimized for robust performance. One of the critical functionalities of Sparrow - pluggable architecture. You can easily integrate and run data extraction pipelines using tools and frameworks like LlamaIndex, Haystack, or Unstructured. Sparrow enables local LLM data extraction pipelines through Ollama or Apple MLX. With Sparrow solution you get API, which helps to process and transform your data into structured output, ready to be integrated with custom workflows. Sparrow Agents - with Sparrow you can build independent LLM agents, and use API to invoke them from your system. **List of available agents:** * **llamaindex** - RAG pipeline with LlamaIndex for PDF processing * **vllamaindex** - RAG pipeline with LLamaIndex multimodal for image processing * **vprocessor** - RAG pipeline with OCR and LlamaIndex for image processing * **haystack** - RAG pipeline with Haystack for PDF processing * **fcall** - Function call pipeline * **unstructured-light** - RAG pipeline with Unstructured and LangChain, supports PDF and image processing * **unstructured** - RAG pipeline with Weaviate vector DB query, Unstructured and LangChain, supports PDF and image processing * **instructor** - RAG pipeline with Unstructured and Instructor libraries, supports PDF and image processing. Works great for JSON response generation

Open-DocLLM
Open-DocLLM is an open-source project that addresses data extraction and processing challenges using OCR and LLM technologies. It consists of two main layers: OCR for reading document content and LLM for extracting specific content in a structured manner. The project offers a larger context window size compared to JP Morgan's DocLLM and integrates tools like Tesseract OCR and Mistral for efficient data analysis. Users can run the models on-premises using LLM studio or Ollama, and the project includes a FastAPI app for testing purposes.
For similar jobs

lollms-webui
LoLLMs WebUI (Lord of Large Language Multimodal Systems: One tool to rule them all) is a user-friendly interface to access and utilize various LLM (Large Language Models) and other AI models for a wide range of tasks. With over 500 AI expert conditionings across diverse domains and more than 2500 fine tuned models over multiple domains, LoLLMs WebUI provides an immediate resource for any problem, from car repair to coding assistance, legal matters, medical diagnosis, entertainment, and more. The easy-to-use UI with light and dark mode options, integration with GitHub repository, support for different personalities, and features like thumb up/down rating, copy, edit, and remove messages, local database storage, search, export, and delete multiple discussions, make LoLLMs WebUI a powerful and versatile tool.

Azure-Analytics-and-AI-Engagement
The Azure-Analytics-and-AI-Engagement repository provides packaged Industry Scenario DREAM Demos with ARM templates (Containing a demo web application, Power BI reports, Synapse resources, AML Notebooks etc.) that can be deployed in a customer’s subscription using the CAPE tool within a matter of few hours. Partners can also deploy DREAM Demos in their own subscriptions using DPoC.

minio
MinIO is a High Performance Object Storage released under GNU Affero General Public License v3.0. It is API compatible with Amazon S3 cloud storage service. Use MinIO to build high performance infrastructure for machine learning, analytics and application data workloads.

mage-ai
Mage is an open-source data pipeline tool for transforming and integrating data. It offers an easy developer experience, engineering best practices built-in, and data as a first-class citizen. Mage makes it easy to build, preview, and launch data pipelines, and provides observability and scaling capabilities. It supports data integrations, streaming pipelines, and dbt integration.

AiTreasureBox
AiTreasureBox is a versatile AI tool that provides a collection of pre-trained models and algorithms for various machine learning tasks. It simplifies the process of implementing AI solutions by offering ready-to-use components that can be easily integrated into projects. With AiTreasureBox, users can quickly prototype and deploy AI applications without the need for extensive knowledge in machine learning or deep learning. The tool covers a wide range of tasks such as image classification, text generation, sentiment analysis, object detection, and more. It is designed to be user-friendly and accessible to both beginners and experienced developers, making AI development more efficient and accessible to a wider audience.

tidb
TiDB is an open-source distributed SQL database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads. It is MySQL compatible and features horizontal scalability, strong consistency, and high availability.

airbyte
Airbyte is an open-source data integration platform that makes it easy to move data from any source to any destination. With Airbyte, you can build and manage data pipelines without writing any code. Airbyte provides a library of pre-built connectors that make it easy to connect to popular data sources and destinations. You can also create your own connectors using Airbyte's no-code Connector Builder or low-code CDK. Airbyte is used by data engineers and analysts at companies of all sizes to build and manage their data pipelines.

labelbox-python
Labelbox is a data-centric AI platform for enterprises to develop, optimize, and use AI to solve problems and power new products and services. Enterprises use Labelbox to curate data, generate high-quality human feedback data for computer vision and LLMs, evaluate model performance, and automate tasks by combining AI and human-centric workflows. The academic & research community uses Labelbox for cutting-edge AI research.