
airflow-ai-sdk
An SDK for working with LLMs and AI Agents from Apache Airflow, based on Pydantic AI
Stars: 282

This repository contains an SDK for working with LLMs from Apache Airflow, based on Pydantic AI. It allows users to call LLMs and orchestrate agent calls directly within their Airflow pipelines using decorator-based tasks. The SDK leverages the familiar Airflow `@task` syntax with extensions like `@task.llm`, `@task.llm_branch`, and `@task.agent`. Users can define tasks that call language models, orchestrate multi-step AI reasoning, change the control flow of a DAG based on LLM output, and support various models in the Pydantic AI library. The SDK is designed to integrate LLM workflows into Airflow pipelines, from simple LLM calls to complex agentic workflows.
README:
This repository contains an SDK for working with LLMs from Apache Airflow, based on Pydantic AI. It allows users to call LLMs and orchestrate agent calls directly within their Airflow pipelines using decorator-based tasks. The SDK leverages the familiar Airflow `@task` syntax with extensions like `@task.llm`, `@task.llm_branch`, and `@task.agent`.
To get started, check out the examples repository, which offers a full local Airflow instance with the AI SDK installed and 5 example pipelines. To run it locally:
git clone https://github.com/astronomer/ai-sdk-examples.git
cd ai-sdk-examples
astro dev start
If you don't have the Astro CLI installed, run `brew install astro` (or see the Astro CLI docs for other installation options).
If you already have Airflow running, you can also install the package with any optional dependencies you need:
pip install airflow-ai-sdk[openai,duckduckgo]
Note that installing the package with no optional dependencies gives you the slim version, which does not include any LLM models or tools. The available optional packages are listed in the project documentation. While this SDK offers these optional dependencies for convenience, you can also install the optional dependencies from Pydantic AI directly.
Features:
- LLM tasks with `@task.llm`: Define tasks that call language models (e.g. GPT-3.5-turbo) to process text.
- Agent tasks with `@task.agent`: Orchestrate multi-step AI reasoning by leveraging custom tools.
- Automatic output parsing: Use function type hints (including Pydantic models) to automatically parse and validate LLM outputs.
- Branching with `@task.llm_branch`: Change the control flow of a DAG based on the output of an LLM.
- Model support: Support for all models in the Pydantic AI library (OpenAI, Anthropic, Gemini, Ollama, Groq, Mistral, Cohere, Bedrock); see the sketch after this list.
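On the model support point, the sketch below illustrates switching the `model` argument away from the OpenAI models used in this README's examples. The provider-prefixed model string and the need for the matching optional dependency are assumptions based on Pydantic AI's conventions, not something this README spells out.

from airflow.decorators import task


# Hedged sketch: the examples below pass OpenAI model names (e.g. "gpt-4o-mini") to
# `model=`. Assuming the SDK forwards this value to Pydantic AI, a provider-prefixed
# identifier such as "anthropic:claude-3-5-sonnet-latest" would select an Anthropic
# model instead (and requires installing the corresponding optional dependency).
@task.llm(
    model="anthropic:claude-3-5-sonnet-latest",  # assumption: Pydantic AI model identifier
    result_type=str,
    system_prompt="Answer the question in one short paragraph.",
)
def answer_question(question: str | None = None) -> str:
    # no translation needed: pass the question straight through to the LLM
    return question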
We follow the taskflow pattern of Airflow with three decorators:
- `@task.llm`: Define a task that calls an LLM. Under the hood, this creates a Pydantic AI `Agent` with no tools.
- `@task.agent`: Define a task that calls an agent. You can pass in a Pydantic AI `Agent` directly.
- `@task.llm_branch`: Define a task that branches the control flow of a DAG based on the output of an LLM. Enforces that the LLM output is one of the downstream task_ids.
The function supplied to each decorator is a translation function that converts the Airflow task's input into the LLM's input. If you don't want to do any translation, you can just return the input unchanged.
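As a purely illustrative sketch of a translation function, the task below receives a dictionary from an upstream task and reformats it into a prompt string before it reaches the LLM. The `summarize_ticket` name and the `subject`/`body` fields are hypothetical; the decorator arguments mirror the examples later in this README.

from airflow.decorators import task


@task.llm(
    model="gpt-4o-mini",
    result_type=str,
    system_prompt="Summarize the following support ticket in one sentence.",
)
def summarize_ticket(ticket: dict | None = None) -> str:
    # translation step: the upstream task returns a dict, but the LLM expects text,
    # so convert the fields we care about into a single prompt string
    return f"Subject: {ticket['subject']}\n\nBody: {ticket['body']}"

If no reformatting is needed, the function body can simply return its input unchanged.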
AI workflows are becoming increasingly common as organizations look for pragmatic ways to get value out of LLMs. As with any workflow, it's important to have a flexible and scalable way to orchestrate them.
Airflow is a popular choice for orchestrating data pipelines. It's a powerful tool for managing the dependencies between tasks and for scheduling and monitoring them, and has been trusted by data teams everywhere for 10+ years. It comes "batteries included" with a rich set of capabilities:
- Flexible scheduling: run tasks on a fixed schedule, on-demand, or based on external events
- Dynamic task mapping: easily process multiple inputs in parallel with full error handling and observability
- Branching and conditional logic: change the control flow of a DAG based on the output of certain tasks
- Error handling: built-in support for retries, exponential backoff, and timeouts
- Resource management: limit the concurrency of tasks with Airflow Pools
- Monitoring: detailed logs and monitoring capabilities
- Scalability: designed for production workflows
This SDK is designed to make it easy to integrate LLM workflows into your Airflow pipelines. It allows you to do anything from simple LLM calls to complex agentic workflows.
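For example, a simple LLM call can be fanned out over many inputs with Airflow's dynamic task mapping, much like the product feedback example further down uses `.expand()`. This is only a sketch: the `classify_comment` task and the hard-coded comment list are made up for illustration.

import pendulum

from airflow.decorators import dag, task


@task.llm(
    model="gpt-4o-mini",
    result_type=str,
    system_prompt="Classify the sentiment of the comment as positive, negative, or neutral.",
)
def classify_comment(comment: str | None = None) -> str:
    # no translation needed: pass the comment straight through
    return comment


@dag(
    schedule=None,
    start_date=pendulum.datetime(2025, 1, 1, tz="UTC"),
    catchup=False,
)
def comment_sentiment():
    # dynamic task mapping: one mapped task instance (and one LLM call) per comment
    comments = ["Love the new UI", "The scheduler keeps crashing", "Docs are fine"]
    classify_comment.expand(comment=comments)


comment_sentiment()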
See the full set of examples in the examples/dags directory.
This example shows how to use the `@task.llm` decorator as part of an Airflow DAG. In the `@task.llm` decorator, we can specify a model and system prompt. The decorator allows you to transform the Airflow task's input into the LLM's input.
See full example: github_changelog.py
import os

import pendulum
from airflow.decorators import dag, task
from github import Github


@task
def get_recent_commits(data_interval_start: pendulum.DateTime, data_interval_end: pendulum.DateTime) -> list[str]:
    """
    This task returns the recent commits to the apache/airflow repository. In a real
    workflow, this task could pull the commits from any database or API.
    """
    print(f"Getting commits for {data_interval_start} to {data_interval_end}")
    gh = Github(os.getenv("GITHUB_TOKEN"))
    repo = gh.get_repo("apache/airflow")
    commits = repo.get_commits(since=data_interval_start, until=data_interval_end)

    return [f"{commit.commit.sha}: {commit.commit.message}" for commit in commits]


@task.llm(
    model="gpt-4o-mini",
    result_type=str,
    system_prompt="""
    Your job is to summarize the commits to the Airflow project given a week's worth
    of commits. Pay particular attention to large changes and new features as opposed
    to bug fixes and minor changes.

    You don't need to include every commit, just the most important ones. Add a one line
    overall summary of the changes at the top, followed by bullet points of the most
    important changes.

    Example output:

    This week, we made architectural changes to the core scheduler to make it more
    maintainable and easier to understand.

    - Made the scheduler 20% faster (commit 1234567)
    - Added a new task type: `example_task` (commit 1234568)
    - Added a new operator: `example_operator` (commit 1234569)
    - Added a new sensor: `example_sensor` (commit 1234570)
    """,
)
def summarize_commits(commits: list[str] | None = None) -> str:
    """
    This task summarizes the commits. You can add logic here to transform the input
    before it gets passed to the LLM.
    """
    # don't need to do any translation: join the commits into a single string
    return "\n".join(commits)


@task
def send_summaries(summaries: str):
    ...


@dag(
    schedule="@weekly",
    start_date=pendulum.datetime(2025, 3, 1, tz="UTC"),
    catchup=False,
)
def github_changelog():
    commits = get_recent_commits()
    summaries = summarize_commits(commits=commits)
    send_summaries(summaries)


github_changelog()
This example demonstrates how to use the `@task.llm` decorator to call an LLM and return a structured output. In this case, we're using a Pydantic model to validate the output of the LLM. We recommend using the `airflow_ai_sdk.BaseModel` class to define your Pydantic models in case we add more functionality in the future.
See full example: product_feedback_summarization.py
import pendulum

from typing import Literal, Any

from airflow.decorators import dag, task
from airflow.exceptions import AirflowSkipException

import airflow_ai_sdk as ai_sdk

from include.pii import mask_pii


@task
def get_product_feedback() -> list[str]:
    """
    This task returns a mocked list of product feedback. In a real workflow, this
    task would get the product feedback from a database or API.
    """
    ...


class ProductFeedbackSummary(ai_sdk.BaseModel):
    summary: str
    sentiment: Literal["positive", "negative", "neutral"]
    feature_requests: list[str]


@task.llm(
    model="gpt-4o-mini",
    result_type=ProductFeedbackSummary,
    system_prompt="""
    You are a helpful assistant that summarizes product feedback.
    """,
)
def summarize_product_feedback(feedback: str | None = None) -> ProductFeedbackSummary:
    """
    This task summarizes the product feedback. You can add logic here to transform the input
    before summarizing it.
    """
    # if the feedback doesn't mention Airflow, skip it
    if "Airflow" not in feedback:
        raise AirflowSkipException("Feedback does not mention Airflow")

    # mask PII in the feedback
    feedback = mask_pii(feedback)

    return feedback


@task
def upload_summaries(summaries: list[dict[str, Any]]):
    ...


@dag(
    schedule=None,
    start_date=pendulum.datetime(2021, 1, 1, tz="UTC"),
    catchup=False,
)
def product_feedback_summarization():
    feedback = get_product_feedback()
    summaries = summarize_product_feedback.expand(feedback=feedback)
    upload_summaries(summaries)


product_feedback_summarization()
This example shows how to build an AI agent that can autonomously invoke external tools (e.g., a knowledge base search) when answering a user question.
See full example: deep_research.py
import pendulum
import requests

from airflow.decorators import dag, task
from airflow.models.dagrun import DagRun
from airflow.models.param import Param
from bs4 import BeautifulSoup
from pydantic_ai import Agent
from pydantic_ai.common_tools.duckduckgo import duckduckgo_search_tool


# custom tool to get the content of a page
def get_page_content(url: str) -> str:
    """
    Get the content of a page.
    """
    response = requests.get(url)
    soup = BeautifulSoup(response.text, "html.parser")

    distillation_agent = Agent(
        "gpt-4o-mini",
        system_prompt="""
        You are responsible for distilling information from a text. The summary will be used by a research agent to generate a research report.

        Keep the summary concise and to the point, focusing on only key information.
        """,
    )

    return distillation_agent.run_sync(soup.get_text())


deep_research_agent = Agent(
    "o3-mini",
    system_prompt="""
    You are a deep research agent who is very skilled at distilling information from the web. You are given a query and your job is to generate a research report.

    You can search the web by using the `duckduckgo_search_tool`. You can also use the `get_page_content` tool to get the contents of a page.

    Keep going until you have enough information to generate a research report. Assume you know nothing about the query or contents, so you need to search the web for relevant information.

    Do not generate new information, only distill information from the web.
    """,
    tools=[duckduckgo_search_tool(), get_page_content],
)


@task.agent(agent=deep_research_agent)
def deep_research_task(dag_run: DagRun) -> str:
    """
    This task performs deep research on the given query.
    """
    query = dag_run.conf.get("query")

    if not query:
        raise ValueError("Query is required")

    print(f"Performing deep research on {query}")

    return query


@task
def upload_results(results: str):
    ...


@dag(
    schedule=None,
    start_date=pendulum.datetime(2025, 3, 1, tz="UTC"),
    catchup=False,
    params={
        "query": Param(
            type="string",
            default="How has the field of data engineering evolved in the last 5 years?",
        ),
    },
)
def deep_research():
    results = deep_research_task()
    upload_results(results)


deep_research()
This example demonstrates how to use the `@task.llm_branch` decorator to change the control flow of a DAG based on the output of an LLM. In this case, we're routing support tickets based on the severity of the ticket.
See full example: support_ticket_routing.py
import pendulum

from airflow.decorators import dag, task
from airflow.models.dagrun import DagRun


@task.llm_branch(
    model="gpt-4o-mini",
    system_prompt="""
    You are a support agent that routes support tickets based on the priority of the ticket.

    Here are the priority definitions:
    - P0: Critical issues that impact the user's ability to use the product, specifically for a production deployment.
    - P1: Issues that impact the user's ability to use the product, but not as severely (or not for their production deployment).
    - P2: Issues that are low priority and can wait until the next business day
    - P3: Issues that are not important or time sensitive

    Here are some examples of tickets and their priorities:
    - "Our production deployment just went down because it ran out of memory. Please help.": P0
    - "Our staging / dev / QA deployment just went down because it ran out of memory. Please help.": P1
    - "I'm having trouble logging in to my account.": P1
    - "The UI is not loading.": P1
    - "I need help setting up my account.": P2
    - "I have a question about the product.": P3
    """,
    allow_multiple_branches=True,
)
def route_ticket(dag_run: DagRun) -> str:
    return dag_run.conf.get("ticket")


@task
def handle_p0_ticket(ticket: str):
    print(f"Handling P0 ticket: {ticket}")


@task
def handle_p1_ticket(ticket: str):
    print(f"Handling P1 ticket: {ticket}")


@task
def handle_p2_ticket(ticket: str):
    print(f"Handling P2 ticket: {ticket}")


@task
def handle_p3_ticket(ticket: str):
    print(f"Handling P3 ticket: {ticket}")


@dag(
    start_date=pendulum.datetime(2025, 1, 1, tz="UTC"),
    schedule=None,
    catchup=False,
    params={"ticket": "Hi, our production deployment just went down because it ran out of memory. Please help."},
)
def support_ticket_routing():
    ticket = route_ticket()

    handle_p0_ticket(ticket)
    handle_p1_ticket(ticket)
    handle_p2_ticket(ticket)
    handle_p3_ticket(ticket)


support_ticket_routing()
Alternative AI tools for airflow-ai-sdk
Similar Open Source Tools

Tools4AI
Tools4AI is a Java-based Agentic Framework for building AI agents to integrate with enterprise Java applications. It enables the conversion of natural language prompts into actionable behaviors, streamlining user interactions with complex systems. By leveraging AI capabilities, it enhances productivity and innovation across diverse applications. The framework allows for seamless integration of AI with various systems, such as customer service applications, to interpret user requests, trigger actions, and streamline workflows. Prompt prediction anticipates user actions based on input prompts, enhancing user experience by proactively suggesting relevant actions or services based on context.

backtrack_sampler
Backtrack Sampler is a framework for experimenting with custom sampling algorithms that can backtrack the latest generated tokens. It provides a simple and easy-to-understand codebase for creating new sampling strategies. Users can implement their own strategies by creating new files in the `/strategy` directory. The repo includes examples for usage with llama.cpp and transformers, showcasing different strategies like Creative Writing, Anti-slop, Debug, Human Guidance, Adaptive Temperature, and Replace. The goal is to encourage experimentation and customization of backtracking algorithms for language models.

chores
The chores package provides a library of ergonomic LLM assistants designed to help users complete repetitive, hard-to-automate tasks quickly. Users can select code, trigger the chores addin, choose a helper, and watch their code be rewritten. The package offers chore helpers for tasks like converting to cli, testthat, and documenting functions with roxygen. Users can also create their own chore helpers by providing instructions in a markdown file. The cost of using helpers depends on the length of the prompt and the model chosen.

langevals
LangEvals is an all-in-one Python library for testing and evaluating LLM models. It can be used in notebooks for exploration, in pytest for writing unit tests, or as a server API for live evaluations and guardrails. The library is modular, with 20+ evaluators including Ragas for RAG quality, OpenAI Moderation, and Azure Jailbreak detection. LangEvals powers LangWatch evaluations and provides tools for batch evaluations on notebooks and unit test evaluations with PyTest. It also offers LangEvals evaluators for LLM-as-a-Judge scenarios and out-of-the-box evaluators for language detection and answer relevancy checks.

palimpzest
Palimpzest (PZ) is a tool for managing and optimizing workloads, particularly for data processing tasks. It provides a CLI tool and Python demos for users to register datasets, run workloads, and access results. Users can easily initialize their system, register datasets, and manage configurations using the CLI commands provided. Palimpzest also supports caching intermediate results and configuring for parallel execution with remote services like OpenAI and together.ai. The tool aims to streamline the workflow of working with datasets and optimizing performance for data extraction tasks.

AI
AI is an open-source Swift framework for interfacing with generative AI. It provides functionalities for text completions, image-to-text vision, function calling, DALLE-3 image generation, audio transcription and generation, and text embeddings. The framework supports multiple AI models from providers like OpenAI, Anthropic, Mistral, Groq, and ElevenLabs. Users can easily integrate AI capabilities into their Swift projects using the AI framework.

langchain
LangChain is a framework for developing Elixir applications powered by language models. It enables applications to connect language models to other data sources and interact with the environment. The library provides components for working with language models and off-the-shelf chains for specific tasks. It aims to assist in building applications that combine large language models with other sources of computation or knowledge. LangChain is written in Elixir and is not aimed for parity with the JavaScript and Python versions due to differences in programming paradigms and design choices. The library is designed to make it easy to integrate language models into applications and expose features, data, and functionality to the models.

ai2-scholarqa-lib
Ai2 Scholar QA is a system for answering scientific queries and literature review by gathering evidence from multiple documents across a corpus and synthesizing an organized report with evidence for each claim. It consists of a retrieval component and a three-step generator pipeline. The retrieval component fetches relevant evidence passages using the Semantic Scholar public API and reranks them. The generator pipeline includes quote extraction, planning and clustering, and summary generation. The system is powered by the ScholarQA class, which includes components like PaperFinder and MultiStepQAPipeline. It requires environment variables for Semantic Scholar API and LLMs, and can be run as local docker containers or embedded into another application as a Python package.

instructor-js
Instructor is a TypeScript library for structured extraction, powered by LLMs and designed for simplicity, transparency, and control. It stands out for its user-centric design. Whether you're a seasoned developer or just starting out, you'll find Instructor's approach intuitive and steerable.

llms
The 'llms' repository is a comprehensive guide on Large Language Models (LLMs), covering topics such as language modeling, applications of LLMs, statistical language modeling, neural language models, conditional language models, evaluation methods, transformer-based language models, practical LLMs like GPT and BERT, prompt engineering, fine-tuning LLMs, retrieval augmented generation, AI agents, and LLMs for computer vision. The repository provides detailed explanations, examples, and tools for working with LLMs.

neo4j-graphrag-python
The Neo4j GraphRAG package for Python is an official repository that provides features for creating and managing vector indexes in Neo4j databases. It aims to offer developers a reliable package with long-term commitment, maintenance, and fast feature updates. The package supports various Python versions and includes functionalities for creating vector indexes, populating them, and performing similarity searches. It also provides guidelines for installation, examples, and development processes such as installing dependencies, making changes, and running tests.

ActionWeaver
ActionWeaver is an AI application framework designed for simplicity, relying on OpenAI and Pydantic. It supports both OpenAI API and Azure OpenAI service. The framework allows for function calling as a core feature, extensibility to integrate any Python code, function orchestration for building complex call hierarchies, and telemetry and observability integration. Users can easily install ActionWeaver using pip and leverage its capabilities to create, invoke, and orchestrate actions with the language model. The framework also provides structured extraction using Pydantic models and allows for exception handling customization. Contributions to the project are welcome, and users are encouraged to cite ActionWeaver if found useful.

chatmemory
ChatMemory is a simple yet powerful long-term memory manager that facilitates communication between AI and users. It organizes conversation data into history, summary, and knowledge entities, enabling quick retrieval of context and generation of clear, concise answers. The tool leverages vector search on summaries/knowledge and detailed history to provide accurate responses. It balances speed and accuracy by using lightweight retrieval and fallback detailed search mechanisms, ensuring efficient memory management and response generation beyond mere data retrieval.

talking-avatar-with-ai
The 'talking-avatar-with-ai' project is a digital human system that utilizes OpenAI's GPT-3 for generating responses, Whisper for audio transcription, Eleven Labs for voice generation, and Rhubarb Lip Sync for lip synchronization. The system allows users to interact with a digital avatar that responds with text, facial expressions, and animations, creating a realistic conversational experience. The project includes setup for environment variables, chat prompt templates, chat model configuration, and structured output parsing to enhance the interaction with the digital human.

web-llm
WebLLM is a modular and customizable JavaScript package that brings language model chats directly onto web browsers with hardware acceleration. Everything runs inside the browser with no server support and is accelerated with WebGPU. WebLLM is fully compatible with the OpenAI API, meaning you can use the same OpenAI API with any open-source model locally, with functionalities including JSON mode, function calling, streaming, and more. This opens up many opportunities to build AI assistants for everyone while preserving privacy and enjoying GPU acceleration.
For similar jobs

promptflow
**Prompt flow** is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to build LLM apps with production quality.

deepeval
DeepEval is a simple-to-use, open-source LLM evaluation framework specialized for unit testing LLM outputs. It incorporates various metrics such as G-Eval, hallucination, answer relevancy, RAGAS, etc., and runs locally on your machine for evaluation. It provides a wide range of ready-to-use evaluation metrics, allows for creating custom metrics, integrates with any CI/CD environment, and enables benchmarking LLMs on popular benchmarks. DeepEval is designed for evaluating RAG and fine-tuning applications, helping users optimize hyperparameters, prevent prompt drifting, and transition from OpenAI to hosting their own Llama2 with confidence.

MegaDetector
MegaDetector is an AI model that identifies animals, people, and vehicles in camera trap images (which also makes it useful for eliminating blank images). This model is trained on several million images from a variety of ecosystems. MegaDetector is just one of many tools that aims to make conservation biologists more efficient with AI. If you want to learn about other ways to use AI to accelerate camera trap workflows, check out our overview of the field, affectionately titled "Everything I know about machine learning and camera traps".

leapfrogai
LeapfrogAI is a self-hosted AI platform designed to be deployed in air-gapped resource-constrained environments. It brings sophisticated AI solutions to these environments by hosting all the necessary components of an AI stack, including vector databases, model backends, API, and UI. LeapfrogAI's API closely matches that of OpenAI, allowing tools built for OpenAI/ChatGPT to function seamlessly with a LeapfrogAI backend. It provides several backends for various use cases, including llama-cpp-python, whisper, text-embeddings, and vllm. LeapfrogAI leverages Chainguard's apko to harden base python images, ensuring the latest supported Python versions are used by the other components of the stack. The LeapfrogAI SDK provides a standard set of protobuffs and python utilities for implementing backends and gRPC. LeapfrogAI offers UI options for common use-cases like chat, summarization, and transcription. It can be deployed and run locally via UDS and Kubernetes, built out using Zarf packages. LeapfrogAI is supported by a community of users and contributors, including Defense Unicorns, Beast Code, Chainguard, Exovera, Hypergiant, Pulze, SOSi, United States Navy, United States Air Force, and United States Space Force.

llava-docker
This Docker image for LLaVA (Large Language and Vision Assistant) provides a convenient way to run LLaVA locally or on RunPod. LLaVA is a powerful AI tool that combines natural language processing and computer vision capabilities. With this Docker image, you can easily access LLaVA's functionalities for various tasks, including image captioning, visual question answering, text summarization, and more. The image comes pre-installed with LLaVA v1.2.0, Torch 2.1.2, xformers 0.0.23.post1, and other necessary dependencies. You can customize the model used by setting the MODEL environment variable. The image also includes a Jupyter Lab environment for interactive development and exploration. Overall, this Docker image offers a comprehensive and user-friendly platform for leveraging LLaVA's capabilities.

carrot
The 'carrot' repository on GitHub provides a list of free and user-friendly ChatGPT mirror sites for easy access. The repository includes sponsored sites offering various GPT models and services. Users can find and share sites, report errors, and access stable and recommended sites for ChatGPT usage. The repository also includes a detailed list of ChatGPT sites, their features, and accessibility options, making it a valuable resource for ChatGPT users seeking free and unlimited GPT services.

TrustLLM
TrustLLM is a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, established benchmark, evaluation, and analysis of trustworthiness for mainstream LLMs, and discussion of open challenges and future directions. Specifically, we first propose a set of principles for trustworthy LLMs that span eight different dimensions. Based on these principles, we further establish a benchmark across six dimensions including truthfulness, safety, fairness, robustness, privacy, and machine ethics. We then present a study evaluating 16 mainstream LLMs in TrustLLM, consisting of over 30 datasets. The document explains how to use the trustllm python package to help you assess the performance of your LLM in trustworthiness more quickly. For more details about TrustLLM, please refer to project website.

AI-YinMei
AI-YinMei is an AI virtual anchor Vtuber development tool (N card version). It supports fastgpt knowledge base chat dialogue, a complete set of solutions for LLM large language models: [fastgpt] + [one-api] + [Xinference], supports docking bilibili live broadcast barrage reply and entering live broadcast welcome speech, supports Microsoft edge-tts speech synthesis, supports Bert-VITS2 speech synthesis, supports GPT-SoVITS speech synthesis, supports expression control Vtuber Studio, supports painting stable-diffusion-webui output OBS live broadcast room, supports painting picture pornography public-NSFW-y-distinguish, supports search and image search service duckduckgo (requires magic Internet access), supports image search service Baidu image search (no magic Internet access), supports AI reply chat box [html plug-in], supports AI singing Auto-Convert-Music, supports playlist [html plug-in], supports dancing function, supports expression video playback, supports head touching action, supports gift smashing action, supports singing automatic start dancing function, chat and singing automatic cycle swing action, supports multi scene switching, background music switching, day and night automatic switching scene, supports open singing and painting, let AI automatically judge the content.