Next-Gen-Dialogue
AI-powered visual dialogue designer for Unity
Stars: 89
Next Gen Dialogue is a Unity dialogue plugin that combines traditional dialogue design with AI techniques. It features a visual dialogue editor, modular dialogue functions, AIGC support for generating dialogue at runtime, AIGC dialogue baking in the Editor, and runtime debugging. The plugin aims to provide an experimental approach to dialogue design using large language models. Users can create dialogue trees, generate dialogue content using AI, and bake dialogue content in advance. The tool also supports localization, VITS speech synthesis, and one-click translation. Users can also create dialogue in code using the DialogueSystem and dialogue tree components.
README:
Next Gen Dialogue is a Unity dialogue plugin designed around large language models; it won the 2023 Unity AI Plugin Excellence Award from Unity China.
It combines the traditional dialogue design pattern with AIGC to simplify your workflow. Hope you enjoy it.
Demo video: https://www.bilibili.com/video/BV1hg4y1U7FG
Contents:
- Features
- Supported version
- Install
- Quick Start
- Nodes
- Modules
- Extensions
- Resolvers
- Create Dialogue in Script

Features:
- Visual dialogue editor
- Modular dialogue function
- AIGC dialogue
- Custom actions support

Supported version: Unity 2022.3 or later.
Install: download the package via the Unity Package Manager using the git URL https://github.com/AkiKurisu/Next-Gen-Dialogue.git, with the following dependencies:
"dependencies": {
"com.kurisu.chris": "1.2.4",
"com.kurisu.chris-modules": "1.2.4",
"com.kurisu.ceres": "0.3.2",
"com.unity.nuget.newtonsoft-json": "3.2.1"
}The experimental features of Next Gen Dialogue are placed in the Modules folder and will not be enabled without installing the corresponding dependencies.
You can view the dependencies in the README.md under each module's folder.
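For reference, a minimal Packages/manifest.json sketch combining the git URL install with the dependencies above might look as follows. Note that the "com.kurisu.ngd" key is a hypothetical package name used for illustration; check the package.json in the repository for the actual identifier:

{
  "dependencies": {
    "com.kurisu.ngd": "https://github.com/AkiKurisu/Next-Gen-Dialogue.git",
    "com.kurisu.chris": "1.2.4",
    "com.kurisu.chris-modules": "1.2.4",
    "com.kurisu.ceres": "0.3.2",
    "com.unity.nuget.newtonsoft-json": "3.2.1"
  }
}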
If you are using this plugin for the first time, it is recommended to play the following example scenes first:
1. Normal Usage.unity: demonstrates the use of NextGenDialogueComponent and NextGenDialogueGraphAsset;
2. Editor Bake Dialogue.unity: demonstrates baking dialogue in the Editor with the AI Dialogue Baker;
3. Build Dialogue by Code.unity: demonstrates generating dialogue from code;
4. Bake Novel.unity: an example of using ChatGPT to generate dialogue trees indefinitely.
NextGenDialogueComponent and NextGenDialogueGraphAsset are used to store dialogue data. For ease of understanding, both are collectively referred to as the dialogue tree.
The following steps create a dialogue tree containing a single dialogue piece and a single option:

1. Mount NextGenDialogueComponent on any GameObject.
2. Click Open Dialogue Graph to enter the editor.
3. Create a Container/Dialogue node; this node is the dialogue container used in the game.
4. Connect the Parent port of the Dialogue node to the Root node. You can have multiple dialogues in one dialogue tree, but only the one connected to the Root node will be used.
5. Create a Container/Piece node; this is our first dialogue piece.
6. Right-click the Piece node and choose Add Module to add a Content Module; fill in the dialogue text in its Content field.
7. Create a Container/Option node as a dialogue option corresponding to the Piece node.
8. Right-click the Piece node, choose Add Option, and connect the Piece with the Option.
9. Very important: at least one Piece node must be added to the Dialogue as the first piece of the dialogue. You can click Collect All Pieces in the context menu to collect all pieces in the graph into the dialogue and adjust their priority. For priority, refer to the Condition module under General Modules.
10. Click Save in the upper left of the editor to save the dialogue.
11. Click Play to enter Play Mode.
12. Click Play Dialogue on the NextGenDialogueComponent to play the conversation.
13. Click Open Dialogue Graph to enter debug mode; the currently playing dialogue piece is displayed in green.
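If you prefer triggering playback from script rather than the inspector button, a minimal sketch might look like this. The PlayDialogue() method name is an assumption mirroring the inspector's Play Dialogue button, so verify it against the component's actual API:

using UnityEngine;

public class DialogueTrigger : MonoBehaviour
{
    // The component holding the dialogue graph created above.
    [SerializeField] private NextGenDialogueComponent dialogueComponent;

    private void Start()
    {
        // Assumed method name mirroring the inspector's "Play Dialogue"
        // button; check the NextGenDialogueComponent API.
        dialogueComponent.PlayDialogue();
    }
}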
Since V2, Next Gen Dialogue uses Ceres.Flow to implement custom actions.
You can now add ExecuteFlowModule to fire a flow execution event at runtime.
For more details about Ceres.Flow, please refer to AkiKurisu/Ceres.
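As a hedged sketch of what this looks like when building dialogue in code (the ExecuteFlowModule constructor is assumed to be parameterless here; see AkiKurisu/Ceres for the actual API):

// Hedged sketch: attach an ExecuteFlowModule so that a Ceres.Flow
// execution event fires when this option is selected at runtime.
private static Option MakeFlowOption()
{
    var option = Option.GetPooled();
    option.Content = "Open the door";
    option.AddModule(new ExecuteFlowModule()); // assumption: default constructor
    return option;
}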
You can use the AI Dialogue Baker to bake AI-generated dialogue content in advance while designing the dialogue tree, improving workflow efficiency without affecting your design framework.
- The basic dialogue graph design is the same as in the Quick Start above.
- Add an AI Bake Module to the pieces or options that need to be baked; remove the module from nodes that do not need baking.
- Select the type of LLM to bake with.
- Select, one by one, the nodes the AI Dialogue Baker should recognize; recognition order follows the order of selection. Select the node to be baked last.
- If the selection succeeds, a preview of the input content appears at the bottom of the editor.
- Click the Bake Dialogue button on the AI Bake Module and wait for the AI response.
- After the language model responds, a Content Module is automatically added to the node to store the baked dialogue content.
- You can keep generating conversations as needed.
Unlike talking to the AI directly when baking dialogue, novel mode lets the AI act as copywriter and planner to write the dialogue, so it can control options and pieces more precisely. Refer to the example: 4. Bake Novel.unity.
NGD uses a node-based visual editor framework; most features are presented through nodes. Dialogue construction is divided into the following parts in NGD:
| Name | Description |
|---|---|
| Dialogue | Used to define dialogues, such as the first piece of the dialogue and other attributes |
| Piece | Dialogue piece, usually storing the core dialogue content |
| Option | Dialogue option, usually used for interaction and bridging dialogues |
In addition to the above nodes, NGD uses a more flexible concept: the Module. You can use Modules to change the output form of the dialogue, for example Google translation, localization, adding callbacks, or serving as a markup.
The following are built-in general modules:
| Name | Description |
|---|---|
| Content | Provides text content for an Option or Piece |
| TargetID | Adds a jump target dialogue piece for an Option |
| PreUpdate | Adds pre-update behavior to a Container; it updates when jumping to the Container |
| Callback | Adds callback behavior to an Option; callbacks run after the option is selected |
| Condition | Adds judgment behavior to an Option or Piece; it is evaluated when jumping to the Container, and if it returns Status.Failure the Container is discarded. If the Container is the first Piece of the dialogue, the system tries to jump to the next Piece in the order the Pieces are placed in the dialogue |
| Next Piece | Adds the next dialogue piece after the Piece ends; if there is no option, it jumps to the specified piece after the Piece's content has played |
| Google Translate | Uses Google Translate to translate the content of the current Option or Piece |
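As an illustration, the Callback and TargetID rows above correspond to the following code, grounded in the scripting example later in this document:

// An Option carrying a Callback module (runs after selection) and a
// TargetID (the ID of the piece to jump to).
private static Option MakeLoggingOption()
{
    var option = Option.GetPooled();
    option.Content = "Log and jump";
    option.TargetID = "02"; // jump to the piece whose ID is "02"
    option.AddModule(new CallBackModule(() => UnityEngine.Debug.Log("Selected!")));
    return option;
}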
The following are the built-in AIGC modules:
| Name | Description |
|---|---|
| Prompt | Prompt words that provide the basis for subsequent dialogue generation |
Editor modules are used to provide some useful tools for the editor, such as translation.
Add an Editor/EditorTranslateModule to the Dialogue node, set the source language (sourceLanguageCode) and target language (targetLanguageCode), then right-click and select Translate All Contents to translate every Piece and Option that has a Content Module.
For fields outside the Content Module, adding the TranslateEntryAttribute to a field lets you right-click a single node to translate it.
public class ExampleAction : Action
{
    // Fields marked with [TranslateEntry] can be translated.
    // Note: this only works for SharedString and string fields.
    [SerializeField, Multiline, TranslateEntry]
    private SharedString value;
}

The following extensions require installing the corresponding package or configuring the corresponding environment before use:
Dialogue localization is supported based on the UnityEngine.Localization package.
| Name | Description |
|---|---|
| Localized Content | Provides content for an Option or Piece after fetching the text from localization |
For VITS local deployment, please refer to this repository: VITS Simple API
If you want to use the VITS module, use it together with the VITS Resolver; for how to use Resolvers, refer to the Resolvers section below.
| Name | Description |
|---|---|
| VITS Voice | Uses the VITS speech synthesis model to generate speech for a Piece or Option in real time |
Before use, install the corresponding dependencies of Modules/VITS and start the local VITS server (refer to Modules/VITS/README.md). Add AIGC/VITSModule to the node that needs speech, then right-click and select Bake Audio.
If you are satisfied with the generated audio, click Download to save it locally and complete the baking; otherwise the audio file is not retained after exiting the editor.
After baking is complete, the VITS server no longer needs to run at runtime.
- If the AudioClip field is empty, runtime generation mode is enabled by default; if the server cannot be reached, the conversation may not proceed. If you only need the baking function, keep the AudioClip field non-empty at all times.
A Resolver detects the Modules in a Container at runtime and executes a series of preset logic, such as injecting dependencies and executing behaviors. The differences between NGD's built-in Resolvers are as follows:
| Name | Description |
|---|---|
| Default Resolver | The most basic resolver, supporting all built-in common modules |
| VITS Resolver | Additionally detect VITS modules to generate voice in real time |
- In-scene global Resolver: mount the VITSSetup script on any GameObject to enable the AI Resolver in the scene.
- Dialogue-specified Resolver: add a VITSResolverModule to the Dialogue node to specify the resolver used by that dialogue. You can also click the Setting button in the upper right corner of the module and choose which Resolvers to replace in Advanced Settings.
NGD is divided into two parts: DialogueSystem and DialogueGraph. The former defines the data structure of the dialogue, which a resolver interprets after receiving the data; the latter provides a visual scripting solution and implements the former's interfaces. You can therefore also write dialogues in script, as in the following example:
using UnityEngine;
public class CodeDialogueBuilder : MonoBehaviour
{
private RuntimeDialogueBuilder _builder;
private void Start()
{
PlayDialogue();
}
private void PlayDialogue()
{
var dialogueSystem = DialogueSystem.Get();
_builder = new RuntimeDialogueBuilder();
// First Piece
_builder.AddPiece(GetFirstPiece());
// Second Piece
_builder.AddPiece(GetSecondPiece());
dialogueSystem.StartDialogue(_builder);
}
private static Piece GetFirstPiece()
{
var piece = Piece.GetPooled();
piece.AddContent("This is the first dialogue piece");
piece.ID = "01";
piece.AddOption(new Option
{
Content = "Jump to Next",
TargetID = "02"
});
return piece;
}
private static Piece GetSecondPiece()
{
var piece = Piece.GetPooled();
piece.AddContent("This is the second dialogue piece");
piece.ID = "02";
piece.AddOption(GetFirstOption());
piece.AddOption(GetSecondOption());
return piece;
}
private static Option GetFirstOption()
{
var callBackOption = Option.GetPooled();
// Add CallBack Module
callBackOption.AddModule(new CallBackModule(() => Debug.Log("Hello World!")));
callBackOption.Content = "Log";
return callBackOption;
}
private static Option GetSecondOption()
{
var option = Option.GetPooled();
option.Content = "Back To First";
option.TargetID = "01";
return option;
}
}
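To try this example, attach CodeDialogueBuilder to any GameObject in a scene that already has a dialogue resolver and UI set up (as in the bundled example scenes); the two pieces then loop between each other through their TargetID values "01" and "02".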
Alternative AI tools for Next-Gen-Dialogue
Similar Open Source Tools
LLMUnity
LLM for Unity enables seamless integration of Large Language Models (LLMs) within the Unity engine, allowing users to create intelligent characters for immersive player interactions. The tool supports major LLM models, runs locally without internet access, offers fast inference on CPU and GPU, and is easy to set up with a single line of code. It is free for both personal and commercial use, tested on Unity 2021 LTS, 2022 LTS, and 2023. Users can build multiple AI characters efficiently, use remote servers for processing, and customize model settings for text generation.
vscode-pddl
The vscode-pddl extension provides comprehensive support for Planning Domain Description Language (PDDL) in Visual Studio Code. It enables users to model planning domains, validate them, industrialize planning solutions, and run planners. The extension offers features like syntax highlighting, auto-completion, plan visualization, plan validation, plan happenings evaluation, search debugging, and integration with Planning.Domains. Users can create PDDL files, run planners, visualize plans, and debug search algorithms efficiently within VS Code.
MARS5-TTS
MARS5 is a novel English speech model (TTS) developed by CAMB.AI, featuring a two-stage AR-NAR pipeline with a unique NAR component. The model can generate speech for various scenarios like sports commentary and anime with just 5 seconds of audio and a text snippet. It allows steering prosody using punctuation and capitalization in the transcript. Speaker identity is specified using an audio reference file, enabling 'deep clone' for improved quality. The model can be used via torch.hub or HuggingFace, supporting both shallow and deep cloning for inference. Checkpoints are provided for AR and NAR models, with hardware requirements of 750M+450M params on GPU. Contributions to improve model stability, performance, and reference audio selection are welcome.
AI
AI is an open-source Swift framework for interfacing with generative AI. It provides functionalities for text completions, image-to-text vision, function calling, DALLE-3 image generation, audio transcription and generation, and text embeddings. The framework supports multiple AI models from providers like OpenAI, Anthropic, Mistral, Groq, and ElevenLabs. Users can easily integrate AI capabilities into their Swift projects using AI framework.
comfyui_LLM_party
COMFYUI LLM PARTY is a node library designed for LLM workflow development in ComfyUI, an extremely minimalist UI interface primarily used for AI drawing and SD model-based workflows. The project aims to provide a complete set of nodes for constructing LLM workflows, enabling users to easily integrate them into existing SD workflows. It features various functionalities such as API integration, local large model integration, RAG support, code interpreters, online queries, conditional statements, looping links for large models, persona mask attachment, and tool invocations for weather lookup, time lookup, knowledge base, code execution, web search, and single-page search. Users can rapidly develop web applications using API + Streamlit and utilize LLM as a tool node. Additionally, the project includes an omnipotent interpreter node that allows the large model to perform any task, with recommendations to use the 'show_text' node for display output.
ChatGPT-Telegram-Bot
The ChatGPT Telegram Bot is a powerful Telegram bot that utilizes various GPT models, including GPT3.5, GPT4, GPT4 Turbo, GPT4 Vision, DALL·E 3, Groq Mixtral-8x7b/LLaMA2-70b, and Claude2.1/Claude3 opus/sonnet API. It enables users to engage in efficient conversations and information searches on Telegram. The bot supports multiple AI models, online search with DuckDuckGo and Google, user-friendly interface, efficient message processing, document interaction, Markdown rendering, and convenient deployment options like Zeabur, Replit, and Docker. Users can set environment variables for configuration and deployment. The bot also provides Q&A functionality, supports model switching, and can be deployed in group chats with whitelisting. The project is open source under GPLv3 license.
brokk
Brokk is a code assistant designed to understand code semantically, allowing LLMs to work effectively on large codebases. It offers features like agentic search, summarizing related classes, parsing stack traces, adding source for usages, and autonomously fixing errors. Users can interact with Brokk through different panels and commands, enabling them to manipulate context, ask questions, search codebase, run shell commands, and more. Brokk helps with tasks like debugging regressions, exploring codebase, AI-powered refactoring, and working with dependencies. It is particularly useful for making complex, multi-file edits with o1pro.
Mapperatorinator
Mapperatorinator is a multi-model framework that uses spectrogram inputs to generate fully featured osu! beatmaps for all gamemodes and assist modding beatmaps. The project aims to automatically generate rankable quality osu! beatmaps from any song with a high degree of customizability. The tool is built upon osuT5 and osu-diffusion, utilizing GPU compute and instances on vast.ai for development. Users can responsibly use AI in their beatmaps with this tool, ensuring disclosure of AI usage. Installation instructions include cloning the repository, creating a virtual environment, and installing dependencies. The tool offers a Web GUI for user-friendly experience and a Command-Line Inference option for advanced configurations. Additionally, an Interactive CLI script is available for terminal-based workflow with guided setup. The tool provides generation tips and features MaiMod, an AI-driven modding tool for osu! beatmaps. Mapperatorinator tokenizes beatmaps, utilizes a model architecture based on HF Transformers Whisper model, and offers multitask training format for conditional generation. The tool ensures seamless long generation, refines coordinates with diffusion, and performs post-processing for improved beatmap quality. Super timing generator enhances timing accuracy, and LoRA fine-tuning allows adaptation to specific styles or gamemodes. The project acknowledges credits and related works in the osu! community.
aici
The Artificial Intelligence Controller Interface (AICI) lets you build Controllers that constrain and direct output of a Large Language Model (LLM) in real time. Controllers are flexible programs capable of implementing constrained decoding, dynamic editing of prompts and generated text, and coordinating execution across multiple, parallel generations. Controllers incorporate custom logic during the token-by-token decoding and maintain state during an LLM request. This allows diverse Controller strategies, from programmatic or query-based decoding to multi-agent conversations to execute efficiently in tight integration with the LLM itself.
local-talking-llm
The 'local-talking-llm' repository provides a tutorial on building a voice assistant similar to Jarvis or Friday from Iron Man movies, capable of offline operation on a computer. The tutorial covers setting up a Python environment, installing necessary libraries like rich, openai-whisper, suno-bark, langchain, sounddevice, pyaudio, and speechrecognition. It utilizes Ollama for Large Language Model (LLM) serving and includes components for speech recognition, conversational chain, and speech synthesis. The implementation involves creating a TextToSpeechService class for Bark, defining functions for audio recording, transcription, LLM response generation, and audio playback. The main application loop guides users through interactive voice-based conversations with the assistant.
talking-avatar-with-ai
The 'talking-avatar-with-ai' project is a digital human system that utilizes OpenAI's GPT-3 for generating responses, Whisper for audio transcription, Eleven Labs for voice generation, and Rhubarb Lip Sync for lip synchronization. The system allows users to interact with a digital avatar that responds with text, facial expressions, and animations, creating a realistic conversational experience. The project includes setup for environment variables, chat prompt templates, chat model configuration, and structured output parsing to enhance the interaction with the digital human.
vigenair
ViGenAiR is a tool that harnesses the power of Generative AI models on Google Cloud Platform to automatically transform long-form Video Ads into shorter variants, targeting different audiences. It generates video, image, and text assets for Demand Gen and YouTube video campaigns. Users can steer the model towards generating desired videos, conduct A/B testing, and benefit from various creative features. The tool offers benefits like diverse inventory, compelling video ads, creative excellence, user control, and performance insights. ViGenAiR works by analyzing video content, splitting it into coherent segments, and generating variants following Google's best practices for effective ads.
blurt
Blurt is a Gnome shell extension that enables accurate speech-to-text input in Linux. It is based on the command line utility NoteWhispers and supports Gnome shell version 48. Users can transcribe speech using a local whisper.cpp installation or a whisper.cpp server. The extension allows for easy setup, start/stop of speech-to-text input with key bindings or icon click, and provides visual indicators during operation. It offers convenience by enabling speech input into any window that allows text input, with the transcribed text sent to the clipboard for easy pasting.
chatdev
ChatDev IDE is a tool for building your AI agent, Whether it's NPCs in games or powerful agent tools, you can design what you want for this platform. It accelerates prompt engineering through **JavaScript Support** that allows implementing complex prompting techniques.
gptauthor
GPT Author is a command-line tool designed to help users write long form, multi-chapter stories by providing a story prompt and generating a synopsis and subsequent chapters using ChatGPT. Users can review and make changes to the generated content before finalizing the story output in Markdown and HTML formats. The tool aims to unleash storytelling genius by combining human input with AI-generated content, offering a seamless writing experience for creating engaging narratives.
For similar tasks
open-dubbing
Open dubbing is an AI dubbing system that uses machine learning models to automatically translate and synchronize audio dialogue into different languages. It is designed as a command line tool. The project is experimental and aims to explore speech-to-text, text-to-speech, and translation systems combined. It supports multiple text-to-speech engines, translation engines, and gender voice detection. The tool can automatically dub videos, detect source language, and is built on open-source models. The roadmap includes better voice control, optimization for long videos, and support for multiple video input formats. Users can post-edit dubbed files by manually adjusting text, voice, and timings. Supported languages vary based on the combination of systems used.
For similar jobs
promptflow
**Prompt flow** is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to build LLM apps with production quality.
deepeval
DeepEval is a simple-to-use, open-source LLM evaluation framework specialized for unit testing LLM outputs. It incorporates various metrics such as G-Eval, hallucination, answer relevancy, RAGAS, etc., and runs locally on your machine for evaluation. It provides a wide range of ready-to-use evaluation metrics, allows for creating custom metrics, integrates with any CI/CD environment, and enables benchmarking LLMs on popular benchmarks. DeepEval is designed for evaluating RAG and fine-tuning applications, helping users optimize hyperparameters, prevent prompt drifting, and transition from OpenAI to hosting their own Llama2 with confidence.
MegaDetector
MegaDetector is an AI model that identifies animals, people, and vehicles in camera trap images (which also makes it useful for eliminating blank images). This model is trained on several million images from a variety of ecosystems. MegaDetector is just one of many tools that aims to make conservation biologists more efficient with AI. If you want to learn about other ways to use AI to accelerate camera trap workflows, check out our of the field, affectionately titled "Everything I know about machine learning and camera traps".
leapfrogai
LeapfrogAI is a self-hosted AI platform designed to be deployed in air-gapped resource-constrained environments. It brings sophisticated AI solutions to these environments by hosting all the necessary components of an AI stack, including vector databases, model backends, API, and UI. LeapfrogAI's API closely matches that of OpenAI, allowing tools built for OpenAI/ChatGPT to function seamlessly with a LeapfrogAI backend. It provides several backends for various use cases, including llama-cpp-python, whisper, text-embeddings, and vllm. LeapfrogAI leverages Chainguard's apko to harden base python images, ensuring the latest supported Python versions are used by the other components of the stack. The LeapfrogAI SDK provides a standard set of protobuffs and python utilities for implementing backends and gRPC. LeapfrogAI offers UI options for common use-cases like chat, summarization, and transcription. It can be deployed and run locally via UDS and Kubernetes, built out using Zarf packages. LeapfrogAI is supported by a community of users and contributors, including Defense Unicorns, Beast Code, Chainguard, Exovera, Hypergiant, Pulze, SOSi, United States Navy, United States Air Force, and United States Space Force.
llava-docker
This Docker image for LLaVA (Large Language and Vision Assistant) provides a convenient way to run LLaVA locally or on RunPod. LLaVA is a powerful AI tool that combines natural language processing and computer vision capabilities. With this Docker image, you can easily access LLaVA's functionalities for various tasks, including image captioning, visual question answering, text summarization, and more. The image comes pre-installed with LLaVA v1.2.0, Torch 2.1.2, xformers 0.0.23.post1, and other necessary dependencies. You can customize the model used by setting the MODEL environment variable. The image also includes a Jupyter Lab environment for interactive development and exploration. Overall, this Docker image offers a comprehensive and user-friendly platform for leveraging LLaVA's capabilities.
carrot
The 'carrot' repository on GitHub provides a list of free and user-friendly ChatGPT mirror sites for easy access. The repository includes sponsored sites offering various GPT models and services. Users can find and share sites, report errors, and access stable and recommended sites for ChatGPT usage. The repository also includes a detailed list of ChatGPT sites, their features, and accessibility options, making it a valuable resource for ChatGPT users seeking free and unlimited GPT services.
TrustLLM
TrustLLM is a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, established benchmark, evaluation, and analysis of trustworthiness for mainstream LLMs, and discussion of open challenges and future directions. Specifically, we first propose a set of principles for trustworthy LLMs that span eight different dimensions. Based on these principles, we further establish a benchmark across six dimensions including truthfulness, safety, fairness, robustness, privacy, and machine ethics. We then present a study evaluating 16 mainstream LLMs in TrustLLM, consisting of over 30 datasets. The document explains how to use the trustllm python package to help you assess the performance of your LLM in trustworthiness more quickly. For more details about TrustLLM, please refer to project website.
AI-YinMei
AI-YinMei is an AI virtual anchor Vtuber development tool (N card version). It supports fastgpt knowledge base chat dialogue, a complete set of solutions for LLM large language models: [fastgpt] + [one-api] + [Xinference], supports docking bilibili live broadcast barrage reply and entering live broadcast welcome speech, supports Microsoft edge-tts speech synthesis, supports Bert-VITS2 speech synthesis, supports GPT-SoVITS speech synthesis, supports expression control Vtuber Studio, supports painting stable-diffusion-webui output OBS live broadcast room, supports painting picture pornography public-NSFW-y-distinguish, supports search and image search service duckduckgo (requires magic Internet access), supports image search service Baidu image search (no magic Internet access), supports AI reply chat box [html plug-in], supports AI singing Auto-Convert-Music, supports playlist [html plug-in], supports dancing function, supports expression video playback, supports head touching action, supports gift smashing action, supports singing automatic start dancing function, chat and singing automatic cycle swing action, supports multi scene switching, background music switching, day and night automatic switching scene, supports open singing and painting, let AI automatically judge the content.