Next-Gen-Dialogue
AI powered dialogue visual designer for Unity
Next Gen Dialogue is a Unity dialogue plugin that combines traditional dialogue design with AI techniques. It features a visual dialogue editor, modular dialogue functions, AIGC support for generating dialogue at runtime, AIGC baking dialogue in Editor, and runtime debugging. The plugin aims to provide an experimental approach to dialogue design using large language models. Users can create dialogue trees, generate dialogue content using AI, and bake dialogue content in advance. The tool also supports localization, VITS speech synthesis, and one-click translation. Users can create dialogue by code using the DialogueSystem and DialogueTree components.
README:
Read this document in Chinese: 中文文档
- Features
- RoadMap
- Supported version
- Install
- Quick Start
- Nodes
- Modules
- Experimental Function Introduction
- Resolvers
- Create Dialogue by Code
Next Gen Dialogue (hereinafter referred to as NGD) is a Unity dialogue plugin built around large language model design, and it won the Unity AI Plugin Excellence Award from Unity China. It combines traditional dialogue design methods with AI techniques. This package is currently an experimental attempt; we hope you enjoy it.
It has the following features:
- Visual dialogue editor
- Modular dialogue functions
- AIGC dialogue generation at runtime
- AIGC dialogue baking in the Editor
- Runtime debugging
Demo project: https://github.com/AkiKurisu/Next-Gen-Dialogue-Example-Project
Demo video: https://www.bilibili.com/video/BV1hg4y1U7FG
RoadMap:
- Use Unity Sentis to run inference for VITS and LLM models, instead of relying on a Python API that requires a network connection and a server (there are currently technical limitations).
Supported version:
- Unity 2021.3 or later
Install:
Download the package in the Unity Package Manager using the git URL https://github.com/AkiKurisu/Next-Gen-Dialogue.git
The experimental features of Next Gen Dialogue are placed in the Modules folder and will not be enabled unless the corresponding dependencies are installed. You can view the dependencies in the README.md document under each module's folder.
To use the core functions, you need to install Newtonsoft Json in the Package Manager.
Quick Start:
If you are using this plugin for the first time, it is recommended to play the following example scenes first:
1. Normal Usage.unity: demonstrates the use of NextGenDialogueTree and NextGenDialogueTreeSO.
2. GPT Generate dialogue.unity: samples of generating dialogue content with ChatGPT at runtime.
3. Local LLM Generate dialogue.unity: a sample of generating dialogue with a locally deployed large language model at runtime.
4. Editor Bake Dialogue.unity: a sample of baking dialogue with the AI Dialogue Baker in the Editor.
5. Build Dialogue by Code.unity: demonstrates generating dialogue from code.
6. Bake Novel.unity: an example of using ChatGPT to generate dialogue trees without limit.
NextGenDialogueTree and NextGenDialogueTreeSO are used to store dialogue data; for ease of understanding, both are referred to collectively as the dialogue tree. The following steps create a dialogue tree containing only a single piece and a single option:
1. Mount NextGenDialogueTree on any GameObject.
2. Click `Edit DialogueTree` to enter the editor.
3. Create a Container/Dialogue node; this node is the dialogue container used in the game.
4. Connect the Parent port of the Dialogue node to the Root node. You can have multiple Dialogue nodes in one dialogue tree, but only those connected to the Root node will be used.
5. Create a Container/Piece node to create your first dialogue piece.
6. Right-click the Piece node, select `Add Module`, and add a `Content Module`; fill in the dialogue text in `content`.
7. Create a Container/Option node to create a dialogue option corresponding to the Piece node.
8. Right-click the Piece node, select `Add Option`, and connect the Piece with the Option.
9. Very important: at least one Piece node needs to be added to the Dialogue as the first piece of the dialogue. You can right-click the Dialogue node's `Add Piece` and either connect the Piece or reference its PieceID. You can also right-click the Dialogue node's `Collect All Pieces` to add all Pieces in the graph to the dialogue and adjust their priority. For priority, please refer to 《General Module - Condition Module》.
10. Click `Save` in the upper left of the editor to save the dialogue.
11. Click Play to enter PlayMode.
12. Click `Play dialogue` on the NextGenDialogueTree to play the dialogue.
13. Click `Debug DialogueTree` to enter debug mode.
- Tips: the currently played dialogue piece is displayed in green.
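If you want to trigger playback from gameplay code instead of the inspector button, a minimal sketch follows. Note that `PlayDialogue()` and the `Kurisu.NGDT` namespace are assumptions (the method name mirrors the inspector button, and the namespace follows the plugin's convention shown later in this README), so verify both against your installed version:
using UnityEngine;
using Kurisu.NGDT; // assumed namespace, following the Kurisu.NGDT.* convention used elsewhere in NGD

public class DialogueTrigger : MonoBehaviour
{
    [SerializeField]
    private NextGenDialogueTree dialogueTree;

    private void Update()
    {
        // Equivalent to clicking `Play dialogue` in the inspector;
        // the method name is an assumption mirroring that button.
        if (Input.GetKeyDown(KeyCode.E))
        {
            dialogueTree.PlayDialogue();
        }
    }
}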
Traditional dialogue design depends entirely on the designer. If you want to make dialogue more personalized, you can try AIGC. In addition to ChatGPT, you can also use a large language model deployed locally. Since such models depend on a Python environment, using them in Unity requires a backend server for network communication.
Tips: the following popular backends are currently supported; choose one according to your needs and hardware:
- KoboldAI-KoboldCPP's Generate mode; KoboldCPP supports CPU inference
- Oobabooga-Text-Generation-WebUI's Generate mode; the WebUI has high memory usage, and running Unity on the same machine will affect performance
- ChatGLM2-6B's API (Generate mode) and OpenAI-type API (Chat mode); ChatGLM is a powerful and efficient Chinese-English language model
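For context, Generate mode is a plain HTTP exchange: you POST a raw prompt and receive a continuation. NGD handles this for you through its resolvers, but a minimal Unity sketch against a local KoboldCPP server looks roughly like the following (the default port 5001 and the /api/v1/generate endpoint are assumptions based on KoboldCPP's standard setup):
using System.Collections;
using System.Text;
using UnityEngine;
using UnityEngine.Networking;

public class GenerateModeExample : MonoBehaviour
{
    // Default local KoboldCPP address; adjust to your server's address and port.
    private const string Url = "http://127.0.0.1:5001/api/v1/generate";

    private IEnumerator Start()
    {
        // Generate mode: send a raw prompt, receive the model's continuation.
        string json = "{\"prompt\": \"Villager: Welcome to our town!\\nPlayer:\", \"max_length\": 80}";
        using var request = new UnityWebRequest(Url, "POST");
        request.uploadHandler = new UploadHandlerRaw(Encoding.UTF8.GetBytes(json));
        request.downloadHandler = new DownloadHandlerBuffer();
        request.SetRequestHeader("Content-Type", "application/json");
        yield return request.SendWebRequest();
        if (request.result == UnityWebRequest.Result.Success)
            Debug.Log(request.downloadHandler.text); // e.g. {"results": [{"text": "..."}]}
        else
            Debug.LogError(request.error);
    }
}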
The following steps create a dialogue tree that generates dialogue content at runtime according to the player's choices:
- The basic dialogue tree design is consistent with the process in 《Create a Dialogue Tree》.
- AIGC generates the content you want more reliably when given a prompt, for example the background setting of the dialogue and the designer's additional requirements. Simply add a `Prompt Module` to the Dialogue node and fill in the prompt in `prompt`.
- For Piece or Option nodes that the AI should recognize but that do not need to be generated, add a `Character Module` and indicate the name of the speaking character in `characterName`.
- Add an `AI Generate Module` to each Piece node whose content should be generated by AI and fill in the corresponding character name in `characterName`.
- Create an empty GameObject in the scene and mount the `AIEntry` component. Select the type of LLM you are using and configure the address and port of the server.
- Note: runtime generation produces dialogue content only; generating options at runtime is not supported.
It is not easy to control AIGC dialogue content at runtime, but you can use the AI Dialogue Baker to bake AI-generated dialogue content in advance while designing the dialogue tree, improving workflow efficiency without affecting your design framework.
- The basic dialogue tree design is consistent with the process in 《Create a Dialogue Tree》.
- Prompts are added the same way as in 《AI Generated Dialogue》.
- Add an `AI Bake Module` to each piece or option that needs to be baked, and remove the module from nodes that do not need baking. Select the type of LLM you are baking with.
- Select, in turn, the nodes that the AI Dialogue Baker needs to recognize (the recognition order follows the order of mouse selection), and finally select the node to be baked.
- If the selection succeeds, you can see a preview of the input content at the bottom of the editor.
- Click the `Bake Dialogue` button on the `AI Bake Module` and wait for the AI response. After the language model responds, a `Content Module` is automatically added to the node to store the baked dialogue content.
- You can continue generating dialogue until it meets your needs.
Unlike baking dialogue by talking to the AI directly, novel mode has the AI play the role of a copywriter and planner that writes the dialogue, so it can control options and pieces more precisely. Please refer to the example: 6.Bake Novel.unity.
Nodes:
NGD uses a node-based visual editor framework; most features are presented through nodes.
Dialogue construction is divided into the following parts in NGD:
| Name | Description |
| --- | --- |
| Dialogue | Defines a dialogue, such as the first piece of the dialogue and other attributes |
| Piece | A dialogue piece, usually storing the core dialogue content |
| Option | A dialogue option, usually used for interaction and bridging dialogues |
In addition, to make dialogue more interesting, such as adding events and executing actions, you can use the following behavior tree node types in NGD:
| Name | Description |
| --- | --- |
| Composite | Has one or more child nodes and controls which child nodes are updated |
| Action | A leaf node; performs actions such as following the player, attacking, fleeing, or other actions you define |
| Conditional | Has one child node and checks whether the child node can be updated; when it has no child node, a Conditional is a leaf node like an Action |
| Decorator | Has one child node and modifies its own return value according to the return value of the child |
Modules:
In addition to the nodes above, NGD uses a more flexible concept: the Module. You can use Modules to change the output form of the dialogue, such as Google translation, localization, or adding callbacks, or use them as markers to be executed.
The following are the built-in general modules:
| Name | Description |
| --- | --- |
| Content Module | Provides text content for an Option or Piece |
| TargetID Module | Adds a jump-target dialogue piece for an Option |
| PreUpdate Module | Adds pre-update behavior to a Container; it is updated when jumping to the Container |
| CallBack Module | Adds callback behavior to an Option, updated after the option is selected |
| ScriptableEvent Module | Adds ScriptableEvent events to an Option, updated after selection; ScriptableEvent can be used for cross-scene event subscription |
| UnityEvent Module | Adds UnityEvent events to an Option, updated after selection; UnityEvent can be used for event subscription within a traditional single scene |
| Condition Module | Adds a judgment behavior to an Option or Piece, updated when jumping to the Container; if the return value is Status.Failure, the Container is discarded. If it is the first Piece of the dialogue, the system tries to jump to the next Piece according to the order in which Pieces were added to the dialogue |
| NextPiece Module | Specifies the next dialogue piece after a Piece ends; if there are no options, it jumps to the specified piece after the Piece's content finishes playing |
| Google Translate Module | Uses Google Translate to translate the content of the current Option or Piece |
The following are the built-in AIGC modules:
| Name | Description |
| --- | --- |
| Prompt Module | Provides the prompt on which subsequent dialogue generation is based |
| Character Module | Annotates the speaker of a dialogue |
| AI Generate Module | Allows a Piece to generate dialogue with AIGC based on previous player choices |
| AI Bake Module (Editor Only) | Add this module to bake an Option or Piece in the Editor |
Experimental Function Introduction:
The following are experimental modules; you need to install the corresponding package or configure the corresponding environment before use.
Based on the UnityEngine.Localization package, to support localization of dialogue:
| Name | Description |
| --- | --- |
| Localized Content Module | Provides content for an Option or Piece after fetching the text from localization |
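Conceptually this builds on the standard Unity Localization string-table workflow: a localized line is resolved from a string table at runtime, roughly as in the sketch below ("DialogueTable" and "greeting" are placeholder table and entry names):
using UnityEngine;
using UnityEngine.Localization.Settings;

public class LocalizedLineExample : MonoBehaviour
{
    private void Start()
    {
        // Fetch a localized line from a string table, the same kind of data
        // the module would feed into a Piece or Option.
        string line = LocalizationSettings.StringDatabase.GetLocalizedString("DialogueTable", "greeting");
        Debug.Log(line);
    }
}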
For VITS local deployment, please refer to this repository: VITS Simple API.
If you want to use the VITS module, use it together with VITSAIResolver. For the use of Resolvers, please refer to 《Resolvers》 below.
| Name | Description |
| --- | --- |
| VITS Module | Uses the VITS speech synthesis model to generate speech for a Piece or Option in real time |
Add an Editor/EditorTranslateModule to the Dialogue node, set the source language (`sourceLanguageCode`) and target language (`targetLanguageCode`) of the translation, then right-click and select `Translate All Contents` to translate all Pieces and Options that have a `ContentModule`.
For nodes other than `ContentModule`, if the `TranslateEntryAttribute` is added to a field, you can right-click a single node to translate it:
using UnityEngine;

namespace Kurisu.NGDT.Behavior
{
    public class SetString : Action
    {
        // Fields marked with TranslateEntry can be translated
        // * Only works for SharedString and string
        [SerializeField, Multiline, TranslateEntry]
        private SharedString value;
    }
}
Before use, you need to install the dependencies of Modules/VITS and start the local VITS server (refer to Modules/VITS/README.md). Add an AIGC/VITSModule to each node that needs generated speech, then right-click and select `Bake Audio`.
If you are satisfied with the generated audio, click `Download` to save it locally and complete the baking; otherwise the audio file will not be retained after exiting the editor.
Once baking is complete, the VITS server no longer needs to be running at runtime.
- If the AudioClip field is empty, runtime generation mode is enabled by default; without a server connection the dialogue may not proceed. If you only need the baking function, always keep the AudioClip field non-empty.
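At runtime, the VITS path boils down to requesting synthesized audio from the local server and wrapping it in an AudioClip. A minimal sketch against a VITS Simple API server follows; the endpoint, port, and parameters are assumptions based on that project's standard setup, so check its documentation:
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class VitsFetchExample : MonoBehaviour
{
    [SerializeField]
    private AudioSource audioSource;

    private IEnumerator Start()
    {
        // Assumed VITS Simple API endpoint: the text to synthesize plus a speaker id.
        string url = "http://127.0.0.1:23456/voice/vits?text=Hello&id=0&format=wav";
        using var request = UnityWebRequestMultimedia.GetAudioClip(url, AudioType.WAV);
        yield return request.SendWebRequest();
        if (request.result == UnityWebRequest.Result.Success)
        {
            audioSource.clip = DownloadHandlerAudioClip.GetContent(request);
            audioSource.Play();
        }
        else
        {
            Debug.LogError(request.error);
        }
    }
}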
Resolvers:
A Resolver detects the Modules in a Container at runtime and executes a series of preset logic, such as injecting dependencies and executing behaviors. The differences between NGD's built-in Resolvers are as follows:
| Name | Description |
| --- | --- |
| BuiltIn Resolver | The most basic resolver, supporting all built-in general modules |
| AI Resolver | Adds the AIGC modules on top of the BuiltIn Resolver |
| VITS AI Resolver (Experimental) | On top of the AI Resolver, additionally detects VITS modules to generate speech in real time, with no need for bake mode |
There are two ways to enable a Resolver:
- In-scene global Resolver: mount the `AIEntry` script on any GameObject to enable the AIResolver in the scene.
- Dialogue-specific Resolver: add an `AIResolverModule` (or `VITSAIResolverModule`) to the Dialogue node to specify the resolver used by that dialogue. You can also click the Settings button in the upper right corner of the module and select which Resolvers to replace in `Advanced Settings`.
Create Dialogue by Code:
NGD is divided into two parts: the DialogueSystem (NGDS) and the DialogueTree (NGDT). The former defines the data structure of a dialogue, which a Resolver interprets after receiving it; the latter provides the visual editing solution and implements the former's interfaces. You can therefore also write dialogue in code, for example:
using System.Collections;
using UnityEngine;
using Kurisu.NGDS; // assumed namespace for the NGDS runtime types used below (DialogueGenerator, DialoguePiece, ...)
public class CodeDialogueBuilder : MonoBehaviour
{
private DialogueGenerator generator;
private IEnumerator Start()
{
yield return new WaitForEndOfFrame();
PlayDialogue();
}
private void PlayDialogue()
{
var dialogueSystem = IOCContainer.Resolve<IDialogueSystem>();
generator = new();
//First Piece
var piece = DialoguePiece.CreatePiece();
piece.Content = "This is the first dialogue piece";
piece.PieceID = "01";
piece.AddOption(new DialogueOption()
{
Content = "Jump to Next",
TargetID = "02"
});
generator.AddPiece(piece);
//Second Piece
piece = DialoguePiece.CreatePiece();
piece.Content = "This is the second dialogue piece";
piece.PieceID = "02";
var callBackOption = DialogueOption.CreateOption();
//Add CallBack Module
callBackOption.AddModule(new CallBackModule(() => Debug.Log("Hello World !")));
callBackOption.Content = "Log";
piece.AddOption(callBackOption);
generator.AddPiece(piece);
dialogueSystem.StartDialogue(generator);
}
}
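To try this out, attach CodeDialogueBuilder to any GameObject in a scene that already contains a dialogue system (one of the example scenes works); the one-frame wait in Start is there so the dialogue system has had a chance to register itself with the IOC container before it is resolved.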