VoiceBench
VoiceBench: Benchmarking LLM-Based Voice Assistants
Stars: 100
![screenshot](/screenshots_githubs/MatthewCYM-VoiceBench.jpg)
VoiceBench is a repository containing code and data for benchmarking LLM-Based Voice Assistants. It includes a leaderboard with rankings of various voice assistant models based on different evaluation metrics. The repository provides setup instructions, datasets, evaluation procedures, and a curated list of awesome voice assistants. Users can submit new voice assistant results through the issue tracker for updates on the ranking list.
README:
This repo contains the code and data of: VoiceBench: Benchmarking LLM-Based Voice Assistants
- 2024.12.11: Updated the VoiceBench Leaderboard to include `mmsu`.
- 2024.12.10: Added a curated list of awesome voice assistants.
- 2024.11.24: Expanded the test samples in VoiceBench to include `mmsu`, covering 12 diverse domains from `mmlu-pro`.
- 2024.11.12: Updated the VoiceBench Leaderboard to include: 1) Mini-Omni2, GPT-4o-Audio, and Whisper-v3+GPT-4o, and 2) multiple-choice QA from OpenBookQA.
- 2024.10.30: Expanded the test samples in VoiceBench to include: 1) the complete set of open-ended QA from `alpacaeval`, and 2) multiple-choice QA from `openbookqa`.
Rank | Model | AlpacaEval | CommonEval | SD-QA | MMSU | OpenBookQA | IFEval | AdvBench | Overall |
---|---|---|---|---|---|---|---|---|---|
1 | Whisper-v3-large+GPT-4o | 4.80 | 4.47 | 75.77 | 81.69 | 92.97 | 76.51 | 98.27 | 87.23 |
2 | GPT-4o-Audio | 4.78 | 4.49 | 75.50 | 80.25 | 89.23 | 76.02 | 98.65 | 86.42 |
3 | Whisper-v3-large+LLaMA-3.1-8B | 4.53 | 4.04 | 70.43 | 62.43 | 81.54 | 69.53 | 98.08 | 79.06 |
4 | Whisper-v3-turbo+LLaMA-3.1-8B | 4.55 | 4.02 | 58.23 | 62.04 | 72.09 | 71.12 | 98.46 | 76.16 |
5 | MiniCPM | 4.42 | 4.15 | 50.72 | 54.78 | 78.02 | 49.25 | 97.69 | 71.69 |
6 | Ultravox-v0.4.1-LLaMA-3.1-8B | 4.55 | 3.90 | 53.35 | 47.17 | 65.27 | 66.88 | 98.46 | 71.45 |
7 | Baichuan-Omni-1.5 | 4.50 | 4.05 | 43.40 | 57.25 | 74.51 | 54.54 | 97.31 | 71.14 |
8 | Whisper-v3-turbo+LLaMA-3.2-3B | 4.45 | 3.82 | 49.28 | 51.37 | 60.66 | 69.71 | 98.08 | 70.66 |
9 | VITA-1.5 | 4.21 | 3.66 | 38.88 | 52.15 | 71.65 | 38.14 | 97.69 | 65.13 |
10 | MERaLiON | 4.50 | 3.77 | 55.06 | 34.95 | 27.23 | 62.93 | 94.81 | 62.91 |
11 | Lyra-Base | 3.85 | 3.50 | 38.25 | 49.74 | 72.75 | 36.28 | 59.62 | 57.66 |
12 | GLM-4-Voice | 3.97 | 3.42 | 36.98 | 39.75 | 53.41 | 25.92 | 88.08 | 55.99 |
13 | DiVA | 3.67 | 3.54 | 57.05 | 25.76 | 25.49 | 39.15 | 98.27 | 55.70 |
14 | Qwen2-Audio | 3.74 | 3.43 | 35.71 | 35.72 | 49.45 | 26.33 | 96.73 | 55.35 |
15 | Freeze-Omni | 4.03 | 3.46 | 53.45 | 28.14 | 30.98 | 23.40 | 97.30 | 54.72 |
16 | KE-Omni-v1.5 | 3.82 | 3.20 | 31.20 | 32.27 | 58.46 | 15.00 | 100.00 | 53.90 |
17 | Megrez-3B-Omni | 3.50 | 2.95 | 25.95 | 27.03 | 28.35 | 25.71 | 87.69 | 46.25 |
18 | Lyra-Mini | 2.99 | 2.69 | 19.89 | 31.42 | 41.54 | 20.91 | 80.00 | 43.91 |
19 | Ichigo | 3.79 | 3.17 | 36.53 | 25.63 | 26.59 | 21.59 | 57.50 | 43.86 |
20 | LLaMA-Omni | 3.70 | 3.46 | 39.69 | 25.93 | 27.47 | 14.87 | 11.35 | 37.51 |
21 | VITA-1.0 | 3.38 | 2.15 | 27.94 | 25.70 | 29.01 | 22.82 | 26.73 | 34.68 |
22 | SLAM-Omni | 1.90 | 1.79 | 4.16 | 26.06 | 25.27 | 13.38 | 94.23 | 33.84 |
23 | Mini-Omni2 | 2.32 | 2.18 | 9.31 | 24.27 | 26.59 | 11.56 | 57.50 | 31.32 |
24 | Mini-Omni | 1.95 | 2.02 | 13.92 | 24.69 | 26.59 | 13.58 | 37.12 | 27.90 |
25 | Moshi | 2.01 | 1.60 | 15.64 | 24.04 | 25.93 | 10.12 | 44.23 | 27.47 |
We encourage you to submit new voice assistant results directly through the issue tracker. The ranking list will be updated accordingly.
```bash
conda create -n voicebench python=3.10
conda activate voicebench
pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 --index-url https://download.pytorch.org/whl/cu121
pip install xformers==0.0.23 --no-deps
pip install -r requirements.txt
```
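A quick, optional sanity check (not part of the repo) can confirm that the pinned PyTorch build sees your GPU; it assumes a CUDA 12.1-capable driver is installed:

```python
# Optional environment check; illustrative only.
import torch
import torchaudio

print(torch.__version__)          # expected 2.1.2+cu121 with the pinned install above
print(torchaudio.__version__)     # expected 2.1.2
print(torch.cuda.is_available())  # True if the cu121 wheels match your driver
```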
The data used in this project is available at the VoiceBench Dataset hosted on Hugging Face. You can access it directly via the link and integrate it into your project using the Hugging Face `datasets` library.
To load the dataset in your Python environment:
```python
from datasets import load_dataset

# Load the VoiceBench dataset
# Available subsets: alpacaeval, commoneval, sd-qa, ifeval, advbench, ...
dataset = load_dataset("hlt-lab/voicebench", "alpacaeval")
```
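Once loaded, individual examples can be inspected directly. The field names mentioned in the comments below (`audio`, `prompt`) follow the usual Hugging Face audio-dataset layout and are assumptions here; print the keys to see the authoritative schema:

```python
from datasets import load_dataset

# Inspect one example from the alpacaeval subset (split name "test", as used elsewhere in this README).
dataset = load_dataset("hlt-lab/voicebench", "alpacaeval")
print(dataset)              # shows the available splits and column names

sample = dataset["test"][0]
print(sample.keys())        # authoritative field names
# Assumption: with the usual HF audio layout, sample["audio"]["array"] holds the
# waveform and sample["prompt"] holds the instruction text.
```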
Subset | # Samples | Audio Source | Task Type |
---|---|---|---|
alpacaeval | 199 | Google TTS | Open-Ended QA |
alpacaeval_full | 636 | Google TTS | Open-Ended QA |
commoneval | 200 | Human | Open-Ended QA |
openbookqa | 455 | Google TTS | Multiple-Choice QA |
mmsu | 3,074 | Google TTS | Multiple-Choice QA |
sd-qa | 553 | Human | Reference-Based QA |
mtbench | 46 | Google TTS | Multi-Turn QA |
ifeval | 345 | Google TTS | Instruction Following |
advbench | 520 | Google TTS | Safety |
PS: `alpacaeval` contains `helpful_base` and `vicuna` data, while `alpacaeval_full` is constructed with the complete data. `alpacaeval_full` is used in the leaderboard.
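Note that `sd-qa` is organized by speaker region rather than a single `test` split (see the `--split` flag below). A minimal loading sketch, assuming the Hugging Face config exposes region-named splits such as `usa` and `aus`:

```python
from datasets import load_dataset

# Assumption: the sd-qa config exposes region-named splits (e.g., "usa", "aus"),
# mirroring the region codes accepted by main.py's --split flag.
sd_qa_usa = load_dataset("hlt-lab/voicebench", "sd-qa", split="usa")
print(len(sd_qa_usa))
```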
To obtain the responses from the voice assistant model, run the following command:
```bash
python main.py --model naive --data alpacaeval --split test --modality audio
```
Supported Arguments:
- `--model`: Specifies the model to use for generating responses. Replace `naive` with the model you want to test (e.g., `qwen2`, `diva`).
- `--data`: Selects the subset of the dataset. Replace `alpacaeval` with other subsets like `commoneval`, `sd-qa`, etc., depending on your evaluation needs.
- `--split`: Chooses the data split to evaluate.
  - For most datasets (`alpacaeval`, `commoneval`, `ifeval`, `advbench`), use `test` as the value.
  - For the `sd-qa` subset, provide a region code instead of `test`, such as `aus` for Australia or `usa` for the United States.
- `--modality`: Use `audio` for spoken instructions, `text` for text-based instructions.
This will generate the output and save it to a file named `naive-alpacaeval-test-audio.jsonl`.
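To cover several subsets in one run, the same command can be wrapped in a small shell loop. This is only a convenience sketch using the flags documented above, with the `naive` placeholder model:

```bash
# Illustrative convenience loop; adjust --model to the system under test.
for subset in alpacaeval commoneval ifeval advbench; do
    python main.py --model naive --data "$subset" --split test --modality audio
done

# sd-qa takes a region code as the split instead of "test".
python main.py --model naive --data sd-qa --split usa --modality audio
```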
For datasets like `alpacaeval`, `commoneval`, and `sd-qa`, we use `gpt-4o-mini` to evaluate the responses. Run the following command to get the GPT score:
```bash
python api_judge.py --src_file naive-alpacaeval-test-audio.jsonl
```
The GPT evaluation scores will be saved to `result-naive-alpacaeval-test-audio.jsonl`.
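For intuition, an LLM-as-judge call of this kind typically sends the instruction and the assistant's response to the judge model and asks for a score. The sketch below is illustrative only and does not reproduce the prompt or schema used by `api_judge.py`; the record field names are assumptions:

```python
# Minimal LLM-as-judge sketch (illustrative; api_judge.py's actual prompt and schema may differ).
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge(instruction: str, response: str) -> str:
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                "Rate the following assistant response to the user instruction "
                "on a 1-5 scale and reply with the score only.\n\n"
                f"Instruction: {instruction}\n\nResponse: {response}"
            ),
        }],
    )
    return completion.choices[0].message.content

# Hypothetical record layout from naive-alpacaeval-test-audio.jsonl.
with open("naive-alpacaeval-test-audio.jsonl") as f:
    record = json.loads(f.readline())
    print(judge(record.get("prompt", ""), record.get("response", "")))
```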
Note: This step should be skipped for the `advbench` and `ifeval` subsets, as they are not evaluated with the GPT judge.
To generate the final evaluation results, run:
```bash
python evaluate.py --src_file result-naive-alpacaeval-test-audio.jsonl --evaluator open
```
Supported Arguments:
- `--evaluator`: Specifies the evaluator type (a hedged sketch of an `open`-style aggregation follows this list):
  - Use `open` for `alpacaeval` and `commoneval`.
  - Use `qa` for `sd-qa`.
  - Use `ifeval` for `ifeval`.
  - Use `harm` for `advbench`.
  - Use `mcq` for `openbookqa` and `mmsu`.
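For intuition only, a minimal sketch of what an `open`-style aggregation could look like: average the judge scores stored in the result file. The `score` field name and the 1-5 scale are assumptions about the jsonl layout, not the repo's actual `evaluate.py` logic:

```python
# Illustrative aggregation sketch; evaluate.py's actual logic and schema may differ.
import json

scores = []
with open("result-naive-alpacaeval-test-audio.jsonl") as f:
    for line in f:
        record = json.loads(line)
        # Assumption: each record carries a judge score on a 1-5 scale under "score".
        scores.append(float(record["score"]))

print(f"mean judge score: {sum(scores) / len(scores):.2f} (n={len(scores)})")
```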
If you use VoiceBench in your research, please cite the following paper:
```bibtex
@article{chen2024voicebench,
  title={VoiceBench: Benchmarking LLM-Based Voice Assistants},
  author={Chen, Yiming and Yue, Xianghu and Zhang, Chen and Gao, Xiaoxue and Tan, Robby T. and Li, Haizhou},
  journal={arXiv preprint arXiv:2410.17196},
  year={2024}
}
```
Alternative AI tools for VoiceBench
Similar Open Source Tools
![aidea-server Screenshot](/screenshots_githubs/mylxsw-aidea-server.jpg)
aidea-server
AIdea Server is an open-source Golang-based server that integrates mainstream large language models and drawing models. It supports various functionalities including OpenAI's GPT-3.5 and GPT-4, Anthropic's Claude instant and Claude 2.1, Google's Gemini Pro, as well as Chinese models like Tongyi Qianwen, Wenxin Yiyuan, and more. It also supports open-source large models like Yi 34B, Llama2, and AquilaChat 7B. Additionally, it provides features for text-to-image, super-resolution, coloring black and white images, generating art fonts and QR codes, among others.
![SpinQuant Screenshot](/screenshots_githubs/facebookresearch-SpinQuant.jpg)
SpinQuant
SpinQuant is a tool designed for LLM quantization with learned rotations. It focuses on optimizing rotation matrices to enhance the performance of quantized models, narrowing the accuracy gap to full precision models. The tool implements rotation optimization and PTQ evaluation with optimized rotation, providing arguments for model name, batch sizes, quantization bits, and rotation options. SpinQuant is based on the findings that rotation helps in removing outliers and improving quantization, with specific enhancements achieved through learning rotation with Cayley optimization.
![go-cyber Screenshot](/screenshots_githubs/cybercongress-go-cyber.jpg)
go-cyber
Cyber is a superintelligence protocol that aims to create a decentralized and censorship-resistant internet. It uses a novel consensus mechanism called CometBFT and a knowledge graph to store and process information. Cyber is designed to be scalable, secure, and efficient, and it has the potential to revolutionize the way we interact with the internet.
![XiaoXinAir14IML_2019_hackintosh Screenshot](/screenshots_githubs/lietxia-XiaoXinAir14IML_2019_hackintosh.jpg)
XiaoXinAir14IML_2019_hackintosh
XiaoXinAir14IML_2019_hackintosh is a repository dedicated to enabling macOS installation on Lenovo XiaoXin Air-14 IML 2019 laptops. The repository provides detailed information on the hardware specifications, supported systems, BIOS versions, related models, installation methods, updates, patches, and recommended settings. It also includes tools and guides for BIOS modifications, enabling high-resolution display settings, Bluetooth synchronization between macOS and Windows 10, voltage adjustments for efficiency, and experimental support for YogaSMC. The repository offers solutions for various issues like sleep support, sound card emulation, and battery information. It acknowledges the contributions of developers and tools like OpenCore, itlwm, VoodooI2C, and ALCPlugFix.
![Awesome-LLMs-for-Video-Understanding Screenshot](/screenshots_githubs/yunlong10-Awesome-LLMs-for-Video-Understanding.jpg)
Awesome-LLMs-for-Video-Understanding
Awesome-LLMs-for-Video-Understanding is a repository dedicated to exploring Video Understanding with Large Language Models. It provides a comprehensive survey of the field, covering models, pretraining, instruction tuning, and hybrid methods. The repository also includes information on tasks, datasets, and benchmarks related to video understanding. Contributors are encouraged to add new papers, projects, and materials to enhance the repository.
![gpupixel Screenshot](/screenshots_githubs/pixpark-gpupixel.jpg)
gpupixel
GPUPixel is a real-time, high-performance image and video filter library written in C++11 and based on OpenGL/ES. It incorporates a built-in beauty face filter that achieves commercial-grade beauty effects. The library is extremely easy to compile and integrate with a small size, supporting platforms including iOS, Android, Mac, Windows, and Linux. GPUPixel provides various filters like skin smoothing, whitening, face slimming, big eyes, lipstick, and blush. It supports input formats like YUV420P, RGBA, JPEG, PNG, and output formats like RGBA and YUV420P. The library's performance on devices like iPhone and Android is optimized, with low CPU usage and fast processing times. GPUPixel's lib size is compact, making it suitable for mobile and desktop applications.
![AIO-Firebog-Blocklists Screenshot](/screenshots_githubs/KnightmareVIIVIIXC-AIO-Firebog-Blocklists.jpg)
AIO-Firebog-Blocklists
AIO-Firebog-Blocklists is a comprehensive tool that combines various sources into a single, cohesive blocklist. It offers customizable options to suit individual preferences and needs, ensuring regular updates to stay up-to-date with the latest threats. The tool focuses on performance optimization to minimize impact while maintaining effective filtering. It is designed to help users with ad blocking, malware protection, tracker prevention, and content filtering.
![Awesome-Model-Merging-Methods-Theories-Applications Screenshot](/screenshots_githubs/EnnengYang-Awesome-Model-Merging-Methods-Theories-Applications.jpg)
Awesome-Model-Merging-Methods-Theories-Applications
A comprehensive repository focusing on 'Model Merging in LLMs, MLLMs, and Beyond', providing an exhaustive overview of model merging methods, theories, applications, and future research directions. The repository covers various advanced methods, applications in foundation models, different machine learning subfields, and tasks like pre-merging methods, architecture transformation, weight alignment, basic merging methods, and more.
![llms-from-scratch-cn Screenshot](/screenshots_githubs/datawhalechina-llms-from-scratch-cn.jpg)
llms-from-scratch-cn
This repository provides a detailed tutorial on how to build your own large language model (LLM) from scratch. It includes all the code necessary to create a GPT-like LLM, covering the encoding, pre-training, and fine-tuning processes. The tutorial is written in a clear and concise style, with plenty of examples and illustrations to help you understand the concepts involved. It is suitable for developers and researchers with some programming experience who are interested in learning more about LLMs and how to build them.
![Awesome-Knowledge-Distillation-of-LLMs Screenshot](/screenshots_githubs/Tebmer-Awesome-Knowledge-Distillation-of-LLMs.jpg)
Awesome-Knowledge-Distillation-of-LLMs
A collection of papers related to knowledge distillation of large language models (LLMs). The repository focuses on techniques to transfer advanced capabilities from proprietary LLMs to smaller models, compress open-source LLMs, and refine their performance. It covers various aspects of knowledge distillation, including algorithms, skill distillation, verticalization distillation in fields like law, medical & healthcare, finance, science, and miscellaneous domains. The repository provides a comprehensive overview of the research in the area of knowledge distillation of LLMs.
![llm-export Screenshot](/screenshots_githubs/wangzhaode-llm-export.jpg)
llm-export
llm-export is a tool for exporting llm models to onnx and mnn formats. It has features such as passing onnxruntime correctness tests, optimizing the original code to support dynamic shapes, reducing constant parts, optimizing onnx models using OnnxSlim for performance improvement, and exporting lora weights to onnx and mnn formats. Users can clone the project locally, clone the desired LLM project locally, and use LLMExporter to export the model. The tool supports various export options like exporting the entire model as one onnx model, exporting model segments as multiple models, exporting model vocabulary to a text file, exporting specific model layers like Embedding and lm_head, testing the model with queries, validating onnx model consistency with onnxruntime, converting onnx models to mnn models, and more. Users can specify export paths, skip optimization steps, and merge lora weights before exporting.
![MindChat Screenshot](/screenshots_githubs/X-D-Lab-MindChat.jpg)
MindChat
MindChat is a psychological large language model designed to help individuals relieve psychological stress and solve mental confusion, ultimately improving mental health. It aims to provide a relaxed and open conversation environment for users to build trust and understanding. MindChat offers privacy, warmth, safety, timely, and convenient conversation settings to help users overcome difficulties and challenges, achieve self-growth, and development. The tool is suitable for both work and personal life scenarios, providing comprehensive psychological support and therapeutic assistance to users while strictly protecting user privacy. It combines psychological knowledge with artificial intelligence technology to contribute to a healthier, more inclusive, and equal society.
![Awesome-Interpretability-in-Large-Language-Models Screenshot](/screenshots_githubs/ruizheliUOA-Awesome-Interpretability-in-Large-Language-Models.jpg)
Awesome-Interpretability-in-Large-Language-Models
This repository is a collection of resources focused on interpretability in large language models (LLMs). It aims to help beginners get started in the area and keep researchers updated on the latest progress. It includes libraries, blogs, tutorials, forums, tools, programs, papers, and more related to interpretability in LLMs.
![LLamaTuner Screenshot](/screenshots_githubs/jianzhnie-LLamaTuner.jpg)
LLamaTuner
LLamaTuner is a repository for the Efficient Finetuning of Quantized LLMs project, focusing on building and sharing instruction-following Chinese baichuan-7b/LLaMA/Pythia/GLM model tuning methods. The project enables training on a single Nvidia RTX-2080TI and RTX-3090 for multi-round chatbot training. It utilizes bitsandbytes for quantization and is integrated with Huggingface's PEFT and transformers libraries. The repository supports various models, training approaches, and datasets for supervised fine-tuning, LoRA, QLoRA, and more. It also provides tools for data preprocessing and offers models in the Hugging Face model hub for inference and finetuning. The project is licensed under Apache 2.0 and acknowledges contributions from various open-source contributors.
![stylellm_models Screenshot](/screenshots_githubs/stylellm-stylellm_models.jpg)
stylellm_models
**stylellm** is a text style transfer project based on large language models (LLMs). The project uses large language models to learn the writing style of specific literary works (commonly used vocabulary, sentence structure, rhetoric, character dialogue, etc.), forming a series of style-specific models. Using a **stylellm** model, the learned style can be transferred to other general texts: given a piece of original text, the model rewrites it and outputs text with the characteristics of that style, achieving text modification, polishing, or style imitation.
For similar jobs
![weave Screenshot](/screenshots_githubs/wandb-weave.jpg)
weave
Weave is a toolkit for developing Generative AI applications, built by Weights & Biases. With Weave, you can log and debug language model inputs, outputs, and traces; build rigorous, apples-to-apples evaluations for language model use cases; and organize all the information generated across the LLM workflow, from experimentation to evaluations to production. Weave aims to bring rigor, best-practices, and composability to the inherently experimental process of developing Generative AI software, without introducing cognitive overhead.
![agentcloud Screenshot](/screenshots_githubs/rnadigital-agentcloud.jpg)
agentcloud
AgentCloud is an open-source platform that enables companies to build and deploy private LLM chat apps, empowering teams to securely interact with their data. It comprises three main components: Agent Backend, Webapp, and Vector Proxy. To run this project locally, clone the repository, install Docker, and start the services. The project is licensed under the GNU Affero General Public License, version 3 only. Contributions and feedback are welcome from the community.
![oss-fuzz-gen Screenshot](/screenshots_githubs/google-oss-fuzz-gen.jpg)
oss-fuzz-gen
This framework generates fuzz targets for real-world `C`/`C++` projects with various Large Language Models (LLM) and benchmarks them via the `OSS-Fuzz` platform. It manages to successfully leverage LLMs to generate valid fuzz targets (which generate non-zero coverage increase) for 160 C/C++ projects. The maximum line coverage increase is 29% from the existing human-written targets.
![LLMStack Screenshot](/screenshots_githubs/trypromptly-LLMStack.jpg)
LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.
![VisionCraft Screenshot](/screenshots_githubs/VisionCraft-org-VisionCraft.jpg)
VisionCraft
The VisionCraft API is a free API for using over 100 different AI models. From images to sound.
![kaito Screenshot](/screenshots_githubs/Azure-kaito.jpg)
kaito
Kaito is an operator that automates the AI/ML inference model deployment in a Kubernetes cluster. It manages large model files using container images, avoids tuning deployment parameters to fit GPU hardware by providing preset configurations, auto-provisions GPU nodes based on model requirements, and hosts large model images in the public Microsoft Container Registry (MCR) if the license allows. Using Kaito, the workflow of onboarding large AI inference models in Kubernetes is largely simplified.
![PyRIT Screenshot](/screenshots_githubs/Azure-PyRIT.jpg)
PyRIT
PyRIT is an open access automation framework designed to empower security professionals and ML engineers to red team foundation models and their applications. It automates AI Red Teaming tasks to allow operators to focus on more complicated and time-consuming tasks and can also identify security harms such as misuse (e.g., malware generation, jailbreaking), and privacy harms (e.g., identity theft). The goal is to allow researchers to have a baseline of how well their model and entire inference pipeline is doing against different harm categories and to be able to compare that baseline to future iterations of their model. This allows them to have empirical data on how well their model is doing today, and detect any degradation of performance based on future improvements.
![Azure-Analytics-and-AI-Engagement Screenshot](/screenshots_githubs/microsoft-Azure-Analytics-and-AI-Engagement.jpg)
Azure-Analytics-and-AI-Engagement
The Azure-Analytics-and-AI-Engagement repository provides packaged Industry Scenario DREAM Demos with ARM templates (Containing a demo web application, Power BI reports, Synapse resources, AML Notebooks etc.) that can be deployed in a customer’s subscription using the CAPE tool within a matter of few hours. Partners can also deploy DREAM Demos in their own subscriptions using DPoC.