
Awesome-Audio-LLM
Audio Large Language Models
Stars: 424

Awesome-Audio-LLM is a repository dedicated to various models and methods related to audio and language processing. It includes a wide range of research papers and models developed by different institutions and authors. The repository covers topics such as bridging audio and language, speech emotion recognition, voice assistants, and more. It serves as a comprehensive resource for those interested in the intersection of audio and language processing.
README:
We thank the following contributors for their valuable work! zwenyu, Yuan-ManX, chaoweihuang, Liu-Tianchi, Sakshi113, hbwu-ntu, potsawee, czwxian, marianasignal, and You!
- OSUM
- Step-Audio
- Audio-CoT
- UltraEval-Audio
- LUCY
- MinMo
- ADU-Bench
- TalkArena
- Typhoon2-Audio
- MERaLiON-AudioLLM
- Taiwanese AudioLLM
- WavChat-Survey
- Dynamic-SUPERB Phase-2
- VoiceBench
- MMAU
- SPIRIT LM
- SpeechLLM-Survey
- SpeechEmotionLlama
- SpeechLM-Survey
- DiVA
- AudioBERT
- Ultravox
- LLaMA-Omni
- SALMon
- DeSTA2
- ASRCompare
- MoWE-Audio
- Moshi
- EMOVA
- MuChoMusic
- Mini-Omni
- MooER
- Typhoon-Audio
- Qwen2-Audio
- LLaST
- Decoder-only LLMs for STT
- AudioEntailment
- GAMA
- FunAudioLLM
- CompA
- Speech ReaLLM
- Audio Hallucination
- AudioBench
- DeSTA
- CodecFake
- SD-Eval
- AIR-Bench
- Audio Flamingo
- VoiceJailbreak
- LibriSQA
- SALMONN
- SpokenWOZ
- WavLLM
- SLAM-LLM
- AudioLM-Survey
- Pengi
- Qwen-Audio
- CoDi-2
- UniAudio
- Dynamic-SUPERB
- LLaSM
- Segment-level Q-Former
- Prompting LLMs with Speech Recognition
- Macaw-LLM
- SpeechGPT
- AudioGPT
-
【2025-02】-【OSUM】-【ASLP@NPU】-【Type: Model】
- OSUM: Advancing Open Speech Understanding Models with Limited Resources in Academia
- Author(s): Xuelong Geng, Kun Wei, Qijie Shao, Shuiyun Liu, Zhennan Lin, Zhixian Zhao, Guojian Li, Wenjie Tian, Peikun Chen, Yangze Li, Pengcheng Guo, Mingchen Shao, Shuiyuan Wang, Yuang Cao, Chengyou Wang, Tianyi Xu, Yuhang Dai, Xinfa Zhu, Yue Li, Li Zhang, Lei Xie
- Paper / Hugging Face Model
-
【2025-02】-【Step-Audio】-【Step-Audio Team, StepFun】-【Type: Model】
- Step-Audio: Unified Understanding and Generation in Intelligent Speech Interaction
- Author(s): Ailin Huang, Boyong Wu, Bruce Wang, Chao Yan, Chen Hu, Chengli Feng, Fei Tian, Feiyu Shen, Jingbei Li, Mingrui Chen, Peng Liu, Ruihang Miao, Wang You, Xi Chen, Xuerui Yang, Yechang Huang, Yuxiang Zhang, Zheng Gong, Zixin Zhang, Hongyu Zhou, Jianjian Sun, Brian Li, Chengting Feng, Changyi Wan, Hanpeng Hu, Jianchang Wu, Jiangjie Zhen, Ranchen Ming, Song Yuan, Xuelin Zhang, Yu Zhou, Bingxin Li, Buyun Ma, Hongyuan Wang, Kang An, Wei Ji, Wen Li, Xuan Wen, Xiangwen Kong, Yuankai Ma, Yuanwei Liang, Yun Mou, Bahtiyar Ahmidi, Bin Wang, Bo Li, Changxin Miao, Chen Xu, Chenrun Wang, Dapeng Shi, Deshan Sun, Dingyuan Hu, Dula Sai, Enle Liu, Guanzhe Huang, Gulin Yan, Heng Wang, Haonan Jia, Haoyang Zhang, Jiahao Gong, Junjing Guo, Jiashuai Liu, Jiahong Liu, Jie Feng, Jie Wu, Jiaoren Wu, Jie Yang, Jinguo Wang, Jingyang Zhang, Junzhe Lin, Kaixiang Li, Lei Xia, Li Zhou, Liang Zhao, Longlong Gu, Mei Chen, Menglin Wu, Ming Li, Mingxiao Li, Mingliang Li, Mingyao Liang, Na Wang, Nie Hao, Qiling Wu, Qinyuan Tan, Ran Sun, Shuai Shuai, Shaoliang Pang, Shiliang Yang, Shuli Gao, Shanshan Yuan, Siqi Liu, Shihong Deng, Shilei Jiang, Sitong Liu, Tiancheng Cao, Tianyu Wang, Wenjin Deng, Wuxun Xie, Weipeng Ming, Wenqing He, Wen Sun, Xin Han, Xin Huang, Xiaomin Deng, Xiaojia Liu, Xin Wu, Xu Zhao, Yanan Wei, Yanbo Yu, Yang Cao, Yangguang Li, Yangzhen Ma, Yanming Xu, Yaoyu Wang, Yaqiang Shi, Yilei Wang, Yizhuang Zhou, Yinmin Zhong, Yang Zhang, Yaoben Wei, Yu Luo, Yuanwei Lu, Yuhe Yin, Yuchu Luo, Yuanhao Ding, Yuting Yan, Yaqi Dai, Yuxiang Yang, Zhe Xie, Zheng Ge, Zheng Sun, Zhewei Huang, Zhichao Chang, Zhisheng Guan, Zidong Yang, Zili Zhang, Binxing Jiao, Daxin Jiang, Heung-Yeung Shum, Jiansheng Chen, Jing Li, Shuchang Zhou, Xiangyu Zhang, Xinhao Zhang, Yibo Zhu
- Paper / Hugging Face Model
-
【2025-01】-【Audio-CoT】-【Nanyang Technological University, Singapore】-【Type: Model】
- Audio-CoT: Exploring Chain-of-Thought Reasoning in Large Audio Language Model
- Author(s): Ziyang Ma, Zhuo Chen, Yuping Wang, Eng Siong Chng, Xie Chen
- Paper
-
【2025-01】-【LUCY】-【Tencent】-【Type: Model】
- LUCY: Linguistic Understanding and Control Yielding Early Stage of Her
- Author(s): Heting Gao, Hang Shao, Xiong Wang, Chaofan Qiu, Yunhang Shen, Siqi Cai, Yuchen Shi, Zihan Xu, Zuwei Long, Yike Zhang, Shaoqi Dong, Chaoyou Fu, Ke Li, Long Ma, Xing Sun
- Paper
-
【2024-12】-【Typhoon2-Audio】-【SCB 10X】-【Type: Multimodal Language Model】
- Typhoon2-Audio: A Thai Multimodal Language Model for Speech and Text Processing
- Author(s): Kunat Pipatanakul, Potsawee Manakul, Natapong Nitarach, Warit Sirichotedumrong, Surapon Nonesung, Teetouch Jaknamon, Parinthapat Pengpun, Pittawat Taveekitworachai, Adisai Na-Thalang, Sittipong Sripaisarnmongkol, Krisanapong Jirayoot, Kasima Tharnpipitchai
- Paper / Hugging Face Model / Demo
-
【2024-12】-【MERaLiON-AudioLLM】-【I2R, A*STAR, Singapore】-【Type: Model】
- MERaLiON-AudioLLM: Bridging Audio and Language with Large Language Models
- Author(s): Yingxu He, Zhuohan Liu, Shuo Sun, Bin Wang, Wenyu Zhang, Xunlong Zou, Nancy F. Chen, Ai Ti Aw
- Paper / Hugging Face Model / Demo
-
【2024-11】-【Taiwanese AudioLLM】-【National Taiwan University】-【Type: Model】
- Building a Taiwanese Mandarin Spoken Language Model: A First Attempt
- Author(s): Chih-Kai Yang, Yu-Kuan Fu, Chen-An Li, Yi-Cheng Lin, Yu-Xiang Lin, Wei-Chih Chen, Ho Lam Chung, Chun-Yi Kuan, Wei-Ping Huang, Ke-Han Lu, Tzu-Quan Lin, Hsiu-Hsuan Wang, En-Pei Hu, Chan-Jan Hsu, Liang-Hsuan Tseng, I-Hsiang Chiu, Ulin Sanga, Xuanjun Chen, Po-chun Hsu, Shu-wen Yang, Hung-yi Lee
- Paper
-
【2024-10】-【SPIRIT LM】-【Meta】-【Type: Model】
- SPIRIT LM: Interleaved Spoken and Written Language Model
- Author(s): Tu Anh Nguyen, Benjamin Muller, Bokai Yu, Marta R. Costa-jussa, Maha Elbayad, Sravya Popuri, Christophe Ropers, Paul-Ambroise Duquenne, Robin Algayres, Ruslan Mavlyutov, Itai Gat, Mary Williamson, Gabriel Synnaeve, Juan Pino, Benoit Sagot, Emmanuel Dupoux
- Paper / Other Link
-
【2024-10】-【SpeechEmotionLlama】-【MIT, Meta】-【Type: Model】
- Frozen Large Language Models Can Perceive Paralinguistic Aspects of Speech
- Author(s): Wonjune Kang, Junteng Jia, Chunyang Wu, Wei Zhou, Egor Lakomkin, Yashesh Gaur, Leda Sari, Suyoun Kim, Ke Li, Jay Mahadeokar, Ozlem Kalinli
- Paper
-
【2024-10】-【DiVA】-【Georgia Tech, Stanford】-【Type: Model】
-
【2024-09】-【AudioBERT】-【POSTECH, Inha University】-【Type: Model】
- AudioBERT: Audio Knowledge Augmented Language Model
- Author(s): Hyunjong Ok, Suho Yoo, Jaeho Lee
- Paper
-
【2024-09】-【Ultravox】-【Fixie.ai】-【Type: Model】
-
【2024-09】-【LLaMA-Omni】-【Institute of Computing Technology, Chinese Academy of Sciences (ICT/CAS)】-【Type: Model】
- LLaMA-Omni: Seamless Speech Interaction with Large Language Models
- Author(s): Qingkai Fang, Shoutao Guo, Yan Zhou, Zhengrui Ma, Shaolei Zhang, Yang Feng
- Paper
-
【2024-09】-【DeSTA2】-【National Taiwan University, NVIDIA】-【Type: Model】
- Developing Instruction-Following Speech Language Model Without Speech Instruction-Tuning Data
- Author(s): Ke-Han Lu, Zhehuai Chen, Szu-Wei Fu, Chao-Han Huck Yang, Jagadeesh Balam, Boris Ginsburg, Yu-Chiang Frank Wang, Hung-yi Lee
- Paper
-
【2024-09】-【ASRCompare】-【Tsinghua University, Tencent AI Lab】-【Type: Model】
- Comparing Discrete and Continuous Space LLMs for Speech Recognition
- Author(s): Yaoxun Xu, Shi-Xiong Zhang, Jianwei Yu, Zhiyong Wu, Dong Yu
- Paper
-
【2024-09】-【MoWE-Audio】-【A*STAR】-【Type: Model】
- MoWE-Audio: Multitask AudioLLMs with Mixture of Weak Encoders
- Author(s): Wenyu Zhang, Shuo Sun, Bin Wang, Xunlong Zou, Zhuohan Liu, Yingxu He, Geyu Lin, Nancy F. Chen, Ai Ti Aw
- Paper
-
【2024-09】-【Moshi】-【Kyutai】-【Type: Model】
- Moshi: a speech-text foundation model for real-time dialogue
- Author(s): Alexandre Défossez, Laurent Mazaré, Manu Orsini, Amélie Royer, Patrick Pérez, Hervé Jégou, Edouard Grave, Neil Zeghidour
- Paper
-
【2024-08】-【Mini-Omni】-【Tsinghua University】-【Type: Model】
- Mini-Omni: Language Models Can Hear, Talk While Thinking in Streaming
- Author(s): Zhifei Xie, Changqiao Wu
- Paper
-
【2024-08】-【MooER】-【Moore Threads】-【Type: Model】
- MooER: LLM-based Speech Recognition and Translation Models from Moore Threads
- Author(s): Zhenlin Liang, Junhao Xu, Yi Liu, Yichao Hu, Jian Li, Yajun Zheng, Meng Cai, Hua Wang
- Paper
-
【2024-08】-【Typhoon-Audio】-【SCB 10X】-【Type: Multimodal Language Model】
- Typhoon-Audio: Enhancing Low-Resource Language and Instruction Following Capabilities of Audio Language Models
- Author(s): Potsawee Manakul, Guangzhi Sun, Warit Sirichotedumrong, Kasima Tharnpipitchai, Kunat Pipatanakul
- Paper / Hugging Face Model
-
【2024-07】-【Qwen2-Audio】-【Alibaba Group】-【Type: Model】
- Qwen2-Audio Technical Report
- Author(s): Yunfei Chu, Jin Xu, Qian Yang, Haojie Wei, Xipin Wei, Zhifang Guo, Yichong Leng, Yuanjun Lv, Jinzheng He, Junyang Lin, Chang Zhou, Jingren Zhou
- Paper
-
【2024-07】-【LLaST】-【The Chinese University of Hong Kong, Shenzhen; Shanghai AI Laboratory; Nara Institute of Science and Technology, Japan】-【Type: Model】
- LLaST: Improved End-to-end Speech Translation System Leveraged by Large Language Models
- Author(s): Xi Chen, Songyang Zhang, Qibing Bai, Kai Chen, Satoshi Nakamura
- Paper
-
【2024-07】-【Decoder-only LLMs for STT】-【NTU-Taiwan, Meta】-【Type: Research】
- Investigating Decoder-only Large Language Models for Speech-to-text Translation
- Author(s): Not listed
- Paper
-
【2024-07】-【GAMA】-【University of Maryland, College Park】-【Type: Model】
-
【2024-07】-【FunAudioLLM】-【Alibaba】-【Type: Model】
-
【2024-07】-【CompA】-【University of Maryland, College Park; Adobe, USA; NVIDIA, Bangalore, India】-【Type: Model】
-
【2024-06】-【Speech ReaLLM】-【Meta】-【Type: Model】
- Speech ReaLLM – Real-time Streaming Speech Recognition with Multimodal LLMs by Teaching the Flow of Time
- Author(s): Not listed
- Paper
-
【2024-06】-【DeSTA】-【NTU-Taiwan, Nvidia】-【Type: Model】
- DeSTA: Enhancing Speech Language Models through Descriptive Speech-Text Alignment
- Author(s): Not listed
- Paper
-
【2024-05】-【Audio Flamingo】-【Nvidia】-【Type: Model】
- Audio Flamingo: A Novel Audio Language Model with Few-Shot Learning and Dialogue Abilities
- Author(s): Not listed
- Paper
-
【2024-04】-【SALMONN】-【Tsinghua】-【Type: Model】
-
【2024-04】-【LibriSQA】-【Shanghai Jiao Tong University】-【Type: Dataset Resource】
- LibriSQA: A Novel Dataset and Framework for Spoken Question Answering with Large Language Models
- Author(s): Zihan Zhao, Yiyang Jiang, Heyang Liu, Yanfeng Wang, Yu Wang
- Paper
-
【2024-03】-【WavLLM】-【CUHK】-【Type: Model】
- WavLLM: Towards Robust and Adaptive Speech Large Language Model
- Author(s): Not listed
- Paper
-
【2024-02】-【SLAM-LLM】-【Shanghai Jiao Tong University (SJTU)】-【Type: Model】
- An Embarrassingly Simple Approach for LLM with Strong ASR Capacity
- Author(s): Not listed
- Paper
-
【2024-01】-【Pengi】-【Microsoft】-【Type: Model】
- Pengi: An Audio Language Model for Audio Tasks
- Author(s): Not listed
- Paper
-
【2023-12】-【Qwen-Audio】-【Alibaba】-【Type: Model】
-
【2023-10】-【UniAudio】-【Chinese University of Hong Kong (CUHK)】-【Type: Model】
-
【2023-09】-【LLaSM】-【LinkSoul.AI】-【Type: Model】
- LLaSM: Large Language and Speech Model
- Author(s): Not listed
- Paper
-
【2023-09】-【Segment-level Q-Former】-【Tsinghua University, ByteDance】-【Type: Model】
- Connecting Speech Encoder and Large Language Model for ASR
- Author(s): Wenyi Yu, Changli Tang, Guangzhi Sun, Xianzhao Chen, Tian Tan, Wei Li, Lu Lu, Zejun Ma, Chao Zhang
- Paper
-
【2023-07】-【Prompting LLMs with Speech Recognition】-【Meta】-【Type: Model】
- Prompting Large Language Models with Speech Recognition Abilities
- Author(s): Yassir Fathullah, Chunyang Wu, Egor Lakomkin, Junteng Jia, Yuan Shangguan, Ke Li, Jinxi Guo, Wenhan Xiong, Jay Mahadeokar, Ozlem Kalinli, Christian Fuegen, Mike Seltzer
- Paper
-
【2023-05】-【SpeechGPT】-【Fudan University】-【Type: Model】
-
【2023-04】-【AudioGPT】-【Zhejiang University】-【Type: Model】
- AudioGPT: Understanding and Generating Speech, Music, Sound, and Talking Head
- Author(s): Rongjie Huang, Mingze Li, Dongchao Yang, Jiatong Shi, Xuankai Chang, Zhenhui Ye, Yuning Wu, Zhiqing Hong, Jiawei Huang, Jinglin Liu, Yi Ren, Zhou Zhao, Shinji Watanabe
- Paper
-
【2025-01】-【UltraEval-Audio】-【OpenBMB】-【Type: Benchmark】
-
【2024-12】-【ADU-Bench】-【Tsinghua University, University of Oxford】-【Type: Benchmark】
- Benchmarking Open-ended Audio Dialogue Understanding for Large Audio-Language Models
- Author(s): Kuofeng Gao, Shu-Tao Xia, Ke Xu, Philip Torr, Jindong Gu
- Paper
-
【2024-12】-【TalkArena】-【Stanford University, SCB 10X】-【Type: Interactive Benchmarking Tool】
- TalkArena: Interactive Evaluation of Large Audio Models
- Author(s): Ella Minzhi Li*, Will Held*, Michael J. Ryan, Kunat Pipatanakul, Potsawee Manakul, Hao Zhu, Diyi Yang (*Equal Contribution)
- Demo / Other Link
-
【2024-11】-【Dynamic-SUPERB Phase-2】-【National Taiwan University, University of Texas at Austin, Carnegie Mellon University, Nanyang Technological University, Toyota Technological Institute of Chicago, Université du Québec (INRS-EMT), NVIDIA, ASAPP, Renmin University of China】-【Type: Evaluation Framework】
- Dynamic-SUPERB Phase-2: A Collaboratively Expanding Benchmark for Measuring the Capabilities of Spoken Language Models with 180 Tasks
- Author(s): Chien-yu Huang, Wei-Chih Chen, Shu-wen Yang, Andy T. Liu, Chen-An Li, Yu-Xiang Lin, Wei-Cheng Tseng, Anuj Diwan, Yi-Jen Shih, Jiatong Shi, William Chen, Xuanjun Chen, Chi-Yuan Hsiao, Puyuan Peng, Shih-Heng Wang, Chun-Yi Kuan, Haibin Wu, Siddhant Arora, Kai-Wei Chang, Yifan Peng, Roshan Sharma, Shinji Watanabe, Bhiksha Ramakrishnan, Shady Shehata, Hung-yi Lee
- Paper / Other Link
-
【2024-10】-【VoiceBench】-【National University of Singapore】-【Type: Benchmark】
- VoiceBench: Benchmarking LLM-Based Voice Assistants
- Author(s): Yiming Chen, Xianghu Yue, Chen Zhang, Xiaoxue Gao, Robby T. Tan, Haizhou Li
- Paper
-
【2024-10】-【MMAU】-【University of Maryland】-【Type: Benchmark】
- MMAU: A Massive Multi-Task Audio Understanding and Reasoning Benchmark
- Author(s): S Sakshi, Utkarsh Tyagi, Sonal Kumar, Ashish Seth, Ramaneswaran Selvakumar, Oriol Nieto, Ramani Duraiswami, Sreyan Ghosh, Dinesh Manocha
- Paper / Other Link
-
【2024-09】-【SALMon】-【Hebrew University of Jerusalem】-【Type: Benchmark】
-
【2024-08】-【MuChoMusic】-【UPF, QMUL, UMG】-【Type: Benchmark】
- MuChoMusic: Evaluating Music Understanding in Multimodal Audio-Language Models
- Author(s): Benno Weck, Ilaria Manco, Emmanouil Benetos, Elio Quinton, George Fazekas, Dmitry Bogdanov
- Paper
-
【2024-07】-【AudioEntailment】-【CMU, Microsoft】-【Type: Benchmark】
- Audio Entailment: Assessing Deductive Reasoning for Audio Understanding
- Author(s): Soham Deshmukh, Shuo Han, Hazim Bukhari, Benjamin Elizalde, Hannes Gamper, Rita Singh, Bhiksha Raj
- Paper
-
【2024-06】-【AudioBench】-【A*STAR, Singapore】-【Type: Benchmark】
-
【2024-06】-【SD-Eval】-【CUHK, Bytedance】-【Type: Benchmark】
- SD-Eval: A Benchmark Dataset for Spoken Dialogue Understanding Beyond Words
- Author(s): Junyi Ao, Yuancheng Wang, Xiaohai Tian, Dekun Chen, Jun Zhang, Lu Lu, Yuxuan Wang, Haizhou Li, Zhizheng Wu
- Paper
-
【2024-05】-【AIR-Bench】-【ZJU, Alibaba】-【Type: Benchmark】
- AIR-Bench: Benchmarking Large Audio-Language Models via Generative Comprehension
- Author(s): Qian Yang, Jin Xu, Wenrui Liu, Yunfei Chu, Ziyue Jiang, Xiaohuan Zhou, Yichong Leng, Yuanjun Lv, Zhou Zhao, Chang Zhou, Jingren Zhou
- Paper
-
【2024-03】-【SpokenWOZ】-【Tencent】-【Type: Benchmark】
-
【2023-09】-【Dynamic-SUPERB】-【NTU-Taiwan, etc.】-【Type: Benchmark】
- Dynamic-SUPERB: Towards A Dynamic, Collaborative, and Comprehensive Instruction-Tuning Benchmark for Speech
- Author(s): Chien-yu Huang, Ke-Han Lu, Shih-Heng Wang, Chi-Yuan Hsiao, Chun-Yi Kuan, Haibin Wu, Siddhant Arora, Kai-Wei Chang, Jiatong Shi, Yifan Peng, Roshan Sharma, Shinji Watanabe, Bhiksha Ramakrishnan, Shady Shehata, Hung-yi Lee
- Paper
-
【2024-11】-【WavChat-Survey】-【Zhejiang University】-【Type: Survey】
- WavChat: A Survey of Spoken Dialogue Models
- Author(s): Shengpeng Ji, Yifu Chen, Minghui Fang, Jialong Zuo, Jingyu Lu, Hanting Wang, Ziyue Jiang, Long Zhou, Shujie Liu, Xize Cheng, Xiaoda Yang, Zehan Wang, Qian Yang, Jian Li, Yidi Jiang, Jingzhen He, Yunfei Chu, Jin Xu, Zhou Zhao
- Paper
-
【2024-10】-【SpeechLLM-Survey】-【SJTU, AISpeech】-【Type: Survey】
- A Survey on Speech Large Language Models
- Author(s): Jing Peng, Yucheng Wang, Yu Xi, Xu Li, Xizhuo Zhang, Kai Yu
- Paper
-
【2024-10】-【SpeechLM-Survey】-【CUHK, Tencent】-【Type: Survey】
- Recent Advances in Speech Language Models: A Survey
- Author(s): Wenqian Cui, Dianzhi Yu, Xiaoqi Jiao, Ziqiao Meng, Guangyan Zhang, Qichao Wang, Yiwen Guo, Irwin King
- Paper
-
【2024-02】-【AudioLM-Survey】-【National Taiwan University, MIT】-【Type: Survey】
- Towards audio language modeling -- an overview
- Author(s): Haibin Wu, Xuanjun Chen, Yi-Cheng Lin, Kai-wei Chang, Ho-Lam Chung, Alexander H. Liu, Hung-yi Lee
- Paper
-
【2024-09】-【EMOVA】-【HKUST】-【Type: Model】
- EMOVA: Empowering Language Models to See, Hear and Speak with Vivid Emotions
- Author(s): Kai Chen, Yunhao Gou, Runhui Huang, Zhili Liu, Daxin Tan, Jing Xu, Chunwei Wang, Yi Zhu, Yihan Zeng, Kuo Yang, Dingdong Wang, Kun Xiang, Haoyuan Li, Haoli Bai, Jianhua Han, Xiaohui Li, Weike Jin, Nian Xie, Yu Zhang, James T. Kwok, Hengshuang Zhao, Xiaodan Liang, Dit-Yan Yeung, Xiao Chen, Zhenguo Li, Wei Zhang, Qun Liu, Jun Yao, Lanqing Hong, Lu Hou, Hang Xu
- Paper / Demo
-
【2023-11】-【CoDi-2】-【UC Berkeley】-【Type: Model】
-
【2023-06】-【Macaw-LLM】-【Tencent】-【Type: Model】
- Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration
- Author(s): Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu
- Paper
-
【2024-06】-【Audio Hallucination】-【NTU-Taiwan】-【Type: Research】
- Understanding Sounds, Missing the Questions: The Challenge of Object Hallucination in Large Audio-Language Models
- Author(s): Chun-Yi Kuan, Wei-Ping Huang, Hung-yi Lee
- Paper
-
【2024-06】-【CodecFake】-【National Taiwan University】-【Type: Safety】
- CodecFake: Enhancing Anti-Spoofing Models Against Deepfake Audios from Codec-Based Speech Synthesis Systems
- Author(s): Haibin Wu, Yuan Tseng, Hung-yi Lee
- Paper / Other Link
-
【2024-05】-【VoiceJailbreak】-【CISPA】-【Type: Method】
- Voice Jailbreak Attacks Against GPT-4o
- Author(s): Xinyue Shen, Yixin Wu, Michael Backes, Yang Zhang
- Paper
-
【2025-01】-【MinMo】-【FunAudioLLM Team, Tongyi Lab, Alibaba Group】-【Type: Multimodal Large Language Model】
- MinMo: A Multimodal Large Language Model for Seamless Voice Interaction
- Author(s): Qian Chen, Yafeng Chen, Yanni Chen, Mengzhe Chen, Yingda Chen, Chong Deng, Zhihao Du, Ruize Gao, Changfeng Gao, Zhifu Gao, Yabin Li, Xiang Lv, Jiaqing Liu, Haoneng Luo, Bin Ma, Chongjia Ni, Xian Shi, Jialong Tang, Hui Wang, Hao Wang, Wen Wang, Yuxuan Wang, Yunlan Xu, Fan Yu, Zhijie Yan, Yexin Yang, Baosong Yang, Xian Yang, Guanrou Yang, Tianyu Zhao, Qinglin Zhang, Shiliang Zhang, Nan Zhao, Pei Zhang, Chong Zhang, Jinren Zhou
- Paper / Other Link
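
A pattern worth noting across many of the models cataloged above (e.g., SALMONN, Qwen2-Audio, Ultravox): a pretrained audio encoder feeds a lightweight projector that maps acoustic features into the LLM's embedding space, and the projected "audio tokens" are prepended to the text prompt of an often-frozen decoder-only LLM. The sketch below illustrates only this general pattern; all module names and dimensions are hypothetical and not taken from any specific paper listed here.

```python
# Minimal sketch of the audio-encoder -> projector -> decoder-only-LLM
# pattern shared by many models above. Hypothetical names and dimensions.
import torch
import torch.nn as nn

class AudioProjector(nn.Module):
    """Maps audio-encoder frames into the LLM's token-embedding space."""
    def __init__(self, audio_dim: int = 1280, llm_dim: int = 4096, stride: int = 4):
        super().__init__()
        self.stride = stride  # stack frames to shorten the sequence
        self.proj = nn.Sequential(
            nn.Linear(audio_dim * stride, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        b, t, d = feats.shape                      # (batch, frames, audio_dim)
        t = t - t % self.stride                    # drop ragged tail frames
        feats = feats[:, :t].reshape(b, t // self.stride, d * self.stride)
        return self.proj(feats)                    # (batch, t/stride, llm_dim)

projector = AudioProjector()
audio_feats = torch.randn(1, 100, 1280)   # output of a pretrained encoder
audio_tokens = projector(audio_feats)     # (1, 25, 4096)
text_embeds = torch.randn(1, 12, 4096)    # embedded text prompt
llm_inputs = torch.cat([audio_tokens, text_embeds], dim=1)  # feed to the LLM
```
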
Similar Open Source Tools

AIGC-Interview-Book
AIGC-Interview-Book is the ultimate guide for AIGC algorithm and development job interviews, covering a wide range of topics such as AIGC, traditional deep learning, autonomous driving, AI agent, machine learning, computer vision, natural language processing, reinforcement learning, embodied intelligence, metaverse, AGI, Python, Java, C/C++, Go, embedded systems, front-end, back-end, testing, and operations. The repository consolidates industry experience and insights from frontline AIGC algorithm experts, providing resources on AIGC knowledge framework, internal referrals at AIGC big companies, interview experiences, company guides, AI campus recruitment schedule, interview preparation, salary insights, coding guide, and job-seeking Q&A. It serves as a valuable resource for AIGC-related professionals, students, and job seekers, offering insights and guidance for career advancement and job interviews in the AIGC field.

Awesome-Lists-and-CheatSheets
Awesome-Lists is a curated index of selected resources spanning various fields including programming languages and theories, web and frontend development, server-side development and infrastructure, cloud computing and big data, data science and artificial intelligence, product design, etc. It includes articles, books, courses, examples, open-source projects, and more. The repository categorizes resources according to the knowledge system of different domains, aiming to provide valuable and concise material indexes for readers. Users can explore and learn from a wide range of high-quality resources in a systematic way.

Awesome-Trustworthy-Embodied-AI
The Awesome Trustworthy Embodied AI repository focuses on the development of safe and trustworthy Embodied Artificial Intelligence (EAI) systems. It addresses critical challenges related to safety and trustworthiness in EAI, proposing a unified research framework and defining levels of safety and resilience. The repository provides a comprehensive review of state-of-the-art solutions, benchmarks, and evaluation metrics, aiming to bridge the gap between capability advancement and safety mechanisms in EAI development.

Awesome-Lists
Awesome-Lists is a curated list of awesome lists across various domains of computer science and beyond, including programming languages, web development, data science, and more. It provides a comprehensive index of articles, books, courses, open source projects, and other resources. The lists are organized by topic and subtopic, making it easy to find the information you need. Awesome-Lists is a valuable resource for anyone looking to learn more about a particular topic or to stay up-to-date on the latest developments in the field.

cc-sdd
The cc-sdd repository provides a tool for AI-Driven Development Life Cycle with Spec-Driven Development workflows for Claude Code and Gemini CLI. It includes powerful slash commands, Project Memory for AI learning, structured AI-DLC workflow, Spec-Driven Development methodology, and Kiro IDE compatibility. Ideal for feature development, code reviews, technical planning, and maintaining development standards. The tool supports multiple coding agents, offers an AI-DLC workflow with quality gates, and allows for advanced options like language and OS selection, preview changes, safe updates, and custom specs directory. It integrates AI-Driven Development Life Cycle, Project Memory, Spec-Driven Development, supports cross-platform usage, multi-language support, and safe updates with backup options.

FeedCraft
FeedCraft is a powerful tool to process your RSS feeds as a middleware. Use it to translate your feed, extract full text, emulate a browser to render JS-heavy pages, use an LLM such as Google Gemini to generate a brief for each RSS article, use natural language to filter your RSS feed, and more! It is an open-source tool that can be self-deployed and used with any RSS reader. It supports AI-powered processing using OpenAI-compatible LLMs, custom prompts, saving rules to apply to different RSS sources, portable mode for on-the-go usage, and dock mode for advanced customization of RSS sources and processing parameters.

Rankify
Rankify is a Python toolkit designed for unified retrieval, re-ranking, and retrieval-augmented generation (RAG) research. It integrates 40 pre-retrieved benchmark datasets and supports 7 retrieval techniques, 24 state-of-the-art re-ranking models, and multiple RAG methods. Rankify provides a modular and extensible framework, enabling seamless experimentation and benchmarking across retrieval pipelines. It offers comprehensive documentation, open-source implementation, and pre-built evaluation tools, making it a powerful resource for researchers and practitioners in the field.
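As a rough, hypothetical illustration of the retrieve-then-rerank pipeline such a toolkit organizes (these are stand-in functions, not Rankify's actual API):

```python
# Hypothetical retrieve -> rerank pipeline; illustrates the pattern a
# toolkit like Rankify wires together, not its real classes or signatures.
from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    score: float = 0.0

def retrieve(query: str, corpus: list[str], k: int = 10) -> list[Doc]:
    # Stand-in for BM25 / dense retrieval: crude token-overlap scoring.
    q = set(query.lower().split())
    scored = [Doc(t, len(q & set(t.lower().split()))) for t in corpus]
    return sorted(scored, key=lambda d: d.score, reverse=True)[:k]

def rerank(query: str, docs: list[Doc]) -> list[Doc]:
    # Stand-in for a cross-encoder re-ranker: break ties toward shorter docs.
    return sorted(docs, key=lambda d: (-d.score, len(d.text)))

corpus = ["a survey of audio language models", "speech recognition with LLMs"]
top = rerank("audio LLM survey", retrieve("audio LLM survey", corpus))
print(top[0].text)  # best candidate passed on to a RAG reader/generator
```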

awesome-azure-openai-llm
This repository is a collection of references to Azure OpenAI, Large Language Models (LLM), and related services and libraries. It provides information on various topics such as RAG, Azure OpenAI, LLM applications, agent design patterns, semantic kernel, prompting, finetuning, challenges & abilities, LLM landscape, surveys & references, AI tools & extensions, datasets, and evaluations. The content covers a wide range of topics related to AI, machine learning, and natural language processing, offering insights into the latest advancements in the field.

CGraph
CGraph is a cross-platform **D**irected **A**cyclic **G**raph framework in pure C++ with no third-party dependencies. With it, you can **build your own operators simply and describe any running schedule** you need, such as dependency, parallelism, and aggregation. Useful tools and plugins are also provided to improve your project. Tutorials and contact information are shown below; please **get in touch with us for free** if you need more about this repository.
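CGraph itself is C++, but the core scheduling idea it implements is simple: run each operator only after all of its dependencies have finished. The toy below is a sequential Python stand-in for that idea, not CGraph's API.

```python
# Toy illustration of DAG scheduling: run each operator only after all of
# its dependencies have finished. CGraph (C++) adds parallelism, plugins,
# and much more; none of this reflects its actual API.
from graphlib import TopologicalSorter

def run_dag(ops: dict, deps: dict) -> None:
    for name in TopologicalSorter(deps).static_order():
        ops[name]()  # every dependency of `name` has already run

ops = {
    "load":   lambda: print("load input"),
    "parse":  lambda: print("parse records"),
    "report": lambda: print("write report"),
}
deps = {"parse": {"load"}, "report": {"parse"}}
run_dag(ops, deps)  # prints: load input, parse records, write report
```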

nodetool
NodeTool is a platform designed for AI enthusiasts, developers, and creators, providing a visual interface to access a variety of AI tools and models. It simplifies access to advanced AI technologies, offering resources for content creation, data analysis, automation, and more. With features like a visual editor, seamless integration with leading AI platforms, model manager, and API integration, NodeTool caters to both newcomers and experienced users in the AI field.

mcp
Model Context Protocol (MCP) is an open protocol that standardizes how applications provide context to large language models (LLMs). It allows AI applications to connect with various data sources and tools in a consistent manner, enhancing their capabilities and flexibility. This repository contains core libraries, test frameworks, engineering systems, pipelines, and tooling for Microsoft MCP Server contributors to unify engineering investments and reduce duplication and divergence. For more details, visit the official MCP website.
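For orientation, MCP messages are JSON-RPC 2.0; a tool invocation looks roughly like the request below. The tool name and arguments are invented for illustration, and the exact schema is defined by the MCP specification.

```python
# Rough shape of an MCP tool-call request (JSON-RPC 2.0). The tool name
# and arguments are invented; consult the MCP spec for the exact schema.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_docs",                 # hypothetical tool
        "arguments": {"query": "audio LLMs"},  # hypothetical arguments
    },
}
print(json.dumps(request, indent=2))
```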

Avalonia-Assistant
Avalonia-Assistant is an open-source desktop intelligent assistant that aims to provide a user-friendly interactive experience based on the Avalonia UI framework and the integration of Semantic Kernel with OpenAI or other large language models. By utilizing Avalonia-Assistant, you can perform various desktop operations through text or voice commands, enhancing your productivity and daily office experience.

claude-flow
Claude-Flow is a workflow automation tool designed to streamline and optimize business processes. It provides a user-friendly interface for creating and managing workflows, allowing users to automate repetitive tasks and improve efficiency. With features such as drag-and-drop workflow builder, customizable templates, and integration with popular business tools, Claude-Flow empowers users to automate their workflows without the need for extensive coding knowledge. Whether you are a small business owner looking to streamline your operations or a project manager seeking to automate task assignments, Claude-Flow offers a flexible and scalable solution to meet your workflow automation needs.

Awesome-Embodied-Agent-with-LLMs
This repository, named Awesome-Embodied-Agent-with-LLMs, is a curated list of research related to Embodied AI or agents with Large Language Models. It includes various papers, surveys, and projects focusing on topics such as self-evolving agents, advanced agent applications, LLMs with RL or world models, planning and manipulation, multi-agent learning and coordination, vision and language navigation, detection, 3D grounding, interactive embodied learning, rearrangement, benchmarks, simulators, and more. The repository provides a comprehensive collection of resources for individuals interested in exploring the intersection of embodied agents and large language models.

AutoAgents
AutoAgents is a cutting-edge multi-agent framework built in Rust that enables the creation of intelligent, autonomous agents powered by Large Language Models (LLMs) and Ractor. Designed for performance, safety, and scalability, AutoAgents provides a robust foundation for building complex AI systems that can reason, act, and collaborate. With AutoAgents you can create cloud-native agents, edge-native agents, and hybrid models as well. It is extensible enough that other ML models can be used to create complex pipelines on top of the actor framework.
For similar tasks

vocode-python
Vocode is an open source library that enables users to easily build voice-based LLM (Large Language Model) apps. With Vocode, users can create real-time streaming conversations with LLMs and deploy them for phone calls, Zoom meetings, and more. The library offers abstractions and integrations for transcription services, LLMs, and synthesis services, making it a comprehensive tool for voice-based applications.
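The transcription -> LLM -> synthesis loop that such a library abstracts looks roughly like the sketch below. These are hypothetical stand-in functions, not vocode's actual classes.

```python
# Hypothetical sketch of the streaming voice-agent loop a library like
# Vocode abstracts: transcribe -> generate -> synthesize. Not its real API.
from typing import Iterator

def transcribe(audio_chunk: bytes) -> str:
    return audio_chunk.decode(errors="ignore")  # stand-in for an ASR service

def generate(prompt: str) -> Iterator[str]:
    yield f"You said: {prompt}"                 # stand-in for a streaming LLM

def synthesize(text: str) -> bytes:
    return text.encode()                        # stand-in for a TTS service

def voice_agent(audio_chunks: Iterator[bytes]) -> Iterator[bytes]:
    for chunk in audio_chunks:
        user_text = transcribe(chunk)
        if not user_text.strip():
            continue
        for sentence in generate(user_text):    # speak as text arrives
            yield synthesize(sentence)

for audio in voice_agent([b"hello there"]):
    print(audio)  # b'You said: hello there'
```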

ultravox
Ultravox is a fast multimodal Language Model (LLM) that can understand both text and human speech in real-time without the need for a separate Audio Speech Recognition (ASR) stage. By extending Meta's Llama 3 model with a multimodal projector, Ultravox converts audio directly into a high-dimensional space used by Llama 3, enabling quick responses and potential understanding of paralinguistic cues like timing and emotion in human speech. The current version (v0.3) has impressive speed metrics and aims for further enhancements. Ultravox currently converts audio to streaming text and plans to emit speech tokens for direct audio conversion. The tool is open for collaboration to enhance this functionality.
For similar jobs

LLMVoX
LLMVoX is a lightweight 30M-parameter, LLM-agnostic, autoregressive streaming Text-to-Speech (TTS) system designed to convert text outputs from Large Language Models into high-fidelity streaming speech with low latency. It achieves significantly lower Word Error Rate compared to speech-enabled LLMs while operating at comparable latency and speech quality. Key features include being lightweight & fast with only 30M parameters, LLM-agnostic for easy integration with existing models, multi-queue streaming for continuous speech generation, and multilingual support for easy adaptation to new languages.
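Its multi-queue streaming idea, decoupling LLM text production from speech synthesis so audio flows continuously, can be sketched as follows. This is a hypothetical stand-in, not LLMVoX's actual interface.

```python
# Hedged sketch of multi-queue streaming: one queue carries LLM text chunks,
# a worker turns each into audio, and a second queue feeds playback. The
# synthesize() function is a stand-in, not LLMVoX's actual interface.
import queue
import threading

text_q: queue.Queue = queue.Queue()
audio_q: queue.Queue = queue.Queue()

def synthesize(text: str) -> bytes:
    return text.encode()  # stand-in for the streaming TTS model

def tts_worker() -> None:
    while (chunk := text_q.get()) is not None:
        audio_q.put(synthesize(chunk))  # speech starts before the text ends
    audio_q.put(None)                   # signal end of stream

threading.Thread(target=tts_worker, daemon=True).start()
for chunk in ["Hello,", " streaming", " speech."]:  # e.g., LLM token stream
    text_q.put(chunk)
text_q.put(None)
while (audio := audio_q.get()) is not None:
    print(f"play {len(audio)} bytes")
```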

weave
Weave is a toolkit for developing Generative AI applications, built by Weights & Biases. With Weave, you can log and debug language model inputs, outputs, and traces; build rigorous, apples-to-apples evaluations for language model use cases; and organize all the information generated across the LLM workflow, from experimentation to evaluations to production. Weave aims to bring rigor, best-practices, and composability to the inherently experimental process of developing Generative AI software, without introducing cognitive overhead.

agentcloud
AgentCloud is an open-source platform that enables companies to build and deploy private LLM chat apps, empowering teams to securely interact with their data. It comprises three main components: Agent Backend, Webapp, and Vector Proxy. To run this project locally, clone the repository, install Docker, and start the services. The project is licensed under the GNU Affero General Public License, version 3 only. Contributions and feedback are welcome from the community.

oss-fuzz-gen
This framework generates fuzz targets for real-world `C`/`C++` projects with various Large Language Models (LLMs) and benchmarks them via the `OSS-Fuzz` platform. It successfully leverages LLMs to generate valid fuzz targets (those producing a non-zero coverage increase) for 160 C/C++ projects. The maximum line-coverage increase is 29% over the existing human-written targets.
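The generate-compile-evaluate loop such a framework automates can be sketched like this. `ask_llm()` is a hypothetical stand-in; the clang flags are standard libFuzzer usage, but the prompt and paths are invented.

```python
# Hedged sketch of the loop such a framework automates: ask an LLM for a
# fuzz target, then check that it at least compiles as a libFuzzer target.
# ask_llm() is a hypothetical stand-in; paths and prompts are invented.
import subprocess

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real LLM call")

def generate_and_build(api_description: str) -> bool:
    target_src = ask_llm(f"Write a libFuzzer target for: {api_description}")
    with open("target.c", "w") as f:
        f.write(target_src)
    # A "valid" target must at minimum compile and link with the fuzzer.
    result = subprocess.run(
        ["clang", "-fsanitize=fuzzer,address", "target.c", "-o", "target"],
        capture_output=True,
    )
    return result.returncode == 0  # coverage measurement would follow
```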

LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.

VisionCraft
The VisionCraft API is a free API for using over 100 different AI models. From images to sound.

kaito
Kaito is an operator that automates the AI/ML inference model deployment in a Kubernetes cluster. It manages large model files using container images, avoids tuning deployment parameters to fit GPU hardware by providing preset configurations, auto-provisions GPU nodes based on model requirements, and hosts large model images in the public Microsoft Container Registry (MCR) if the license allows. Using Kaito, the workflow of onboarding large AI inference models in Kubernetes is largely simplified.