Awesome-Audio-LLM
Audio Large Language Models
Stars: 214
Awesome-Audio-LLM is a curated repository of models and methods at the intersection of audio and language processing. It collects research papers and models from a wide range of institutions and authors, covering topics such as bridging audio and language, speech emotion recognition, and voice assistants, and serves as a comprehensive resource for anyone working in the field.
README:
We thank the following contributors for their valuable contributions: zwenyu, Yuan-ManX, chaoweihuang, Liu-Tianchi, Sakshi113, and others.
- MERaLiON-AudioLLM
- ADU-Bench
- Dynamic-SUPERB Phase-2
- Taiwanese AudioLLM
- WavChat-Survey
- SpeechLLM-Survey
- VoiceBench
- SPIRIT LM
- DiVA
- SpeechEmotionLlama
- SpeechLM-Survey
- MMAU
- SALMon
- EMOVA
- Moshi
- LLaMA-Omni
- Ultravox
- MoWE-Audio
- AudioBERT
- DeSTA2
- ASRCompare
- MooER
- MuChoMusic
- Mini-Omni
- FunAudioLLM
- Qwen2-Audio
- GAMA
- LLaST
- Decoder-only LLMs for STT
- AudioEntailment
- CompA
- DeSTA
- Audio Hallucination
- SD-Eval
- Speech ReaLLM
- AudioBench
- AIR-Bench
- Audio Flamingo
- VoiceJailbreak
- SALMONN
- WavLLM
- SLAM-LLM
- Pengi
- Qwen-Audio
- CoDi-2
- UniAudio
- Dynamic-SUPERB
- LLaSM
- Segment-level Q-Former
- Prompting LLMs with Speech Recognition
- Macaw-LLM
- SpeechGPT
- AudioGPT
-
【2024-12】-【MERaLiON-AudioLLM】-【I2R, A*STAR, Singapore】-【Type: Model】
- MERaLiON-AudioLLM: Bridging Audio and Language with Large Language Models
- Author(s): Yingxu He, Zhuohan Liu, Shuo Sun, Bin Wang, Wenyu Zhang, Xunlong Zou, Nancy F. Chen, Ai Ti Aw
- Paper / Hugging Face Model / Demo
-
【2024-11】-【Taiwanese AudioLLM】-【National Taiwan University】-【Type: Model】
- Building a Taiwanese Mandarin Spoken Language Model: A First Attempt
- Author(s): Chih-Kai Yang, Yu-Kuan Fu, Chen-An Li, Yi-Cheng Lin, Yu-Xiang Lin, Wei-Chih Chen, Ho Lam Chung, Chun-Yi Kuan, Wei-Ping Huang, Ke-Han Lu, Tzu-Quan Lin, Hsiu-Hsuan Wang, En-Pei Hu, Chan-Jan Hsu, Liang-Hsuan Tseng, I-Hsiang Chiu, Ulin Sanga, Xuanjun Chen, Po-chun Hsu, Shu-wen Yang, Hung-yi Lee
- Paper
-
【2024-10】-【SPIRIT LM】-【Meta】-【Type: Model】
- SPIRIT LM: Interleaved Spoken and Written Language Model
- Author(s): Tu Anh Nguyen, Benjamin Muller, Bokai Yu, Marta R. Costa-jussa, Maha Elbayad, Sravya Popuri, Christophe Ropers, Paul-Ambroise Duquenne, Robin Algayres, Ruslan Mavlyutov, Itai Gat, Mary Williamson, Gabriel Synnaeve, Juan Pino, Benoît Sagot, Emmanuel Dupoux
- Paper / Demo / Other Link
-
【2024-10】-【DiVA】-【Georgia Tech, Stanford】-【Type: Model】
-
【2024-10】-【SpeechEmotionLlama】-【MIT, Meta】-【Type: Model】
- Frozen Large Language Models Can Perceive Paralinguistic Aspects of Speech
- Author(s): Wonjune Kang, Junteng Jia, Chunyang Wu, Wei Zhou, Egor Lakomkin, Yashesh Gaur, Leda Sari, Suyoun Kim, Ke Li, Jay Mahadeokar, Ozlem Kalinli
- Paper
-
【2024-09】-【Moshi】-【Kyutai】-【Type: Model】
- Moshi: a speech-text foundation model for real-time dialogue
- Author(s): Alexandre Défossez, Laurent Mazaré, Manu Orsini, Amélie Royer, Patrick Pérez, Hervé Jégou, Edouard Grave, Neil Zeghidour
- Paper
-
【2024-09】-【LLaMA-Omni】-【Institute of Computing Technology, Chinese Academy of Sciences (ICT/CAS)】-【Type: Model】
- LLaMA-Omni: Seamless Speech Interaction with Large Language Models
- Author(s): Qingkai Fang, Shoutao Guo, Yan Zhou, Zhengrui Ma, Shaolei Zhang, Yang Feng
- Paper
-
【2024-09】-【Ultravox】-【Fixie.ai】-【Type: Model】
-
【2024-09】-【MoWE-Audio】-【A*STAR】-【Type: Model】
- MoWE-Audio: Multitask AudioLLMs with Mixture of Weak Encoders
- Author(s): Wenyu Zhang, Shuo Sun, Bin Wang, Xunlong Zou, Zhuohan Liu, Yingxu He, Geyu Lin, Nancy F. Chen, Ai Ti Aw
- Paper
-
【2024-09】-【AudioBERT】-【POSTECH, Inha University】-【Type: Model】
- AudioBERT: Audio Knowledge Augmented Language Model
- Author(s): Hyunjong Ok, Suho Yoo, Jaeho Lee
- Paper
-
【2024-09】-【DeSTA2】-【National Taiwan University, NVIDIA】-【Type: Model】
- Developing Instruction-Following Speech Language Model Without Speech Instruction-Tuning Data
- Author(s): Ke-Han Lu, Zhehuai Chen, Szu-Wei Fu, Chao-Han Huck Yang, Jagadeesh Balam, Boris Ginsburg, Yu-Chiang Frank Wang, Hung-yi Lee
- Paper
-
【2024-09】-【ASRCompare】-【Tsinghua University, Tencent AI Lab】-【Type: Model】
- Comparing Discrete and Continuous Space LLMs for Speech Recognition
- Author(s): Yaoxun Xu, Shi-Xiong Zhang, Jianwei Yu, Zhiyong Wu, Dong Yu
- Paper
-
【2024-08】-【MooER】-【Moore Threads】-【Type: Model】
- MooER: LLM-based Speech Recognition and Translation Models from Moore Threads
- Author(s): Zhenlin Liang, Junhao Xu, Yi Liu, Yichao Hu, Jian Li, Yajun Zheng, Meng Cai, Hua Wang
- Paper
-
【2024-08】-【Mini-Omni】-【Tsinghua University】-【Type: Model】
- Mini-Omni: Language Models Can Hear, Talk While Thinking in Streaming
- Author(s): Zhifei Xie, Changqiao Wu
- Paper
-
【2024-07】-【FunAudioLLM】-【Alibaba】-【Type: Model】
-
【2024-07】-【Qwen2-Audio】-【Alibaba Group】-【Type: Model】
- Qwen2-Audio Technical Report
- Author(s): Yunfei Chu, Jin Xu, Qian Yang, Haojie Wei, Xipin Wei, Zhifang Guo, Yichong Leng, Yuanjun Lv, Jinzheng He, Junyang Lin, Chang Zhou, Jingren Zhou
- Paper
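Several of the models catalogued here ship with Hugging Face checkpoints. As a quick illustration, below is a minimal sketch of querying Qwen2-Audio through the `transformers` integration; the checkpoint name, the `audios=` processor keyword, and the example audio URL follow the public model card and are assumptions rather than part of this list.

```python
# Minimal sketch of chatting with Qwen2-Audio via Hugging Face transformers.
# Checkpoint id, the `audios=` keyword, and the sample URL are assumptions
# based on the public model card; adjust for your transformers version.
from io import BytesIO
from urllib.request import urlopen

import librosa
from transformers import AutoProcessor, Qwen2AudioForConditionalGeneration

model_id = "Qwen/Qwen2-Audio-7B-Instruct"  # assumed checkpoint name
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2AudioForConditionalGeneration.from_pretrained(model_id, device_map="auto")

audio_url = "https://example.com/sample.wav"  # placeholder audio clip
conversation = [
    {"role": "user", "content": [
        {"type": "audio", "audio_url": audio_url},
        {"type": "text", "text": "What can you hear in this clip?"},
    ]},
]

# Render the chat template, load the referenced audio at the rate the
# feature extractor expects, and batch text and audio together.
text = processor.apply_chat_template(conversation, add_generation_prompt=True, tokenize=False)
audio, _ = librosa.load(BytesIO(urlopen(audio_url).read()),
                        sr=processor.feature_extractor.sampling_rate)
inputs = processor(text=text, audios=[audio], return_tensors="pt", padding=True)

output_ids = model.generate(**inputs, max_new_tokens=256)
output_ids = output_ids[:, inputs.input_ids.shape[1]:]  # drop the prompt tokens
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```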
-
【2024-07】-【GAMA】-【University of Maryland, College Park】-【Type: Model】
-
【2024-07】-【LLaST】-【The Chinese University of Hong Kong, Shenzhen; Shanghai AI Laboratory; Nara Institute of Science and Technology, Japan】-【Type: Model】
- LLaST: Improved End-to-end Speech Translation System Leveraged by Large Language Models
- Author(s): Xi Chen, Songyang Zhang, Qibing Bai, Kai Chen, Satoshi Nakamura
- Paper
-
【2024-07】-【Decoder-only LLMs for STT】-【NTU-Taiwan, Meta】-【Type: Research】
- Investigating Decoder-only Large Language Models for Speech-to-text Translation
- Paper
-
【2024-07】-【CompA】-【University of Maryland, College Park; Adobe, USA; NVIDIA, Bangalore, India】-【Type: Model】
-
【2024-06】-【DeSTA】-【NTU-Taiwan, Nvidia】-【Type: Model】
- DeSTA: Enhancing Speech Language Models through Descriptive Speech-Text Alignment
- Paper
-
【2024-06】-【Speech ReaLLM】-【Meta】-【Type: Model】
- Speech ReaLLM – Real-time Streaming Speech Recognition with Multimodal LLMs by Teaching the Flow of Time
- Paper
-
【2024-05】-【Audio Flamingo】-【Nvidia】-【Type: Model】
- Audio Flamingo: A Novel Audio Language Model with Few-Shot Learning and Dialogue Abilities
- Paper
-
【2024-04】-【SALMONN】-【Tsinghua】-【Type: Model】
-
【2024-03】-【WavLLM】-【CUHK】-【Type: Model】
- WavLLM: Towards Robust and Adaptive Speech Large Language Model
- Paper
-
【2024-02】-【SLAM-LLM】-【Shanghai Jiao Tong University (SJTU)】-【Type: Model】
- An Embarrassingly Simple Approach for LLM with Strong ASR Capacity
- Paper
-
【2024-01】-【Pengi】-【Microsoft】-【Type: Model】
- Pengi: An Audio Language Model for Audio Tasks
- Paper
-
【2023-12】-【Qwen-Audio】-【Alibaba】-【Type: Model】
-
【2023-10】-【UniAudio】-【Chinese University of Hong Kong (CUHK)】-【Type: Model】
-
【2023-09】-【LLaSM】-【LinkSoul.AI】-【Type: Model】
- LLaSM: Large Language and Speech Model
- Paper
-
【2023-09】-【Segment-level Q-Former】-【Tsinghua University, ByteDance】-【Type: Model】
- Connecting Speech Encoder and Large Language Model for ASR
- Author(s): Wenyi Yu, Changli Tang, Guangzhi Sun, Xianzhao Chen, Tian Tan, Wei Li, Lu Lu, Zejun Ma, Chao Zhang
- Paper
-
【2023-07】-【Prompting LLMs with Speech Recognition】-【Meta】-【Type: Model】
- Prompting Large Language Models with Speech Recognition Abilities
- Author(s): Yassir Fathullah, Chunyang Wu, Egor Lakomkin, Junteng Jia, Yuan Shangguan, Ke Li, Jinxi Guo, Wenhan Xiong, Jay Mahadeokar, Ozlem Kalinli, Christian Fuegen, Mike Seltzer
- Paper
-
【2023-05】-【SpeechGPT】-【Fudan University】-【Type: Model】
-
【2023-04】-【AudioGPT】-【Zhejiang University】-【Type: Model】
- AudioGPT: Understanding and Generating Speech, Music, Sound, and Talking Head
- Author(s): Rongjie Huang, Mingze Li, Dongchao Yang, Jiatong Shi, Xuankai Chang, Zhenhui Ye, Yuning Wu, Zhiqing Hong, Jiawei Huang, Jinglin Liu, Yi Ren, Zhou Zhao, Shinji Watanabe
- Paper
-
【2024-12】-【ADU-Bench】-【Tsinghua University, University of Oxford】-【Type: Benchmark】
- Benchmarking Open-ended Audio Dialogue Understanding for Large Audio-Language Models
- Author(s): Kuofeng Gao, Shu-Tao Xia, Ke Xu, Philip Torr, Jindong Gu
- Paper
-
【2024-11】-【Dynamic-SUPERB Phase-2】-【National Taiwan University, University of Texas at Austin, Carnegie Mellon University, Nanyang Technological University, Toyota Technological Institute of Chicago, Université du Québec (INRS-EMT), NVIDIA, ASAPP, Renmin University of China】-【Type: Evaluation Framework】
- Dynamic-SUPERB Phase-2: A Collaboratively Expanding Benchmark for Measuring the Capabilities of Spoken Language Models with 180 Tasks
- Author(s): Chien-yu Huang, Wei-Chih Chen, Shu-wen Yang, Andy T. Liu, Chen-An Li, Yu-Xiang Lin, Wei-Cheng Tseng, Anuj Diwan, Yi-Jen Shih, Jiatong Shi, William Chen, Xuanjun Chen, Chi-Yuan Hsiao, Puyuan Peng, Shih-Heng Wang, Chun-Yi Kuan, Haibin Wu, Siddhant Arora, Kai-Wei Chang, Yifan Peng, Roshan Sharma, Shinji Watanabe, Bhiksha Ramakrishnan, Shady Shehata, Hung-yi Lee
- Paper / Other Link
-
【2024-10】-【VoiceBench】-【National University of Singapore】-【Type: Benchmark】
- VoiceBench: Benchmarking LLM-Based Voice Assistants
- Author(s): Yiming Chen, Xianghu Yue, Chen Zhang, Xiaoxue Gao, Robby T. Tan, Haizhou Li
- Paper
-
【2024-10】-【MMAU】-【University of Maryland】-【Type: Benchmark】
- MMAU: A Massive Multi-Task Audio Understanding and Reasoning Benchmark
- Author(s): S Sakshi, Utkarsh Tyagi, Sonal Kumar, Ashish Seth, Ramaneswaran Selvakumar, Oriol Nieto, Ramani Duraiswami, Sreyan Ghosh, Dinesh Manocha
- Paper / Other Link
-
【2024-09】-【SALMon】-【Hebrew University of Jerusalem】-【Type: Benchmark】
-
【2024-08】-【MuChoMusic】-【UPF, QMUL, UMG】-【Type: Benchmark】
- MuChoMusic: Evaluating Music Understanding in Multimodal Audio-Language Models
- Author(s): Benno Weck, Ilaria Manco, Emmanouil Benetos, Elio Quinton, George Fazekas, Dmitry Bogdanov
- Paper
-
【2024-07】-【AudioEntailment】-【CMU, Microsoft】-【Type: Benchmark】
- Audio Entailment: Assessing Deductive Reasoning for Audio Understanding
- Author(s): Soham Deshmukh, Shuo Han, Hazim Bukhari, Benjamin Elizalde, Hannes Gamper, Rita Singh, Bhiksha Raj
- Paper
-
【2024-06】-【SD-Eval】-【CUHK, Bytedance】-【Type: Benchmark】
- SD-Eval: A Benchmark Dataset for Spoken Dialogue Understanding Beyond Words
- Author(s): Junyi Ao, Yuancheng Wang, Xiaohai Tian, Dekun Chen, Jun Zhang, Lu Lu, Yuxuan Wang, Haizhou Li, Zhizheng Wu
- Paper
-
【2024-06】-【AudioBench】-【A*STAR, Singapore】-【Type: Benchmark】
-
【2024-05】-【AIR-Bench】-【ZJU, Alibaba】-【Type: Benchmark】
- AIR-Bench: Benchmarking Large Audio-Language Models via Generative Comprehension
- Author(s): Qian Yang, Jin Xu, Wenrui Liu, Yunfei Chu, Ziyue Jiang, Xiaohuan Zhou, Yichong Leng, Yuanjun Lv, Zhou Zhao, Chang Zhou, Jingren Zhou
- Paper
-
【2023-09】-【Dynamic-SUPERB】-【NTU-Taiwan, etc.】-【Type: Benchmark】
- Dynamic-SUPERB: Towards A Dynamic, Collaborative, and Comprehensive Instruction-Tuning Benchmark for Speech
- Author(s): Chien-yu Huang, Ke-Han Lu, Shih-Heng Wang, Chi-Yuan Hsiao, Chun-Yi Kuan, Haibin Wu, Siddhant Arora, Kai-Wei Chang, Jiatong Shi, Yifan Peng, Roshan Sharma, Shinji Watanabe, Bhiksha Ramakrishnan, Shady Shehata, Hung-yi Lee
- Paper
-
【2024-09】-【EMOVA】-【HKUST】-【Type: Model】
- EMOVA: Empowering Language Models to See, Hear and Speak with Vivid Emotions
- Author(s): Kai Chen, Yunhao Gou, Runhui Huang, Zhili Liu, Daxin Tan, Jing Xu, Chunwei Wang, Yi Zhu, Yihan Zeng, Kuo Yang, Dingdong Wang, Kun Xiang, Haoyuan Li, Haoli Bai, Jianhua Han, Xiaohui Li, Weike Jin, Nian Xie, Yu Zhang, James T. Kwok, Hengshuang Zhao, Xiaodan Liang, Dit-Yan Yeung, Xiao Chen, Zhenguo Li, Wei Zhang, Qun Liu, Jun Yao, Lanqing Hong, Lu Hou, Hang Xu
- Paper / Demo
-
【2023-11】-【CoDi-2】-【UC Berkeley】-【Type: Model】
-
【2023-06】-【Macaw-LLM】-【Tencent】-【Type: Model】
- Macaw-LLM: Multi-Modal Language Modeling with Image, Video, Audio, and Text Integration
- Author(s): Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu
- Paper
-
【2024-11】-【WavChat-Survey】-【Zhejiang University】-【Type: Survey】
- WavChat: A Survey of Spoken Dialogue Models
- Author(s): Shengpeng Ji, Yifu Chen, Minghui Fang, Jialong Zuo, Jingyu Lu, Hanting Wang, Ziyue Jiang, Long Zhou, Shujie Liu, Xize Cheng, Xiaoda Yang, Zehan Wang, Qian Yang, Jian Li, Yidi Jiang, Jingzhen He, Yunfei Chu, Jin Xu, Zhou Zhao
- Paper
-
【2024-10】-【SpeechLLM-Survey】-【SJTU, AISpeech】-【Type: Survey】
- A Survey on Speech Large Language Models
- Author(s): Jing Peng, Yucheng Wang, Yu Xi, Xu Li, Xizhuo Zhang, Kai Yu
- Paper
-
【2024-10】-【SpeechLM-Survey】-【CUHK, Tencent】-【Type: Survey】
- Recent Advances in Speech Language Models: A Survey
- Author(s): Wenqian Cui, Dianzhi Yu, Xiaoqi Jiao, Ziqiao Meng, Guangyan Zhang, Qichao Wang, Yiwen Guo, Irwin King
- Paper
-
【2024-06】-【Audio Hallucination】-【NTU-Taiwan】-【Type: Research】
- Understanding Sounds, Missing the Questions: The Challenge of Object Hallucination in Large Audio-Language Models
- Author(s): Chun-Yi Kuan, Wei-Ping Huang, Hung-yi Lee
- Paper
-
【2024-05】-【VoiceJailbreak】-【CISPA】-【Type: Method】
- Voice Jailbreak Attacks Against GPT-4o
- Author(s): Xinyue Shen, Yixin Wu, Michael Backes, Yang Zhang
- Paper