AI-YinMei
AI YinMei - AI virtual anchor Vtuber
Stars: 529
AI-YinMei is an AI virtual anchor (Vtuber) development tool (NVIDIA GPU version). It supports fastgpt knowledge-base chat; a complete LLM stack of [fastgpt] + [one-api] + [Xinference]; bilibili live-room danmaku replies and entry greetings; Microsoft edge-tts, Bert-VITS2, and GPT-SoVITS speech synthesis; expression control via VTube Studio; stable-diffusion-webui painting output to the OBS live room; NSFW screening of painted images via public-NSFW-y-distinguish; duckduckgo search and image search (requires a proxy); Baidu image search (no proxy needed); an AI reply chat box (HTML plugin); AI singing via Auto-Convert-Music; a playlist (HTML plugin); dancing; expression video playback; head-pat and gift-throw actions; automatic backup dancing while singing; automatic swaying loops during chat and song; multi-scene switching, background-music switching, and automatic day/night scene changes; and open-ended singing and painting where the AI judges the content itself.
README:
Live-stream showcase:
Desktop pet cloud:
[😄 Dev docs | 💗 Video tutorials | 🚚 1.8 bundle tutorial | ✨ 1.7 bundle tutorial]
- AI virtual anchor (Vtuber) development (NVIDIA GPU version)
- AI name: YinMei (吟美)
- Developer: Winlone
- Bilibili channel: 程序猿的退休生活
- Source code: https://github.com/worm128/AI-YinMei
- AI YinMei tutorial collection: https://www.bilibili.com/read/cv33640951/
- QQ group: 27831318
- Version: 1.8.1
- YinMei bundle downloads:
  Bundle tutorial: https://www.bilibili.com/video/BV1zD421H76q
  Baidu Netdisk group: 930109408
  Note: because Baidu Netdisk share links keep getting blocked, distribution now goes through Baidu Netdisk group sharing. In Baidu Netdisk, open "Messages", add the group number, and once you have joined you can download from the group's file list.
  Feature bundles (4 packages): 人工智能 -> yinmei-all
  YinMei core (version iterations): 人工智能 -> 吟美核心
  YinMei dev docs: 人工智能 -> 吟美开发文档
- Legacy YinMei project (deprecated; it bundled too many third-party projects):
  https://github.com/worm128/AI-YinMei-backup
- Supports fastgpt knowledge-base chat
- Supports a complete LLM stack: [fastgpt] + [one-api] + [Xinference]
- Supports bilibili live-room danmaku replies and entry greetings
- Supports Microsoft edge-tts speech synthesis
- Supports Bert-VITS2 speech synthesis
- Supports GPT-SoVITS speech synthesis
- Supports expression control via VTube Studio
- Supports stable-diffusion-webui painting output to the OBS live room
- Supports NSFW screening of painted images via public-NSFW-y-distinguish
- Supports duckduckgo search and image search (requires a proxy)
- Supports Baidu image search (no proxy required)
- Supports an AI reply chat box (HTML plugin)
- Supports AI singing via Auto-Convert-Music
- Supports a playlist (HTML plugin)
- Supports dancing
- Supports expression video playback
- Supports a head-pat action
- Supports a gift-throw action
- Supports automatic backup dancing while singing
- Automatic swaying loops while chatting and singing
- Supports multi-scene switching, background-music switching, and automatic day/night scene changes
- Supports open-ended singing and painting; the AI judges the content itself
- Supports streaming chat, speeding up LLM replies and speech synthesis
- Integrates the bilibili open-platform danmaku API (high stability)
- Supports FunASR, Alibaba's speech-recognition system
- Adds trigger events for likes, gifts, welcome messages, and more
- AI YinMei desktop pet (follow "程序猿的退休生活" on Bilibili and reply 181 to get the download link)
YinMei live-room feature guide
1. Chat:
1.1 The AI has a configured name, personality, tone, and taunting ability, so she can trade banter with fans; records of long-time fans are stored, so she recognizes their behavior and roasts them more accurately.
1.2 Multiple personalities: YinMei has both a considerate maid persona and a fierce, sharp-tongued young-mistress persona, and switches between them on her own judgment of the scene.
2. Singing:
2.1 Type "唱歌 + song title" and YinMei learns and performs the song you named. You can also pose open-ended requests such as "吟美给我推荐一首最好听的动漫歌曲" ("YinMei, recommend me the best anime song") and let her pick a song to sing herself.
2.2 To skip a song, type "切歌"; the current song is skipped and the next one starts.
3. Painting:
3.1 Type "画画 + picture title" and YinMei paints in real time from your prompt.
3.2 You can also give open-ended requests such as "吟美给我画一幅最丑的小龟蛋" ("YinMei, paint me the ugliest little turtle egg") and let her generate the painting prompt herself.
4. Dancing:
4.1 Type "跳舞 + dance name". Available dances:
书记舞、科目三、女团舞、社会摇
呱呱舞、马保国、二次元、涩涩
蔡徐坤、江南 style、Chipi、吟美
Typing just "跳舞" picks a random dance.
4.2 Type "停止跳舞" to stop dancing.
5. Expressions:
Type "表情 + name"; "表情 + 随机" plays a random expression. Guess the expression names yourself, e.g. 哭 (cry), 笑 (laugh), 吐舌头 (tongue out).
6. Scene switching:
6.1 Type "切换 + scene name": 粉色房间、神社、海岸花坊、花房、清晨房间
6.2 The system switches the morning/evening scene automatically based on the time of day.
7. Outfits:
Type "换装 + outfit name": 便衣、爱的翅膀、青春猫娘、眼镜猫娘
8. Image search:
Type "搜图 + keyword"
9. News search:
Type "搜索 + keyword"
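A minimal sketch of how these danmaku prefixes could be routed; the handler bodies here are placeholders, and the project's actual dispatch logic lives in its func/ library (see the project layout below):

```python
# Illustrative only: routing the command prefixes listed above.
HANDLERS = {
    "唱歌": lambda arg: print(f"sing: {arg}"),
    "切歌": lambda arg: print("skip current song"),
    "画画": lambda arg: print(f"paint: {arg}"),
    "跳舞": lambda arg: print(f"dance: {arg or 'random'}"),
    "停止跳舞": lambda arg: print("stop dancing"),
    "表情": lambda arg: print(f"emote: {arg}"),
    "切换": lambda arg: print(f"switch scene: {arg}"),
    "换装": lambda arg: print(f"change outfit: {arg}"),
    "搜图": lambda arg: print(f"image search: {arg}"),
    "搜索": lambda arg: print(f"web search: {arg}"),
}

def dispatch(danmaku: str) -> bool:
    """Route a danmaku message to the first matching command prefix."""
    # Longer prefixes first, so 停止跳舞 wins over 跳舞.
    for prefix in sorted(HANDLERS, key=len, reverse=True):
        if danmaku.startswith(prefix):
            HANDLERS[prefix](danmaku[len(prefix):].strip())
            return True
    return False  # anything else falls through to normal chat

dispatch("跳舞科目三")
dispatch("唱歌 孤勇者")
```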
Smart assistance:
1. Playlist display
2. AI reply text-box display
3. AI action status hints
4. Smart recognition of singing and painting requests
5. Random swaying loops while talking and singing
6. Smart Japanese output driven by the rising mood value or the current chat keywords
7. Painting prompts enriched from Civitai (C站)
8. Smart judgment of whether singing or painting is called for
9. Keyword-driven scene switching
10. FunASR speech-recognition client
- Ai-YinMei: the YinMei core
- stable-diffusion-webui: painting module
- public-NSFW-y-distinguish: NSFW-screening module
- gpt-SoVITS: speech-synthesis module
- Auto-Convert-Music: singing module
- fastgpt + one-api + Xinference: chat module
- funasr-html-client: speech-recognition client
Downloads for all of the above are in the Baidu Netdisk group (930109408); see the bundle tutorial and group-sharing note earlier.
- mpv audio player: used for voice playback and music playback (a combined usage sketch follows this list)
  Baidu Netdisk: 人工智能 -> 软件 -> mpv.exe
  Note: the project needs two copies of the player in its root directory: mpv.exe (plays voice) and song.exe (plays music)
- Virtual sound card: routes the avatar's lip-sync audio
  Baidu Netdisk: 人工智能 -> 软件 -> 虚拟声卡 Virtual Audio Cable v4.10 (cracked version)
- ffmpeg: audio decoder used by speech synthesis (see the sketch after this list)
  Baidu Netdisk: 人工智能 -> 软件 -> ffmpeg
- MongoDB client: NoSQLBooster for MongoDB
  人工智能 -> 软件 -> nosqlbooster4mongo-8.1.7.exe
- docker-compose config for fastgpt
  人工智能 -> 软件 -> docker 知识库
- Python 3.11.6
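A minimal sketch, with illustrative file names, of how the two audio tools fit together: ffmpeg decodes or resamples a synthesized reply, then mpv plays it. The virtual-sound-card device ID is machine-specific and must be copied from mpv's own device list:

```python
import subprocess

# Decode/resample a synthesized reply with ffmpeg (file names are illustrative).
subprocess.run(
    ["ffmpeg", "-y", "-i", "output/reply.mp3", "-ar", "44100", "output/reply.wav"],
    check=True,
)

# Play it with mpv. To route audio into the virtual sound card for lip sync,
# list devices with `mpv --audio-device=help` and add, for example,
# "--audio-device=wasapi/{device-id}" to the argument list below.
subprocess.run(
    ["mpv", "--no-video", "--really-quiet", "output/reply.wav"],
    check=True,
)
```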
Note: for more detailed startup steps, see
🔥 the bundle documentation
🚚 the 1.8 bundle tutorial
fastgpt: https://github.com/labring/FastGPT
one-api: https://github.com/songquanpeng/one-api
Xinference: https://github.com/xorbitsai/inference
Startup: run with docker under Windows WSL; see step 23 of the tutorial document.
Tutorial video: https://www.bilibili.com/video/BV1SH4y1J7Wy/
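one-api fronts the models served by Xinference with an OpenAI-compatible API, and fastgpt talks to it the same way. A minimal sketch of a direct call, assuming one-api's default port 3000, a token created in its admin UI, and an example model name:

```python
import requests

resp = requests.post(
    "http://127.0.0.1:3000/v1/chat/completions",  # one-api's OpenAI-compatible route
    headers={"Authorization": "Bearer sk-your-one-api-token"},  # token from one-api's UI
    json={
        "model": "chatglm3",  # example: whatever model you registered via Xinference
        "messages": [{"role": "user", "content": "你好,吟美"}],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```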
Project GitHub: https://github.com/oobabooga/text-generation-webui
```powershell
# Enter the Python virtual environment
& <drive>:\<venv-path>\Scripts\Activate.ps1
# Install the Python packages
pip install -r requirements.txt
# Start text-generation-webui; start.bat is a custom Windows launch script
./start.bat
```
Windows .bat launch command:
```bat
python server.py --trust-remote-code --listen-host 0.0.0.0 --listen-port 7866 --listen --api --api-port 5000 --model chatglm2-6b --load-in-8bit --bf16
```
API access: http://127.0.0.1:5000/
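A minimal sketch of calling that API from Python. The API surface has changed across text-generation-webui versions; this assumes the legacy blocking endpoint that matched the flags above (newer builds expose an OpenAI-compatible /v1 route instead):

```python
import requests

# Legacy blocking API of older text-generation-webui builds.
resp = requests.post(
    "http://127.0.0.1:5000/api/v1/generate",
    json={"prompt": "你好,请自我介绍", "max_new_tokens": 200},
    timeout=120,
)
print(resp.json()["results"][0]["text"])
```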
Project: https://github.com/fishaudio/Bert-VITS2
Startup: run start.bat inside Bert-VITS2-clap-novq-ui.
Custom page: hiyoriUI.py adds a mixed Chinese/English/Japanese synthesis method; copy it into the corresponding project, compatibility not guaranteed.
Effect: voice interaction between the AI and users, including chat, painting prompts, singing prompts, dancing prompts, and more.
Baidu Netdisk group: 930109408 (see the group-sharing note above).
Double-click start.bat to run.
edge-tts needs no separately installed speech-synthesis service.
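edge-tts is a pip-installable client for Microsoft's hosted voices, so nothing runs locally. A minimal sketch (the voice name is an example; list the available ones with `edge-tts --list-voices`):

```python
import asyncio
import edge_tts  # pip install edge-tts

async def speak(text: str, out_path: str = "output/reply.mp3"):
    # zh-CN-XiaoyiNeural is one of the stock Chinese voices.
    communicate = edge_tts.Communicate(text, voice="zh-CN-XiaoyiNeural")
    await communicate.save(out_path)

asyncio.run(speak("大家好,我是吟美"))
```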
stable-diffusion-webui project
Project: https://github.com/AUTOMATIC1111/stable-diffusion-webui
Effect: typing "画画 xxx" triggers the AI to paint with stable-diffusion.
Baidu Netdisk group: 930109408 (see the group-sharing note above).
Double-click start.bat to run.
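When the webui is started with its --api flag, it exposes a REST endpoint the painting feature can build on. A minimal sketch against the default port 7860 (prompt, steps, and size are examples):

```python
import base64
import requests

resp = requests.post(
    "http://127.0.0.1:7860/sdapi/v1/txt2img",  # requires webui started with --api
    json={"prompt": "1girl, shrine, cherry blossoms",
          "steps": 20, "width": 512, "height": 512},
    timeout=300,
)
image_b64 = resp.json()["images"][0]   # images come back base64-encoded
with open("painting.png", "wb") as f:  # the project itself stores paintings under porn/
    f.write(base64.b64decode(image_b64))
```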
public-NSFW-y-distinguish project
Project: https://github.com/fd-freedom/public-NSFW-y-distinguish
Baidu Netdisk group: 930109408 (see the group-sharing note above).
Double-click start.bat to run.
Auto-Convert-Music project
Original developers: 木白 Mu_Bai、宫园薰ヾ(≧∪≦*)ノ〃
Project: https://github.com/MuBai-He/Auto-Convert-Music
Startup: run start.bat inside Auto-Convert-Music.
Effect: typing "唱歌 song title" triggers the AI to learn the song from the library and sing it.
Avatar startup: install Steam, then install VTube Studio.
Download the Steam platform yourself; it carries VTube Studio, the software that runs the Live2D Vtuber avatar.
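For expression control, VTube Studio exposes a public WebSocket API (default ws://localhost:8001). A minimal sketch of triggering an expression hotkey; the plugin name, token, and hotkey ID are placeholders, and the token must first be issued via an AuthenticationTokenRequest handshake:

```python
import asyncio
import json
import websockets  # pip install websockets

async def trigger_hotkey():
    async with websockets.connect("ws://localhost:8001") as ws:
        # Authenticate with a token previously issued to this plugin.
        await ws.send(json.dumps({
            "apiName": "VTubeStudioPublicAPI", "apiVersion": "1.0",
            "requestID": "auth", "messageType": "AuthenticationRequest",
            "data": {"pluginName": "AI-YinMei-demo",       # hypothetical plugin name
                     "pluginDeveloper": "demo",
                     "authenticationToken": "YOUR_TOKEN"}, # placeholder
        }))
        print(await ws.recv())
        # Trigger an expression hotkey configured inside VTube Studio.
        await ws.send(json.dumps({
            "apiName": "VTubeStudioPublicAPI", "apiVersion": "1.0",
            "requestID": "hotkey", "messageType": "HotkeyTriggerRequest",
            "data": {"hotkeyID": "MyExpressionHotkey"},    # hypothetical hotkey ID
        }))
        print(await ws.recv())

asyncio.run(trigger_hotkey())
```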
Effect: the source of the AI anchor's voice output.
Baidu Netdisk group: 930109408; join the group to download the software.
Download the banana edition of Voicemeeter (note: your motherboard's sound-card driver must be installed, otherwise the virtual audio channels may fail).
Load the project file ai-yinmei\html\chatui.html into an OBS browser source.
Effect: the AI's reply text is shown in the reply plugin.
Load the project file ai-yinmei\html\songlist.html into an OBS browser source.
Effect: the songs users have requested are shown as a list:
'user xxx' requested 《song title》 [now playing]
'user xxx2' requested 《song title》
Load the project file ai-yinmei\html\time.html into an OBS browser source.
Bundle documentation
Dance-video folder (subfolders supported): dance_path = 'J:\ai\跳舞视频\横屏'
Effect: typing 跳舞 immediately plays a randomly picked dance video; typing 停止跳舞 stops it at once.
Expression-video folder (subfolders supported): emote_path = 'H:\人工智能\ai\跳舞视频\表情'
Effect: typing 表情随机 or an expression name plays the expression video immediately; 表情随机 picks one at random.
Expression-name display folder (subfolders supported): emote_font = 'H:\人工智能\ai\跳舞视频\表情\表情符号'
Effect: the expression names are shown in an OBS text control, so viewers know which expression names they can type.
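These paths are read from config.yml. A minimal sketch of picking a random dance clip from the configured folder, subfolders included; it assumes the key sits at the top level of config.yml and that clips are .mp4 files:

```python
import random
from pathlib import Path
import yaml  # pip install pyyaml

with open("config.yml", encoding="utf-8") as f:
    cfg = yaml.safe_load(f)

dance_dir = Path(cfg["dance_path"])  # e.g. J:\ai\跳舞视频\横屏

# rglob searches subfolders too, matching the "subfolders supported" note.
clips = list(dance_dir.rglob("*.mp4"))
if clips:
    print("playing:", random.choice(clips))
```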
YinMei's customized funasr plugin: ./funasr/index.html
Server side: configure it according to Alibaba FunASR; a container install is recommended, see the server deployment docs.
Server startup:
```shell
docker run -p 10095:10095 --name funasr -it --privileged=true -v /j/ai/ai-code/funasr/models:/workspace/models registry.cn-hangzhou.aliyuncs.com/funasr_repo/funasr:funasr-runtime-sdk-online-cpu-0.1.9
```
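Once the container is up, clients stream audio to it over a websocket. A minimal client-side sketch, assuming the message shapes of FunASR's published example clients (they can differ between SDK versions):

```python
import asyncio
import json
import ssl
import websockets  # pip install websockets

async def recognize(pcm_path: str):
    # The runtime image usually serves TLS with a self-signed cert, hence
    # wss:// and a no-verify context.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    async with websockets.connect("wss://127.0.0.1:10095", ssl=ctx) as ws:
        await ws.send(json.dumps({
            "mode": "online",            # streaming recognition
            "chunk_size": [5, 10, 5],
            "chunk_interval": 10,
            "wav_name": "demo",
            "is_speaking": True,
        }))
        audio = open(pcm_path, "rb").read()   # 16 kHz, 16-bit, mono PCM
        for i in range(0, len(audio), 3200):  # ~100 ms per chunk
            await ws.send(audio[i:i + 3200])
        await ws.send(json.dumps({"is_speaking": False}))
        print(await ws.recv())                # JSON carrying the recognized text

asyncio.run(recognize("demo.pcm"))
```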
Project layout:
- func: YinMei's feature library; all feature source code lives here
- runtime: Python runtime libraries shipped only with the bundle
- html: HTML plugins, including the playlist, the streaming reply box, the colored reply box, the feature-guide box, and more
- background: background images; add them in OBS yourself
- porn: NSFW-screened images, paintings, and image-search results
- output: staging directory for synthesized speech, plus saved songs and accompaniments
- logs: log output directory
- config: OBS and fastgpt configs, for reference
- api.py: the main file that starts the API service
- config.yml: all configuration lives here
- mpv.exe: voice-chat player; set its output device to the second Voicemeeter virtual channel
Voicemeeter virtual sound card official site:
Download the banana edition (note: your motherboard's sound-card driver must be installed, otherwise the virtual audio channels may fail).
- Singing voice conversion: Auto-Convert-Music, by 木白 Mu_Bai、宫园薰ヾ(≧∪≦*)ノ〃
  Project: https://github.com/MuBai-He/Auto-Convert-Music
- GPT-SoVITS: TTS speech synthesis by 花儿不哭
  https://github.com/RVC-Boss/GPT-SoVITS
- Bert-VITS2: TTS speech synthesis with very fast generation
  https://github.com/fishaudio/Bert-VITS2
- Knowledge base: fastgpt
  Project: https://github.com/labring/FastGPT
- LLM framework: one-api + Xinference
  Project: https://github.com/songquanpeng/one-api
  Project: https://github.com/xorbitsai/inference
- LLM model: ChatGLM
  https://github.com/THUDM/ChatGLM2-6B
- Aggregated LLM serving: text-generation-webui
  https://github.com/oobabooga/text-generation-webui
- AI Vtuber model: Bilibili's 领航员未鸟
  https://github.com/AliceNavigator/AI-Vtuber-chatglm
- AI training: LLaMA-Factory
  https://github.com/hiyouga/LLaMA-Factory
- MPV player: MPV
  https://github.com/mpv-player/mpv
- Speech recognition: FunASR
  https://github.com/alibaba-damo-academy/FunASR/
- Others:
  LoRA training: https://github.com/yuanzhoulvpi2017/zero_nlp
  ChatGLM training: https://github.com/hiyouga/ChatGLM-Efficient-Tuning
  SillyTavern: https://github.com/SillyTavern/SillyTavern
  Chinese LoRA training: https://github.com/super-wuliao/LoRA-ChatGLM-Chinese-Alpaca
  Dataset / training corpus: https://github.com/codemayq/chinese-chatbot-corpus
- Discussion QQ group: 27831318
- My QQ (custom development): 314769095