
Feishu-MCP
A Model Context Protocol (MCP) server that gives Cursor, Windsurf, Cline, and other AI-powered coding tools the ability to access, edit, and process Feishu documents in a structured way.
Stars: 159

Feishu-MCP is a Model Context Protocol (MCP) server that gives Cursor, Windsurf, Cline, and other AI-driven coding tools the ability to access, edit, and process Feishu documents in a structured way. By exposing the structured content of Feishu documents directly to AI coding tools, it makes document handling noticeably smarter and faster. It covers the typical Feishu document workflow end to end: retrieving folder contents, reading and understanding document content, creating and editing documents, and searching across them, improving both the efficiency and the experience of working with Feishu documents day to day.
README:
Provides Cursor, Windsurf, Cline, and other AI-powered coding tools with the ability to access, edit, and process Feishu documents in a structured way, implemented as a Model Context Protocol server.
This project lets AI coding tools fetch and understand the structured content of Feishu documents directly, significantly improving how intelligently and efficiently documents are handled.
It covers the full real-world Feishu document workflow and helps you make efficient use of your documents:
- Folder listing: quickly fetch and browse every document under a Feishu folder, making it easy to manage and locate files.
- Content retrieval and understanding: read content in structured, block-level, and rich-text form so the AI can accurately understand document context.
- Smart creation and editing: automatically create new documents and generate or edit content in bulk, covering a wide range of writing needs.
- Efficient search: built-in keyword search helps you quickly find the information you need across large numbers of documents.
With this project you can intelligently fetch, edit, and search Feishu documents as part of your everyday workflow, improving both efficiency and experience.
The following video shows MCP in action and walks through the typical workflow:


⭐ Star this project to be the first to hear about new features and important updates! Watching the project means you won't miss any new capabilities, fixes, or optimizations, and your support helps us keep improving it. ⭐
Category | Tool name | Description | Use case | Status |
---|---|---|---|---|
Document management | create_feishu_document | Create a new Feishu document | Build a document from scratch | ✅ Done |
 | get_feishu_document_info | Get basic document information | Verify that a document exists and is accessible | ✅ Done |
 | get_feishu_document_blocks | Get the document block structure | Understand the document hierarchy | ✅ Done |
Content editing | batch_create_feishu_blocks | Create multiple blocks in one batch | Efficiently create continuous content | ✅ Done |
 | update_feishu_block_text | Update block text content | Modify existing content | ✅ Done |
 | delete_feishu_document_blocks | Delete document blocks | Clean up and restructure document content | ✅ Done |
Folder management | get_feishu_folder_files | List the files in a folder | Browse folder contents | ✅ Done |
 | create_feishu_folder | Create a new folder | Organize the document structure | ✅ Done |
Search | search_feishu_documents | Search documents | Find specific content | ✅ Done |
Utilities | convert_feishu_wiki_to_document_id | Wiki link conversion | Convert wiki links into document IDs | ✅ Done |
 | get_feishu_image_resource | Get image resources | Download images embedded in documents | ✅ Done |
 | get_feishu_whiteboard_content | Get whiteboard content | Retrieve the shapes and structure of a whiteboard (flowcharts, mind maps, etc.) | ✅ Done |
Advanced | create_feishu_table | Create and edit tables | Present structured data | ✅ Done |
 | Flowchart insertion | Supports flowcharts and mind maps | Map out and visualize processes | ✅ Done |
Image insertion | upload_and_bind_image_to_block | Insert local and remote images | Modify document content | ✅ Done |
Formula support | | Supports mathematical formulas | Academic and technical documents | ✅ Done |
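For reference, below is a minimal sketch of how an MCP client could drive these tools programmatically, using the TypeScript SDK (@modelcontextprotocol/sdk). The tool names come from the table above; the arguments passed to create_feishu_document are assumptions for illustration only — consult each tool's input schema (returned by listTools) for the real parameter names.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Spawn the feishu-mcp server over stdio and talk to it as an MCP client.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "feishu-mcp", "--stdio"],
  env: {
    FEISHU_APP_ID: "cli_xxxxx",
    FEISHU_APP_SECRET: "xxxxx",
  },
});

const client = new Client({ name: "feishu-mcp-demo", version: "0.0.1" });
await client.connect(transport);

// List the tools the server exposes (names as in the table above).
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

// Call one tool. The argument names below are illustrative assumptions;
// check the tool's input schema from listTools for the real parameters.
const result = await client.callTool({
  name: "create_feishu_document",
  arguments: { title: "Weekly report", folderToken: "xxxxxxxxxxxxxxxxxxxxxx" },
});
console.log(result);

await client.close();
```

In everyday use, Cursor or another MCP-aware editor plays the client role; a snippet like this is only useful for scripting against the server or debugging it directly.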
- Text styles: bold, italic, underline, strikethrough, inline code
- Text colors: gray, brown, orange, yellow, green, blue, purple
- Alignment: left, center, right
- Heading levels: levels 1-9
- Code blocks: syntax highlighting for many programming languages
- Lists: ordered (numbered) and unordered (bulleted)
- Images: local images and images from the web
- Formulas: insert mathematical formulas into text blocks using LaTeX syntax
- Mermaid diagrams: flowcharts, sequence diagrams, mind maps, class diagrams, pie charts, and more
- Tables: create tables with multiple rows and columns; cells can contain text, headings, lists, code blocks, and other content types
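To make the supported content concrete, here is a rough sketch of a block list that could be handed to batch_create_feishu_blocks. The field names (blockType, options, textStyles, etc.) are illustrative assumptions, not the tool's authoritative schema — the real schema is whatever the tool reports via the MCP tool listing:

```typescript
// Illustrative only: field names are assumptions, not the tool's actual schema.
const blocks = [
  { blockType: "heading", options: { heading: { level: 2, content: "Release notes" } } },
  {
    blockType: "text",
    options: {
      text: {
        textStyles: [
          { text: "Deployed ", style: {} },                        // plain text
          { text: "v0.1.2", style: { bold: true, inline_code: true } }, // bold inline code
          { text: " to production.", style: {} },
        ],
      },
    },
  },
  { blockType: "code", options: { code: { language: "typescript", content: "console.log('hi');" } } },
  { blockType: "list", options: { list: { isOrdered: true, content: "Verify rollout" } } },
];
```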
- Streamlined toolset: 21 tools → 13 tools, removing redundancy and focusing on core functionality (0.0.15 ✅)
- Leaner descriptions: 7000+ tokens → 3000+ tokens, simplifying prompts and saving request tokens (0.0.15 ✅)
- Batch enhancements: added batch updates and batch image uploads, making a single operation 50% more efficient (0.0.15 ✅)
- Workflow optimization: fewer multi-step calls, so complex tasks can be completed in one go
- Multiple credential types: supports both tenant_access_token and user_access_token to cover different authentication scenarios (requires changes to the Feishu app configuration) (0.0.16 ✅)
- Cursor user login: easier user authentication on the Cursor platform
- Mermaid diagrams: flowcharts, sequence diagrams, and more to enrich document content (0.1.11 ✅)
- Table creation: build complex tables containing various block types, with style control (0.1.2 ✅)
Instructions for creating a Feishu app and obtaining app credentials can be found in the official tutorial.
Detailed Feishu app configuration steps: for a step-by-step guide to registering a Feishu app, configuring permissions, and granting document access, see the walkthrough in FEISHU_CONFIG.md.

    npx feishu-mcp@latest --feishu-app-id=<your-feishu-app-id> --feishu-app-secret=<your-feishu-app-secret>

Published on the Smithery platform: https://smithery.ai/server/@cso1z/feishu-mcp
This project uses a main branch (main) plus feature branches (feature/xxx) workflow:
-
main
The stable mainline branch, always kept in a usable, deployable state. All verified and officially released features are merged into main.
-
multi-user-token
Development branch for multi-user isolation and per-user Feishu token retrieval. It supports a userKey parameter, per-user token fetching and caching, a custom token service, and other advanced features, and is intended for developing and testing scenarios that need multi-user isolation and authorization.
⚠️ This branch is a beta release, and its features may lag behind main. If you need it, leave a comment in the issues and I will prioritize syncing the latest features into this branch.
1. Clone the repository

       git clone https://github.com/cso1z/Feishu-MCP.git
       cd Feishu-MCP

2. Install dependencies

       pnpm install

3. Configure environment variables (copy .env.example and save it as .env)

   macOS/Linux:

       cp .env.example .env

   Windows:

       copy .env.example .env

4. Edit the .env file: open the .env file in the project root with any text editor and fill in your Feishu app credentials:

       FEISHU_APP_ID=cli_xxxxx
       FEISHU_APP_SECRET=xxxxx
       PORT=3333

5. Run the server

       pnpm run dev
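Once `pnpm run dev` is up, you can smoke-test the server from a standalone MCP client over the SSE endpoint (default http://localhost:3333/sse, the same URL used in the editor configuration further below). A minimal sketch with the TypeScript SDK, assuming the default port:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { SSEClientTransport } from "@modelcontextprotocol/sdk/client/sse.js";

// Connect to the locally running dev server over its SSE endpoint.
const transport = new SSEClientTransport(new URL("http://localhost:3333/sse"));
const client = new Client({ name: "feishu-mcp-smoke-test", version: "0.0.1" });

await client.connect(transport);
const { tools } = await client.listTools();
console.log(`Server exposes ${tools.length} tools:`, tools.map((t) => t.name));
await client.close();
```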
Variable | Required | Description | Default |
---|---|---|---|
FEISHU_APP_ID | ✅ | Feishu app ID | - |
FEISHU_APP_SECRET | ✅ | Feishu app secret | - |
PORT | ❌ | Server port | 3333 |
FEISHU_AUTH_TYPE | ❌ | Credential type. user (user-level, requires OAuth authorization) is recommended when running locally; use tenant (app-level, the default) in cloud/production environments | tenant |
FEISHU_TOKEN_ENDPOINT | ❌ | Endpoint for fetching tokens; only needed when you manage tokens yourself | http://localhost:3333/getToken |
Note:
- The user credential type is only supported when the service runs locally; otherwise you must configure FEISHU_TOKEN_ENDPOINT and implement token retrieval and management yourself (see callbackService and feishuAuthService for reference).
- FEISHU_TOKEN_ENDPOINT request parameters: client_id, client_secret, token_type (optional, tenant/user). Response fields: access_token, needAuth, url (when authorization is required), expires_in (in seconds).
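If you do need a custom FEISHU_TOKEN_ENDPOINT, the handler below is a minimal sketch of the contract described above, assuming an Express server and the standard Feishu tenant_access_token endpoint. The route path, error handling, and OAuth URL are illustrative assumptions, not part of this project:

```typescript
// Hypothetical stand-in for FEISHU_TOKEN_ENDPOINT: accepts client_id/client_secret/token_type
// and returns { access_token, needAuth, url?, expires_in } as described above.
import express from "express";

const app = express();
app.use(express.json());

app.post("/getToken", async (req, res) => {
  const { client_id, client_secret, token_type = "tenant" } = req.body;

  if (token_type === "user") {
    // User-level tokens require an OAuth flow; tell the caller to authorize first.
    res.json({ needAuth: true, url: "https://example.com/your-oauth-authorize-url" }); // placeholder URL
    return;
  }

  // Tenant-level token: exchange app credentials at the Feishu open API.
  const resp = await fetch(
    "https://open.feishu.cn/open-apis/auth/v3/tenant_access_token/internal",
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ app_id: client_id, app_secret: client_secret }),
    }
  );
  const data = await resp.json();
  res.json({
    access_token: data.tenant_access_token,
    needAuth: false,
    expires_in: data.expire, // Feishu returns the token TTL in seconds as `expire`
  });
});

app.listen(3333);
```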
Parameter | Description | Default |
---|---|---|
--port | Server listening port | 3333 |
--log-level | Log level (debug/info/log/warn/error/none) | info |
--feishu-app-id | Feishu app ID | - |
--feishu-app-secret | Feishu app secret | - |
--feishu-base-url | Feishu API base URL | https://open.feishu.cn/open-apis |
--cache-enabled | Whether caching is enabled | true |
--cache-ttl | Cache time-to-live (seconds) | 3600 |
--stdio | Run in command (stdio) mode | - |
--help | Show the help menu | - |
--version | Show the version number | - |
{
"mcpServers": {
"feishu-mcp": {
"command": "npx",
"args": ["-y", "feishu-mcp", "--stdio"],
"env": {
"FEISHU_APP_ID": "<你的飞书应用ID>",
"FEISHU_APP_SECRET": "<你的飞书应用密钥>"
}
},
"feishu_local": {
"url": "http://localhost:3333/sse"
}
}
}
- When creating a new document, it is recommended to provide a Feishu folder token (either a specific folder or the root folder) so documents can be located and managed more efficiently. If you are not sure which subfolder to use, you can let the LLM automatically find the most suitable subdirectory under the folder you specify.
  How to get a folder token? Open the Feishu folder page and copy the link (e.g. https://.../drive/folder/xxxxxxxxxxxxxxxxxxxxxx); the token is the string at the end of the link (e.g. xxxxxxxxxxxxxxxxxxxxxx — never share a real token).
- When running the MCP server locally, image paths can be either local absolute paths or http/https URLs; in a server environment, only image URLs are supported (because of Cursor's parameter-length limit when calling MCP tools, uploading the raw image file itself is not supported yet — pass an image path or URL instead).
- Text blocks can mix plain text and formula elements. Formulas use LaTeX syntax, e.g. 1+2=3, \frac{a}{b}, \sqrt{x}. A single text block can contain multiple formulas alongside plain text; see the sketch below.
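As an illustration of mixing text and formulas in one block, the shape below is an assumption for readability only — the field names (blockType, textStyles, isEquation) are not the tool's real schema:

```typescript
// Illustrative only: a text block whose elements alternate between plain text and LaTeX formulas.
const formulaBlock = {
  blockType: "text",
  options: {
    text: {
      textStyles: [
        { text: "The fraction ", style: {} },
        { text: "\\frac{a}{b}", style: { isEquation: true } },  // rendered as a formula
        { text: " and the root ", style: {} },
        { text: "\\sqrt{x}", style: { isEquation: true } },     // another formula element
        { text: " sit inline with normal text.", style: {} },
      ],
    },
  },
};
```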
First check your configuration against the step-by-step guide in FEISHU_CONFIG.md.
- Check app permissions: make sure the app has been granted the required document access permissions
- Verify document authorization: confirm that the target document has been shared with the app or with a group the app belongs to
- Check the availability scope: make sure the released version's availability scope includes the document owner
- Get a token: obtain an app_access_token for a self-built app
- Use the token from step 1 to verify that you can access the document: fetch the document's basic information
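Those last two checks can be scripted. Below is a minimal sketch using Node's built-in fetch, assuming a self-built app and the standard Feishu open API endpoints (auth/v3/app_access_token/internal and docx/v1/documents/{document_id}); replace the placeholder credentials and document ID with your own:

```typescript
// Step 1: obtain an app_access_token for a self-built app.
const tokenResp = await fetch(
  "https://open.feishu.cn/open-apis/auth/v3/app_access_token/internal",
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ app_id: "cli_xxxxx", app_secret: "xxxxx" }),
  }
);
const { app_access_token } = await tokenResp.json();

// Step 2: use that token to fetch basic document info and confirm access.
const documentId = "doxcnXXXXXXXXXXXXXXXX"; // hypothetical document ID
const docResp = await fetch(
  `https://open.feishu.cn/open-apis/docx/v1/documents/${documentId}`,
  { headers: { Authorization: `Bearer ${app_access_token}` } }
);
console.log(docResp.status, await docResp.json()); // code 0 means the app can read the document
```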
If this project has helped you, please consider:
- ⭐ Starring the project
- 🐛 Reporting bugs and issues
- 💡 Suggesting new features
- 📖 Improving the documentation
- 🔀 Submitting pull requests
Your support keeps us moving forward!
Similar Open Source Tools

Feishu-MCP
Feishu-MCP is a server that provides access, editing, and structured processing capabilities for Feishu documents for Cursor, Windsurf, Cline, and other AI-driven coding tools, based on the Model Context Protocol server. This project enables AI coding tools to directly access and understand the structured content of Feishu documents, significantly improving the intelligence and efficiency of document processing. It covers the real usage process of Feishu documents, allowing efficient utilization of document resources, including folder directory retrieval, content retrieval and understanding, smart creation and editing, efficient search and retrieval, and more. It enhances the intelligent access, editing, and searching of Feishu documents in daily usage, improving content processing efficiency and experience.

prisma-ai
Prisma-AI is an open-source tool designed to assist users in their job search process by addressing common challenges such as lack of project highlights, mismatched resumes, difficulty in learning, and lack of answers in interview experiences. The tool utilizes AI to analyze user experiences, generate actionable project highlights, customize resumes for specific job positions, provide study materials for efficient learning, and offer structured interview answers. It also features a user-friendly interface for easy deployment and supports continuous improvement through user feedback and collaboration.

Speech-AI-Forge
Speech-AI-Forge is a project developed around TTS generation models, implementing an API Server and a WebUI based on Gradio. The project offers various ways to experience and deploy Speech-AI-Forge, including online experience on HuggingFace Spaces, one-click launch on Colab, container deployment with Docker, and local deployment. The WebUI features include TTS model functionality, speaker switch for changing voices, style control, long text support with automatic text segmentation, refiner for ChatTTS native text refinement, various tools for voice control and enhancement, support for multiple TTS models, SSML synthesis control, podcast creation tools, voice creation, voice testing, ASR tools, and post-processing tools. The API Server can be launched separately for higher API throughput. The project roadmap includes support for various TTS models, ASR models, voice clone models, and enhancer models. Model downloads can be manually initiated using provided scripts. The project aims to provide inference services and may include training-related functionalities in the future.

BlueLM
BlueLM is a large-scale pre-trained language model developed by vivo AI Global Research Institute, featuring 7B base and chat models. It includes high-quality training data with a token scale of 26 trillion, supporting both Chinese and English languages. BlueLM-7B-Chat excels in C-Eval and CMMLU evaluations, providing strong competition among open-source models of similar size. The models support 32K long texts for better context understanding while maintaining base capabilities. BlueLM welcomes developers for academic research and commercial applications.

UltraRAG
The UltraRAG framework is a researcher and developer-friendly RAG system solution that simplifies the process from data construction to model fine-tuning in domain adaptation. It introduces an automated knowledge adaptation technology system, supporting no-code programming, one-click synthesis and fine-tuning, multidimensional evaluation, and research-friendly exploration work integration. The architecture consists of Frontend, Service, and Backend components, offering flexibility in customization and optimization. Performance evaluation in the legal field shows improved results compared to VanillaRAG, with specific metrics provided. The repository is licensed under Apache-2.0 and encourages citation for support.

bailing
Bailing is an open-source voice assistant designed for natural conversations with users. It combines Automatic Speech Recognition (ASR), Voice Activity Detection (VAD), Large Language Model (LLM), and Text-to-Speech (TTS) technologies to provide a high-quality voice interaction experience similar to GPT-4o. Bailing aims to achieve GPT-4o-like conversation effects without the need for GPU, making it suitable for various edge devices and low-resource environments. The project features efficient open-source models, modular design allowing for module replacement and upgrades, support for memory function, tool integration for information retrieval and task execution via voice commands, and efficient task management with progress tracking and reminders.

AI-Guide-and-Demos-zh_CN
This is a Chinese AI/LLM introductory project that aims to help students overcome the initial difficulties of accessing foreign large models' APIs. The project uses the OpenAI SDK to provide a more compatible learning experience. It covers topics such as AI video summarization, LLM fine-tuning, and AI image generation. The project also offers a CodePlayground for easy setup and one-line script execution to experience the charm of AI. It includes guides on API usage, LLM configuration, building AI applications with Gradio, customizing prompts for better model performance, understanding LoRA, and more.

Awesome-ChatTTS
Awesome-ChatTTS is an official recommended guide for ChatTTS beginners, compiling common questions and related resources. It provides a comprehensive overview of the project, including official introduction, quick experience options, popular branches, parameter explanations, voice seed details, installation guides, FAQs, and error troubleshooting. The repository also includes video tutorials, discussion community links, and project trends analysis. Users can explore various branches for different functionalities and enhancements related to ChatTTS.

XianyuAutoAgent
Xianyu AutoAgent is an AI customer service robot system specifically designed for the Xianyu platform, providing 24/7 automated customer service, supporting multi-expert collaborative decision-making, intelligent bargaining, and context-aware conversations. The system includes intelligent conversation engine with features like context awareness and expert routing, business function matrix with modules like core engine, bargaining system, technical support, and operation monitoring. It requires Python 3.8+ and NodeJS 18+ for installation and operation. Users can customize prompts for different experts and contribute to the project through issues or pull requests.

devops-gpt
DevOpsGPT is a revolutionary tool designed to streamline your workflow and empower you to build systems and automate tasks with ease. Tired of spending hours on repetitive DevOps tasks? DevOpsGPT is here to help! Whether you're setting up infrastructure, speeding up deployments, or tackling any other DevOps challenge, our app can make your life easier and more productive. With DevOpsGPT, you can expect faster task completion, simplified workflows, and increased efficiency. Ready to experience the DevOpsGPT difference? Visit our website, sign in or create an account, start exploring the features, and share your feedback to help us improve. DevOpsGPT will become an essential tool in your DevOps toolkit.

ChuanhuChatGPT
Chuanhu Chat is a user-friendly web graphical interface that provides various additional features for ChatGPT and other language models. It supports GPT-4, file-based question answering, local deployment of language models, online search, agent assistant, and fine-tuning. The tool offers a range of functionalities including auto-solving questions, online searching with network support, knowledge base for quick reading, local deployment of language models, GPT 3.5 fine-tuning, and custom model integration. It also features system prompts for effective role-playing, basic conversation capabilities with options to regenerate or delete dialogues, conversation history management with auto-saving and search functionalities, and a visually appealing user experience with themes, dark mode, LaTeX rendering, and PWA application support.

AIClient-2-API
AIClient-2-API is a versatile and lightweight API proxy designed for developers, providing ample free API request quotas and comprehensive support for various mainstream large models like Gemini, Qwen Code, Claude, etc. It converts multiple backend APIs into standard OpenAI format interfaces through a Node.js HTTP server. The project adopts a modern modular architecture, supports strategy and adapter patterns, comes with complete test coverage and health check mechanisms, and is ready to use after 'npm install'. By easily switching model service providers in the configuration file, any OpenAI-compatible client or application can seamlessly access different large model capabilities through the same API address, eliminating the hassle of maintaining multiple sets of configurations for different services and dealing with incompatible interfaces.

k8m
k8m is an AI-driven Mini Kubernetes AI Dashboard lightweight console tool designed to simplify cluster management. It is built on AMIS and uses 'kom' as the Kubernetes API client. k8m has built-in Qwen2.5-Coder-7B model interaction capabilities and supports integration with your own private large models. Its key features include miniaturized design for easy deployment, user-friendly interface for intuitive operation, efficient performance with backend in Golang and frontend based on Baidu AMIS, pod file management for browsing, editing, uploading, downloading, and deleting files, pod runtime management for real-time log viewing, log downloading, and executing shell commands within pods, CRD management for automatic discovery and management of CRD resources, and intelligent translation and diagnosis based on ChatGPT for YAML property translation, Describe information interpretation, AI log diagnosis, and command recommendations, providing intelligent support for managing k8s. It is cross-platform compatible with Linux, macOS, and Windows, supporting multiple architectures like x86 and ARM for seamless operation. k8m's design philosophy is 'AI-driven, lightweight and efficient, simplifying complexity,' helping developers and operators quickly get started and easily manage Kubernetes clusters.

ChatTTS-Forge
ChatTTS-Forge is a powerful text-to-speech generation tool that supports generating rich audio long texts using a SSML-like syntax and provides comprehensive API services, suitable for various scenarios. It offers features such as batch generation, support for generating super long texts, style prompt injection, full API services, user-friendly debugging GUI, OpenAI-style API, Google-style API, support for SSML-like syntax, speaker management, style management, independent refine API, text normalization optimized for ChatTTS, and automatic detection and processing of markdown format text. The tool can be experienced and deployed online through HuggingFace Spaces, launched with one click on Colab, deployed using containers, or locally deployed after cloning the project, preparing models, and installing necessary dependencies.

DeepAI
DeepAI is a proxy server that enhances the interaction experience of large language models (LLMs) by integrating the 'thinking chain' process. It acts as an intermediary layer, receiving standard OpenAI API compatible requests, using independent 'thinking services' to generate reasoning processes, and then forwarding the enhanced requests to the LLM backend of your choice. This ensures that responses are not only generated by the LLM but also based on pre-inference analysis, resulting in more insightful and coherent answers. DeepAI supports seamless integration with applications designed for the OpenAI API, providing endpoints for '/v1/chat/completions' and '/v1/models', making it easy to integrate into existing applications. It offers features such as reasoning chain enhancement, flexible backend support, API key routing, weighted random selection, proxy support, comprehensive logging, and graceful shutdown.

nndeploy
nndeploy is a tool that allows you to quickly build your visual AI workflow without the need for frontend technology. It provides ready-to-use algorithm nodes for non-AI programmers, including large language models, Stable Diffusion, object detection, image segmentation, etc. The workflow can be exported as a JSON configuration file, supporting Python/C++ API for direct loading and running, deployment on cloud servers, desktops, mobile devices, edge devices, and more. The framework includes mainstream high-performance inference engines and deep optimization strategies to help you transform your workflow into enterprise-level production applications.
For similar jobs

promptflow
**Prompt flow** is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to build LLM apps with production quality.

deepeval
DeepEval is a simple-to-use, open-source LLM evaluation framework specialized for unit testing LLM outputs. It incorporates various metrics such as G-Eval, hallucination, answer relevancy, RAGAS, etc., and runs locally on your machine for evaluation. It provides a wide range of ready-to-use evaluation metrics, allows for creating custom metrics, integrates with any CI/CD environment, and enables benchmarking LLMs on popular benchmarks. DeepEval is designed for evaluating RAG and fine-tuning applications, helping users optimize hyperparameters, prevent prompt drifting, and transition from OpenAI to hosting their own Llama2 with confidence.

MegaDetector
MegaDetector is an AI model that identifies animals, people, and vehicles in camera trap images (which also makes it useful for eliminating blank images). This model is trained on several million images from a variety of ecosystems. MegaDetector is just one of many tools that aims to make conservation biologists more efficient with AI. If you want to learn about other ways to use AI to accelerate camera trap workflows, check out our overview of the field, affectionately titled "Everything I know about machine learning and camera traps".

leapfrogai
LeapfrogAI is a self-hosted AI platform designed to be deployed in air-gapped resource-constrained environments. It brings sophisticated AI solutions to these environments by hosting all the necessary components of an AI stack, including vector databases, model backends, API, and UI. LeapfrogAI's API closely matches that of OpenAI, allowing tools built for OpenAI/ChatGPT to function seamlessly with a LeapfrogAI backend. It provides several backends for various use cases, including llama-cpp-python, whisper, text-embeddings, and vllm. LeapfrogAI leverages Chainguard's apko to harden base python images, ensuring the latest supported Python versions are used by the other components of the stack. The LeapfrogAI SDK provides a standard set of protobuffs and python utilities for implementing backends and gRPC. LeapfrogAI offers UI options for common use-cases like chat, summarization, and transcription. It can be deployed and run locally via UDS and Kubernetes, built out using Zarf packages. LeapfrogAI is supported by a community of users and contributors, including Defense Unicorns, Beast Code, Chainguard, Exovera, Hypergiant, Pulze, SOSi, United States Navy, United States Air Force, and United States Space Force.

llava-docker
This Docker image for LLaVA (Large Language and Vision Assistant) provides a convenient way to run LLaVA locally or on RunPod. LLaVA is a powerful AI tool that combines natural language processing and computer vision capabilities. With this Docker image, you can easily access LLaVA's functionalities for various tasks, including image captioning, visual question answering, text summarization, and more. The image comes pre-installed with LLaVA v1.2.0, Torch 2.1.2, xformers 0.0.23.post1, and other necessary dependencies. You can customize the model used by setting the MODEL environment variable. The image also includes a Jupyter Lab environment for interactive development and exploration. Overall, this Docker image offers a comprehensive and user-friendly platform for leveraging LLaVA's capabilities.

carrot
The 'carrot' repository on GitHub provides a list of free and user-friendly ChatGPT mirror sites for easy access. The repository includes sponsored sites offering various GPT models and services. Users can find and share sites, report errors, and access stable and recommended sites for ChatGPT usage. The repository also includes a detailed list of ChatGPT sites, their features, and accessibility options, making it a valuable resource for ChatGPT users seeking free and unlimited GPT services.

TrustLLM
TrustLLM is a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, established benchmark, evaluation, and analysis of trustworthiness for mainstream LLMs, and discussion of open challenges and future directions. Specifically, we first propose a set of principles for trustworthy LLMs that span eight different dimensions. Based on these principles, we further establish a benchmark across six dimensions including truthfulness, safety, fairness, robustness, privacy, and machine ethics. We then present a study evaluating 16 mainstream LLMs in TrustLLM, consisting of over 30 datasets. The document explains how to use the trustllm python package to help you assess the performance of your LLM in trustworthiness more quickly. For more details about TrustLLM, please refer to project website.

AI-YinMei
AI-YinMei is an AI virtual anchor Vtuber development tool (N card version). It supports fastgpt knowledge base chat dialogue, a complete set of solutions for LLM large language models: [fastgpt] + [one-api] + [Xinference], supports docking bilibili live broadcast barrage reply and entering live broadcast welcome speech, supports Microsoft edge-tts speech synthesis, supports Bert-VITS2 speech synthesis, supports GPT-SoVITS speech synthesis, supports expression control Vtuber Studio, supports painting stable-diffusion-webui output OBS live broadcast room, supports painting picture pornography public-NSFW-y-distinguish, supports search and image search service duckduckgo (requires magic Internet access), supports image search service Baidu image search (no magic Internet access), supports AI reply chat box [html plug-in], supports AI singing Auto-Convert-Music, supports playlist [html plug-in], supports dancing function, supports expression video playback, supports head touching action, supports gift smashing action, supports singing automatic start dancing function, chat and singing automatic cycle swing action, supports multi scene switching, background music switching, day and night automatic switching scene, supports open singing and painting, let AI automatically judge the content.