chinese-llm-benchmark
Chinese LLM capability leaderboard: currently covers 139 large models, including commercial models such as ChatGPT, GPT-4o, Google Gemini, Claude 3.5, Baidu ERNIE Bot, Qwen, Baichuan, iFlytek Spark, SenseTime SenseChat, and MiniMax, as well as open-source models such as deepseek-v3, qwen2.5, llama3.1, glm4, and InternLM2.5. Beyond capability-score rankings, it also publishes the raw outputs of every model!
Stars: 3255
The Chinese LLM Benchmark (CLiB) is a continuously updated leaderboard of large models, covering a wide range of commercial and open-source models from various companies and research institutions. It supports multi-dimensional capability evaluation, including classification, information extraction, reading comprehension, data analysis, Chinese encoding efficiency, and Chinese instruction following. The benchmark not only provides capability-score rankings but also publishes the raw outputs of all models, so interested readers can score and rank the models themselves.
README:
- Currently covers 139 large models, including commercial models such as ChatGPT, GPT-4o, Google Gemini, Claude 3.5, Baidu ERNIE Bot, Qwen, Baichuan, iFlytek Spark, SenseTime SenseChat, and MiniMax, as well as open-source models such as deepseek-v3, qwen2.5, llama3.1, glm4, and InternLM2.5.
- Models come from major companies at home and abroad, LLM startups, and university research institutes.
- Supports multi-dimensional capability evaluation, including classification, information extraction, reading comprehension, data analysis, instruction following, arithmetic, middle-school math, symbolic reasoning (BBH), pronoun resolution (CLUEWSC), poetry matching (CCPM), and Chinese encoding efficiency.
- Beyond capability-score rankings, it also publishes the raw outputs of every model, so anyone can score and rank the models themselves!
- 🔄 Recent updates
- ⚓ TODO
- 📝 Model basics
- 📊 Leaderboards
- Overall capability leaderboard
- Commercial model leaderboard (incl. paid APIs for open-source models)
- Output price ¥30 and above
- Output price ¥5-30
- Output price ¥1-5
- Output price under ¥1
- Open-source model leaderboard
- Under 5B
- 5B-20B
- Over 20B
- Middle-school math leaderboard
- Pronoun resolution (CLUEWSC) leaderboard
- Poetry matching (CCPM) leaderboard
- Symbolic reasoning (BBH) leaderboard
- Classification leaderboard
- Information extraction leaderboard
- Reading comprehension leaderboard
- Data analysis leaderboard
- Chinese instruction-following leaderboard
- Arithmetic leaderboard
- Chinese encoding-efficiency leaderboard
- Overall capability leaderboard
- 🌐 Per-dimension capability scores
- ⚖️ Raw evaluation data
- Why build this leaderboard?
- [2025/1/7] Released the v2.7 leaderboard
- Added the pronoun-resolution CLUEWSC leaderboard (e.g. who does "he" refer to?) and the poetry-matching CCPM leaderboard
- Added 5 models: Claude-3.5-Sonnet, gemma-2-27b-it, Llama-3.1-405B-Instruct, Baichuan4-Air, Baichuan4-Turbo
- Removed outdated models: Baichuan3-Turbo, qwen2-72b-instruct, Qwen2-7B-Instruct, qwen2-1.5b-instruct, qwen2-0.5b-instruct, qwen2-57b-a14b-instruct. ☛ See full model info
- [2024/12/28] Released the v2.6 leaderboard
- Added a BBH leaderboard (a symbolic-reasoning benchmark widely used in academia) and counted it toward the total score
- Counted middle-school math (grades 7/8/9) toward the total score
- Removed outdated models: deepseek-chat-v2, Llama-3-70B-Instruct, Llama-3-8B-Instruct, MiniCPM-2B-dpo, minimax-abab6.5-chat, DeepSeek-V2-Lite-Chat, internlm2-chat-1_8b
- [2024/12/27] Released the v2.5 leaderboard
- Added the Grade8Math-zh (grade-8 math) and Grade9Math-zh (grade-9 math) leaderboards
- Added 6 models: deepseek-chat-v3, abab7-chat-preview, hunyuan-standard, hunyuan-large, hunyuan-turbo, SenseChat-5. ☛ See full model info
- [2024/12/25] Released the v2.4 leaderboard
- Added the Grade7Math-zh (grade-7 math) leaderboard
- Removed outdated models: Phi-3-mini-128k-instruct, the Qwen1.5 series, openbuddy-llama3-8b, yi-large, yi-large-turbo, yi-medium, yi-spark, internlm2-chat-20b, internlm2-chat-7b, gpt-4-turbo, gpt-3.5-turbo
- [2024/10/20] Released the v2.3 leaderboard
- Added 6 models: yi-lightning, gemini-1.5-flash, gemini-1.0-pro, gemini-1.5-pro, GLM-4-Long, GLM-4-Plus
- Updated 4 models: GLM4, qwen-max, ERNIE-4.0-Turbo-8K, ERNIE-3.5-8K
- Removed outdated models: Baichuan2-13B-Chat, Baichuan2-7B-Chat, deepseek-llm-67b-chat, gpt4, gemma-2b-it, gemma-7b-it
- [2024/9/29] v2.2, [2024/8/27] v2.1, [2024/8/7] v2.0, [2024/7/26] v1.21, [2024/7/15] v1.20, [2024/6/29] v1.19, [2024/6/2] v1.18, [2024/5/8] v1.17, [2024/4/13] v1.16, [2024/3/20] v1.15, [2024/2/28] v1.14, [2024/1/29] v1.13
- 2023: [2023/12/10] v1.12, [2023/11/22] v1.11, [2023/11/5] v1.10, [2023/10/11] v1.9, [2023/9/13] v1.8, [2023/8/29] v1.7, [2023/8/13] v1.6, [2023/7/26] v1.5, [2023/7/18] v1.4, [2023/7/2] v1.3, [2023/6/17] v1.2, [2023/6/10] v1.1, [2023/6/4] v1
Full details for each release: CHANGELOG
- Add more models to the evaluation: Mistral, etc.
- Introduce more evaluation dimensions: coding, open-domain QA, multi-turn dialogue, brainstorming, translation, ...
- Break dimensions down further, e.g. split information extraction into time-entity extraction, address-entity extraction, ...
- Integrate other evaluation leaderboards and expand domain-specific ones (e.g. education, healthcare)
- Add more evaluation data to make the scores increasingly convincing
Price unit: CNY per 1M tokens (yuan per million tokens)
model | producer | open-source | price_input | price_output | try it | download | paper | badcase |
---|---|---|---|---|---|---|---|---|
GLM-4-Flash | Zhipu AI | No | 0.0 | 0.0 | link | / | link | link |
ERNIE-Speed-8K | Baidu | No | 0.0 | 0.0 | link | / | / | link |
internlm2_5-7b-chat | Shanghai AI Laboratory | Yes | 0.3 | 0.3 | link | link | / | link |
Yi-1.5-9B-Chat | 01.AI | Yes | 0.4 | 0.4 | link | link | link | link |
Llama-3.1-8B-Instruct | Meta | Yes | 0.4 | 0.4 | link | link | link | link |
Doubao-lite-32k | Doubao | No | 0.3 | 0.6 | link | / | / | link |
glm-4-9b-chat | Zhipu AI | Yes | 0.6 | 0.6 | link | link | link | link |
gemma-2-9b-it | Google | Yes | 0.6 | 0.6 | link | link | link | link |
qwen2.5-7b-instruct | Alibaba | Yes | 1.0 | 2.0 | link | link | / | link |
gemini-1.5-flash | Google | No | 0.5 | 2.2 | link | / | / | link |
gpt-4o-mini | OpenAI | No | 1.1 | 4.3 | link | / | link | link |
... | ... | ... | ... | ... | ... | ... | ... | ... |
More model information:
The overall capability score is the average of 10 dimension scores: classification, information extraction, reading comprehension, data analysis, instruction following, arithmetic, middle-school math, symbolic reasoning (BBH), pronoun resolution (CLUEWSC), and poetry matching (CCPM).
Detailed data: see total
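As a small illustration of the averaging described above, here is a minimal Python sketch. The dictionary keys are paraphrases of the leaderboard columns (not identifiers from the project), and the numbers are deepseek-chat-v3's row from the ¥1-5 tier:

```python
# Composite score = plain average of the ten per-dimension scores.
# Values below are deepseek-chat-v3's leaderboard row.
scores = {
    "classification": 93.0,
    "info_extraction": 97.0,
    "reading_comprehension": 94.7,
    "data_analysis": 100.0,
    "instruction_following": 84.0,
    "arithmetic": 99.0,
    "middle_school_math": 91.4,
    "symbolic_reasoning_bbh": 90.5,
    "pronoun_cluewsc": 94.4,
    "poetry_ccpm": 86.8,
}

total = round(sum(scores.values()) / len(scores), 1)
print(total)  # 93.1, matching the model's total on the leaderboard
```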
Commercial models, output price ¥30 and above:
Model | Output price | Classification | Info extraction | Reading comprehension | Data analysis | Instruction following | Arithmetic | Middle-school math | Symbolic reasoning | Pronoun resolution | Poetry matching | Total | Rank |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Claude-3.5-Sonnet(new) ☛try | ¥108.6 | 97.0 | 94.8 | 96.0 | 99.3 | 81.8 | 92.2 | 82.7 | 91.1 | 95.1 | 86.1 | 91.6 | 1 |
GLM-4-Plus ☛try | ¥50 | 87.0 | 91.9 | 95.3 | 99.3 | 81.0 | 88.7 | 89.5 | 87.0 | 90.9 | 89.4 | 90.0 | 2 |
ERNIE-4.0-Turbo-8K ☛try | ¥60 | 90.0 | 94.8 | 96.0 | 98.7 | 78.0 | 97.7 | 82.9 | 82.8 | 92.7 | 86.4 | 90.0 | 3 |
hunyuan-turbo ☛try | ¥50 | 93.0 | 85.2 | 93.3 | 97.3 | 78.0 | 99.5 | 93.7 | 83.2 | 92.0 | 82.4 | 89.8 | 4 |
ERNIE-4.0 ☛try | ¥90 | 88.0 | 89.0 | 94.7 | 94.0 | 79.0 | 100.0 | 88.6 | 82.8 | 92.0 | 84.0 | 89.2 | 5 |
gemini-1.5-pro ☛try | ¥36 | 87.0 | 90.4 | 93.3 | 99.3 | 75.0 | 92.2 | 92.5 | 85.9 | 91.3 | 84.2 | 89.1 | 6 |
gpt-4o ☛try | ¥72.4 | 93.0 | 96.3 | 98.0 | 100.0 | 83.0 | 95.7 | 81.1 | 72.8 | 87.1 | 82.7 | 89.0 | 7 |
xunfei-4.0Ultra ☛try | ¥100 | 88.0 | 84.4 | 96.0 | 92.7 | 80.0 | 94.3 | 93.7 | 81.9 | 92.0 | 85.0 | 88.8 | 8 |
SenseChat-5 ☛try | ¥100 | 93.0 | 90.4 | 89.3 | 97.3 | 82.0 | 85.0 | 82.9 | 86.2 | 90.0 | 86.0 | 88.2 | 9 |
qwen-max ☛try | ¥60 | 92.0 | 88.9 | 94.7 | 99.3 | 77.0 | 79.8 | 91.9 | 74.5 | 93.0 | 88.9 | 88.0 | 10 |
xunfei-spark-max ☛try | ¥30 | 87.0 | 92.0 | 89.3 | 87.3 | 74.0 | 93.5 | 93.7 | 72.5 | 91.6 | 87.0 | 86.8 | 11 |
GLM4 ☛try | ¥100 | 92.0 | 86.7 | 90.0 | 98.0 | 77.0 | 78.0 | 84.3 | 77.0 | 93.0 | 83.0 | 85.9 | 12 |
Baichuan4 ☛try | ¥100 | 86.0 | 94.1 | 93.3 | 95.3 | 75.0 | 78.2 | 75.1 | 82.3 | 90.0 | 83.0 | 85.2 | 13 |
xunfei-spark-pro ☛try | ¥30 | 87.0 | 82.0 | 88.0 | 86.0 | 74.0 | 94.0 | 94.6 | 35.0 | 90.9 | 86.9 | 81.8 | 14 |
Commercial models, output price ¥5-30:
Model | Output price | Classification | Info extraction | Reading comprehension | Data analysis | Instruction following | Arithmetic | Middle-school math | Symbolic reasoning | Pronoun resolution | Poetry matching | Total | Rank |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
abab7-chat-preview ☛try | ¥10 | 89.0 | 96.3 | 94.7 | 97.3 | 83.0 | 94.2 | 86.1 | 82.4 | 92.3 | 87.8 | 90.3 | 1 |
Baichuan4-Turbo(new) ☛try | ¥15 | 91.0 | 93.3 | 97.3 | 100.0 | 78.0 | 93.2 | 92.0 | 81.9 | 88.5 | 87.2 | 90.2 | 2 |
hunyuan-large ☛try | ¥12 | 91.0 | 88.9 | 92.7 | 96.7 | 79.0 | 93.0 | 93.9 | 88.9 | 92.7 | 81.6 | 89.8 | 3 |
qwen2.5-72b-instruct ☛try | ¥12 | 92.0 | 87.4 | 92.0 | 92.7 | 83.0 | 95.5 | 91.1 | 85.8 | 91.3 | 86.6 | 89.7 | 4 |
qwen2.5-32b-instruct ☛try | ¥7 | 91.0 | 94.1 | 96.0 | 91.3 | 83.0 | 94.0 | 90.3 | 66.6 | 94.1 | 88.2 | 88.9 | 5 |
Meta-Llama-3.1-405B-Instruct(new) ☛try | ¥21 | 90.0 | 90.4 | 98.7 | 98.7 | 76.7 | 95.0 | 64.2 | 91.0 | 88.9 | 79.7 | 87.4 | 6 |
qwen2.5-14b-instruct ☛try | ¥6 | 89.0 | 90.4 | 94.0 | 98.0 | 81.0 | 91.5 | 93.7 | 54.4 | 92.7 | 87.5 | 87.2 | 7 |
GLM-4-AirX ☛try | ¥10 | 89.0 | 91.9 | 92.7 | 88.0 | 83.0 | 74.2 | 84.0 | 57.7 | 88.9 | 83.7 | 83.3 | 8 |
moonshot-v1-8k ☛try | ¥12 | 92.0 | 85.0 | 84.0 | 89.3 | 72.0 | 79.3 | 85.1 | 66.7 | 86.4 | 82.9 | 82.3 | 9 |
SenseChat-Turbo ☛try | ¥5 | 81.0 | 77.8 | 76.7 | 86.0 | 72.0 | 78.5 | 81.9 | 74.1 | 89.9 | 82.9 | 80.1 | 10 |
SenseChat-v4 ☛try | ¥12 | 89.0 | 78.5 | 88.0 | 86.7 | 71.0 | 72.2 | 39.0 | 70.7 | 84.7 | 76.8 | 75.7 | 11 |
gemini-1.0-pro ☛try | ¥10.8 | 84.0 | 89.6 | 92.7 | 99.3 | 76.0 | 50.8 | 40.6 | 75.0 | 67.6 | 76.3 | 75.2 | 12 |
abab5.5-chat ☛try | ¥15 | 83.0 | 79.0 | 86.7 | 72.7 | 76.0 | 39.7 | 38.8 | 64.2 | 88.9 | 78.9 | 70.8 | 13 |
abab5.5s-chat ☛try | ¥5 | 58.0 | 57.0 | 70.7 | 56.0 | 49.0 | 57.0 | 26.4 | 8.6 | 39.7 | 58.9 | 48.1 | 14 |
Commercial models, output price ¥1-5:
Model | Output price | Classification | Info extraction | Reading comprehension | Data analysis | Instruction following | Arithmetic | Middle-school math | Symbolic reasoning | Pronoun resolution | Poetry matching | Total | Rank |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
deepseek-chat-v3 ☛try | ¥2 | 93.0 | 97.0 | 94.7 | 100.0 | 84.0 | 99.0 | 91.4 | 90.5 | 94.4 | 86.8 | 93.1 | 1 |
Doubao-pro-32k ☛try | ¥2 | 86.0 | 88.1 | 96.7 | 86.7 | 85.0 | 98.2 | 91.0 | 84.3 | 92.0 | 88.1 | 89.6 | 2 |
ERNIE-3.5-8K ☛try | ¥2 | 94.0 | 89.6 | 98.0 | 100.0 | 72.0 | 100.0 | 81.8 | 68.8 | 91.3 | 86.2 | 88.2 | 3 |
gemini-1.5-flash ☛try | ¥2.2 | 91.0 | 87.4 | 92.7 | 97.3 | 77.0 | 91.8 | 88.7 | 83.3 | 88.5 | 83.9 | 88.2 | 4 |
gpt-4o-mini ☛try | ¥4.3 | 90.0 | 93.3 | 89.3 | 100.0 | 83.0 | 92.7 | 80.7 | 65.6 | 84.7 | 77.7 | 85.7 | 5 |
qwen-plus ☛try | ¥2 | 88.0 | 89.6 | 90.0 | 84.0 | 73.0 | 93.0 | 91.4 | 67.7 | 93.0 | 86.3 | 85.6 | 6 |
gemma-2-27b-it(new) ☛try | ¥1.26 | 92.0 | 93.3 | 94.7 | 96.7 | 83.1 | 88.3 | 66.4 | 74.8 | 80.5 | 80.0 | 85.0 | 7 |
qwen-long ☛try | ¥2 | 89.0 | 85.9 | 90.0 | 86.7 | 75.0 | 83.3 | 91.3 | 64.6 | 92.3 | 86.3 | 84.4 | 8 |
qwen2.5-7b-instruct ☛try | ¥2 | 85.0 | 88.1 | 93.3 | 91.3 | 77.0 | 89.8 | 79.9 | 61.7 | 90.6 | 83.4 | 84.0 | 9 |
Llama-3.1-70B-Instruct ☛try | ¥4.1 | 87.0 | 88.9 | 92.0 | 90.7 | 79.0 | 94.8 | 49.2 | 84.0 | 88.9 | 81.1 | 83.6 | 10 |
hunyuan-standard ☛try | ¥2 | 87.0 | 89.6 | 93.3 | 85.3 | 74.0 | 83.0 | 80.0 | 72.3 | 86.8 | 75.4 | 82.7 | 11 |
Yi-1.5-34B-Chat ☛try | ¥1.3 | 90.0 | 83.0 | 82.7 | 83.3 | 74.0 | 79.0 | 75.6 | 77.2 | 84.0 | 81.3 | 81.0 | 12 |
Commercial models, output price under ¥1:
Model | Output price | Classification | Info extraction | Reading comprehension | Data analysis | Instruction following | Arithmetic | Middle-school math | Symbolic reasoning | Pronoun resolution | Poetry matching | Total | Rank |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
yi-lightning ☛try | ¥0.99 | 94.0 | 90.4 | 95.3 | 100.0 | 82.0 | 96.0 | 83.5 | 82.4 | 90.6 | 84.7 | 89.9 | 1 |
Baichuan4-Air(new) ☛try | ¥0.98 | 90.0 | 91.9 | 98.7 | 97.3 | 75.4 | 90.0 | 77.5 | 77.3 | 85.4 | 84.0 | 86.7 | 2 |
internlm2_5-20b-chat ☛try | ¥1 | 86.0 | 90.4 | 86.0 | 97.3 | 75.0 | 89.7 | 86.8 | 78.7 | 88.2 | 82.2 | 86.0 | 3 |
GLM-4-Long ☛try | ¥1 | 85.0 | 93.3 | 89.3 | 96.7 | 80.0 | 81.2 | 79.0 | 81.2 | 88.9 | 81.6 | 85.6 | 4 |
abab6.5s-chat ☛try | ¥1 | 87.0 | 88.0 | 88.7 | 88.0 | 80.0 | 91.7 | 75.9 | 75.8 | 89.2 | 80.3 | 84.5 | 5 |
GLM-4-Air ☛try | ¥1 | 89.0 | 91.9 | 92.7 | 88.0 | 83.0 | 74.5 | 78.1 | 56.8 | 89.2 | 83.7 | 82.7 | 6 |
qwen-turbo ☛try | ¥0.6 | 83.0 | 85.2 | 88.0 | 76.0 | 66.0 | 81.3 | 89.6 | 64.4 | 91.6 | 83.2 | 80.8 | 7 |
internlm2_5-7b-chat ☛try | ¥0.4 | 86.0 | 84.4 | 90.0 | 83.3 | 79.0 | 59.8 | 81.1 | 73.5 | 87.1 | 83.0 | 80.7 | 8 |
gemma-2-9b-it ☛try | ¥0.6 | 85.0 | 82.2 | 88.7 | 87.3 | 81.0 | 89.3 | 67.4 | 59.9 | 81.9 | 78.5 | 80.1 | 9 |
GLM-4-Flash ☛try | ¥0 | 89.0 | 80.0 | 86.0 | 82.0 | 79.0 | 75.5 | 78.3 | 61.7 | 89.2 | 80.3 | 80.1 | 10 |
ERNIE-Speed-8K ☛try | ¥0 | 88.0 | 88.1 | 88.0 | 89.3 | 68.0 | 68.7 | 65.7 | 54.1 | 86.4 | 80.5 | 77.7 | 11 |
Yi-1.5-9B-Chat ☛try | ¥0.4 | 82.0 | 83.0 | 84.7 | 80.0 | 72.0 | 73.8 | 54.7 | 70.8 | 85.4 | 75.8 | 76.2 | 12 |
Llama-3.1-8B-Instruct ☛try | ¥0.4 | 63.0 | 85.2 | 82.0 | 84.0 | 69.0 | 90.5 | 50.4 | 65.7 | 71.8 | 77.9 | 74.0 | 13 |
Doubao-lite-32k ☛try | ¥0.6 | 77.0 | 86.7 | 88.7 | 64.7 | 62.0 | 87.2 | 71.8 | 52.3 | 79.4 | 64.6 | 73.4 | 14 |
Flagship commercial model badcases: gpt-4o | moonshot-v1-8k | deepseek-chat-v2 | yi-large | more
Open-source models under 5B:
Type | Model | Classification | Info extraction | Reading comprehension | Data analysis | Instruction following | Arithmetic | Middle-school math | Symbolic reasoning | Pronoun resolution | Poetry matching | Total | Rank |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Open-source | qwen2.5-3b-instruct ☛try | 81.0 | 75.6 | 78.7 | 83.3 | 77.0 | 85.7 | 75.5 | 43.5 | 84.3 | 80.3 | 76.5 | 1 |
Open-source | qwen2.5-1.5b-instruct ☛try | 70.0 | 71.9 | 72.7 | 63.3 | 62.0 | 83.3 | 56.1 | 34.0 | 36.2 | 75.1 | 62.5 | 2 |
Open-source | qwen2.5-0.5b-instruct ☛try | 52.0 | 53.3 | 63.3 | 46.0 | 58.0 | 51.8 | 36.6 | 15.7 | 48.1 | 50.4 | 47.5 | 3 |
Open-source models, 5B-20B:
Type | Model | Classification | Info extraction | Reading comprehension | Data analysis | Instruction following | Arithmetic | Middle-school math | Symbolic reasoning | Pronoun resolution | Poetry matching | Total | Rank |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Open-source | qwen2.5-14b-instruct ☛try | 89.0 | 90.4 | 94.0 | 98.0 | 81.0 | 91.5 | 93.7 | 54.4 | 92.7 | 87.5 | 87.2 | 1 |
Open-source | internlm2_5-20b-chat ☛try | 86.0 | 90.4 | 86.0 | 97.3 | 75.0 | 89.7 | 86.8 | 78.7 | 88.2 | 82.2 | 86.0 | 2 |
Open-source | qwen2.5-7b-instruct ☛try | 85.0 | 88.1 | 93.3 | 91.3 | 77.0 | 89.8 | 79.9 | 61.7 | 90.6 | 83.4 | 84.0 | 3 |
Open-source | internlm2_5-7b-chat ☛try | 86.0 | 84.4 | 90.0 | 83.3 | 79.0 | 59.8 | 81.1 | 73.5 | 87.1 | 83.0 | 80.7 | 4 |
Open-source | glm-4-9b-chat ☛try | 90.0 | 82.2 | 90.0 | 82.0 | 79.0 | 76.5 | 74.5 | 62.4 | 88.9 | 80.3 | 80.6 | 5 |
Open-source | gemma-2-9b-it ☛try | 85.0 | 82.2 | 88.7 | 87.3 | 81.0 | 89.3 | 67.4 | 59.9 | 81.9 | 78.5 | 80.1 | 6 |
Open-source | Yi-1.5-9B-Chat ☛try | 82.0 | 83.0 | 84.7 | 80.0 | 72.0 | 73.8 | 54.7 | 70.8 | 85.4 | 75.8 | 76.2 | 7 |
Open-source | Llama-3.1-8B-Instruct ☛try | 63.0 | 85.2 | 82.0 | 84.0 | 69.0 | 90.5 | 50.4 | 65.7 | 71.8 | 77.9 | 74.0 | 8 |
Open-source models over 20B:
Type | Model | Classification | Info extraction | Reading comprehension | Data analysis | Instruction following | Arithmetic | Middle-school math | Symbolic reasoning | Pronoun resolution | Poetry matching | Total | Rank |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Open-source | deepseek-chat-v3 ☛try | 93.0 | 97.0 | 94.7 | 100.0 | 84.0 | 99.0 | 91.4 | 90.5 | 94.4 | 86.8 | 93.1 | 1 |
Open-source | qwen2.5-72b-instruct ☛try | 92.0 | 87.4 | 92.0 | 92.7 | 83.0 | 95.5 | 91.1 | 85.8 | 91.3 | 86.6 | 89.7 | 2 |
Open-source | qwen2.5-32b-instruct ☛try | 91.0 | 94.1 | 96.0 | 91.3 | 83.0 | 94.0 | 90.3 | 66.6 | 94.1 | 88.2 | 88.9 | 3 |
Open-source | Meta-Llama-3.1-405B-Instruct(new) ☛try | 90.0 | 90.4 | 98.7 | 98.7 | 76.7 | 95.0 | 64.2 | 91.0 | 88.9 | 79.7 | 87.4 | 4 |
Open-source | gemma-2-27b-it(new) ☛try | 92.0 | 93.3 | 94.7 | 96.7 | 83.1 | 88.3 | 66.4 | 74.8 | 80.5 | 80.0 | 85.0 | 5 |
Open-source | Llama-3.1-70B-Instruct ☛try | 87.0 | 88.9 | 92.0 | 90.7 | 79.0 | 94.8 | 49.2 | 84.0 | 88.9 | 81.1 | 83.6 | 6 |
Open-source | Yi-1.5-34B-Chat ☛try | 90.0 | 83.0 | 82.7 | 83.3 | 74.0 | 79.0 | 75.6 | 77.2 | 84.0 | 81.3 | 81.0 | 7 |
The average score across grades 7/8/9 counts toward the total score.
Grading: grades 7, 8, and 9 have 40, 21, and 36 questions respectively, and every question is graded pass/fail with no partial credit. A model earns the point only if its response is fully correct; partially correct or wrong answers score zero.
Example evaluation sample:
Factorize: 3x^2y - 12xy + 12y
☛ See grade-7 math badcases
☛ See grade-8 math badcases
☛ See grade-9 math badcases
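A minimal sketch of the all-or-nothing grading just described. The question counts (40/21/36) come from the text; scoring each grade as the percentage of fully correct answers and averaging the three grades into one middle-school-math score are assumptions, and the per-question results below are made up for illustration:

```python
def grade_score(results):
    """results: list of booleans, True only when the response is fully correct.

    No partial credit: each question contributes either 0 or full marks.
    """
    return 100.0 * sum(results) / len(results)

# Hypothetical pass/fail results for one model across the three grade sets.
grade7 = [True] * 36 + [False] * 4   # 36 of 40 correct
grade8 = [True] * 18 + [False] * 3   # 18 of 21 correct
grade9 = [True] * 27 + [False] * 9   # 27 of 36 correct

per_grade = [grade_score(g) for g in (grade7, grade8, grade9)]
middle_school_math = sum(per_grade) / 3
print(round(middle_school_math, 1))  # 83.6 with these made-up results
```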
Chinese coreference resolution (pronoun disambiguation), following CLUEWSC2020. Example evaluation sample:
少平仍然不知道怎样给奶奶说清他姐夫的事,就只好随口说:“他犯了点错误,人家让他劳教!”
In the text above, does the "他" (he) in "他犯了点错误" refer to 少平 (Shaoping)?
Options: (A) Yes
(B) No
Full leaderboard: see CLUEWSC
☛ See pronoun-resolution (CLUEWSC) badcases
Classical Chinese poetry matching: given a modern-Chinese description of a line of classical poetry, choose the one of four candidate lines whose meaning matches the description. Correct options are built from a parallel corpus of classical poems and their modern translations; wrong candidates are retrieved from a classical-poetry corpus by similarity search. Follows CCPM. Example evaluation sample:
昏暗的灯熄灭了又被重新点亮。
Which of the following lines best matches the text above:
(A)渔灯灭复明
(B)残灯灭又然
(C)残灯暗复明
(D)残灯灭又明
Full leaderboard: see CCPM
☛ See poetry-matching (CCPM) badcases
The symbolic-reasoning benchmark most widely used in academia, comprising 23 subtasks; see BBH for details. Example evaluation sample:
Task description: Answer questions about which times certain events could have occurred.
Q: Today, Emily went to the museum. Between what times could they have gone?
We know that:
Emily woke up at 1pm.
Elizabeth saw Emily reading at the library from 2pm to 4pm.
Jessica saw Emily watching a movie at the theater from 4pm to 5pm.
Leslie saw Emily waiting at the airport from 5pm to 6pm.
William saw Emily buying clothes at the mall from 6pm to 7pm.
The museum was closed after 7pm.
Between what times could Emily have gone to the museum?
Options:
(A) 1pm to 2pm
(B) 6pm to 7pm
(C) 5pm to 6pm
(D) 2pm to 4pm
A:
Full leaderboard: see BBH
☛ See symbolic-reasoning (BBH) badcases
Example evaluation sample:
Classify the following words by part of speech.
狗,追,跑,大人,高兴,树
Full leaderboard: see classification
☛ See classification badcases
Example evaluation sample:
“中信银行3亿元,交通银行增长约2.7亿元,光大银行约1亿元。”
Extract all organization names from the text above.
Full leaderboard: see extract
☛ See information-extraction badcases
Reading comprehension is a composite capability that tests understanding of given information.
Depending on the kind of input, it can be subdivided into document QA, table QA, dialogue QA, and so on.
Example evaluation sample:
牙医:好的,让我们看看你的牙齿。从你的描述和我们的检查结果来看,你可能有一些牙齦疾病,导致牙齿的神经受到刺激,引起了敏感。此外,这些黑色斑点可能是蛀牙。
病人:哦,真的吗?那我该怎么办?
牙医:别担心,我们可以为你制定一个治疗计划。我们需要首先治疗牙龈疾病,然后清除蛀牙并填充牙洞。在此过程中,我们将确保您感到舒适,并使用先进的技术和材料来实现最佳效果。
病人:好的,谢谢您,医生。那么我什么时候可以开始治疗?
牙医:让我们为您安排一个约会。您的治疗将在两天后开始。在此期间,请继续刷牙,使用牙线,并避免吃过于甜腻和酸性的食物和饮料。
病人:好的,我会的。再次感谢您,医生。
牙医:不用谢,我们会尽最大的努力帮助您恢复健康的牙齿。
Based on the dialogue above: what dental problems were found during the patient's examination?
Full leaderboard: see mrc
☛ See reading-comprehension badcases
Specifically tests a model's ability to understand and analyze tables, a core skill for data analysis.
Example evaluation sample:
姓名,年龄,性别,国籍,身高(cm),体重(kg),学历
张三,28,男,中国,180,70,本科
Lisa,33,女,美国,165,58,硕士
Paulo,41,男,巴西,175,80,博士
Miyuki,25,女,日本,160,50,大专
Ahmed,30,男,埃及,175,68,本科
Maria,29,女,墨西哥,170,65,硕士
Antonio,36,男,西班牙,182,75,博士
Based on this table: what is the nationality of the person with the lowest education level?
Full leaderboard: see tableqa
☛ See data-analysis badcases
Based on Google's IFEval, translated and adapted to Chinese; 25 hand-picked instruction types across 9 categories, described as follows:
Full leaderboard: see IFEval
☛ See Chinese instruction-following badcases
Tests a model's basic arithmetic ability: addition and subtraction of integers up to 1000, plus addition, subtraction, multiplication, and division of floats with at most 2 significant digits. Examples: 166 + 215 + 53 = ?, 0.97 + 0.4 / 4.51 = ?
Full leaderboard: see arithmetic
☛ See arithmetic badcases
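A tiny sketch of how arithmetic items like the examples above can be checked: Python evaluates the reference answer for each expression, and a model's reply is compared against it. The `is_correct` helper and its float tolerance are assumptions for illustration, not part of the project:

```python
# Fixed test expressions taken from the examples above.
items = ["166 + 215 + 53", "0.97 + 0.4 / 4.51"]

# eval() is safe here because the expressions are fixed test items,
# not user input.
references = [eval(expr) for expr in items]

def is_correct(model_answer: float, reference: float, tol: float = 1e-4) -> bool:
    """Hypothetical grader: accept answers within a small tolerance."""
    return abs(model_answer - reference) <= tol

print(references[0])                       # 434
print(is_correct(434, references[0]))      # True
print(round(references[1], 4))             # 1.0587
```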
Not yet counted toward the overall capability score.
Specifically measures how efficiently a model encodes Chinese characters: at equal model size, higher encoding efficiency means faster inference, almost in direct proportion.
Chinese encoding efficiency is the average number of Chinese characters each generated token decodes to
(a model generates one token at a time, which is then decoded into the actual visible characters: Chinese, English, punctuation, and so on).
For example, baichuan2 and llama2 have Chinese encoding efficiencies of 1.67 and 0.61 respectively, meaning that at equal model size baichuan2 runs about 2.7x as fast as llama2 (1.67/0.61).
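A minimal sketch of the characters-per-token measure defined above. A real measurement would use a model's actual tokenizer (for example a Hugging Face tokenizer's `encode` method); the toy two-characters-per-token tokenizer here is a stand-in:

```python
def encoding_efficiency(text: str, tokenize) -> float:
    """Average number of characters each token of `text` decodes to."""
    return len(text) / len(tokenize(text))

# Toy tokenizer: every 2 characters form one token (stand-in for a real one).
toy_tokenize = lambda s: [s[i:i + 2] for i in range(0, len(s), 2)]
print(encoding_efficiency("今天天气很好", toy_tokenize))  # 2.0

# The README's example ratio: at equal model size, baichuan2 (1.67 chars/token)
# decodes roughly 2.7x as fast as llama2 (0.61 chars/token).
print(round(1.67 / 0.61, 1))  # 2.7
```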
Scoring method: models are scored along each dimension, and each dimension has its own evaluation set containing a number of questions. Every question is scored 1-5 according to the quality of the model's response; the scores over the whole set are summed and normalized to a 100-point scale to give the final score.
All scoring data: see alldata
The evaluation sets for every dimension, along with all model outputs, are in this project's eval directory.
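The per-question 1-5 scoring and 100-point normalization described above can be sketched as follows. The exact rescaling formula is not spelled out in the README, so `sum / max_possible * 100` is an assumption:

```python
def normalize(question_scores, max_per_question=5):
    """Sum per-question scores (each 1-5) and rescale to a 100-point scale.

    Assumed rescaling: fraction of the maximum possible total, times 100.
    """
    max_possible = max_per_question * len(question_scores)
    return 100.0 * sum(question_scores) / max_possible

# Hypothetical five-question evaluation set scored 1-5 per question.
print(normalize([5, 5, 4, 3, 5]))  # 88.0
```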
- Large models are flourishing, but their quality varies widely. Media coverage often exaggerates and glosses over weaknesses, muddying the waters, and some companies overstate their models for PR, routinely claiming to "match ChatGPT" or be "number one in China". As the saying goes, outsiders watch the spectacle while insiders see the craft: the industry urgently needs to shed the hype, settle down to refine frontier technology, and let real technical strength do the talking. That requires an open, fair, and impartial evaluation system that lays out each model's strengths and weaknesses one by one. With it, everyone can gauge the current state of development and the gap to the best international technology, and see the direction of future effort clearly, rather than being swept along by waves of capital and public opinion.
- For industry, and especially for companies without the capability to develop large models themselves, understanding the technical boundaries of available models and making efficient, well-targeted technology choices is more important now than ever. An open, fair, and impartial evaluation system provides exactly that support: it avoids reinventing the wheel and spares teams the fruitless arguments and talking past each other that differing technology stacks tend to cause.
- For LLM developers, including anyone interested in the technology and practice-minded academics, comparisons across models reveal the effectiveness of the different technical routes and methods behind them, which makes for a very useful reference. Learning from and cross-checking against each other helps everyone avoid unnecessary pitfalls and the resource waste of duplicated experiments, supporting the healthy and efficient development of the whole LLM ecosystem.