
chinese-llm-benchmark
ReLE Chinese LLM capability evaluation (continuously updated): now covering 291 large models, including commercial models such as chatgpt, gpt-5, o4-mini, Google gemini-2.5, Claude4, Zhipu GLM-Z1, ERNIE Bot, qwen-max, Baichuan, iFLYTEK Spark, SenseTime senseChat, and minimax, as well as open-source models such as kimi-k2, ernie4.5, minimax-M1, DeepSeek-R1-0528, deepseek-v3.1, qwen3-2507, llama4, phi-4, GLM4.5, gemma3, and mistral. Beyond the leaderboards, it also offers a defect (badcase) library of over 2 million model outputs to help the community analyze and improve large models.
Stars: 4818

The Chinese LLM Benchmark (ReLE, formerly CLiB) is a continuously updated evaluation of large language models, covering a wide range of commercial and open-source models from various companies and research institutions. It supports multidimensional capability evaluation, including classification, information extraction, reading comprehension, data analysis, Chinese encoding efficiency, and Chinese instruction following. Beyond capability rankings, it publishes the raw outputs of all models so that interested readers can score and rank the models themselves.
README:
- ReLE (Really Reliable Live Evaluation for LLM), formerly CLiB
- Now covers 298 large models, including commercial models such as chatgpt, gpt-5, o4-mini, Google gemini-2.5, Claude4, Zhipu GLM-Z1, ERNIE Bot, qwen3-max, Baichuan, iFLYTEK Spark, SenseTime senseChat, and minimax, as well as open-source models such as kimi-k2, ernie4.5, minimax-M1, DeepSeek-R1-0528, deepseek-v3.1, qwen3-2507, llama4, phi-4, GLM4.5, gemma3, and mistral.
- Supports multidimensional capability evaluation across 6 domains (education; medical & mental health; finance; law & civil service; reasoning & math; language & instruction following) and roughly 300 fine-grained dimensions (e.g., dentistry, high-school Chinese, ...).
- Beyond the leaderboards, it offers a defect (badcase) library of over 2 million model outputs to help the community analyze and improve large models.
- Free evaluation service for your private LLMs; contact us via WeChat
- 🔄Recent updates
- ⚓Popular LLM evaluation projects on GitHub
- 📝Model basics
- 📊Leaderboards
- 🌐Per-capability scores
- ⚖️Raw evaluation data
- Why build this leaderboard?
- LLM selection & evaluation discussion group
- [2025/9/6] v5.1
- Added 4 models: Alibaba's trillion-parameter qwen3-max-preview, qwen-plus-2025-07-28, qwen-plus-think-2025-07-28 (qwen-plus thinking mode), and qwen-turbo-think-2025-07-15 (qwen-turbo thinking mode); ☛see full model info
- [2025/9/1] v5.0
- Reworked the "overall score": it is now the average of "professional ability" and "general ability", where "professional ability" averages the 4 domains education, medical & mental health, finance, and law & civil service, and "general ability" averages the 2 domains reasoning & math and language & instruction following. Model rankings shifted accordingly.
- Added the "table summarization" evaluation set under reasoning & math; see the table summarization leaderboard
- Added 3 models: mistral-medium-2508, Magistral-Small-2507, Mistral-Small-3.2-24B-Instruct-2506; ☛see full model info
- Removed stale models: SenseChat-5-1202, qwq-32b, qwq-plus-2025-03-05, GLM-Z1-Flash, mistral-small2, mistral-large2.1, ERNIE-Tiny-8K, Mistral-Small-3.1-24B-Instruct-2503
- [2025/8/26] v4.13
- Multimodal evaluation added qwen-vl-max-2025-08-13, qwen-vl-plus-2025-08-15, the gpt-5 series, and the gemini-2.5 series; see multimodal evaluation
- Removed stale models: chatgpt-4o-latest, gpt-4.1, gpt-4.1-mini, step-r1-v-mini
- [2025/8/20] v4.12
- Added 3 models: DeepSeek-V3.1, DeepSeek-V3.1-Think, gemini-2.5-flash-lite; ☛see full model info
- Updated the "arithmetic" and multimodal "formula recognition" evaluation sets: removed overly easy samples and added new data; related scores were refreshed
- Removed stale models: internlm2_5-7b-chat, the qwen2.5 open-source series, qwen2.5-max, the closed-source GLM-4/GLM-Z1 series, GLM-Z1-Rumination-32B-0414, hunyuan-standard, hunyuan-large, phi-4, 360gpt-turbo, Qwen3-235B-A22B, Qwen3-235B-A22B-nothink, Qwen3-30B-A3B, Qwen3-30B-A3B-nothink, gemini-2.5-flash-lite-preview-06-17, qwen-plus-think-2025-04-28, qwen-turbo-think-2025-04-28
- [2025/8/15] v4.11
- "Multimodal · primary-school subjects" added 3 evaluation sets: PrimarySchoolChinese (figure questions), PrimarySchoolMathematics (figure questions), PrimarySchoolScience (figure questions); see multimodal evaluation
- "Multimodal · high-school subjects" added 4 evaluation sets: HighSchoolBiology, HighSchoolChemistry, HighSchoolMathematics, HighSchoolPhysics (all figure questions); see multimodal evaluation
- "Multimodal · middle-school subjects" added 8 evaluation sets: MiddleSchoolBiology, MiddleSchoolChemistry, MiddleSchoolChinese, MiddleSchoolPolitics, MiddleSchoolGeography, MiddleSchoolHistory, MiddleSchoolMathematics, MiddleSchoolPhysics (all figure questions); see multimodal evaluation
- Removed stale models: hunyuan-turbos-20250604, gpt-4o-mini
- [2025/8/10] v4.10
- [2025/8/7] v4.9
- Added 5 models: OpenAI's closed-source GPT5 series (gpt5/gpt5-mini/gpt5-nano) and OpenAI's open-source gpt-oss-120b and gpt-oss-20b
- [2025/8/1] v4.8
- Added several models: Alibaba's open-source Qwen3-30B-A3B-Thinking-2507, StepFun's open-source step-3, and the GLM4.5-nothink series (thinking disabled)
- Removed stale models: doubao-seed-1-6-thinking-250615, xunfei-spark-x1, SenseChat-5-beta, SenseChat-Turbo-120, GLM-4-Flash, GLM-4-Air, qwen-plus-2025-04-28, qwen-turbo-2025-04-28
- [2025/7/29] v4.7
- Added several models: the GLM4.5 series, Alibaba's open-source Qwen3-30B-A3B-Instruct-2507, and the Qwen3-nothink series (thinking disabled)
- [2025/7/26] v4.6
- Added 2 language models: Alibaba's open-source qwen3-235b-a22b-thinking-2507 and iFLYTEK's closed-source xunfei-spark-x1-0725
- Removed stale model: hunyuan-t1-20250529
- [2025/7/23] v4.5
- Added 4 language models: Alibaba's open-source qwen3-235b-a22b-instruct-2507, Alibaba's closed-source qwen-turbo-2025-07-15 and qwen-plus-2025-07-14, and Doubao's closed-source doubao-seed-1-6-thinking-250715; ☛see full model info
- Removed stale model: Doubao-1.5-thinking-pro
- [2025/7/17] v4.4
- Added per-dimension cost information for each model; see the per-dimension leaderboards
- Added 2 language models: Huawei's open-source pangu-pro-moe and Tencent's closed-source reasoning model hunyuan-t1-20250711
- Removed stale models: moonshot-v1-8k, hunyuan-turbo
- [2025/7/13] v4.3
- Added 2 language models: the first trillion-parameter open-source model kimi-k2-0711-preview and Qwen3-235B-A22B-nothink (thinking disabled); ☛see full model info
- Removed stale models: gemini-2.5-flash-preview-05-20, gemini-2.5-pro-preview-05-06
- [2025/7/12] v4.2
- [2025/7/9] v4.1
- [2025/7/2] v4.0
- First multimodal evaluation added: "formula recognition", covering common math, physics, and chemistry formulas; see link
- Added 4 language models: Tencent's first hybrid-reasoning model Hunyuan-A13B-Instruct and Baidu's ERNIE4.5 open-source series (ERNIE-4.5-0.3B, ERNIE-4.5-21B-A3B, ERNIE-4.5-300B-A47B); ☛see full model info
- Data update: added and refreshed evaluation data across dimensions; related scores were updated
- Removed stale models: hunyuan-turbos-20250313, hunyuan-t1-20250321, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Llama-8B, DeepSeek-R1-Distill-Llama-70B, qwen-turbo-2025-02-11, qwen-plus-2025-01-25
- [2025/6/23] v3.33, [2025/6/18] v3.32, [2025/6/16] v3.31, [2025/6/13] v3.30, [2025/6/9] v3.29, [2025/6/4] v3.28, [2025/5/29] v3.27, [2025/5/23] v3.26, [2025/5/18] v3.25, [2025/5/15] v3.24, [2025/5/10] v3.23, [2025/5/5] v3.22, [2025/5/2] v3.21, [2025/4/30] v3.20, [2025/4/28] v3.19, [2025/4/22] v3.18, [2025/4/17] v3.17, [2025/4/9] v3.16, [2025/4/5] v3.15, [2025/4/3] v3.14, [2025/3/31] v3.13, [2025/3/29] v3.12, [2025/3/27] v3.11, [2025/3/25] v3.10, [2025/3/23] v3.9, [2025/3/21] v3.8, [2025/3/19] v3.7, [2025/3/17] v3.6, [2025/3/15] v3.5, [2025/3/13] v3.4, [2025/3/11] v3.3, [2025/3/10] v3.2, [2025/3/7] v3.1, [2025/3/4] v3.0, [2025/3/3] v2.22, [2025/2/28] v2.21, [2025/2/24] v2.20, [2025/2/22] v2.19, [2025/2/18] v2.18, [2025/2/14] v2.17, [2025/2/13] v2.16, [2025/2/12] v2.15, [2025/2/10] v2.14, [2025/1/29] v2.13, [2025/1/25] v2.12, [2025/1/23] v2.11, [2025/1/22] v2.10, [2025/1/20] v2.9, [2025/1/17] v2.8, [2025/1/7] v2.7
- 2024: [2024/12/28] v2.6, [2024/12/27] v2.5, [2024/12/25] v2.4, [2024/10/20] v2.3, [2024/9/29] v2.2, [2024/8/27] v2.1, [2024/8/7] v2.0, [2024/7/26] v1.21, [2024/7/15] v1.20, [2024/6/29] v1.19, [2024/6/2] v1.18, [2024/5/8] v1.17, [2024/4/13] v1.16, [2024/3/20] v1.15, [2024/2/28] v1.14, [2024/1/29] v1.13
- 2023: [2023/12/10] v1.12, [2023/11/22] v1.11, [2023/11/5] v1.10, [2023/10/11] v1.9, [2023/9/13] v1.8, [2023/8/29] v1.7, [2023/8/13] v1.6, [2023/7/26] v1.5, [2023/7/18] v1.4, [2023/7/2] v1.3, [2023/6/17] v1.2, [2023/6/10] v1.1, [2023/6/4] v1
See CHANGELOG for full details of each version.
repo | stars | region | about |
---|---|---|---|
langfuse | 14.9k | international | Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with OpenTelemetry, Langchain, OpenAI SDK, LiteLLM, and more. 🍊YC W23 |
opik | 12.5k | international | Debug, evaluate, and monitor your LLM applications, RAG systems, and agentic workflows with comprehensive tracing, automated evaluations, and production-ready dashboards. |
ragas | 10.3k | international | Supercharge Your LLM Application Evaluations 🚀 |
…… | …… | …… | …… |
⭐chinese-llm-benchmark (ours) | 4.7k | China | ReLE Chinese LLM capability evaluation (continuously updated) |
…… | …… | …… | …… |
See hot50 for the full list.
Full data: see multimodal evaluation.
Overall-score calculation: the "overall score" is the average of "professional ability" and "general ability". "Professional ability" is the average of the 4 domains education, medical & mental health, finance, and law & civil service; "general ability" is the average of the 2 domains reasoning & math and language & instruction following.
Category | Org | Model | [Overall] accuracy | Avg latency | Avg tokens | Cost per 1k queries (CNY) | Rank (accuracy) |
---|---|---|---|---|---|---|---|
Commercial | Doubao | doubao-seed-1-6-thinking-250715 | 88.0% | 37s | 2144 | 15.5 | 1 |
Commercial | Tencent | hunyuan-t1-20250711 | 85.5% | 40s | 2693 | 9.9 | 2 |
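The two-level averaging described above can be sketched in a few lines of Python. The field names are made up for illustration; the domain scores are the six domain figures reported for the top entry, and the computed overall score reproduces its 88.0%.

```python
# Sketch of the two-level "overall score" averaging (field names are assumptions).
PROFESSIONAL = ("education", "medical_mental_health", "finance", "law_civil_service")
GENERAL = ("reasoning_math", "language_instruction_following")

def overall(scores: dict) -> float:
    professional = sum(scores[d] for d in PROFESSIONAL) / len(PROFESSIONAL)
    general = sum(scores[d] for d in GENERAL) / len(GENERAL)
    return (professional + general) / 2  # overall = mean of the two pillars

# Domain scores of doubao-seed-1-6-thinking-250715 from the leaderboard.
doubao = {
    "education": 89.8, "medical_mental_health": 87.8, "finance": 84.1,
    "law_civil_service": 85.0, "reasoning_math": 90.0,
    "language_instruction_following": 88.5,
}
print(round(overall(doubao), 1))  # → 88.0, matching the table's overall score
```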
Full data:
Rank | Model | Org | Output price | Overall | Education | Medical & mental health | Finance | Law & civil service | Reasoning & math | Language & instruction following | |
---|---|---|---|---|---|---|---|---|---|---|---|
1 | doubao-seed-1-6-thinking-250715 ☛try it | Doubao | ¥8.0 | 88.0% | 89.8% | 87.8% | 84.1% | 85.0% | 90.0% | 88.5% | |
2 | hunyuan-t1-20250711 ☛try it | Tencent | ¥4.0 | 85.5% | 89.3% | 82.9% | 83.6% | 76.5% | 87.0% | 89.0% |
Full leaderboard: reasoning models
Rank | Model | Org | Output price | Overall | Education | Medical & mental health | Finance | Law & civil service | Reasoning & math | Language & instruction following | |
---|---|---|---|---|---|---|---|---|---|---|---|
1 | doubao-seed-1-6-thinking-250715 ☛try it | Doubao | ¥8.0 | 88.0% | 89.8% | 87.8% | 84.1% | 85.0% | 90.0% | 88.5% | |
2 | DeepSeek-R1-0528 ☛try it | DeepSeek | ¥16.0 | 84.4% | 82.6% | 80.6% | 79.0% | 81.0% | 88.5% | 87.6% |
Full leaderboard: commercial models priced ¥5 and above
Rank | Model | Org | Output price | Overall | Education | Medical & mental health | Finance | Law & civil service | Reasoning & math | Language & instruction following | |
---|---|---|---|---|---|---|---|---|---|---|---|
1 | hunyuan-t1-20250711 ☛try it | Tencent | ¥4.0 | 85.5% | 89.3% | 82.9% | 83.6% | 76.5% | 87.0% | 89.0% | |
2 | ERNIE-4.5-Turbo-32K ☛try it | Baidu | ¥3.2 | 83.6% | 85.6% | 91.5% | 85.8% | 81.5% | 74.9% | 87.1% |
Full leaderboard: commercial models priced ¥1-5
Rank | Model | Org | Output price | Overall | Education | Medical & mental health | Finance | Law & civil service | Reasoning & math | Language & instruction following | |
---|---|---|---|---|---|---|---|---|---|---|---|
1 | GLM-4.5-Flash ☛try it | Zhipu AI | ¥0.0 | 77.4% | 75.6% | 73.3% | 70.3% | 72.7% | 79.9% | 83.5% | |
2 | Doubao-1.5-lite-32k-250115 ☛try it | Doubao | ¥0.6 | 74.7% | 81.4% | 80.5% | 77.2% | 66.0% | 65.2% | 81.0% |
Full leaderboard: commercial models priced under ¥1
DIY leaderboard with custom dimension filters: ☛ link
Rank | Model | Org | Output price | Overall | Education | Medical & mental health | Finance | Law & civil service | Reasoning & math | Language & instruction following | |
---|---|---|---|---|---|---|---|---|---|---|---|
1 | Qwen3-4B ☛try it | Alibaba | ¥3.0 | 68.9% | 73.2% | 64.8% | 70.6% | 53.0% | 68.5% | 76.2% | |
2 | Qwen3-1.7B ☛try it | Alibaba | ¥3.0 | 60.4% | 58.5% | 51.7% | 59.1% | 46.0% | 61.1% | 73.0% |
Full leaderboard: open-source models under 5B
Rank | Model | Org | Output price | Overall | Education | Medical & mental health | Finance | Law & civil service | Reasoning & math | Language & instruction following | |
---|---|---|---|---|---|---|---|---|---|---|---|
1 | Qwen3-14B ☛try it | Alibaba | ¥2.0 | 75.9% | 80.0% | 75.6% | 80.2% | 66.2% | 73.8% | 79.0% | |
2 | Qwen3-8B ☛try it | Alibaba | ¥0.0 | 72.1% | 73.1% | 67.6% | 71.4% | 64.0% | 70.8% | 76.6% |
Full leaderboard: open-source models 5B-20B
Rank | Model | Org | Output price | Overall | Education | Medical & mental health | Finance | Law & civil service | Reasoning & math | Language & instruction following | |
---|---|---|---|---|---|---|---|---|---|---|---|
1 | DeepSeek-R1-0528 ☛try it | DeepSeek | ¥16.0 | 84.4% | 82.6% | 80.6% | 79.0% | 81.0% | 88.5% | 87.6% | |
2 | DeepSeek-V3.1-Think (new) ☛try it | DeepSeek | ¥12.0 | 84.3% | 85.0% | 80.5% | 82.8% | 82.0% | 86.2% | 85.9% |
Full leaderboard: open-source models over 20B
DIY leaderboard with custom dimension filters: ☛link
☛☛Full leaderboard: education
☛☛Full leaderboard: primary school subjects.
Chinese: leaderboard|badcase,
English: leaderboard|badcase,
Mathematics: leaderboard|badcase,
Morality & law: leaderboard|badcase,
Science: leaderboard|badcase
☛☛Full leaderboard: middle school subjects.
Biology: leaderboard|badcase,
Chemistry: leaderboard|badcase,
Chinese: leaderboard|badcase,
English: leaderboard|badcase,
Geography: leaderboard|badcase,
History: leaderboard|badcase,
Mathematics: leaderboard|badcase,
Physics: leaderboard|badcase,
Politics: leaderboard|badcase
☛☛Full leaderboard: high school subjects.
Biology: leaderboard|badcase,
Chemistry: leaderboard|badcase,
Chinese: leaderboard|badcase,
English: leaderboard|badcase,
Geography: leaderboard|badcase,
History: leaderboard|badcase,
Mathematics: leaderboard|badcase,
Physics: leaderboard|badcase,
Politics: leaderboard|badcase
Real questions from past Gaokao (national college entrance) exams, including simple questions, fill-in-the-blank, and multiple choice; only objective questions are kept. All scores are accuracy, where 100% means every question answered correctly (e.g., a math score of 100 means all math questions were correct). ☛☛Full leaderboard: Gaokao.
(1) 2025 Gaokao
Biology: leaderboard|badcase,
Chemistry: leaderboard|badcase,
Chinese: leaderboard|badcase,
English: leaderboard|badcase,
Geography: leaderboard|badcase,
History: leaderboard|badcase,
Mathematics: leaderboard|badcase,
Physics: leaderboard|badcase,
Politics: leaderboard|badcase.
(2) 2024 and earlier Gaokao
Biology: leaderboard|badcase,
Chemistry: leaderboard|badcase,
Chinese: leaderboard|badcase,
Geography: leaderboard|badcase,
History: leaderboard|badcase,
Mathematics: leaderboard|badcase,
Physics: leaderboard|badcase,
Politics: leaderboard|badcase.
☛☛Full leaderboard: medical & mental health
☛☛Full leaderboard: physicians
(1) Internal medicine, leaderboard
Internal medicine residency completion: leaderboard|badcase,
TCM internal medicine attending physician: leaderboard|badcase,
Internal medicine attending physician: leaderboard|badcase,
Cardiology & respiratory medicine attending physician: leaderboard|badcase,
Nephrology attending physician: leaderboard|badcase,
Gastroenterology attending physician: leaderboard|badcase,
Integrated TCM-Western internal medicine attending physician: leaderboard|badcase,
Gastroenterology senior title: leaderboard|badcase,
General internal medicine senior title: leaderboard|badcase,
Respiratory medicine senior title: leaderboard|badcase,
Cardiology senior title: leaderboard|badcase,
Tuberculosis attending physician: leaderboard|badcase,
Endocrinology senior title: leaderboard|badcase
(2) Surgery, leaderboard
Surgery residency completion: leaderboard|badcase,
Oral & maxillofacial surgery attending physician: leaderboard|badcase,
Plastic surgery attending physician: leaderboard|badcase,
Surgery attending physician: leaderboard|badcase,
General surgery senior title: leaderboard|badcase,
Orthopedics: leaderboard|badcase,
Orthopedics senior title: leaderboard|badcase
(3) Obstetrics & gynecology, leaderboard
OB/GYN residency completion: leaderboard|badcase,
OB/GYN attending physician: leaderboard|badcase,
OB/GYN deputy chief & chief physician title exam: leaderboard|badcase
(4) Pediatrics, leaderboard
Pediatrics residency completion: leaderboard|badcase,
Pediatrics attending physician: leaderboard|badcase,
Pediatric surgery: leaderboard|badcase
(5) Ophthalmology, leaderboard
Ophthalmology residency completion: leaderboard|badcase,
Ophthalmology attending physician: leaderboard|badcase
(6) Stomatology, leaderboard
Stomatology residency completion: leaderboard|badcase,
Licensed assistant dentist: leaderboard|badcase,
Licensed dentist: leaderboard|badcase,
Oral medicine attending physician: leaderboard|badcase,
Stomatology attending physician: leaderboard|badcase,
Prosthodontics attending physician: leaderboard|badcase,
Orthodontics attending physician: leaderboard|badcase
(7) Otolaryngology, leaderboard
Otolaryngology residency completion: leaderboard|badcase,
Otolaryngology attending physician: leaderboard|badcase
(8) Neurology & psychiatry, leaderboard
Neurology residency completion: leaderboard|badcase,
Neurology attending physician: leaderboard|badcase,
Psychiatry residency completion: leaderboard|badcase,
Psychiatry attending physician: leaderboard|badcase,
Psychotherapy attending physician: leaderboard|badcase,
Psychological counselor: leaderboard|badcase
(9) Dermatology, leaderboard
Dermatology residency completion: leaderboard|badcase,
Dermatology intermediate title: leaderboard|badcase,
Dermatology & venereology attending physician: leaderboard|badcase
(10) TCM & integrated TCM-Western medicine, leaderboard
Integrated TCM-Western licensed assistant physician: leaderboard|badcase,
TCM licensed assistant physician: leaderboard|badcase,
Integrated TCM-Western licensed physician: leaderboard|badcase,
TCM licensed physician: leaderboard|badcase,
TCM acupuncture attending physician: leaderboard|badcase
(11) Rehabilitation medicine, leaderboard
Rehabilitation medicine residency completion: leaderboard|badcase,
Rehabilitation medicine attending physician: leaderboard|badcase
(12) General practice, leaderboard
General practice residency completion: leaderboard|badcase,
General practice attending physician: leaderboard|badcase
(13) Clinical nutrition & critical care, leaderboard
Clinical licensed assistant physician: leaderboard|badcase,
Clinical licensed physician: leaderboard|badcase,
Rheumatology & clinical immunology attending physician: leaderboard|badcase,
Critical care medicine attending physician: leaderboard|badcase,
Nutrition attending physician: leaderboard|badcase,
Clinical pathology residency completion: leaderboard|badcase
(14) Oncology, leaderboard
Oncology attending physician: leaderboard|badcase
(15) Anesthesiology & pain medicine, leaderboard
Anesthesiology residency completion: leaderboard|badcase,
Anesthesiology attending physician: leaderboard|badcase,
Pain medicine attending physician: leaderboard|badcase
(16) Public health & occupational disease, leaderboard
Public health licensed assistant physician: leaderboard|badcase,
Public health licensed physician: leaderboard|badcase,
Hospital infection control intermediate title: leaderboard|badcase,
Infectious disease attending physician: leaderboard|badcase,
Preventive medicine attending physician: leaderboard|badcase,
Infectious disease intermediate title: leaderboard|badcase,
Occupational disease attending physician: leaderboard|badcase
☛☛Full leaderboard: nursing
Nurse licensing exam: leaderboard|badcase,
Junior nurse qualification exam: leaderboard|badcase,
Pediatric charge nurse: leaderboard|badcase,
Internal medicine nursing: leaderboard|badcase,
OB/GYN nursing: leaderboard|badcase,
OB/GYN charge nurse: leaderboard|badcase,
Surgical charge nurse: leaderboard|badcase,
Charge nurse qualification exam: leaderboard|badcase,
Internal medicine charge nurse: leaderboard|badcase,
Deputy chief & chief nurse qualification exam: leaderboard|badcase
☛☛Full leaderboard: pharmacists
Licensed pharmacist (Western medicine): leaderboard|badcase,
Licensed pharmacist (TCM): leaderboard|badcase,
Junior pharmacy assistant exam: leaderboard|badcase,
Junior pharmacist exam: leaderboard|badcase,
TCM pharmacy (assistant): leaderboard|badcase,
TCM pharmacy (pharmacist): leaderboard|badcase,
Charge pharmacist qualification exam: leaderboard|badcase,
Charge TCM pharmacist: leaderboard|badcase
☛☛Full leaderboard: medical technology
Ultrasound: leaderboard|badcase,
Ultrasound medicine attending physician: leaderboard|badcase,
Ultrasound medicine supervising technician: leaderboard|badcase,
Electrocardiography supervising technician: leaderboard|badcase,
Medical imaging: leaderboard|badcase,
Nuclear medicine attending physician: leaderboard|badcase,
Nuclear medicine supervising technician: leaderboard|badcase,
Radiology attending physician: leaderboard|badcase,
Radiologic technology (assistant): leaderboard|badcase,
Radiologic technology (technologist): leaderboard|badcase,
Radiation medicine supervising technician: leaderboard|badcase,
Laboratory technology (assistant): leaderboard|badcase,
Laboratory technology (technologist): leaderboard|badcase,
Microbiological testing supervising technician: leaderboard|badcase,
Physical & chemical testing supervising technician: leaderboard|badcase,
Clinical laboratory supervising technician: leaderboard|badcase,
Pathology attending physician: leaderboard|badcase,
Pathology supervising technician: leaderboard|badcase,
Pathology technology: leaderboard|badcase,
Rehabilitation therapy technology (assistant): leaderboard|badcase,
Rehabilitation therapy technology (therapist): leaderboard|badcase,
Rehabilitation therapy supervising technician: leaderboard|badcase,
Oncology technology (assistant): leaderboard|badcase,
Oncology technology (technologist): leaderboard|badcase,
Radiation oncology supervising technician: leaderboard|badcase,
Blood transfusion supervising technician: leaderboard|badcase,
Disinfection supervising technician: leaderboard|badcase,
Medical records supervising technician: leaderboard|badcase
(1) Basic medicine, leaderboard
Medical fundamentals ("three basics"): leaderboard|badcase,
Medical psychology: leaderboard|badcase,
Biochemistry & molecular biology: leaderboard|badcase,
Cell biology: leaderboard|badcase,
Medical immunology: leaderboard|badcase,
Immunology: leaderboard|badcase,
Pathophysiology: leaderboard|badcase,
Pathology: leaderboard|badcase,
Medical genetics: leaderboard|badcase,
Parasitology: leaderboard|badcase,
Human parasitology: leaderboard|badcase,
Systematic anatomy: leaderboard|badcase,
Anatomy: leaderboard|badcase,
Regional anatomy: leaderboard|badcase,
Bioinformatics: leaderboard|badcase,
Physiology: leaderboard|badcase,
Pharmacology: leaderboard|badcase,
Pharmaceutical analysis: leaderboard|badcase,
Medical microbiology: leaderboard|badcase,
Histology & embryology: leaderboard|badcase,
Medical statistics: leaderboard|badcase
(2) Clinical medicine, leaderboard
Clinical medicine: leaderboard|badcase,
Medical imaging: leaderboard|badcase,
Radiology: leaderboard|badcase,
Laboratory diagnostics: leaderboard|badcase,
Neurology: leaderboard|badcase,
Surgery: leaderboard|badcase,
Dermatology & venereology: leaderboard|badcase,
Pediatrics: leaderboard|badcase,
Nuclear medicine: leaderboard|badcase,
Physical diagnostics: leaderboard|badcase,
Cariology & endodontics: leaderboard|badcase,
Fundamentals of nursing: leaderboard|badcase,
Nursing: leaderboard|badcase,
Basic nursing: leaderboard|badcase,
Diagnostics: leaderboard|badcase,
Ultrasound medicine: leaderboard|badcase,
Oral nursing: leaderboard|badcase,
Evidence-based medicine: leaderboard|badcase,
Epidemiology: leaderboard|badcase,
Oral histopathology: leaderboard|badcase,
Infectious diseases: leaderboard|badcase,
Oral anatomy & physiology: leaderboard|badcase,
Anesthesiology: leaderboard|badcase,
Interventional radiology: leaderboard|badcase
(3) Preventive medicine & public health, leaderboard
Preventive medicine: leaderboard|badcase,
Hygiene: leaderboard|badcase,
Medical ethics: leaderboard|badcase
(4) TCM & Chinese pharmacology, leaderboard
TCM ophthalmology: leaderboard|badcase,
Jingui Yaolue (Synopsis of the Golden Chamber): leaderboard|badcase,
Basic theory of TCM: leaderboard|badcase,
TCM diagnostics: leaderboard|badcase,
Traditional Chinese medicine: leaderboard|badcase,
Warm disease theory: leaderboard|badcase,
History of Chinese medicine: leaderboard|badcase,
TCM internal medicine: leaderboard|badcase,
TCM pediatrics: leaderboard|badcase,
Shanghan Lun (Treatise on Cold Damage): leaderboard|badcase,
Neijing (Inner Canon) lectures: leaderboard|badcase
Medical postgraduate entrance exams, covering 5 tracks including surgical nursing, basic nursing, and Western medicine comprehensive; based on CMB. ☛☛Full leaderboard: medical postgraduate entrance exams.
(1) Surgical nursing: leaderboard|badcase,
(2) Basic nursing: leaderboard|badcase,
(3) Postgraduate politics: leaderboard|badcase,
(4) Western medicine comprehensive: leaderboard|badcase,
(5) TCM comprehensive: leaderboard|badcase
Currently 4 sub-items: psychology comprehensive, psychotherapy attending physician, psychological counselor, and medical psychology. ☛☛Full leaderboard: mental health.
(1) Psychology comprehensive: leaderboard|badcase,
(2) Psychotherapy attending physician: leaderboard|badcase,
(3) Psychological counselor: leaderboard|badcase,
(4) Medical psychology: leaderboard|badcase
☛☛Full leaderboard: finance
☛☛Full leaderboard: accounting.
Junior accounting title: leaderboard|badcase,
Certified public accountant: leaderboard|badcase,
Accounting qualification: leaderboard|badcase,
Auditor exam: leaderboard|badcase,
Certified tax agent: leaderboard|badcase,
Certified management accountant: leaderboard|badcase
☛☛Full leaderboard: banking.
Junior banking qualification: leaderboard|badcase,
Intermediate banking qualification: leaderboard|badcase,
Banking qualification: leaderboard|badcase
☛☛Full leaderboard: insurance.
Insurance qualification: leaderboard|badcase
☛☛Full leaderboard: securities.
Securities specialized exam: leaderboard|badcase,
Securities qualification: leaderboard|badcase
☛☛Full leaderboard: other financial qualification exams.
Junior economist: leaderboard|badcase,
Intermediate economist: leaderboard|badcase,
Counterfeit currency knowledge: leaderboard|badcase,
Futures qualification: leaderboard|badcase,
AFP financial planner: leaderboard|badcase,
Fund qualification: leaderboard|badcase,
Gold trading qualification: leaderboard|badcase,
Chinese actuary: leaderboard|badcase
☛☛Full leaderboard: financial fundamentals.
Finance: leaderboard|badcase,
Corporate strategy & risk management: leaderboard|badcase,
Macroeconomics: leaderboard|badcase,
Financial markets: leaderboard|badcase,
Accounting: leaderboard|badcase,
Cost accounting: leaderboard|badcase,
Money & banking: leaderboard|badcase,
Political economy: leaderboard|badcase,
Investments: leaderboard|badcase,
Econometrics: leaderboard|badcase,
Corporate finance: leaderboard|badcase,
Public finance: leaderboard|badcase,
Commercial banking: leaderboard|badcase,
Management accounting: leaderboard|badcase,
Central banking: leaderboard|badcase,
Auditing: leaderboard|badcase,
International economics: leaderboard|badcase,
Intermediate financial accounting: leaderboard|badcase,
Financial management: leaderboard|badcase,
Microeconomics: leaderboard|badcase,
International finance: leaderboard|badcase,
Financial engineering: leaderboard|badcase,
Economic law: leaderboard|badcase,
Advanced financial accounting: leaderboard|badcase,
Insurance: leaderboard|badcase
☛☛Full leaderboard: financial applications.
Insurance knowledge interpretation: leaderboard|badcase,
Financial term explanation: leaderboard|badcase,
Licensed physician exam: leaderboard|badcase,
Financial planning knowledge interpretation: leaderboard|badcase,
Licensed pharmacist exam: leaderboard|badcase,
Financial document extraction: leaderboard|badcase,
Research viewpoint extraction: leaderboard|badcase,
Financial sentiment recognition: leaderboard|badcase,
Insurance slot recognition: leaderboard|badcase,
Insurance intent understanding: leaderboard|badcase,
Financial intent understanding: leaderboard|badcase,
Insurance attribute extraction: leaderboard|badcase,
Insurance clause interpretation: leaderboard|badcase,
Financial product analysis: leaderboard|badcase,
Financial numerical computation: leaderboard|badcase,
Financial event interpretation: leaderboard|badcase,
Content generation - investor education scripts: leaderboard|badcase,
Content generation - text summarization: leaderboard|badcase,
Content generation - marketing copy: leaderboard|badcase,
Content generation - news headlines: leaderboard|badcase,
Safety & compliance - financial compliance: leaderboard|badcase,
Safety & compliance - financial issue identification: leaderboard|badcase,
Safety & compliance - information security compliance: leaderboard|badcase,
Safety & compliance - financial factuality: leaderboard|badcase
☛☛Full leaderboard: law & civil service
Multiple-choice questions, 1000 in total, based on AGIEval.
Full leaderboard: JEC-QA-KD; ☛see JEC-QA-KD badcase
Multiple-choice questions, 1000 in total, based on AGIEval.
Full leaderboard: JEC-QA-CA; ☛see JEC-QA-CA badcase
Full leaderboard: comprehensive law; ☛see comprehensive law badcase
Civil service exam (xingce aptitude test) multiple-choice questions, 651 in total, based on AGIEval. Sample item:
某乡镇进行新区规划,决定以市民公园为中心,在东南西北分别建设一个特色社区。这四个社区分别定为,文化区、休闲区、商业区和行政服务区。已知行政服务区在文化区的西南方向,文化区在休闲区的东南方向。
根据以上陈述,可以得出以下哪项?
(A)市民公园在行政服务区的北面 (B)休闲区在文化区的西南 (C)文化区在商业区的东北 (D)商业区在休闲区的东南
Full leaderboard: civil service exam
☛see civil service exam badcase
☛☛Full leaderboard: reasoning & math
Deductive reasoning (modus_tollens) multiple-choice questions, 123 in total, based on ISP.
Sample item:
考虑以下语句:
1.如果约翰是个好父母,那么约翰就是严格但公平的。2.约翰不严格但公平。 结论:因此,约翰不是一个好父母。 问题:根据陈述1.和2.,结论是否正确?
回答: (A) 否 (B) 是
Full leaderboard: deductive reasoning
☛see deductive reasoning badcase
Common-sense reasoning multiple-choice questions, 99 in total, based on ISP.
Sample item:
以下是关于常识的选择题。
问题:当某人把土豆放到篝火边的余烬中,此时余烬并没有在
A、释放热量 B、吸收热量
Full leaderboard: common-sense reasoning
☛see common-sense reasoning badcase
The symbolic-reasoning benchmark most widely used in academia, comprising 23 subtasks; see BBH for details. Sample item:
Task description: Answer questions about which times certain events could have occurred.
Q: Today, Emily went to the museum. Between what times could they have gone?
We know that:
Emily woke up at 1pm.
Elizabeth saw Emily reading at the library from 2pm to 4pm.
Jessica saw Emily watching a movie at the theater from 4pm to 5pm.
Leslie saw Emily waiting at the airport from 5pm to 6pm.
William saw Emily buying clothes at the mall from 6pm to 7pm.
The museum was closed after 7pm.
Between what times could Emily have gone to the museum?
Options:
(A) 1pm to 2pm (B) 6pm to 7pm (C) 5pm to 6pm (D) 2pm to 4pm
A:
Full leaderboard: BBH
☛see BBH symbolic reasoning badcase
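To make the temporal-sequences sample above concrete: the answer follows mechanically from subtracting the witnessed intervals from Emily's waking hours. The helper below is an illustrative sketch, not part of BBH or this benchmark's code.

```python
# Hypothetical solver for the BBH temporal-sequences sample above.
# Times are hours on a 24h clock; busy intervals come from the witnesses.
def free_slots(start, end, busy):
    """Return the gaps in [start, end) not covered by any busy interval."""
    slots, t = [], start
    for b, e in sorted(busy):
        if t < b:
            slots.append((t, b))  # gap before this busy interval
        t = max(t, e)
    if t < end:
        slots.append((t, end))    # trailing gap after the last interval
    return slots

# Emily woke at 1pm (13); the museum closed after 7pm (19).
busy = [(14, 16), (16, 17), (17, 18), (18, 19)]  # library, theater, airport, mall
print(free_slots(13, 19, busy))  # → [(13, 14)], i.e. option (A) 1pm to 2pm
```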
Tests a model's basic arithmetic: integer addition and subtraction within 1000, and floating-point addition, subtraction, multiplication, and division with at most 2 significant digits. Examples: 166 + 215 + 53 = ?, 0.97 + 0.4 / 4.51 = ?
Full leaderboard: arithmetic
☛see arithmetic badcase
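A minimal sketch of how items in this style could be generated and graded. The generator, ranges, and tolerance below are assumptions for illustration, not the benchmark's actual code.

```python
import random

def make_int_item(rng: random.Random) -> tuple[str, int]:
    """Integer addition within 1000, in the style of '166 + 215 + 53 = ?'."""
    a, b, c = (rng.randint(0, 999) for _ in range(3))
    return f"{a} + {b} + {c} = ?", a + b + c

def make_float_item(rng: random.Random) -> tuple[str, float]:
    """Float arithmetic with 2 significant digits, as in '0.97 + 0.4 / 4.51 = ?'."""
    a, b, c = (round(rng.uniform(0.1, 9.9), 2) for _ in range(3))
    return f"{a} + {b} / {c} = ?", a + b / c

def grade(expected: float, answer: float, tol: float = 1e-2) -> bool:
    """Accept an answer within a small absolute tolerance (assumed policy)."""
    return abs(expected - answer) <= tol

rng = random.Random(0)
question, answer = make_int_item(rng)
print(question, answer)
```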
Tests a model's ability to understand and analyze tables, a skill commonly used in data analysis.
Sample item:
姓名,年龄,性别,国籍,身高(cm),体重(kg),学历
张三,28,男,中国,180,70,本科
Lisa,33,女,美国,165,58,硕士
Paulo,41,男,巴西,175,80,博士
Miyuki,25,女,日本,160,50,大专
Ahmed,30,男,埃及,175,68,本科
Maria,29,女,墨西哥,170,65,硕士
Antonio,36,男,西班牙,182,75,博士
基于这个表格回答:学历最低的是哪国人?
Full leaderboard: table QA
☛see table QA badcase
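The sample above has a checkable gold answer, which a few lines with Python's csv module can reproduce. The education-level ordering is an assumption about how the levels rank (junior college < bachelor < master < doctorate).

```python
import csv, io

# The sample table from the prompt above, embedded verbatim.
TABLE = """姓名,年龄,性别,国籍,身高(cm),体重(kg),学历
张三,28,男,中国,180,70,本科
Lisa,33,女,美国,165,58,硕士
Paulo,41,男,巴西,175,80,博士
Miyuki,25,女,日本,160,50,大专
Ahmed,30,男,埃及,175,68,本科
Maria,29,女,墨西哥,170,65,硕士
Antonio,36,男,西班牙,182,75,博士
"""

# Assumed ordering: 大专 (junior college) < 本科 < 硕士 < 博士.
EDU_RANK = {"大专": 0, "本科": 1, "硕士": 2, "博士": 3}

rows = list(csv.DictReader(io.StringIO(TABLE)))
lowest = min(rows, key=lambda r: EDU_RANK[r["学历"]])
print(lowest["姓名"], lowest["国籍"])  # → Miyuki 日本 (Japan)
```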
Tests a model's ability to analyze and summarize tables, useful in data analysis and report writing. There is no single gold answer, but output quality is relatively easy to judge objectively. Sample item (long; some data omitted):
类别 | 机构 | 大模型 | 准确率 | 平均耗时 | 平均消耗token | 花费/千次(元) | 排名(准确率)
---|---|---|---|---|---|---|---
商用 | 豆包 | doubao-seed-1-6-thinking-250715 | 87.5 | 37s | 1976 | 14.6 | 1
商用 | 百度 | ERNIE-4.5-Turbo-32K | 84.7 | 33s | 676 | 1.8 | 2
商用 | 腾讯 | hunyuan-t1-20250711 | 84.7 | 37s | 2465 | 9.2 | 3
商用 | 腾讯 | hunyuan-turbos-20250716 | 83.9 | 24s | 1288 | 2.3 | 4
…… | …… | …… | …… | …… | …… | …… | ……
已知新模型为:GLM-4.5,GLM-4.5-Air,GLM-4.5-Flash,step-3。
基于以上表格写一段总结,格式为:“xx机构、xx机构……占据前5(机构名不要重复),然后描述开源模型和商用模型的分布。新模型中,xx排第xx,xx排第xx……(排名由高到低)”。严格按照表格中的模型名称、机构名称。
Full leaderboard: table summarization
☛see table summarization badcase
Problems from the 2024 preliminary round, based on Math24o. Sample item:
设集合 $S=\{1, 2, 3, \cdots, 997, 998\}$,集合 $S$ 的 $k$ 个 $499$ 元子集 $A_{1},A_{2},\cdots,A_{k}$ 满足:对 $S$ 中任一二元子集 $B$,均存在 $i \in \{1, 2, \cdots, k\}$,使得 $B \subset A_{i}$。求 $k$ 的最小值。
Full leaderboard: high school math olympiad
☛see high school math olympiad badcase
Full leaderboard: primary school math olympiad
☛see primary school math olympiad badcase
Full leaderboard: Sudoku
☛see Sudoku badcase
☛☛Full leaderboard: language & instruction following
Given a context, choose the idiom that best fits it.
Sample item:
说完作品的优点,咱们再来聊聊为何说它最后的结局____,片子本身提出的话题观点很尖锐,“扶弟魔”也成为众多当代年轻人婚姻里的不定因素,所以对于这种过于敏感的东西,片子的结局仅仅只是以弟弟的可爱化解了姐姐的心结,最后选择陪伴照顾...
给上文空格处选择最合适的成语或俗语:
(A) 有条有理 (B) 偏听偏信 (C) 狗尾续貂 (D) 半壁江山 (E) 身家性命 (F) 胆小如鼠 (G) 独善其身
Full leaderboard: idiom comprehension
☛see idiom comprehension badcase
Classify the sentiment of a user review as negative or positive.
Sample item:
用了几天,发现很多问题,无线网容易掉线,屏幕容易刮花,打开网页容易死掉,不值的买
以上用户评论是正面还是负面?
(A) 负面 (B) 正面
Full leaderboard: sentiment analysis
☛see sentiment analysis badcase
Textual entailment: judge the semantic relation between two sentences as entailment, neutral, or contradiction; based on OCNLI.
Sample item:
句子一:农机具购置补贴覆盖到全国所有农牧业县(场),中央财政拟安排资金130亿元,比上年增加90亿元
句子二:按农民人数发放补贴
以上两个句子是什么关系?
(A)蕴含 (B)中立 (C)矛盾
Full leaderboard: textual entailment
☛see textual entailment badcase
Sample item:
将下列单词按词性分类。
狗,追,跑,大人,高兴,树
Full leaderboard: text classification
☛see text classification badcase
Sample item:
“中信银行3亿元,交通银行增长约2.7亿元,光大银行约1亿元。”
提取出以上文本中的所有组织机构名称
Full leaderboard: information extraction
☛see information extraction badcase
Reading comprehension is a composite capability that tests understanding of given information.
Depending on the kind of information given, it can be subdivided into document QA, table QA, dialogue QA, and so on.
Sample item:
牙医:好的,让我们看看你的牙齿。从你的描述和我们的检查结果来看,你可能有一些牙齦疾病,导致牙齿的神经受到刺激,引起了敏感。此外,这些黑色斑点可能是蛀牙。
病人:哦,真的吗?那我该怎么办?
牙医:别担心,我们可以为你制定一个治疗计划。我们需要首先治疗牙龈疾病,然后清除蛀牙并填充牙洞。在此过程中,我们将确保您感到舒适,并使用先进的技术和材料来实现最佳效果。
病人:好的,谢谢您,医生。那么我什么时候可以开始治疗?
牙医:让我们为您安排一个约会。您的治疗将在两天后开始。在此期间,请继续刷牙,使用牙线,并避免吃过于甜腻和酸性的食物和饮料。
病人:好的,我会的。再次感谢您,医生。
牙医:不用谢,我们会尽最大的努力帮助您恢复健康的牙齿。
基于以上对话回答:病人在检查中发现的牙齿问题有哪些?
Full leaderboard: reading comprehension
☛see reading comprehension badcase
Chinese coreference resolution, based on CLUEWSC2020. Sample item:
少平仍然不知道怎样给奶奶说清他姐夫的事,就只好随口说:“他犯了点错误,人家让他劳教!”
上述文本中的“他犯了点错误”中的“他”是指少平吗? 选项:(A)是 (B)否
Full leaderboard: pronoun understanding
☛see pronoun understanding badcase
Classical Chinese poetry matching: given a modern-Chinese description of a line of classical poetry, choose which of four candidate lines matches it semantically. Correct options are built from a parallel corpus of classical poems and modern translations, and distractors are retrieved by similarity from a classical poetry corpus. Based on CCPM. Sample item:
昏暗的灯熄灭了又被重新点亮。
上述文本最匹配下面哪句诗:
(A)渔灯灭复明 (B)残灯灭又然 (C)残灯暗复明 (D)残灯灭又明
Full leaderboard: poetry matching
☛see poetry matching badcase
Based on Google's IFEval, translated and adapted to Chinese; a curated set of 25 instruction types in 9 categories, described below:
Full leaderboard: IFEval
☛see Chinese instruction following badcase
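IFEval-style suites score responses with programmatic verifiers rather than a judge model: each instruction type has a rule that can be checked mechanically. The checkers below are hypothetical examples of such rules, not this benchmark's actual instruction set.

```python
import re

def check_min_chars(response: str, n: int) -> bool:
    """Length constraint: response must contain at least n characters."""
    return len(response) >= n

def check_keyword(response: str, kw: str) -> bool:
    """Keyword constraint: response must mention the given keyword."""
    return kw in response

def check_bullet_count(response: str, n: int) -> bool:
    """Format constraint: response must contain exactly n '- ' bullet lines."""
    return len(re.findall(r"^- ", response, flags=re.M)) == n

resp = "- 第一点\n- 第二点\n- 第三点"
print(check_bullet_count(resp, 3), check_keyword(resp, "第一"))  # → True True
```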
Full leaderboard: Chinese character glyphs
☛see Chinese character glyphs badcase
Scoring method: each capability dimension corresponds to an evaluation set containing a number of questions. Each question is scored 1-5 based on the quality of the model's response; the scores within a set are summed and normalized to a 100-point scale to produce the final score.
All scoring data: see alldata
The evaluation sets for each dimension and the models' raw outputs are in this project's eval directory
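One plausible reading of the scoring rule above, sketched in Python. Normalizing by dividing the summed ratings by the maximum possible total is an assumption about how "normalized to a 100-point scale" is computed, and the item ratings shown are made up.

```python
def dimension_score(item_ratings: list[int]) -> float:
    """Sum per-item ratings (each 1-5) and normalize to a 100-point scale."""
    assert all(1 <= r <= 5 for r in item_ratings), "each item is rated 1-5"
    # Normalize by the maximum possible total (assumed interpretation).
    return sum(item_ratings) * 100 / (5 * len(item_ratings))

# Made-up ratings for a 5-question evaluation set; all 5s would score 100.0.
print(dimension_score([5, 5, 4, 3, 5]))  # → 88.0
```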
- Large models are flourishing, but quality varies widely. Media coverage often exaggerates and cherry-picks, muddying the waters, while some companies overstate their models' abilities for PR, claiming at every turn to have "reached ChatGPT level" or to be "No. 1 in China". As the saying goes, outsiders watch the spectacle while insiders watch the craft: the field needs to set aside the hype, settle down to refine frontier technology, and let real technical strength speak. That calls for an open, fair, and impartial evaluation system that lays out each model's strengths and weaknesses one by one. With it, everyone can gauge the current state of development and the gap to top international systems, and see the direction of future work clearly, instead of being swept along by waves of capital and public opinion.
- For industry, especially companies without in-house LLM R&D capability, understanding the technical boundaries of large models and making efficient, well-targeted model selections is more important now than ever. An open, fair, and impartial evaluation system provides exactly that support: it avoids reinventing the wheel and spares teams the pointless arguments caused by mismatched tech stacks, where people end up talking past each other.
- For LLM developers, including anyone interested in the technology and practice-minded academics, comparisons of model performance reflect the effectiveness of the different technical routes and methods behind them, which makes for a valuable reference. Cross-model reference and borrowing helps everyone avoid unnecessary pitfalls and the waste of redundant experiments, supporting the healthy and efficient development of the whole LLM ecosystem.
ml-engineering
This repository provides a comprehensive collection of methodologies, tools, and step-by-step instructions for successful training of large language models (LLMs) and multi-modal models. It is a technical resource suitable for LLM/VLM training engineers and operators, containing numerous scripts and copy-n-paste commands to facilitate quick problem-solving. The repository is an ongoing compilation of the author's experiences training BLOOM-176B and IDEFICS-80B models, and currently focuses on the development and training of Retrieval Augmented Generation (RAG) models at Contextual.AI. The content is organized into six parts: Insights, Hardware, Orchestration, Training, Development, and Miscellaneous. It includes key comparison tables for high-end accelerators and networks, as well as shortcuts to frequently needed tools and guides. The repository is open to contributions and discussions, and is licensed under Attribution-ShareAlike 4.0 International.

PaddleNLP
PaddleNLP is an easy-to-use and high-performance NLP library. It aggregates high-quality pre-trained models in the industry and provides out-of-the-box development experience, covering a model library for multiple NLP scenarios with industry practice examples to meet developers' flexible customization needs.

auto-round
AutoRound is an advanced weight-only quantization algorithm for low-bits LLM inference. It competes impressively against recent methods without introducing any additional inference overhead. The method adopts sign gradient descent to fine-tune rounding values and minmax values of weights in just 200 steps, often significantly outperforming SignRound with the cost of more tuning time for quantization. AutoRound is tailored for a wide range of models and consistently delivers noticeable improvements.

all-in-rag
All-in-RAG is a comprehensive repository for all things related to Randomized Algorithms and Graphs. It provides a wide range of resources, including implementations of various randomized algorithms, graph data structures, and visualization tools. The repository aims to serve as a one-stop solution for researchers, students, and enthusiasts interested in exploring the intersection of randomized algorithms and graph theory. Whether you are looking to study theoretical concepts, implement algorithms in practice, or visualize graph structures, All-in-RAG has got you covered.

eval-assist
EvalAssist is an LLM-as-a-Judge framework built on top of the Unitxt open source evaluation library for large language models. It provides users with a convenient way of iteratively testing and refining LLM-as-a-judge criteria, supporting both direct (rubric-based) and pairwise assessment paradigms. EvalAssist is model-agnostic, supporting a rich set of off-the-shelf judge models that can be extended. Users can auto-generate a Notebook with Unitxt code to run bulk evaluations and save their own test cases. The tool is designed for evaluating text data using language models.

agentic
Agentic is a lightweight and flexible Python library for building multi-agent systems. It provides a simple and intuitive API for creating and managing agents, defining their behaviors, and simulating interactions in a multi-agent environment. With Agentic, users can easily design and implement complex agent-based models to study emergent behaviors, social dynamics, and decentralized decision-making processes. The library supports various agent architectures, communication protocols, and simulation scenarios, making it suitable for a wide range of research and educational applications in the fields of artificial intelligence, machine learning, social sciences, and robotics.

infinity
Infinity is an AI-native database designed for LLM applications, providing incredibly fast full-text and vector search capabilities. It supports a wide range of data types, including vectors, full-text, and structured data, and offers a fused search feature that combines multiple embeddings and full text. Infinity is easy to use, with an intuitive Python API and a single-binary architecture that simplifies deployment. It achieves high performance, with 0.1 milliseconds query latency on million-scale vector datasets and up to 15K QPS.

LMCache
LMCache is a serving engine extension designed to reduce time to first token (TTFT) and increase throughput, particularly in long-context scenarios. It stores key-value caches of reusable texts across different locations like GPU, CPU DRAM, and Local Disk, allowing the reuse of any text in any serving engine instance. By combining LMCache with vLLM, significant delay savings and GPU cycle reduction are achieved in various large language model (LLM) use cases, such as multi-round question answering and retrieval-augmented generation (RAG). LMCache provides integration with the latest vLLM version, offering both online serving and offline inference capabilities. It supports sharing key-value caches across multiple vLLM instances and aims to provide stable support for non-prefix key-value caches along with user and developer documentation.

LLM-Fine-Tuning
This GitHub repository contains examples of fine-tuning open source large language models. It showcases the process of fine-tuning and quantizing large language models using efficient techniques like Lora and QLora. The repository serves as a practical guide for individuals looking to optimize the performance of language models through fine-tuning.

Fast-dLLM
Fast-DLLM is a diffusion-based Large Language Model (LLM) inference acceleration framework that supports efficient inference for models like Dream and LLaDA. It offers fast inference support, multiple optimization strategies, code generation, evaluation capabilities, and an interactive chat interface. Key features include Key-Value Cache for Block-Wise Decoding, Confidence-Aware Parallel Decoding, and overall performance improvements. The project structure includes directories for Dream and LLaDA model-related code, with installation and usage instructions provided for using the LLaDA and Dream models.

RAGElo
RAGElo is a streamlined toolkit for evaluating Retrieval Augmented Generation (RAG)-powered Large Language Models (LLMs) question answering agents using the Elo rating system. It simplifies the process of comparing different outputs from multiple prompt and pipeline variations to a 'gold standard' by allowing a powerful LLM to judge between pairs of answers and questions. RAGElo conducts tournament-style Elo ranking of LLM outputs, providing insights into the effectiveness of different settings.

llm_recipes
This repository showcases the author's experiments with Large Language Models (LLMs) for text generation tasks. It includes dataset preparation, preprocessing, model fine-tuning using libraries such as Axolotl and HuggingFace, and model evaluation.

rag-in-action
rag-in-action is a GitHub repository that provides a practical course structure for developing a RAG system based on DeepSeek. The repository likely contains resources, code samples, and tutorials to guide users through the process of building and implementing a RAG system using DeepSeek technology. Users interested in learning about RAG systems and their development may find this repository helpful in gaining hands-on experience and practical knowledge in this area.

www-project-top-10-for-large-language-model-applications
The OWASP Top 10 for Large Language Model Applications is a standard awareness document for developers and web application security, providing practical, actionable, and concise security guidance for applications utilizing Large Language Model (LLM) technologies. The project aims to make application security visible and bridge the gap between general application security principles and the specific challenges posed by LLMs. It offers a comprehensive guide to navigate potential security risks in LLM applications, serving as a reference for both new and experienced developers and security professionals.

EvoAgentX
EvoAgentX is an open-source framework for building, evaluating, and evolving LLM-based agents or agentic workflows in an automated, modular, and goal-driven manner. It enables developers and researchers to move beyond static prompt chaining or manual workflow orchestration by introducing a self-evolving agent ecosystem. The framework includes features such as agent workflow autoconstruction, built-in evaluation, self-evolution engine, plug-and-play compatibility, comprehensive built-in tools, memory module support, and human-in-the-loop interactions.
For similar tasks

Azure-Analytics-and-AI-Engagement
The Azure-Analytics-and-AI-Engagement repository provides packaged Industry Scenario DREAM Demos with ARM templates (Containing a demo web application, Power BI reports, Synapse resources, AML Notebooks etc.) that can be deployed in a customer’s subscription using the CAPE tool within a matter of few hours. Partners can also deploy DREAM Demos in their own subscriptions using DPoC.

sorrentum
Sorrentum is an open-source project that aims to combine open-source development, startups, and brilliant students to build machine learning, AI, and Web3 / DeFi protocols geared towards finance and economics. The project provides opportunities for internships, research assistantships, and development grants, as well as the chance to work on cutting-edge problems, learn about startups, write academic papers, and get internships and full-time positions at companies working on Sorrentum applications.

tidb
TiDB is an open-source distributed SQL database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads. It is MySQL compatible and features horizontal scalability, strong consistency, and high availability.

zep-python
Zep is an open-source platform for building and deploying large language model (LLM) applications. It provides a suite of tools and services that make it easy to integrate LLMs into your applications, including chat history memory, embedding, vector search, and data enrichment. Zep is designed to be scalable, reliable, and easy to use, making it a great choice for developers who want to build LLM-powered applications quickly and easily.

telemetry-airflow
This repository codifies the Airflow cluster that is deployed at workflow.telemetry.mozilla.org (behind SSO) and commonly referred to as "WTMO" or simply "Airflow". Some links relevant to users and developers of WTMO:
- The `dags` directory in this repository contains some custom DAG definitions
- Many of the DAGs registered with WTMO don't live in this repository, but are instead generated from ETL task definitions in bigquery-etl
- The Data SRE team maintains a WTMO Developer Guide (behind SSO)

mojo
Mojo is a new programming language that bridges the gap between research and production by combining Python syntax and ecosystem with systems programming and metaprogramming features. Mojo is still young, but it is designed to become a superset of Python over time.

pandas-ai
PandasAI is a Python library that makes it easy to ask questions to your data in natural language. It helps you to explore, clean, and analyze your data using generative AI.
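As a rough illustration of the natural-language workflow, the sketch below shows a question over a small DataFrame. The `SmartDataframe` usage in the comments follows PandasAI's documented API as best recalled (it varies across releases and needs an LLM key to run), so treat it as an assumption; the final line shows the kind of pandas operation the library generates behind the scenes.

```python
# Hypothetical sketch of asking PandasAI a question about tabular data.
# The commented-out SmartDataframe calls are assumptions about pandasai's API;
# check the project docs for the exact interface of your installed version.
import pandas as pd

sales = pd.DataFrame({
    "country": ["US", "UK", "JP"],
    "revenue": [5000, 3200, 2900],
})

# from pandasai import SmartDataframe
# sdf = SmartDataframe(sales, config={"llm": llm})  # llm: any supported LLM wrapper
# sdf.chat("Which country has the highest revenue?")

# Under the hood, the library translates the question into pandas code, roughly:
answer = sales.loc[sales["revenue"].idxmax(), "country"]
```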

databend
Databend is an open-source cloud data warehouse that serves as a cost-effective alternative to Snowflake. With its focus on fast query execution and data ingestion, it's designed for complex analysis of the world's largest datasets.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud-native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise-level infrastructure that can power any LLM production use case. Here are some use cases for BricksLLM:
- Set LLM usage limits for users on different pricing tiers
- Track LLM usage on a per-user and per-organization basis
- Block or redact requests containing PII
- Improve LLM reliability with failovers, retries and caching
- Distribute API keys with rate limits and cost limits for internal development/production use cases
- Distribute API keys with rate limits and cost limits for students
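To make the key-distribution use case concrete, here is a minimal sketch of the JSON body one might send to the gateway's key-management endpoint to provision a rate- and cost-limited key. The field names and endpoint path are assumptions based on BricksLLM's README, not a verified contract; confirm them against the project documentation before use.

```python
# Illustrative (unverified) payload for provisioning a BricksLLM API key with
# a per-minute rate limit and a hard spend cap. Field names are assumptions.
import json

key_request = {
    "name": "student-key",
    "key": "my-secret-key",       # the value clients will present to the gateway
    "tags": ["students"],
    "rateLimitOverTime": 2,       # at most 2 requests...
    "rateLimitUnit": "m",         # ...per minute
    "costLimitInUsd": 0.25,       # block the key once spend exceeds this cap
}

body = json.dumps(key_request)
# e.g. PUT http://localhost:8001/api/key-management/keys with this JSON body
```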

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.
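The "tasks on a schedule" pattern that uAgents exposes through its `@agent.on_interval` decorator can be sketched with plain asyncio. This is a stdlib-only illustration of the pattern, not the uagents library itself; the names `tick`, `period`, and `repeats` are illustrative, not part of uAgents' API.

```python
# Stdlib-only sketch of periodic agent work, analogous to a handler registered
# with uAgents' @agent.on_interval(period=...). Not the actual uagents API.
import asyncio

ticks = []

async def tick(period: float, repeats: int) -> None:
    """Run a small piece of work every `period` seconds, `repeats` times."""
    for i in range(repeats):
        ticks.append(i)              # stand-in for the agent's scheduled task
        await asyncio.sleep(period)  # wait until the next interval

asyncio.run(tick(period=0.01, repeats=3))
```

In uAgents proper, the decorated handler receives a `Context` object and the event loop is managed by `agent.run()`, but the scheduling behavior is the same shape as above.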

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include:
- Structures (Agents, Pipelines, and Workflows)
- Tasks
- Tools
- Memory (Conversation Memory, Task Memory, and Meta Memory)
- Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers)
- Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines)
- Additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers)
Griptape enables developers to create AI-powered applications with ease and efficiency.