llm_benchmark


Stars: 65


The 'llm_benchmark' repository is a personal evaluation project that tracks large language models over time using a private question bank, testing their logic, mathematics, programming, and human intuition. The evaluation is neither authoritative nor comprehensive; it aims only to observe the long-term evolution of different models. The question bank is small (about 30 questions / 240 test cases), is not drawn from publicly available internet questions, and is rotated monthly; the intent is to share an evaluation approach and personal insights. Users should assess models against their own needs rather than blindly trusting any benchmark. Because questions change monthly, each model's score may vary by roughly ±4 points per month, but the overall ranking remains stable.

README:

Large Model Evaluation Records

Overview

  1. This evaluation is a personal project that tracks models long-term using a privately maintained, rolling-updated question bank.
  2. It focuses on testing models on logic, mathematics, programming, and human-intuition problems. It is neither authoritative nor comprehensive; it observes the long-term evolution of large models from one angle only.
  3. The question bank is small, kept within 30 questions / 240 test cases, and uses no publicly available internet questions. Questions are rotated monthly and are not published; the intent is to share an evaluation approach and personal insights. Everyone should evaluate large models against their own needs and should never blindly trust any benchmark.
  4. Because questions are added and removed each month, each model's score may shift by roughly ±4 points between monthly leaderboards; this is normal. The overall ranking remains stable.

Evaluation Method

  1. Each question has several scoring points: for some questions, each passing test case earns 1 point; for others, each qualifying piece of data/text produced earns 1 point. Every question has at least 1 scoring point. The final score is the points earned divided by the total number of scoring points, multiplied by 10 (so each question is worth 10 points).
  2. The derivation must be correct; a correctly guessed answer earns no points. Some questions carry extra requirements, and outputting superfluous answers is penalized to discourage enumeration.
  3. Answers must fully comply with the question's requirements. If a question explicitly forbids explanations, code, etc., and the answer includes them, it scores 0 even if correct.
  4. All evaluations use the official API with temperature set to 0.1 and all other parameters at their defaults. Models that do not offer an API are tested through the official web chat. Each question is run 3 times and the highest score is kept.
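The per-question scoring rule above can be sketched in a few lines of Python. This is a minimal illustration, not code from the repository; the function names and parameters are hypothetical.

```python
# Hypothetical sketch of the scoring rule described above:
# score = (points earned / total scoring points) * 10, with a guessed
# answer or a format violation scoring 0, and each question run 3 times,
# keeping the best result.

def score_question(points_earned: int, total_points: int,
                   derivation_correct: bool = True,
                   format_followed: bool = True) -> float:
    """Score a single question on a 0-10 scale."""
    if not (derivation_correct and format_followed):
        return 0.0  # guessed answers and format violations earn nothing
    return points_earned / total_points * 10

def best_of_runs(run_scores: list[float]) -> float:
    """Each question is tested 3 times; the highest score counts."""
    return max(run_scores)

# Example: 6 of 8 scoring points earned -> 7.5 out of 10
print(score_question(6, 8))            # 7.5
print(best_of_runs([5.0, 7.5, 6.25]))  # 7.5
```

A model's leaderboard total would then be the sum of its best-of-three scores over all questions, which is why adding or removing questions shifts totals by a few points month to month.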

Update Schedule

The leaderboard for each month is published at month end. First-look tests of newly released models are posted immediately on my personal Zhihu account: 知乎主页
