EuroEval

The robust European language model benchmark.

EuroEval, formerly known as ScandEval, is a robust benchmarking framework for pretrained language models across a range of tasks and European languages. Models can be benchmarked from the command line, from a Python script, or via Docker, and evaluations can be run both online and offline. The datasets used in the benchmark can be reproduced with scripts provided in the repository, and the project welcomes contributions, with guidelines both for general contributions and for adding new datasets.

README:

The robust European language model benchmark

(formerly known as ScandEval)



Installation and usage

See the documentation for more information.
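As a quick sketch of a typical workflow (the euroeval package is published on PyPI; the CLI invocation below follows the documentation, but treat the exact flags as an assumption and verify them there):

# Install the package from PyPI
pip install euroeval

# Benchmark a model from the command line;
# <model-id> is a placeholder for a Hugging Face model ID
euroeval --model <model-id>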

Reproducing the evaluation datasets

All datasets used in this project are generated using the scripts located in the src/scripts folder. To reproduce a dataset, run the corresponding script with the following command

uv run src/scripts/<name-of-script>.py

Replace <name-of-script> with the specific script you wish to execute, e.g.,

uv run src/scripts/create_allocine.py
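The uv run commands above assume the uv package manager is installed. If it is not, it can be set up first, e.g. via Astral's standard installer (shown here as one option; pip install uv also works):

# One-time setup: install the uv package manager
curl -LsSf https://astral.sh/uv/install.sh | sh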

Contributors 🙏

A huge thank you to all the contributors who have helped make this project a success!

@peter-sk, @AJDERS, @oliverkinch, @versae, @KennethEnevoldsen, @viggo-gascou, @mathiasesn, @Alkarex, @marksverdhei, @Mikeriess, @ThomasKluiters, @BramVanroy, @peregilk, @Rijgersberg, @duarteocarmo, @slowwavesleep, @mrkowalski, @simonevanbruggen, @tvosch, @Touzen, @caldaibis, @SwekeR-463

Contribute to EuroEval

We welcome contributions to EuroEval! Whether you're fixing bugs, adding features, or contributing new datasets, your help makes this project better for everyone.

  • General contributions: Check out our contribution guidelines for information on how to get started.
  • Adding datasets: If you're interested in adding a new dataset to EuroEval, we have a dedicated guide with step-by-step instructions.

Special thanks

  • Thanks to Google for sponsoring Gemini credits as part of their Google Cloud for Researchers Program.
  • Thanks to @Mikeriess for evaluating many of the larger models on the leaderboards.
  • Thanks to OpenAI for sponsoring OpenAI credits as part of their Researcher Access Program.
  • Thanks to UWV and KU Leuven for sponsoring the Azure OpenAI credits used to evaluate GPT-4-turbo in Dutch.
  • Thanks to Miðeind for sponsoring the OpenAI credits used to evaluate GPT-4-turbo in Icelandic and Faroese.
  • Thanks to CHC for sponsoring the OpenAI credits used to evaluate GPT-4-turbo in German.

Citing EuroEval

If you want to cite the framework, feel free to use either of the following:

@article{smart2024encoder,
  title={Encoder vs Decoder: Comparative Analysis of Encoder and Decoder Language Models on Multilingual NLU Tasks},
  author={Smart, Dan Saattrup and Enevoldsen, Kenneth and Schneider-Kamp, Peter},
  journal={arXiv preprint arXiv:2406.13469},
  year={2024}
}
@inproceedings{smart2023scandeval,
  author = {Smart, Dan Saattrup},
  booktitle = {Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)},
  month = may,
  pages = {185--201},
  title = {{ScandEval: A Benchmark for Scandinavian Natural Language Processing}},
  year = {2023}
}
