UHGEval
[ACL 2024] User-friendly evaluation framework: Eval Suite & Benchmarks: UHGEval, HaluEval, HalluQA, etc.
Stars: 178
UHGEval is a comprehensive framework for evaluating hallucination phenomena in large language models. It comprises the UHGEval benchmark itself, the XinhuaHallucinations dataset, and the UHGEval-dataset pipeline for building XinhuaHallucinations. The framework is flexible and extensible, supporting common hallucination evaluation tasks across a variety of models and datasets, and researchers can use the open-source pipeline to create customized datasets. Supported tasks include QA, dialogue, summarization, and multiple-choice.
README:
🍄 UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation
What does this repository include?
UHGEval: An unconstrained hallucination evaluation benchmark.
Eval Suite: A user-friendly evaluation framework for hallucination tasks.
Eval Suite supports other benchmarks, such as HalluQA and HaluEval.
```bash
# Install Eval Suite
conda create -n uhg python=3.10
conda activate uhg
pip install eval-suite

# Run evaluations with an OpenAI-compatible API
eval_suite eval openai \
    --model_name gpt-4o \
    --api_key your_api_key \
    --base_url https://api.openai.com/v1 \
    --evaluators ExampleQAEvaluator UHGSelectiveEvaluator

# Or run evaluations with Hugging Face Transformers
eval_suite eval huggingface \
    --model_name_or_path Qwen/Qwen2-0.5B-Instruct \
    --apply_chat_template \
    --evaluators ExampleQAEvaluator UHGSelectiveEvaluator

# After evaluation, gather statistics of the evaluation results
eval_suite stat

# List all available evaluators
eval_suite list

# Get help
eval_suite --help
```
> [!Tip]
> - Refer to demo.ipynb for more detailed examples.
> - Run `export HF_ENDPOINT=https://hf-mirror.com` to use the Chinese mirror if you cannot connect to Hugging Face.
> - SiliconFlow provides free API keys for many models; you can apply for one at https://siliconflow.cn/pricing.
UHGEval is a large-scale benchmark designed for evaluating hallucination in professional Chinese content generation. It builds on unconstrained text generation and hallucination collection, incorporating both automatic annotation and manual review.
UHGEvalDataset. UHGEval contains two dataset versions: the full version includes 5,141 data items, while a concise version of 1,000 items has been created for more efficient evaluation. Below is an example from UHGEvalDataset.
Example
```json
{
"id": "num_000432",
"headLine": "(社会)江苏首次评选消费者最喜爱的百种绿色食品",
"broadcastDate": "2015-02-11 19:46:49",
"type": "num",
"newsBeginning": " 新华社南京2月11日电(记者李响)“民以食为天,食以安为先”。江苏11日发布“首届消费者最喜爱的绿色食品”评选结果,老山蜂蜜等100种食品获得消费者“最喜爱的绿色食品”称号。",
"hallucinatedContinuation": "江苏是全国绿色食品生产最发达的省份之一。",
"generatedBy": "InternLM_20B_Chat",
"annotations": [
"江苏<sep>合理",
"全国<sep>合理",
"绿色食品生产<sep>合理",
"发达<sep>不合理,没有事实证明江苏是全国绿色食品生产发达的省份,但可以确定的是,江苏在绿色食品生产上有积极的实践和推动",
"省份<sep>合理",
"之一<sep>不合理,没有具体的事实证据表明江苏是全国绿色食品生产发达的省份之一"
],
"realContinuation": "61家获奖生产企业共同签署诚信公约,共建绿色食品诚信联盟。",
"newsRemainder": "61家获奖生产企业共同签署诚信公约,共建绿色食品诚信联盟。这是江苏保障食品安全、推动绿色食品生产的重要举措。\n 此次评选由江苏省绿色食品协会等部门主办,并得到江苏省农委、省委农工办、省工商局、省地税局、省信用办、省消协等单位大力支持。评选历时4个多月,经企业报名、组委会初筛、消费者投票等层层选拔,最终出炉的百强食品榜单由消费者亲自票选得出,网络、短信、报纸及现场投票共310多万份票数,充分说明了评选结果的含金量。\n 食品安全一直是社会关注的热点。此次评选过程中,组委会工作人员走街头、进超市,邀请媒体、消费者、专家深入产地开展绿色食品基地行,除了超市选购外,还搭建“诚信购微信商城”“中国移动MO生活绿色有机馆”等线上销售平台,开创江苏绿色食品“评展销”结合新局面。评选不仅宣传了江苏绿色品牌食品,更推动了省内绿色食品市场诚信体系的建立,为江苏绿色食品走向全国搭建了权威的平台。\n 江苏省农委副主任李俊超表示,绿色食品消费是当前社会重要的消费趋势。本次评选不仅为社会培育了食品安全诚信文化,也提高了消费者对食品质量和标识的甄别能力,实现了消费者和生产企业的“双赢”。\n 与会企业表示,能够入选“首届江苏消费者最喜爱的绿色食品”是消费者的信任和支持,他们将以此荣誉作为企业发展的新起点,严把食品质量关,推介放心安全的绿色品牌食品,促进产业稳定健康发展。(完)"
}
```
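Each annotation pairs an extracted keyword with a judgment, separated by the literal token `<sep>`: 合理 marks a reasonable keyword, while 不合理 marks an unreasonable (hallucinated) one, optionally followed by an explanation. As a minimal sketch of working with such items (the file name below is a hypothetical placeholder, not a path shipped by the repository):

```python
import json

# Hypothetical file name; adjust to wherever the dataset JSON actually lives.
with open("uhgeval_dataset.json", encoding="utf-8") as f:
    items = json.load(f)

for item in items:
    # Each annotation has the form "keyword<sep>label", where the label
    # begins with 合理 (reasonable) or 不合理 (unreasonable).
    pairs = [ann.split("<sep>", 1) for ann in item["annotations"]]
    hallucinated = [kw for kw, label in pairs if label.startswith("不合理")]
    print(item["id"], "hallucinated keywords:", hallucinated)
```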
Evaluation Methods. UHGEval offers a variety of evaluation methods, including discriminative evaluation, generative evaluation, and selective evaluation.
| Evaluator | Metric | Description |
|---|---|---|
| UHGDiscKeywordEvaluator | Average Accuracy | Given a keyword, the LLM determines whether it contains hallucination. |
| UHGDiscSentenceEvaluator | Average Accuracy | Given a sentence, the LLM determines whether it contains hallucination. |
| UHGGenerativeEvaluator | BLEU-4, ROUGE-L, kwPrec, BertScore | Given a continuation prompt, the LLM generates a continuation. |
| UHGSelectiveEvaluator | Accuracy | Given hallucinated text and unhallucinated text, the LLM selects the realistic text. |
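To make the selective setting concrete, the sketch below computes accuracy over dataset items shaped like the example above. The `choose_realistic` stub stands in for an actual LLM call; it is purely illustrative and not part of Eval Suite's API:

```python
import random

def choose_realistic(beginning: str, option_a: str, option_b: str) -> str:
    """Stand-in for an LLM query: given the news beginning and two candidate
    continuations, return the one judged to be real. A random choice yields
    the ~50% baseline that a real model should beat."""
    return random.choice([option_a, option_b])

def selective_accuracy(items: list[dict]) -> float:
    correct = 0
    for item in items:
        real = item["realContinuation"]
        fake = item["hallucinatedContinuation"]
        # Shuffle option order so position gives nothing away.
        a, b = (real, fake) if random.random() < 0.5 else (fake, real)
        if choose_realistic(item["newsBeginning"], a, b) == real:
            correct += 1
    return correct / len(items)
```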
To facilitate evaluation, we have developed a user-friendly evaluation framework called Eval Suite. Currently, Eval Suite supports common hallucination evaluation benchmarks, allowing for comprehensive evaluation of the same LLM with just one command as shown in the Quick Start section.
| Benchmark | Evaluator | More Information |
|---|---|---|
| C-Eval | CEvalEvaluator | src/eval_suite/benchs/ceval |
| ExampleQA | ExampleQAEvaluator | src/eval_suite/benchs/exampleqa |
| HalluQA | HalluQAMCEvaluator | src/eval_suite/benchs/halluqa |
| HaluEval | HaluEvalDialogEvaluator, HaluEvalQAEvaluator, HaluEvalSummaEvaluator | src/eval_suite/benchs/halueval |
| UHGEval | UHGDiscKeywordEvaluator, UHGDiscSentenceEvaluator, UHGGenerativeEvaluator, UHGSelectiveEvaluator | src/eval_suite/benchs/uhgeval |
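The framework is designed to be extensible with new benchmarks. The snippet below is a purely hypothetical sketch of the shape a custom evaluator might take; the class layout and method names are invented for illustration and do not reflect `eval_suite`'s actual interfaces (see `src/eval_suite/benchs` for the real extension points):

```python
# Hypothetical sketch: names and structure are illustrative only and do not
# match eval_suite's real base classes.
from typing import Callable

class MyQAEvaluator:
    """A toy QA benchmark: exact-match accuracy over (question, answer) pairs."""

    name = "MyQAEvaluator"

    def load_items(self) -> list[dict]:
        return [{"question": "1 + 1 = ?", "answer": "2"}]

    def evaluate(self, ask_llm: Callable[[str], str]) -> dict:
        items = self.load_items()
        correct = sum(
            ask_llm(it["question"]).strip() == it["answer"] for it in items
        )
        return {"accuracy": correct / len(items)}
```

A real evaluator would presumably also need to be registered so that `eval_suite list` can discover it; the benchmark packages listed above show how existing evaluators are wired in.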
```bibtex
@inproceedings{liang-etal-2024-uhgeval,
title = "{UHGE}val: Benchmarking the Hallucination of {C}hinese Large Language Models via Unconstrained Generation",
author = "Liang, Xun and
Song, Shichao and
Niu, Simin and
Li, Zhiyu and
Xiong, Feiyu and
Tang, Bo and
Wang, Yezhaohui and
He, Dawei and
Peng, Cheng and
Wang, Zhonghao and
Deng, Haiying",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.288",
doi = "10.18653/v1/2024.acl-long.288",
pages = "5266--5293",
}
```
TODOs:
- [ ] feat: vLLM offline inference benchmarking
- [ ] feat(benchs): add TruthfulQA benchmark
Similar Open Source Tools
llm-gateway
llm-gateway is a gateway tool designed for interacting with third-party LLM providers such as OpenAI, Cohere, etc. It tracks data exchanged with these providers in a postgres database, applies PII scrubbing heuristics, and ensures safe communication with OpenAI's services. The tool supports various models from different providers and offers API and Python usage examples. Developers can set up the tool using Poetry, Pyenv, npm, and yarn for dependency management. The project also includes Docker setup for backend and frontend development.
cellseg_models.pytorch
cellseg-models.pytorch is a Python library built upon PyTorch for 2D cell/nuclei instance segmentation models. It provides multi-task encoder-decoder architectures and post-processing methods for segmenting cell/nuclei instances. The library offers high-level API to define segmentation models, open-source datasets for training, flexibility to modify model components, sliding window inference, multi-GPU inference, benchmarking utilities, regularization techniques, and example notebooks for training and finetuning models with different backbones.
langcorn
LangCorn is an API server that enables you to serve LangChain models and pipelines with ease, leveraging the power of FastAPI for a robust and efficient experience. It offers features such as easy deployment of LangChain models and pipelines, ready-to-use authentication functionality, high-performance FastAPI framework for serving requests, scalability and robustness for language processing applications, support for custom pipelines and processing, well-documented RESTful API endpoints, and asynchronous processing for faster response times.
agentic_security
Agentic Security is an open-source vulnerability scanner designed for safety scanning, offering customizable rule sets and agent-based attacks. It provides comprehensive fuzzing for any LLMs, LLM API integration, and stress testing with a wide range of fuzzing and attack techniques. The tool is not a foolproof solution but aims to enhance security measures against potential threats. It offers installation via pip and supports quick start commands for easy setup. Users can utilize the tool for LLM integration, adding custom datasets, running CI checks, extending dataset collections, and dynamic datasets with mutations. The tool also includes a probe endpoint for integration testing. The roadmap includes expanding dataset variety, introducing new attack vectors, developing an attacker LLM, and integrating OWASP Top 10 classification.
candle-vllm
Candle-vllm is an efficient and easy-to-use platform designed for inference and serving local LLMs, featuring an OpenAI compatible API server. It offers a highly extensible trait-based system for rapid implementation of new module pipelines, streaming support in generation, efficient management of key-value cache with PagedAttention, and continuous batching. The tool supports chat serving for various models and provides a seamless experience for users to interact with LLMs through different interfaces.
mlcontests.github.io
ML Contests is a platform that provides a sortable list of public machine learning/data science/AI contests, viewable on mlcontests.com. Users can submit pull requests for any changes or additions to the competitions list by editing the competitions.json file on the GitHub repository. The platform requires mandatory fields such as competition name, URL, type of ML, deadline for submissions, prize information, platform running the competition, and sponsorship details. Optional fields include conference affiliation, conference year, competition launch date, registration deadline, additional URLs, and tags relevant to the challenge type. The platform is transitioning towards assigning multiple tags to competitions for better categorization and searchability.
mergoo
Mergoo is a library for easily merging multiple LLM experts and efficiently training the merged LLM. With Mergoo, you can integrate the knowledge of different generic or domain-based LLM experts. Mergoo supports several merging methods, including Mixture-of-Experts, Mixture-of-Adapters, and Layer-wise merging. It also supports various base models, including LLaMa, Mistral, and BERT, and trainers, including Hugging Face Trainer, SFTTrainer, and PEFT. Mergoo provides flexible merging for each layer and supports training choices such as only routing MoE layers or fully fine-tuning the merged LLM.
ax
Ax is a Typescript library that allows users to build intelligent agents inspired by agentic workflows and the Stanford DSP paper. It seamlessly integrates with multiple Large Language Models (LLMs) and VectorDBs to create RAG pipelines or collaborative agents capable of solving complex problems. The library offers advanced features such as streaming validation, multi-modal DSP, and automatic prompt tuning using optimizers. Users can easily convert documents of any format to text, perform smart chunking, embedding, and querying, and ensure output validation while streaming. Ax is production-ready, written in Typescript, and has zero dependencies.
AutoGPTQ
AutoGPTQ is an easy-to-use LLM quantization package with user-friendly APIs, based on GPTQ algorithm (weight-only quantization). It provides a simple and efficient way to quantize large language models (LLMs) to reduce their size and computational cost while maintaining their performance. AutoGPTQ supports a wide range of LLM models, including GPT-2, GPT-J, OPT, and BLOOM. It also supports various evaluation tasks, such as language modeling, sequence classification, and text summarization. With AutoGPTQ, users can easily quantize their LLM models and deploy them on resource-constrained devices, such as mobile phones and embedded systems.
functionary
Functionary is a language model that interprets and executes functions/plugins. It determines when to execute functions, whether in parallel or serially, and understands their outputs. Function definitions are given as JSON Schema Objects, similar to OpenAI GPT function calls. It offers documentation and examples on functionary.meetkai.com. The newest model, meetkai/functionary-medium-v3.1, is ranked 2nd in the Berkeley Function-Calling Leaderboard. Functionary supports models with different context lengths and capabilities for function calling and code interpretation. It also provides grammar sampling for accurate function and parameter names. Users can deploy Functionary models serverlessly using Modal.com.
google-cloud-gcp-openai-api
This project provides a drop-in replacement REST API for Google Cloud Vertex AI (PaLM 2, Codey, Gemini) that is compatible with the OpenAI API specifications. It aims to make Google Cloud Platform Vertex AI more accessible by translating OpenAI API calls to Vertex AI. The software is developed in Python and based on FastAPI and LangChain, designed to be simple and customizable for individual needs. It includes step-by-step guides for deployment, supports various OpenAI API services, and offers configuration through environment variables. Additionally, it provides examples for running locally and usage instructions consistent with the OpenAI API format.
parrot.nvim
Parrot.nvim is a Neovim plugin that prioritizes a seamless out-of-the-box experience for text generation. It simplifies functionality and focuses solely on text generation, excluding integration of DALLE and Whisper. It supports persistent conversations as markdown files, custom hooks for inline text editing, multiple providers like Anthropic API, perplexity.ai API, OpenAI API, Mistral API, and local/offline serving via ollama. It allows custom agent definitions, flexible API credential support, and repository-specific instructions with a `.parrot.md` file. It does not have autocompletion or hidden requests in the background to analyze files.
e2m
E2M is a Python library that can parse and convert various file types into Markdown format. It supports the conversion of multiple file formats, including doc, docx, epub, html, htm, url, pdf, ppt, pptx, mp3, and m4a. The ultimate goal of the E2M project is to provide high-quality data for Retrieval-Augmented Generation (RAG) and model training or fine-tuning. The core architecture consists of a Parser responsible for parsing various file types into text or image data, and a Converter responsible for converting text or image data into Markdown format.
scylla
Scylla is an intelligent proxy pool tool designed for humanities, enabling users to extract content from the internet and build their own Large Language Models in the AI era. It features automatic proxy IP crawling and validation, an easy-to-use JSON API, a simple web-based user interface, HTTP forward proxy server, Scrapy and requests integration, and headless browser crawling. Users can start using Scylla with just one command, making it a versatile tool for various web scraping and content extraction tasks.
avante.nvim
avante.nvim is a Neovim plugin that emulates the behavior of the Cursor AI IDE, providing AI-driven code suggestions and enabling users to apply recommendations to their source files effortlessly. It offers AI-powered code assistance and one-click application of suggested changes, streamlining the editing process and saving time. The plugin is still in early development, with functionalities like setting API keys, querying AI about code, reviewing suggestions, and applying changes. Key bindings are available for various actions, and the roadmap includes enhancing AI interactions, stability improvements, and introducing new features for coding tasks.
For similar tasks
cyclops
Cyclops is a toolkit for facilitating research and deployment of ML models for healthcare. It provides a few high-level APIs, namely: data (create datasets for training, inference, and evaluation, using the popular 🤗 datasets library to efficiently load and slice different modalities of data); models (common model implementations using scikit-learn and PyTorch); tasks (common ML task formulations such as binary classification or multi-label classification on tabular, time-series, and image data); evaluate (evaluate models on clinical prediction tasks); monitor (detect dataset shift relevant for clinical use cases); and report (create model report cards for clinical ML models).
gaussian-painters
This tool is a fork of the 3D Gaussian Splatting code. It allows users to create a dataset ready to be trained with the Gaussian Splatting code. The dataset can be used for various experiments, such as creating orthogonal images, steganography, and lenticular effects. The tool also includes a visualizer that allows users to visualize the "painting" process during the Gaussian Splatting optimization.
RAGFoundry
RAG Foundry is a library designed to enhance Large Language Models (LLMs) by fine-tuning models on RAG-augmented datasets. It helps create training data, train models using parameter-efficient finetuning (PEFT), and measure performance using RAG-specific metrics. The library is modular, customizable using configuration files, and facilitates prototyping with various RAG settings and configurations for tasks like data processing, retrieval, training, inference, and evaluation.
ollama-ebook-summary
The 'ollama-ebook-summary' repository is a Python project that creates bulleted notes summaries of books and long texts, particularly in epub and pdf formats with ToC metadata. It automates the extraction of chapters, splits them into ~2000 token chunks, and allows for asking arbitrary questions to parts of the text for improved granularity of response. The tool aims to provide summaries for each page of a book rather than a one-page summary of the entire document, enhancing content curation and knowledge sharing capabilities.
agentneo
AgentNeo is a Python package that provides functionalities for project, trace, dataset, experiment management. It allows users to authenticate, create projects, trace agents and LangGraph graphs, manage datasets, and run experiments with metrics. The tool aims to streamline AI project management and analysis by offering a comprehensive set of features.
RAG-FiT
RAG-FiT is a library designed to improve Language Models' ability to use external information by fine-tuning models on specially created RAG-augmented datasets. The library assists in creating training data, training models using parameter-efficient finetuning (PEFT), and evaluating performance using RAG-specific metrics. It is modular, customizable via configuration files, and facilitates fast prototyping and experimentation with various RAG settings and configurations.
djl
Deep Java Library (DJL) is an open-source, high-level, engine-agnostic Java framework for deep learning. It is designed to be easy to get started with and simple to use for Java developers. DJL provides a native Java development experience and allows users to integrate machine learning and deep learning models with their Java applications. The framework is deep learning engine agnostic, enabling users to switch engines at any point for optimal performance. DJL's ergonomic API interface guides users with best practices to accomplish deep learning tasks, such as running inference and training neural networks.
For similar jobs
weave
Weave is a toolkit for developing Generative AI applications, built by Weights & Biases. With Weave, you can log and debug language model inputs, outputs, and traces; build rigorous, apples-to-apples evaluations for language model use cases; and organize all the information generated across the LLM workflow, from experimentation to evaluations to production. Weave aims to bring rigor, best-practices, and composability to the inherently experimental process of developing Generative AI software, without introducing cognitive overhead.
LLMStack
LLMStack is a no-code platform for building generative AI agents, workflows, and chatbots. It allows users to connect their own data, internal tools, and GPT-powered models without any coding experience. LLMStack can be deployed to the cloud or on-premise and can be accessed via HTTP API or triggered from Slack or Discord.
VisionCraft
The VisionCraft API is a free API for using over 100 different AI models, from images to sound.
kaito
Kaito is an operator that automates the AI/ML inference model deployment in a Kubernetes cluster. It manages large model files using container images, avoids tuning deployment parameters to fit GPU hardware by providing preset configurations, auto-provisions GPU nodes based on model requirements, and hosts large model images in the public Microsoft Container Registry (MCR) if the license allows. Using Kaito, the workflow of onboarding large AI inference models in Kubernetes is largely simplified.
PyRIT
PyRIT is an open access automation framework designed to empower security professionals and ML engineers to red team foundation models and their applications. It automates AI Red Teaming tasks to allow operators to focus on more complicated and time-consuming tasks and can also identify security harms such as misuse (e.g., malware generation, jailbreaking), and privacy harms (e.g., identity theft). The goal is to allow researchers to have a baseline of how well their model and entire inference pipeline is doing against different harm categories and to be able to compare that baseline to future iterations of their model. This allows them to have empirical data on how well their model is doing today, and detect any degradation of performance based on future improvements.
tabby
Tabby is a self-hosted AI coding assistant, offering an open-source and on-premises alternative to GitHub Copilot. It boasts several key features: it is self-contained, with no need for a DBMS or cloud service; it exposes an OpenAPI interface that is easy to integrate with existing infrastructure (e.g., a Cloud IDE); and it supports consumer-grade GPUs.
spear
SPEAR (Simulator for Photorealistic Embodied AI Research) is a powerful tool for training embodied agents. It features 300 unique virtual indoor environments with 2,566 unique rooms and 17,234 unique objects that can be manipulated individually. Each environment is designed by a professional artist and features detailed geometry, photorealistic materials, and a unique floor plan and object layout. SPEAR is implemented as Unreal Engine assets and provides an OpenAI Gym interface for interacting with the environments via Python.
Magick
Magick is a groundbreaking visual AIDE (Artificial Intelligence Development Environment) for no-code data pipelines and multimodal agents. Magick can connect to other services and comes with nodes and templates well-suited for intelligent agents, chatbots, complex reasoning systems and realistic characters.