inspect_ai

Inspect: A framework for large language model evaluations

Stars: 584

Inspect AI is a framework developed by the UK AI Safety Institute for evaluating large language models. It offers built-in components for prompt engineering, tool usage, multi-turn dialog, and model-graded evaluations. Users can extend Inspect with new elicitation and scoring techniques through additional Python packages. The framework aims to provide a comprehensive solution for assessing the performance and safety of language models.

README:

Welcome to Inspect, a framework for large language model evaluations created by the UK AI Safety Institute.

Inspect provides many built-in components, including facilities for prompt engineering, tool usage, multi-turn dialog, and model-graded evaluations. Extensions to Inspect (e.g. to support new elicitation and scoring techniques) can be provided by other Python packages.
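
To make these components concrete, here is a minimal evaluation task in the style of the "Hello World" example from the Inspect documentation. Treat it as a sketch rather than a definitive reference: the exact Task parameters have varied across Inspect versions, so consult the documentation for your installed release.

from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import exact
from inspect_ai.solver import generate

@task
def hello_world():
    # A one-sample dataset: the model is asked to reply with a fixed string,
    # a generate() solver produces the completion, and an exact-match scorer
    # checks it against the target.
    return Task(
        dataset=[
            Sample(
                input="Just reply with Hello World",
                target="Hello World",
            )
        ],
        solver=[generate()],
        scorer=exact(),
    )

Saved as hello_world.py, the task can be run from the command line, e.g. inspect eval hello_world.py --model openai/gpt-4o (the model name here is illustrative; any provider/model supported by your installation can be substituted).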

To get started with Inspect, please see the documentation at https://inspect.ai-safety-institute.org.uk/.
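
As an illustration of the extension point mentioned above, the sketch below defines a custom scorer of the kind a third-party package might provide, following the documented @scorer decorator pattern. The scorer name includes_target and its matching rule are hypothetical, chosen only to show the shape of the API.

from inspect_ai.scorer import CORRECT, INCORRECT, Score, Target, accuracy, scorer
from inspect_ai.solver import TaskState

@scorer(metrics=[accuracy()])
def includes_target():
    # Mark the sample correct if the target text appears anywhere in the
    # model's output (case-insensitive substring match).
    async def score(state: TaskState, target: Target) -> Score:
        answer = state.output.completion
        value = CORRECT if target.text.lower() in answer.lower() else INCORRECT
        return Score(value=value, answer=answer)

    return score

A task can then pass scorer=includes_target() in place of a built-in scorer such as exact().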


To work on development of Inspect, clone the repository and install with the -e flag and [dev] optional dependencies:

$ git clone https://github.com/UKGovernmentBEIS/inspect_ai.git
$ cd inspect_ai
$ pip install -e ".[dev]"

Optionally install pre-commit hooks via

make hooks

Run linting, formatting, and tests via

make check
make test

If you use VS Code, be sure to install the recommended extensions (Python, Ruff, and MyPy); you'll be prompted to install these when you open the project in VS Code.
