trulens

Evaluation and Tracking for LLM Experiments

TruLens provides a set of tools for developing and monitoring neural networks, including large language models. This covers both evaluation of LLMs and LLM-based applications with _TruLens-Eval_ and deep-learning explainability with _TruLens-Explain_. _TruLens-Eval_ and _TruLens-Explain_ are housed in separate packages and can be used independently.


šŸ¦‘ Welcome to TruLens!


Don't just vibe-check your LLM app! Systematically evaluate and track your LLM experiments with TruLens. As you develop your app, including prompts, models, retrievers, knowledge sources and more, TruLens is the tool you need to understand its performance.

Fine-grained, stack-agnostic instrumentation and comprehensive evaluations help you identify failure modes and systematically iterate to improve your application.

Read more about the core concepts behind TruLens, including Feedback Functions, the RAG Triad, and Honest, Harmless and Helpful Evals.

TruLens in the development workflow

Build your first prototype, then connect instrumentation and logging with TruLens. Decide which feedback functions you need, and specify them with TruLens to run alongside your app. Then iterate on and compare versions of your app in an easy-to-use user interface šŸ‘‡

Architecture Diagram
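
A minimal sketch of this workflow, assuming the trulens 1.x package layout (trulens.core, trulens.dashboard) plus the optional trulens-providers-openai package; treat the names as the API at the time of writing and check the docs for your installed version.

    from trulens.core import TruSession, Feedback
    from trulens.providers.openai import OpenAI

    session = TruSession()   # logs to a local SQLite database by default
    provider = OpenAI()      # LLM-based feedback provider; needs OPENAI_API_KEY

    # Specify the feedback you need, e.g. relevance of the answer to the question.
    f_answer_relevance = Feedback(provider.relevance).on_input_output()

    # After recording some runs of your app (see Quick Usage below),
    # compare versions side by side in the dashboard.
    from trulens.dashboard import run_dashboard
    run_dashboard(session)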

Installation and Setup

Install the trulens pip package from PyPI.

    pip install trulens

Quick Usage

Walk through how to instrument and evaluate a RAG built from scratch with TruLens.

Open In Colab
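
The notebook follows roughly the pattern below. This is a hedged sketch rather than the notebook's exact code: the RAG class, its method bodies, and the example query are placeholders, and TruCustomApp / instrument are the trulens 1.x names for custom-app instrumentation at the time of writing.

    from trulens.apps.custom import TruCustomApp, instrument

    class RAG:
        @instrument
        def retrieve(self, query: str) -> list:
            # Placeholder: fetch context chunks from your vector store.
            return ["<retrieved chunk>"]

        @instrument
        def query(self, query: str) -> str:
            context = self.retrieve(query)
            # Placeholder: call your LLM with the query and retrieved context.
            return f"answer grounded in {len(context)} chunk(s)"

    rag = RAG()
    tru_rag = TruCustomApp(
        rag,
        app_name="RAG",
        app_version="base",
        feedbacks=[f_answer_relevance],  # feedback defined in the earlier sketch
    )

    # Calls made inside the recorder are traced, evaluated by the feedback
    # functions, and logged to the session database.
    with tru_rag as recording:
        rag.query("What does TruLens evaluate?")

Instrumenting at the method level is what lets feedback functions select intermediate results, such as the retrieved context, rather than only the app's overall inputs and outputs.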

šŸ’” Contributing & Community

Interested in contributing? See our contributing guide for more details.

The best way to support TruLens is to give us a ā­ on GitHub and join our Slack community!
