holisticai

This is an open-source tool to assess and improve the trustworthiness of AI systems.


Holistic AI is an open-source library dedicated to assessing and improving the trustworthiness of AI systems. It measures AI models along five verticals: bias, explainability, robustness, security, and efficacy, and provides techniques to mitigate the issues it uncovers. The tool offers comprehensive metrics, mitigation strategies, a user-friendly API, and visualization tools, along with documentation, tutorials, and detailed installation instructions for easy integration into existing workflows.

README:


Holistic AI: building trustworthy AI systems



Holistic AI is an open-source library dedicated to assessing and improving the trustworthiness of AI systems. We believe that responsible AI development requires a comprehensive evaluation across multiple dimensions, beyond just accuracy.

Current Capabilities


Holistic AI currently focuses on five verticals of AI trustworthiness:

  1. Bias: measure and mitigate bias in AI models (a worked sketch follows this list).
  2. Explainability: gain insight into model behavior and decision-making.
  3. Robustness: measure model performance under various conditions.
  4. Security: measure the privacy risks associated with AI models.
  5. Efficacy: measure the effectiveness of AI models.
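
As a concrete example of what the bias vertical measures, here is a small hand-rolled computation of statistical parity (the difference in positive-prediction rates between two protected groups). This sketch only illustrates the metric's definition, and the helper name is ours; holisticai exposes this and many related metrics through holisticai.bias.metrics, as the quick start below shows.

import numpy as np

# Statistical parity: difference in positive-prediction rates between
# two protected groups; 0.0 means parity.
def statistical_parity_by_hand(group_a, group_b, y_pred):
    group_a = np.asarray(group_a).astype(bool)
    group_b = np.asarray(group_b).astype(bool)
    y_pred = np.asarray(y_pred).astype(bool)
    return y_pred[group_a].mean() - y_pred[group_b].mean()

# toy data: group A gets positive predictions 25% of the time, group B 75%
group_a = np.array([1, 1, 1, 1, 0, 0, 0, 0])
group_b = 1 - group_a
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
print(statistical_parity_by_hand(group_a, group_b, y_pred))  # -0.5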

Quick Start


pip install holisticai  # Basic installation
pip install holisticai[bias]  # Bias mitigation support
pip install holisticai[explainability]  # For explainability metrics and plots
pip install holisticai[all]  # Install all packages for bias and explainability

# imports
from holisticai.bias.metrics import classification_bias_metrics
from holisticai.datasets import load_dataset
from holisticai.bias.plots import bias_metrics_report
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# load an example dataset and split
dataset = load_dataset('law_school', protected_attribute="race")
dataset_split = dataset.train_test_split(test_size=0.3)

# separate the data into train and test sets
train_data = dataset_split['train']
test_data = dataset_split['test']

# rescale the data
scaler = StandardScaler()
X_train_t = scaler.fit_transform(train_data['X'])
X_test_t = scaler.transform(test_data['X'])

# train a logistic regression model
model = LogisticRegression(random_state=42, max_iter=500)
model.fit(X_train_t, train_data['y'])

# make predictions
y_pred = model.predict(X_test_t)

# compute bias metrics
metrics = classification_bias_metrics(
    group_a=test_data['group_a'],
    group_b=test_data['group_b'],
    y_true=test_data['y'],
    y_pred=y_pred,
)

# create a comprehensive report
bias_metrics_report(model_type='binary_classification', table_metrics=metrics)
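
Measuring bias is only half the workflow: the [bias] extra also ships mitigation algorithms under holisticai.bias.mitigation. As a sketch of the underlying idea rather than the library's own mitigator API, the snippet below hand-rolls Kamiran-and-Calders-style reweighing and passes the resulting weights to scikit-learn; it continues from the quick start above, reusing train_data, X_train_t, and X_test_t.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Reweighing (after Kamiran & Calders): give each (group, label) cell the
# weight P(group) * P(label) / P(group, label), so that group membership
# and outcome are statistically independent in the weighted training data.
def reweighing_weights(group_a, y):
    group_a = np.asarray(group_a).astype(bool)
    y = np.asarray(y).astype(bool)
    w = np.ones(len(y), dtype=float)
    for g in (True, False):
        p_g = (group_a == g).mean()
        for label in (True, False):
            p_y = (y == label).mean()
            cell = (group_a == g) & (y == label)
            if cell.any():
                w[cell] = (p_g * p_y) / cell.mean()
    return w

# retrain the quick-start model with the mitigation weights
weights = reweighing_weights(train_data['group_a'], train_data['y'])
model_rw = LogisticRegression(random_state=42, max_iter=500)
model_rw.fit(X_train_t, train_data['y'], sample_weight=weights)
y_pred_rw = model_rw.predict(X_test_t)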

Key Features


  • Comprehensive Metrics: Measure various aspects of AI system trustworthiness, including bias, fairness, and explainability.
  • Mitigation Techniques: Implement strategies to address identified issues and improve the fairness and robustness of AI models (a before-and-after sketch follows this list).
  • User-Friendly Interface: Intuitive API for easy integration into existing workflows.
  • Visualization Tools: Generate insightful visualizations for better understanding of model behavior and bias patterns.
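
Tying these features together: the quick start's metric function can be re-run on the reweighed model's predictions for a before-and-after comparison. The snippet assumes the variables defined in the two code blocks above.

# bias metrics for the reweighed model, via the same API as the quick start
metrics_rw = classification_bias_metrics(
    group_a=test_data['group_a'],
    group_b=test_data['group_b'],
    y_true=test_data['y'],
    y_pred=y_pred_rw,
)
bias_metrics_report(model_type='binary_classification', table_metrics=metrics_rw)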

Documentation and Tutorials


Full documentation and step-by-step tutorials are linked from the repository.

Detailed Installation


Troubleshooting (macOS):

Before installing the library, you may need to install these packages:

brew install cbc pkg-config
python -m pip install cylp
brew install cmake

Contributing

We welcome contributions from the community. To learn more about contributing to Holistic AI, please refer to our Contributing Guide.
