Fairness and Explainability with SageMaker Clarify


This notebook’s CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook.

[CI status badge: us-west-2]


Runtime

This notebook takes approximately 30 minutes to run.

Contents

  1. Overview

  2. Prerequisites and Data

    1. Initialize SageMaker

    2. Download data

    3. Loading the data: Adult Dataset

    4. Data inspection

    5. Data encoding and upload to S3

  3. Train and Deploy XGBoost Model

    1. Train Model

    2. Create Model

  4. Amazon SageMaker Clarify

    1. Detecting Bias

      1. Writing BiasConfig

      2. Pre-training Bias

      3. Post-training Bias

      4. Viewing the Bias Report

    2. Explaining Predictions

      1. Viewing the Explainability Report

  5. Clean Up

Overview

Amazon SageMaker Clarify helps improve your machine learning models by detecting potential bias and helping explain how these models make predictions. The fairness and explainability functionality provided by SageMaker Clarify takes a step towards enabling AWS customers to build trustworthy and understandable machine learning models. The product comes with the tools to help you with the following tasks.

  • Measure biases that can occur during each stage of the ML lifecycle (data collection, model training and tuning, and monitoring of ML models deployed for inference).

  • Generate model governance reports targeting risk and compliance teams and external regulators.

  • Provide explanations of the data, models, and monitoring used to assess predictions.

This sample notebook walks you through:
  1. Key terms and concepts needed to understand SageMaker Clarify

  2. Measuring the pre-training bias of a dataset and the post-training bias of a model

  3. Explaining the importance of the various input features on the model's decision

  4. Accessing the reports through SageMaker Studio if you have an instance set up

In doing so, the notebook first trains a SageMaker XGBoost model on the training dataset, then uses SageMaker Clarify to analyze a test dataset in CSV format. SageMaker Clarify also supports analyzing datasets in SageMaker JSON Lines dense format, which is illustrated in another notebook.

Prerequisites and Data

Initialize SageMaker

[2]:
from sagemaker import Session

session = Session()
bucket = session.default_bucket()
prefix = "sagemaker/DEMO-sagemaker-clarify"
region = session.boto_region_name
# Define IAM role
from sagemaker import get_execution_role
import pandas as pd
import numpy as np
import os
import boto3
from datetime import datetime

role = get_execution_role()
s3_client = boto3.client("s3")

Download data

Data Source: https://archive.ics.uci.edu/ml/machine-learning-databases/adult/

Let's download the data from the UCI repository\(^{[2]}\) and save it locally as adult.data and adult.test.

\(^{[2]}\)Dua, Dheeru and Efi Karra Taniskidou. "UCI Machine Learning Repository". Irvine, CA: University of California, School of Information and Computer Science (2017).

[3]:
adult_columns = [
    "Age",
    "Workclass",
    "fnlwgt",
    "Education",
    "Education-Num",
    "Marital Status",
    "Occupation",
    "Relationship",
    "Ethnic group",
    "Sex",
    "Capital Gain",
    "Capital Loss",
    "Hours per week",
    "Country",
    "Target",
]
if not os.path.isfile("adult.data"):
    s3_client.download_file(
        "sagemaker-sample-files", "datasets/tabular/uci_adult/adult.data", "adult.data"
    )
    print("adult.data saved!")
else:
    print("adult.data already on disk.")

if not os.path.isfile("adult.test"):
    s3_client.download_file(
        "sagemaker-sample-files", "datasets/tabular/uci_adult/adult.test", "adult.test"
    )
    print("adult.test saved!")
else:
    print("adult.test already on disk.")
adult.data saved!
adult.test saved!

Loading the data: Adult Dataset

This dataset, from the UCI repository of machine learning datasets, contains 14 features describing the demographic characteristics of 45,222 individuals (32,561 for training and 12,661 for testing). The task is to predict whether a person's yearly income is above or below $50,000.

Here are the features and their possible values:

  1. Age: continuous.
  2. Workclass: Private, Self-emp-not-inc, Self-emp-inc, Federal-gov, Local-gov, State-gov, Without-pay, Never-worked.
  3. Fnlwgt: continuous (the number of people the census takers believe that observation represents).
  4. Education: Bachelors, Some-college, 11th, HS-grad, Prof-school, Assoc-acdm, Assoc-voc, 9th, 7th-8th, 12th, Masters, 1st-4th, 10th, Doctorate, 5th-6th, Preschool.
  5. Education-num: continuous.
  6. Marital-status: Married-civ-spouse, Divorced, Never-married, Separated, Widowed, Married-spouse-absent, Married-AF-spouse.
  7. Occupation: Tech-support, Craft-repair, Other-service, Sales, Exec-managerial, Prof-specialty, Handlers-cleaners, Machine-op-inspct, Adm-clerical, Farming-fishing, Transport-moving, Priv-house-serv, Protective-serv, Armed-Forces.
  8. Relationship: Wife, Own-child, Husband, Not-in-family, Other-relative, Unmarried.
  9. Ethnic group: White, Asian-Pac-Islander, Amer-Indian-Eskimo, Other, Black.
  10. Sex: Female, Male. (Note: this data is extracted from the 1994 Census and enforces a binary option on Sex.)
  11. Capital-gain: continuous.
  12. Capital-loss: continuous.
  13. Hours-per-week: continuous.
  14. Native-country: United-States, Cambodia, England, Puerto-Rico, Canada, Germany, Outlying-US(Guam-USVI-etc), India, Japan, Greece, South, China, Cuba, Iran, Honduras, Philippines, Italy, Poland, Jamaica, Vietnam, Mexico, Portugal, Ireland, France, Dominican-Republic, Laos, Ecuador, Taiwan, Haiti, Columbia, Hungary, Guatemala, Nicaragua, Scotland, Thailand, Yugoslavia, El-Salvador, Trinadad&Tobago, Peru, Hong, Holand-Netherlands.

Next, we specify our binary prediction task:
  15. Target: <=$50,000, >$50,000.
[4]:
training_data = pd.read_csv(
    "adult.data", names=adult_columns, sep=r"\s*,\s*", engine="python", na_values="?"
).dropna()

testing_data = pd.read_csv(
    "adult.test", names=adult_columns, sep=r"\s*,\s*", engine="python", na_values="?", skiprows=1
).dropna()

training_data.head()
[4]:
Age Workclass fnlwgt Education Education-Num Marital Status Occupation Relationship Ethnic group Sex Capital Gain Capital Loss Hours per week Country Target
0 39 State-gov 77516 Bachelors 13 Never-married Adm-clerical Not-in-family White Male 2174 0 40 United-States <=50K
1 50 Self-emp-not-inc 83311 Bachelors 13 Married-civ-spouse Exec-managerial Husband White Male 0 0 13 United-States <=50K
2 38 Private 215646 HS-grad 9 Divorced Handlers-cleaners Not-in-family White Male 0 0 40 United-States <=50K
3 53 Private 234721 11th 7 Married-civ-spouse Handlers-cleaners Husband Black Male 0 0 40 United-States <=50K
4 28 Private 338409 Bachelors 13 Married-civ-spouse Prof-specialty Wife Black Female 0 0 40 Cuba <=50K

Data inspection

Plotting histograms for the distribution of the different features is a good way to visualize the data. Let’s plot a few of the features that can be considered sensitive.
Let’s take a look specifically at the Sex feature of a census respondent. In the first plot we see that there are fewer Female respondents as a whole but especially in the positive outcomes, where they form ~\(\frac{1}{7}\)th of respondents.
[5]:
%matplotlib inline
training_data["Sex"].value_counts().sort_values().plot(kind="bar", title="Counts of Sex", rot=0)
[5]:
<AxesSubplot:title={'center':'Counts of Sex'}>
[Output: bar chart titled "Counts of Sex"]
[6]:
training_data["Sex"].where(training_data["Target"] == ">50K").value_counts().sort_values().plot(
    kind="bar", title="Counts of Sex earning >$50K", rot=0
)
[6]:
<AxesSubplot:title={'center':'Counts of Sex earning >$50K'}>
[Output: bar chart titled "Counts of Sex earning >$50K"]

Encode and Upload the Dataset

Here we encode the training and test data. Encoding input data is not necessary for SageMaker Clarify, but is necessary for the model.

[7]:
from sklearn import preprocessing


def number_encode_features(df):
    result = df.copy()
    encoders = {}
    for column in result.columns:
        if result.dtypes[column] == object:  # encode string (object-dtype) columns; np.object was removed in newer NumPy
            encoders[column] = preprocessing.LabelEncoder()
            result[column] = encoders[column].fit_transform(result[column].fillna("None"))
    return result, encoders


training_data = pd.concat([training_data["Target"], training_data.drop(["Target"], axis=1)], axis=1)
training_data, _ = number_encode_features(training_data)
training_data.to_csv("train_data.csv", index=False, header=False)

testing_data, _ = number_encode_features(testing_data)
test_features = testing_data.drop(["Target"], axis=1)
test_target = testing_data["Target"]
test_features.to_csv("test_features.csv", index=False, header=False)

A quick note about our encoding: the “Female” Sex value has been encoded as 0 and “Male” as 1.

[8]:
training_data.head()
[8]:
Target Age Workclass fnlwgt Education Education-Num Marital Status Occupation Relationship Ethnic group Sex Capital Gain Capital Loss Hours per week Country
0 0 39 5 77516 9 13 4 0 1 4 1 2174 0 40 38
1 0 50 4 83311 9 13 2 3 0 4 1 0 0 13 38
2 0 38 2 215646 11 9 0 5 1 4 1 0 0 40 38
3 0 53 2 234721 1 7 2 5 0 2 1 0 0 40 38
4 0 28 2 338409 9 13 2 9 5 2 0 0 0 40 4
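
This mapping follows from scikit-learn's LabelEncoder, which assigns integer codes in sorted (alphabetical) order of the observed values. A minimal, standalone check of that behavior (separate from the cells above):

from sklearn import preprocessing

# LabelEncoder codes classes in sorted order, which is why "Female" -> 0 and "Male" -> 1 above.
enc = preprocessing.LabelEncoder().fit(["Male", "Female", "Female", "Male"])
print(dict(zip(enc.classes_, enc.transform(enc.classes_))))  # {'Female': 0, 'Male': 1}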

Lastly, let’s upload the data to S3.

[9]:
from sagemaker.s3 import S3Uploader
from sagemaker.inputs import TrainingInput

train_uri = S3Uploader.upload("train_data.csv", "s3://{}/{}".format(bucket, prefix))
train_input = TrainingInput(train_uri, content_type="csv")
test_uri = S3Uploader.upload("test_features.csv", "s3://{}/{}".format(bucket, prefix))

Train XGBoost Model

Train Model

Since our focus is on understanding how to use SageMaker Clarify, we keep it simple by using a standard XGBoost model.

[10]:
from sagemaker.image_uris import retrieve
from sagemaker.estimator import Estimator

container = retrieve("xgboost", region, version="1.2-1")
xgb = Estimator(
    container,
    role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    disable_profiler=True,
    sagemaker_session=session,
)

xgb.set_hyperparameters(
    max_depth=5,
    eta=0.2,
    gamma=4,
    min_child_weight=6,
    subsample=0.8,
    objective="binary:logistic",
    num_round=800,
)

xgb.fit({"train": train_input}, logs=False)

2022-04-18 15:42:18 Starting - Starting the training job...
2022-04-18 15:42:34 Starting - Preparing the instances for training............
2022-04-18 15:43:42 Downloading - Downloading input data...
2022-04-18 15:44:02 Training - Downloading the training image...........
2022-04-18 15:45:03 Training - Training image download completed. Training in progress......
2022-04-18 15:45:34 Uploading - Uploading generated training model.
2022-04-18 15:45:40 Completed - Training job completed

Create Model

Here we create the SageMaker model.

[11]:
model_name = "DEMO-clarify-model-{}".format(datetime.now().strftime("%d-%m-%Y-%H-%M-%S"))
model = xgb.create_model(name=model_name)
container_def = model.prepare_container_def()
session.create_model(model_name, role, container_def)
[11]:
'DEMO-clarify-model-18-04-2022-15-45-44'

Amazon SageMaker Clarify

Now that you have your model set up, let’s say hello to SageMaker Clarify!

[12]:
from sagemaker import clarify

clarify_processor = clarify.SageMakerClarifyProcessor(
    role=role, instance_count=1, instance_type="ml.m5.xlarge", sagemaker_session=session
)

Detecting Bias

SageMaker Clarify helps you detect possible pre- and post-training biases using a variety of metrics.

Writing DataConfig and ModelConfig

A DataConfig object communicates some basic information about data I/O to SageMaker Clarify. We specify where to find the input dataset, where to store the output, the target column (label), the header names, and the dataset type.

[13]:
bias_report_output_path = "s3://{}/{}/clarify-bias".format(bucket, prefix)
bias_data_config = clarify.DataConfig(
    s3_data_input_path=train_uri,
    s3_output_path=bias_report_output_path,
    label="Target",
    headers=training_data.columns.to_list(),
    dataset_type="text/csv",
)

A ModelConfig object communicates information about your trained model. To avoid additional traffic to your production models, SageMaker Clarify sets up and tears down a dedicated endpoint when processing.

  • instance_type and instance_count specify the instance type and count used to run your model during SageMaker Clarify's processing. The testing dataset is small, so a single standard instance is enough to run this example. If you have a large, complex dataset, you may want to use a better instance type to speed up processing, or add more instances to enable Spark parallelization.

  • accept_type denotes the endpoint response payload format, and content_type denotes the payload format of requests to the endpoint.

[14]:
model_config = clarify.ModelConfig(
    model_name=model_name,
    instance_type="ml.m5.xlarge",
    instance_count=1,
    accept_type="text/csv",
    content_type="text/csv",
)

A ModelPredictedLabelConfig provides information on the format of your predictions. The XGBoost model outputs probabilities, so SageMaker Clarify invokes the endpoint and then uses probability_threshold to convert each probability into a binary label for bias analysis. Predictions above the threshold are interpreted as label value 1; predictions at or below the threshold are interpreted as label value 0.

[15]:
predictions_config = clarify.ModelPredictedLabelConfig(probability_threshold=0.8)
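
For intuition, the thresholding rule is equivalent to the small sketch below (an illustration only, not code that Clarify exposes):

import numpy as np

# Probabilities strictly above the threshold become label 1; at or below become 0.
probabilities = np.array([0.15, 0.79, 0.80, 0.81, 0.95])
predicted_labels = (probabilities > 0.8).astype(int)
print(predicted_labels)  # [0 0 0 1 1]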

Writing BiasConfig

SageMaker Clarify also needs information on what the sensitive columns (facets) are, what the sensitive features (facet_values_or_threshold) may be, and what the desirable outcomes are (label_values_or_threshold). SageMaker Clarify can handle both categorical and continuous data for facet_values_or_threshold and for label_values_or_threshold. In this case we are using categorical data.

We specify this information in the BiasConfig API. Here we specify that the positive outcome is earning >$50,000, that Sex is the sensitive category (facet), and that Female respondents are the sensitive group. group_name is used to form subgroups for the measurement of Conditional Demographic Disparity in Labels (CDDL) and Conditional Demographic Disparity in Predicted Labels (CDDPL) with regard to Simpson's paradox.

[16]:
bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1], facet_name="Sex", facet_values_or_threshold=[0], group_name="Age"
)

Pre-training Bias

Bias can be present in your data before any model training occurs. Inspecting your data for bias before training begins can help detect any data collection gaps, inform your feature engineering, and help you understand what societal biases the data may reflect.

Computing pre-training bias metrics does not require a trained model.
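
For intuition, two of the pre-training metrics Clarify reports, Class Imbalance (CI) and Difference in Positive Proportions in Labels (DPL), can be sketched by hand on the encoded training data. This is an illustrative sketch of the formulas, not the Clarify implementation, and the results should roughly match the CI and DPL values in the bias report generated below.

# Facet d = Female (Sex == 0), facet a = Male (Sex == 1); positive label is Target == 1.
female = training_data[training_data["Sex"] == 0]
male = training_data[training_data["Sex"] == 1]

# Class Imbalance: CI = (n_a - n_d) / (n_a + n_d)
ci = (len(male) - len(female)) / (len(male) + len(female))

# Difference in Positive Proportions in Labels: DPL = q_a - q_d
dpl = (male["Target"] == 1).mean() - (female["Target"] == 1).mean()

print(f"CI = {ci:.4f}, DPL = {dpl:.4f}")  # roughly 0.35 and 0.20 for this dataset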

Post-training Bias

Computing post-training bias metrics does require a trained model.

Unbiased training data (as determined by the fairness concepts measured by the bias metrics) may still result in biased model predictions after training. Whether this occurs depends on several factors, including hyperparameter choices.
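
As one example, the Disparate Impact (DI) metric in the post-training report is the ratio of predicted-positive rates between the two facets. A toy sketch with made-up predicted labels (Clarify computes this from the shadow endpoint's actual predictions):

# Toy example with hypothetical predicted labels, only to show what DI measures.
toy = pd.DataFrame(
    {
        "Sex": [0, 0, 0, 0, 1, 1, 1, 1],  # 0 = Female, 1 = Male
        "PredictedLabel": [0, 0, 0, 1, 0, 1, 1, 1],  # hypothetical model outputs
    }
)
q_d = toy.loc[toy["Sex"] == 0, "PredictedLabel"].mean()  # predicted-positive rate for Female
q_a = toy.loc[toy["Sex"] == 1, "PredictedLabel"].mean()  # predicted-positive rate for Male
print("DI =", q_d / q_a)  # about 0.33; values well below 1 suggest the facet is disadvantaged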

You can run these options separately with run_pre_training_bias() and run_post_training_bias() or at the same time with run_bias() as shown below.
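
For reference, a pre-training-only analysis, which needs no model or endpoint, could be launched on its own roughly as follows. The argument names follow the SageMaker Python SDK's SageMakerClarifyProcessor and should be verified against the SDK version in your environment; the combined run_bias() call used by this notebook appears in the next cell.

# Sketch: pre-training bias analysis only; no shadow endpoint is created.
clarify_processor.run_pre_training_bias(
    data_config=bias_data_config,
    data_bias_config=bias_config,
    methods="all",
)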

[17]:
clarify_processor.run_bias(
    data_config=bias_data_config,
    bias_config=bias_config,
    model_config=model_config,
    model_predicted_label_config=predictions_config,
    pre_training_methods="all",
    post_training_methods="all",
)

Job Name:  Clarify-Bias-2022-04-18-15-45-46-171
Inputs:  [{'InputName': 'dataset', 'AppManaged': False, 'S3Input': {'S3Uri': 's3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-sagemaker-clarify/train_data.csv', 'LocalPath': '/opt/ml/processing/input/data', 'S3DataType': 'S3Prefix', 'S3InputMode': 'File', 'S3DataDistributionType': 'FullyReplicated', 'S3CompressionType': 'None'}}, {'InputName': 'analysis_config', 'AppManaged': False, 'S3Input': {'S3Uri': 's3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-sagemaker-clarify/clarify-bias/analysis_config.json', 'LocalPath': '/opt/ml/processing/input/config', 'S3DataType': 'S3Prefix', 'S3InputMode': 'File', 'S3DataDistributionType': 'FullyReplicated', 'S3CompressionType': 'None'}}]
Outputs:  [{'OutputName': 'analysis_result', 'AppManaged': False, 'S3Output': {'S3Uri': 's3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-sagemaker-clarify/clarify-bias', 'LocalPath': '/opt/ml/processing/output', 'S3UploadMode': 'EndOfJob'}}]
.............................2022-04-18 15:50:24,047 logging.conf not found when configuring logging, using default logging configuration.
2022-04-18 15:50:24,047 Starting SageMaker Clarify Processing job
2022-04-18 15:50:24,048 Analysis config path: /opt/ml/processing/input/config/analysis_config.json
2022-04-18 15:50:24,048 Analysis result path: /opt/ml/processing/output
2022-04-18 15:50:24,048 This host is algo-1.
2022-04-18 15:50:24,048 This host is the leader.
2022-04-18 15:50:24,048 Number of hosts in the cluster is 1.
2022-04-18 15:50:24,194 Running Python / Pandas based analyzer.
2022-04-18 15:50:24,195 Dataset type: text/csv uri: /opt/ml/processing/input/data
2022-04-18 15:50:24,205 Loading dataset...
2022-04-18 15:50:24,239 Loaded dataset. Dataset info:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 30162 entries, 0 to 30161
Data columns (total 14 columns):
 #   Column          Non-Null Count  Dtype
---  ------          --------------  -----
 0   Age             30162 non-null  int64
 1   Workclass       30162 non-null  int64
 2   fnlwgt          30162 non-null  int64
 3   Education       30162 non-null  int64
 4   Education-Num   30162 non-null  int64
 5   Marital Status  30162 non-null  int64
 6   Occupation      30162 non-null  int64
 7   Relationship    30162 non-null  int64
 8   Ethnic group    30162 non-null  int64
 9   Sex             30162 non-null  int64
 10  Capital Gain    30162 non-null  int64
 11  Capital Loss    30162 non-null  int64
 12  Hours per week  30162 non-null  int64
 13  Country         30162 non-null  int64
dtypes: int64(14)
memory usage: 3.2 MB
2022-04-18 15:50:24,377 Spinning up shadow endpoint
2022-04-18 15:50:24,377 Creating endpoint-config with name sm-clarify-config-1650297024-8acd
2022-04-18 15:50:24,486 Creating endpoint: 'sm-clarify-DEMO-clarify-model-18-04-2022-15-45--1650297024-12c3'
2022-04-18 15:50:24,766 Using endpoint name: sm-clarify-DEMO-clarify-model-18-04-2022-15-45--1650297024-12c3
2022-04-18 15:50:24,767 Waiting for endpoint ...
2022-04-18 15:50:24,767 Checking endpoint status:
Legend:
(OutOfService: x, Creating: -, Updating: -, InService: !, RollingBack: <, Deleting: o, Failed: *)
2022-04-18 15:53:25,164 Endpoint is in service after 180 seconds
2022-04-18 15:53:25,164 Endpoint ready.
2022-04-18 15:53:25,164 ======================================
2022-04-18 15:53:25,164 Calculating post-training bias metrics
2022-04-18 15:53:25,164 ======================================
2022-04-18 15:53:25,164 Getting predictions from the endpoint
2022-04-18 15:53:27,354 We assume a prediction above 0.800 indicates 1 and below or equal indicates 0.
2022-04-18 15:53:27,355 Column Target with data uniqueness fraction 6.630860022544923e-05 is classifed as a CATEGORICAL column
2022-04-18 15:53:27,357 Column Sex with data uniqueness fraction 6.630860022544923e-05 is classifed as a CATEGORICAL column
2022-04-18 15:53:27,360 Column Target with data uniqueness fraction 6.630860022544923e-05 is classifed as a CATEGORICAL column
2022-04-18 15:53:27,362 Column None with data uniqueness fraction 6.630860022544923e-05 is classifed as a CATEGORICAL column
2022-04-18 15:53:28,676 Stop using endpoint: sm-clarify-DEMO-clarify-model-18-04-2022-15-45--1650297024-12c3
2022-04-18 15:53:28,676 Deleting endpoint configuration with name: sm-clarify-config-1650297024-8acd
2022-04-18 15:53:28,745 Deleting endpoint with name: sm-clarify-DEMO-clarify-model-18-04-2022-15-45--1650297024-12c3
2022-04-18 15:53:28,834 Model endpoint delivered 0.56949 requests per second and a total of 2 requests over 4 seconds
2022-04-18 15:53:28,835 Stop using endpoint: None
2022-04-18 15:53:28,835 =====================================
2022-04-18 15:53:28,835 Calculating pre-training bias metrics
2022-04-18 15:53:28,835 =====================================
2022-04-18 15:53:28,835 Column Target with data uniqueness fraction 6.630860022544923e-05 is classifed as a CATEGORICAL column
2022-04-18 15:53:28,838 Column Sex with data uniqueness fraction 6.630860022544923e-05 is classifed as a CATEGORICAL column
2022-04-18 15:53:28,840 Column Target with data uniqueness fraction 6.630860022544923e-05 is classifed as a CATEGORICAL column
2022-04-18 15:53:29,145 ======================================
2022-04-18 15:53:29,145 Calculating bias statistics for report
2022-04-18 15:53:29,145 ======================================
2022-04-18 15:53:29,146 Column Target with data uniqueness fraction 6.630860022544923e-05 is classifed as a CATEGORICAL column
2022-04-18 15:53:29,149 Column Sex with data uniqueness fraction 6.630860022544923e-05 is classifed as a CATEGORICAL column
2022-04-18 15:53:29,151 Column Target with data uniqueness fraction 6.630860022544923e-05 is classifed as a CATEGORICAL column
2022-04-18 15:53:29,153 Column None with data uniqueness fraction 6.630860022544923e-05 is classifed as a CATEGORICAL column
2022-04-18 15:53:29,161 Stop using endpoint: None
2022-04-18 15:53:29,430 jupyter nbconvert --to html --output /opt/ml/processing/output/report.html /opt/ml/processing/output/report.ipynb --template sagemaker-xai
[NbConvertApp] Converting notebook /opt/ml/processing/output/report.ipynb to html
[NbConvertApp] Writing 344696 bytes to /opt/ml/processing/output/report.html
2022-04-18 15:53:30,320 HTML report '/opt/ml/processing/output/report.html' generated successfully.
2022-04-18 15:53:30,320 wkhtmltopdf -q /opt/ml/processing/output/report.html /opt/ml/processing/output/report.pdf
2022-04-18 15:53:30,813 PDF report '/opt/ml/processing/output/report.pdf' generated successfully.
2022-04-18 15:53:30,813 Collected analyses: 
{
    "version": "1.0",
    "post_training_bias_metrics": {
        "label": "Target",
        "facets": {
            "Sex": [
                {
                    "value_or_threshold": "0",
                    "metrics": [
                        {
                            "name": "AD",
                            "description": "Accuracy Difference (AD)",
                            "value": -0.1141572442143538
                        },
                        {
                            "name": "CDDPL",
                            "description": "Conditional Demographic Disparity in Predicted Labels (CDDPL)",
                            "value": 0.19716203919079375
                        },
                        {
                            "name": "DAR",
                            "description": "Difference in Acceptance Rates (DAR)",
                            "value": -0.007405223292617502
                        },
                        {
                            "name": "DCA",
                            "description": "Difference in Conditional Acceptance (DCA)",
                            "value": -0.22750276729134145
                        },
                        {
                            "name": "DCR",
                            "description": "Difference in Conditional Rejection (DCR)",
                            "value": 0.13282504190308553
                        },
                        {
                            "name": "DI",
                            "description": "Disparate Impact (DI)",
                            "value": 0.32939129409419415
                        },
                        {
                            "name": "DPPL",
                            "description": "Difference in Positive Proportions in Predicted Labels (DPPL)",
                            "value": 0.0922004707530946
                        },
                        {
                            "name": "DRR",
                            "description": "Difference in Rejection Rates (DRR)",
                            "value": 0.13653296409568605
                        },
                        {
                            "name": "FT",
                            "description": "Flip Test (FT)",
                            "value": -0.004600286240032713
                        },
                        {
                            "name": "RD",
                            "description": "Recall Difference (RD)",
                            "value": 0.03556460647616988
                        },
                        {
                            "name": "TE",
                            "description": "Treatment Equality (TE)",
                            "value": 24.11428571428572
                        }
                    ]
                }
            ]
        },
        "label_value_or_threshold": "1"
    },
    "pre_training_bias_metrics": {
        "label": "Target",
        "facets": {
            "Sex": [
                {
                    "value_or_threshold": "0",
                    "metrics": [
                        {
                            "name": "CDDL",
                            "description": "Conditional Demographic Disparity in Labels (CDDL)",
                            "value": 0.214915908649356
                        },
                        {
                            "name": "CI",
                            "description": "Class Imbalance (CI)",
                            "value": 0.3513692725946555
                        },
                        {
                            "name": "DPL",
                            "description": "Difference in Positive Proportions in Labels (DPL)",
                            "value": 0.20015891077100018
                        },
                        {
                            "name": "JS",
                            "description": "Jensen-Shannon Divergence (JS)",
                            "value": 0.030756144659773006
                        },
                        {
                            "name": "KL",
                            "description": "Kullback-Liebler Divergence (KL)",
                            "value": 0.14306865156306434
                        },
                        {
                            "name": "KS",
                            "description": "Kolmogorov-Smirnov Distance (KS)",
                            "value": 0.20015891077100018
                        },
                        {
                            "name": "LP",
                            "description": "L-p Norm (LP)",
                            "value": 0.2830674462421746
                        },
                        {
                            "name": "TVD",
                            "description": "Total Variation Distance (TVD)",
                            "value": 0.20015891077100015
                        }
                    ]
                }
            ]
        },
        "label_value_or_threshold": "1"
    }
}
2022-04-18 15:53:30,814 exit_message: Completed: SageMaker XAI Analyzer ran successfully
---!

Viewing the Bias Report

In Studio, you can view the results under the experiments tab.

[Screenshot: bias report listed under the Experiments tab in SageMaker Studio]

Each bias metric has detailed explanations with examples that you can explore.

[Screenshot: detailed explanation of a bias metric in SageMaker Studio]

You could also summarize the results in a handy table!

[Screenshot: table summarizing the bias metrics in SageMaker Studio]

If you're not a Studio user yet, you can access the bias report in PDF, HTML, and notebook (.ipynb) formats at the following S3 location:

[18]:
bias_report_output_path
[18]:
's3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-sagemaker-clarify/clarify-bias'
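
To pull the report down from S3 to your notebook environment, a download along these lines should work; the file names report.pdf, report.html, and report.ipynb are the ones the processing job logs above show being written to the output path.

from sagemaker.s3 import S3Downloader

# Download the generated bias report locally for review outside of Studio.
S3Downloader.download(bias_report_output_path + "/report.pdf", ".")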

Explaining Predictions

There are expanding business needs and legislative regulations that require explanations of why a model made the decision it did. SageMaker Clarify uses SHAP to explain the contribution that each input feature makes to the final decision.

The Kernel SHAP algorithm requires a baseline (also known as a background dataset). If one is not provided, SageMaker Clarify calculates a baseline automatically using K-means or K-prototypes on the input dataset. The baseline dataset type must match the dataset_type of DataConfig, and baseline samples must include only features (no label). The baseline can be given either as an S3 URI to a baseline dataset file or as an in-place list of samples. Here we choose the latter and use the first sample of the test dataset.

[19]:
shap_config = clarify.SHAPConfig(
    baseline=[test_features.iloc[0].values.tolist()],
    num_samples=15,
    agg_method="mean_abs",
    save_local_shap_values=True,
)

explainability_output_path = "s3://{}/{}/clarify-explainability".format(bucket, prefix)
explainability_data_config = clarify.DataConfig(
    s3_data_input_path=train_uri,
    s3_output_path=explainability_output_path,
    label="Target",
    headers=training_data.columns.to_list(),
    dataset_type="text/csv",
)
[20]:
clarify_processor.run_explainability(
    data_config=explainability_data_config,
    model_config=model_config,
    explainability_config=shap_config,
)

Job Name:  Clarify-Explainability-2022-04-18-15-54-03-536
Inputs:  [{'InputName': 'dataset', 'AppManaged': False, 'S3Input': {'S3Uri': 's3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-sagemaker-clarify/train_data.csv', 'LocalPath': '/opt/ml/processing/input/data', 'S3DataType': 'S3Prefix', 'S3InputMode': 'File', 'S3DataDistributionType': 'FullyReplicated', 'S3CompressionType': 'None'}}, {'InputName': 'analysis_config', 'AppManaged': False, 'S3Input': {'S3Uri': 's3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-sagemaker-clarify/clarify-explainability/analysis_config.json', 'LocalPath': '/opt/ml/processing/input/config', 'S3DataType': 'S3Prefix', 'S3InputMode': 'File', 'S3DataDistributionType': 'FullyReplicated', 'S3CompressionType': 'None'}}]
Outputs:  [{'OutputName': 'analysis_result', 'AppManaged': False, 'S3Output': {'S3Uri': 's3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-sagemaker-clarify/clarify-explainability', 'LocalPath': '/opt/ml/processing/output', 'S3UploadMode': 'EndOfJob'}}]
.............................2022-04-18 15:58:37,472 logging.conf not found when configuring logging, using default logging configuration.
2022-04-18 15:58:37,473 Starting SageMaker Clarify Processing job
2022-04-18 15:58:37,473 Analysis config path: /opt/ml/processing/input/config/analysis_config.json
2022-04-18 15:58:37,473 Analysis result path: /opt/ml/processing/output
2022-04-18 15:58:37,473 This host is algo-1.
2022-04-18 15:58:37,473 This host is the leader.
2022-04-18 15:58:37,473 Number of hosts in the cluster is 1.
2022-04-18 15:58:37,640 Running Python / Pandas based analyzer.
2022-04-18 15:58:37,640 Dataset type: text/csv uri: /opt/ml/processing/input/data
2022-04-18 15:58:37,651 Loading dataset...
2022-04-18 15:58:37,687 Loaded dataset. Dataset info:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 30162 entries, 0 to 30161
Data columns (total 14 columns):
 #   Column          Non-Null Count  Dtype
---  ------          --------------  -----
 0   Age             30162 non-null  int64
 1   Workclass       30162 non-null  int64
 2   fnlwgt          30162 non-null  int64
 3   Education       30162 non-null  int64
 4   Education-Num   30162 non-null  int64
 5   Marital Status  30162 non-null  int64
 6   Occupation      30162 non-null  int64
 7   Relationship    30162 non-null  int64
 8   Ethnic group    30162 non-null  int64
 9   Sex             30162 non-null  int64
 10  Capital Gain    30162 non-null  int64
 11  Capital Loss    30162 non-null  int64
 12  Hours per week  30162 non-null  int64
 13  Country         30162 non-null  int64
dtypes: int64(14)
memory usage: 3.2 MB
2022-04-18 15:58:37,835 Spinning up shadow endpoint
2022-04-18 15:58:37,835 Creating endpoint-config with name sm-clarify-config-1650297517-31f2
2022-04-18 15:58:37,915 Creating endpoint: 'sm-clarify-DEMO-clarify-model-18-04-2022-15-45--1650297517-f947'
2022-04-18 15:58:38,339 Using endpoint name: sm-clarify-DEMO-clarify-model-18-04-2022-15-45--1650297517-f947
2022-04-18 15:58:38,339 Waiting for endpoint ...
2022-04-18 15:58:38,339 Checking endpoint status:
Legend:
(OutOfService: x, Creating: -, Updating: -, InService: !, RollingBack: <, Deleting: o, Failed: *)
2022-04-18 16:01:38,784 Endpoint is in service after 180 seconds
2022-04-18 16:01:38,785 Endpoint ready.
2022-04-18 16:01:38,786 SHAP n_samples 15
2022-04-18 16:01:38,904 =====================================================
2022-04-18 16:01:38,904 Shap analyzer: explaining 30162 rows, 14 columns...
2022-04-18 16:01:38,904 =====================================================
  0% (0 of 30162) |                      | Elapsed Time: 0:00:00 ETA:  --:--:--
  9% (2798 of 30162) |#                  | Elapsed Time: 0:00:30 ETA:   0:04:53
 18% (5681 of 30162) |###                | Elapsed Time: 0:01:00 ETA:   0:04:14
 28% (8703 of 30162) |#####              | Elapsed Time: 0:01:30 ETA:   0:03:33
 39% (11989 of 30162) |#######           | Elapsed Time: 0:02:00 ETA:   0:02:45
 51% (15453 of 30162) |#########         | Elapsed Time: 0:02:30 ETA:   0:02:07
 62% (18999 of 30162) |###########       | Elapsed Time: 0:03:00 ETA:   0:01:34
 74% (22510 of 30162) |#############     | Elapsed Time: 0:03:30 ETA:   0:01:05
 86% (25999 of 30162) |###############   | Elapsed Time: 0:04:00 ETA:   0:00:35
 97% (29442 of 30162) |################# | Elapsed Time: 0:04:30 ETA:   0:00:06
100% (30162 of 30162) |##################| Elapsed Time: 0:04:36 Time:  0:04:36
2022-04-18 16:06:15,695 getting explanations took 276.79 seconds.
2022-04-18 16:06:15,695 ===================================================
2022-04-18 16:06:18,084 converting explanations to tabular took 2.39 seconds.
2022-04-18 16:06:18,084 ===================================================
2022-04-18 16:06:18,088 Wrote baseline used to compute explanations to: /opt/ml/processing/output/explanations_shap/baseline.csv
2022-04-18 16:06:18,731 Wrote 30162 local explanations to: /opt/ml/processing/output/explanations_shap/out.csv
2022-04-18 16:06:18,731 writing local explanations took 0.65 seconds.
2022-04-18 16:06:18,731 ===================================================
2022-04-18 16:06:18,735 aggregating local explanations took 0.00 seconds.
2022-04-18 16:06:18,735 ===================================================
2022-04-18 16:06:18,736 Shap analysis finished.
2022-04-18 16:06:18,736 Stop using endpoint: sm-clarify-DEMO-clarify-model-18-04-2022-15-45--1650297517-f947
2022-04-18 16:06:18,736 Deleting endpoint configuration with name: sm-clarify-config-1650297517-31f2
2022-04-18 16:06:18,853 Deleting endpoint with name: sm-clarify-DEMO-clarify-model-18-04-2022-15-45--1650297517-f947
2022-04-18 16:06:18,997 Model endpoint delivered 107.74733 requests per second and a total of 30164 requests over 280 seconds
2022-04-18 16:06:26,813 Stop using endpoint: None
2022-04-18 16:06:50,264 jupyter nbconvert --to html --output /opt/ml/processing/output/report.html /opt/ml/processing/output/report.ipynb --template sagemaker-xai
[NbConvertApp] Converting notebook /opt/ml/processing/output/report.ipynb to html
[NbConvertApp] Writing 570437 bytes to /opt/ml/processing/output/report.html
2022-04-18 16:06:51,317 HTML report '/opt/ml/processing/output/report.html' generated successfully.
2022-04-18 16:06:51,317 wkhtmltopdf -q /opt/ml/processing/output/report.html /opt/ml/processing/output/report.pdf
2022-04-18 16:06:51,874 PDF report '/opt/ml/processing/output/report.pdf' generated successfully.
2022-04-18 16:06:51,875 Collected analyses: 
{
    "version": "1.0",
    "explanations": {
        "kernel_shap": {
            "label0": {
                "global_shap_values": {
                    "Age": 0.03655626472022009,
                    "Workclass": 0.017905832546722414,
                    "fnlwgt": 0.021385894167534045,
                    "Education": 0.018535316056790388,
                    "Education-Num": 0.03609330944536093,
                    "Marital Status": 0.02898470399620499,
                    "Occupation": 0.026482349644306062,
                    "Relationship": 0.03615980532944972,
                    "Ethnic group": 0.020033663785676746,
                    "Sex": 0.017880631469685705,
                    "Capital Gain": 0.033581907850084025,
                    "Capital Loss": 0.019556674983842386,
                    "Hours per week": 0.021283579738450336,
                    "Country": 0.04712774225543154
                },
                "expected_value": 0.0006380207487381995
            }
        }
    }
}
2022-04-18 16:06:51,875 exit_message: Completed: SageMaker XAI Analyzer ran successfully
---!

Viewing the Explainability Report

As with the bias report, you can view the explainability report in Studio under the experiments tab.

[Screenshot: explainability report under the Experiments tab in SageMaker Studio]

The Model Insights tab contains direct links to the report and model insights.

If you're not a Studio user yet, you can, as with the bias report, access this report at the following S3 location.

[21]:
explainability_output_path
[21]:
's3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-sagemaker-clarify/clarify-explainability'
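
The aggregated (global) SHAP values are also written to an analysis.json file under this output path, with the same structure as the "Collected analyses" JSON printed in the job logs above. A quick sketch for pulling them into pandas and plotting:

import json

from sagemaker.s3 import S3Downloader

# Read the analysis output and plot the global SHAP values as a horizontal bar chart.
analysis = json.loads(S3Downloader.read_file(explainability_output_path + "/analysis.json"))
global_shap = pd.Series(analysis["explanations"]["kernel_shap"]["label0"]["global_shap_values"])
global_shap.sort_values().plot(kind="barh", title="Global SHAP values (mean abs)")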

Analysis of local explanations

It is possible to visualize the local explanations for individual examples in your dataset, reusing the results of the Kernel SHAP run above (which also produced the global explanations).

You can simply load the local explanations stored in your output path and visualize the explanation (i.e., the impact each feature has on your model's prediction) for any single example.

[22]:
local_explanations_out = pd.read_csv(explainability_output_path + "/explanations_shap/out.csv")
feature_names = [c.replace("_label0", "") for c in local_explanations_out.columns]
local_explanations_out.columns = feature_names

selected_example = 111
print(
    "Example number:",
    selected_example,
    "\nwith model prediction:",
    sum(local_explanations_out.iloc[selected_example]) > 0,
)
print("\nFeature values -- Label", training_data.iloc[selected_example])
local_explanations_out.iloc[selected_example].plot(
    kind="bar", title="Local explanation for the example number " + str(selected_example), rot=90
)
Example number: 111
with model prediction: False

Feature values -- Label Target                 0
Age                   21
Workclass              2
fnlwgt            199915
Education             15
Education-Num         10
Marital Status         4
Occupation             7
Relationship           3
Ethnic group           4
Sex                    0
Capital Gain           0
Capital Loss           0
Hours per week        40
Country               38
Name: 120, dtype: int64
[22]:
<AxesSubplot:title={'center':'Local explanation for the example number 111'}>
[Output: bar chart of local feature attributions for example number 111]

Clean Up

Finally, don’t forget to clean up the resources we set up and used for this demo!

[23]:
session.delete_model(model_name)
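
If you also want to remove the objects this demo wrote to S3 (the uploaded CSVs and the Clarify outputs under the demo prefix), a sketch using boto3 follows; double-check the bucket and prefix before deleting anything.

# Optional: delete everything this demo wrote under the S3 prefix.
boto3.resource("s3").Bucket(bucket).objects.filter(Prefix=prefix).delete()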

Notebook CI Test Results

This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.

[CI status badges: us-east-1, us-east-2, us-west-1, ca-central-1, sa-east-1, eu-west-1, eu-west-2, eu-west-3, eu-central-1, eu-north-1, ap-southeast-1, ap-southeast-2, ap-northeast-1, ap-northeast-2, ap-south-1]