Explaining text sentiment analysis using SageMaker Clarify

  1. Overview

  2. Prerequisites and Data

    1. Initialize SageMaker

    1. Loading the data: Women’s E-Commerce Clothing Reviews Dataset

    3. Data preparation for model training

  3. Train and Deploy Hugging Face Model

    1. Train model with Hugging Face estimator

    2. Deploy Model to Endpoint

  4. Model Explainability with SageMaker Clarify for text features

    1. Explaining Predictions

    2. Visualize local explanations

    3. Clean Up

Overview

Amazon SageMaker Clarify helps improve your machine learning models by detecting potential bias and helping explain how these models make predictions. The fairness and explainability functionality provided by SageMaker Clarify takes a step towards enabling AWS customers to build trustworthy and understandable machine learning models. The product comes with the tools to help you with the following tasks.

  • Measure biases that can occur during each stage of the ML lifecycle (data collection, model training and tuning, and monitoring of ML models deployed for inference).

  • Generate model governance reports targeting risk and compliance teams and external regulators.

  • Provide explanations of the data, models, and monitoring used to assess predictions for input containing data of various modalities like numerical data, categorical data, text, and images.

Learn more about SageMaker Clarify here. This sample notebook walks you through:

  1. Key terms and concepts needed to understand SageMaker Clarify

  2. The incremental updates required to explain text features, along with other tabular features

  3. Explaining the importance of the various new input features on the model’s decision

In doing so, the notebook first trains a Hugging Face model with the Hugging Face Estimator in the SageMaker Python SDK on the training dataset, then uses SageMaker Clarify to analyze a test dataset in CSV format, and finally visualizes the results.

Prerequisites and Data

We require the following AWS resources to be able to successfully run this notebook:

  1. Kernel: Python 3 (Data Science) kernel on SageMaker Studio or conda_python3 kernel on notebook instances

  2. Instance type: Any GPU instance. Here, we use ml.g4dn.xlarge

  3. SageMaker Python SDK version 2.70.0 or greater

  4. Transformers version 4.6.1 or greater

  5. Datasets version 1.6.2 or greater

[ ]:
!pip --quiet install "transformers==4.6.1" "datasets[s3]==1.6.2" "captum" --upgrade

Next, let’s install the latest versions of the SageMaker Python SDK, botocore, boto3, and the AWS CLI.

[ ]:
! pip install sagemaker botocore boto3 awscli --upgrade
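
After the installs complete (you may need to restart the kernel so the upgraded packages are picked up), the quick check below confirms that the environment meets the version requirements listed above.

[ ]:
# Verify that the installed versions satisfy the prerequisites
import sagemaker, transformers, datasets

print("sagemaker:", sagemaker.__version__)  # expected 2.70.0 or greater
print("transformers:", transformers.__version__)  # expected 4.6.1
print("datasets:", datasets.__version__)  # expected 1.6.2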

Initialize SageMaker

[ ]:
# Import libraries for data loading and pre-processing
import os
import numpy as np
import pandas as pd
import json
import botocore
import sagemaker
import tarfile
from datetime import datetime

from sagemaker.huggingface import HuggingFace
from sagemaker.pytorch import PyTorchModel
from sagemaker import get_execution_role, clarify
from captum.attr import visualization
from sklearn.model_selection import train_test_split
from datasets import Dataset
from datasets.filesystems import S3FileSystem

# The SageMaker session bucket is used to upload the dataset, model, and model training logs
sess = sagemaker.Session()
region = sess.boto_region_name
bucket = sess.default_bucket()
prefix = "sagemaker/DEMO-sagemaker-clarify-text"

# Define the IAM role
role = sagemaker.get_execution_role()

# SageMaker Clarify model directory name
model_path = "model/"

If you change the value of the model_path variable above, be sure to update model_path in the code/inference.py script as well.
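
The handler code in code/inference.py is not reproduced in this notebook. As a rough, illustrative sketch only (assuming the standard SageMaker PyTorch serving interface of model_fn/predict_fn and a Hugging Face sequence classification model; the actual script may differ), it could look roughly like this:

[ ]:
# Illustrative sketch -- not the actual contents of code/inference.py.
# A SageMaker PyTorch serving script implements model_fn and, optionally,
# input_fn/predict_fn/output_fn. Note how the model is loaded from the "model/"
# subdirectory, which is why model_path above must stay in sync with the script.
import os

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer


def model_fn(model_dir):
    # model_dir is the unpacked model archive; the fine-tuned weights live under model/
    model_location = os.path.join(model_dir, "model/")
    tokenizer = AutoTokenizer.from_pretrained(model_location)
    model = AutoModelForSequenceClassification.from_pretrained(model_location)
    model.eval()
    return model, tokenizer


def predict_fn(input_data, model_and_tokenizer):
    # input_data: a list of review strings (CSV parsing in input_fn/output_fn is omitted here)
    model, tokenizer = model_and_tokenizer
    inputs = tokenizer(input_data, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Return the probability of the positive class for each review
    return torch.softmax(logits, dim=1)[:, 1].tolist()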

Loading the data: Women’s E-Commerce Clothing Reviews Dataset

Download the dataset

Data Source: https://www.kaggle.com/nicapotato/womens-ecommerce-clothing-reviews/ The Women’s E-Commerce Clothing Reviews dataset has been made available under a Creative Commons Public Domain license. A copy of the dataset has been saved in a sample data Amazon S3 bucket. In this section of the notebook, we walk through how to download the data and prepare it for model training.

[ ]:
! curl https://sagemaker-sample-files.s3.amazonaws.com/datasets/tabular/womens_clothing_ecommerce/Womens_Clothing_E-Commerce_Reviews.csv > womens_clothing_reviews_dataset.csv

Load the dataset

[ ]:
df = pd.read_csv("womens_clothing_reviews_dataset.csv", index_col=[0])
df.head()

Context

The Women’s Clothing E-Commerce dataset contains reviews written by customers. Because the dataset contains real commercial data, it has been anonymized, and any references to the company in the review text and body have been replaced with “retailer”.

Content

The dataset contains 23486 rows and 10 columns. Each row corresponds to a customer review.

The columns include:

  • Clothing ID: Integer Categorical variable that refers to the specific piece being reviewed.

  • Age: Positive Integer variable of the reviewer’s age.

  • Title: String variable for the title of the review.

  • Review Text: String variable for the review body.

  • Rating: Positive Ordinal Integer variable for the product score granted by the customer from 1 Worst, to 5 Best.

  • Recommended IND: Binary variable stating whether the customer recommends the product, where 1 is recommended and 0 is not recommended.

  • Positive Feedback Count: Positive Integer documenting the number of other customers who found this review positive.

  • Division Name: Categorical name of the product’s high-level division.

  • Department Name: Categorical name of the product department.

  • Class Name: Categorical name of the product class.

Goal

The goal is to predict the sentiment of a review based on its text, and then explain the predictions using SageMaker Clarify.

Data preparation for model training

Target Variable Creation

Since the dataset does not contain a column that indicates the sentiment of the customer reviews, let’s create one. To do this, let’s assume that reviews with a Rating of 4 or higher indicate positive sentiment and reviews with a Rating of 2 or lower indicate negative sentiment. Let’s also assume that a Rating of 3 indicates neutral sentiment and exclude these rows from the dataset. Additionally, because we are going to predict the sentiment of a review from the Review Text column, let’s remove rows with an empty Review Text.

[ ]:
def create_target_column(df, min_positive_score, max_negative_score):
    # Drop rows with a neutral rating (strictly between the negative and positive cut-offs)
    neutral_values = [i for i in range(max_negative_score + 1, min_positive_score)]
    for neutral_value in neutral_values:
        df = df[df["Rating"] != neutral_value]
    # Label the remaining rows: 1 for positive sentiment, 0 for negative sentiment
    df["Sentiment"] = df["Rating"] >= min_positive_score
    replace_dict = {True: 1, False: 0}
    df["Sentiment"] = df["Sentiment"].map(replace_dict)
    return df


df = create_target_column(df, 4, 2)
df = df[~df["Review Text"].isna()]

Train-Validation-Test splits

The most common approach for model evaluation is the train/validation/test split. Although this approach is effective in general, it can produce misleading results, and potentially fail, on classification problems with a severe class imbalance. Instead, the sampling must be stratified by the class label, as shown below. Stratification ensures that all classes are well represented across the train, validation, and test datasets.

[ ]:
target = "Sentiment"
cols = "Review Text"

X = df[cols]
y = df[target]

# Data split: the validation set is 11% of the remaining 90% (after the test split) ~ 10% of the full dataset, resulting in an 80:10:10 split
test_dataset_size = 0.10
val_dataset_size = 0.11
RANDOM_STATE = 42

# Stratified train-val-test split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=test_dataset_size, stratify=y, random_state=RANDOM_STATE
)
X_train, X_val, y_train, y_val = train_test_split(
    X_train, y_train, test_size=val_dataset_size, stratify=y_train, random_state=RANDOM_STATE
)

print(
    "Dataset: train ",
    X_train.shape,
    y_train.shape,
    y_train.value_counts(dropna=False, normalize=True).to_dict(),
)
print(
    "Dataset: validation ",
    X_val.shape,
    y_val.shape,
    y_val.value_counts(dropna=False, normalize=True).to_dict(),
)
print(
    "Dataset: test ",
    X_test.shape,
    y_test.shape,
    y_test.value_counts(dropna=False, normalize=True).to_dict(),
)

# Combine the independent columns with the label
df_train = pd.concat([X_train, y_train], axis=1).reset_index(drop=True)
df_test = pd.concat([X_test, y_test], axis=1).reset_index(drop=True)
df_val = pd.concat([X_val, y_val], axis=1).reset_index(drop=True)

We have split the dataset into train, test, and validation datasets. We use the train and validation datasets during the training process, and run Clarify on the test dataset.

In the cell below, we convert the Pandas DataFrames into Hugging Face Datasets for downstream modeling.

[ ]:
train_dataset = Dataset.from_pandas(df_train)
# The validation split serves as the evaluation ("test") channel during training;
# df_test is held out for the Clarify analysis later in the notebook.
test_dataset = Dataset.from_pandas(df_val)

Upload the prepared datasets to S3

Here, we upload the prepared datasets to an S3 bucket so that we can train the model with the Hugging Face Estimator.

[ ]:
# S3 key prefix for the datasets
s3_prefix = "samples/datasets/womens_clothing_ecommerce_reviews"
s3 = S3FileSystem()

# save train_dataset to s3
training_input_path = f"s3://{sess.default_bucket()}/{s3_prefix}/train"
train_dataset.save_to_disk(training_input_path, fs=s3)

# save test_dataset to s3
test_input_path = f"s3://{sess.default_bucket()}/{s3_prefix}/test"
test_dataset.save_to_disk(test_input_path, fs=s3)

Train and Deploy Hugging Face Model

In this step of the workflow, we use the Hugging Face Estimator to load the pre-trained distilbert-base-uncased model and fine-tune the model on our dataset.

The hyperparameters defined below are passed to the custom PyTorch training code in the scripts/train.py script. The only required parameter is model_name. The other parameters, such as epochs and train_batch_size, have default values that can be overridden by setting their values here.
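
The training script itself is not shown in the notebook. As a rough, illustrative sketch only (the actual scripts/train.py may differ in argument names and Trainer settings), fine-tuning with the Hugging Face Trainer could look roughly like this:

[ ]:
# Illustrative sketch of a scripts/train.py-style training script -- not the actual file.
# SageMaker passes the hyperparameters as command-line arguments and the channel
# locations and model directory via SM_CHANNEL_* / SM_MODEL_DIR environment variables.
import argparse
import os

from datasets import load_from_disk
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--model_name", type=str)
    parser.add_argument("--epochs", type=int, default=1)
    parser.add_argument("--train_batch_size", type=int, default=32)
    args, _ = parser.parse_known_args()

    # Datasets saved with save_to_disk above are loaded back from the channel directories
    train_dataset = load_from_disk(os.environ["SM_CHANNEL_TRAIN"])
    eval_dataset = load_from_disk(os.environ["SM_CHANNEL_TEST"])

    tokenizer = AutoTokenizer.from_pretrained(args.model_name)

    def tokenize(batch):
        return tokenizer(batch["Review Text"], padding="max_length", truncation=True)

    train_dataset = train_dataset.map(tokenize, batched=True).rename_column("Sentiment", "labels")
    eval_dataset = eval_dataset.map(tokenize, batched=True).rename_column("Sentiment", "labels")

    model = AutoModelForSequenceClassification.from_pretrained(args.model_name, num_labels=2)

    training_args = TrainingArguments(
        output_dir=os.environ["SM_MODEL_DIR"],
        num_train_epochs=args.epochs,
        per_device_train_batch_size=args.train_batch_size,
    )
    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=train_dataset,
        eval_dataset=eval_dataset,
    )
    trainer.train()

    # Save the fine-tuned model and tokenizer so they end up in model.tar.gz
    trainer.save_model(os.environ["SM_MODEL_DIR"])
    tokenizer.save_pretrained(os.environ["SM_MODEL_DIR"])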

[ ]:
# Hyperparameters passed into the training job
hyperparameters = {"epochs": 1, "model_name": "distilbert-base-uncased"}

huggingface_estimator = HuggingFace(
    entry_point="train.py",
    source_dir="scripts",
    instance_type="ml.g4dn.xlarge",
    instance_count=1,
    transformers_version="4.6.1",
    pytorch_version="1.7.1",
    py_version="py36",
    role=role,
    hyperparameters=hyperparameters,
)

# starting the train job with our uploaded datasets as input
huggingface_estimator.fit({"train": training_input_path, "test": test_input_path})
[ ]:
! aws s3 cp {huggingface_estimator.model_data} model.tar.gz
! mkdir -p {model_path}
! tar -xvf model.tar.gz -C  {model_path}/

We are going to use the trained model files along with the PyTorch Inference container to deploy the model to a SageMaker endpoint.

[ ]:
with tarfile.open("hf_model.tar.gz", mode="w:gz") as archive:
    archive.add(model_path, recursive=True)
    archive.add("code/")
prefix = s3_prefix.split("/")[-1]
zipped_model_path = sess.upload_data(path="hf_model.tar.gz", key_prefix=prefix + "/hf-model-sm")
[ ]:
model_name = "womens-ecommerce-reviews-model-{}".format(
    datetime.now().strftime("%d-%m-%Y-%H-%M-%S")
)
endpoint_name = "womens-ecommerce-reviews-endpoint-{}".format(
    datetime.now().strftime("%d-%m-%Y-%H-%M-%S")
)
[ ]:
model = PyTorchModel(
    entry_point="inference.py",
    name=model_name,
    model_data=zipped_model_path,
    role=get_execution_role(),
    framework_version="1.7.1",
    py_version="py3",
)
predictor = model.deploy(
    initial_instance_count=1, instance_type="ml.g4dn.xlarge", endpoint_name=endpoint_name
)

Test the model endpoint

Let’s test the model endpoint to ensure that deployment was successful.

[ ]:
test_sentence1 = "A very versatile and cozy top. would look great dressed up or down for a casual comfy fall day. what a fun piece for my wardrobe!"
test_sentence2 = "Love the color! very soft. unique look. can't wait to wear it this fall"
test_sentence3 = (
    "These leggings are loose fitting and the quality is just not there.. i am returning the item."
)
test_sentence4 = "Very disappointed the back of this blouse is plain, not as displayed."

predictor = sagemaker.predictor.Predictor(endpoint_name, sess)
predictor.serializer = sagemaker.serializers.CSVSerializer()
predictor.deserializer = sagemaker.deserializers.CSVDeserializer()
predictor.predict([[test_sentence1], [test_sentence2], [test_sentence3], [test_sentence4]])

Model Explainability with SageMaker Clarify for text features

Now that the model is deployed and we are able to get predictions, we are ready to get explanations for text data from a Clarify processing job. For a detailed example that showcases how to use the Clarify processing job, please refer to this example; here we focus on how to get explanations for text data from Clarify.

In the cell below, we create the CSV file that is passed to the Clarify job as its dataset. We use 10 samples here to keep the job fast, but the entire dataset could be used. We also filter out any reviews with fewer than 500 characters, since long reviews provide better visualizations at sentence-level granularity (when granularity is sentence, each sentence is a feature, and we want a few sentences per review for a good visualization).

[ ]:
file_path = "clarify_data.csv"
num_examples = 10

df_test["len"] = df_test["Review Text"].apply(lambda ele: len(ele))

df_test_clarify = pd.DataFrame(
    df_test[df_test["len"] > 500].sample(n=num_examples, random_state=RANDOM_STATE),
    columns=["Review Text"],
)
df_test_clarify.to_csv(file_path, header=True, index=False)
df_test_clarify

There are expanding business needs and legislative regulations that require explanations of why a model made the decision it did. SageMaker Clarify uses SHAP to explain the contribution that each input feature makes to the final decision.

How does the Kernel SHAP algorithm work? Kernel SHAP is a local explanation method; that is, it explains one instance (row) of the dataset at a time. To explain an instance, it perturbs the feature values: it changes the values of some features to a baseline (or non-informative) value and then gets predictions from the model for the perturbed samples. It does this a number of times per instance (determined by the optional num_samples parameter in SHAPConfig) and computes the importance of each feature based on how the model prediction changed.

We are now extending this functionality to text data. In order to explain text, we need the TextConfig. The TextConfig is an optional parameter of SHAPConfig, which you need to provide if you want explanations for the text features in your dataset. TextConfig in turn takes three parameters:

  1. granularity (required): To explain text features, Clarify further breaks down text into smaller text units and considers each such text unit as a feature. The parameter granularity informs the level to which Clarify will break down the text: token, sentence, or paragraph are the allowed values.

  2. language (required): the language of the text features. This is required to tokenize the text into its granular units.

  3. max_top_tokens (optional): the number of top token attributions that will be shown in the output (needed because the vocabulary can be very large). This parameter defaults to 50.

The Kernel SHAP algorithm requires a baseline (also known as a background dataset). For tabular features, the baseline value(s) for a feature is ideally a non-informative or least informative value for that feature. For text features, however, the baseline value is the value you want to replace each individual text unit (token, sentence, or paragraph) with. For instance, in the example below, we have chosen the baseline value for Review Text as <UNK>, and granularity is sentence. Every time a sentence has to be replaced in a perturbed input, it is replaced with <UNK>.
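
To make the perturbation idea concrete, the toy snippet below shows what the perturbed copies of a three-sentence review look like when each sentence is treated as a feature and masked-out sentences are replaced with the baseline token. This is only an illustration of the mechanism, not Clarify code.

[ ]:
# Toy illustration of sentence-level perturbation (not Clarify internals).
# Each sentence is a feature; a coalition mask decides which sentences keep their
# original text and which are replaced by the baseline token.
import itertools

review = ["Love the color!", "Very soft.", "Can't wait to wear it this fall."]
baseline_token = "<UNK>"

for mask in itertools.product([0, 1], repeat=len(review)):
    perturbed = " ".join(s if keep else baseline_token for s, keep in zip(review, mask))
    print(mask, "->", perturbed)

# Kernel SHAP sends such perturbed texts to the model endpoint and fits a weighted
# linear model on the (mask, prediction) pairs to estimate each sentence's attribution.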

If a baseline is not provided, SageMaker Clarify calculates one automatically for tabular features using K-means or K-prototypes on the input dataset. For text features, if a baseline is not provided, the default replacement value is the string <PAD>.

[ ]:
clarify_processor = clarify.SageMakerClarifyProcessor(
    role=role, instance_count=1, instance_type="ml.m5.xlarge", sagemaker_session=sess
)

model_config = clarify.ModelConfig(
    model_name=model_name,
    instance_type="ml.m5.xlarge",
    instance_count=1,
    accept_type="text/csv",
    content_type="text/csv",
)

explainability_output_path = "s3://{}/{}/clarify-text-explainability".format(bucket, prefix)
explainability_data_config = clarify.DataConfig(
    s3_data_input_path=file_path,
    s3_output_path=explainability_output_path,
    headers=["Review Text"],
    dataset_type="text/csv",
)
[ ]:
shap_config = clarify.SHAPConfig(
    baseline=[["<UNK>"]],
    num_samples=1000,
    agg_method="mean_abs",
    save_local_shap_values=True,
    text_config=clarify.TextConfig(granularity="sentence", language="english"),
)
[ ]:
# Running the Clarify explainability job involves spinning up a processing job and a model endpoint, which may take a few minutes.
# After that, you will see a progress bar for the SHAP computation.
# The size of the dataset (num_examples) and num_samples for SHAP affect the running time.
clarify_processor.run_explainability(
    data_config=explainability_data_config,
    model_config=model_config,
    explainability_config=shap_config,
)

We use Captum to visualize the feature importances computed by Clarify. First, let’s load the local explanations. Local text explanations can be found in the analysis results folder, in a file named out.jsonl in the explanations_shap directory.

[ ]:
local_feature_attributions_file = "out.jsonl"

# Download a local copy of the explanations file and read its contents from S3
sagemaker.s3.S3Downloader.download(
    explainability_output_path + "/explanations_shap/" + local_feature_attributions_file,
    local_path="./",
)

shap_out = []
file = sagemaker.s3.S3Downloader.read_file(
    explainability_output_path + "/explanations_shap/" + local_feature_attributions_file
)
for line in file.split("\n"):
    if line:
        shap_out.append(json.loads(line))

The local explanations file is a JSON Lines file that contains the explanation of one instance per line. Let’s examine the output format of the explanations.

[ ]:
print(json.dumps(shap_out[0], indent=2))

At the highest level of each JSON line, there are two keys: explanations and join_source_value (the latter is not present here because we did not include a joinsource column in the input dataset). explanations contains a list of attributions, one per feature in the dataset. In this case there is a single element, because the input dataset has a single feature. Each element also contains details like feature_name and the data_type of the feature (indicating whether Clarify inferred the column as numerical, categorical, or text). Each attribution also contains a description field with the text unit itself and its starting index in the original input, which allows you to reconstruct the original sentence from the output.
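
For orientation, the skeleton below mirrors the structure that the parsing code in the next cell relies on. The values are placeholders, and key names marked in the comments are assumptions rather than documented field names.

[ ]:
# Schematic of one out.jsonl record (placeholder values, for orientation only)
schematic_record = {
    "explanations": [
        {
            "feature_name": "Review Text",
            "data_type": "...",  # how Clarify inferred the column (text, in this case)
            "attributions": [
                {
                    "attribution": [0.12],  # SHAP value for this sentence
                    "description": {
                        "partial_text": "A very versatile and cozy top.",  # the sentence itself
                        "start_idx": 0,  # starting index in the original review (key name assumed)
                    },
                },
                # ... one entry per sentence in the review
            ],
        }
    ]
}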

In the following block, we create a list of attributions and a list of tokens for use in visualizations.

[ ]:
attributions_dataset = [
    np.array([attr["attribution"][0] for attr in expl["explanations"][0]["attributions"]])
    for expl in shap_out
]
tokens_dataset = [
    np.array(
        [attr["description"]["partial_text"] for attr in expl["explanations"][0]["attributions"]]
    )
    for expl in shap_out
]

We obtain predictions as well so that they can be displayed alongside the feature attributions.

[ ]:
preds = predictor.predict([t for t in df_test_clarify.values])
[ ]:
# This method is a wrapper around Captum that helps produce visualizations for local explanations. It
# visualizes the attributions for the tokens, with red and green colors for negative and positive attributions.
def visualization_record(
    attributions,  # list of attributions for the tokens
    text,  # list of tokens
    pred,  # the prediction value obtained from the endpoint
    delta,
    true_label,  # the true label from the dataset
    normalize=True,  # normalizes the attributions so that the max absolute value is 1. Yields stronger colors.
    max_frac_to_show=0.05,  # what fraction of tokens to highlight, set to 1 for all.
    match_to_pred=False,  # whether to limit highlights to red for negative predictions and green for positive ones.
    # By enabling `match_to_pred` you show what tokens contribute to a high/low prediction not those that oppose it.
):
    if normalize:
        attributions = attributions / max(max(attributions), max(-attributions))
    if max_frac_to_show is not None and max_frac_to_show < 1:
        num_show = int(max_frac_to_show * attributions.shape[0])
        sal = attributions
        if pred < 0.5:
            sal = -sal
        if not match_to_pred:
            sal = np.abs(sal)
        top_idxs = np.argsort(-sal)[:num_show]
        mask = np.zeros_like(attributions)
        mask[top_idxs] = 1
        attributions = attributions * mask
    return visualization.VisualizationDataRecord(
        attributions,
        pred,
        int(pred > 0.5),
        true_label,
        attributions.sum() > 0,
        attributions.sum(),
        text,
        delta,
    )
[ ]:
# You can customize the following display settings
normalize = True
max_frac_to_show = 1
match_to_pred = False
# Recover the true labels for the rows that were sampled into df_test_clarify above
labels = df_test.loc[df_test_clarify.index, "Sentiment"].tolist()
vis = []
for attr, token, pred, label in zip(attributions_dataset, tokens_dataset, preds, labels):
    vis.append(
        visualization_record(
            attr, token, float(pred[0]), 0.0, label, normalize, max_frac_to_show, match_to_pred
        )
    )

Now that we have compiled the records, we are ready to render the visualization.

We see one row per review in the selected dataset. For each row, we have the prediction, the true label, and the highlighted text. Additionally, we show the total sum of attributions (the attribution score) and its label (the attribution label), which indicates whether the sum is greater than zero.

[ ]:
_ = visualization.visualize_text(vis)

Finally, please remember to delete the Amazon SageMaker endpoint to avoid charges:

[ ]:
predictor.delete_endpoint()