Amazon SageMaker Clarify Model Explainability Monitor - JSON Lines Format
This notebook’s CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook.
Runtime
This notebook takes approximately 60 minutes to run.
Contents
Introduction
Amazon SageMaker Model Monitor continuously monitors the quality of Amazon SageMaker machine learning models in production. It enables developers to set alerts for when there are deviations in the model quality. Early and pro-active detection of these deviations enables corrective actions, such as retraining models, auditing upstream systems, or fixing data quality issues without having to monitor models manually or build additional tooling.
Amazon SageMaker Clarify Model Explainability Monitor is a model monitor that helps data scientists and ML engineers monitor predictions for feature attribution drift on a regular basis. A drift in the distribution of live data for models in production can result in a corresponding drift in the feature attribution values. As the model is monitored, customers can view exportable reports and graphs detailing feature attributions in SageMaker Studio and configure alerts in Amazon CloudWatch to receive notifications if it is detected that the attribution values drift beyond a certain threshold.
This notebook demonstrates the process for setting up a model monitor for continuous monitoring of feature attribution drift of a SageMaker real-time inference endpoint. The model input and output are in SageMaker JSON Lines dense format. SageMaker Clarify model monitor also supports analyzing CSV data, which is illustrated in another notebook.
In general, you can use the model explainability monitor for a real-time inference endpoint in the following way:
Enable the endpoint for data capture. Then, when the customer invokes the endpoint, the endpoint saves the invocations to a data capture S3 location.
Schedule a model explainability monitor to monitor the endpoint (more specifically, the data capture S3 location).
The monitor runs processing jobs on a regular basis to analyze feature attributions, generate analysis reports, and publish metrics to CloudWatch.
General Setup
The notebook uses the SageMaker Python SDK. The following cell upgrades the SDK and its dependencies. If the notebook is executed in SageMaker Studio, you may then need to restart the kernel and rerun the notebook to pick up the upgraded APIs.
[ ]:
!pip install -U sagemaker
!pip install -U boto3
!pip install -U botocore
Imports
The following cell imports the APIs to be used by the notebook.
[2]:
import sagemaker
import pandas as pd
import copy
import datetime
import json
import random
import threading
import time
import pprint
A handful of configuration settings
To begin, ensure that these prerequisites have been completed.
Specify an AWS Region to host the model.
Specify an IAM role to execute jobs.
Define the S3 URIs that store the model file, input data, and output data. For demonstration purposes, this notebook uses the same bucket for all of them. In practice, they could be separate buckets with different security policies.
[3]:
sagemaker_session = sagemaker.Session()
region = sagemaker_session.boto_region_name
print(f"AWS region: {region}")
role = sagemaker.get_execution_role()
print(f"RoleArn: {role}")
# A different bucket can be used, but make sure the role for this notebook has
# the s3:PutObject permissions. This is the bucket into which the data is captured
bucket = sagemaker_session.default_bucket()
print(f"Demo Bucket: {bucket}")
prefix = sagemaker.utils.unique_name_from_base("sagemaker/DEMO-ClarifyModelMonitor")
print(f"Demo Prefix: {prefix}")
s3_key = f"s3://{bucket}/{prefix}"
print(f"Demo S3 key: {s3_key}")
data_capture_s3_uri = f"{s3_key}/data-capture"
baselining_output_s3_uri = f"{s3_key}/baselining-output"
monitor_output_s3_uri = f"{s3_key}/monitor-output"
print(f"The endpoint will save the captured data to: {data_capture_s3_uri}")
print(f"The baselining job will save the analysis results to: {baselining_output_s3_uri}")
print(f"The monitor will save the analysis results to: {monitor_output_s3_uri}")
AWS region: us-west-2
RoleArn: arn:aws:iam::000000000000:role/service-role/AmazonSageMaker-ExecutionRole-20200714T163791
Demo Bucket: sagemaker-us-west-2-000000000000
Demo Prefix: sagemaker/DEMO-ClarifyModelMonitor-1674106123-4d04
Demo S3 key: s3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-ClarifyModelMonitor-1674106123-4d04
The endpoint will save the captured data to: s3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-ClarifyModelMonitor-1674106123-4d04/data-capture
The baselining job will save the analysis results to: s3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-ClarifyModelMonitor-1674106123-4d04/baselining-output
The monitor will save the analysis results to: s3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-ClarifyModelMonitor-1674106123-4d04/monitor-output
Model file and data files
This example includes a prebuilt SageMaker Linear Learner model trained by a SageMaker Clarify offline processing example notebook. The model supports the SageMaker JSON Lines dense format (MIME type "application/jsonlines").
The model input can have one or more lines; each line is a JSON object with a “features” key pointing to a list of feature values describing demographic characteristics of an individual. For example:
{"features":[28,2,133937,9,13,2,0,0,4,1,15024,0,55,37]}
{"features":[43,2,72338,12,14,2,12,0,1,1,0,0,40,37]}
The model output contains predictions of whether a person has a yearly income of more than $50,000. Each prediction is a JSON object with a “predicted_label” key pointing to the predicted label and a “score” key pointing to the confidence score. For example:
{"predicted_label":1,"score":0.989977359771728}
{"predicted_label":1,"score":0.504138827323913}
[4]:
model_file = "model/ll-adult-prediction-model.tar.gz"
This example includes two dataset files, both in the JSON Lines format. The data also originates from the SageMaker Clarify offline processing example notebook.
[5]:
train_dataset_path = "test_data/validation-dataset.jsonl"
test_dataset_path = "test_data/test-dataset.jsonl"
dataset_type = "application/jsonlines"
The training dataset has the features and the ground truth label (under the key “label”):
[6]:
!head -n 5 $train_dataset_path
{"features":[41,2,220531,14,15,2,9,0,4,1,0,0,60,38],"label":1}
{"features":[33,2,35378,9,13,2,11,5,4,0,0,0,45,38],"label":1}
{"features":[36,2,223433,12,14,2,11,0,4,1,7688,0,50,38],"label":1}
{"features":[40,2,220589,7,12,4,0,1,4,0,0,0,40,38],"label":0}
{"features":[30,2,231413,15,10,2,2,0,4,1,0,0,40,38],"label":1}
The test dataset only has features.
[7]:
!head -n 5 $test_dataset_path
{"features":[28,2,133937,9,13,2,0,0,4,1,15024,0,55,37]}
{"features":[43,2,72338,12,14,2,12,0,1,1,0,0,40,37]}
{"features":[34,2,162604,11,9,4,2,2,2,1,0,0,40,37]}
{"features":[20,2,258509,11,9,4,6,3,2,1,0,0,40,37]}
{"features":[27,2,446947,9,13,4,0,4,2,0,0,0,55,37]}
Here are the headers of the training dataset. “Target” is the header of the ground truth label, and the others are the feature headers. They are used to make the analysis report easier to read.
[8]:
all_headers = [
"Age",
"Workclass",
"fnlwgt",
"Education",
"Education-Num",
"Marital Status",
"Occupation",
"Relationship",
"Ethnic group",
"Sex",
"Capital Gain",
"Capital Loss",
"Hours per week",
"Country",
"Target",
]
label_header = all_headers[-1]
To verify that the execution role for this notebook has the necessary permissions to proceed, put a simple test object into the S3 bucket specified above. If this command fails, update the role to have s3:PutObject permission on the bucket and try again.
[9]:
sagemaker.s3.S3Uploader.upload_string_as_file_body(
body="hello",
desired_s3_uri=f"{s3_key}/upload-test-file.txt",
sagemaker_session=sagemaker_session,
)
print("Success! We are all set to proceed with uploading to S3.")
Success! We are all set to proceed with uploading to S3.
Then upload the files to S3 so that they can be used by SageMaker.
[10]:
model_url = sagemaker.s3.S3Uploader.upload(
local_path=model_file,
desired_s3_uri=s3_key,
sagemaker_session=sagemaker_session,
)
print(f"Model file has been uploaded to {model_url}")
train_data_s3_uri = sagemaker.s3.S3Uploader.upload(
local_path=train_dataset_path,
desired_s3_uri=s3_key,
sagemaker_session=sagemaker_session,
)
print(f"Train data is uploaded to: {train_data_s3_uri}")
test_data_s3_uri = sagemaker.s3.S3Uploader.upload(
local_path=test_dataset_path,
desired_s3_uri=s3_key,
sagemaker_session=sagemaker_session,
)
print(f"Test data is uploaded to: {test_data_s3_uri}")
Model file has been uploaded to s3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-ClarifyModelMonitor-1674106123-4d04/ll-adult-prediction-model.tar.gz
Train data is uploaded to: s3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-ClarifyModelMonitor-1674106123-4d04/validation-dataset.jsonl
Test data is uploaded to: s3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-ClarifyModelMonitor-1674106123-4d04/test-dataset.jsonl
Real-time Inference Endpoint
This section creates a SageMaker real-time inference endpoint to showcase the data capture capability in action. The model monitor will be scheduled for the endpoint and process the captured data.
Deploy the model to an endpoint
Start by deploying the pre-trained model. Here, create a SageMaker Model object with the inference image and model file, then deploy the model with the data capture configuration and wait until the endpoint is ready to serve traffic.
DataCaptureConfig enables capturing the request payload and the response payload of the endpoint. Payloads are typically treated as binary data and encoded in BASE64 by default, allowing them to be stored in capture data files. However, by specifying the data format in the json_content_types parameter as shown below, the payloads can be captured as plain text instead.
[11]:
model_name = sagemaker.utils.unique_name_from_base("DEMO-ll-adult-pred-model-monitor")
endpoint_name = model_name
print(f"SageMaker model name: {model_name}")
print(f"SageMaker endpoint name: {endpoint_name}")
image_uri = sagemaker.image_uris.retrieve("linear-learner", region, "1")
print(f"SageMaker Linear Learner image: {image_uri}")
model = sagemaker.model.Model(
role=role,
name=model_name,
image_uri=image_uri,
model_data=model_url,
sagemaker_session=sagemaker_session,
)
data_capture_config = sagemaker.model_monitor.DataCaptureConfig(
enable_capture=True,
sampling_percentage=100, # Capture 100% of the traffic
destination_s3_uri=data_capture_s3_uri,
json_content_types=[dataset_type],
)
SageMaker model name: DEMO-ll-adult-pred-model-monitor-1674106124-9611
SageMaker endpoint name: DEMO-ll-adult-pred-model-monitor-1674106124-9611
SageMaker Linear Learner image: 174872318107.dkr.ecr.us-west-2.amazonaws.com/linear-learner:1
NOTE: The following cell takes about 10 minutes to deploy the model.
[12]:
print(f"Deploying model {model_name} to endpoint {endpoint_name}")
model.deploy(
initial_instance_count=1,
instance_type="ml.m5.xlarge",
endpoint_name=endpoint_name,
data_capture_config=data_capture_config,
)
Deploying model DEMO-ll-adult-pred-model-monitor-1674106124-9611 to endpoint DEMO-ll-adult-pred-model-monitor-1674106124-9611
------!
Invoke the endpoint
Now send data to this endpoint to get inferences in real time. The model supports mini-batch predictions, so you can send one or more records in a single request.
[13]:
with open(test_dataset_path, "r") as f:
test_data = f.read().splitlines()
Example: Single record
Request payload:
[14]:
request_payload = test_data[0]
print(request_payload)
{"features":[28,2,133937,9,13,2,0,0,4,1,15024,0,55,37]}
Response payload:
[15]:
response = sagemaker_session.sagemaker_runtime_client.invoke_endpoint(
EndpointName=endpoint_name,
ContentType=dataset_type,
Accept=dataset_type,
Body=request_payload,
)
response_payload = response["Body"].read().decode("utf-8")
response_payload
[15]:
'{"predicted_label":1,"score":0.989977359771728}\n'
Example: Two records
Request payload:
[16]:
request_payload = "\n".join(test_data[:2])
request_payload
[16]:
'{"features":[28,2,133937,9,13,2,0,0,4,1,15024,0,55,37]}\n{"features":[43,2,72338,12,14,2,12,0,1,1,0,0,40,37]}'
Response payload:
[17]:
response = sagemaker_session.sagemaker_runtime_client.invoke_endpoint(
EndpointName=endpoint_name,
ContentType=dataset_type,
Accept=dataset_type,
Body=request_payload,
)
response_payload = response["Body"].read().decode("utf-8")
response_payload
[17]:
'{"predicted_label":1,"score":0.989977359771728}\n{"predicted_label":1,"score":0.504138827323913}\n'
View captured data
Because data capture is enabled in the previous steps, the request and response payload, along with some additional metadata, are saved in the Amazon S3 location specified in the DataCaptureConfig.
Now list the captured data files stored in Amazon S3. There should be different files from different time periods organized based on the hour in which the invocation occurred. The format of the Amazon S3 path is:
s3://{data_capture_s3_uri}/{endpoint_name}/{variant-name}/yyyy/mm/dd/hh/filename.jsonl
[18]:
print("Waiting for captured data to show up", end="")
for _ in range(120):
captured_data_files = sorted(
sagemaker.s3.S3Downloader.list(
s3_uri=f"{data_capture_s3_uri}/{endpoint_name}",
sagemaker_session=sagemaker_session,
)
)
if captured_data_files:
break
print(".", end="", flush=True)
time.sleep(1)
print()
print("Found capture data files:")
print("\n ".join(captured_data_files[-5:]))
Waiting for captured data to show up............................................................
Found capture data files:
s3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-ClarifyModelMonitor-1674106123-4d04/data-capture/DEMO-ll-adult-pred-model-monitor-1674106124-9611/AllTraffic/2023/01/19/05/31-46-585-9ee7358c-33e7-467d-9133-624e448e6552.jsonl
Next, view the content of a single capture file.
[19]:
captured_data = sagemaker.s3.S3Downloader.read_file(
s3_uri=captured_data_files[-1],
sagemaker_session=sagemaker_session,
)
print(captured_data)
{"captureData":{"endpointInput":{"observedContentType":"application/jsonlines","mode":"INPUT","data":"{\"features\":[28,2,133937,9,13,2,0,0,4,1,15024,0,55,37]}","encoding":"JSON"},"endpointOutput":{"observedContentType":"application/jsonlines","mode":"OUTPUT","data":"{\"predicted_label\":1,\"score\":0.989977359771728}\n","encoding":"JSON"}},"eventMetadata":{"eventId":"40394cfe-37c3-4abb-b22c-dd7accbe9608","inferenceTime":"2023-01-19T05:31:46Z"},"eventVersion":"0"}
{"captureData":{"endpointInput":{"observedContentType":"application/jsonlines","mode":"INPUT","data":"{\"features\":[28,2,133937,9,13,2,0,0,4,1,15024,0,55,37]}\n{\"features\":[43,2,72338,12,14,2,12,0,1,1,0,0,40,37]}","encoding":"JSON"},"endpointOutput":{"observedContentType":"application/jsonlines","mode":"OUTPUT","data":"{\"predicted_label\":1,\"score\":0.989977359771728}\n{\"predicted_label\":1,\"score\":0.504138827323913}\n","encoding":"JSON"}},"eventMetadata":{"eventId":"32aaea1b-1a60-4870-b81d-40271f950c4a","inferenceTime":"2023-01-19T05:31:46Z"},"eventVersion":"0"}
Finally, the content of a single line is shown below as formatted JSON for easier inspection.
captureData has two fields: endpointInput has the captured invocation request, and endpointOutput has the response.
eventMetadata has the inference ID and event ID.
[20]:
print(json.dumps(json.loads(captured_data.splitlines()[-1]), indent=4))
{
"captureData": {
"endpointInput": {
"observedContentType": "application/jsonlines",
"mode": "INPUT",
"data": "{\"features\":[28,2,133937,9,13,2,0,0,4,1,15024,0,55,37]}\n{\"features\":[43,2,72338,12,14,2,12,0,1,1,0,0,40,37]}",
"encoding": "JSON"
},
"endpointOutput": {
"observedContentType": "application/jsonlines",
"mode": "OUTPUT",
"data": "{\"predicted_label\":1,\"score\":0.989977359771728}\n{\"predicted_label\":1,\"score\":0.504138827323913}\n",
"encoding": "JSON"
}
},
"eventMetadata": {
"eventId": "32aaea1b-1a60-4870-b81d-40271f950c4a",
"inferenceTime": "2023-01-19T05:31:46Z"
},
"eventVersion": "0"
}
Start generating some artificial traffic
The cell below starts a thread to send some traffic to the endpoint. If there is no traffic, the monitoring jobs are marked as Failed since there is no data to process.
Note the InferenceId attribute used in the invocation. It can be used to join the captured data with ground truth data; if it is not provided, the eventId is used for the join instead.
[21]:
class WorkerThread(threading.Thread):
def __init__(self, do_run, *args, **kwargs):
super(WorkerThread, self).__init__(*args, **kwargs)
self.__do_run = do_run
self.__terminate_event = threading.Event()
def terminate(self):
self.__terminate_event.set()
def run(self):
while not self.__terminate_event.is_set():
self.__do_run(self.__terminate_event)
[22]:
def invoke_endpoint(terminate_event):
for index, record in enumerate(test_data):
response = sagemaker_session.sagemaker_runtime_client.invoke_endpoint(
EndpointName=endpoint_name,
ContentType=dataset_type,
Accept=dataset_type,
Body=record,
InferenceId=str(index), # unique ID per row
)
response["Body"].read()
time.sleep(1)
if terminate_event.is_set():
break
# Keep invoking the endpoint with test data
invoke_endpoint_thread = WorkerThread(do_run=invoke_endpoint)
invoke_endpoint_thread.start()
Model Explainability Monitor
As with the other monitoring types, the standard procedure for creating a feature attribution drift monitor is to first run a baselining job and then schedule the monitor.
[23]:
model_explainability_monitor = sagemaker.model_monitor.ModelExplainabilityMonitor(
role=role,
sagemaker_session=sagemaker_session,
max_runtime_in_seconds=3600,
)
Baselining job
A baselining job runs predictions on the training dataset and suggests constraints. The suggest_baseline() method of ModelExplainabilityMonitor starts a SageMaker Clarify processing job to generate the constraints.
This step is not mandatory, but providing a constraints file to the monitor enables the generation of a violations file.
Configurations
Information about the input data needs to be provided to the processor.
DataConfig stores information about the dataset to be analyzed: for example, the dataset file, its format (JSON Lines here), and where to store the analysis results. A few special things to note about this configuration for a JSON Lines dataset:
The parameter value "features" or "label" is NOT a header string. Instead, it is a JMESPath expression (refer to its specification) used to locate the features list or the ground truth label in the dataset (the ground truth label is not needed for the explainability analysis; the parameter is specified so that the job knows to exclude it from the dataset). In this example notebook they happen to be the same as the keys in the dataset. But, for example, if the dataset had records like the one below, then the features parameter should use the value "data.features.values" and the label parameter should use the value "data.label" (a sketch of the corresponding DataConfig appears after the configuration cell below).
{"data": {"features": {"values": [25, 2, 226802, 1, 7, 4, 6, 3, 2, 1, 0, 0, 40, 37]}, "label": 0}}
The SageMaker Clarify processing job loads the JSON Lines dataset into a tabular representation for further analysis. The headers parameter is the list of column names; the label header must be the last one in the list, and the order of the feature headers must match the order of the features in a record.
[24]:
features_jmespath = "features"
ground_truth_label_jmespath = "label"
data_config = sagemaker.clarify.DataConfig(
s3_data_input_path=train_data_s3_uri,
s3_output_path=baselining_output_s3_uri,
features=features_jmespath,
label=ground_truth_label_jmespath,
headers=all_headers,
dataset_type=dataset_type,
)
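For comparison, if the dataset used the nested record layout shown earlier, only the JMESPath expressions would change. The following is a hypothetical sketch that is not executed by this notebook (the nested dataset does not exist here, so the S3 URIs are reused only as placeholders):
# Hypothetical configuration for the nested record layout shown above (not used in this notebook).
nested_features_jmespath = "data.features.values"
nested_label_jmespath = "data.label"
nested_data_config = sagemaker.clarify.DataConfig(
    s3_data_input_path=train_data_s3_uri,  # placeholder; would point to a dataset in the nested format
    s3_output_path=baselining_output_s3_uri,
    features=nested_features_jmespath,
    label=nested_label_jmespath,
    headers=all_headers,
    dataset_type=dataset_type,
)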
ModelConfig is the configuration of the model used for inference. To compute SHAP values, the SageMaker Clarify explainer generates a synthetic dataset and then gets predictions for it from the SageMaker model. To accomplish this, the processing job uses the model to create an ephemeral endpoint (also known as a “shadow endpoint”) and deletes the shadow endpoint after the computations are completed. One special thing to note about this configuration for JSON Lines model input and output:
content_template is used by the SageMaker Clarify processing job to convert the tabular data into a request payload acceptable to the shadow endpoint. More specifically, the placeholder $features is replaced by the features list from each record. The request payload of a record from the test dataset happens to look like the record itself, e.g. {"features":[28,2,133937,9,13,2,0,0,4,1,15024,0,55,37]}, because both the dataset and the model input conform to the same format.
[25]:
content_template = '{"features":$features}'
model_config = sagemaker.clarify.ModelConfig(
model_name=model_name, # The name of the SageMaker model
instance_type="ml.m5.xlarge", # The instance type of the shadow endpoint
instance_count=1, # The instance count of the shadow endpoint
content_type=dataset_type, # The data format of the model input
accept_type=dataset_type, # The data format of the model output
content_template=content_template,
)
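As a conceptual illustration of what the $features substitution produces, the following sketch performs the replacement as a simple string operation; the actual conversion happens inside the Clarify processing job.
import json

# Sketch only: emulate the $features placeholder substitution for one record.
features = [28, 2, 133937, 9, 13, 2, 0, 0, 4, 1, 15024, 0, 55, 37]
payload_line = content_template.replace("$features", json.dumps(features, separators=(",", ":")))
print(payload_line)  # {"features":[28,2,133937,9,13,2,0,0,4,1,15024,0,55,37]}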
Currently, the SageMaker Clarify explainer offers a scalable and efficient implementation of SHAP, so the explainability config is SHAPConfig, including:
baseline: A list of records (at least one) to be used as the baseline dataset in the Kernel SHAP algorithm; each record is a JSON object that includes a list of features. It can also be an S3 object URI, in which case the S3 file should be in the same format as the dataset.
num_samples: Number of samples to be used in the Kernel SHAP algorithm. This number determines the size of the generated synthetic dataset used to compute the SHAP values.
agg_method: Aggregation method for global SHAP values. Valid values are “mean_abs” (mean of absolute SHAP values for all instances), “median” (median of SHAP values for all instances), and “mean_sq” (mean of squared SHAP values for all instances).
use_logit: Indicator of whether the logit function is to be applied to the model predictions. Default is False. If “use_logit” is True, the SHAP values have log-odds units.
save_local_shap_values: Indicator of whether to save the local SHAP values in the output location. Default is True.
[26]:
# Here use the mean value of train dataset as SHAP baseline
dataset = []
with open(train_dataset_path) as f:
dataset = [json.loads(row)["features"] for row in f]
mean_values = pd.DataFrame(dataset).mean().round().astype(int).to_list()
mean_record = {"features": mean_values}
shap_baseline = [mean_record]
print(f"SHAP baseline: {shap_baseline}")
shap_config = sagemaker.clarify.SHAPConfig(
baseline=shap_baseline,
num_samples=100,
agg_method="mean_abs",
save_local_shap_values=False,
)
SHAP baseline: [{'features': [39, 2, 184870, 10, 10, 3, 6, 1, 4, 1, 1597, 61, 41, 37]}]
Kick off baselining job
Call the suggest_baseline() method to start the baselining job. The model output has a key “score” pointing to a confidence score value between 0 and 1, so the model_scores parameter is set to the JMESPath expression “score”, which locates the score in the model output.
[27]:
confidence_score_jmespath = "score"
model_explainability_monitor.suggest_baseline(
explainability_config=shap_config,
data_config=data_config,
model_config=model_config,
model_scores=confidence_score_jmespath, # The JMESPath to locate the confidence score in model output
)
Job Name: baseline-suggestion-job-2023-01-19-05-32-51-896
Inputs: [{'InputName': 'dataset', 'AppManaged': False, 'S3Input': {'S3Uri': 's3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-ClarifyModelMonitor-1674106123-4d04/validation-dataset.jsonl', 'LocalPath': '/opt/ml/processing/input/data', 'S3DataType': 'S3Prefix', 'S3InputMode': 'File', 'S3DataDistributionType': 'FullyReplicated', 'S3CompressionType': 'None'}}, {'InputName': 'analysis_config', 'AppManaged': False, 'S3Input': {'S3Uri': 's3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-ClarifyModelMonitor-1674106123-4d04/baselining-output/analysis_config.json', 'LocalPath': '/opt/ml/processing/input/config', 'S3DataType': 'S3Prefix', 'S3InputMode': 'File', 'S3DataDistributionType': 'FullyReplicated', 'S3CompressionType': 'None'}}]
Outputs: [{'OutputName': 'analysis_result', 'AppManaged': False, 'S3Output': {'S3Uri': 's3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-ClarifyModelMonitor-1674106123-4d04/baselining-output', 'LocalPath': '/opt/ml/processing/output', 'S3UploadMode': 'EndOfJob'}}]
[27]:
<sagemaker.processing.ProcessingJob at 0x7f4ca93c6410>
NOTE: The following cell waits until the baselining job is completed (in about 10 minutes) and then inspects the suggested constraints. This step can be skipped, because the scheduled monitor automatically picks up the baselining job name and waits for it before the monitoring execution starts.
[28]:
model_explainability_monitor.latest_baselining_job.wait(logs=False)
print()
model_explainability_constraints = model_explainability_monitor.suggested_constraints()
print(f"Suggested constraints: {model_explainability_constraints.file_s3_uri}")
print(
sagemaker.s3.S3Downloader.read_file(
s3_uri=model_explainability_constraints.file_s3_uri,
sagemaker_session=sagemaker_session,
)
)
..................................................................................................!
Suggested constraints: s3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-ClarifyModelMonitor-1674106123-4d04/baselining-output/analysis.json
{
"version": "1.0",
"explanations": {
"kernel_shap": {
"label0": {
"global_shap_values": {
"Age": 0.05962069398767555,
"Workclass": 0.009340874120660063,
"fnlwgt": 0.0010900750377509304,
"Education": 0.014739126275038199,
"Education-Num": 0.09891391226656666,
"Marital Status": 0.05452765230404344,
"Occupation": 0.0025392834714334667,
"Relationship": 0.018169508641909988,
"Ethnic group": 0.005295263900463686,
"Sex": 0.032080828962127876,
"Capital Gain": 0.09913318680892579,
"Capital Loss": 0.013518474176382519,
"Hours per week": 0.03641124946588507,
"Country": 0.004894213349476741
},
"expected_value": 0.250623226165771
}
}
}
}
Monitoring Schedule
With the above constraints collected, now call the create_monitoring_schedule() method to schedule an hourly model explainability monitor.
If a baselining job has been submitted, the monitor object automatically picks up the analysis configuration from the baselining job. But if the baselining step is skipped, or if the captured data is of a different nature than the training dataset, the analysis configuration has to be provided explicitly.
ModelConfig is required by ExplainabilityAnalysisConfig for the same reason it is required by the baselining job. Note that only features are required for computing feature attributions, so the ground truth label should be excluded.
Highlights:
From endpoint_name the monitor can figure out the location of data captured by the endpoint.
features_attribute is the JMESPath expression to locate the features in the model input, similar to the features parameter of DataConfig.
inference_attribute stores the JMESPath expression to locate the confidence score in the model output, similar to the model_scores parameter of the suggest_baseline() method.
[29]:
schedule_expression = sagemaker.model_monitor.CronExpressionGenerator.hourly()
[30]:
# Remove label because only features are required for the analysis
headers_without_label_header = copy.deepcopy(all_headers)
headers_without_label_header.remove(label_header)
model_explainability_analysis_config = sagemaker.model_monitor.ExplainabilityAnalysisConfig(
explainability_config=shap_config,
model_config=model_config,
headers=headers_without_label_header,
)
model_explainability_monitor.create_monitoring_schedule(
analysis_config=model_explainability_analysis_config,
endpoint_input=sagemaker.model_monitor.EndpointInput(
endpoint_name=endpoint_name,
destination="/opt/ml/processing/input/endpoint",
features_attribute=features_jmespath,
inference_attribute=confidence_score_jmespath,
),
output_s3_uri=monitor_output_s3_uri,
schedule_cron_expression=schedule_expression,
)
print(
f"Model explainability monitoring schedule: {model_explainability_monitor.monitoring_schedule_name}"
)
Model explainability monitoring schedule: monitoring-schedule-2023-01-19-05-41-04-758
Wait for the first execution
The schedule starts jobs at the previously specified intervals. The code below waits until time crosses the hour boundary (in UTC) to see executions kick off.
Note: Even for an hourly schedule, Amazon SageMaker has a buffer period of up to 20 minutes to schedule executions. The execution might start anywhere from zero to ~20 minutes after the hour boundary. This is expected and is done for load balancing in the backend.
[31]:
def wait_for_execution_to_start(model_monitor):
print(
"An hourly schedule was created above and it will kick off executions ON the hour (plus 0 - 20 min buffer)."
)
print("Waiting for the first execution to happen", end="")
schedule_desc = model_monitor.describe_schedule()
while "LastMonitoringExecutionSummary" not in schedule_desc:
schedule_desc = model_monitor.describe_schedule()
print(".", end="", flush=True)
time.sleep(60)
print()
print("Done! Execution has been created")
print("Now waiting for execution to start", end="")
while schedule_desc["LastMonitoringExecutionSummary"]["MonitoringExecutionStatus"] == "Pending":
schedule_desc = model_monitor.describe_schedule()
print(".", end="", flush=True)
time.sleep(10)
print()
print("Done! Execution has started")
NOTE: The following cell waits until the first monitoring execution is started. As explained above, the wait could take more than 60 minutes.
[32]:
wait_for_execution_to_start(model_explainability_monitor)
An hourly schedule was created above and it will kick off executions ON the hour (plus 0 - 20 min buffer).
Waiting for the first execution to happen........................
Done! Execution has been created
Now waiting for execution to start.
Done! Execution has started
In the real world, a monitoring schedule is supposed to be active all the time. In this example, however, it can be stopped to avoid incurring extra charges. A stopped schedule does not trigger further executions, but any ongoing execution continues. If needed, the schedule can be restarted with start_monitoring_schedule() (see the sketch after the next cell).
[33]:
model_explainability_monitor.stop_monitoring_schedule()
Stopping Monitoring Schedule with name: monitoring-schedule-2023-01-19-05-41-04-758
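If monitoring needs to be resumed later, the stopped schedule can be restarted with a single call. It is shown here as a comment so it is not executed in this walkthrough:
# Restart the stopped schedule later if needed:
# model_explainability_monitor.start_monitoring_schedule()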
Wait for the execution to finish
In the previous cell, the first execution has started. This section waits for the execution to finish so that its analysis results are available. Here are the possible terminal states and what each of them means:
Completed - The monitoring execution completed, and no issues were found in the violations report.
CompletedWithViolations - The execution completed, but constraint violations were detected.
Failed - The monitoring execution failed, possibly due to a client error (for example, incorrect role permissions) or infrastructure issues. Further examination of FailureReason and ExitMessage is necessary to identify what exactly happened.
Stopped - The job exceeded the maximum runtime or was manually stopped.
[34]:
# Waits for the schedule to have last execution in a terminal status.
def wait_for_execution_to_finish(model_monitor):
schedule_desc = model_monitor.describe_schedule()
execution_summary = schedule_desc.get("LastMonitoringExecutionSummary")
if execution_summary is not None:
print("Waiting for execution to finish", end="")
while execution_summary["MonitoringExecutionStatus"] not in [
"Completed",
"CompletedWithViolations",
"Failed",
"Stopped",
]:
print(".", end="", flush=True)
time.sleep(60)
schedule_desc = model_monitor.describe_schedule()
execution_summary = schedule_desc["LastMonitoringExecutionSummary"]
print()
print(f"Done! Execution Status: {execution_summary['MonitoringExecutionStatus']}")
else:
print("Last execution not found")
NOTE: The following cell takes about 10 minutes.
[35]:
wait_for_execution_to_finish(model_explainability_monitor)
Waiting for execution to finish........
Done! Execution Status: Completed
Inspect execution results
List the generated reports:
analysis.json includes the global SHAP values.
report.* files are static report files to visualize the SHAP values.
[36]:
schedule_desc = model_explainability_monitor.describe_schedule()
execution_summary = schedule_desc.get("LastMonitoringExecutionSummary")
if execution_summary and execution_summary["MonitoringExecutionStatus"] in [
"Completed",
"CompletedWithViolations",
]:
last_model_explainability_monitor_execution = model_explainability_monitor.list_executions()[-1]
last_model_explainability_monitor_execution_report_uri = (
last_model_explainability_monitor_execution.output.destination
)
print(f"Report URI: {last_model_explainability_monitor_execution_report_uri}")
last_model_explainability_monitor_execution_report_files = sorted(
sagemaker.s3.S3Downloader.list(
s3_uri=last_model_explainability_monitor_execution_report_uri,
sagemaker_session=sagemaker_session,
)
)
print("Found Report Files:")
print("\n ".join(last_model_explainability_monitor_execution_report_files))
else:
last_model_explainability_monitor_execution = None
print(
"====STOP==== \n No completed executions to inspect further. Please wait till an execution completes or investigate previously reported failures."
)
Report URI: s3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-ClarifyModelMonitor-1674106123-4d04/monitor-output/DEMO-ll-adult-pred-model-monitor-1674106124-9611/monitoring-schedule-2023-01-19-05-41-04-758/2023/01/19/06
Found Report Files:
s3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-ClarifyModelMonitor-1674106123-4d04/monitor-output/DEMO-ll-adult-pred-model-monitor-1674106124-9611/monitoring-schedule-2023-01-19-05-41-04-758/2023/01/19/06/analysis.json
s3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-ClarifyModelMonitor-1674106123-4d04/monitor-output/DEMO-ll-adult-pred-model-monitor-1674106124-9611/monitoring-schedule-2023-01-19-05-41-04-758/2023/01/19/06/report.html
s3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-ClarifyModelMonitor-1674106123-4d04/monitor-output/DEMO-ll-adult-pred-model-monitor-1674106124-9611/monitoring-schedule-2023-01-19-05-41-04-758/2023/01/19/06/report.ipynb
s3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-ClarifyModelMonitor-1674106123-4d04/monitor-output/DEMO-ll-adult-pred-model-monitor-1674106124-9611/monitoring-schedule-2023-01-19-05-41-04-758/2023/01/19/06/report.pdf
If there are any violations compared to the baseline, they are listed here. See Feature Attribution Drift Violations for the schema of the file, and how violations are detected.
[37]:
violations = model_explainability_monitor.latest_monitoring_constraint_violations()
if violations is not None:
pprint.PrettyPrinter(indent=4).pprint(violations.body_dict)
By default, the analysis results are also published to CloudWatch; see CloudWatch Metrics for Feature Attribution Drift Analysis.
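As a hedged sketch of how those CloudWatch metrics could be listed with boto3 (the namespace string below is an assumption based on the documentation linked above, so verify it against that page before relying on it):
import boto3

cloudwatch_client = boto3.client("cloudwatch", region_name=region)
# The namespace is assumed from the feature attribution drift documentation; adjust if needed.
response = cloudwatch_client.list_metrics(Namespace="aws/sagemaker/Endpoints/explainability")
for metric in response["Metrics"][:10]:
    print(metric["MetricName"], metric["Dimensions"])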
Cleanup
The endpoint can keep running and capturing data, but if there is no plan to collect more data or use this endpoint further, it should be deleted to avoid incurring additional charges. Note that deleting the endpoint does not delete the data that was captured during the model invocations.
First stop the worker thread,
[38]:
invoke_endpoint_thread.terminate()
Then stop all monitors scheduled for the endpoint
[39]:
model_explainability_monitor.stop_monitoring_schedule()
wait_for_execution_to_finish(model_explainability_monitor)
model_explainability_monitor.delete_monitoring_schedule()
Stopping Monitoring Schedule with name: monitoring-schedule-2023-01-19-05-41-04-758
Waiting for execution to finish
Done! Execution Status: Completed
Deleting Monitoring Schedule with name: monitoring-schedule-2023-01-19-05-41-04-758
Finally, delete the endpoint
[40]:
sagemaker_session.delete_endpoint(endpoint_name=endpoint_name)
sagemaker_session.delete_model(model_name=model_name)
Notebook CI Test Results
This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.