Amazon SageMaker Model Monitor


This notebook’s CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook.

[CI badge for us-west-2]


This notebook shows how to:

  • Host a machine learning model in Amazon SageMaker and capture inference requests, results, and metadata

  • Analyze a training dataset to generate baseline constraints

  • Monitor a live endpoint for violations against constraints

Background

Amazon SageMaker provides every developer and data scientist with the ability to build, train, and deploy machine learning models quickly. Amazon SageMaker is a fully managed service that encompasses the entire machine learning workflow. You can label and prepare your data, choose an algorithm, train a model, and then tune and optimize it for deployment. You can deploy your models to production with Amazon SageMaker to make predictions at lower cost than was previously possible.

In addition, Amazon SageMaker enables you to capture the input, output and metadata for invocations of the models that you deploy. It also enables you to analyze the data and monitor its quality. In this notebook, you learn how Amazon SageMaker enables these capabilities.

Runtime

This notebook uses an hourly monitor, so it takes between 30 and 90 minutes to run.

Contents

  1. PART A: Capturing real-time inference data from Amazon SageMaker endpoints

  2. PART B: Model Monitor - Baselining and continuous monitoring

    1. Constraint suggestion with baseline/training dataset

    2. Analyze collected data for data quality issues

Setup

To get started, make sure you have these prerequisites completed:

  • Specify an AWS Region to host your model.

  • Ensure an IAM role ARN exists that gives Amazon SageMaker access to your data in Amazon Simple Storage Service (Amazon S3).

  • Use the default S3 bucket to store the data used to train your model, any additional model data, and the data captured from model invocations. For demonstration purposes, you are using the same bucket for these. In reality, you might want to separate them with different security policies.

[2]:
import os
import boto3
import re
import json
import sagemaker
from sagemaker import get_execution_role, session

sm_session = sagemaker.Session()
region = sm_session.boto_region_name

role = get_execution_role()
print("Role ARN: {}".format(role))

bucket = sm_session.default_bucket()
print("Demo Bucket: {}".format(bucket))
prefix = "sagemaker/DEMO-ModelMonitor"

data_capture_prefix = "{}/datacapture".format(prefix)
s3_capture_upload_path = "s3://{}/{}".format(bucket, data_capture_prefix)
reports_prefix = "{}/reports".format(prefix)
s3_report_path = "s3://{}/{}".format(bucket, reports_prefix)
code_prefix = "{}/code".format(prefix)
s3_code_preprocessor_uri = "s3://{}/{}/{}".format(bucket, code_prefix, "preprocessor.py")
s3_code_postprocessor_uri = "s3://{}/{}/{}".format(bucket, code_prefix, "postprocessor.py")

print("Capture path: {}".format(s3_capture_upload_path))
print("Report path: {}".format(s3_report_path))
print("Preproc Code path: {}".format(s3_code_preprocessor_uri))
print("Postproc Code path: {}".format(s3_code_postprocessor_uri))
Role ARN: arn:aws:iam::000000000000:role/ProdBuildSystemStack-ReleaseBuildRoleFB326D49-QK8LUA2UI1IC
Demo Bucket: sagemaker-us-west-2-000000000000
Capture path: s3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-ModelMonitor/datacapture
Report path: s3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-ModelMonitor/reports
Preproc Code path: s3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-ModelMonitor/code/preprocessor.py
Postproc Code path: s3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-ModelMonitor/code/postprocessor.py

PART A: Capturing real-time inference data from Amazon SageMaker endpoints

Create an endpoint to showcase the data capture capability in action.

Upload the pre-trained model to Amazon S3

This code uploads a pre-trained XGBoost model that is ready for you to deploy. This model was trained using the XGB Churn Prediction Notebook in SageMaker. You can also use your own pre-trained model in this step. If you already have a pre-trained model in Amazon S3, you can use it instead by specifying its s3_key.

[3]:
s3_key = os.path.join(prefix, "xgb-churn-prediction-model.tar.gz")
# Upload the packaged model artifact to the demo bucket
with open("model/xgb-churn-prediction-model.tar.gz", "rb") as model_file:
    boto3.Session().resource("s3").Bucket(bucket).Object(s3_key).upload_fileobj(model_file)

Deploy the model to Amazon SageMaker

Start by deploying the pre-trained churn prediction model. Here, you create the model object with the image URI and the model data.

[4]:
from time import gmtime, strftime
from sagemaker.model import Model
from sagemaker.image_uris import retrieve

model_name = "DEMO-xgb-churn-pred-model-monitor-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
model_url = "https://{}.s3-{}.amazonaws.com/{}/xgb-churn-prediction-model.tar.gz".format(
    bucket, region, prefix
)

image_uri = retrieve("xgboost", region, "0.90-1")

model = Model(image_uri=image_uri, model_data=model_url, role=role)

To enable data capture for monitoring the model data quality, you specify the capture option called DataCaptureConfig. You can capture the request payload, the response payload, or both with this configuration. The capture config applies to all variants. Go ahead with the deployment.
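
If you only need one side of the traffic, the same configuration can be narrowed. The sketch below (not used in the deployment that follows) assumes the capture_options parameter of DataCaptureConfig, which takes a list containing "REQUEST" and/or "RESPONSE":

from sagemaker.model_monitor import DataCaptureConfig

# Sketch: capture only request payloads, sampling half of the traffic.
# Assumption: capture_options accepts a list of "REQUEST" and/or "RESPONSE".
request_only_capture_config = DataCaptureConfig(
    enable_capture=True,
    sampling_percentage=50,
    destination_s3_uri=s3_capture_upload_path,
    capture_options=["REQUEST"],
)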

[5]:
from sagemaker.model_monitor import DataCaptureConfig

endpoint_name = "DEMO-xgb-churn-pred-model-monitor-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("EndpointName={}".format(endpoint_name))

data_capture_config = DataCaptureConfig(
    enable_capture=True, sampling_percentage=100, destination_s3_uri=s3_capture_upload_path
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m4.xlarge",
    endpoint_name=endpoint_name,
    data_capture_config=data_capture_config,
)
EndpointName=DEMO-xgb-churn-pred-model-monitor-2022-04-18-00-13-16
---------------!

Invoke the deployed model

You can now send data to this endpoint to get inferences in real time. Because you enabled data capture in the previous steps, the request and response payloads, along with some additional metadata, are saved in the Amazon Simple Storage Service (Amazon S3) location you specified in the DataCaptureConfig.

This step invokes the endpoint with the included sample data for about 3 minutes. Data is captured based on the sampling percentage specified, and the capture continues until the data capture option is turned off.

[6]:
from sagemaker.predictor import Predictor
from sagemaker.serializers import CSVSerializer
import time

predictor = Predictor(endpoint_name=endpoint_name, serializer=CSVSerializer())

# Get a subset of test data for a quick test
!head -180 test_data/test-dataset-input-cols.csv > test_data/test_sample.csv
print("Sending test traffic to the endpoint {}. \nPlease wait...".format(endpoint_name))

with open("test_data/test_sample.csv", "r") as f:
    for row in f:
        payload = row.rstrip("\n")
        response = predictor.predict(data=payload)
        time.sleep(1)

print("Done!")
Sending test traffic to the endpoint DEMO-xgb-churn-pred-model-monitor-2022-04-18-00-13-16.
Please wait...
Done!

View captured data

Now list the data capture files stored in Amazon S3. You should expect to see different files from different time periods organized based on the hour in which the invocation occurred. The format of the Amazon S3 path is:

s3://{destination-bucket-prefix}/{endpoint-name}/{variant-name}/yyyy/mm/dd/hh/filename.jsonl
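
For example, you can scope a listing to the current UTC hour by constructing that prefix yourself. The sketch below assumes the default variant name AllTraffic (used by this endpoint) and that capture files were written during the current hour:

from datetime import datetime, timezone

# Sketch: list only the capture files written in the current UTC hour.
hour_prefix = "{}/{}/AllTraffic/{}".format(
    data_capture_prefix,
    endpoint_name,
    datetime.now(timezone.utc).strftime("%Y/%m/%d/%H"),
)
current_hour_listing = boto3.Session().client("s3").list_objects(Bucket=bucket, Prefix=hour_prefix)
print([obj.get("Key") for obj in current_hour_listing.get("Contents", [])])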

[7]:
s3_client = boto3.Session().client("s3")
current_endpoint_capture_prefix = "{}/{}".format(data_capture_prefix, endpoint_name)
result = s3_client.list_objects(Bucket=bucket, Prefix=current_endpoint_capture_prefix)
capture_files = [capture_file.get("Key") for capture_file in result.get("Contents")]
print("Found Capture Files:")
print("\n ".join(capture_files))
Found Capture Files:
sagemaker/DEMO-ModelMonitor/datacapture/DEMO-xgb-churn-pred-model-monitor-2022-04-18-00-13-16/AllTraffic/2022/04/18/00/20-48-871-0b7bfd33-c8a7-4adf-a09a-98fd0af79974.jsonl
 sagemaker/DEMO-ModelMonitor/datacapture/DEMO-xgb-churn-pred-model-monitor-2022-04-18-00-13-16/AllTraffic/2022/04/18/00/21-48-903-028573f5-3215-431a-b3b8-a200be4d2d9a.jsonl

Next, view the contents of a single capture file. Here you should see all the data captured in an Amazon SageMaker-specific JSON Lines formatted file. Take a quick peek at the first few lines of the captured file.

[8]:
def get_obj_body(obj_key):
    return s3_client.get_object(Bucket=bucket, Key=obj_key).get("Body").read().decode("utf-8")


capture_file = get_obj_body(capture_files[-1])
print(capture_file[:2000])
{"captureData":{"endpointInput":{"observedContentType":"text/csv","mode":"INPUT","data":"132,10,182.9,54,292.4,68,142.3,116,11.5,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,1","encoding":"CSV"},"endpointOutput":{"observedContentType":"text/csv; charset=utf-8","mode":"OUTPUT","data":"0.036397725343704224","encoding":"CSV"}},"eventMetadata":{"eventId":"f04a8538-696c-4648-8064-8b680b2e3318","inferenceTime":"2022-04-18T00:21:48Z"},"eventVersion":"0"}
{"captureData":{"endpointInput":{"observedContentType":"text/csv","mode":"INPUT","data":"130,0,263.7,113,186.5,103,195.3,99,18.3,6,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,1,0","encoding":"CSV"},"endpointOutput":{"observedContentType":"text/csv; charset=utf-8","mode":"OUTPUT","data":"0.3791300654411316","encoding":"CSV"}},"eventMetadata":{"eventId":"4f301857-49c9-44d1-b275-ba0ab7767033","inferenceTime":"2022-04-18T00:21:49Z"},"eventVersion":"0"}
{"captureData":{"endpointInput":{"observedContentType":"text/csv","mode":"INPUT","data":"85,0,210.3,66,195.8,76,221.6,82,11.2,7,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,1,0","encoding":"CSV"},"endpointOutput":{"observedContentType":"text/csv; charset=utf-8","mode":"OUTPUT","data":"0.017852725461125374","encoding":"CSV"}},"eventMetadata":{"eventId":"9827355a-ac22-40f4-9748-539e03db6c2a","inferenceTime":"2022-04-18T00:21:50Z"},"eventVersion":"0"}
{"captureData":{"endpointInput":{"observedContentType":"text/csv","mode":"INPUT","data":"146,0,160.1,63,208.4,112,177.6,98,9.2,3,2,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,1,1,0","encoding":"CSV"},"endpointOutput":{"observedContentType":"text/csv; charset=utf-8","mode":"OUTPUT","data":"0.13224346935749054","encoding":"CSV"}},"eventMetadata":{"eventId":"7dd7b9

Finally, the contents of a single line are shown below as formatted JSON so that you can observe them a little better.

[9]:
import json

print(json.dumps(json.loads(capture_file.split("\n")[0]), indent=2))
{
  "captureData": {
    "endpointInput": {
      "observedContentType": "text/csv",
      "mode": "INPUT",
      "data": "132,10,182.9,54,292.4,68,142.3,116,11.5,4,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,1",
      "encoding": "CSV"
    },
    "endpointOutput": {
      "observedContentType": "text/csv; charset=utf-8",
      "mode": "OUTPUT",
      "data": "0.036397725343704224",
      "encoding": "CSV"
    }
  },
  "eventMetadata": {
    "eventId": "f04a8538-696c-4648-8064-8b680b2e3318",
    "inferenceTime": "2022-04-18T00:21:48Z"
  },
  "eventVersion": "0"
}

As you can see, each inference request is captured on one line of the jsonl file. The line contains both the input and the output merged together. In the example, you provided the ContentType as text/csv, which is reflected in the observedContentType value. The encoding used for the input and output payloads is also exposed in the capture format through the encoding value.
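
Because each line is a self-contained JSON record, you can also flatten the captured file into a DataFrame for easier inspection. This is a small sketch that assumes every record contains both an endpointInput and an endpointOutput section, as in the capture above:

import json
import pandas as pd

# Sketch: flatten the JSON Lines capture into one row per inference request.
records = [json.loads(line) for line in capture_file.strip().split("\n")]
flat_df = pd.DataFrame(
    [
        {
            "event_id": r["eventMetadata"]["eventId"],
            "inference_time": r["eventMetadata"]["inferenceTime"],
            "input": r["captureData"]["endpointInput"]["data"],
            "output": r["captureData"]["endpointOutput"]["data"],
        }
        for r in records
    ]
)
flat_df.head()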

To recap, you observed how you can enable capturing the input or output payloads to an endpoint with a new parameter. You have also observed what the captured format looks like in Amazon S3. Next, continue to explore how Amazon SageMaker helps with monitoring the data collected in Amazon S3.

PART B: Model Monitor - Baselining and continuous monitoring

In addition to collecting the data, Amazon SageMaker provides the capability for you to monitor and evaluate the data observed by the endpoints. For this:

  1. Create a baseline against which you compare the real-time traffic.

  2. Once a baseline is ready, set up a schedule to continuously evaluate the traffic and compare it against the baseline.

1. Constraint suggestion with baseline/training dataset

The training dataset with which you trained the model is usually a good baseline dataset. Note that the training dataset schema and the inference dataset schema should match exactly (i.e., the number and order of the features).
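
As a quick sanity check, you can compare the column counts of the two files included with this example. The sketch below only reads the first row of each; keep in mind that the training file carries a header row and the Churn label, while the inference payload file contains only the input features:

# Sketch: compare the number of columns in the baseline and inference files.
with open("test_data/training-dataset-with-header.csv", "r") as f:
    training_columns = f.readline().rstrip("\n").split(",")
with open("test_data/test-dataset-input-cols.csv", "r") as f:
    inference_columns = f.readline().rstrip("\n").split(",")

print("Training dataset columns (header row, label included): {}".format(len(training_columns)))
print("Inference payload columns: {}".format(len(inference_columns)))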

From the training dataset you can ask Amazon SageMaker to suggest a set of baseline constraints and generate descriptive statistics to explore the data. For this example, upload the training dataset that was used to train the pre-trained model included in this example. If you already have it in Amazon S3, you can directly point to it.

[10]:
# copy over the training dataset to Amazon S3 (if you already have it in Amazon S3, you could reuse it)
baseline_prefix = prefix + "/baselining"
baseline_data_prefix = baseline_prefix + "/data"
baseline_results_prefix = baseline_prefix + "/results"

baseline_data_uri = "s3://{}/{}".format(bucket, baseline_data_prefix)
baseline_results_uri = "s3://{}/{}".format(bucket, baseline_results_prefix)
print("Baseline data uri: {}".format(baseline_data_uri))
print("Baseline results uri: {}".format(baseline_results_uri))
Baseline data uri: s3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-ModelMonitor/baselining/data
Baseline results uri: s3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-ModelMonitor/baselining/results
[11]:
s3_key = os.path.join(baseline_prefix, "data", "training-dataset-with-header.csv")
# Upload the baseline/training dataset to the demo bucket
with open("test_data/training-dataset-with-header.csv", "rb") as training_data_file:
    boto3.Session().resource("s3").Bucket(bucket).Object(s3_key).upload_fileobj(training_data_file)

Create a baselining job with training dataset

Now that you have the training data ready in Amazon S3, start a job to suggest constraints. DefaultModelMonitor.suggest_baseline(..) starts a ProcessingJob using an Amazon SageMaker-provided Model Monitor container to generate the constraints.

[12]:
from sagemaker.model_monitor import DefaultModelMonitor
from sagemaker.model_monitor.dataset_format import DatasetFormat

my_default_monitor = DefaultModelMonitor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    volume_size_in_gb=20,
    max_runtime_in_seconds=3600,
)

my_default_monitor.suggest_baseline(
    baseline_dataset=baseline_data_uri + "/training-dataset-with-header.csv",
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri=baseline_results_uri,
    wait=True,
)

Job Name:  baseline-suggestion-job-2022-04-18-00-23-53-286
Inputs:  [{'InputName': 'baseline_dataset_input', 'AppManaged': False, 'S3Input': {'S3Uri': 's3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-ModelMonitor/baselining/data/training-dataset-with-header.csv', 'LocalPath': '/opt/ml/processing/input/baseline_dataset_input', 'S3DataType': 'S3Prefix', 'S3InputMode': 'File', 'S3DataDistributionType': 'FullyReplicated', 'S3CompressionType': 'None'}}]
Outputs:  [{'OutputName': 'monitoring_output', 'AppManaged': False, 'S3Output': {'S3Uri': 's3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-ModelMonitor/baselining/results', 'LocalPath': '/opt/ml/processing/output', 'S3UploadMode': 'EndOfJob'}}]
.........................2022-04-18 00:27:57,964 - matplotlib.font_manager - INFO - Generating new fontManager, this may take some time...
2022-04-18 00:27:58.544887: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
...
2022-04-18 00:29:19 INFO  FileUtil:29 - Write to file statistics.json at path /opt/ml/processing/output.
2022-04-18 00:29:19 INFO  YarnClientSchedulerBackend:54 - Interrupting monitor thread
2022-04-18 00:29:19 INFO  YarnClientSchedulerBackend:54 - Shutting down all executors
2022-04-18 00:29:19 INFO  YarnSchedulerBackend$YarnDriverEndpoint:54 - Asking each executor to shut down
2022-04-18 00:29:19 INFO  SchedulerExtensionServices:54 - Stopping SchedulerExtensionServices
(serviceOption=None,
 services=List(),
 started=false)
2022-04-18 00:29:19 INFO  YarnClientSchedulerBackend:54 - Stopped
2022-04-18 00:29:19 INFO  MapOutputTrackerMasterEndpoint:54 - MapOutputTrackerMasterEndpoint stopped!
2022-04-18 00:29:19 INFO  MemoryStore:54 - MemoryStore cleared
2022-04-18 00:29:19 INFO  BlockManager:54 - BlockManager stopped
2022-04-18 00:29:19 INFO  BlockManagerMaster:54 - BlockManagerMaster stopped
2022-04-18 00:29:19 INFO  OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:54 - OutputCommitCoordinator stopped!
2022-04-18 00:29:19 INFO  SparkContext:54 - Successfully stopped SparkContext
2022-04-18 00:29:19 INFO  Main:65 - Completed: Job completed successfully with no violations.
2022-04-18 00:29:19 INFO  Main:141 - Write to file /opt/ml/output/message.
2022-04-18 00:29:19 INFO  ShutdownHookManager:54 - Shutdown hook called
2022-04-18 00:29:19 INFO  ShutdownHookManager:54 - Deleting directory /tmp/spark-4d0fa5da-ce51-48bb-9fde-31f5cfd39c10
2022-04-18 00:29:19 INFO  ShutdownHookManager:54 - Deleting directory /tmp/spark-c4c4235c-d98b-42f7-8ebf-aaf29574aeeb
2022-04-18 00:29:20,048 - DefaultDataAnalyzer - INFO - Completed spark-submit with return code : 0
2022-04-18 00:29:20,048 - DefaultDataAnalyzer - INFO - Spark job completed.

[12]:
<sagemaker.processing.ProcessingJob at 0x7ff2badb7350>

Explore the generated constraints and statistics

[13]:
s3_client = boto3.Session().client("s3")
result = s3_client.list_objects(Bucket=bucket, Prefix=baseline_results_prefix)
report_files = [report_file.get("Key") for report_file in result.get("Contents")]
print("Found Files:")
print("\n ".join(report_files))
Found Files:
sagemaker/DEMO-ModelMonitor/baselining/results/constraints.json
 sagemaker/DEMO-ModelMonitor/baselining/results/statistics.json
[14]:
import pandas as pd

baseline_job = my_default_monitor.latest_baselining_job
schema_df = pd.json_normalize(baseline_job.baseline_statistics().body_dict["features"])
schema_df.head(10)
[14]:
name inferred_type numerical_statistics.common.num_present numerical_statistics.common.num_missing numerical_statistics.mean numerical_statistics.sum numerical_statistics.std_dev numerical_statistics.min numerical_statistics.max numerical_statistics.distribution.kll.buckets numerical_statistics.distribution.kll.sketch.parameters.c numerical_statistics.distribution.kll.sketch.parameters.k numerical_statistics.distribution.kll.sketch.data
0 Churn Integral 2333 0 0.139306 325.0 0.346265 0.0 1.0 [{'lower_bound': 0.0, 'upper_bound': 0.1, 'cou... 0.64 2048.0 [[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0,...
1 Account Length Integral 2333 0 101.276897 236279.0 39.552442 1.0 243.0 [{'lower_bound': 1.0, 'upper_bound': 25.2, 'co... 0.64 2048.0 [[119.0, 100.0, 111.0, 181.0, 95.0, 104.0, 70....
2 VMail Message Integral 2333 0 8.214316 19164.0 13.776908 0.0 51.0 [{'lower_bound': 0.0, 'upper_bound': 5.1, 'cou... 0.64 2048.0 [[19.0, 0.0, 0.0, 40.0, 36.0, 0.0, 0.0, 24.0, ...
3 Day Mins Fractional 2333 0 180.226489 420468.4 53.987179 0.0 350.8 [{'lower_bound': 0.0, 'upper_bound': 35.08, 'c... 0.64 2048.0 [[178.1, 160.3, 197.1, 105.2, 283.1, 113.6, 23...
4 Day Calls Integral 2333 0 100.259323 233905.0 20.165008 0.0 165.0 [{'lower_bound': 0.0, 'upper_bound': 16.5, 'co... 0.64 2048.0 [[110.0, 138.0, 117.0, 61.0, 112.0, 87.0, 122....
5 Eve Mins Fractional 2333 0 200.050107 466716.9 50.015928 31.2 361.8 [{'lower_bound': 31.2, 'upper_bound': 64.26, '... 0.64 2048.0 [[212.8, 221.3, 227.8, 341.3, 286.2, 158.6, 29...
6 Eve Calls Integral 2333 0 99.573939 232306.0 19.675578 12.0 170.0 [{'lower_bound': 12.0, 'upper_bound': 27.8, 'c... 0.64 2048.0 [[100.0, 92.0, 128.0, 79.0, 86.0, 98.0, 112.0,...
7 Night Mins Fractional 2333 0 201.388598 469839.6 50.627961 23.2 395.0 [{'lower_bound': 23.2, 'upper_bound': 60.37999... 0.64 2048.0 [[226.3, 150.4, 214.0, 165.7, 261.7, 187.7, 20...
8 Night Calls Integral 2333 0 100.227175 233830.0 19.282029 42.0 175.0 [{'lower_bound': 42.0, 'upper_bound': 55.3, 'c... 0.64 2048.0 [[123.0, 120.0, 101.0, 97.0, 129.0, 87.0, 112....
9 Intl Mins Fractional 2333 0 10.253065 23920.4 2.778766 0.0 18.4 [{'lower_bound': 0.0, 'upper_bound': 1.8399999... 0.64 2048.0 [[10.0, 11.2, 9.3, 6.3, 11.3, 10.5, 0.0, 9.7, ...
[15]:
constraints_df = pd.json_normalize(
    baseline_job.suggested_constraints().body_dict["features"]
)
constraints_df.head(10)

[15]:
name inferred_type completeness num_constraints.is_non_negative
0 Churn Integral 1.0 True
1 Account Length Integral 1.0 True
2 VMail Message Integral 1.0 True
3 Day Mins Fractional 1.0 True
4 Day Calls Integral 1.0 True
5 Eve Mins Fractional 1.0 True
6 Eve Calls Integral 1.0 True
7 Night Mins Fractional 1.0 True
8 Night Calls Integral 1.0 True
9 Intl Mins Fractional 1.0 True

2. Analyze collected data for data quality issues

Once you have collected the data above, analyze and monitor it with Monitoring Schedules.

Create a schedule

[16]:
# Upload some test scripts to the S3 bucket for pre- and post-processing
s3_bucket = boto3.Session().resource("s3").Bucket(bucket)
s3_bucket.Object(code_prefix + "/preprocessor.py").upload_file("preprocessor.py")
s3_bucket.Object(code_prefix + "/postprocessor.py").upload_file("postprocessor.py")

You can create a model monitoring schedule for the endpoint created earlier. Use the baseline resources (constraints and statistics) to compare against the real-time traffic.

[17]:
from sagemaker.model_monitor import CronExpressionGenerator

mon_schedule_name = "DEMO-xgb-churn-pred-model-monitor-schedule-" + strftime(
    "%Y-%m-%d-%H-%M-%S", gmtime()
)
my_default_monitor.create_monitoring_schedule(
    monitor_schedule_name=mon_schedule_name,
    endpoint_input=predictor.endpoint_name,
    # record_preprocessor_script=pre_processor_script,
    post_analytics_processor_script=s3_code_postprocessor_uri,
    output_s3_uri=s3_report_path,
    statistics=my_default_monitor.baseline_statistics(),
    constraints=my_default_monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
    enable_cloudwatch_metrics=True,
)

Start generating some artificial traffic

The cell below starts a thread to send some traffic to the endpoint. Note that you need to stop the kernel to terminate this thread. If there is no traffic, the monitoring jobs are marked as Failed since there is no data to process.

[18]:
from threading import Thread
from time import sleep

endpoint_name = predictor.endpoint_name
runtime_client = sm_session.sagemaker_runtime_client

# (repeating the code from above for convenience, so this section can be run independently)
def invoke_endpoint(ep_name, file_name, runtime_client):
    with open(file_name, "r") as f:
        for row in f:
            payload = row.rstrip("\n")
            response = runtime_client.invoke_endpoint(
                EndpointName=ep_name, ContentType="text/csv", Body=payload
            )
            response["Body"].read()
            time.sleep(1)


def invoke_endpoint_forever():
    while True:
        try:
            invoke_endpoint(endpoint_name, "test_data/test-dataset-input-cols.csv", runtime_client)
        except runtime_client.exceptions.ValidationError:
            pass


thread = Thread(target=invoke_endpoint_forever)
thread.start()

# Note that you need to stop the kernel to stop the invocations
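
If you would rather stop the traffic without restarting the kernel, a stop flag can be threaded through the loop. This is an optional sketch using threading.Event; it reuses the endpoint_name, runtime_client, and test data file from the cell above:

import time
from threading import Event, Thread

stop_traffic = Event()


def invoke_endpoint_until_stopped(ep_name, file_name):
    # Sketch: keep replaying the test file until the stop flag is set.
    while not stop_traffic.is_set():
        with open(file_name, "r") as f:
            for row in f:
                if stop_traffic.is_set():
                    return
                response = runtime_client.invoke_endpoint(
                    EndpointName=ep_name, ContentType="text/csv", Body=row.rstrip("\n")
                )
                response["Body"].read()
                time.sleep(1)


# traffic_thread = Thread(
#     target=invoke_endpoint_until_stopped,
#     args=(endpoint_name, "test_data/test-dataset-input-cols.csv"),
# )
# traffic_thread.start()
# Later, from another cell: stop_traffic.set()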

Describe and inspect the schedule

Once you describe the schedule, observe that the MonitoringScheduleStatus changes to Scheduled.

[19]:
desc_schedule_result = my_default_monitor.describe_schedule()
print("Schedule status: {}".format(desc_schedule_result["MonitoringScheduleStatus"]))
Schedule status: Pending

List executions

The schedule starts jobs at the previously specified intervals. Here, you list the latest five executions. Note that if you are kicking this off after creating the hourly schedule, you might find the executions empty. You might have to wait until you cross the hour boundary (in UTC) to see executions kick off. The code below has the logic for waiting.

Note: Even for an hourly schedule, Amazon SageMaker has a buffer period of 20 minutes to schedule your execution. You might see your execution start anywhere from zero to ~20 minutes after the hour boundary. This is expected and is done for load balancing in the backend.

[20]:
mon_executions = my_default_monitor.list_executions()
print(
    "We created an hourly schedule above that begins executions ON the hour (plus a 0-20 min buffer).\nWe will have to wait until we hit the hour..."
)

while len(mon_executions) == 0:
    print("Waiting for the first execution to happen...")
    time.sleep(60)
    mon_executions = my_default_monitor.list_executions()
No executions found for schedule. monitoring_schedule_name: DEMO-xgb-churn-pred-model-monitor-schedule-2022-04-18-00-29-48
We created an hourly schedule above that begins executions ON the hour (plus a 0-20 min buffer).
We will have to wait until we hit the hour...
Waiting for the first execution to happen...
No executions found for schedule. monitoring_schedule_name: DEMO-xgb-churn-pred-model-monitor-schedule-2022-04-18-00-29-48
...
Waiting for the first execution to happen...

Inspect a specific execution (latest execution)

In the previous cell, you picked up the latest completed or failed scheduled execution. Here are the possible terminal states and what each of them means:

  • Completed - The monitoring execution completed and no issues were found in the violations report.

  • CompletedWithViolations - The execution completed, but constraint violations were detected.

  • Failed - The monitoring execution failed, possibly due to a client error (for example, incorrect role permissions) or infrastructure issues. Further examination of FailureReason and ExitMessage is necessary to identify what exactly happened.

  • Stopped - The job exceeded the max runtime or was manually stopped.
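
If you want to reduce that description to a single human-readable status, a small helper like the sketch below can interpret the ProcessingJobStatus and ExitMessage fields returned by describe() (the summarize_execution name is purely illustrative):

# Sketch: summarize a monitoring execution from its processing job description.
def summarize_execution(execution_description):
    status = execution_description["ProcessingJobStatus"]
    exit_message = execution_description.get("ExitMessage", "")
    if status == "Completed" and "CompletedWithViolations" in exit_message:
        return "Completed, but constraint violations were detected: {}".format(exit_message)
    if status == "Completed":
        return "Completed with no violations."
    if status == "Failed":
        return "Failed: {}".format(execution_description.get("FailureReason", exit_message))
    if status == "Stopped":
        return "Stopped: the job exceeded its max runtime or was stopped manually."
    return "Still in progress: {}".format(status)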

[21]:
latest_execution = mon_executions[-1]  # Latest execution's index is -1, second to last is -2, etc
time.sleep(60)
latest_execution.wait(logs=False)

print("Latest execution status: {}".format(latest_execution.describe()["ProcessingJobStatus"]))
print("Latest execution result: {}".format(latest_execution.describe()["ExitMessage"]))

latest_job = latest_execution.describe()
if latest_job["ProcessingJobStatus"] != "Completed":
    print(
        "====STOP==== \n No completed executions to inspect further. Please wait till an execution completes or investigate previously reported failures."
    )
.............................................!Latest execution status: Completed
Latest execution result: CompletedWithViolations: Job completed successfully with 60 violations.
[22]:
report_uri = latest_execution.output.destination
print("Report Uri: {}".format(report_uri))
Report Uri: s3://sagemaker-us-west-2-000000000000/sagemaker/DEMO-ModelMonitor/reports/DEMO-xgb-churn-pred-model-monitor-2022-04-18-00-13-16/DEMO-xgb-churn-pred-model-monitor-schedule-2022-04-18-00-29-48/2022/04/18/01

List the generated reports

[23]:
from urllib.parse import urlparse

s3uri = urlparse(report_uri)
report_bucket = s3uri.netloc
report_key = s3uri.path.lstrip("/")
print("Report bucket: {}".format(report_bucket))
print("Report key: {}".format(report_key))

s3_client = boto3.Session().client("s3")
result = s3_client.list_objects(Bucket=report_bucket, Prefix=report_key)
report_files = [report_file.get("Key") for report_file in result.get("Contents")]
print("Found Report Files:")
print("\n ".join(report_files))
Report bucket: sagemaker-us-west-2-000000000000
Report key: sagemaker/DEMO-ModelMonitor/reports/DEMO-xgb-churn-pred-model-monitor-2022-04-18-00-13-16/DEMO-xgb-churn-pred-model-monitor-schedule-2022-04-18-00-29-48/2022/04/18/01
Found Report Files:
sagemaker/DEMO-ModelMonitor/reports/DEMO-xgb-churn-pred-model-monitor-2022-04-18-00-13-16/DEMO-xgb-churn-pred-model-monitor-schedule-2022-04-18-00-29-48/2022/04/18/01/constraint_violations.json
 sagemaker/DEMO-ModelMonitor/reports/DEMO-xgb-churn-pred-model-monitor-2022-04-18-00-13-16/DEMO-xgb-churn-pred-model-monitor-schedule-2022-04-18-00-29-48/2022/04/18/01/constraints.json
 sagemaker/DEMO-ModelMonitor/reports/DEMO-xgb-churn-pred-model-monitor-2022-04-18-00-13-16/DEMO-xgb-churn-pred-model-monitor-schedule-2022-04-18-00-29-48/2022/04/18/01/statistics.json

Violations report

Any violations compared to the baseline are listed below.

[24]:
violations = my_default_monitor.latest_monitoring_constraint_violations()
pd.set_option("display.max_colwidth", None)
violations_df = pd.json_normalize(violations.body_dict["violations"])
violations_df.head(10)
[24]:
feature_name constraint_check_type description
0 Area Code_408 data_type_check Data type match requirement is not met. Expected data type: Integral, Expected match: 100.0%. Observed: Only 99.64592817400101% of data is Integral.
1 VMail Plan_no data_type_check Data type match requirement is not met. Expected data type: Integral, Expected match: 100.0%. Observed: Only 99.64592817400101% of data is Integral.
2 State_NV data_type_check Data type match requirement is not met. Expected data type: Integral, Expected match: 100.0%. Observed: Only 99.64592817400101% of data is Integral.
3 State_NC data_type_check Data type match requirement is not met. Expected data type: Integral, Expected match: 100.0%. Observed: Only 99.64592817400101% of data is Integral.
4 State_SC data_type_check Data type match requirement is not met. Expected data type: Integral, Expected match: 100.0%. Observed: Only 99.64592817400101% of data is Integral.
5 State_UT data_type_check Data type match requirement is not met. Expected data type: Integral, Expected match: 100.0%. Observed: Only 99.64592817400101% of data is Integral.
6 State_LA data_type_check Data type match requirement is not met. Expected data type: Integral, Expected match: 100.0%. Observed: Only 99.64592817400101% of data is Integral.
7 State_CA data_type_check Data type match requirement is not met. Expected data type: Integral, Expected match: 100.0%. Observed: Only 99.64592817400101% of data is Integral.
8 State_RI data_type_check Data type match requirement is not met. Expected data type: Integral, Expected match: 100.0%. Observed: Only 99.64592817400101% of data is Integral.
9 Churn data_type_check Data type match requirement is not met. Expected data type: Integral, Expected match: 100.0%. Observed: Only 0.0% of data is Integral.

Other commands

You can also start and stop the monitoring schedule.

[25]:
# my_default_monitor.stop_monitoring_schedule()
# my_default_monitor.start_monitoring_schedule()

Delete resources

You can keep your endpoint running to continue capturing data. If you do not plan to collect more data or use this endpoint further, delete the endpoint to avoid incurring additional charges. Note that deleting your endpoint does not delete the data that was captured during the model invocations. That data persists in Amazon S3 until you delete it yourself.

You need to delete the schedule before deleting the model and endpoint.

[26]:
my_default_monitor.stop_monitoring_schedule()
my_default_monitor.delete_monitoring_schedule()
time.sleep(60)  # Wait for the deletion

Stopping Monitoring Schedule with name: DEMO-xgb-churn-pred-model-monitor-schedule-2022-04-18-00-29-48

Deleting Monitoring Schedule with name: DEMO-xgb-churn-pred-model-monitor-schedule-2022-04-18-00-29-48
[27]:
predictor.delete_model()
[28]:
predictor.delete_endpoint()
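
As noted above, deleting the endpoint does not remove the data captured in Amazon S3. If you also want to remove the captured data, reports, and other artifacts this demo wrote, a cleanup along the lines of the sketch below can be used; the delete call is left commented out because it permanently removes every object under the demo prefix:

# Sketch: optional cleanup of everything this demo wrote under the S3 prefix.
demo_bucket = boto3.Session().resource("s3").Bucket(bucket)
# Uncomment to permanently delete the captured data, baselines, and reports:
# demo_bucket.objects.filter(Prefix=prefix).delete()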

Notebook CI Test Results

This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.

[CI badges for us-east-1, us-east-2, us-west-1, ca-central-1, sa-east-1, eu-west-1, eu-west-2, eu-west-3, eu-central-1, eu-north-1, ap-southeast-1, ap-southeast-2, ap-northeast-1, ap-northeast-2, and ap-south-1]