SageMaker Inference Recommender for a scikit-learn model
This notebook’s CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook.
1. Introduction
SageMaker Inference Recommender is a new capability of SageMaker that reduces the time required to get machine learning (ML) models in production by automating performance benchmarking and load testing models across SageMaker ML instances. You can use Inference Recommender to deploy your model to a real-time inference endpoint that delivers the best performance at the lowest cost.
Get started with Inference Recommender on SageMaker in minutes: select an instance and get an optimized endpoint configuration within hours, eliminating weeks of manual testing and tuning time.
To begin, let’s update the required packages, i.e. the SageMaker Python SDK, boto3, botocore and awscli.
[ ]:
!pip install -U sagemaker
[ ]:
import sys
!{sys.executable} -m pip install sagemaker botocore boto3 awscli --upgrade
!pip install --upgrade pip awscli botocore boto3 --quiet
2. Download the Model & payload
In this example, we use a pre-trained scikit-learn model, trained on the California Housing dataset available in scikit-learn: https://scikit-learn.org/stable/modules/generated/sklearn.datasets.fetch_california_housing.html. The California Housing dataset was originally published in:
Pace, R. Kelley, and Ronald Barry. “Sparse spatial auto-regressions.” Statistics & Probability Letters 33.3 (1997): 291-297.
[ ]:
from sagemaker import get_execution_role, Session, image_uris
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
import pandas as pd
import boto3
import datetime
import time
import os
region = boto3.Session().region_name
role = get_execution_role()
sagemaker_session = Session()
print(region)
[ ]:
export_dir = "./model/"
if not os.path.exists(export_dir):
    os.makedirs(export_dir)
    print("Directory ", export_dir, " Created ")
else:
    print("Directory ", export_dir, " already exists")
model_archive_name = "sk-model.tar.gz"
sourcedir_archive_name = "sourcedir.tar.gz"
[ ]:
!aws s3 cp s3://aws-ml-blog/artifacts/scikit_learn_bring_your_own_model/model.joblib {export_dir}
Tar the model and code
[ ]:
!cd model && tar -cvpzf ../{model_archive_name} *
[ ]:
!cd code && tar -cvpzf ../{sourcedir_archive_name} *
Download the payload
[ ]:
payload_location = "./sample-payload/"
if not os.path.exists(payload_location):
    os.makedirs(payload_location)
    print("Directory ", payload_location, " Created ")
else:
    print("Directory ", payload_location, " already exists")
payload_archive_name = "sk_payload.tar.gz"
[ ]:
data = fetch_california_housing()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.25, random_state=42
)
# we don't train a model, so we will need only the testing data
testX = pd.DataFrame(X_test, columns=data.feature_names)
# Save testing data to CSV
testX[data.feature_names].head(10).to_csv(
    os.path.join(payload_location, "test_data.csv"), header=False, index=False
)
Tar the payload
[ ]:
!cd ./sample-payload/ && tar czvf ../{payload_archive_name} *
Upload to S3
We now have a model archive ready. We need to upload it to S3 before we can use it with Inference Recommender, so we use the SageMaker Python SDK to handle the upload.
We also need to create an archive containing the individual files that Inference Recommender can send to your SageMaker endpoints. Inference Recommender will randomly sample files from this archive, so make sure it contains a similar distribution of payloads to what you’d expect in production. Note that your inference code must be able to read the file formats in the sample payload.
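For instance, to give Inference Recommender more than one file to sample from, you could write several CSV chunks into the payload directory and re-create the archive. The cell below is a minimal sketch reusing testX from above; the file names and chunk sizes are illustrative.
[ ]:
# Optional sketch: write several CSV payload files and rebuild the archive so
# Inference Recommender has multiple files to sample from. File names are
# illustrative; `testX` and `payload_location` come from the cells above.
for i in range(5):
    chunk = testX[data.feature_names].iloc[i * 10 : (i + 1) * 10]
    chunk.to_csv(
        os.path.join(payload_location, f"test_data_{i}.csv"),
        header=False,
        index=False,
    )

!cd ./sample-payload/ && tar czvf ../{payload_archive_name} *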
[ ]:
%%time
import os
import boto3
import re
import copy
import time
from time import gmtime, strftime
import sagemaker
from sagemaker import get_execution_role
# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket and prefix
bucket = sagemaker.Session().default_bucket()
prefix = "sagemaker/scikit-learn-inference-recommender"
sample_payload_url = sagemaker.Session().upload_data(
    payload_archive_name, bucket=bucket, key_prefix=prefix + "/inference"
)
sourcedir_url = sagemaker.Session().upload_data(
    sourcedir_archive_name, bucket=bucket, key_prefix=prefix + "/california_housing/sourcedir"
)
model_url = sagemaker.Session().upload_data(
    model_archive_name, bucket=bucket, key_prefix=prefix + "/california_housing/model"
)
print(sample_payload_url)
print(sourcedir_url)
print(model_url)
3. Machine Learning model details
Inference Recommender uses information about your ML model to recommend the best instance types and endpoint configurations for deployment. You can provide as much or as little information as you’d like and Inference Recommender will use that to provide recommendations.
Example ML Domains: COMPUTER_VISION, NATURAL_LANGUAGE_PROCESSING, MACHINE_LEARNING
Example ML Tasks: CLASSIFICATION, REGRESSION, OBJECT_DETECTION, OTHER
Note: Select the task that is the closest match to your model. Choose OTHER if none apply.
Example Model names: resnet50, yolov4, xgboost, etc.
Use the list_model_metadata API to fetch the list of available models. This will help you pick the closest model for a better recommendation.
[ ]:
import boto3
import pandas as pd
client = boto3.client("sagemaker", region)
list_model_metadata_response = client.list_model_metadata()
domains = []
frameworks = []
framework_versions = []
tasks = []
models = []
for model_summary in list_model_metadata_response["ModelMetadataSummaries"]:
    domains.append(model_summary["Domain"])
    tasks.append(model_summary["Task"])
    models.append(model_summary["Model"])
    frameworks.append(model_summary["Framework"])
    framework_versions.append(model_summary["FrameworkVersion"])
data = {
    "Domain": domains,
    "Task": tasks,
    "Framework": frameworks,
    "FrameworkVersion": framework_versions,
    "Model": models,
}
df = pd.DataFrame(data)
pd.set_option("display.max_rows", None)
pd.set_option("display.max_columns", None)
pd.set_option("display.width", 1000)
pd.set_option("display.colheader_justify", "center")
pd.set_option("display.precision", 3)
display(df.sort_values(by=["Domain", "Task", "Framework", "FrameworkVersion"]))
In this example, as we are predicting California Housing prices with scikit-learn, we select MACHINE_LEARNING as the Domain, REGRESSION as the Task, SAGEMAKER-SCIKIT-LEARN as the Framework, and sagemaker-scikit-learn as the Model.
[ ]:
ml_domain = "MACHINE_LEARNING"
ml_task = "REGRESSION"
ml_framework = "SAGEMAKER-SCIKIT-LEARN"
framework_version = "1.2-1"
model = "sagemaker-scikit-learn"
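As an optional sanity check, you can filter the metadata listing fetched above to confirm the chosen combination actually appears there:
[ ]:
# Optional sketch: confirm the chosen domain/task/model combination exists in
# the metadata listing (`df` from the previous cell).
display(df[(df["Domain"] == ml_domain) & (df["Task"] == ml_task) & (df["Model"] == model)])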
Container image URL
If you don’t have an inference container image, you can use Prebuilt Amazon SageMaker Docker Images for Scikit-learn provided by AWS to serve your ML model.
[ ]:
from sagemaker import image_uris
# ML model details
model_name = "scikit-learn-california-housing" + datetime.datetime.now().strftime(
    "%Y-%m-%d-%H-%M-%S"
)
sagemaker_program = "inference.py"
inference_image = image_uris.retrieve(
    framework="sklearn",
    region=region,
    version=framework_version,
    py_version="py3",
    instance_type="ml.m5.large",
)
print(inference_image)
4. Register Model Version/Package
Inference Recommender expects the model to be packaged in the model registry. Here, we create a model package group and a model package version. The model package version, which takes the container, model URL, etc., also allows you to pass additional information about the model, like Domain, Task, Framework, FrameworkVersion, NearestModelName, and SamplePayloadUrl.
You specify the list of instance types used to generate inferences in real time in the SupportedRealtimeInferenceInstanceTypes parameter. This list of instance types is key for the Inference Recommender feature. For inference on tabular data, e.g. with scikit-learn or XGBoost models, you’ll probably want to use standard or compute-optimized instances. For deep learning models, you will probably want to use accelerated computing (GPU) instances.
As the SamplePayloadUrl and SupportedContentTypes parameters are essential for benchmarking the endpoint, we also highly recommend that you specify Domain, Task, Framework, FrameworkVersion, and NearestModelName for a better inference recommendation.
[ ]:
import boto3
client = boto3.client("sagemaker", region)
model_package_group_name = "scikit-learn-california-housing-" + str(round(time.time()))
print(model_package_group_name)
model_package_group_response = client.create_model_package_group(
    ModelPackageGroupName=str(model_package_group_name),
    ModelPackageGroupDescription="My sample California housing model package group",
)
print(model_package_group_response)
[ ]:
model_package_version_response = client.create_model_package(
    ModelPackageGroupName=str(model_package_group_name),
    ModelPackageDescription="scikit-learn Inference Recommender Demo",
    Domain=ml_domain,
    Task=ml_task,
    SamplePayloadUrl=sample_payload_url,
    InferenceSpecification={
        "Containers": [
            {
                "ContainerHostname": "scikit-learn",
                "Image": inference_image,
                "ModelDataUrl": model_url,
                "Framework": ml_framework,
                "NearestModelName": model,
                "Environment": {
                    "SAGEMAKER_CONTAINER_LOG_LEVEL": "20",
                    "SAGEMAKER_PROGRAM": sagemaker_program,
                    "SAGEMAKER_REGION": region,
                    "SAGEMAKER_SUBMIT_DIRECTORY": sourcedir_url,
                },
            },
        ],
        "SupportedRealtimeInferenceInstanceTypes": [
            "ml.c5.large",
            "ml.c5.xlarge",
            "ml.c5.2xlarge",
            "ml.m5.xlarge",
            "ml.m5.2xlarge",
        ],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["text/csv"],
    },
)
print(model_package_version_response)
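For convenience, capture the new model package version’s ARN from the response; later cells (including the ContainerConfig example below) reference it:
[ ]:
# The ARN of the newly registered model package version.
model_package_arn = model_package_version_response["ModelPackageArn"]
print(model_package_arn)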
Alternative Option: ContainerConfig
If you are missing mandatory fields needed to create an Inference Recommender job in your model package version, like so (this create_model_package call is missing Domain, Task, and SamplePayloadUrl):
client.create_model_package(
    ModelPackageGroupName=str(model_package_group_name),
    ModelPackageDescription="scikit-learn Inference Recommender Demo",
    InferenceSpecification={
        "Containers": [
            {
                "ContainerHostname": "scikit-learn",
                "Image": inference_image,
                "ModelDataUrl": model_url,
                "Framework": ml_framework,
                "NearestModelName": model,
                "Environment": {
                    "SAGEMAKER_CONTAINER_LOG_LEVEL": "20",
                    "SAGEMAKER_PROGRAM": sagemaker_program,
                    "SAGEMAKER_REGION": region,
                    "SAGEMAKER_SUBMIT_DIRECTORY": sourcedir_url,
                },
            },
        ],
        "SupportedRealtimeInferenceInstanceTypes": [
            "ml.c5.large",
            "ml.c5.xlarge",
            "ml.c5.2xlarge",
            "ml.m5.xlarge",
            "ml.m5.2xlarge",
        ],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["text/csv"],
    },
)
You may define the fields Domain, Task, and SamplePayloadUrl in the optional ContainerConfig field like so:
payload_config = {
    "SamplePayloadUrl": sample_payload_url,
}

container_config = {
    "Domain": ml_domain,
    "Task": ml_task,
    "PayloadConfig": payload_config,
}
And then provide it directly to the create_inference_recommendations_job() API like so:
default_response = client.create_inference_recommendations_job(
    JobName=str(default_job),
    JobDescription="",
    JobType="Default",
    RoleArn=role,
    InputConfig={
        "ModelPackageVersionArn": model_package_arn,
        "ContainerConfig": container_config,
    },
)
For more information on what else can be provided via ContainerConfig, please refer to the CreateInferenceRecommendationsJob API documentation.
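ContainerConfig can also carry framework hints alongside the payload configuration. The sketch below shows a plausible fuller configuration built from the variables defined earlier; treat the exact field set as an assumption and check the API documentation for the authoritative list:
# Hedged sketch of a fuller ContainerConfig; check the API documentation for
# the authoritative field list.
container_config = {
    "Domain": ml_domain,
    "Task": ml_task,
    "Framework": ml_framework,
    "FrameworkVersion": framework_version,
    "NearestModelName": model,
    "PayloadConfig": {
        "SamplePayloadUrl": sample_payload_url,
        "SupportedContentTypes": ["text/csv"],
    },
}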
5. Create a SageMaker Inference Recommender Default Job
Now with your model in the Model Registry, you can kick off a ‘Default’ job to get instance recommendations. This only requires your ModelPackageVersionArn and comes back with recommendations within an hour.
The output is a list of instance type recommendations with associated environment variables, cost, throughput and latency metrics.
[ ]:
import boto3
from sagemaker import get_execution_role
client = boto3.client("sagemaker", region)
role = get_execution_role()
default_job = "scikit-learn-basic-recommender-job-" + datetime.datetime.now().strftime(
    "%Y-%m-%d-%H-%M-%S"
)
default_response = client.create_inference_recommendations_job(
    JobName=str(default_job),
    JobDescription="scikit-learn Inference Basic Recommender Job",
    JobType="Default",
    RoleArn=role,
    InputConfig={"ModelPackageVersionArn": model_package_version_response["ModelPackageArn"]},
)
print(default_response)
6. Instance Recommendation Results
The Inference Recommender job provides multiple endpoint recommendations in its result. Each recommendation includes InstanceType, InitialInstanceCount, and EnvironmentParameters, which includes tuned parameters for better performance. We also include benchmarking results like MaxInvocations, ModelLatency, CostPerHour and CostPerInference for deeper analysis. The information provided will help you narrow down to a specific endpoint configuration that suits your use case.
For example, if your priority is overall cost per request, focus on the CostPerInference metric; if it is a balance between latency and throughput, look at the ModelLatency and MaxInvocations metrics.
Running the Inference Recommender job will take ~35 minutes.
[ ]:
%%time
import boto3
import pprint
import pandas as pd
client = boto3.client("sagemaker", region)
ended = False
while not ended:
    inference_recommender_job = client.describe_inference_recommendations_job(
        JobName=str(default_job)
    )
    if inference_recommender_job["Status"] in ["COMPLETED", "STOPPED", "FAILED"]:
        ended = True
    else:
        print("Inference recommender job in progress")
        time.sleep(300)

if inference_recommender_job["Status"] == "FAILED":
    print("Inference recommender job failed")
    print("Failed Reason: {}".format(inference_recommender_job["FailedReason"]))
else:
    print("Inference recommender job completed")
Detailing out the result
[ ]:
data = [
    {**x["EndpointConfiguration"], **x["ModelConfiguration"], **x["Metrics"]}
    for x in inference_recommender_job["InferenceRecommendations"]
]
df = pd.DataFrame(data)
df.drop(columns=["VariantName"], inplace=True, errors="ignore")
pd.set_option("display.max_colwidth", 400)
By MaxInvocations - The maximum number of requests per minute expected for the endpoint.
[ ]:
df.sort_values(by=["MaxInvocations"], ascending=False).head()
By ModelLatency - The interval of time taken by a model to respond as viewed from SageMaker. The interval includes the local communication time taken to send the request and to fetch the response from the container of a model and the time taken to complete the inference in the container.
[ ]:
df.sort_values(by=["ModelLatency"]).head()
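You can also combine the two views: filter to recommendations that fit a latency budget, then rank by cost. A minimal sketch (the budget below is a placeholder in the same units as the ModelLatency column):
[ ]:
# Sketch: cheapest recommendation within a latency budget. The budget is a
# placeholder value in the same units as the ModelLatency column.
latency_budget = df["ModelLatency"].median()
within_budget = df[df["ModelLatency"] <= latency_budget]
within_budget.sort_values(by=["CostPerInference"]).head()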
Optional: ListInferenceRecommendationsJobSteps
To see the list of subtasks for an Inference Recommender job, simply provide the JobName to the ListInferenceRecommendationsJobSteps API.
For more information on the API, please refer to the ListInferenceRecommendationsJobSteps documentation.
[ ]:
list_job_steps_response = client.list_inference_recommendations_job_steps(JobName=str(default_job))
print(list_job_steps_response)
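A small sketch to summarize the response, assuming the documented Steps field in the ListInferenceRecommendationsJobSteps response:
[ ]:
# Sketch: compact per-step summary; assumes the documented "Steps" field.
for step in list_job_steps_response.get("Steps", []):
    print(step.get("StepType"), "-", step.get("Status"))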
7. Custom Load Test
With an ‘Advanced’ job, you can provide your production requirements, select instance types, tune environment variables and perform more extensive load tests. This typically takes 2 hours depending on your traffic pattern and number of instance types.
The output is a list of endpoint configuration recommendations (instance type, instance count, environment variables) with associated cost, throughput and latency metrics.
In the example below, we aim to limit the latency requirement to 50 ms. The goal is to find the best performance, in the sense of the maximum number of requests per minute expected for the endpoint, for an ml.m5.2xlarge instance. We set DurationInSeconds, the length of the traffic phase, to 120, and the maximum duration of the job, JobDurationInSeconds, to 7200.
[ ]:
import boto3
client = boto3.client("sagemaker", region)
role = get_execution_role()
advanced_job = "scikit-learn-advanced-recommender-job-" + datetime.datetime.now().strftime(
    "%Y-%m-%d-%H-%M-%S"
)
advanced_response = client.create_inference_recommendations_job(
    JobName=advanced_job,
    JobDescription="scikit-learn Inference Advanced Recommender Job",
    JobType="Advanced",
    RoleArn=role,
    InputConfig={
        "ModelPackageVersionArn": model_package_version_response["ModelPackageArn"],
        "JobDurationInSeconds": 7200,
        "EndpointConfigurations": [{"InstanceType": "ml.m5.2xlarge"}],
        "TrafficPattern": {
            "TrafficType": "PHASES",
            "Phases": [{"InitialNumberOfUsers": 1, "SpawnRate": 1, "DurationInSeconds": 120}],
        },
    },
    StoppingConditions={
        "MaxInvocations": 500,
        "ModelLatencyThresholds": [{"Percentile": "P95", "ValueInMilliseconds": 50}],
    },
)
print(advanced_response)
8. Custom Load Test Results
Inference Recommender runs benchmarks on the endpoint configuration specified in the advanced job. Below is the result.
Running the Inference recommender job will take ~15 minutes.
[ ]:
%%time
import boto3
import pprint
import pandas as pd
client = boto3.client("sagemaker", region)
ended = False
while not ended:
    inference_recommender_job = client.describe_inference_recommendations_job(
        JobName=str(advanced_job)
    )
    if inference_recommender_job["Status"] in ["COMPLETED", "STOPPED", "FAILED"]:
        ended = True
    else:
        print("Inference recommender job in progress")
        time.sleep(300)

if inference_recommender_job["Status"] == "FAILED":
    print("Inference recommender job failed")
    print("Failed Reason: {}".format(inference_recommender_job["FailedReason"]))
else:
    print("Inference recommender job completed")
Detailing out the result
Analyzing the load test result, we can see that to achieve 50 ms latency, we need two ml.m5.2xlarge instances, with MaxInvocations (the maximum number of requests per minute expected for the endpoint) of ~736.
[ ]:
data = [
    {**x["EndpointConfiguration"], **x["ModelConfiguration"], **x["Metrics"]}
    for x in inference_recommender_job["InferenceRecommendations"]
]
df = pd.DataFrame(data)
df.drop(columns=["VariantName"], inplace=True, errors="ignore")
pd.set_option("display.max_colwidth", 400)
df.head()
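Once you settle on a configuration, you can carry it into a real endpoint. The sketch below shows one plausible way to do that with boto3, using the cheapest row of df; the resource names are illustrative, and note that the model package may need its ModelApprovalStatus set to Approved before it can be deployed.
[ ]:
# Hedged sketch: deploy the cheapest recommended configuration. Resource
# names are illustrative; the model package may need ModelApprovalStatus
# "Approved" before create_model will accept it.
best = df.sort_values(by=["CostPerInference"]).iloc[0]

deploy_model_name = "sk-housing-recommended-" + str(round(time.time()))
endpoint_config_name = deploy_model_name + "-config"

# A Model resource backed by the registered model package version.
client.create_model(
    ModelName=deploy_model_name,
    ExecutionRoleArn=role,
    Containers=[{"ModelPackageName": model_package_version_response["ModelPackageArn"]}],
)

# Endpoint config using the recommended instance type and count.
client.create_endpoint_config(
    EndpointConfigName=endpoint_config_name,
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": deploy_model_name,
            "InstanceType": best["InstanceType"],
            "InitialInstanceCount": int(best["InitialInstanceCount"]),
        }
    ],
)

client.create_endpoint(
    EndpointName=deploy_model_name + "-endpoint", EndpointConfigName=endpoint_config_name
)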
Notebook CI Test Results
This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.