SageMaker Inference Recommender - XGBoost
This notebook’s CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook.
1. Introduction
SageMaker Inference Recommender is a new capability of SageMaker that reduces the time required to get machine learning (ML) models in production by automating load tests and optimizing model performance across instance types. You can use Inference Recommender to select a real-time inference endpoint that delivers the best performance at the lowest cost.
Get started with Inference Recommender on SageMaker in minutes to select an instance, and receive an optimized endpoint configuration within hours, eliminating weeks of manual testing and tuning.
2. Setup
Note that we are using the conda_python3 kernel in SageMaker Notebook Instances. This is running Python 3.6. If you’d like to use the same setup, in the AWS Management Console, go to the Amazon SageMaker console. Choose Notebook Instances, and create a new notebook instance. Upload the current notebook and set the kernel. You can also run this in SageMaker Studio Notebooks with the Python 3 (Data Science) kernel.
In the next steps, you’ll import standard methods and libraries as well as set variables that will be used in this notebook. The get_execution_role function retrieves the AWS Identity and Access Management (IAM) role you created at the time of creating your notebook instance.
[ ]:
!pip install --upgrade pip awscli botocore boto3 --quiet
[ ]:
from sagemaker import get_execution_role, Session, image_uris
import boto3
import time
[ ]:
region = boto3.Session().region_name
role = get_execution_role()
sm_client = boto3.client("sagemaker", region_name=region)
sagemaker_session = Session()
print(region)
3. Machine learning model details
Inference Recommender uses metadata about your ML model to recommend the best instance types and endpoint configurations for deployment. You can provide as much or as little information as you’d like but the more information you provide, the better your recommendations will be.
ML Frameworks: TENSORFLOW, PYTORCH, XGBOOST, SAGEMAKER-SCIKIT-LEARN
ML Domains: COMPUTER_VISION, NATURAL_LANGUAGE_PROCESSING, MACHINE_LEARNING
Example ML Tasks: CLASSIFICATION, REGRESSION, IMAGE_CLASSIFICATION, OBJECT_DETECTION, SEGMENTATION, MASK_FILL, TEXT_CLASSIFICATION, TEXT_GENERATION, OTHER
Note: Select the task that is the closest match to your model. Choose OTHER if none apply.
[ ]:
# ML framework details
framework = "XGBOOST"
framework_version = "1.2.0"
# model name as standardized by model zoos or a similar open source model
model_name = "xgboost"
# ML model details
ml_domain = "MACHINE_LEARNING"
ml_task = "CLASSIFICATION"
4. Create a model archive
SageMaker models need to be packaged in .tar.gz files. When your SageMaker Endpoint is provisioned, the files in the archive will be extracted and put in /opt/ml/model/ on the Endpoint.
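For reference only (the archive built in this notebook contains just the model file), here is a minimal sketch of what a custom inference script bundled in the archive could look like. It assumes the SageMaker XGBoost container's model_fn convention and the xgboost.model filename used later in this notebook:

# inference.py -- a minimal sketch, assuming the SageMaker XGBoost
# container's model_fn convention; model_dir is the directory where
# the archive was extracted on the Endpoint (/opt/ml/model/)
import os

import xgboost as xgb


def model_fn(model_dir):
    # Load the model file that was packaged into the .tar.gz archive
    booster = xgb.Booster()
    booster.load_model(os.path.join(model_dir, "xgboost.model"))
    return booster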
In this step, we will:
Train a sample XGBoost model (optional if you already have one)
Package it as a model archive and upload it to S3
These steps are provided as a sample reference but can and should be modified when using your own trained models with Inference Recommender.
Optional: Train an XGBoost model
Let’s quickly train an XGBoost model. If you already have a model, you can skip this step and proceed to the next section.
For the purposes of this notebook, we are training an XGBoost model on random data.
[ ]:
# Install sklearn and XGBoost
!pip3 install -U scikit-learn xgboost==1.2.0 --quiet
[ ]:
# Import required libraries
import numpy as np
from numpy import loadtxt
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
[ ]:
# Generate dummy data to perform binary classification
seed = 7
np.random.seed(seed)  # seed NumPy so the generated data is reproducible
features = 50  # number of features
samples = 10000  # number of samples
X = np.random.rand(samples, features).astype("float32")
Y = np.random.randint(2, size=samples)
test_size = 0.1
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=test_size, random_state=seed)
[ ]:
model = XGBClassifier()
model.fit(X_train, y_train)
[ ]:
model_fname = "xgboost.model"
model.save_model(model_fname)
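As an optional sanity check (an addition here, not part of the original flow), you can reload the saved file with a plain Booster and confirm it still produces predictions before packaging it:

import xgboost as xgb

# Reload the saved model file and score a few test rows;
# binary:logistic models return probabilities here
booster = xgb.Booster()
booster.load_model(model_fname)
print(booster.predict(xgb.DMatrix(X_test[:5])))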
Create a tarball
To bring your own XGBoost model, SageMaker expects a single archive file in .tar.gz format, containing a model file and optionally inference code.
[ ]:
model_archive_name = "xgbmodel.tar.gz"
[ ]:
!tar -cvpzf {model_archive_name} 'xgboost.model'
Upload to S3
We now have a model archive ready. We need to upload it to S3 before we can use it with Inference Recommender. We’ll use the SageMaker Python SDK to handle the upload.
[ ]:
# model package tarball (model artifact + inference code)
model_url = sagemaker_session.upload_data(path=model_archive_name, key_prefix="xgbmodel")
print("model uploaded to: {}".format(model_url))
5. Create a sample payload archive
We need to create an archive that contains individual files that Inference Recommender can send to your SageMaker Endpoints. Inference Recommender will randomly sample files from this archive, so make sure it contains a distribution of payloads similar to what you’d expect in production. Note that your inference code must be able to read in the file formats from the sample payload.
Here we are only adding a single CSV file for the example. In your own use case(s), it’s recommended to add a variety of samples that are representative of your payloads.
[ ]:
payload_archive_name = "xgb_payload.tar.gz"
[ ]:
print(X_test.shape)
[ ]:
batch_size = 100
np.savetxt("sample.csv", X_test[0:batch_size, :], delimiter=",")
[ ]:
!wc -l sample.csv
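Since your inference code must be able to parse these files, a quick sanity check (an addition here, not part of the original flow) is to confirm the CSV round-trips into the expected feature matrix:

# The payload should parse back into a (batch_size, features) matrix,
# matching what the endpoint will receive
payload_check = np.loadtxt("sample.csv", delimiter=",")
print(payload_check.shape)  # expected: (100, 50)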
Create a tarball
[ ]:
!tar -cvzf {payload_archive_name} sample.csv
Upload to S3
Next, we’ll upload the packaged payload examples (xgb_payload.tar.gz) that we created above to S3. The S3 location will be used as input to our Inference Recommender job later in this notebook.
[ ]:
sample_payload_url = sagemaker_session.upload_data(
    path=payload_archive_name, key_prefix="xgb_payload"
)
6. Register model in Model Registry
In order to use Inference Recommender, you must have a versioned model in SageMaker Model Registry. To register a model in the Model Registry, you must have a model artifact packaged in a tarball and an inference container image. Registering a model includes the following steps:
Create Model Group: This is a one-time task per machine learning use case. A Model Group contains one or more versions of your packaged model.
Register Model Version/Package: This task is performed for each new packaged model version.
Container image URL
If you don’t have an inference container image, you can use one of the open source AWS Deep Learning Containers (DLCs) provided by AWS to serve your ML model. The code below retrieves the XGBoost DLC based on your region and framework version.
[ ]:
dlc_uri = image_uris.retrieve("xgboost", region, "1.2-1")
dlc_uri
Create Model Group
[ ]:
model_package_group_name = "{}-cpu-models-".format(framework) + str(round(time.time()))
model_package_group_description = "{} models".format(ml_task.lower())
model_package_group_input_dict = {
    "ModelPackageGroupName": model_package_group_name,
    "ModelPackageGroupDescription": model_package_group_description,
}
create_model_package_group_response = sm_client.create_model_package_group(
    **model_package_group_input_dict
)
print(
    "ModelPackageGroup Arn : {}".format(create_model_package_group_response["ModelPackageGroupArn"])
)
Register Model Version/Package
In this step, you’ll register your pretrained model that was packaged in the prior steps as a new version in SageMaker Model Registry. First, you’ll configure the model package/version, identifying which model package group this new model should be registered within as well as the initial approval status. You’ll also identify the domain and task for your model. These values were set earlier in the notebook, where ml_domain = 'MACHINE_LEARNING' and ml_task = 'CLASSIFICATION'.
Note: ModelApprovalStatus is a configuration parameter that can be used in conjunction with SageMaker Projects to trigger an automated deployment pipeline. A sketch of updating it appears after the model package is created below.
[ ]:
model_package_description = "{} {} inference recommender".format(framework, model_name)
model_approval_status = "PendingManualApproval"
create_model_package_input_dict = {
    "ModelPackageGroupName": model_package_group_name,
    "Domain": ml_domain.upper(),
    "Task": ml_task.upper(),
    "SamplePayloadUrl": sample_payload_url,
    "ModelPackageDescription": model_package_description,
    "ModelApprovalStatus": model_approval_status,
}
Set up inference specification
You’ll now set up the inference specification configuration for your model version. This contains information on how the model should be hosted.
Inference Recommender expects a single input MIME type for sending requests. Learn more about common inference data formats on SageMaker. This MIME type will be sent in the Content-Type header when invoking your endpoint.
[ ]:
input_mime_types = ["text/csv"]
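For illustration only, this is how that MIME type would appear when invoking a deployed endpoint; the endpoint name below is a placeholder, since no endpoint exists at this point in the notebook:

# Sketch: sending a CSV payload to a (hypothetical) deployed endpoint.
# "my-xgb-endpoint" is a placeholder, not created by this notebook.
runtime_client = boto3.client("sagemaker-runtime", region_name=region)
with open("sample.csv", "rb") as f:
    response = runtime_client.invoke_endpoint(
        EndpointName="my-xgb-endpoint",
        ContentType="text/csv",  # sent as the Content-Type header
        Body=f.read(),
    )
print(response["Body"].read())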
If you specify a set of instance types below (i.e., a non-empty list), then Inference Recommender will only generate recommendations within that set of instances. For this example, we provide a list of common CPU instance types used with XGBoost. Note that if you want to compile your XGBoost model with Amazon SageMaker Neo, the supported images are listed here: Inference Container Images or SageMaker XGBoost containers, and you need to make sure the XGBoost version is 1.0 to 1.3.
[ ]:
supported_realtime_inference_types = [
    "ml.m4.2xlarge",
    "ml.c5.2xlarge",
    "ml.c5.xlarge",
    "ml.c5.9xlarge",
]
[ ]:
modelpackage_inference_specification = {
    "InferenceSpecification": {
        "Containers": [
            {
                "Image": dlc_uri,
                "Framework": framework.upper(),
                "FrameworkVersion": framework_version,
                "NearestModelName": model_name,
            }
        ],
        "SupportedContentTypes": input_mime_types,  # required, must be non-null
        "SupportedResponseMIMETypes": [],
        "SupportedRealtimeInferenceInstanceTypes": supported_realtime_inference_types,  # optional
    }
}

# Specify the model data
modelpackage_inference_specification["InferenceSpecification"]["Containers"][0][
    "ModelDataUrl"
] = model_url
Now that you’ve configured the model package, the next step is to create the model package/version in SageMaker Model Registry.
[ ]:
create_model_package_input_dict.update(modelpackage_inference_specification)
[ ]:
create_model_package_response = sm_client.create_model_package(**create_model_package_input_dict)
model_package_arn = create_model_package_response["ModelPackageArn"]
print("ModelPackage Version ARN : {}".format(model_package_arn))
Alternative Option: ContainerConfig
If you are missing mandatory fields needed to create an Inference Recommender job in your model package version, like so (this create_model_package_input_dict is missing Domain, Task, and SamplePayloadUrl):
create_model_package_input_dict = {
    "ModelPackageGroupName": model_package_group_name,
    "ModelPackageDescription": model_package_description,
    "ModelApprovalStatus": model_approval_status,
}
You may define the fields Domain, Task, and SamplePayloadUrl in the optional field ContainerConfig, like so:
payload_config = {
    "SamplePayloadUrl": sample_payload_url,
}

container_config = {
    "Domain": ml_domain.upper(),
    "Task": ml_task.upper(),
    "PayloadConfig": payload_config,
}
And then provide it directly within the create_inference_recommendations_job() API call, like so:
default_response = sm_client.create_inference_recommendations_job(
    JobName=str(default_job),  # default_job: a job name you define
    JobDescription="",
    JobType="Default",
    RoleArn=role,
    InputConfig={
        "ModelPackageVersionArn": model_package_arn,
        "ContainerConfig": container_config,
    },
)
For more information on what else can be provided via ContainerConfig, please refer to the CreateInferenceRecommendationsJob doc here: CreateInferenceRecommendationsJob
7. Create an Inference Recommender Default Job
Now with your model in Model Registry, you can kick off a ‘Default’ job to get instance recommendations. This only requires your ModelPackageVersionArn and comes back with recommendations within 45 minutes.
The output is a list of instance type recommendations with associated environment variables, cost, throughput and latency metrics.
[ ]:
job_name = model_name + "-instance-" + str(round(time.time()))
job_description = "{} {}".format(framework, model_name)
job_type = "Default"
print(job_name)
[ ]:
rv = sm_client.create_inference_recommendations_job(
    JobName=job_name,
    JobDescription=job_description,  # optional
    JobType=job_type,
    RoleArn=role,
    InputConfig={"ModelPackageVersionArn": model_package_arn},
)
print(rv)
8. Instance Recommendation Results
Each inference recommendation includes InstanceType, InitialInstanceCount, and EnvironmentParameters, which are tuned environment variable parameters for better performance. We also include performance and cost metrics such as MaxInvocations, ModelLatency, CostPerHour, and CostPerInference. We believe these metrics will help you narrow down to a specific endpoint configuration that suits your use case.
Example:
If your motivation is overall price-performance with an emphasis on throughput, then you should focus on CostPerInference metrics
If your motivation is a balance between latency and throughput, then you should focus on ModelLatency / MaxInvocations metrics

Metric | Description
---|---
ModelLatency | The interval of time taken by a model to respond as viewed from SageMaker. This interval includes the local communication times taken to send the request and to fetch the response from the container of a model and the time taken to complete the inference in the container. Units: Microseconds
MaxInvocations | The maximum number of InvokeEndpoint requests sent to a model endpoint. Units: None
CostPerHour | The estimated cost per hour for your real-time endpoint. Units: US Dollars
CostPerInference | The estimated cost per inference for your real-time endpoint. Units: US Dollars
[ ]:
import pprint
import pandas as pd
finished = False
while not finished:
    # Poll the job status every 5 minutes until it reaches a terminal state
    inference_recommender_job = sm_client.describe_inference_recommendations_job(JobName=job_name)
    if inference_recommender_job["Status"] in ["COMPLETED", "STOPPED", "FAILED"]:
        finished = True
    else:
        print("In progress")
        time.sleep(300)

if inference_recommender_job["Status"] == "FAILED":
    print("Inference recommender job failed")
    print("Failed Reason: {}".format(inference_recommender_job["FailedReason"]))
else:
    print("Inference recommender job completed")
[ ]:
data = [
    {**x["EndpointConfiguration"], **x["ModelConfiguration"], **x["Metrics"]}
    for x in inference_recommender_job["InferenceRecommendations"]
]
df = pd.DataFrame(data)
dropFilter = df.filter(["VariantName"])
df.drop(dropFilter, inplace=True, axis=1)
pd.set_option("display.max_colwidth", 400)
df.head()
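Following the example above, you can sort the results to match your motivation. A minimal sketch, assuming the job completed and df was built as above:

# Price-performance focus: cheapest cost per inference first
print(df.sort_values(by="CostPerInference").head())

# Throughput focus: highest sustained invocation rate first
print(df.sort_values(by="MaxInvocations", ascending=False).head())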
Optional: ListInferenceRecommendationsJobSteps
To see the list of subtasks for an Inference Recommender job, simply provide the JobName to the ListInferenceRecommendationsJobSteps API.
To see more information for the API, please refer to the doc here: ListInferenceRecommendationsJobSteps
[ ]:
list_job_steps_response = sm_client.list_inference_recommendations_job_steps(JobName=job_name)
print(list_job_steps_response)
9. Conclusion
This notebook discussed how to use SageMaker Inference Recommender with an XGBoost model to help determine the right CPU instance to reduce costs and maximize performance. The notebook walked you through training a quick XGBoost model, registering your model in Model Registry, and creating an Inference Recommender Default job to get recommendations. You can modify the batch size, features and instance types to match your own ML workload as well as bring your own XGBoost model for testing.
Notebook CI Test Results
This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.