SageMaker Inference Recommender for HuggingFace BERT Sentiment Analysis
This notebook’s CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook.
Contents
1. Introduction
SageMaker Inference Recommender is a new capability of SageMaker that reduces the time required to get machine learning (ML) models in production by automating performance benchmarking and load testing models across SageMaker ML instances. You can use Inference Recommender to deploy your model to a real-time inference endpoint that delivers the best performance at the lowest cost.
You can get started with Inference Recommender in minutes, select an instance, and get an optimized endpoint configuration in hours, eliminating weeks of manual testing and tuning time.
To begin, let’s update the required packages, i.e. the SageMaker Python SDK, boto3, botocore, and awscli.
[ ]:
import sys
!{sys.executable} -m pip install sagemaker botocore boto3 awscli transformers accelerate --upgrade
!pip install --upgrade pip awscli botocore boto3 --quiet
If you run this notebook in SageMaker Studio, you need to make sure ipywidgets is installed and restart the kernel, so please uncomment the code in the next cell and run it.
[ ]:
# %%capture
# import IPython
# import sys
# !{sys.executable} -m pip install ipywidgets
# IPython.Application.instance().kernel.do_shutdown(True) # has to restart kernel so changes are used
2. Download a pre-trained Model
In this example, we are using a Huggingface pre-trained sentiment-analysis model.
You can learn more about it in the 🤗 Transformers library Quick tour: https://huggingface.co/docs/transformers/quicktour
[ ]:
from sagemaker import get_execution_role, Session, image_uris
import pandas as pd
import boto3
import datetime
import time
import os
region = boto3.Session().region_name
role = get_execution_role()
sagemaker_session = Session()
print(region)
[ ]:
export_dir = "./model/"
if not os.path.exists(export_dir):
    os.makedirs(export_dir)
    print("Directory ", export_dir, " Created ")
else:
    print("Directory ", export_dir, " already exists")
model_archive_name = "hf-model.tar.gz"
payload_archive_name = "hf_payload.tar.gz"
Initiate a Huggingface pipeline
The pipelines are a great and easy way to use models for inference. These pipelines are objects that abstract most of the complex code from the library, offering a simple API dedicated to several tasks, including Named Entity Recognition, Masked Language Modeling, Sentiment Analysis, Feature Extraction and Question Answering. See the task summary for examples of use.
[ ]:
from transformers import pipeline
sentiment_analysis = pipeline("sentiment-analysis")
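Before saving the model, you can sanity-check the pipeline locally. The sentences and scores below are purely illustrative, but the output format is a list of dicts with a label and a score:

local_result = sentiment_analysis(["I love using SageMaker!", "This took far too long."])
print(local_result)  # e.g. [{'label': 'POSITIVE', 'score': 0.999...}, {'label': 'NEGATIVE', 'score': 0.999...}]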
Save the pre-trained model on file system
[ ]:
sentiment_analysis.save_pretrained("./model")
Write the Inference Script
To deploy a pretrained PyTorch model, you’ll need to use the PyTorch estimator object to create a PyTorchModel object and set a different entry_point.
You’ll use the PyTorchModel object to deploy a PyTorchPredictor. This creates a SageMaker Endpoint – a hosted prediction service that we can use to perform inference.
An implementation of model_fn is required for the inference script. We are going to use the default implementations of input_fn, predict_fn, output_fn and model_fn defined in sagemaker-pytorch-containers.
Here’s an example of the inference script:
[ ]:
!cat code/inference.py
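The script itself lives in code/inference.py and is not reproduced on this page. As a rough, hypothetical sketch (not the shipped file), a handler for this model could load the saved pipeline in model_fn and decode CSV text in input_fn, along these lines:

# Hypothetical sketch only -- see code/inference.py for the actual script used by this notebook.
from transformers import pipeline


def model_fn(model_dir):
    # Load the tokenizer and weights written by save_pretrained() into model_dir.
    return pipeline("sentiment-analysis", model=model_dir, tokenizer=model_dir)


def input_fn(request_body, request_content_type):
    if isinstance(request_body, bytes):
        request_body = request_body.decode("utf-8")
    if request_content_type == "text/csv":
        # One sentence per line, matching the sample payload.
        return [line for line in request_body.strip().split("\n") if line]
    raise ValueError("Unsupported content type: {}".format(request_content_type))


def predict_fn(data, model):
    return model(data)


def output_fn(predictions, accept):
    # Return one "label,score" row per input sentence.
    return "\n".join("{},{}".format(p["label"], p["score"]) for p in predictions)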
You can use a requirements.txt to add Python packages.
[ ]:
!cat code/requirements.txt
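If you need to author this file yourself, it usually just lists the extra libraries the inference script imports; for this example that would plausibly be a single line such as (illustrative, not necessarily the exact contents of code/requirements.txt):

transformers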
Create the directory structure for your model files
The directory structure where you saved your PyTorch model should look something like the following:
| model
| |--pytorch_model.bin
| |--config.json
| |--vocab.txt
| |--tokenizer.json
| |--tokenizer_config.json
| |--special_tokens_map.json
|
| code
| |--inference.py
| |--requirements.txt
Where requirements.txt is an optional file that specifies dependencies on third-party libraries.
Let’s copy the code directory into the model directory to comply with the directory structure mentioned above.
[ ]:
!cp -r ./code/ ./model/
[ ]:
!ls -rtlh ./model/
Tar the model and code
[ ]:
!cd model && tar -cvpzf ../{model_archive_name} *
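The next cell archives a ./sample-payload/ directory containing one or more example request files (a test_data.csv from it is read again at the end of this notebook to invoke the endpoint). If you don’t already have such a directory, a minimal, hypothetical version could be created like this:

import os

os.makedirs("./sample-payload", exist_ok=True)
sample_sentences = [
    "I really enjoyed this product and it exceeded my expectations.",
    "The service was slow and the staff was unfriendly.",
]
# One sentence per line, no header or commas, matching the text/csv content type used below.
with open("./sample-payload/test_data.csv", "w") as f:
    f.write("\n".join(sample_sentences) + "\n")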
Tar the payload
[ ]:
!cd ./sample-payload/ && tar czvf ../{payload_archive_name} *
Upload the model and payload to S3
We now have a model archive and a payload archive ready. We need to upload them to S3 before we can use them with Inference Recommender, so we will use the SageMaker Python SDK to handle the upload.
We need to create an archive that contains individual files that Inference Recommender can send to your SageMaker Endpoints. Inference Recommender will randomly sample files from this archive so make sure it contains a similar distribution of payloads you’d expect in production. Note that your inference code must be able to read in the file formats from the sample payload.
[ ]:
%%time
import os
import boto3
import re
import copy
import time
from time import gmtime, strftime
import sagemaker
from sagemaker import get_execution_role
# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket and prefix
bucket = sagemaker.Session().default_bucket()
prefix = "sagemaker/huggingface-pytorch-inference-recommender"
sample_payload_url = sagemaker.Session().upload_data(
payload_archive_name, bucket=bucket, key_prefix=prefix + "/inference"
)
model_url = sagemaker.Session().upload_data(
model_archive_name, bucket=bucket, key_prefix=prefix + "/sentiment-analysis/model"
)
print(sample_payload_url)
print(model_url)
3. Machine Learning model details
Inference Recommender uses information about your ML model to recommend the best instance types and endpoint configurations for deployment. You can provide as much or as little information as you’d like and Inference Recommender will use that to provide recommendations.
Example ML Domains: COMPUTER_VISION, NATURAL_LANGUAGE_PROCESSING, MACHINE_LEARNING
Example ML Tasks: CLASSIFICATION, REGRESSION, OBJECT_DETECTION, OTHER
Note: Select the task that is the closest match to your model. Choose OTHER if none apply.
Example Model names: resnet50, yolov4, xgboost, etc.
Use the list_model_metadata API to fetch the list of available models. This will help you pick the closest model for a better recommendation.
[ ]:
import boto3
import pandas as pd
client = boto3.client("sagemaker", region)
list_model_metadata_response = client.list_model_metadata()
domains = []
frameworks = []
framework_versions = []
tasks = []
models = []
for model_summary in list_model_metadata_response["ModelMetadataSummaries"]:
    domains.append(model_summary["Domain"])
    tasks.append(model_summary["Task"])
    models.append(model_summary["Model"])
    frameworks.append(model_summary["Framework"])
    framework_versions.append(model_summary["FrameworkVersion"])
data = {
    "Domain": domains,
    "Task": tasks,
    "Framework": frameworks,
    "FrameworkVersion": framework_versions,
    "Model": models,
}
df = pd.DataFrame(data)
pd.set_option("display.max_rows", None)
pd.set_option("display.max_columns", None)
pd.set_option("display.width", 1000)
pd.set_option("display.colheader_justify", "center")
pd.set_option("display.precision", 3)
display(df.sort_values(by=["Domain", "Task", "Framework", "FrameworkVersion"]))
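Since the full list is long, you can also filter it down to likely candidates; for example, to look only at PyTorch models in the NLP domain (using the df built in the previous cell):

nlp_pytorch = df[(df["Domain"] == "NATURAL_LANGUAGE_PROCESSING") & (df["Framework"] == "PYTORCH")]
display(nlp_pytorch.head(10))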
In this example, as we are performing sentiment analysis with HuggingFace BERT, we select NATURAL_LANGUAGE_PROCESSING as the Domain, FILL_MASK as the Task, PYTORCH as the Framework, and bert-base-uncased as the Model.
[ ]:
ml_domain = "NATURAL_LANGUAGE_PROCESSING"
ml_task = "FILL_MASK"
ml_framework = "PYTORCH"
framework_version = "1.6.0"
model = "bert-base-uncased"
Container image URL
If you don’t have an inference container image, you can use Prebuilt SageMaker Docker Images for Deep Learning provided by AWS to serve your ML model.
[ ]:
from sagemaker import image_uris
# ML model details
model_name = "huggingface-pytorch-" + datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
inference_image = image_uris.retrieve(
framework="pytorch",
region=region,
version="1.7.1",
py_version="py3",
instance_type="ml.m5.large",
image_scope="inference",
)
print(inference_image)
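As an alternative to the generic PyTorch image, SageMaker also publishes dedicated Hugging Face inference containers that can be retrieved the same way. The version strings below are illustrative and must match a combination that image_uris supports in your region:

# Illustrative only: retrieve a Hugging Face DLC instead of the plain PyTorch image.
hf_inference_image = image_uris.retrieve(
    framework="huggingface",
    region=region,
    version="4.17.0",  # Transformers version (illustrative)
    base_framework_version="pytorch1.10.2",  # underlying framework (illustrative)
    py_version="py38",
    instance_type="ml.m5.large",
    image_scope="inference",
)
print(hf_inference_image)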
4. Register Model Version/Package
Inference Recommender expects the model to be packaged in the model registry. Here, we are creating a model package group and a model package version. In addition to the container and model URL, the model package version lets you pass additional information about the model, such as Domain, Task, Framework, FrameworkVersion, NearestModelName, and SamplePayloadUrl.
You specify the list of instance types that can be used to generate inferences in real time in the SupportedRealtimeInferenceInstanceTypes parameter. This list of instance types is key for the Inference Recommender feature. For inference on tabular data, e.g. with scikit-learn or XGBoost models, you’ll probably want to use standard or compute-optimized instances. For deep learning models, you will probably want to use accelerated computing (GPU) instances.
As the SamplePayloadUrl and SupportedContentTypes parameters are essential for benchmarking the endpoint, we also highly recommend that you specify Domain, Task, Framework, FrameworkVersion, and NearestModelName for better inference recommendations.
[ ]:
import boto3
client = boto3.client("sagemaker", region)
model_package_group_name = "huggingface-pytorch-" + str(round(time.time()))
print(model_package_group_name)
model_package_group_response = client.create_model_package_group(
    ModelPackageGroupName=str(model_package_group_name),
    ModelPackageGroupDescription="My sample HuggingFace PyTorch model package group",
)
print(model_package_group_response)
[ ]:
model_package_version_response = client.create_model_package(
ModelPackageGroupName=str(model_package_group_name),
ModelPackageDescription="HuggingFace PyTorch Inference Recommender Demo",
Domain=ml_domain,
Task=ml_task,
SamplePayloadUrl=sample_payload_url,
InferenceSpecification={
"Containers": [
{
"ContainerHostname": "huggingface-pytorch",
"Image": inference_image,
"ModelDataUrl": model_url,
"Framework": ml_framework,
"NearestModelName": model,
"Environment": {
"SAGEMAKER_CONTAINER_LOG_LEVEL": "20",
"SAGEMAKER_PROGRAM": "inference.py",
"SAGEMAKER_REGION": region,
"SAGEMAKER_SUBMIT_DIRECTORY": model_url,
},
},
],
"SupportedRealtimeInferenceInstanceTypes": [
"ml.c5.large",
"ml.c5.xlarge",
"ml.c5.2xlarge",
"ml.m5.xlarge",
"ml.m5.2xlarge",
],
"SupportedContentTypes": ["text/csv"],
"SupportedResponseMIMETypes": ["text/csv"],
},
)
print(model_package_version_response)
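Registration usually completes within seconds, but you can confirm the package is ready before kicking off a recommendation job:

package_status = client.describe_model_package(
    ModelPackageName=model_package_version_response["ModelPackageArn"]
)["ModelPackageStatus"]
print(package_status)  # expect "Completed"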
Alternative Option: ContainerConfig
If your model package version is missing fields that are mandatory for creating an Inference Recommender job, like so (this create_model_package call is missing Domain, Task, and SamplePayloadUrl):
client.create_model_package(
ModelPackageGroupName=str(model_package_group_name),
ModelPackageDescription="HuggingFace PyTorch Inference Recommender Demo",
InferenceSpecification={
"Containers": [
{
"ContainerHostname": "huggingface-pytorch",
"Image": inference_image,
"ModelDataUrl": model_url,
"Framework": ml_framework,
"NearestModelName": model,
"Environment": {
"SAGEMAKER_CONTAINER_LOG_LEVEL": "20",
"SAGEMAKER_PROGRAM": "inference.py",
"SAGEMAKER_REGION": region,
"SAGEMAKER_SUBMIT_DIRECTORY": model_url,
},
},
],
"SupportedRealtimeInferenceInstanceTypes": [
"ml.c5.large",
"ml.c5.xlarge",
"ml.c5.2xlarge",
"ml.m5.xlarge",
"ml.m5.2xlarge",
],
"SupportedContentTypes": ["text/csv"],
"SupportedResponseMIMETypes": ["text/csv"],
},
)
You may define the fields Domain, Task, and SamplePayloadUrl in the optional field ContainerConfig like so:
payload_config = {
"SamplePayloadUrl": sample_payload_url,
}
container_config = {
"Domain": ml_domain,
"Task": ml_task,
"PayloadConfig": payload_config,
}
And then provide it directly within the create_inference_recommendations_job() API like so:
default_response = client.create_inference_recommendations_job(
JobName=str(default_job),
JobDescription="",
JobType="Default",
RoleArn=role,
InputConfig={
"ModelPackageVersionArn": model_package_arn,
"ContainerConfig": container_config
},
)
For more information on what else can be provided via ContainerConfig, please refer to the CreateInferenceRecommendationsJob API documentation.
5. Create a SageMaker Inference Recommender Default Job
Now with your model in Model Registry, you can kick off a ‘Default’ job to get instance recommendations. This only requires your ModelPackageVersionArn and comes back with recommendations within an hour.
The output is a list of instance type recommendations with associated environment variables, cost, throughput and latency metrics.
[ ]:
import boto3
from sagemaker import get_execution_role
client = boto3.client("sagemaker", region)
role = get_execution_role()
default_job = "huggingface-pytorch-basic-recommender-job-" + datetime.datetime.now().strftime(
"%Y-%m-%d-%H-%M-%S"
)
default_response = client.create_inference_recommendations_job(
JobName=str(default_job),
JobDescription="HuggingFace PyTorch Inference Basic Recommender Job",
JobType="Default",
RoleArn=role,
InputConfig={"ModelPackageVersionArn": model_package_version_response["ModelPackageArn"]},
)
print(default_response)
6. Instance Recommendation Results
The Inference Recommender job provides multiple endpoint recommendations in its result. Each recommendation includes InstanceType, InitialInstanceCount, and EnvironmentParameters, which includes tuned parameters for better performance. We also include benchmarking results such as MaxInvocations, ModelLatency, CostPerHour, and CostPerInference for deeper analysis. The information provided will help you narrow down to a specific endpoint configuration that suits your use case.
Example metrics: CostPerInference, and ModelLatency / MaxInvocations.
Running the Inference Recommender job will take ~35 minutes.
[ ]:
%%time
import boto3
import pprint
import pandas as pd
client = boto3.client("sagemaker", region)
ended = False
while not ended:
    inference_recommender_job = client.describe_inference_recommendations_job(
        JobName=str(default_job)
    )
    if inference_recommender_job["Status"] in ["COMPLETED", "STOPPED", "FAILED"]:
        ended = True
    else:
        print("Inference recommender job in progress")
        time.sleep(60)

if inference_recommender_job["Status"] == "FAILED":
    print("Inference recommender job failed ")
    print("Failed Reason: {}".format(inference_recommender_job["FailedReason"]))
else:
    print("Inference recommender job completed")
Detailing the results
[ ]:
data = [
{**x["EndpointConfiguration"], **x["ModelConfiguration"], **x["Metrics"]}
for x in inference_recommender_job["InferenceRecommendations"]
]
df = pd.DataFrame(data)
dropFilter = df.filter(["VariantName"])
df.drop(dropFilter, inplace=True, axis=1)
pd.set_option("max_colwidth", 400)
Let’s sort the result dataframe by MaxInvocations (the maximum number of requests per minute expected for the endpoint), in descending order.
[ ]:
df.sort_values(by=["MaxInvocations"], ascending=False).head()
This time, let’s sort the result dataframe by ModelLatency, the interval of time taken by a model to respond as viewed from SageMaker. The interval includes the local communication time taken to send the request and to fetch the response from the container of a model and the time taken to complete the inference in the container.
[ ]:
df.sort_values(by=["ModelLatency"]).head()
Let’s choose the instance with the lowest ModelLatency. This is done by choosing the first record of the result dataframe, sorted in ascending order.
[ ]:
instance_type = (
df.sort_values(by=["ModelLatency"]).head(1)["InstanceType"].to_string(index=False).strip()
)
instance_type
Optional: ListInferenceRecommendationsJobSteps
To see the list of subtasks for an Inference Recommender job, simply provide the JobName to the ListInferenceRecommendationsJobSteps API.
For more information, please refer to the ListInferenceRecommendationsJobSteps API documentation.
[ ]:
list_job_steps_response = client.list_inference_recommendations_job_steps(JobName=str(default_job))
print(list_job_steps_response)
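The raw response can be verbose; for a quick summary you can iterate over the returned steps (this assumes the response contains a Steps list, as documented for this API):

for step in list_job_steps_response.get("Steps", []):
    print(step.get("StepType"), "-", step.get("Status"))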
7. Create an Endpoint for lowest latency real-time inference
Next we will create a SageMaker real-time endpoint using the instance with the lowest latency for the model, detected in the Inference Recommender Default Job that was run previously.
[ ]:
model_package_arn = model_package_version_response["ModelPackageArn"]
print("ModelPackage Version ARN : {}".format(model_package_arn))
View Model Groups and Versions
You can view details of a specific model version by using either the AWS SDK for Python (Boto3) or Amazon SageMaker Studio. To view the details of a model version by using Boto3, call the list_model_packages method to view the model versions in a model group.
[ ]:
list_model_packages_response = client.list_model_packages(
ModelPackageGroupName=model_package_group_name
)
list_model_packages_response
[ ]:
model_version_arn = list_model_packages_response["ModelPackageSummaryList"][0]["ModelPackageArn"]
print(model_version_arn)
View Model Version Details
Call describe_model_package to see the details of the model version. You pass in the ARN of a model version that you got in the output of the call to list_model_packages.
[ ]:
client.describe_model_package(ModelPackageName=model_version_arn)
Update Model Approval Status
After you create a model version, you typically want to evaluate its performance before you deploy it to a production endpoint. If it performs to your requirements, you can update the approval status of the model version to Approved. Setting the status to Approved can initiate CI/CD deployment for the model. If the model version does not perform to your requirements, you can update the approval status to Rejected.
[ ]:
model_package_update_input_dict = {
"ModelPackageArn": model_package_arn,
"ModelApprovalStatus": "Approved",
}
model_package_update_response = client.update_model_package(**model_package_update_input_dict)
model_package_update_response
Deploy the Model in the Registry
After you register a model version and approve it for deployment, deploy it to a SageMaker endpoint for real-time inference.
When you create an MLOps project and choose an MLOps project template that includes model deployment, approved model versions in the model registry are automatically deployed to production. For information about using SageMaker MLOps projects, see Automate MLOps with SageMaker Projects: https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-projects.html
To deploy a model version using the AWS SDK for Python (Boto3), we’ll create a model object from the model version by calling the create_model method. Pass the Amazon Resource Name (ARN) of the model version as part of the Containers for the model object.
[ ]:
model_name = "huggingface-pytorch-model-" + datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
print("Model name : {}".format(model_name))
[ ]:
primary_container = {
"ModelPackageName": model_version_arn,
}
[ ]:
create_model_response = client.create_model(
    ModelName=model_name, ExecutionRoleArn=get_execution_role(), PrimaryContainer=primary_container
)
print("Model arn : {}".format(create_model_response["ModelArn"]))
Create an Endpoint Config from the model
This will create an endpoint configuration that Amazon SageMaker hosting services uses to deploy models. In the configuration, you identify one or more models, created using the CreateModel API, to deploy and the resources that you want Amazon SageMaker to provision. Then you call the CreateEndpoint API.
More info on create_endpoint_config can be found on the Boto3 SageMaker documentation page.
[ ]:
endpoint_config_name = "huggingface-pytorch-endpoint-config-" + datetime.datetime.now().strftime(
"%Y-%m-%d-%H-%M-%S"
)
endpoint_config_response = client.create_endpoint_config(
EndpointConfigName=endpoint_config_name,
ProductionVariants=[
{
"VariantName": "AllTrafficVariant",
"ModelName": model_name,
"InitialInstanceCount": 1,
"InstanceType": instance_type,
"InitialVariantWeight": 1,
},
],
)
endpoint_config_response
Deploy the Endpoint Config to a real-time endpoint
This will create an endpoint using the endpoint configuration specified in the request. Amazon SageMaker uses the endpoint to provision resources and deploy models. Note that you have already created the endpoint configuration with the CreateEndpointConfig API in the previous step.
More info on create_endpoint can be found on the Boto3 SageMaker documentation page.
[ ]:
endpoint_name = "huggingface-pytorch-endpoint-" + datetime.datetime.now().strftime(
"%Y-%m-%d-%H-%M-%S"
)
create_endpoint_response = client.create_endpoint(
EndpointName=endpoint_name,
EndpointConfigName=endpoint_config_name,
)
create_endpoint_response
Wait for Endpoint to be ready
[ ]:
%%time
describe_endpoint_response = client.describe_endpoint(EndpointName=endpoint_name)
while describe_endpoint_response["EndpointStatus"] == "Creating":
    describe_endpoint_response = client.describe_endpoint(EndpointName=endpoint_name)
    print(describe_endpoint_response["EndpointStatus"])
    time.sleep(15)
describe_endpoint_response
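Alternatively, boto3 ships a built-in waiter that does this polling for you:

# Blocks until the endpoint status reaches InService (or raises if creation fails).
waiter = client.get_waiter("endpoint_in_service")
waiter.wait(EndpointName=endpoint_name)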
Invoke Endpoint with boto3
After you deploy a model into production using Amazon SageMaker hosting services, your client applications use this API to get inferences from the model hosted at the specified endpoint.
For an overview of Amazon SageMaker, see How It Works.
Amazon SageMaker strips all POST headers except those supported by the API. Amazon SageMaker might add additional headers. You should not rely on the behavior of headers outside those enumerated in the request syntax.
Calls to InvokeEndpoint are authenticated by using AWS Signature Version 4. For information, see Authenticating Requests (AWS Signature Version 4) in the Amazon S3 API Reference.
A customer’s model containers must respond to requests within 60 seconds. The model itself can have a maximum processing time of 60 seconds before responding to invocations. If your model is going to take 50-60 seconds of processing time, the SDK socket timeout should be set to 70 seconds.
More info on invoke_endpoint can be found on the Boto3 SageMakerRuntime documentation page: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker-runtime.html#SageMakerRuntime.Client.invoke_endpoint
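If your model needs most of that 60-second window, you can raise the client-side read timeout when constructing the runtime client; the numbers below follow the guidance above and are otherwise illustrative:

from botocore.config import Config

# Give long-running invocations up to 70 seconds before the SDK socket times out.
runtime_config = Config(read_timeout=70, retries={"max_attempts": 0})
runtime_with_timeout = boto3.client("sagemaker-runtime", region_name=region, config=runtime_config)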
[ ]:
test_data = pd.read_csv("./sample-payload/test_data.csv", header=None)
test_data
[ ]:
runtime = boto3.client("sagemaker-runtime")
[ ]:
response = runtime.invoke_endpoint(
EndpointName=endpoint_name,
Body=test_data.to_csv(header=False, index=False),
ContentType="text/csv",
)
print(response["Body"].read())
8. Clean up
Endpoints should be deleted when no longer in use, since (per the SageMaker pricing page) they’re billed by time deployed.
[ ]:
client.delete_endpoint(EndpointName=endpoint_name)
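If you are fully done with this example, you can optionally also remove the endpoint configuration and the model it references:

client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)
client.delete_model(ModelName=model_name)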
9. Conclusion
In this notebook, you downloaded a Huggingface pre-trained sentiment-analysis model, compressed the model and the payload, and uploaded them to Amazon S3. Then you registered the model version and triggered a SageMaker Inference Recommender Default Job.
You then browsed the results, sorted by MaxInvocations and by ModelLatency, and decided to create an endpoint for lowest-latency real-time inference. After deploying the model to a real-time endpoint, you invoked the endpoint with a sample payload of a few sentences using boto3 and got the prediction results.
As next steps, you can try running SageMaker Inference Recommender on your own models to select an instance with the best price-performance for your needs.
Notebook CI Test Results
This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.