Deploying Serverless Endpoints From SageMaker Model Registry


This notebook’s CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook.

(The CI test badge for us-west-2 is shown here in the rendered notebook.)


SageMaker XGBoost Algorithm Regression Example

Amazon SageMaker Serverless Inference is a purpose-built inference option that makes it easy for customers to deploy and scale ML models. Serverless Inference is ideal for workloads which have idle periods between traffic spurts and can tolerate cold starts. Serverless endpoints also automatically launch compute resources and scale them in and out depending on traffic, eliminating the need to choose instance types or manage scaling policies.

SageMaker Model Registry can be used to catalog and manage different model versions, and it now supports deploying registered models to serverless endpoints. In this notebook we take the existing XGBoost Serverless Inference example and integrate it with the Model Registry. We then take our trained model and deploy it to a serverless endpoint using the Boto3 Python SDK. Note that, at the time of writing, the SageMaker Python SDK does not support deploying Model Registry models to serverless endpoints, which is why we use Boto3 here.

Notebook Setting

  • SageMaker Studio: Python 3 (Data Science) kernel

  • Regions Available: SageMaker Serverless Inference is currently available in preview in US East (Northern Virginia), US East (Ohio), US West (Oregon), EU (Ireland), Asia Pacific (Tokyo), and Asia Pacific (Sydney). After general availability it should be available in all commercial regions; check the SageMaker documentation for the current list of supported regions.

Table of Contents

  • Setup

  • Model Training

  • Model Registry

  • Deployment

    • Model Creation

    • Endpoint Configuration Creation

    • Serverless Endpoint Creation

    • Endpoint Invocation

  • Cleanup

Setup

To run this notebook, make sure your notebook execution role has SageMaker full access (for example, by attaching the AmazonSageMakerFullAccess managed policy).

[ ]:
! pip install sagemaker botocore boto3 awscli --upgrade
[ ]:
# Setup clients
import boto3

client = boto3.client(service_name="sagemaker")
runtime = boto3.client(service_name="sagemaker-runtime")
[ ]:
import sagemaker
from sagemaker.estimator import Estimator

boto_session = boto3.session.Session()
region = boto_session.region_name
print(region)

sagemaker_session = sagemaker.Session()
base_job_prefix = "xgboost-example"
role = sagemaker.get_execution_role()
print(role)

default_bucket = sagemaker_session.default_bucket()
s3_prefix = base_job_prefix

training_instance_type = "ml.m5.xlarge"
[ ]:
# retrieve data
s3 = boto3.client("s3")
s3.download_file(
    f"sagemaker-example-files-prod-{region}",
    "datasets/tabular/uci_abalone/train_csv/abalone_dataset1_train.csv",
    "abalone_dataset1_train.csv",
)
[ ]:
# upload data to S3
!aws s3 cp abalone_dataset1_train.csv s3://{default_bucket}/xgboost-regression/train.csv
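
If the aws CLI is not available in your kernel, a boto3 equivalent of the same upload (a minimal sketch reusing the s3 client created above) is:

[ ]:
# Alternative: upload the training data with boto3 instead of the AWS CLI
s3.upload_file(
    "abalone_dataset1_train.csv",
    default_bucket,
    "xgboost-regression/train.csv",
)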

Model Training

Now, we train an ML model using the SageMaker XGBoost Algorithm. In this example, we use a SageMaker-provided XGBoost container image and configure an estimator to train our model.

[ ]:
from sagemaker.inputs import TrainingInput

training_path = f"s3://{default_bucket}/xgboost-regression/train.csv"
train_input = TrainingInput(training_path, content_type="text/csv")
[ ]:
model_path = f"s3://{default_bucket}/{s3_prefix}/xgb_model"

# retrieve xgboost image
image_uri = sagemaker.image_uris.retrieve(
    framework="xgboost",
    region=region,
    version="1.0-1",
    py_version="py3",
    instance_type=training_instance_type,
)

# Configure Training Estimator
xgb_train = Estimator(
    image_uri=image_uri,
    instance_type=training_instance_type,
    instance_count=1,
    output_path=model_path,
    sagemaker_session=sagemaker_session,
    role=role,
)

# Set Hyperparameters
xgb_train.set_hyperparameters(
    objective="reg:linear",
    num_round=50,
    max_depth=5,
    eta=0.2,
    gamma=4,
    min_child_weight=6,
    subsample=0.7,
    silent=0,
)
[ ]:
# Fit model
xgb_train.fit({"train": train_input})
[ ]:
# Retrieve model data from training job
model_artifacts = xgb_train.model_data
model_artifacts

Model Registry

[ ]:
# Create a Model Package Group: https://docs.aws.amazon.com/sagemaker/latest/dg/model-registry-model-group.html
import time
from time import gmtime, strftime

model_package_group_name = "xgboost-abalone" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
model_package_group_input_dict = {
    "ModelPackageGroupName": model_package_group_name,
    "ModelPackageGroupDescription": "Model package group for xgboost regression model with Abalone dataset",
}

create_model_package_group_response = client.create_model_package_group(
    **model_package_group_input_dict
)
print(
    "ModelPackageGroup Arn : {}".format(create_model_package_group_response["ModelPackageGroupArn"])
)
[ ]:
model_package_group_arn = create_model_package_group_response["ModelPackageGroupArn"]
modelpackage_inference_specification = {
    "InferenceSpecification": {
        "Containers": [
            {
                "Image": image_uri,
            }
        ],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["text/csv"],
    }
}

# Specify the model source
model_url = model_artifacts

# Specify the model data
modelpackage_inference_specification["InferenceSpecification"]["Containers"][0][
    "ModelDataUrl"
] = model_url

create_model_package_input_dict = {
    "ModelPackageGroupName": model_package_group_arn,
    "ModelPackageDescription": "Model for regression with the Abalone dataset",
    "ModelApprovalStatus": "PendingManualApproval",
}
create_model_package_input_dict.update(modelpackage_inference_specification)

# Create the model package in the model package group
create_model_package_response = client.create_model_package(**create_model_package_input_dict)
model_package_arn = create_model_package_response["ModelPackageArn"]
print("ModelPackage Version ARN : {}".format(model_package_arn))
[ ]:
client.list_model_packages(ModelPackageGroupName=model_package_group_name)
[ ]:
model_package_arn = client.list_model_packages(ModelPackageGroupName=model_package_group_name)[
    "ModelPackageSummaryList"
][0]["ModelPackageArn"]
model_package_arn
[ ]:
client.describe_model_package(ModelPackageName=model_package_arn)
[ ]:
# Approve the model package
model_package_update_input_dict = {
    "ModelPackageArn": model_package_arn,
    "ModelApprovalStatus": "Approved",
}
model_package_update_response = client.update_model_package(**model_package_update_input_dict)
print(model_package_update_response)
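
As an optional sanity check (not part of the original flow), you can confirm that the package version now reports an Approved status:

[ ]:
# Optional: confirm the model package version is now Approved
client.describe_model_package(ModelPackageName=model_package_arn)["ModelApprovalStatus"]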

Deployment

Model Creation

[ ]:
model_name = "xgboost-serverless-model" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("Model name : {}".format(model_name))
container_list = [{"ModelPackageName": model_package_arn}]

create_model_response = client.create_model(
    ModelName=model_name, ExecutionRoleArn=role, Containers=container_list
)
print("Model arn : {}".format(create_model_response["ModelArn"]))

Endpoint Configuration Creation

This is where you adjust the serverless configuration for your endpoint. MaxConcurrency, the maximum number of concurrent invocations for a single endpoint, can be any value from 1 to 200, and MemorySizeInMB can be one of the following: 1024 MB, 2048 MB, 3072 MB, 4096 MB, 5120 MB, or 6144 MB.

[ ]:
endpoint_config_name = "xgboost-serverless-epc" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print(endpoint_config_name)
create_endpoint_config_response = client.create_endpoint_config(
    EndpointConfigName=endpoint_config_name,
    ProductionVariants=[
        {
            "ServerlessConfig": {"MemorySizeInMB": 1024, "MaxConcurrency": 10},
            "ModelName": model_name,
            "VariantName": "AllTraffic",
        }
    ],
)
print("Endpoint Configuration Arn: " + create_endpoint_config_response["EndpointConfigArn"])

Serverless Endpoint Creation

Now that we have an endpoint configuration, we can create a serverless endpoint and deploy our model to it. When creating the endpoint, provide the name of your endpoint configuration and a name for the new endpoint.

[ ]:
endpoint_name = "xgboost-serverless-ep" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("EndpointName={}".format(endpoint_name))

create_endpoint_response = client.create_endpoint(
    EndpointName=endpoint_name, EndpointConfigName=endpoint_config_name
)
print(create_endpoint_response["EndpointArn"])

Wait until the endpoint status is InService before invoking the endpoint.

[ ]:
import time

describe_endpoint_response = client.describe_endpoint(EndpointName=endpoint_name)

while describe_endpoint_response["EndpointStatus"] == "Creating":
    describe_endpoint_response = client.describe_endpoint(EndpointName=endpoint_name)
    print(describe_endpoint_response["EndpointStatus"])
    time.sleep(15)

describe_endpoint_response
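
Alternatively, boto3 provides a built-in waiter that blocks until the endpoint is in service, a drop-in replacement for the polling loop above:

[ ]:
# Alternative: use the built-in boto3 waiter instead of polling manually
waiter = client.get_waiter("endpoint_in_service")
waiter.wait(EndpointName=endpoint_name)
client.describe_endpoint(EndpointName=endpoint_name)["EndpointStatus"]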

Endpoint Invocation

Invoke the endpoint by sending it a request. The body below is a single sample data point taken from the Abalone training CSV downloaded earlier.

[ ]:
response = runtime.invoke_endpoint(
    EndpointName=endpoint_name,
    Body=b".345,0.224414,.131102,0.042329,.279923,-0.110329,-0.099358,0.0",
    ContentType="text/csv",
)

print(response["Body"].read())
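
The response body is returned as a byte stream. An optional post-processing step, assuming the endpoint returns a single CSV-formatted numeric value, decodes it into a float:

[ ]:
# Optional: invoke again and decode the raw bytes into a numeric prediction
# (assumes a single CSV-formatted value is returned)
result = runtime.invoke_endpoint(
    EndpointName=endpoint_name,
    Body=b".345,0.224414,.131102,0.042329,.279923,-0.110329,-0.099358,0.0",
    ContentType="text/csv",
)
prediction = float(result["Body"].read().decode("utf-8"))
print(prediction)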

Cleanup

Delete any resources you created in this notebook that you no longer wish to use.

[ ]:
client.delete_model(ModelName=model_name)
client.delete_endpoint_config(EndpointConfigName=endpoint_config_name)
client.delete_endpoint(EndpointName=endpoint_name)
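
The Model Registry resources created earlier are not removed by the cell above. If you no longer need them, the following optional sketch deletes the model package versions and then the model package group (versions must be deleted before their group):

[ ]:
# Optional: also remove the Model Registry resources created in this notebook.
# Model package versions must be deleted before their model package group.
for package in client.list_model_packages(ModelPackageGroupName=model_package_group_name)[
    "ModelPackageSummaryList"
]:
    client.delete_model_package(ModelPackageName=package["ModelPackageArn"])
client.delete_model_package_group(ModelPackageGroupName=model_package_group_name)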

Notebook CI Test Results

This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.

(CI test badges for us-east-1, us-east-2, us-west-1, ca-central-1, sa-east-1, eu-west-1, eu-west-2, eu-west-3, eu-central-1, eu-north-1, ap-southeast-1, ap-southeast-2, ap-northeast-1, ap-northeast-2, and ap-south-1 are shown here in the rendered notebook.)