Regression with Amazon SageMaker XGBoost (Parquet input)





Contents

  1. Introduction

  2. Setup

  3. Training

    1. Training with SageMaker Training

    2. Training with SageMaker Automatic Model Tuning

  4. Plotting Objective Metric


Introduction

This notebook demonstrates how to use a Parquet dataset to train the SageMaker XGBoost algorithm. The example is almost identical to Regression with Amazon SageMaker XGBoost algorithm: it tackles the same problem with the same solution, but has been modified to take Parquet input. The original notebook provides details of the dataset and the machine learning use case.

This notebook was tested in Amazon SageMaker Studio on a ml.t3.medium instance with Python 3 (Data Science) kernel.

Setup

[ ]:
!pip3 install -U sagemaker
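
After upgrading, restart the kernel if needed so the new version is picked up. You can confirm the installed SDK version (optional):

[ ]:
import sagemaker

print(sagemaker.__version__)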
[ ]:
import os
import boto3
import re
import sagemaker
from sagemaker import get_execution_role

role = get_execution_role()
region = boto3.Session().region_name

# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket here if you wish.
bucket = sagemaker.Session().default_bucket()
prefix = "sagemaker/DEMO-xgboost-parquet"
bucket_path = f"s3://{bucket}"

We will use the PyArrow library to store the Abalone dataset in Parquet format.
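
PyArrow is typically preinstalled in the Studio Data Science kernel; if it is missing from your environment, install it first (optional):

[ ]:
!pip3 install -U pyarrow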

[ ]:
import pyarrow
[ ]:
%%time

import numpy as np
import pandas as pd
from sklearn.datasets import load_svmlight_file

s3 = boto3.client("s3")
# Download the dataset and load into a pandas dataframe
FILE_NAME = "abalone.csv"
s3.download_file(
    f"sagemaker-example-files-prod-{region}", f"datasets/tabular/uci_abalone/abalone.csv", FILE_NAME
)
feature_names = [
    "Sex",
    "Length",
    "Diameter",
    "Height",
    "Whole weight",
    "Shucked weight",
    "Viscera weight",
    "Shell weight",
    "Rings",
]
data = pd.read_csv(FILE_NAME, header=None, names=feature_names)

# SageMaker XGBoost expects the label to be in the first column
data = data[feature_names[-1:] + feature_names[:-1]]
data["Sex"] = data["Sex"].astype("category").cat.codes

# Split the downloaded data into train/test dataframes
train, test = np.split(data.sample(frac=1), [int(0.8 * len(data))])

# requires PyArrow installed
train.to_parquet("abalone_train.parquet")
test.to_parquet("abalone_test.parquet")
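
As a quick sanity check, you can read a file back and confirm the label landed in the first column (optional):

[ ]:
# Verify the Parquet layout: 9 columns with the label ("Rings") first
check = pd.read_parquet("abalone_train.parquet")
print(check.shape)
print(check.columns[0])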
[ ]:
%%time
sagemaker.Session().upload_data(
    "abalone_train.parquet", bucket=bucket, key_prefix=prefix + "/" + "training"
)

sagemaker.Session().upload_data(
    "abalone_test.parquet", bucket=bucket, key_prefix=prefix + "/" + "validation"
)
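
You can verify the uploads by listing the objects under the prefix (optional; this assumes the uploads succeeded so that "Contents" is present in the response):

[ ]:
# Confirm the objects landed under the expected training/validation prefixes
for obj in s3.list_objects_v2(Bucket=bucket, Prefix=prefix)["Contents"]:
    print(obj["Key"])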

We obtain the container by specifying the framework version (1.7-1). This version string pins the upstream XGBoost framework version (1.7) plus an additional SageMaker revision (1). If you have an existing XGBoost workflow based on one of the previous containers (1.0-1, 1.2-2, 1.3-1, or 1.5-1), this is the only change necessary to run the same workflow on the new container.

[ ]:
container = sagemaker.image_uris.retrieve("xgboost", region, "1.7-1")

Training

After setting the training parameters, we kick off training and poll for status until the job completes.

Training can be done either by calling SageMaker Training with a set of hyperparameter values, or by leveraging SageMaker Automatic Model Tuning (AMT). AMT, also known as hyperparameter optimization (HPO), finds the best version of a model by running many training jobs on your dataset using the algorithm and the ranges of hyperparameters that you specify. It then chooses the hyperparameter values that result in the best-performing model, as measured by a metric that you choose.

In this notebook both methods are demonstrated, but the best training job produced by the HPO run is the one ultimately used for analysis. You can instead use the standalone training job by setting the variable use_amt below to False.

Training with SageMaker Training

Training takes between 5 and 6 minutes in this example:

[ ]:
%%time
import time
from time import gmtime, strftime

client = boto3.client("sagemaker", region_name=region)
use_amt = True

training_job_name = "xgboost-parquet-example-training-" + strftime("%Y-%m-%d-%H-%M-%S", gmtime())
print("Training job", training_job_name)

# Ensure that the training and validation data folders generated above are reflected in the "InputDataConfig" parameter below.

create_training_params = {
    "AlgorithmSpecification": {"TrainingImage": container, "TrainingInputMode": "Pipe"},
    "RoleArn": role,
    "OutputDataConfig": {"S3OutputPath": f"{bucket_path}/{prefix}/single-xgboost"},
    "ResourceConfig": {"InstanceCount": 1, "InstanceType": "ml.m5.2xlarge", "VolumeSizeInGB": 20},
    "TrainingJobName": training_job_name,
    "HyperParameters": {
        "max_depth": "5",
        "eta": "0.2",
        "gamma": "4",
        "min_child_weight": "6",
        "subsample": "0.7",
        "objective": "reg:linear",
        "num_round": "10",
        "verbosity": "2",
    },
    "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    "InputDataConfig": [
        {
            "ChannelName": "train",
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": f"{bucket_path}/{prefix}/training",
                    "S3DataDistributionType": "FullyReplicated",
                }
            },
            "ContentType": "application/x-parquet",
            "CompressionType": "None",
        },
        {
            "ChannelName": "validation",
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": f"{bucket_path}/{prefix}/validation",
                    "S3DataDistributionType": "FullyReplicated",
                }
            },
            "ContentType": "application/x-parquet",
            "CompressionType": "None",
        },
    ],
}

print(
    f"Creating a training job with name: {training_job_name}. It will take between 5 and 6 minutes to complete."
)
client.create_training_job(**create_training_params)

status = client.describe_training_job(TrainingJobName=training_job_name)["TrainingJobStatus"]
print(status)
while status not in ("Completed", "Failed"):
    time.sleep(60)
    status = client.describe_training_job(TrainingJobName=training_job_name)["TrainingJobStatus"]
    print(status)
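
If the job ends in the Failed state, DescribeTrainingJob exposes a FailureReason you can print to aid debugging (optional):

[ ]:
# Surface the failure reason, if any
desc = client.describe_training_job(TrainingJobName=training_job_name)
if desc["TrainingJobStatus"] == "Failed":
    print(desc["FailureReason"])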

Training with SageMaker Automatic Model Tuning

To create a tuning job using the AWS SageMaker Automatic Model Tuning API, you need to define three attributes:

  1. the tuning job name (a string)

  2. the tuning job config (a JSON object that specifies settings for the hyperparameter tuning job)

  3. the training job definition (a JSON object that configures the training jobs that the tuning job launches)

To learn more about that, refer to the Configure and Launch a Hyperparameter Tuning Job documentation.

Note that the tuning job will take between 7 and 10 minutes to complete.

[ ]:
from time import gmtime, strftime, sleep

tuning_job_name = "DEMO-xgboost-parquet-" + strftime("%d-%H-%M-%S", gmtime())

tuning_job_config = {
    "ParameterRanges": {
        "CategoricalParameterRanges": [],
        "ContinuousParameterRanges": [
            {
                "MaxValue": "0.5",
                "MinValue": "0.1",
                "Name": "eta",
            },
            {
                "MaxValue": "5",
                "MinValue": "0",
                "Name": "gamma",
            },
            {
                "MaxValue": "120",
                "MinValue": "0",
                "Name": "min_child_weight",
            },
            {
                "MaxValue": "1",
                "MinValue": "0.5",
                "Name": "subsample",
            },
            {
                "MaxValue": "2",
                "MinValue": "0",
                "Name": "alpha",
            },
        ],
        "IntegerParameterRanges": [
            {
                "MaxValue": "10",
                "MinValue": "0",
                "Name": "max_depth",
            },
            {
                "MaxValue": "4000",
                "MinValue": "1",
                "Name": "num_round",
            },
        ],
    },
    # SageMaker sets the following default limits for resources used by automatic model tuning:
    # https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning-limits.html
    "ResourceLimits": {
        # Increase the max number of training jobs for higher accuracy (at the cost of total tuning time).
        "MaxNumberOfTrainingJobs": 6,
        # Increase the number of training jobs AMT runs in parallel to reduce total tuning time,
        # subject to your account limits.
        # Note: if MaxParallelTrainingJobs equals MaxNumberOfTrainingJobs, Bayesian search
        # degenerates to random search.
        "MaxParallelTrainingJobs": 2,
    },
    "Strategy": "Bayesian",
    "HyperParameterTuningJobObjective": {"MetricName": "validation:rmse", "Type": "Minimize"},
}

training_job_definition = {
    "AlgorithmSpecification": {"TrainingImage": container, "TrainingInputMode": "File"},
    "InputDataConfig": [
        {
            "ChannelName": "train",
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": f"{bucket_path}/{prefix}/training",
                    "S3DataDistributionType": "FullyReplicated",
                }
            },
            "ContentType": "application/x-parquet",
            "CompressionType": "None",
        },
        {
            "ChannelName": "validation",
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": f"{bucket_path}/{prefix}/validation",
                    "S3DataDistributionType": "FullyReplicated",
                }
            },
            "ContentType": "application/x-parquet",
            "CompressionType": "None",
        },
    ],
    "OutputDataConfig": {"S3OutputPath": f"{bucket_path}/{prefix}/single-xgboost"},
    "ResourceConfig": {"InstanceCount": 1, "InstanceType": "ml.m5.2xlarge", "VolumeSizeInGB": 5},
    "RoleArn": role,
    "StaticHyperParameters": {
        "objective": "reg:linear",
        "verbosity": "2",
    },
    "StoppingCondition": {"MaxRuntimeInSeconds": 43200},
}

print(
    f"Creating a tuning job with name: {tuning_job_name}. It will take between 7 and 10 minutes to complete."
)
client.create_hyper_parameter_tuning_job(
    HyperParameterTuningJobName=tuning_job_name,
    HyperParameterTuningJobConfig=tuning_job_config,
    TrainingJobDefinition=training_job_definition,
)

status = client.describe_hyper_parameter_tuning_job(HyperParameterTuningJobName=tuning_job_name)[
    "HyperParameterTuningJobStatus"
]
print(status)
while status not in ("Completed", "Failed"):
    time.sleep(60)
    status = client.describe_hyper_parameter_tuning_job(
        HyperParameterTuningJobName=tuning_job_name
    )["HyperParameterTuningJobStatus"]
    print(status)
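
To inspect every trial the tuner ran, not just the best one, the SageMaker Python SDK provides HyperparameterTuningJobAnalytics (optional):

[ ]:
from sagemaker.analytics import HyperparameterTuningJobAnalytics

# One row per training job launched by the tuner, with its hyperparameter
# values and final objective value (validation:rmse, lower is better)
tuner_df = HyperparameterTuningJobAnalytics(tuning_job_name).dataframe()
tuner_df.sort_values("FinalObjectiveValue").head()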
Plotting Objective Metric

We first identify which training job to analyze: the best job from the tuner when AMT was used, otherwise the standalone training job.
[ ]:
if use_amt:
    training = client.describe_hyper_parameter_tuning_job(
        HyperParameterTuningJobName=tuning_job_name
    )["BestTrainingJob"]["TrainingJobName"]
else:
    training = training_job_name
[ ]:
%matplotlib inline
from sagemaker.analytics import TrainingJobAnalytics

metric_name = "validation:rmse"

metrics_dataframe = TrainingJobAnalytics(
    training_job_name=training, metric_names=[metric_name]
).dataframe()
# pandas .plot returns a matplotlib Axes object
ax = metrics_dataframe.plot(
    kind="line", figsize=(12, 5), x="timestamp", y="value", style="b.", legend=False
)
ax.set_ylabel(metric_name);
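
The final objective value is also available directly from DescribeTrainingJob, without pulling the CloudWatch time series:

[ ]:
# Print the final metric values recorded for the selected training job
for m in client.describe_training_job(TrainingJobName=training)["FinalMetricDataList"]:
    print(m["MetricName"], m["Value"])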
