Iris Training and Prediction with SageMaker Scikit-learn


This notebook’s CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook.

(CI badge for us-west-2)


This tutorial shows you how to use Scikit-learn with SageMaker by utilizing the pre-built container. Scikit-learn is a popular Python machine learning framework. It includes a number of different algorithms for classification, regression, clustering, dimensionality reduction, and data/feature pre-processing.

The sagemaker-python-sdk module makes it easy to run existing scikit-learn code on SageMaker, which we demonstrate by training a model on the Iris dataset and generating a set of predictions. For more information about the Scikit-learn container, see the sagemaker-scikit-learn-container repository and the sagemaker-python-sdk repository.

Runtime

This notebook takes approximately 15 minutes to run.

[ ]:
%pip install -U "sagemaker>=2.15"

First, let’s create our SageMaker session and role, and create an S3 prefix to use for the notebook example.

[ ]:
# S3 prefix
prefix = "DEMO-scikit-iris"

import sagemaker
from sagemaker import get_execution_role

sagemaker_session = sagemaker.Session()
region = sagemaker_session.boto_region_name
role = get_execution_role()

Upload the data for training

When training large models with huge amounts of data, you would typically use big data tools like Amazon Athena, AWS Glue, or Amazon EMR to process data stored in S3. For the purposes of this example, we use the classic Iris dataset. We load the dataset, write it locally, then upload it to S3.

[ ]:
import boto3
import numpy as np
import pandas as pd
import os

os.makedirs("./data", exist_ok=True)

s3_client = boto3.client("s3")
s3_client.download_file(
    f"sagemaker-example-files-prod-{region}", "datasets/tabular/iris/iris.data", "./data/iris.csv"
)

df_iris = pd.read_csv("./data/iris.csv", header=None)
df_iris[4] = df_iris[4].map({"Iris-setosa": 0, "Iris-versicolor": 1, "Iris-virginica": 2})
iris = df_iris[[4, 0, 1, 2, 3]].to_numpy()
np.savetxt("./data/iris.csv", iris, delimiter=",", fmt="%1.1f, %1.3f, %1.3f, %1.3f, %1.3f")

Once we have the data locally, we can use the tools provided by the SageMaker Python SDK to upload the data to a default bucket.

[ ]:
WORK_DIRECTORY = "data"

train_input = sagemaker_session.upload_data(
    WORK_DIRECTORY, key_prefix="{}/{}".format(prefix, WORK_DIRECTORY)
)

Create a Scikit-learn script for training

SageMaker can run a scikit-learn script using the SKLearn estimator. When the script runs on SageMaker, a number of helpful environment variables give access to properties of the training environment, such as:

  • SM_MODEL_DIR: A string representing the path to the directory to write model artifacts to. Any artifacts saved in this folder are uploaded to S3 for model hosting after the training job completes.

  • SM_OUTPUT_DIR: A string representing the file system path to write output artifacts to. Output artifacts may include checkpoints, graphs, and other files to save, not including model artifacts. These artifacts are compressed and uploaded to S3 to the same S3 prefix as the model artifacts.

Supposing two input channels, ‘train’ and ‘test’, were used in the call to the SKLearn estimator’s fit() method, the following environment variables are set, following the format SM_CHANNEL_[channel_name]:

  • SM_CHANNEL_TRAIN: A string representing the path to the directory containing data in the ‘train’ channel.

  • SM_CHANNEL_TEST: Same as above, but for the ‘test’ channel.
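For example, with the two channels above, a training script could resolve its input and output paths from the environment like this (a minimal sketch; the variable names are just illustrative):

import os

# Paths injected by SageMaker, based on the channel names passed to fit()
train_dir = os.environ["SM_CHANNEL_TRAIN"]  # data from the 'train' channel
test_dir = os.environ["SM_CHANNEL_TEST"]    # data from the 'test' channel
model_dir = os.environ["SM_MODEL_DIR"]      # write model artifacts here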

A typical training script loads data from the input channels, configures training with hyperparameters, trains a model, and saves a model to the model_dir so that it can be hosted later. Hyperparameters are passed to your script as arguments and can be retrieved with an argparse.ArgumentParser instance. For example, the script that we run in this notebook is below:

from __future__ import print_function

import argparse
import joblib
import os
import pandas as pd

from sklearn import tree


if __name__ == '__main__':
    parser = argparse.ArgumentParser()

    # Hyperparameters are described here. In this simple example we are just including one hyperparameter.
    parser.add_argument('--max_leaf_nodes', type=int, default=None)

    # Sagemaker specific arguments. Defaults are set in the environment variables.
    parser.add_argument('--output-data-dir', type=str, default=os.environ['SM_OUTPUT_DATA_DIR'])
    parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])
    parser.add_argument('--train', type=str, default=os.environ['SM_CHANNEL_TRAIN'])

    args = parser.parse_args()

    # Take the set of files and read them all into a single pandas dataframe
    input_files = [ os.path.join(args.train, file) for file in os.listdir(args.train) ]
    if len(input_files) == 0:
        raise ValueError(('There are no files in {}.\n' +
                          'This usually indicates that the channel ({}) was incorrectly specified,\n' +
                          'the data specification in S3 was incorrectly specified or the role specified\n' +
                          'does not have permission to access the data.').format(args.train, "train"))
    raw_data = [ pd.read_csv(file, header=None, engine="python") for file in input_files ]
    train_data = pd.concat(raw_data)

    # labels are in the first column
    train_y = train_data.iloc[:, 0]
    train_X = train_data.iloc[:, 1:]

    # Here we support a single hyperparameter, 'max_leaf_nodes'. Note that you can add as many
    # as your training may require in the ArgumentParser above.
    max_leaf_nodes = args.max_leaf_nodes

    # Now use scikit-learn's decision tree classifier to train the model.
    clf = tree.DecisionTreeClassifier(max_leaf_nodes=max_leaf_nodes)
    clf = clf.fit(train_X, train_y)

    # Save the trained classifier to the model directory so it can be hosted later
    joblib.dump(clf, os.path.join(args.model_dir, "model.joblib"))


def model_fn(model_dir):
    """Deserialized and return fitted model

    Note that this should have the same name as the serialized model in the main method
    """
    clf = joblib.load(os.path.join(model_dir, "model.joblib"))
    return clf

Because the Scikit-learn container imports your training script, you should always put your training code in a main guard (if __name__=='__main__':) so that the container does not inadvertently run your training code at the wrong point in execution.

For more information about training environment variables, please visit https://github.com/aws/sagemaker-containers.

Create a SageMaker SKLearn Estimator

To run our Scikit-learn training script on SageMaker, we construct a sagemaker.sklearn.estimator.SKLearn estimator, which accepts several constructor arguments:

  • entry_point: The path to the Python script SageMaker runs for training and prediction.

  • role: The IAM role ARN.

  • instance_type (optional): The type of SageMaker instances for training. Note: Because Scikit-learn does not natively support GPU training, SageMaker Scikit-learn does not currently support training on GPU instance types.

  • sagemaker_session (optional): The session used to train on SageMaker.

  • hyperparameters (optional): A dictionary of hyperparameters passed to the training script as command-line arguments.

To see the code for the SKLearn Estimator, see: https://github.com/aws/sagemaker-python-sdk/tree/master/src/sagemaker/sklearn

[ ]:
from sagemaker.sklearn.estimator import SKLearn

FRAMEWORK_VERSION = "1.2-1"
script_path = "scikit_learn_iris.py"

sklearn = SKLearn(
    entry_point=script_path,
    framework_version=FRAMEWORK_VERSION,
    instance_type="ml.c4.xlarge",
    role=role,
    sagemaker_session=sagemaker_session,
    hyperparameters={"max_leaf_nodes": 30},
)

Train SKLearn Estimator on Iris data

Training is straightforward: just call fit() on the Estimator! This starts a SageMaker training job that downloads the data, invokes our scikit-learn code (in the provided script file), and saves any model artifacts that the script creates.

[ ]:
sklearn.fit({"train": train_input})

Use the trained model to make inference requests

Deploy the model

Deploying the model to SageMaker hosting just requires a deploy() call on the fitted model. This call takes an instance count and instance type.

[ ]:
predictor = sklearn.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")

Choose some data and use it for a prediction

We extract some data we used for training and make predictions on it. This is not a recommended statistical practice, but it demonstrates how to run inference using the deployed endpoint.

[ ]:
import itertools
import pandas as pd

shape = pd.read_csv("data/iris.csv", header=None)

a = [50 * i for i in range(3)]
b = [40 + i for i in range(10)]
indices = [i + j for i, j in itertools.product(a, b)]

test_data = shape.iloc[indices[:-1]]
test_X = test_data.iloc[:, 1:]
test_y = test_data.iloc[:, 0]

To make a prediction, call predict() on the predictor returned from deploy(), passing the data to predict on. The endpoint returns a numerical representation of the classification prediction; in the original dataset, these are three flower category names, but in this example the labels are numerical. We can compare them against the original labels that we parsed.

[ ]:
print(predictor.predict(test_X.values))
print(test_y.values)
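As an optional sanity check, we can quantify how often the endpoint’s predictions match the parsed labels (a minimal sketch, assuming predict() returns an array-like of class codes):

import numpy as np

# Fraction of endpoint predictions that agree with the held-out labels
predictions = np.array(predictor.predict(test_X.values))
print(f"Matching labels: {np.mean(predictions == test_y.values):.2%}")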

Endpoint cleanup

When you’re done with the endpoint, delete it to release the resources and avoid incurring additional cost.

[ ]:
predictor.delete_endpoint()

Batch Transform

We can also use the trained model for asynchronous batch inference on S3 data using SageMaker Batch Transform.

[ ]:
# Define an SKLearn Transformer from the trained SKLearn Estimator
transformer = sklearn.transformer(instance_count=1, instance_type="ml.m5.xlarge")

Prepare Input Data

We extract 10 random samples of 100 rows from the training data, split the features (X) from the labels (Y), and upload the input data to a given location in S3.

[ ]:
%%bash
# Randomly sample the iris dataset 10 times, then split X and Y
mkdir -p batch_data/XY batch_data/X batch_data/Y
for i in {0..9}; do
    cat data/iris.csv | shuf -n 100 > batch_data/XY/iris_sample_${i}.csv
    cat batch_data/XY/iris_sample_${i}.csv | cut -d',' -f2- > batch_data/X/iris_sample_X_${i}.csv
    cat batch_data/XY/iris_sample_${i}.csv | cut -d',' -f1 > batch_data/Y/iris_sample_Y_${i}.csv
done
[ ]:
# Upload input data from local file system to S3
batch_input_s3 = sagemaker_session.upload_data("batch_data/X", key_prefix=prefix + "/batch_input")

Run Transform Job

Using the Transformer, run a transform job on the S3 input data.

[ ]:
# Start a transform job and wait for it to finish
transformer.transform(batch_input_s3, content_type="text/csv")
print("Waiting for transform job: " + transformer.latest_transform_job.job_name)
transformer.wait()

Check Output Data

After the transform job has completed, download the output data from S3. For each file “f” in the input data, we have a corresponding file “f.out” containing the predicted labels from each input row. We can compare the predicted labels to the true labels saved earlier.

[ ]:
# Download the output data from S3 to local file system
batch_output = transformer.output_path
!mkdir -p batch_data/output
!aws s3 cp --recursive $batch_output/ batch_data/output/
# Head to see what the batch output looks like
!head batch_data/output/*
[ ]:
%%bash
# For each sample file, compare the predicted labels from batch output to the true labels
for i in {0..9}; do
    diff -s batch_data/Y/iris_sample_Y_${i}.csv \
        <(cat batch_data/output/iris_sample_X_${i}.csv.out | sed 's/[["]//g' | sed 's/, \|]/\n/g') \
        | sed "s/\/dev\/fd\/63/batch_data\/output\/iris_sample_X_${i}.csv.out/"
done
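If you prefer to do this comparison in Python, a rough equivalent is sketched below (assuming each .out file contains a bracketed, comma-separated list of predictions, possibly quoted, as the sed pipeline above implies):

import numpy as np

# Compare each batch output file against the corresponding true labels
for i in range(10):
    with open(f"batch_data/output/iris_sample_X_{i}.csv.out") as f:
        raw = f.read().strip().strip("[]")
    predicted = np.array([float(v.strip().strip('"')) for v in raw.split(",")])
    actual = np.loadtxt(f"batch_data/Y/iris_sample_Y_{i}.csv")
    print(f"Sample {i}: {np.mean(predicted == actual):.2%} of labels match")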

Notebook CI Test Results

This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.

(CI badges for us-east-1, us-east-2, us-west-1, ca-central-1, sa-east-1, eu-west-1, eu-west-2, eu-west-3, eu-central-1, eu-north-1, ap-southeast-1, ap-southeast-2, ap-northeast-1, ap-northeast-2, and ap-south-1)