Compile and Train a Binary Classification Trainer Model with the SST2 Dataset for Single-Node Single-GPU Training


This notebook’s CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook.

(CI badge: us-west-2)


  1. Introduction

  2. Development Environment and Permissions

    1. Installation

    2. SageMaker environment

  3. Processing

    1. Tokenization

    2. Uploading data to sagemaker_session_bucket

  4. SageMaker Training Job

    1. Training with Native PyTorch

    2. Training with Optimized PyTorch

    3. Analysis

SageMaker Training Compiler Overview

SageMaker Training Compiler is a capability of SageMaker that applies hard-to-implement optimizations to reduce training time on GPU instances. The compiler optimizes DL models to accelerate training by more efficiently using SageMaker machine learning (ML) GPU instances. SageMaker Training Compiler is available at no additional charge within SageMaker and can help reduce total billable time as it accelerates training.

SageMaker Training Compiler is integrated into the AWS Deep Learning Containers (DLCs). Using the SageMaker Training Compiler-enabled AWS DLCs, you can compile and optimize training jobs on GPU instances with minimal changes to your code. Bring your deep learning models to SageMaker and enable SageMaker Training Compiler to accelerate your training job on SageMaker ML instances for accelerated computing.

For more information, see SageMaker Training Compiler in the Amazon SageMaker Developer Guide.

Introduction

In this demo, you’ll use Hugging Face’s transformers and datasets libraries with Amazon SageMaker Training Compiler to train the RoBERTa model on the Stanford Sentiment Treebank v2 (SST2) dataset. To get started, we need to set up the environment with a few prerequisite steps, for permissions, configurations, and so on.

NOTE: You can run this demo in SageMaker Studio, on a SageMaker notebook instance, or on your local machine with the AWS CLI set up. If using SageMaker Studio or a SageMaker notebook instance, make sure you choose one of the PyTorch-based kernels: Python 3 (PyTorch x.y Python 3.x CPU Optimized) in Studio or conda_pytorch_p36 on a notebook instance.

NOTE: This notebook uses two ml.p3.2xlarge instances, each of which has a single GPU. If you don’t have enough quota, see Request a service quota increase for SageMaker resources.

Development Environment and Permissions

Installation

This example notebook requires SageMaker Python SDK v2.133.0 or later, along with the Hugging Face transformers and datasets libraries.

[ ]:
!pip install botocore boto3 awscli s3fs typing-extensions "torch==1.12.0" "fsspec<=2022.7.1" "sagemaker>=2.133.0" --upgrade
[ ]:
!pip install transformers "datasets[s3]==2.5.2" --upgrade
[ ]:
import botocore
import boto3
import sagemaker
import transformers
import pandas as pd

print(f"sagemaker: {sagemaker.__version__}")
print(f"transformers: {transformers.__version__}")

Copy and run the following code if you need to upgrade ipywidgets for the datasets library and restart the kernel. This is only needed when preprocessing is done in the notebook.

%%capture
import IPython
!conda install -c conda-forge ipywidgets -y
# has to restart kernel for the updates to be applied
IPython.Application.instance().kernel.do_shutdown(True)

SageMaker environment

[ ]:
import sagemaker

sess = sagemaker.Session()

# SageMaker session bucket -> used for uploading data, models and logs
# SageMaker will automatically create this bucket if it does not exist
sagemaker_session_bucket = None
if sagemaker_session_bucket is None and sess is not None:
    # set to default bucket if a bucket name is not given
    sagemaker_session_bucket = sess.default_bucket()

role = sagemaker.get_execution_role()
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)

print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")

Loading the SST2 dataset

When using the 🤗 Datasets library, datasets can be downloaded directly with the datasets.load_dataset() method:

from datasets import load_dataset
load_dataset('dataset_name')

If you’d like to try other training datasets later, you can simply use this method.
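For instance, SST2 itself is also published on the Hugging Face Hub as part of the GLUE benchmark. The following sketch shows an alternative way it could be loaded; this is not the path this notebook takes, and note that the GLUE copy uses a sentence/label schema rather than the raw text files prepared from S3 below.

from datasets import load_dataset

# SST2 as distributed with the GLUE benchmark; columns are "sentence", "label", and "idx"
sst2 = load_dataset("glue", "sst2")
print(sst2)
print(sst2["train"][0])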

For this example notebook, we prepared the SST2 dataset in the public SageMaker sample files S3 bucket. The following code cells show how you can directly load the dataset and convert it to Hugging Face Dataset objects.

Preprocessing

We download the SST2 dataset from the public S3 bucket s3://sagemaker-sample-files and preprocess it. After preprocessing, we upload the dataset to the sagemaker_session_bucket, which will be used as a data channel for the training job.

Tokenization

[ ]:
from datasets import Dataset
from transformers import AutoTokenizer
import pandas as pd

# tokenizer used in preprocessing
tokenizer_name = "roberta-base"

# s3 key prefix for the data
s3_prefix = "samples/datasets/sst2"

# Download the SST2 data from s3
!curl https://sagemaker-sample-files.s3.amazonaws.com/datasets/text/SST2/sst2.test > ./sst2.test
!curl https://sagemaker-sample-files.s3.amazonaws.com/datasets/text/SST2/sst2.train > ./sst2.train
!curl https://sagemaker-sample-files.s3.amazonaws.com/datasets/text/SST2/sst2.val > ./sst2.val
[ ]:
# download tokenizer
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)


# tokenizer helper function
def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True)


# load dataset
test_df = pd.read_csv("sst2.test", sep="delimiter", header=None, engine="python", names=["line"])
train_df = pd.read_csv("sst2.train", sep="delimiter", header=None, engine="python", names=["line"])

test_df[["label", "text"]] = test_df["line"].str.split(" ", 1, expand=True)
train_df[["label", "text"]] = train_df["line"].str.split(" ", 1, expand=True)

test_df.drop("line", axis=1, inplace=True)
train_df.drop("line", axis=1, inplace=True)

test_df["label"] = pd.to_numeric(test_df["label"], downcast="integer")
train_df["label"] = pd.to_numeric(train_df["label"], downcast="integer")

train_dataset = Dataset.from_pandas(train_df)
test_dataset = Dataset.from_pandas(test_df)

# tokenize dataset
train_dataset = train_dataset.map(tokenize, batched=True)
test_dataset = test_dataset.map(tokenize, batched=True)

# set format for pytorch
train_dataset = train_dataset.rename_column("label", "labels")
train_dataset.set_format("torch", columns=["input_ids", "attention_mask", "labels"])

test_dataset = test_dataset.rename_column("label", "labels")
test_dataset.set_format("torch", columns=["input_ids", "attention_mask", "labels"])

Uploading data to sagemaker_session_bucket

We use the 🤗 Datasets S3 filesystem integration to upload our preprocessed dataset to S3.

[ ]:
import botocore
from datasets.filesystems import S3FileSystem

s3 = S3FileSystem()

# save train_dataset to s3
training_input_path = f"s3://{sess.default_bucket()}/{s3_prefix}/train"
train_dataset.save_to_disk(training_input_path, fs=s3)

# save test_dataset to s3
test_input_path = f"s3://{sess.default_bucket()}/{s3_prefix}/test"
test_dataset.save_to_disk(test_input_path, fs=s3)

SageMaker Training Job

To create a SageMaker training job, we use a PyTorch estimator. Using the estimator, you can define which fine-tuning script SageMaker should use through entry_point, which instance_type to use for training, which hyperparameters to pass, and so on.

When a SageMaker training job starts, SageMaker takes care of starting and managing all the required machine learning instances, picks up the PyTorch Deep Learning Container, uploads your training script, and downloads the data from sagemaker_session_bucket into the container at /opt/ml/input/data.
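As a rough sketch of what this looks like from inside the container, an entry-point script can locate each channel through the SM_CHANNEL_* environment variables that SageMaker sets. The actual argument handling in fine_tune_with_huggingface.py may differ; the fallback paths below are illustrative only.

import os
from datasets import load_from_disk

# SageMaker maps each input channel ("train", "test") to /opt/ml/input/data/<channel>
# and exposes the path through an SM_CHANNEL_<NAME> environment variable.
train_path = os.environ.get("SM_CHANNEL_TRAIN", "/opt/ml/input/data/train")
test_path = os.environ.get("SM_CHANNEL_TEST", "/opt/ml/input/data/test")

train_dataset = load_from_disk(train_path)
test_dataset = load_from_disk(test_path)
print(train_dataset)
print(test_dataset)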

In the following section, you learn how to set up two versions of the SageMaker PyTorch estimator, a native one without the compiler and an optimized one with the compiler.

Training Setup

Set up an option for fine-tuning or full training. FINE_TUNING = 1 is for fine-tuning, and it will use fine_tune_with_huggingface.py. FINE_TUNING = 0 is for full training, and it will use full_train_roberta_with_huggingface.py.

[ ]:
# Here we configure the training job. Please configure the appropriate options below:

# Fine tuning trains a pre-trained model on a different dataset whereas full training trains the model from scratch.
FINE_TUNING = 1
FULL_TRAINING = not FINE_TUNING

# Fine tuning is typically faster and is done for fewer epochs
EPOCHS = 4 if FINE_TUNING else 100

TRAINING_SCRIPT = (
    "fine_tune_with_huggingface.py" if FINE_TUNING else "full_train_roberta_with_huggingface.py"
)

# SageMaker Training Compiler currently only supports training on GPU
# Select Instance type for training
INSTANCE_TYPE = "ml.p3.2xlarge"

Training with Native PyTorch

The train_batch_size in the following code cell is the maximum batch size that can fit into the memory of an ml.p3.2xlarge instance. If you change the model, instance type, sequence length, or other parameters, you need to experiment to find the largest batch size that will fit into GPU memory.
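One way to run that experiment is sketched below. This is not part of the notebook’s training scripts; it assumes you probe on a GPU matching the target instance (ml.p3.2xlarge has a single V100 with 16 GB) and uses the same roberta-base model and max_length padding as this example.

import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("roberta-base").cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
seq_len = 512  # matches padding="max_length" for roberta-base

batch_size, largest_fit = 8, None
while batch_size <= 128:
    try:
        # synthetic batch with the same shapes a real training step would use
        input_ids = torch.randint(0, model.config.vocab_size, (batch_size, seq_len), device="cuda")
        attention_mask = torch.ones_like(input_ids)
        labels = torch.randint(0, 2, (batch_size,), device="cuda")
        loss = model(input_ids=input_ids, attention_mask=attention_mask, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        largest_fit = batch_size
        print(f"batch size {batch_size} fits")
        batch_size += 4
    except RuntimeError:  # CUDA out-of-memory is raised as a RuntimeError subclass
        print(f"batch size {batch_size} failed; largest that fit: {largest_fit}")
        break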

[ ]:
from sagemaker.pytorch import PyTorch

# hyperparameters, which are passed into the training job
hyperparameters = {"epochs": EPOCHS, "train_batch_size": 24, "model_name": "roberta-base"}
# The original LR was set for a batch of 32. Here we are scaling learning rate with batch size.
hyperparameters["learning_rate"] = float("5e-5") / 32 * hyperparameters["train_batch_size"]

# If checkpointing is enabled with higher epoch numbers, your disk requirements will increase as well
volume_size = 60 + 2 * hyperparameters["epochs"]
[ ]:
# configure the training job
native_estimator = PyTorch(
    entry_point=TRAINING_SCRIPT,
    source_dir="./scripts",
    instance_type=INSTANCE_TYPE,
    instance_count=1,
    role=role,
    py_version="py39",
    framework_version="1.13.1",
    volume_size=volume_size,
    hyperparameters=hyperparameters,
    disable_profiler=True,
    debugger_hook_config=False,
)

# start training with our uploaded datasets as input
native_estimator.fit({"train": training_input_path, "test": test_input_path}, wait=False)

# The name of the training job.
native_estimator.latest_training_job.name

Training with Optimized PyTorch

Compilation through Training Compiler changes the memory footprint of the model. Most commonly, this manifests as a reduction in memory utilization and a consequent increase in the largest batch size that can fit on the GPU. Note that if you want to change the batch size, you must adjust the learning rate appropriately.

Note: We recommend turning off SageMaker Debugger’s profiling and debugging tools when you use compilation, to avoid additional overhead.

[ ]:
# With SageMaker Training Compiler enabled we are able to fit a larger batch into memory.
hyperparameters["train_batch_size"] = 36
# The original LR was set for a batch of 32. Here we are scaling learning rate with batch size.
hyperparameters["learning_rate"] = float("5e-5") / 32 * hyperparameters["train_batch_size"]

# If checkpointing is enabled with higher epoch numbers, your disk requirements will increase as well
volume_size = 60 + 2 * hyperparameters["epochs"]
[ ]:
from sagemaker.pytorch import PyTorch, TrainingCompilerConfig

# configure the training job
optimized_estimator = PyTorch(
    entry_point=TRAINING_SCRIPT,
    compiler_config=TrainingCompilerConfig(),
    source_dir="./scripts",
    instance_type=INSTANCE_TYPE,
    instance_count=1,
    role=role,
    py_version="py39",
    framework_version="1.13.1",
    volume_size=volume_size,
    hyperparameters=hyperparameters,
    disable_profiler=True,
    debugger_hook_config=False,
)

# start training with our uploaded datasets as input
optimized_estimator.fit({"train": training_input_path, "test": test_input_path}, wait=False)

# The name of the training job
optimized_estimator.latest_training_job.name

Wait for training jobs to complete

[ ]:
waiter = native_estimator.sagemaker_session.sagemaker_client.get_waiter(
    "training_job_completed_or_stopped"
)
waiter.wait(TrainingJobName=native_estimator.latest_training_job.name)
waiter = optimized_estimator.sagemaker_session.sagemaker_client.get_waiter(
    "training_job_completed_or_stopped"
)
waiter.wait(TrainingJobName=optimized_estimator.latest_training_job.name)

Analysis

Load information and logs of the training job without SageMaker Training Compiler

[ ]:
# container image used for native training job
print(f"container image used for training job: \n{native_estimator.image_uri}\n")

# s3 uri where the native trained model is located
print(f"s3 uri where the trained model is located: \n{native_estimator.model_data}\n")

# latest training job name for this estimator
print(
    f"latest training job name for this estimator: \n{native_estimator.latest_training_job.name}\n"
)
[ ]:
%%capture native

# access the logs of the native training job
native_estimator.sagemaker_session.logs_for_job(native_estimator.latest_training_job.name)

Note: If the estimator object is no longer available due to a kernel break or refresh, you need to directly use the training job name and manually attach the training job to a new PyTorch estimator. For example:

native_estimator = PyTorch.attach("your_huggingface_training_job_name")

Load information and logs of the training job with SageMaker Training Compiler

[ ]:
# container image used for optimized training job
print(f"container image used for training job: \n{optimized_estimator.image_uri}\n")

# s3 uri where the optimized trained model is located
print(f"s3 uri where the trained model is located: \n{optimized_estimator.model_data}\n")

# latest training job name for this estimator
print(
    f"latest training job name for this estimator: \n{optimized_estimator.latest_training_job.name}\n"
)
[ ]:
%%capture optimized

# access the logs of the optimized training job
optimized_estimator.sagemaker_session.logs_for_job(optimized_estimator.latest_training_job.name)

Note: If the estimator object is no longer available due to a kernel break or refresh, you need to directly use the training job name and manually attach the training job to a new PyTorch estimator. For example:

optimized_estimator = PyTorch.attach("your_compiled_huggingface_training_job_name")

Create helper functions for analysis

[ ]:
from ast import literal_eval
from collections import defaultdict
from matplotlib import pyplot as plt


def _summarize(captured):
    final = []
    for line in captured.stdout.split("\n"):
        cleaned = line.strip()
        if "{" in cleaned and "}" in cleaned:
            final.append(cleaned[cleaned.index("{") : cleaned.index("}") + 1])
    return final


def make_sense(string):
    try:
        return literal_eval(string)
    except Exception:  # line is not a Python literal
        return None


def summarize(summary):
    final = {"train": [], "eval": [], "summary": {}}
    for line in summary:
        interpretation = make_sense(line)
        if interpretation:
            if "loss" in interpretation:
                final["train"].append(interpretation)
            elif "eval_loss" in interpretation:
                final["eval"].append(interpretation)
            elif "train_runtime" in interpretation:
                final["summary"].update(interpretation)
    return final

Plot and compare throughput of compiled training and native training

Visualize average throughput as reported by HuggingFace and see potential savings.

[ ]:
# collect the average throughput as reported by HF for the native training job
n = summarize(_summarize(native))
native_throughput = n["summary"]["train_samples_per_second"]

# collect the average throughput as reported by HF for the SageMaker Training Compiler enhanced training job
o = summarize(_summarize(optimized))
optimized_throughput = o["summary"]["train_samples_per_second"]

# Calculate speedup from SageMaker Training Compiler
avg_speedup = f"{round((optimized_throughput/native_throughput-1)*100)}%"
[ ]:
%matplotlib inline

plt.title("Training Throughput \n (Higher is better)")
plt.ylabel("Samples/sec")

plt.bar(x=[1], height=native_throughput, label="Baseline PT", width=0.35)
plt.bar(x=[1.5], height=optimized_throughput, label="Compiler-enhanced PT", width=0.35)

plt.xlabel("  ====> {} Compiler savings <====".format(avg_speedup))
plt.xticks(ticks=[1, 1.5], labels=["Baseline PT", "Compiler-enhanced PT"])

Convergence of Training Loss

SageMaker Training Compiler does not affect the model convergence behavior. Here, we see that the decrease in training loss is similar with and without SageMaker Training Compiler.

[ ]:
vanilla_loss = [i["loss"] for i in n["train"]]
vanilla_epochs = [i["epoch"] for i in n["train"]]
optimized_loss = [i["loss"] for i in o["train"]]
optimized_epochs = [i["epoch"] for i in o["train"]]

plt.title("Plot of Training Loss")
plt.xlabel("Epoch")
plt.ylabel("Training Loss")
plt.plot(vanilla_epochs, vanilla_loss, label="Baseline PT")
plt.plot(optimized_epochs, optimized_loss, label="Compiler-enhanced PT")
plt.legend()

Evaluation Stats

SageMaker Training Compiler does not affect the quality of the model. Here, we compare the evaluation metrics of the models trained with and without SageMaker Training Compiler to verify this.

[ ]:
import pandas as pd

table = pd.DataFrame([n["eval"][-1], o["eval"][-1]], index=["Baseline PT", "Compiler-enhanced PT"])
table.drop(columns=["eval_runtime", "eval_samples_per_second", "epoch"])

Training Stats

Let’s compare various training metrics with and without SageMaker Training Compiler. SageMaker Training Compiler provides an increase in training throughput, which translates to a decrease in total training time.

[ ]:
pd.DataFrame([n["summary"], o["summary"]], index=["Native", "Optimized"])
[ ]:
# calculate percentage speedup from SageMaker Training Compiler in terms of total training time reported by HF

speedup = (
    (n["summary"]["train_runtime"] - o["summary"]["train_runtime"])
    * 100
    / n["summary"]["train_runtime"]
)
print(
    f"SageMaker Training Compiler integrated PyTorch is about {int(speedup)}% faster in terms of total training time as reported by HF."
)

Total Billable Time

Finally, the decrease in total training time results in a decrease in the billable seconds from SageMaker.

[ ]:
def BillableTimeInSeconds(name):
    describe_training_job = (
        optimized_estimator.sagemaker_session.sagemaker_client.describe_training_job
    )
    details = describe_training_job(TrainingJobName=name)
    return details["BillableTimeInSeconds"]
[ ]:
Billable = {}
Billable["Native"] = BillableTimeInSeconds(native_estimator.latest_training_job.name)
Billable["Optimized"] = BillableTimeInSeconds(optimized_estimator.latest_training_job.name)
pd.DataFrame(Billable, index=["BillableSecs"])
[ ]:
speedup = (Billable["Native"] - Billable["Optimized"]) * 100 / Billable["Native"]
print(f"SageMaker Training Compiler integrated PyTorch was {int(speedup)}% faster in summary.")

Clean up

Stop all launched training jobs if they are still running.

[ ]:
import boto3

sm = boto3.client("sagemaker")


def stop_training_job(name):
    status = sm.describe_training_job(TrainingJobName=name)["TrainingJobStatus"]
    if status == "InProgress":
        sm.stop_training_job(TrainingJobName=name)


stop_training_job(native_estimator.latest_training_job.name)
stop_training_job(optimized_estimator.latest_training_job.name)

Also, to find instructions on cleaning up resources, see Clean Up in the Amazon SageMaker Developer Guide.

Notebook CI Test Results

This notebook was tested in multiple regions. The test results are as follows, except for us-west-2, which is shown at the top of the notebook.

(CI badges: us-east-1, us-east-2, us-west-1, ca-central-1, sa-east-1, eu-west-1, eu-west-2, eu-west-3, eu-central-1, eu-north-1, ap-southeast-1, ap-southeast-2, ap-northeast-1, ap-northeast-2, ap-south-1)