Use Amazon SageMaker Distributed Model Parallel to Launch a BERT Training Job with Model Parallelization

SageMaker distributed model parallel (SMP) is a model parallelism library for training large deep learning models that were previously difficult to train due to GPU memory limitations. SMP automatically and efficiently splits a model across multiple GPUs and instances and coordinates model training, allowing you to increase prediction accuracy by creating larger models with more parameters.

Use this notebook to configure SMP to train a model using PyTorch (version 1.6.0) and the Amazon SageMaker Python SDK.

In this notebook, you will use a BERT example training script with SMP. The example script is based on Nvidia Deep Learning Examples and requires you to download the datasets and upload them to Amazon Simple Storage Service (Amazon S3) as explained in the instructions below. This is a large dataset, so depending on your connection speed, this process can take hours to complete.

This notebook depends on the following files. You can find all files in the bert directory in the model parallel section of the Amazon SageMaker Examples notebooks repo.

  • bert_example/sagemaker_smp_pretrain.py: This is the entry point script that is passed to the PyTorch estimator in the notebook instructions. This script is responsible for end-to-end training of the BERT model with SMP. The script has additional comments at places where the SMP API is used; a minimal sketch of that pattern follows this list.

  • bert_example/modeling.py: This contains the model definition for the BERT model.

  • bert_example/bert_config.json: This allows for additional configuration of the model and is used by modeling.py. Additional configuration includes dropout probabilities, pooler and encoder sizes, the number of hidden layers in the encoder, the size of the intermediate layers in the encoder, and so on.

  • bert_example/schedulers.py: This contains definitions for the learning rate schedulers used in end-to-end training of the BERT model (bert_example/sagemaker_smp_pretrain.py).

  • bert_example/utils.py: This contains helper utility functions used in end-to-end training of the BERT model (bert_example/sagemaker_smp_pretrain.py).

  • bert_example/file_utils.py: This contains file utility functions used in the model definition (bert_example/modeling.py).
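
For orientation, the entry point script follows the standard SMP (v1) PyTorch pattern sketched below. This is a minimal sketch with a toy model and random data, intended to run inside a training script launched with SMP enabled (not in this notebook); the BERT example applies the same calls to the real model.

import torch
import torch.nn as nn
import smdistributed.modelparallel.torch as smp

smp.init()  # reads the "parameters" dict passed through the estimator's distribution config
torch.cuda.set_device(smp.local_rank())

# Toy model for illustration only; the real script wraps the BERT model instead.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
model = smp.DistributedModel(model)            # partitions the model; SMP places each partition on its GPU
optimizer = smp.DistributedOptimizer(optimizer)
loss_fn = nn.CrossEntropyLoss()

@smp.step  # splits each call into microbatches and pipelines them across partitions
def train_step(inputs, labels):
    loss = loss_fn(model(inputs), labels)
    model.backward(loss)                       # SMP requires model.backward instead of loss.backward
    return loss

inputs = torch.randn(48, 128).cuda()
labels = torch.randint(0, 2, (48,)).cuda()
optimizer.zero_grad()
loss_mb = train_step(inputs, labels)
optimizer.step()
print(loss_mb.reduce_mean().item())            # average the loss over microbatches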

Additional Resources

If you are a new user of Amazon SageMaker, you may find the following helpful to learn more about SMP and using SageMaker with PyTorch.

Prerequisites

  1. You must create an S3 bucket to store the input data used for training. This bucket must be located in the same AWS Region that you use to launch your training job, which is the AWS Region you use to run this notebook. To learn how, see Creating a bucket in the Amazon S3 documentation.

  2. You must download the dataset that you use for training from Nvidia Deep Learning Examples and upload it to the S3 bucket you created. To learn more about the datasets and the scripts provided to download and preprocess them, see Getting the data in the Nvidia Deep Learning Examples repo README. You can also use the Quick Start Guide to learn how to download the dataset. The repository provides three datasets. Optionally, you can use the wiki_only parameter to download only the Wikipedia dataset. A sketch of one way to upload the preprocessed data follows this list.
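
For reference, the following is one way to upload the preprocessed data with boto3. It is a minimal sketch, assuming the preprocessed shards are in a local directory named ./hdf5_data and using your-bucket and training as placeholders for the bucket and prefix from step 1; run it (or an equivalent aws s3 cp --recursive command) wherever the preprocessed data lives.

import os
import boto3

s3 = boto3.client("s3")
local_dir = "./hdf5_data"   # placeholder: local directory holding the preprocessed shards
bucket = "your-bucket"      # placeholder: the bucket you created in step 1
prefix = "training"         # placeholder: any prefix you choose

# Walk the local directory and mirror its layout under s3://bucket/prefix/
for root, _, files in os.walk(local_dir):
    for name in files:
        local_path = os.path.join(root, name)
        key = f"{prefix}/{os.path.relpath(local_path, local_dir)}"
        s3.upload_file(local_path, bucket, key)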

Amazon SageMaker Initialization

Upgrade the SageMaker Python SDK to the latest version. NOTE: This step may require a kernel restart.

[ ]:
import sagemaker

original_version = sagemaker.__version__
%pip install --upgrade sagemaker

Initialize the notebook instance. Get the AWS Region and the SageMaker execution role Amazon Resource Name (ARN).

[ ]:
%%time
import sagemaker
from sagemaker import get_execution_role
from sagemaker.estimator import Estimator
from sagemaker.pytorch import PyTorch
import boto3
import os

role = (
    get_execution_role()
)  # provide a pre-existing role ARN as an alternative to creating a new role
print(f"SageMaker Execution Role:{role}")

client = boto3.client("sts")
account = client.get_caller_identity()["Account"]
print(f"AWS account:{account}")

session = boto3.session.Session()
region = session.region_name
print(f"AWS region:{region}")
sagemaker_session = sagemaker.session.Session(boto_session=session)
import sys

print(sys.path)

# get default bucket
default_bucket = sagemaker_session.default_bucket()
print()
print("Default bucket for this session: ", default_bucket)

Prepare/Identify your Training Data in Amazon S3

If you don’t already have the BERT dataset in an S3 bucket, please see the instructions in the Nvidia BERT Example to download the dataset and upload it to an S3 bucket. See the prerequisites at the beginning of this notebook for more information.

Replace the instances of None below to set the S3 bucket and prefix of your preprocessed data. For example, if your training data is in s3://your-bucket/training, enter 'your-bucket' for s3_bucket and 'training' for prefix. Note that your output data will be stored in the same bucket, under the output/ prefix.

If you proceed with None values for both s3_bucket and prefix, then the program downloads some mock data from a public S3 bucket sagemaker-sample-files and uploads it to your default bucket. This is intended for CI.

[ ]:
s3_bucket = None  # Replace None with your bucket name
prefix = None  # Replace None with the prefix of your data

# For CI
if s3_bucket is None:
    # Download some mock data from a public bucket in us-east-1
    s3 = boto3.resource("s3")
    bucket_name = "sagemaker-sample-files"
    # Phase 1 pretraining
    prefix = "datasets/binary/bert/hdf5_lower_case_1_seq_len_128_max_pred_20_masked_lm_prob_0.15_random_seed_12345_dupe_factor_5/wikicorpus_en_abstract"

    local_dir = "/tmp/data"
    bucket = s3.Bucket(bucket_name)

    for obj in bucket.objects.filter(Prefix=prefix):
        target = os.path.join(local_dir, obj.key)
        if not os.path.exists(os.path.dirname(target)):
            os.makedirs(os.path.dirname(target))
        bucket.download_file(obj.key, target)

    # upload to default bucket
    mock_data = sagemaker_session.upload_data(
        path=os.path.join(local_dir, prefix),
        bucket=sagemaker_session.default_bucket(),
        key_prefix=prefix,
    )

    data_channels = {"train": mock_data}
else:
    s3train = f"s3://{s3_bucket}/{prefix}"
    train = sagemaker.session.TrainingInput(
        s3train, distribution="FullyReplicated", s3_data_type="S3Prefix"
    )
    data_channels = {"train": train}
[ ]:
print(data_channels)

Set your output data path. This is where model artifacts are stored.

[ ]:
s3_output_location = f"s3://{default_bucket}/output/bert"
print(f"your output data will be stored in: s3://{default_bucket}/output/bert")

Define SageMaker Training Job

Next, you will use the SageMaker Estimator API to define a SageMaker training job. You will use a `PyTorchEstimator <https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/sagemaker.pytorch.html>`__ to define the number and type of EC2 instances Amazon SageMaker uses for training, as well as the size of the volume attached to those instances.

You must update the following:

  • instance_count
  • instance_type
  • volume_size

See the following sub-sections for more details.

Update the Type and Number of EC2 Instances Used

The instance type and the number of instances you specify in instance_type and instance_count, respectively, determine the number of GPUs Amazon SageMaker uses during training: instance_type determines the number of GPUs on a single instance, and that number is multiplied by instance_count.

You must specify values for instance_type and instance_count so that the total number of GPUs available for training is equal to the partitions value in the config passed to smp.init in your training script.

If you set ddp to True, you must instead ensure that the total number of GPUs available is divisible by partitions. The result of the division is inferred to be the number of model replicas used for Horovod (the data parallelism degree).

See Amazon SageMaker Pricing for SageMaker supported instances and cost information. To look up the number of GPUs for each instance type, see Amazon EC2 Instance Types. Use the Accelerated Computing section to see the general purpose GPU instances. Note that an ml.p3.2xlarge has the same number of GPUs as a p3.2xlarge.
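
As a quick sanity check, the following cell works through the arithmetic for the configuration used later in this notebook: one ml.p3.16xlarge instance has 8 GPUs, and with partitions set to 2 and ddp enabled, SMP infers 8 / 2 = 4 model replicas for data parallelism. The variable names here are illustrative.

[ ]:
# Worked example matching the estimator configuration used later in this notebook.
gpus_per_instance = 8  # an ml.p3.16xlarge has 8 GPUs
num_instances = 1      # instance_count
num_partitions = 2     # "partitions" in the SMP parameters dictionary

total_gpus = gpus_per_instance * num_instances
data_parallel_degree = total_gpus // num_partitions  # model replicas when ddp is True
print(f"total GPUs: {total_gpus}, model replicas: {data_parallel_degree}")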

Update your Volume Size

The volume size you specify in volume_size must be larger than your input data size.

Set your parameters dictionary for SMP and set custom MPI options

With the parameters dictionary you can configure the number of microbatches, the number of partitions, whether to use data parallelism with ddp, the pipelining strategy, the placement strategy, and other SMP options. BERT-specific hyperparameters are set in the separate hyperparameters dictionary in the same cell.

[ ]:
mpi_options = "-verbose --mca orte_base_help_aggregate 0 "
smp_parameters = {
    "optimize": "speed",
    "microbatches": 12,
    "partitions": 2,
    "ddp": True,
    "pipeline": "interleaved",
    "overlapping_allreduce": True,
    "placement_strategy": "cluster",
    "memory_weight": 0.3,
}
timeout = 60 * 60
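# Note: the Regex below is the placeholder shipped with this example and will not
# match anything; replace it with a pattern that matches a metric line printed by
# your training script if you want SageMaker to capture training metrics.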
metric_definitions = [{"Name": "base_metric", "Regex": "<><><><><><>"}]

hyperparameters = {
    "input_dir": "/opt/ml/input/data/train",
    "output_dir": "./checkpoints",
    "config_file": "bert_config.json",
    "bert_model": "bert-large-uncased",
    "train_batch_size": 48,
    "max_seq_length": 128,
    "max_predictions_per_seq": 20,
    "max_steps": 7038,
    "warmup_proportion": 0.2843,
    "num_steps_per_checkpoint": 200,
    "learning_rate": 6e-3,
    "seed": 12439,
    "steps_this_run": 500,
    "allreduce_post_accumulation": 1,
    "allreduce_post_accumulation_fp16": 1,
    "do_train": 1,
    "use_sequential": 1,
    "skip_checkpoint": 1,
    "smp": 1,
    "apply_optimizer": 1,
}

Instantiate a PyTorch Estimator with SMP enabled

[ ]:
pytorch_estimator = PyTorch(
    "sagemaker_smp_pretrain.py",
    role=role,
    instance_type="ml.p3.16xlarge",
    volume_size=200,
    instance_count=1,
    sagemaker_session=sagemaker_session,
    py_version="py36",
    framework_version="1.6.0",
    distribution={
        "smdistributed": {"modelparallel": {"enabled": True, "parameters": smp_parameters}},
        "mpi": {
            "enabled": True,
            "processes_per_host": 8,
            "custom_mpi_options": mpi_options,
        },
    },
    source_dir="bert_example",
    output_path=s3_output_location,
    max_run=timeout,
    hyperparameters=hyperparameters,
    metric_definitions=metric_definitions,
)

Finally, you will use the estimator to launch the SageMaker training job.

[ ]:
pytorch_estimator.fit(data_channels, logs=True)
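
When the job completes, SageMaker uploads the packaged model artifacts to the output_path you configured above (s3_output_location). Assuming the fit call above finished successfully, you can print the artifact location directly from the estimator:

[ ]:
# Assuming the training job above completed successfully, model_data holds
# the S3 URI of the packaged model artifacts.
print(pytorch_estimator.model_data)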