Train GPT-2 with PyTorch 1.12 and Tensor Parallelism Using the SageMaker Model Parallelism Library

This notebook walks you through how to use the SageMaker model parallelism (SMP) library. You’ll learn how to train the GPT-2 model with SageMaker’s model parallelism.

The GPT-2 model was proposed by OpenAI in the paper Language Models are Unsupervised Multitask Learners. The original GPT-2 is a large transformer-based language model with 1.5 billion parameters. In this notebook, you can experiment with the model parameters to achieve different model sizes. This notebook uses the Hugging Face Transformers GPT-2 implementation with the SMP integration.

This notebook depends on the following files and folders:

  • train_gpt_simple.py: This is the entry point script passed to the SageMaker PyTorch estimator in this notebook. It handles end-to-end training of the GPT-2 model with SMP and includes additional comments at the places where the SMP API is used.

  • data_pipeline.py: This contains the data pipeline functions that prepare the training data.

  • learning_rate.py: This contains the functions for the learning rate schedule.

  • requirements.txt: This lists the dependencies to install, such as the required version of Hugging Face Transformers.

  • data_prep_512.py: This downloads and preprocesses the OpenWebText dataset.

  • memory_tracker.py: This contains the functions to track memory usage.

  • sharded_data_parallel_checkpoint.py: This contains checkpoint utility functions for sharded data parallelism.

Additional Resources

If you are a new user of Amazon SageMaker, you may find the following helpful to learn more about SMP and using SageMaker with PyTorch.

Prerequisites

You must create an S3 bucket to store the input data for training. This bucket must be located in the same AWS Region that you choose to launch your training job. To learn more, see Creating a bucket in the Amazon S3 documentation.
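
If you don't have a bucket yet, the following is a minimal sketch of creating one with boto3 in your current Region; the bucket name is a placeholder and must be globally unique.

[ ]:
import boto3

# Create the input data bucket in the Region where you will launch the training job.
region = boto3.session.Session().region_name
s3_client = boto3.client("s3", region_name=region)
bucket_name = "<your-input-data-bucket>"  # placeholder; must be globally unique

if region == "us-east-1":
    # us-east-1 does not accept a LocationConstraint
    s3_client.create_bucket(Bucket=bucket_name)
else:
    s3_client.create_bucket(
        Bucket=bucket_name,
        CreateBucketConfiguration={"LocationConstraint": region},
    )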

Amazon SageMaker Initialization

Run the following cell to import SageMaker modules and retrieve information about your current SageMaker work environment, such as your AWS account ID, the AWS Region, and the ARN of your Amazon SageMaker execution role.

Upgrade SageMaker SDK to the latest version.

NOTE: This step might require a kernel restart.

[ ]:
%pip install --upgrade sagemaker
%pip install sagemaker-experiments
[ ]:
%%time
import os

import boto3
import sagemaker
from sagemaker import get_execution_role
from sagemaker.pytorch import PyTorch

role = (
    get_execution_role()
)  # provide a pre-existing role ARN as an alternative to creating a new role
print(f"SageMaker Execution Role: {role}")

client = boto3.client("sts")
account = client.get_caller_identity()["Account"]
print(f"AWS account: {account}")

session = boto3.session.Session()
region = session.region_name
print(f"AWS region: {region}")

sm_boto_client = boto3.client("sagemaker")
sagemaker_session = sagemaker.session.Session(boto_session=session)

# get default bucket
default_bucket = sagemaker_session.default_bucket()
print()
print("Default bucket for this session: ", default_bucket)

Prepare your dataset

OpenWebText is the dataset we recommend for training. You can use the script data_prep_512.py to download and preprocess it. The entire process takes 3 to 4 hours, so we recommend running the script in a separate SageMaker notebook instance and uploading the processed data to your S3 bucket. The script requires the datasets and transformers libraries; you can install them with the following commands:

pip install datasets
pip install transformers
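
After the preprocessing script finishes, you can upload the processed files to your S3 bucket, for example with the SageMaker session helper shown in the following sketch; the local path, bucket, and key prefix are placeholders.

[ ]:
import sagemaker

# Upload the preprocessed dataset to S3; the path, bucket, and prefix below are placeholders.
sagemaker_session = sagemaker.session.Session()
train_data_s3_uri = sagemaker_session.upload_data(
    path="<local-path-to-processed-train-data>",
    bucket="<your-bucket>",
    key_prefix="gpt2/openwebtext/train",
)
print(train_data_s3_uri)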

You can also use your own dataset. Modify data_pipeline.py to serve your purposes.

NOTE: In this notebook, we provide a wiki corpus dataset sample for the amazon-sagemaker-examples repository’s continuous integration (CI) test. This sample data is small and not meant to train for convergence.

Specify Amazon S3 Bucket Paths

You need to specify the S3 paths of the training and test datasets for your training job. The S3 bucket must be in the same AWS Region where the training job runs.

Replace the None values at the top of the following cell with your S3 bucket and the prefix of your preprocessed data. For example, if your training data is in s3://DOC-EXAMPLE-BUCKET/training, assign that URI to s3_train_bucket.

If you proceed with None values for both s3_train_bucket and s3_test_bucket, then the notebook will download the wiki corpus mock dataset from the public SageMaker S3 bucket (s3://sagemaker-sample-files) and upload it to your default bucket. This is intended for CI.

[ ]:
s3_train_bucket = None  # Specify your S3 bucket path for training dataset
s3_test_bucket = None  # Specify your S3 bucket path for test dataset


# For CI, integration test of the repo pipeline
if s3_train_bucket is None:
    # Download some mock data from a public bucket in us-east-1
    s3 = boto3.resource("s3")
    bucket_name = "sagemaker-sample-files"
    # Phase 1 pretraining
    prefix = "datasets/binary/bert/hdf5_lower_case_1_seq_len_128_max_pred_20_masked_lm_prob_0.15_random_seed_12345_dupe_factor_5/wikicorpus_en_abstract"

    local_dir = "/tmp/data"
    bucket = s3.Bucket(bucket_name)

    for obj in bucket.objects.filter(Prefix=prefix):
        target = os.path.join(local_dir, obj.key)
        if not os.path.exists(os.path.dirname(target)):
            os.makedirs(os.path.dirname(target))
        bucket.download_file(obj.key, target)

    # upload to default bucket
    mock_data = sagemaker_session.upload_data(
        path=os.path.join(local_dir, prefix),
        bucket=sagemaker_session.default_bucket(),
        key_prefix=prefix,
    )
    running_ci = True
else:
    running_ci = False

The following cell sets up the output path to store artifacts of the training job. You can modify this as needed.

[ ]:
s3_output_location = f"s3://{default_bucket}/output/"
print(f"your output data will be stored in: s3://{default_bucket}/output/")

Define Data Channels for SageMaker Training Using Amazon S3

In this step, you define SageMaker training data channels using the S3 bucket paths set above.

[ ]:
# Set use_fsx to False by default
# Set the variable below to True if you want to use FSx (see the next cell)
use_fsx = False
if not use_fsx:
    if s3_train_bucket is not None:
        train = sagemaker.inputs.TrainingInput(
            s3_train_bucket, distribution="FullyReplicated", s3_data_type="S3Prefix"
        )
        data_channels = {"train": train}
    else:
        data_channels = {"train": mock_data}
    if s3_test_bucket is not None:
        test = sagemaker.inputs.TrainingInput(
            s3_test_bucket, distribution="FullyReplicated", s3_data_type="S3Prefix"
        )
        data_channels["test"] = test
    else:
        data_channels["test"] = mock_data
    print(data_channels)

(Optional) Set Up and Use Amazon FSx for Data Channels and Checkpoints

While the previous option of using Amazon S3 is easier to set up, using an FSx file system can be beneficial for performance when dealing with large input data and large model sizes. If you are using models above 13B parameters, checkpointing should be done using FSx.

Please see the instructions in Distributed Training of Mask-RCNN in Amazon SageMaker Using FSx to create an FSx Lustre file system and import the dataset from the S3 bucket into your FSx file system. Note that the FSx file system must be created in a private subnet with an internet gateway so that the training job has access to the internet.

[ ]:
# Instructions obtained from:
# https://github.com/aws/amazon-sagemaker-examples/blob/master/advanced_functionality/distributed_tensorflow_mask_rcnn/mask-rcnn-scriptmode-fsx.ipynb

if use_fsx:
    from sagemaker.inputs import FileSystemInput

    # Specify FSx Lustre file system id.
    file_system_id = "<your-file-system-id>"

    # Specify the security group and subnet used by the FSx file system; these are passed to the SageMaker Estimator so the training job uses them as well
    fsx_security_group_id = "<your-security-group-id>"
    fsx_subnet = "<your-subnet>"

    # Specify the directory path for input data on the file system.
    # You need to provide a normalized and absolute path below.
    # The mount name is either provided by you when creating the FSx file system or generated automatically.
    # You can find the mount name on the FSx page in the console.
    # Example of an FSx-generated mount name: "3x5lhbmv"
    base_path = "<your-mount-name>"

    # Specify your file system type.
    file_system_type = "FSxLustre"

    train = FileSystemInput(
        file_system_id=file_system_id,
        file_system_type=file_system_type,
        directory_path=base_path,
        file_system_access_mode="rw",
    )

    data_channels = {"train": train, "test": train}

Set Up Hyperparameters, Metric Definitions, and MPI Options

The following hyperparameters dictionary passes arguments to the training script (train_gpt_simple.py) and sets the model parallelism configuration when creating the training job.

You can also add custom MPI flags. By default, we set --mca btl_vader_single_copy_mechanism none to remove unnecessary logs.

Next, we add a base metric definition to enable metric upload in SageMaker. You can add further metric definitions as needed.

[ ]:
hyperparameters = {
    "max_steps": 100,
    "seed": 12345,
    "fp16": 1,
    "lr": 2.0e-4,
    "lr_decay_iters": 125000,
    "min_lr": 0.00001,
    "lr-decay-style": "linear",
    "warmup": 0.01,
    "num_kept_checkpoints": 5,
    "checkpoint_freq": 200,
    "logging_freq": 1,
    "use_wiki_data": 1,
    # below flag loads model and optimizer state from checkpoint_s3_uri
    # 'load_partial': 1,
}

# Add parameters required by SMP config.
# Refer https://sagemaker.readthedocs.io/en/stable/api/training/smd_model_parallel_general.html
# for details.
hyperparameters.update(
    {
        "save_final_full_model": 0,
        "manual_partition": 1,
        "skip_full_optimizer": 1,
        "shard_optimizer_state": 1,
        "activation_checkpointing": 1,
        "activation_strategy": "each",
        "optimize": "speed",
    }
)

if not running_ci:
    # these flags are used when training with the openwebtext dataset
    hyperparameters["zipped_data"] = 0
    hyperparameters["validation_freq"] = 20
    hyperparameters["use_wiki_data"] = 0

if use_fsx:
    # make sure to update paths for training-dir and test-dir based on the paths of datasets in fsx
    # If you want to resume training, set checkpoint-dir to the same path as a previous job.
    SM_TRAIN_DIR = "/opt/ml/input/data/train"
    hyperparameters["checkpoint-dir"] = f"{SM_TRAIN_DIR}/checkpointdir-job2"
    hyperparameters["model-dir"] = f"{SM_TRAIN_DIR}/modeldir-job2"
    hyperparameters["training-dir"] = f"{SM_TRAIN_DIR}/datasets/pytorch_gpt2/train_synthetic"
    hyperparameters["test-dir"] = f"{SM_TRAIN_DIR}/datasets/pytorch_gpt2/val_synthetic"

# The checkpoint path (hyperparameters['checkpoint-dir'] or checkpoint_s3_uri) is not unique per job.
# You need to modify it as needed for different runs.
# If the same path is used for unrelated runs, the job may spend extra time downloading
# unnecessary checkpoints and may hit conflicts when loading checkpoints.


mpioptions = "-x NCCL_DEBUG=WARN -x SMDEBUG_LOG_LEVEL=ERROR "
mpioptions += (
    "-x SMP_DISABLE_D2D=1 -x SMP_D2D_GPU_BUFFER_SIZE_BYTES=1 -x SMP_NCCL_THROTTLE_LIMIT=1 "
)
mpioptions += "-x FI_EFA_USE_DEVICE_RDMA=1 -x FI_PROVIDER=efa -x RDMAV_FORK_SAFE=1"

metric_definitions = [
    {"Name": "base_metric", "Regex": "<><><><><><>"}
]  # Add your custom metric definitions

Set the model configuration. Specify one of gpt2-30b, gpt2-xl, or gpt2-small.

[ ]:
model_config = "gpt2-small"

if model_config == "gpt2-30b":
    model_params = {
        "max_context_width": 512,
        "hidden_width": 7168,
        "num_layers": 48,
        "num_heads": 64,
        "tensor_parallel_degree": 8,
        "pipeline_parallel_degree": 1,
        "train_batch_size": 5,
        "val_batch_size": 5,
        "prescaled_batch": 0,
    }

elif model_config == "gpt2-xl":
    # 1.5B
    model_params = {
        "max_context_width": 512,
        "hidden_width": 1536,
        "num_layers": 48,
        "num_heads": 24,
        "tensor_parallel_degree": 4,
        "pipeline_parallel_degree": 1,
        "train_batch_size": 2,
        "val_batch_size": 4,
        "prescaled_batch": 0,
    }
elif model_config == "gpt2-small":
    model_params = {
        "max_context_width": 512,
        "hidden_width": 768,
        "num_layers": 12,
        "num_heads": 12,
        "tensor_parallel_degree": 4,
        "pipeline_parallel_degree": 1,
        "train_batch_size": 2,
        "val_batch_size": 4,
        "prescaled_batch": 0,
    }
else:
    raise RuntimeError("Unknown model config")

for k, v in model_params.items():
    hyperparameters[k] = v

Specify Essential Parameters for a SageMaker Training Job

Next, you will use the SageMaker Estimator API (https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html) to define a SageMaker training job, passing values through the following parameters for the training job name, the number of EC2 instances, the instance type, and the size of the volume attached to the instances.

  • instance_count

  • instance_type

  • volume_size

  • base_job_name

Update the Type and Number of EC2 Instance to Use

The instance type and the number of instances you specify in the instance_type and instance_count parameters, respectively, determine the total number of GPUs (the world size).

\[\text{world size} = \text{(number of GPUs per instance)} \times \text{(number of instances)}\]
[ ]:
instance_type = "ml.p3.16xlarge"

# for gpt2 30b, you need at least 16 p4d instances
# gpt2 xl can be run using a single p4d at the minimum
# gpt2 small can be run using a single p3.16 at the minimum
instance_count = 1

# set to the number of GPUs on that instance
processes_per_host = 8

To look up the number of GPUs of different instance types, see Amazon EC2 Instance Types. Use the section Accelerated Computing to see general purpose GPU instances. Note that, for example, a given instance type p4d.24xlarge has a corresponding instance type ml.p4d.24xlarge in SageMaker. For SageMaker supported ml instances and cost information, see Amazon SageMaker Pricing.
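
As a quick sanity check, the following sketch computes the world size from the values above and verifies that it is divisible by the product of the tensor and pipeline parallel degrees; the remaining factor becomes the data parallel degree.

[ ]:
# Illustrative sanity check using the variables defined above.
gpus_per_instance = processes_per_host  # e.g., 8 for ml.p3.16xlarge
world_size = gpus_per_instance * instance_count

tp_degree = hyperparameters["tensor_parallel_degree"]
pp_degree = hyperparameters["pipeline_parallel_degree"]

# The world size should be divisible by tensor_parallel_degree * pipeline_parallel_degree;
# the remaining factor is the data parallel degree.
assert world_size % (tp_degree * pp_degree) == 0, "adjust instance_count or the parallel degrees"
print(f"world size: {world_size}, data parallel degree: {world_size // (tp_degree * pp_degree)}")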

Attach an EBS Volume to the Training Instance

The volume size you specify in volume_size must be larger than your input data size. In this example, the volume size is set to 500 GB.

[ ]:
volume_size = 500

Specify a Base Job Name

[ ]:
machine_str = instance_type.split(".")[1] + instance_type.split(".")[2][:3]
pp_degree = hyperparameters["pipeline_parallel_degree"]
tp_degree = hyperparameters["tensor_parallel_degree"]
base_job_name = f'smp-{model_config}-{machine_str}-tp{tp_degree}-pp{pp_degree}-bs{hyperparameters["train_batch_size"]}'
[ ]:
if not use_fsx:
    # If you want to resume training, set checkpoint_s3_uri to the same path as a previous job.
    # Previous checkpoint to load must have same model config.
    checkpoint_bucket = f"s3://sagemaker-{region}-{account}/"
    checkpoint_s3_uri = (
        f"{checkpoint_bucket}/experiments/gpt_synthetic_simpletrainer_checkpoints/{base_job_name}/"
    )
[ ]:
print(f"base_job_name: {base_job_name} checkpoint_s3_uri: {checkpoint_s3_uri}")

Create a SageMaker PyTorch Estimator

The following cell constructs a PyTorch estimator using the parameters defined above. To see how the SageMaker tensor parallelism modules and functions are applied to the script, see the train_gpt_simple.py file.

[ ]:
kwargs = {}
if use_fsx:
    # Use the security group and subnet that was used to create the fsx filesystem
    kwargs["security_group_ids"] = [fsx_security_group_id]
    kwargs["subnets"] = [fsx_subnet]

smp_estimator = PyTorch(
    entry_point="train_gpt_simple.py",
    source_dir=os.getcwd(),
    role=role,
    instance_type=instance_type,
    volume_size=volume_size,
    instance_count=instance_count,
    sagemaker_session=sagemaker_session,
    distribution={
        "mpi": {
            "enabled": True,
            "processes_per_host": processes_per_host,
            "custom_mpi_options": mpioptions,
        },
        "smdistributed": {
            "modelparallel": {
                "enabled": True,
                "parameters": {
                    "ddp": True,
                    "tensor_parallel_degree": hyperparameters["tensor_parallel_degree"],
                    # partitions is a required parameter in the current SageMaker SDK, so it needs
                    # to be passed; it maps to the same config as pipeline_parallel_degree
                    "partitions": hyperparameters["pipeline_parallel_degree"],
                    "shard_optimizer_state": hyperparameters["shard_optimizer_state"] > 0,
                    "prescaled_batch": hyperparameters["prescaled_batch"] > 0,
                    "fp16": hyperparameters["fp16"] > 0,
                    "optimize": hyperparameters["optimize"],
                    "auto_partition": not hyperparameters["manual_partition"],
                    "default_partition": 0,
                },
            }
        },
    },
    framework_version="1.12",
    py_version="py38",
    output_path=s3_output_location,
    checkpoint_s3_uri=checkpoint_s3_uri if not use_fsx else None,
    checkpoint_local_path=hyperparameters["checkpoint-dir"] if use_fsx else None,
    metric_definitions=metric_definitions,
    hyperparameters=hyperparameters,
    debugger_hook_config=False,
    disable_profiler=True,
    base_job_name=base_job_name,
    **kwargs,
)

Finally, run the estimator to launch the SageMaker training job for the GPT-2 model with tensor parallelism.

[ ]:
smp_estimator.fit(inputs=data_channels, logs=True)

Accessing the Training Logs

You can access the training logs from Amazon CloudWatch. Make sure to look at the logs of algo-1 because that is the main node whose output stream will have the training job logs.

You can use CloudWatch to track SageMaker GPU and memory utilization during training and inference. To view the metrics and logs that SageMaker writes to CloudWatch, see SageMaker Jobs and Endpoint Metrics in the Amazon SageMaker Developer Guide.

If you are a new user of CloudWatch, see Getting Started with Amazon CloudWatch.

For additional information on monitoring and analyzing Amazon SageMaker training jobs, see Monitor and Analyze Training Jobs Using Metrics.
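
For example, the following sketch pulls recent log events for the algo-1 node using boto3 after the training job has started; it assumes the standard /aws/sagemaker/TrainingJobs log group and the estimator defined above.

[ ]:
# Sketch: read recent CloudWatch log events from the main node (algo-1) of the training job.
logs_client = boto3.client("logs", region_name=region)
log_group = "/aws/sagemaker/TrainingJobs"
job_name = smp_estimator.latest_training_job.name

streams = logs_client.describe_log_streams(
    logGroupName=log_group, logStreamNamePrefix=job_name
)["logStreams"]
algo1_stream = next(s["logStreamName"] for s in streams if "algo-1" in s["logStreamName"])

events = logs_client.get_log_events(
    logGroupName=log_group, logStreamName=algo1_stream, limit=20
)
for event in events["events"]:
    print(event["message"])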

Deploying Trained Model for Inference

In most cases, a trained model can be deployed on a single device for inference because inference only requires a small amount of memory. You can use the SMP API to create a single, unified model after training: the smp.DistributedModel.save_model() method for TensorFlow, and the smp.save() function for PyTorch.

After you build and train your models, you can deploy them to get predictions in one of two ways:

  • To set up a persistent endpoint to get predictions from your model, use SageMaker real-time hosting services.

  • To get predictions for an entire dataset, use SageMaker batch transform.

To learn more about deploying models for inference using SageMaker, see Deploy Models for Inference.
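
For illustration, the following sketch deploys a saved full model artifact to a real-time endpoint with the SageMaker PyTorch serving container. The model artifact path and the inference.py entry point are placeholders you would supply, and it assumes the training job saved a full model (for example, with save_final_full_model set to 1).

[ ]:
# Sketch only: deploy a saved full model to a real-time endpoint.
# The model artifact path and the inference.py handler are placeholders.
from sagemaker.pytorch import PyTorchModel

pytorch_model = PyTorchModel(
    model_data="s3://<path-to-your-model-artifact>/model.tar.gz",  # placeholder
    role=role,
    entry_point="inference.py",  # placeholder inference handler you provide
    framework_version="1.12",
    py_version="py38",
    sagemaker_session=sagemaker_session,
)

predictor = pytorch_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g4dn.xlarge",
)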

[ ]: