Distributed data parallel MNIST training with TensorFlow 2 and SageMaker Distributed



Amazon SageMaker’s distributed library can be used to train deep learning models faster and more cheaply. The data parallel feature in this library (smdistributed.dataparallel) is a distributed data parallel training framework for PyTorch, TensorFlow, and MXNet. This notebook demonstrates how to use smdistributed.dataparallel with TensorFlow 2 in SageMaker to train a model on the MNIST dataset.

For more information:

  1. TensorFlow in SageMaker

  2. SageMaker distributed data parallel API Specification

  3. SageMaker’s Distributed Data Parallel Library

NOTE: This example requires SageMaker Python SDK v2.X.

Dataset

This example uses the MNIST dataset. MNIST is a widely used dataset for handwritten digit classification. It consists of 70,000 labeled 28x28 pixel grayscale images of handwritten digits. The dataset is split into 60,000 training images and 10,000 test images. There are 10 classes (one for each of the 10 digits).

[ ]:
pip install sagemaker --upgrade
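
If you want to confirm that the upgrade satisfies the v2.x requirement noted above, a quick sanity check (assuming the kernel was restarted after the upgrade so the new version is importable) is:

[ ]:
import sagemaker

# This example requires SageMaker Python SDK v2.x.
print(sagemaker.__version__)
assert int(sagemaker.__version__.split(".")[0]) >= 2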

SageMaker role

The following code cell defines role which is the IAM role ARN used to create and run SageMaker training and hosting jobs. This is the same IAM role used to create this SageMaker Notebook instance.

role must have permission to create a SageMaker training job and launch an endpoint to host a model. For granular policies you can use to grant these permissions, see Amazon SageMaker Roles. If you do not require fine-grained permissions for this demo, you can attach the IAM managed policy AmazonSageMakerFullAccess to the role instead.

[ ]:
import sagemaker

sagemaker_session = sagemaker.Session()
role = sagemaker.get_execution_role()

To verify that the role above has the required permissions, you can check it manually in the IAM console (or run the programmatic sketch after this list):

  1. Go to the IAM console: https://console.aws.amazon.com/iam/home.

  2. Select Roles.

  3. Enter the role name in the search box to search for that role.

  4. Select the role.

  5. Use the Permissions tab to verify this role has required permissions attached.
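
Alternatively, the following sketch lists the managed policies attached to the role using boto3. This is an illustrative check, not part of the original example: it covers attached managed policies only (not inline policies), and the notebook role needs IAM read permissions for the call to succeed.

[ ]:
import boto3

# The role name is the last segment of the role ARN.
role_name = role.split("/")[-1]

iam = boto3.client("iam")
attached = iam.list_attached_role_policies(RoleName=role_name)["AttachedPolicies"]
for policy in attached:
    print(policy["PolicyName"])  # look for AmazonSageMakerFullAccess or equivalent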

Model training with SageMaker distributed data parallel

Training script

The train_tensorflow_smdataparallel_mnist.py script provides the code you need for training a SageMaker model using smdistributed.dataparallel’s DistributedGradientTape. The training script is very similar to a TensorFlow 2 training script you might run outside of SageMaker, modified to run with this library. smdistributed.dataparallel’s DistributedGradientTape wraps TensorFlow’s native tf.GradientTape so that gradients are averaged across all workers.

For details about how to use smdistributed.dataparallel’s DDP in your native TensorFlow 2 script, see Modify a TensorFlow 2.x Training Script Using SMD Data Parallel.
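
At a high level, the required changes look like the following abridged sketch. The actual script in the next cell is the authoritative version; the model and optimizer here are illustrative placeholders, and because smdistributed.dataparallel is only available inside the SageMaker training containers, this cell is for reference rather than local execution.

[ ]:
import tensorflow as tf
import smdistributed.dataparallel.tensorflow as sdp

# Initialize the library and pin each worker process to its own GPU.
sdp.init()
gpus = tf.config.experimental.list_physical_devices("GPU")
if gpus:
    tf.config.experimental.set_visible_devices(gpus[sdp.local_rank()], "GPU")

# Placeholder model and loss for illustration only.
model = tf.keras.Sequential(
    [tf.keras.layers.Flatten(input_shape=(28, 28, 1)), tf.keras.layers.Dense(10)]
)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# A common convention: scale the learning rate by the number of workers.
opt = tf.keras.optimizers.Adam(0.001 * sdp.size())


@tf.function
def training_step(images, labels, first_batch):
    with tf.GradientTape() as tape:
        loss = loss_fn(labels, model(images, training=True))
    # Wrap the tape so gradients are allreduced across all workers.
    tape = sdp.DistributedGradientTape(tape)
    grads = tape.gradient(loss, model.trainable_variables)
    opt.apply_gradients(zip(grads, model.trainable_variables))
    if first_batch:
        # Broadcast initial variables from rank 0 so every worker starts in sync.
        sdp.broadcast_variables(model.variables, root_rank=0)
        sdp.broadcast_variables(opt.variables(), root_rank=0)
    return loss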

[ ]:
!pygmentize code/train_tensorflow_smdataparallel_mnist.py

SageMaker TensorFlow Estimator function options

In the following code block, you can update the estimator function to use a different instance type, instance count, and distribution strategy. You’re also passing in the training script you reviewed in the previous cell.

Instance types

smdistributed.dataparallel supports model training on SageMaker with the following instance types only. For best performance, it is recommended you use an instance type that supports Amazon Elastic Fabric Adapter (ml.p3dn.24xlarge and ml.p4d.24xlarge).

  1. ml.p3.16xlarge

  2. ml.p3dn.24xlarge [Recommended]

  3. ml.p4d.24xlarge [Recommended]

Instance count

To get the best performance and the most out of smdistributed.dataparallel, you should use at least 2 instances, but you can also use 1 for testing this example.
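
For a concrete sense of scale, each of the supported instance types has 8 GPUs and the library runs one worker process per GPU, so the effective global batch size grows with instance count. A back-of-the-envelope sketch (the per-GPU batch size is an assumed example value, not taken from the training script):

[ ]:
instance_count = 2
gpus_per_instance = 8  # ml.p3.16xlarge, ml.p3dn.24xlarge, and ml.p4d.24xlarge each have 8 GPUs
per_gpu_batch_size = 128  # assumed example value

world_size = instance_count * gpus_per_instance
global_batch_size = world_size * per_gpu_batch_size
print("{} workers, global batch size {}".format(world_size, global_batch_size))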

Distribution strategy

Note that to use DDP mode, you update the distribution strategy and set it to use smdistributed dataparallel.

[ ]:
from sagemaker.tensorflow import TensorFlow

estimator = TensorFlow(
    base_job_name="tensorflow2-smdataparallel-mnist",
    source_dir="code",
    entry_point="train_tensorflow_smdataparallel_mnist.py",
    role=role,
    py_version="py37",
    framework_version="2.4.1",
    # For multi-node distributed training, set instance_count to 2 or more. Example: 2
    instance_count=2,
    # Supported instance types: ml.p3.16xlarge, ml.p3dn.24xlarge, ml.p4d.24xlarge
    instance_type="ml.p3.16xlarge",
    sagemaker_session=sagemaker_session,
    # Training using SMDataParallel Distributed Training Framework
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)
[ ]:
estimator.fit()

Now that you have a trained model, you can deploy an endpoint to host the model. After you deploy the endpoint, you can then test it with inference requests. The following cell will store the model_data variable to be used with the inference notebook.

[ ]:
model_data = estimator.model_data
print("Storing {} as model_data".format(model_data))
%store model_data
[ ]:
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge")
[ ]:
print(predictor.endpoint_name)
[ ]:
import tensorflow as tf
import numpy as np

(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data(path="/tmp/data")
[ ]:
for i in range(10):
    # Reshape each image to (batch, height, width, channels), as the model expects.
    data = mnist_images[i].reshape(1, 28, 28, 1)

    predict_response = predictor.predict(data)

    print("========================================")
    label = mnist_labels[i]

    # The TensorFlow Serving response contains a "predictions" list of class scores.
    predict_label = np.argmax(predict_response["predictions"])

    print("label is {}".format(label))
    print("prediction is {}".format(predict_label))

Cleanup

If you don’t intend to make more inference requests or do anything else with the endpoint, you should delete it to avoid incurring ongoing charges.

[ ]:
predictor.delete_endpoint()
