Profiling TensorFlow Multi GPU Multi Node Training Job with Amazon SageMaker Debugger (SageMaker SDK)

This notebook walks you through creating a TensorFlow training job with the SageMaker Debugger profiling feature enabled. It creates a multi-GPU, multi-node training job using Horovod.

(Optional) Install SageMaker and SMDebug Python SDKs

To use the new Debugger profiling features released in December 2020, ensure that you have the latest versions of the SageMaker and SMDebug SDKs installed. Use the following cell to update the libraries and restart the Jupyter kernel to apply the updates.

[ ]:
import sys
import IPython

install_needed = False  # should only be True once
if install_needed:
    print("installing deps and restarting kernel")
    !{sys.executable} -m pip install -U sagemaker smdebug
    # restart the kernel so the updated libraries are picked up
    IPython.Application.instance().kernel.do_shutdown(True)

1. Create a Training Job with Profiling Enabled

You will use the standard SageMaker Estimator API for TensorFlow to create training jobs. To enable profiling, create a ProfilerConfig object and pass it to the profiler_config parameter of the TensorFlow estimator.

Define parameters for distributed training

This parameter tells SageMaker how to configure and run Horovod. If you want to use a number of GPUs per node other than 4, change the processes_per_host parameter accordingly.

[ ]:
distributions = {
    "mpi": {
        "enabled": True,
        "processes_per_host": 4,
        "custom_mpi_options": "-verbose -x HOROVOD_TIMELINE=./hvd_timeline.json -x NCCL_DEBUG=INFO -x OMPI_MCA_btl_vader_single_copy_mechanism=none",
    }
}
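
With this MPI configuration, the total number of Horovod workers is the instance count times processes_per_host. A minimal sketch of that arithmetic (the instance count of 2 is an illustrative assumption, not a value set anywhere above):

```python
# Total Horovod workers = instance count × processes per host.
# instance_count = 2 is an assumption for illustration;
# processes_per_host matches the MPI configuration above.
instance_count = 2
processes_per_host = 4
total_workers = instance_count * processes_per_host
print(total_workers)  # → 8
```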

Configure rules

We specify the following rules:

- loss_not_decreasing: checks if the loss is decreasing and triggers if the loss has not decreased by a certain percentage over the last few iterations
- LowGPUUtilization: checks if the GPUs are underutilized
- ProfilerReport: runs the entire set of performance rules and creates a final output report with further insights and recommendations

[ ]:
from sagemaker.debugger import Rule, ProfilerRule, rule_configs

rules = [
    Rule.sagemaker(rule_configs.loss_not_decreasing()),
    ProfilerRule.sagemaker(rule_configs.LowGPUUtilization()),
    ProfilerRule.sagemaker(rule_configs.ProfilerReport()),
]

Specify a profiler configuration

The following configuration captures system metrics every 500 milliseconds. The system metrics include utilization per CPU and GPU, memory utilization per CPU and GPU, as well as I/O and network.

Debugger will capture detailed profiling information from step 5 to step 15. This information includes Horovod metrics, data loading, preprocessing, and operators running on CPU and GPU.
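
To get a rough sense of the data volume this produces, here is a sketch of how many system-metric samples a 500 ms interval yields per metric per node (the 10-minute job duration is an illustrative assumption):

```python
# Samples per metric per node at a fixed sampling interval.
interval_ms = 500   # system monitoring interval from the config below
duration_s = 600    # assumed 10-minute training job, for illustration
samples = duration_s * 1000 // interval_ms
print(samples)  # → 1200
```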

[ ]:
from sagemaker.debugger import ProfilerConfig, FrameworkProfile

profiler_config = ProfilerConfig(
    system_monitor_interval_millis=500,
    framework_profile_params=FrameworkProfile(
        local_path="/opt/ml/output/profiler/", start_step=5, num_steps=10
    ),
)

Get the image URI

The image that we will use depends on the region that you are running this notebook in.

[ ]:
import boto3

session = boto3.session.Session()
region = session.region_name

# The repository name and tag below are an example TensorFlow GPU training
# image; adjust them to the framework version you need.
image_uri = f"763104351884.dkr.ecr.{region}.amazonaws.com/tensorflow-training:2.3.1-gpu-py37-cu110-ubuntu18.04"

Define estimator

To enable profiling, you need to pass the Debugger profiling configuration (profiler_config), the list of Debugger rules (rules), and the image URI (image_uri) to the estimator. When the estimator starts the training job, Debugger begins monitoring and profiling it.

[ ]:
import sagemaker
from sagemaker.tensorflow import TensorFlow

estimator = TensorFlow(
    role=sagemaker.get_execution_role(),
    image_uri=image_uri,
    instance_count=2,
    instance_type="ml.p3.8xlarge",
    entry_point="tf-hvd-train.py",  # your Horovod training script
    profiler_config=profiler_config,
    distribution=distributions,
    rules=rules,
)

Start training job

Calling fit with the wait=False argument starts the training job in the background. You can then proceed to run the dashboard or analysis notebooks.

[ ]:
estimator.fit(wait=False)

2. Analyze Profiling Data

Copy outputs of the following cell (training_job_name and region) to run the analysis notebooks profiling_generic_dashboard.ipynb, analyze_performance_bottlenecks.ipynb, and profiling_interactive_analysis.ipynb.

[ ]:
training_job_name = estimator.latest_training_job.job_name
print(f"Training job name: {training_job_name}")
print(f"Region: {region}")

While the training is still in progress, you can visualize the performance data in SageMaker Studio or in the notebook. Debugger provides utilities to plot system metrics in the form of timeline charts or heatmaps. Check out the notebook profiling_interactive_analysis.ipynb for more details. In the following code cell, we plot the total CPU and GPU utilization as time series charts. To visualize other metrics such as I/O, memory, and network, simply extend the lists passed to select_dimensions and select_events.

Install the SMDebug client library to use Debugger analysis tools

[ ]:
import pip

def import_or_install(package):
    try:
        __import__(package)
    except ImportError:
        pip.main(["install", package])

import_or_install("smdebug")


Access the profiling data using the SMDebug TrainingJob utility class

[ ]:
from smdebug.profiler.analysis.notebook_utils.training_job import TrainingJob

tj = TrainingJob(training_job_name, region)

Plot timeline charts

The following code shows how to use the SMDebug TrainingJob object, refresh the object if new event files are available, and plot timeline charts of CPU and GPU usage.

[ ]:
from smdebug.profiler.analysis.notebook_utils.timeline_charts import TimelineCharts

system_metrics_reader = tj.get_systems_metrics_reader()
system_metrics_reader.refresh_event_file_list()

view_timeline_charts = TimelineCharts(
    system_metrics_reader,
    framework_metrics_reader=None,
    select_dimensions=["CPU", "GPU"],
    select_events=["total"],
)

3. Download Debugger Profiling Report

The ProfilerReport() rule creates an HTML report profiler-report.html with a summary of the built-in rules and recommended next steps. You can find this report in your S3 bucket.

[ ]:
rule_output_path = estimator.output_path + estimator.latest_training_job.job_name + "/rule-output"
print(f"You will find the profiler report in {rule_output_path}")

For more information about how to download and open the Debugger profiling report, see SageMaker Debugger Profiling Report in the SageMaker developer guide.
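
Since rule_output_path is an s3:// URI, downloading the report programmatically requires splitting it into a bucket and a key prefix first. A minimal sketch of that split using only the standard library (the bucket and job names below are hypothetical):

```python
from urllib.parse import urlparse

def parse_s3_uri(uri):
    """Split an s3:// URI into (bucket, key prefix)."""
    parsed = urlparse(uri)
    return parsed.netloc, parsed.path.lstrip("/")

# Hypothetical path for illustration:
bucket, prefix = parse_s3_uri("s3://my-bucket/my-training-job/rule-output")
print(bucket, prefix)  # → my-bucket my-training-job/rule-output
```

You can then pass the bucket and prefix to a boto3 S3 client (for example, list_objects_v2 followed by download_file) to fetch profiler-report.html locally.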

[ ]: