Profiling PyTorch Multi GPU Multi Node Training Job with Amazon SageMaker Debugger
This notebook’s CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook.
This notebook will walk you through creating a PyTorch training job with the SageMaker Debugger profiling feature enabled. It will create a multi-GPU, multi-node training job.
Install sagemaker and smdebug
To use the new Debugger profiling features, ensure that you have the latest versions of the SageMaker and SMDebug SDKs installed. The following cell updates the libraries and restarts the Jupyter kernel to apply the updates.
[ ]:
# install/upgrade dependencies, then restart the kernel so the updates take effect
import IPython
!pip install -U sagemaker smdebug
IPython.Application.instance().kernel.do_shutdown(True)
1. Create a Training Job with Profiling Enabled
You will use the standard SageMaker Estimator API for PyTorch to create training jobs. To enable profiling, create a ProfilerConfig object and pass it to the profiler_config parameter of the PyTorch estimator.
Define hyperparameters
Define hyperparameters such as the number of epochs, batch size, and data augmentation. You can increase the batch size to increase system utilization, but it may result in CPU bottleneck problems, because preprocessing a large batch with data augmentation requires heavy computation. You can disable data_augmentation to see its impact on system utilization. (A sketch of how such training options could be added to the hyperparameters dictionary appears after the cell below.)
For demonstration purposes, the following hyperparameters are prepared to increase CPU usage, leading to GPU starvation.
[ ]:
hyperparameters = {
    "training_script": "pt_res50_cifar10_distributed.py",
    "nproc_per_node": 4,
    "nnodes": 2,
}
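The keys above configure the distributed launcher. The training options mentioned earlier (epochs, batch size, data augmentation) would go in the same dictionary; the exact key names depend on what pt_res50_cifar10_distributed.py parses, so the following cell is only a sketch with assumed names.
[ ]:
# sketch only: assumed key names (epoch, batch_size, data_augmentation);
# check the argument parser in pt_res50_cifar10_distributed.py for the real ones
hyperparameters.update(
    {
        "epoch": 25,
        "batch_size": 1024,
        "data_augmentation": True,
    }
)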
Configure rules
We specify the following rules:
- loss_not_decreasing: checks if the loss is decreasing and triggers if the loss has not decreased by a certain percentage over the last few iterations
- LowGPUUtilization: checks if the GPU is under-utilized
- ProfilerReport: runs the entire set of performance rules and creates a final output report with further insights and recommendations
[ ]:
from sagemaker.debugger import Rule, ProfilerRule, rule_configs
rules = [
    Rule.sagemaker(rule_configs.loss_not_decreasing()),
    ProfilerRule.sagemaker(rule_configs.LowGPUUtilization()),
    ProfilerRule.sagemaker(rule_configs.ProfilerReport()),
]
Specify a profiler configuration
The following configuration will capture system metrics at 500-millisecond intervals. The system metrics include utilization per CPU and GPU, memory utilization per CPU and GPU, as well as I/O and network.
Debugger will capture detailed profiling information from step 5 to step 15. This information includes Horovod metrics, data loading, preprocessing, and operators running on the CPU and GPU.
[ ]:
from sagemaker.debugger import ProfilerConfig, FrameworkProfile
profiler_config = ProfilerConfig(
    system_monitor_interval_millis=500,
    framework_profile_params=FrameworkProfile(start_step=5, num_steps=10),
)
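If you need finer control, FrameworkProfile also accepts per-category configurations. The following cell is an optional sketch; it assumes the DetailedProfilingConfig, DataloaderProfilingConfig, and PythonProfilingConfig classes available in recent versions of the SageMaker Python SDK, and you would pass the resulting object to profiler_config instead of the one above.
[ ]:
# optional sketch: configure each framework-profiling category separately
# (assumes these config classes exist in your installed sagemaker SDK version)
from sagemaker.debugger import (
    DetailedProfilingConfig,
    DataloaderProfilingConfig,
    PythonProfilingConfig,
)

profiler_config_fine_grained = ProfilerConfig(
    system_monitor_interval_millis=500,
    framework_profile_params=FrameworkProfile(
        detailed_profiling_config=DetailedProfilingConfig(start_step=5, num_steps=1),
        dataloader_profiling_config=DataloaderProfilingConfig(start_step=7, num_steps=1),
        python_profiling_config=PythonProfilingConfig(start_step=9, num_steps=1),
    ),
)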
Get the image URI
The image that we will use depends on the region that you are running this notebook in.
[ ]:
import boto3
session = boto3.session.Session()
region = session.region_name
image_uri = (
    f"763104351884.dkr.ecr.{region}.amazonaws.com/pytorch-training:1.6.0-gpu-py36-cu110-ubuntu18.04"
)
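The account ID 763104351884 hosts the AWS Deep Learning Containers images in most commercial regions, but not in all of them. If you prefer not to hard-code the URI, the SageMaker SDK can resolve it for you; the following is a sketch using sagemaker.image_uris.retrieve, and the resolved tag (for example the CUDA variant) may differ slightly from the hard-coded one.
[ ]:
# alternative sketch: let the SageMaker SDK resolve the PyTorch training image
# for the current region instead of hard-coding the registry account and tag
import sagemaker

resolved_image_uri = sagemaker.image_uris.retrieve(
    framework="pytorch",
    region=region,
    version="1.6.0",
    py_version="py36",
    instance_type="ml.p3.8xlarge",
    image_scope="training",
)
print(resolved_image_uri)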
Define estimator
To enable profiling, you need to pass the Debugger profiling configuration (profiler_config), a list of Debugger rules (rules), and the image URI (image_uri) to the estimator. Debugger enables monitoring and profiling while the SageMaker estimator requests a training job.
[ ]:
import sagemaker
from sagemaker.pytorch import PyTorch
estimator = PyTorch(
    role=sagemaker.get_execution_role(),
    image_uri=image_uri,
    instance_count=2,
    instance_type="ml.p3.8xlarge",
    source_dir="entry_point",
    entry_point="distributed_launch.py",
    hyperparameters=hyperparameters,
    profiler_config=profiler_config,
    rules=rules,
)
[ ]:
# wait=False returns immediately, so the analysis cells below can run while the job is in progress
estimator.fit(wait=False)
2. Analyze Profiling Data
Copy the outputs of the following cell (training_job_name and region) to run the analysis notebooks profiling_generic_dashboard.ipynb, analyze_performance_bottlenecks.ipynb, and profiling_interactive_analysis.ipynb.
[ ]:
training_job_name = estimator.latest_training_job.name
print(f"Training jobname: {training_job_name}")
print(f"Region: {region}")
While the training is still in progress, you can visualize the performance data in SageMaker Studio or in the notebook. Debugger provides utilities to plot system metrics in the form of timeline charts or heatmaps. Check out the notebook profiling_interactive_analysis.ipynb for more details. In the following code cell we plot the total CPU and GPU utilization as time series charts. To visualize other metrics such as I/O, memory, and network, you simply need to extend the lists passed to select_dimensions and select_events, as sketched after the TimelineCharts cell below.
[ ]:
from smdebug.profiler.analysis.notebook_utils.training_job import TrainingJob
tj = TrainingJob(training_job_name, region)
tj.wait_for_sys_profiling_data_to_be_available()
[ ]:
from smdebug.profiler.analysis.notebook_utils.timeline_charts import TimelineCharts
system_metrics_reader = tj.get_systems_metrics_reader()
system_metrics_reader.refresh_event_file_list()
view_timeline_charts = TimelineCharts(
    system_metrics_reader,
    framework_metrics_reader=None,
    select_dimensions=["CPU", "GPU"],
    select_events=["total"],
)
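As noted above, other metrics can be plotted by extending the two lists. The dimension and event names below are assumptions based on the system-metrics names Debugger typically emits (they are matched against what system_metrics_reader reports), so adjust them if your job records different names.
[ ]:
# sketch: also plot I/O metrics alongside total CPU and GPU utilization
view_timeline_charts_io = TimelineCharts(
    system_metrics_reader,
    framework_metrics_reader=None,
    select_dimensions=["CPU", "GPU", "I/O"],
    select_events=["total"],
)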
3. Download Debugger Profiling Report
The ProfilerReport rule creates an HTML report, profiler-report.html, with a summary of the built-in rule results and recommendations for next steps. You can find this report in your S3 bucket.
[ ]:
rule_output_path = estimator.output_path + estimator.latest_training_job.job_name + "/rule-output"
print(f"You will find the profiler report in {rule_output_path}")
For more information about how to download and open the Debugger profiling report, see SageMaker Debugger Profiling Report in the SageMaker developer guide.
Notebook CI Test Results
This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.