Train an ML Model using Apache Spark in EMR and deploy in SageMaker
This notebook’s CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook.
In this notebook, we will see how you can train a Machine Learning (ML) model using Apache Spark and then take the trained model artifacts to create an endpoint in SageMaker for online inference. Apache Spark is one of the most popular big-data analytics platforms, and it comes with an ML library offering a wide variety of feature transformers and algorithms that you can use to build an ML model.
Apache Spark is designed for offline batch-processing workloads and is not well suited for low-latency online prediction. To mitigate that, we will use the MLeap library. MLeap provides an easy-to-use Spark ML Pipeline serialization format and execution engine for low-latency prediction use cases. Once the ML model is trained using Apache Spark in EMR, we will serialize it with MLeap and upload it to S3 as part of the Spark job so that it can be used in SageMaker for inference.
After model training is completed, we will use SageMaker Inference to perform predictions against this model. The underlying Docker image that we will use for inference is provided by sagemaker-sparkml-serving. It is a Spring-based HTTP web server, written following the SageMaker container specifications, whose operations are powered by the MLeap execution engine.
We’ll use the SageMaker Studio SparkMagic (PySpark) kernel for this notebook.
Set up an EMR cluster and connect a SageMaker notebook to the cluster
In order to perform the steps in this notebook, you need an EMR cluster running, and the notebook must be able to connect to the master node of the cluster.
This solution has been tested with MLeap 0.20, EMR 6.9.0, and Spark 3.3.0.
Please follow this guide on how to set up an EMR cluster and connect it to a notebook: https://aws.amazon.com/blogs/machine-learning/part-1-create-and-manage-amazon-emr-clusters-from-sagemaker-studio-to-run-interactive-spark-and-ml-workloads/
It can also be run as part of our workshop: https://catalog.workshops.aws/sagemaker-studio-emr/en-US
Connect to your EMR cluster
To begin, connect to your EMR cluster from your SparkMagic kernel. For more information on doing this, please check the documentation: https://docs.aws.amazon.com/sagemaker/latest/dg/studio-notebooks-emr-cluster-connect.html
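Once connected, you can optionally verify the Livy session from the SparkMagic kernel with the %%info magic (a quick sanity check; the output depends on your cluster and session):
[ ]:
%%info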
Install the MLeap JARs & Python Libraries on the cluster
You need to have the MLeap JARs on the classpath to be able to use MLeap during model serialization. This can be done seamlessly by adding the package names to spark.jars.packages and creating a virtualenv on the driver and executor nodes using notebook-scoped dependencies.
[ ]:
%%configure -f
{
    "conf": {
        "spark.jars.packages": "ml.combust.mleap:mleap-spark_2.12:0.20.0,ml.combust.mleap:mleap-spark-base_2.12:0.20.0",
        "spark.pyspark.python": "python3",
        "spark.pyspark.virtualenv.enabled": "true",
        "spark.pyspark.virtualenv.type": "native",
        "spark.pyspark.virtualenv.bin.path": "/usr/bin/virtualenv"
    }
}
[ ]:
sc.install_pypi_package("mleap==0.20.0")
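Optionally, list the packages available in the notebook-scoped virtualenv to confirm that mleap was installed (the output will vary with your cluster):
[ ]:
sc.list_packages()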
Importing PySpark dependencies
Next we will import all the necessary dependencies needed to execute the following cells on our Spark cluster. Please note that we are also importing the boto3 and mleap modules here.
You need to ensure that the import cell runs without any error to verify that you have installed the dependencies from PyPI properly. This cell also provides you with a valid SparkSession named spark.
[ ]:
import boto3

from mleap.pyspark.spark_support import SimpleSparkSerializer
from pyspark.ml.regression import RandomForestRegressor
from pyspark.sql.types import StructField, StructType, StringType, DoubleType
Machine Learning task: Predict the age of an abalone from its physical measurements
The dataset is available from UCI Machine Learning. The aim of this task is to determine the age of an abalone (a kind of shellfish) from its physical measurements. At its core, it’s a regression problem. The dataset contains several features: sex (categorical), length (continuous), diameter (continuous), height (continuous), whole_weight (continuous), shucked_weight (continuous), viscera_weight (continuous), shell_weight (continuous), and rings (integer). Our goal is to predict the variable rings, which is a good approximation of age (age is rings + 1.5).
We’ll use SparkML to pre-process the dataset (apply one or more feature transformers) and train a model with the Random Forest algorithm from SparkML.
Pass bucket information to your EMR Cluster
We’ll use our local notebook kernel to pass variables to our EMR cluster.
[ ]:
%%local
import sagemaker
sess = sagemaker.Session()
bucket = sess.default_bucket()
region = sess.boto_region_name
[ ]:
%%send_to_spark -i bucket -t str -n bucket
[ ]:
%%send_to_spark -i region -t str -n region
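As a quick check, you can print the variables in the remote Spark session to confirm they were received (the values will reflect your own bucket and region):
[ ]:
print(bucket, region)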
Define the schema of the dataset
In the next cell, we will define the schema of the Abalone dataset and provide it to Spark so that it can parse the CSV file properly.
[ ]:
schema = StructType(
    [
        StructField("sex", StringType(), True),
        StructField("length", DoubleType(), True),
        StructField("diameter", DoubleType(), True),
        StructField("height", DoubleType(), True),
        StructField("whole_weight", DoubleType(), True),
        StructField("shucked_weight", DoubleType(), True),
        StructField("viscera_weight", DoubleType(), True),
        StructField("shell_weight", DoubleType(), True),
        StructField("rings", DoubleType(), True),
    ]
)
Read data directly from S3
Next we will use Spark’s built-in CSV reader to read data directly from S3 into a DataFrame and inspect its first five rows.
After that, we will split the DataFrame 80/20 into train and validation sets so that we can train the model on the training portion and measure its performance on the validation portion.
[ ]:
total_df = spark.read.csv(
    f"s3://sagemaker-example-files-prod-{region}/datasets/tabular/uci_abalone/abalone.csv",
    header=False,
    schema=schema,
)
total_df.show(5)
(train_df, validation_df) = total_df.randomSplit([0.8, 0.2])
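As a quick sanity check, you can count the rows in each split (the exact numbers vary from run to run because randomSplit is approximate):
[ ]:
print(train_df.count(), validation_df.count())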
Define the feature transformers
The Abalone dataset has one categorical column, sex, which needs to be converted to integer format before it can be passed to the Random Forest algorithm.
For that, we use StringIndexer and OneHotEncoder from Spark to transform the categorical column, and then a VectorAssembler to produce a flat, one-dimensional feature vector for each data point so that it can be used with the Random Forest algorithm.
[ ]:
from pyspark.ml.feature import (
    StringIndexer,
    VectorIndexer,
    OneHotEncoder,
    VectorAssembler,
    IndexToString,
)
sex_indexer = StringIndexer(inputCol="sex", outputCol="indexed_sex")
sex_encoder = OneHotEncoder(inputCols=["indexed_sex"], outputCols=["sex_vec"])
assembler = VectorAssembler(
    inputCols=[
        "sex_vec",
        "length",
        "diameter",
        "height",
        "whole_weight",
        "shucked_weight",
        "viscera_weight",
        "shell_weight",
    ],
    outputCol="features",
)
Define the Random Forest model and perform training
After the data is preprocessed, we define a RandomForestRegressor, build a Pipeline comprising both the feature-transformation and training stages, and train the Pipeline by calling .fit().
[ ]:
from pyspark.ml import Pipeline
from pyspark.ml.regression import RandomForestRegressor
rf = RandomForestRegressor(labelCol="rings", featuresCol="features", maxDepth=6, numTrees=18)
pipeline = Pipeline(stages=[sex_indexer, sex_encoder, assembler, rf])
model = pipeline.fit(train_df)
Use the trained Model to transform the train and validation datasets
Next we will use this trained Model to transform our training and validation datasets, look at some sample output, and measure the performance scores. The Model applies the feature transformers to the data before passing it to the Random Forest.
[ ]:
transformed_train_df = model.transform(train_df)
transformed_validation_df = model.transform(validation_df)
transformed_validation_df.select("prediction").show(5)
Evaluating the model on the train and validation datasets
Using Spark’s RegressionEvaluator, we can calculate the RMSE (root mean squared error) on our train and validation datasets to evaluate performance. If the performance numbers are not satisfactory, we can retrain the model with different Random Forest parameters or add/remove feature transformers.
[ ]:
from pyspark.ml.evaluation import RegressionEvaluator
evaluator = RegressionEvaluator(labelCol="rings", predictionCol="prediction", metricName="rmse")
train_rmse = evaluator.evaluate(transformed_train_df)
validation_rmse = evaluator.evaluate(transformed_validation_df)
print("Train RMSE = %g" % train_rmse)
print("Validation RMSE = %g" % validation_rmse)
Using MLeap to serialize the model
By calling the serializeToBundle method from the MLeap library, we can store the Model in a serialization format that can later be used for inference by sagemaker-sparkml-serving.
If this step fails with the error "JavaPackage is not callable", it means the MLeap JARs have not been set up on the classpath properly.
[ ]:
model.serializeToBundle("jar:file:/tmp/model.zip", transformed_validation_df)
Convert the model to tar.gz format
SageMaker expects model artifacts to be packaged in tar.gz format, but MLeap produces a zip archive. In the next cell, we unzip the model artifacts and repackage them in tar.gz format.
[ ]:
import zipfile
import tarfile

# Unpack the MLeap bundle produced by serializeToBundle.
with zipfile.ZipFile("/tmp/model.zip") as zf:
    zf.extractall("/tmp/model")

# Repackage it into the tar.gz layout that SageMaker expects.
with tarfile.open("/tmp/model.tar.gz", "w:gz") as tar:
    tar.add("/tmp/model/bundle.json", arcname="bundle.json")
    tar.add("/tmp/model/root", arcname="root")
Upload the trained model artifacts to S3
Finally, we upload the trained and serialized model artifacts to S3 so that they can be used for inference in SageMaker.
Note that we upload to the S3 bucket we passed to the cluster previously.
[ ]:
import os
import boto3
s3 = boto3.resource("s3")
file_name = os.path.join("emr/abalone/mleap", "model.tar.gz")
s3.Bucket(bucket).upload_file("/tmp/model.tar.gz", file_name)
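It can be handy to print the resulting S3 URI, since this is the model_data location we will hand to SageMaker below:
[ ]:
print("s3://{}/{}".format(bucket, file_name))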
Delete model artifacts from local disk (optional)
If you are training multiple ML models on the same host and using the same location to save the MLeap-serialized model, then you need to delete the model artifacts from the local disk to prevent the MLeap library from failing with a "file already exists" error.
[ ]:
import shutil
os.remove("/tmp/model.zip")
os.remove("/tmp/model.tar.gz")
shutil.rmtree("/tmp/model")
Hosting the model in SageMaker
Now the second phase of this notebook begins, where we will host this model in SageMaker and perform predictions against it.
Hosting a model in SageMaker requires two components:
A Docker image residing in ECR.
A trained model residing in S3.
For SparkML, a Docker image for MLeap-based SparkML serving has already been prepared and uploaded to ECR by the SageMaker team, and anyone can use it for hosting. For more information on this, please see SageMaker SparkML Serving.
The MLeap-serialized model was uploaded to S3 as part of the Spark job we executed in EMR in the previous steps.
Creating the endpoint for prediction
Next we’ll create the SageMaker endpoint that will be used for online prediction.
For this, we create an instance of SparkMLModel from sagemaker-python-sdk, which takes the location of the model artifacts that we uploaded to S3 as part of the EMR job.
Passing the schema of the payload via environment variable
The SparkML server also needs to know the schema of the payload that will be passed to it when the predict method is called. To avoid having to pass the schema with every request, sagemaker-sparkml-serving lets you pass it via an environment variable when creating the model definition.
We’ll see later that you can override this schema on a per-request basis by passing it as part of the individual request payload.
This schema definition should also be passed while creating the instance of SparkMLModel.
[ ]:
%%local
import json
schema = {
    "input": [
        {"name": "sex", "type": "string"},
        {"name": "length", "type": "double"},
        {"name": "diameter", "type": "double"},
        {"name": "height", "type": "double"},
        {"name": "whole_weight", "type": "double"},
        {"name": "shucked_weight", "type": "double"},
        {"name": "viscera_weight", "type": "double"},
        {"name": "shell_weight", "type": "double"},
    ],
    "output": {"name": "prediction", "type": "double"},
}
schema_json = json.dumps(schema, indent=2)
[ ]:
%%local
from time import gmtime, strftime
import time
timestamp_prefix = strftime("%Y-%m-%d-%H-%M-%S", gmtime())
import boto3
import sagemaker
from sagemaker import get_execution_role
from sagemaker.sparkml.model import SparkMLModel
boto3_session = boto3.session.Session()
sagemaker_client = boto3.client("sagemaker")
sagemaker_runtime_client = boto3.client("sagemaker-runtime")
# Initialize sagemaker session
session = sagemaker.Session(
    boto_session=boto3_session,
    sagemaker_client=sagemaker_client,
    sagemaker_runtime_client=sagemaker_runtime_client,
)
role = get_execution_role()
[ ]:
%%local
# S3 location of where you uploaded your trained and serialized SparkML model
sparkml_data = "s3://{}/{}/{}".format(bucket, "emr/abalone/mleap", "model.tar.gz")
model_name = "sparkml-abalone-" + timestamp_prefix
sparkml_model = SparkMLModel(
    model_data=sparkml_data,
    role=role,
    spark_version="3.3",
    sagemaker_session=session,
    name=model_name,
    # passing the schema defined above by using an environment
    # variable that sagemaker-sparkml-serving understands
    env={"SAGEMAKER_SPARKML_SCHEMA": schema_json},
)
[ ]:
%%local
endpoint_name = "sparkml-abalone-ep-" + timestamp_prefix
sparkml_model.deploy(
    initial_instance_count=1, instance_type="ml.c4.xlarge", endpoint_name=endpoint_name
)
Invoking the newly created inference endpoint with a payload to transform the data
Now we will invoke the endpoint with a valid payload that sagemaker-sparkml-serving can recognize. There are three ways in which the input payload can be passed in the request:
Pass it as a valid CSV string. In this case, the schema passed via the environment variable will be used to determine the schema. For the CSV format, every column in the input has to be a basic datatype (e.g. int, double, string); it cannot be a Spark Array or Vector.
Pass it as a valid JSON string. In this case as well, the schema passed via the environment variable will be used to infer the schema. With the JSON format, every column in the input can be a basic datatype or a Spark Vector or Array, provided that the corresponding entry in the schema specifies the correct type.
Pass the request in JSON format along with both the schema and the data. In this case, the schema passed in the payload takes precedence over the one passed via the environment variable (if any).
Passing the payload in CSV format
We will first see how the payload can be passed to the endpoint in CSV format.
[ ]:
%%local
from sagemaker.predictor import Predictor
from sagemaker.serializers import CSVSerializer, JSONSerializer
from sagemaker.deserializers import JSONDeserializer
payload = "F,0.515,0.425,0.14,0.766,0.304,0.1725,0.255"
predictor = Predictor(
    endpoint_name=endpoint_name, sagemaker_session=session, serializer=CSVSerializer()
)
print(predictor.predict(payload))
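Since age is approximately rings + 1.5, you can turn the prediction into an age estimate. The following is a minimal sketch that assumes the response body is just the predicted value as text:
[ ]:
%%local
# Assumption: the endpoint returns the predicted number of rings as a bare
# numeric string; estimated age is rings + 1.5 per the dataset description.
raw = predictor.predict(payload)
rings_pred = float(raw.decode("utf-8") if isinstance(raw, bytes) else raw)
print("Estimated age: {:.1f} years".format(rings_pred + 1.5))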
Passing the payload in JSON format
We will now pass a different payload in JSON format.
[ ]:
%%local
payload = {"data": ["F", 0.515, 0.425, 0.14, 0.766, 0.304, 0.1725, 0.255]}
predictor = Predictor(
    endpoint_name=endpoint_name, sagemaker_session=session, serializer=JSONSerializer()
)
print(predictor.predict(payload))
Passing the payload with both schema and the data
Next we will pass an input payload comprising both the schema and the data. If you look carefully, this schema is slightly different from the one we passed via the environment variable: the positions of the length and sex columns have been swapped, and so has the corresponding data. The server parses the payload with this schema and works properly.
[ ]:
%%local
payload = {
    "schema": {
        "input": [
            {"name": "length", "type": "double"},
            {"name": "sex", "type": "string"},
            {"name": "diameter", "type": "double"},
            {"name": "height", "type": "double"},
            {"name": "whole_weight", "type": "double"},
            {"name": "shucked_weight", "type": "double"},
            {"name": "viscera_weight", "type": "double"},
            {"name": "shell_weight", "type": "double"},
        ],
        "output": {"name": "prediction", "type": "double"},
    },
    "data": [0.515, "F", 0.425, 0.14, 0.766, 0.304, 0.1725, 0.255],
}

predictor = Predictor(
    endpoint_name=endpoint_name, sagemaker_session=session, serializer=JSONSerializer()
)
print(predictor.predict(payload))
Deleting the Endpoint (Optional)
Next we will delete the endpoint so that you do not incur the cost of keeping it running.
[ ]:
%%cleanup -f
[ ]:
%%local
predictor.delete_endpoint()
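If you also want to remove the SageMaker model resource created during deployment, you can delete it as well (optional cleanup using the SDK’s delete_model method):
[ ]:
%%local
sparkml_model.delete_model()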
Notebook CI Test Results
This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.