Introduction to SageMaker HuggingFace - Text Classification




  1. Set Up

  2. Select a Text-Classification Model

  3. Run inference on the pre-trained model

  4. Fine-tune the pre-trained model on a custom dataset

  5. Run Batch Transform

1. Set Up


To train and host on Amazon SageMaker, we need to set up and authenticate the use of AWS services. Here, we use the execution role associated with the current notebook instance as the AWS account role with SageMaker access. It has the necessary permissions, including access to your data in S3.


[ ]:
!pip install sagemaker --upgrade --quiet
[ ]:
import boto3
from sagemaker.session import Session

# A single SageMaker session provides the execution role, region, and default S3 bucket used below.
sess = Session()
aws_role = sess.get_caller_identity_arn()
aws_region = boto3.Session().region_name

2. Select a Text Classification Model


You can continue with the default model, or choose a different model from the dropdown generated upon running the next cell. A complete list of JumpStart fine-tuned models can also be accessed at JumpStart Fine-Tuned Models.

[ ]:
model_id = "huggingface-tc-bert-base-cased"
[ ]:
import IPython
from ipywidgets import Dropdown
from sagemaker.jumpstart.notebook_utils import list_jumpstart_models
from sagemaker.jumpstart.filters import And

# Retrieve all Text Classification models made available by SageMaker Built-In Algorithms.
filter_value = And("task == tc", "framework == huggingface")
tc_models = list_jumpstart_models(filter=filter_value)
# Display the model IDs in a dropdown for the user to select a model.
dropdown = Dropdown(
    value=model_id,
    options=tc_models,
    description="SageMaker Pre-Trained Text Classification Models:",
    style={"description_width": "initial"},
    layout={"width": "max-content"},
)
display(IPython.display.Markdown("## Select a pre-trained model from the dropdown below"))
display(dropdown)

Using Models not Present in the Dropdown


If you want to choose any other model that is not present in the dropdown and is available at Hugging Face Text-Classification, please choose huggingface-tc-models in the dropdown and pass the model ID in the HF_MODEL_ID variable. Inference on the models listed in the dropdown menu can be run in network isolation under VPC settings. However, when running inference on a model specified through HF_MODEL_ID, VPC settings with network isolation will not work.

[ ]:
# model_version="*" fetches the latest version of the model.
infer_model_id, infer_model_version = dropdown.value, "*"

hub = {}
HF_MODEL_ID = "distilbert-base-uncased-finetuned-sst-2-english"  # Pass any other HF_MODEL_ID from - https://huggingface.co/models?pipeline_tag=text-classification&sort=downloads
if infer_model_id == "huggingface-tc-models":
    hub["HF_MODEL_ID"] = HF_MODEL_ID
    hub["HF_TASK"] = "text-classification"

3. Run Inference on the pre-trained model


Using SageMaker, we can perform inference on the fine-tuned model. For this example, that means predicting, for an input sentence, the class label from one of the 2 classes of the SST2 dataset, or, for any other model chosen from Hugging Face Text-Classification, the class label from that model's own set of classes.

3.1. Deploy an Endpoint

[ ]:
from sagemaker.jumpstart.model import JumpStartModel

my_model = JumpStartModel(
    model_id=infer_model_id,
    env=hub,
    enable_network_isolation=False if infer_model_id == "huggingface-tc-models" else True,
)
model_predictor = my_model.deploy()

3.2. Example input sentences for inference


These examples are taken from the SST2 dataset, downloaded from TensorFlow. Apache 2.0 License. Dataset Homepage.

[ ]:
text1 = "astonishing ... ( frames ) profound ethical and philosophical questions in the form of dazzling pop entertainment"
text2 = "simply stupid , irrelevant and deeply , truly , bottomlessly cynical "

3.3. Query endpoint and parse response


Input to the endpoint is a single sentence. The response from the endpoint is a dictionary containing the predicted class label and a list of class label probabilities.
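
For illustration only, a verbose response has roughly the following shape (the values below are made up; the keys match those parsed in the cell after next, and the exact label strings may vary by model):

{"probabilities": [0.02, 0.98], "labels": ["LABEL_0", "LABEL_1"], "predicted_label": "LABEL_1"}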

[ ]:
newline, bold, unbold = "\n", "\033[1m", "\033[0m"


def query_endpoint(encoded_text):
    response = model_predictor.predict(
        encoded_text, {"ContentType": "application/x-text", "Accept": "application/json;verbose"}
    )
    return response


def parse_response(query_response):
    model_predictions = query_response
    probabilities, labels, predicted_label = (
        model_predictions["probabilities"],
        model_predictions["labels"],
        model_predictions["predicted_label"],
    )
    return probabilities, labels, predicted_label


for text in [text1, text2]:
    query_response = query_endpoint(text.encode("utf-8"))
    probabilities, labels, predicted_label = parse_response(query_response)
    print(
        f"Inference:{newline}"
        f"Input text: '{text}'{newline}"
        f"Model prediction: {probabilities}{newline}"
        f"Labels: {labels}{newline}"
        f"Predicted Label: {bold}{predicted_label}{unbold}{newline}"
    )

3.4. Clean up the endpoint

[ ]:
# Delete the SageMaker endpoint and the attached resources
model_predictor.delete_model()
model_predictor.delete_endpoint()

4. Fine-Tune the pre-trained model on a custom dataset


We support fine-tuning on any pre-trained model available on Hugging Face Fill-Mask and Text-Classification, though only the models in the dropdown list can be fine-tuned in network isolation. If you can’t find your choice of model to fine-tune in the dropdown list, please select huggingface-tc-models in the dropdown above and specify the ID of any model available in Hugging Face Fill-Mask or Text-Classification in the HF_MODEL_ID variable below.


[ ]:
HF_MODEL_ID = "distilbert-base-uncased"  # Specify the HF_MODEL_ID here from https://huggingface.co/models?pipeline_tag=fill-mask&sort=downloads or https://huggingface.co/models?pipeline_tag=text-classification&sort=downloads

Previously, we saw how to run inference on a fine-tuned model. Next, we discuss how a model can be fine-tuned on a custom dataset with any number of classes.

The Text Embedding model can be fine-tuned on any text classification dataset in the same way the model available for inference has been fine-tuned on the SST2 movie review dataset.

The model available for fine-tuning attaches a classification layer to the Text Embedding model and initializes the layer parameters to random values. The output dimension of the classification layer is determined based on the number of classes detected in the input data. The fine-tuning step fine-tunes all the model parameters to minimize prediction error on the input data and returns the fine-tuned model. The model returned by fine-tuning can be further deployed for inference. Below are the instructions for how the training data should be formatted for input to the model.

  • Input: A directory containing a ‘data.csv’ file.

    • Each row of the first column of ‘data.csv’ should have an integer class label between 0 and the number of classes minus 1.

    • Each row of the second column should have the corresponding text.

  • Output: A trained model that can be deployed for inference.

Below is an example of ‘data.csv’ file showing values in its first two columns. Note that the file should not have any header.

0  hide new secretions from the parental units
0  contains no wit , only labored gags
1  that loves its characters and communicates something rather beautiful about human nature

The SST2 dataset is downloaded from TensorFlow. Apache 2.0 License. Dataset Homepage.
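
If you are bringing your own dataset, the following optional cell is a minimal sketch, assuming a toy three-row dataset, of how a compliant ‘data.csv’ (two columns, no header) can be written with pandas:

[ ]:
import pandas as pd

# First column: integer class label; second column: the corresponding text.
my_data = pd.DataFrame(
    {
        "label": [0, 0, 1],
        "sentence": [
            "hide new secretions from the parental units",
            "contains no wit , only labored gags",
            "that loves its characters and communicates something rather beautiful about human nature",
        ],
    }
)
# header=False and index=False keep the file free of a header row, as required.
my_data.to_csv("data.csv", header=False, index=False)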

4.1. Selecting a Model

[ ]:
from sagemaker import image_uris, model_uris, script_uris, hyperparameters

model_id, model_version = dropdown.value, "*"
training_instance_type = "ml.p3.2xlarge"

4.2. Set Training parameters


Now that we are done with all the setup that is needed, we are ready to fine-tune our Text Classification model. To begin, let us create a sagemaker.estimator.Estimator object (https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html). This estimator will launch the training job.

There are two kinds of parameters that need to be set for training.

The first kind are the parameters for the training job:

  • Training data path: the S3 folder in which the input data is stored.

  • Output path: the S3 folder in which the training output is stored.

  • Training instance type: the type of machine on which to run the training. Typically, we use GPU instances for this training. We defined the training instance type above to fetch the correct train_image_uri.

The second kind are algorithm-specific training hyper-parameters. These are also used to specify the model name when we want to fine-tune a model that is not present in the dropdown list.

[ ]:
# Sample training data is available in this bucket
training_data_bucket = f"jumpstart-cache-prod-{aws_region}"
training_data_prefix = "training-datasets/SST/"

training_dataset_s3_path = f"s3://{training_data_bucket}/{training_data_prefix}"

output_bucket = sess.default_bucket()
output_prefix = "jumpstart-example-tc-training"

s3_output_location = f"s3://{output_bucket}/{output_prefix}/output"

For algorithm-specific hyper-parameters, we start by fetching a Python dictionary of the training hyper-parameters that the algorithm accepts, along with their default values. These can then be overridden with custom values.

[ ]:
from sagemaker import hyperparameters

# Retrieve the default hyper-parameters for fine-tuning the model
hyperparameters = hyperparameters.retrieve_default(model_id=model_id, model_version=model_version)

# [Optional] Override default hyperparameters with custom values
hyperparameters["batch_size"] = "64"

# Please set eval_accumulation_steps in hyperparameters to a smaller value if you get a CUDA out-of-memory error during evaluation.
# A smaller value triggers copying predictions from GPU to CPU more frequently, freeing GPU memory.
# hyperparameters['eval_accumulation_steps'] = "10"

We will use the HF_MODEL_ID passed earlier so that any of the Hugging Face Fill-Mask and Text-Classification models can be used.

[ ]:
if model_id == "huggingface-tc-models":
    hyperparameters["hub_key"] = HF_MODEL_ID

print(hyperparameters)

4.3. Train with Automatic Model Tuning (HPO)


Amazon SageMaker automatic model tuning, also known as hyperparameter tuning, finds the best version of a model by running many training jobs on your dataset using the algorithm and ranges of hyperparameters that you specify. It then chooses the hyperparameter values that result in a model that performs the best, as measured by a metric that you choose. We will use a HyperparameterTuner object to interact with the Amazon SageMaker hyperparameter tuning APIs.

[ ]:
from sagemaker.tuner import ContinuousParameter

# Use AMT for tuning and selecting the best model
use_amt = False

# Define objective metric, based on which the best model will be selected.
amt_metric_definitions = {
    "metrics": [{"Name": "val_accuracy", "Regex": "'eval_accuracy': ([0-9\\.]+)"}],
    "type": "Maximize",
}

# You can select from the hyperparameters supported by the model, and configure ranges of values to be searched for training the optimal model.(https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning-define-ranges.html)
hyperparameter_ranges = {
    "learning_rate": ContinuousParameter(0.00001, 0.0001, scaling_type="Logarithmic")
}

# Increase the total number of training jobs run by AMT, for increased accuracy (and training time).
max_jobs = 6
# Change the number of parallel training jobs run by AMT to reduce total training time, constrained by your account limits.
# If max_jobs equals max_parallel_jobs, Bayesian search degenerates to random search.
max_parallel_jobs = 2

4.4. Start Training


We start by creating the estimator object with all the required assets and then launch the training job.

[ ]:
from sagemaker.estimator import Estimator
from sagemaker.utils import name_from_base
from sagemaker.tuner import HyperparameterTuner
from sagemaker.jumpstart.estimator import JumpStartEstimator


training_metric_definitions = [
    {"Name": "val_accuracy", "Regex": "'eval_accuracy': ([0-9\\.]+)"},
    {"Name": "val_loss", "Regex": "'eval_loss': ([0-9\\.]+)"},
    {"Name": "train_loss", "Regex": "'loss': ([0-9\\.]+)"},
    {"Name": "val_f1", "Regex": "'eval_f1': ([0-9\\.]+)"},
    {"Name": "epoch", "Regex": "'epoch': ([0-9\\.]+)"},
]


# Create SageMaker Estimator instance
tc_estimator = JumpStartEstimator(
    hyperparameters=hyperparameters,
    model_id=dropdown.value,
    instance_type=training_instance_type,
    metric_definitions=training_metric_definitions,
    output_path=s3_output_location,
    enable_network_isolation=False if model_id == "huggingface-tc-models" else True,
)

# Define a job name; it is also used as the base name for the AMT tuning jobs below.
training_job_name = name_from_base(f"jumpstart-example-{model_id}-training")

if use_amt:
    hp_tuner = HyperparameterTuner(
        tc_estimator,
        amt_metric_definitions["metrics"][0]["Name"],
        hyperparameter_ranges,
        amt_metric_definitions["metrics"],
        max_jobs=max_jobs,
        max_parallel_jobs=max_parallel_jobs,
        objective_type=amt_metric_definitions["type"],
        base_tuning_job_name=training_job_name,
    )

    # Launch a SageMaker Tuning job to search for the best hyperparameters
    hp_tuner.fit({"training": training_dataset_s3_path})
else:
    # Launch a SageMaker Training job by passing s3 path of the training data
    tc_estimator.fit({"training": training_dataset_s3_path}, logs=True)

4.5. Extract Training performance metrics


Performance metrics such as training loss and validation accuracy/loss can be accessed through Amazon CloudWatch while the training job is running. We can also fetch these metrics and analyze them within the notebook.

[ ]:
from sagemaker import TrainingJobAnalytics

if use_amt:
    training_job_name = hp_tuner.best_training_job()
else:
    training_job_name = tc_estimator.latest_training_job.job_name


df = TrainingJobAnalytics(training_job_name=training_job_name).dataframe()
df.head(10)
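
The returned dataframe has timestamp, metric_name, and value columns, so the metrics can also be plotted directly. A minimal sketch, assuming matplotlib is available in the notebook environment:

[ ]:
import matplotlib.pyplot as plt

# Plot each metric emitted by the training job against elapsed time.
for metric_name in df["metric_name"].unique():
    metric_df = df[df["metric_name"] == metric_name]
    plt.plot(metric_df["timestamp"], metric_df["value"], label=metric_name)
plt.xlabel("timestamp (seconds since training start)")
plt.ylabel("metric value")
plt.legend()
plt.show()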

4.6. Deploy & run Inference on the fine-tuned model


A trained model does nothing on its own. We now want to use the model to perform inference. For this example, that means predicting the class label of an input sentence. We follow the same steps as in 3. Run Inference on the pre-trained model. We start by retrieving the artifacts for deploying an endpoint. However, instead of deploying the pre-trained model, we deploy the tc_estimator that we fine-tuned.

[ ]:
inference_instance_type = "ml.p2.xlarge"

# Retrieve the inference docker container uri
deploy_image_uri = image_uris.retrieve(
    region=None,
    framework=None,
    image_scope="inference",
    model_id=model_id,
    model_version=model_version,
    instance_type=inference_instance_type,
)
# Retrieve the inference script uri
deploy_source_uri = script_uris.retrieve(
    model_id=model_id, model_version=model_version, script_scope="inference"
)

endpoint_name = name_from_base(f"jumpstart-example-FT-{model_id}-")

# Use the estimator from the previous step to deploy to a SageMaker endpoint
finetuned_predictor = (hp_tuner if use_amt else tc_estimator).deploy(
    initial_instance_count=1,
    instance_type=inference_instance_type,
    entry_point="inference.py",
    image_uri=deploy_image_uri,
    source_dir=deploy_source_uri,
    endpoint_name=endpoint_name,
)
[ ]:
text1 = "astonishing ... ( frames ) profound ethical and philosophical questions in the form of dazzling pop entertainment"
text2 = "simply stupid , irrelevant and deeply , truly , bottomlessly cynical "
[ ]:
newline, bold, unbold = "\n", "\033[1m", "\033[0m"


def query_endpoint(encoded_text):
    response = finetuned_predictor.predict(
        encoded_text, {"ContentType": "application/x-text", "Accept": "application/json;verbose"}
    )
    return response


def parse_response(query_response):
    model_predictions = query_response
    probabilities, labels, predicted_label = (
        model_predictions["probabilities"],
        model_predictions["labels"],
        model_predictions["predicted_label"],
    )
    return probabilities, labels, predicted_label


for text in [text1, text2]:
    query_response = query_endpoint(text.encode("utf-8"))
    probabilities, labels, predicted_label = parse_response(query_response)
    print(
        f"Inference:{newline}"
        f"Input text: '{text}'{newline}"
        f"Model prediction: {probabilities}{newline}"
        f"Labels: {labels}{newline}"
        f"Predicted Label: {bold}{predicted_label}{unbold}{newline}"
    )
[ ]:
# Delete the SageMaker endpoint and the attached resources
finetuned_predictor.delete_model()
finetuned_predictor.delete_endpoint()

4.7. Incrementally train the fine-tuned model


Incremental training allows you to train a new model using an expanded dataset that contains an underlying pattern that was not accounted for in the previous training and which resulted in poor model performance. You can use the artifacts from an existing model and use an expanded dataset to train a new model. Incremental training saves both time and resources as you don’t need to retrain a model from scratch.

One may use any dataset (old or new) as long as the dataset format (the set of classes) remains the same. The incremental training step is similar to the fine-tuning step discussed above, with the following difference: in fine-tuning we start with a pre-trained model, whereas in incremental training we start with an existing fine-tuned model.

[ ]:
# We will only do the incremental training for the fine-tuned models
if model_id == "huggingface-tc-models":
    del hyperparameters["hub_key"]
[ ]:
# Identify the previously trained model path based on the output location where artifacts are stored previously and the training job name.

if use_amt:  # If using amt, select the model for the best training job.
    sage_client = boto3.Session().client("sagemaker")
    tuning_job_result = sage_client.describe_hyper_parameter_tuning_job(
        HyperParameterTuningJobName=hp_tuner._current_job_name
    )
    last_training_job_name = tuning_job_result["BestTrainingJob"]["TrainingJobName"]
else:
    last_training_job_name = tc_estimator._current_job_name

last_trained_model_path = f"{s3_output_location}/{last_training_job_name}/output/model.tar.gz"
[ ]:
# Retrieve the docker image
train_image_uri = image_uris.retrieve(
    region=None,
    framework=None,
    model_id=model_id,
    model_version=model_version,
    image_scope="training",
    instance_type=training_instance_type,
)
# Retrieve the training script
train_source_uri = script_uris.retrieve(
    model_id=model_id, model_version=model_version, script_scope="training"
)
[ ]:
incremental_train_output_prefix = "jumpstart-example-ic-incremental-training"

incremental_s3_output_location = f"s3://{output_bucket}/{incremental_train_output_prefix}/output"

incremental_training_job_name = name_from_base(f"jumpstart-example-{model_id}-incremental-training")


incremental_train_estimator = Estimator(
    role=aws_role,
    image_uri=train_image_uri,
    source_dir=train_source_uri,
    model_uri=last_trained_model_path,
    entry_point="transfer_learning.py",
    instance_count=1,
    instance_type=training_instance_type,
    max_run=360000,
    hyperparameters=hyperparameters,
    output_path=incremental_s3_output_location,
    base_job_name=incremental_training_job_name,
    metric_definitions=training_metric_definitions,
)

incremental_train_estimator.fit({"training": training_dataset_s3_path}, logs=True)

Once trained, we can use the same steps as in 4.6. Deploy & run Inference on the fine-tuned model to deploy the model.
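
For example, the following optional cell is a minimal sketch of that deployment, reusing the inference image, script, and instance type from section 4.6 (incremental_predictor is a name introduced here for illustration):

[ ]:
# Deploy the incrementally trained model to a new endpoint.
incremental_predictor = incremental_train_estimator.deploy(
    initial_instance_count=1,
    instance_type=inference_instance_type,
    entry_point="inference.py",
    image_uri=deploy_image_uri,
    source_dir=deploy_source_uri,
    endpoint_name=name_from_base(f"jumpstart-example-FT-{model_id}-incremental-"),
)

# Clean up when done, as in section 3.4:
# incremental_predictor.delete_model()
# incremental_predictor.delete_endpoint()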

5. Run Batch Transform


Using SageMaker, we can perform batch inference on the fine-tuned model for large datasets. For this example, that means predicting, for each input sentence, the class label from one of the 2 classes of the SST2 dataset. Batch inference is useful in the following scenarios:

  • Preprocess datasets to remove noise or bias that interferes with training or inference from your dataset.

  • Get inferences from large datasets.

  • Run inference when you don’t need a persistent endpoint.

  • Associate input records with inferences to assist the interpretation of results.

Below is an example of ‘test.csv’ file showing input sentences. Note that the file should not have any header.

hide new secretions from the parental units

contains no wit , only labored gags

that loves its characters and communicates something rather beautiful about human nature


5.1. Prepare data for Batch Transform


We will use the tiny SST2 dataset for running batch inference. We will download the data locally, remove the labels, and upload it to S3 for batch inference.

[ ]:
import pandas as pd

s3 = boto3.client("s3")
training_data_tiny_prefix = "training-datasets/SST-tiny/"
s3.download_file(training_data_bucket, training_data_tiny_prefix + "data.csv", "data.csv")
train_data = pd.read_csv("./data.csv", header=None, names=["label", "sentence"])
train_data.head(5)
[ ]:
test_data = train_data[["sentence"]]
test_data.to_csv("test_data.csv", header=False, index=False)
input_path = f"s3://{output_bucket}/{output_prefix}/test/"
output_path = f"s3://{output_bucket}/{output_prefix}/batch_output/"
s3.upload_file("test_data.csv", output_bucket, f"{output_prefix}/test/data.csv")

5.2. Deploy Model for Batch Transform Job


We will use the deploy_image_uri and deploy_source_uri defined earlier, along with the base_model_uri of the pre-trained model, for deploying the model for batch inference. To host the pre-trained model, we create an instance of sagemaker.model.Model (https://sagemaker.readthedocs.io/en/stable/api/inference/model.html) and deploy it.

[ ]:
# Create the SageMaker model instance. Note that we need to pass the Predictor class when we deploy the model
# through the Model class, so that we can run inference through the SageMaker API.
from sagemaker.model import Model
from sagemaker.predictor import Predictor

# Retrieve the base model uri.

base_model_uri = model_uris.retrieve(
    model_id=infer_model_id, model_version="1.*", model_scope="inference"
)

model = Model(
    image_uri=deploy_image_uri,
    source_dir=deploy_source_uri,
    model_data=base_model_uri,
    entry_point="inference.py",
    role=aws_role,
    predictor_cls=Predictor,
)

# Creating the Batch transformer object
batch_transformer = model.transformer(
    instance_count=1,
    instance_type=inference_instance_type,
    output_path=output_path,
    assemble_with="Line",
    accept="text/csv;verbose",
    max_payload=1,
)

# Make predictions on the input data
batch_transformer.transform(input_path, content_type="text/csv", split_type="Line")

batch_transformer.wait()

5.3. Compare Predictions With the Ground Truth


We will compare the predictions on the tiny SST2 data with the actual labels.

[ ]:
s3.download_file(
    output_bucket, output_prefix + "/batch_output/" + "data.csv.out", "predict.csv.out"
)
import ast

with open("predict.csv.out", "r") as predict_file:
    predict_all = [ast.literal_eval(line.rstrip()) for line in predict_file]

data_size = len(test_data)
df_predict = pd.DataFrame(predict_all)
# The predicted label is a string whose last character is the class index; keep that
# trailing digit as an integer for comparison with the ground-truth labels.
df_predict["predicted_label"] = df_predict["predicted_label"].str[-1].astype(int)
accuracy = (
    sum(
        train_data.loc[: data_size - 1, "label"]
        == df_predict.loc[: data_size - 1, "predicted_label"]
    )
    / data_size
)

print("The accuracy of the model on the SST2 tiny data is: ", accuracy)
