SageMaker Serverless Inference


This notebook’s CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook.

[CI badge: us-west-2]


HuggingFace Text Classification example

Amazon SageMaker Serverless Inference is a purpose-built inference option that makes it easy for you to deploy and scale ML models. Serverless Inference is ideal for workloads which have idle periods between traffic spurts and can tolerate cold starts. Serverless endpoints automatically launch compute resources and scale them in and out depending on traffic, eliminating the need to choose instance types or manage scaling policies. This takes away the undifferentiated heavy lifting of selecting and managing servers. Serverless Inference integrates with AWS Lambda to offer you high availability, built-in fault tolerance and automatic scaling.

Serverless Inference is a great choice for customers that have intermittent or unpredictable prediction traffic; for example, a document processing service that extracts and analyzes data on a periodic basis. Customers that choose Serverless Inference should make sure that their workloads can tolerate cold starts. A cold start can occur when your endpoint doesn’t receive traffic for a period of time, or when your concurrent requests exceed the current concurrent request usage. The cold start time will depend on your model size, how long it takes to download, and your container startup time.

Introduction

Text Classification can be used to solve various use cases such as sentiment analysis, spam detection, hashtag prediction, etc.

This notebook demonstrates the use of the HuggingFace `transformers library <https://huggingface.co/transformers/>`__ together with a custom Amazon sagemaker-sdk extension to fine-tune a pre-trained transformer on multi-class text classification. In particular, the pre-trained model will be fine-tuned using the `20 newsgroups dataset <http://qwone.com/~jason/20Newsgroups/>`__. To get started, we need to set up the environment with a few prerequisite steps for permissions, configurations, and so on.

Notebook Setting

  • SageMaker Classic Notebook Instance: ml.m5.xlarge Notebook Instance & conda_pytorch_p36 Kernel

  • SageMaker Studio: Python 3 (PyTorch 1.6 Python 3.6 CPU Optimized)

  • Regions Available: SageMaker Serverless Inference is currently available in the following regions: US East (Northern Virginia), US East (Ohio), US West (Oregon), EU (Ireland), Asia Pacific (Tokyo) and Asia Pacific (Sydney)

Table of Contents

  • Setup

  • Data Preprocessing

  • Model Training

  • Deployment

    • Endpoint Configuration (Adjust for Serverless)

    • Serverless Endpoint Creation

    • Endpoint Invocation

  • Cleanup

  • Conclusion

Development Environment and Permissions

Setup

If you run this notebook in SageMaker Studio, you need to make sure ipywidgets is installed and then restart the kernel. To do so, please uncomment the code in the next cell and run it.

[ ]:
%%capture
import IPython
import sys

!{sys.executable} -m pip install ipywidgets
# IPython.Application.instance().kernel.do_shutdown(True)  # restart the kernel so the newly installed packages are picked up

Let’s install the required packages from HuggingFace and SageMaker

[ ]:
import sys

!{sys.executable} -m pip install "scikit_learn==1.2.0" "sagemaker>=2.86.1" "transformers==4.6.1" "datasets==1.6.2" "nltk==3.8"

Make sure the SageMaker Python SDK version is >= 2.86.1

[ ]:
import sagemaker
import boto3

print(sagemaker.__version__)
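
If you want the notebook to fail fast when an older SDK version is installed, an optional check (a minimal sketch assuming the packaging library is available, as it is in most Python environments) could look like this:

[ ]:
from packaging import version

# Verify that the installed SageMaker Python SDK meets the minimum version required by this notebook
assert version.parse(sagemaker.__version__) >= version.parse(
    "2.86.1"
), "Please upgrade the SageMaker Python SDK to >= 2.86.1"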
[ ]:
sess = sagemaker.Session()
# sagemaker session bucket -> used for uploading data, models and logs
# SageMaker will automatically create this bucket if it does not exist
sagemaker_session_bucket = None
if sagemaker_session_bucket is None and sess is not None:
    # set to default bucket if a bucket name is not given
    sagemaker_session_bucket = sess.default_bucket()

role = sagemaker.get_execution_role()
sess = sagemaker.Session(default_bucket=sagemaker_session_bucket)
region = sess.boto_region_name

s3_prefix = "huggingface_serverless/20_newsgroups"

print(f"sagemaker role arn: {role}")
print(f"sagemaker bucket: {sess.default_bucket()}")
print(f"sagemaker session region: {sess.boto_region_name}")

Data Preparation

Now we’ll download a dataset from the web on which we want to train the text classification model.

In this example, let us train the text classification model on the `20 newsgroups dataset <http://qwone.com/~jason/20Newsgroups/>`__. The 20 newsgroups dataset consists of 20000 messages taken from 20 Usenet newsgroups.

[ ]:
import os
import shutil

data_dir = "20_newsgroups_bulk"
if os.path.exists(data_dir):  # cleanup existing data folder
    shutil.rmtree(data_dir)
[ ]:
s3 = boto3.client("s3")
s3.download_file(
    f"sagemaker-example-files-prod-{region}",
    "datasets/text/20_newsgroups/20_newsgroups_bulk.tar.gz",
    "20_newsgroups_bulk.tar.gz",
)
[ ]:
!tar xzf 20_newsgroups_bulk.tar.gz
!ls 20_newsgroups_bulk
[ ]:
file_list = [os.path.join(data_dir, f) for f in os.listdir(data_dir)]
print("Number of files:", len(file_list))
[ ]:
import pandas as pd

documents_count = 0
for file in file_list:
    df = pd.read_csv(file, header=None, names=["text"])
    documents_count = documents_count + df.shape[0]
print("Number of documents:", documents_count)

Let’s inspect the dataset files and analyze the categories.

[ ]:
categories_list = [f.split("/")[1] for f in file_list]
categories_list

We can see that the dataset consists of 20 topics, each in a different file.

Let us inspect the dataset to get some understanding of how the data and labels are provided.

[ ]:
df = pd.read_csv("./20_newsgroups_bulk/rec.motorcycles", header=None, names=["text"])
df
[ ]:
df["text"][0]
[ ]:
df = pd.read_csv("./20_newsgroups_bulk/comp.sys.mac.hardware", header=None, names=["text"])
df
[ ]:
df["text"][0]

As we can see from the above, there is a single file for each class in the dataset. Each record is a plain-text paragraph with a header, body, footer, and quotes. We will need to process the records into a suitable data format.

Data Preprocessing

We need to preprocess the dataset to remove the header, footer, quotes, leading/trailing whitespace, extra spaces, tabs, and HTML tags/markups.

Download the nltk tokenizer and other libraries

[ ]:
import nltk
from nltk.tokenize import word_tokenize
import re
import string

nltk.download("punkt")
[ ]:
from sklearn.datasets._twenty_newsgroups import (
    strip_newsgroup_header,
    strip_newsgroup_quoting,
    strip_newsgroup_footer,
)

The following function will remove the header, footer, and quotes (of earlier messages) from each text.

[ ]:
def strip_newsgroup_item(item):
    item = strip_newsgroup_header(item)
    item = strip_newsgroup_quoting(item)
    item = strip_newsgroup_footer(item)
    return item

The following function will take care of removing leading/trailing whitespace, extra spaces, tabs, and HTML tags/markups.

[ ]:
def process_text(texts):
    final_text_list = []
    for text in texts:
        # Check if the sentence is a missing value
        if not isinstance(text, str):
            text = ""

        filtered_sentence = []

        # Lowercase
        text = text.lower()

        # Remove leading/trailing whitespace, extra space, tabs, and HTML tags/markups
        text = text.strip()
        text = re.sub("\[.*?\]", "", text)
        text = re.sub("https?://\S+|www\.\S+", "", text)
        text = re.sub("<.*?>+", "", text)
        text = re.sub("[%s]" % re.escape(string.punctuation), "", text)
        text = re.sub("\n", "", text)
        text = re.sub("\w*\d\w*", "", text)

        for w in word_tokenize(text):
            # We are applying some custom filtering here, feel free to try different things
            # Check if it is not numeric
            if not w.isnumeric():
                filtered_sentence.append(w)
        final_string = " ".join(filtered_sentence)  # final string of cleaned words

        final_text_list.append(final_string)

    return final_text_list

Now we will read each of the 20_newsgroups dataset files, call the strip_newsgroup_item and process_text functions we defined earlier, and then aggregate all the data into one dataframe.

[ ]:
all_categories_df = pd.DataFrame()

for file in file_list:
    print(f"Processing {file}")
    label = file.split("/")[1]
    df = pd.read_csv(file, header=None, names=["text"])
    df["text"] = df["text"].apply(strip_newsgroup_item)
    df["text"] = process_text(df["text"].tolist())
    df["label"] = label
    all_categories_df = pd.concat([all_categories_df, df], ignore_index=True)  # DataFrame.append was removed in newer pandas versions

Let’s inspect how many categories there are in our dataset.

[ ]:
all_categories_df["label"].value_counts()

In our dataset there are 20 categories, which is too many, so we will combine the sub-categories into broader ones.

[ ]:
# replace to politics
all_categories_df["label"].replace(
    {
        "talk.politics.misc": "politics",
        "talk.politics.guns": "politics",
        "talk.politics.mideast": "politics",
    },
    inplace=True,
)

# replace to recreational
all_categories_df["label"].replace(
    {
        "rec.sport.hockey": "recreational",
        "rec.sport.baseball": "recreational",
        "rec.autos": "recreational",
        "rec.motorcycles": "recreational",
    },
    inplace=True,
)

# replace to religion
all_categories_df["label"].replace(
    {
        "soc.religion.christian": "religion",
        "talk.religion.misc": "religion",
        "alt.atheism": "religion",
    },
    inplace=True,
)

# replace to computer
all_categories_df["label"].replace(
    {
        "comp.windows.x": "computer",
        "comp.sys.ibm.pc.hardware": "computer",
        "comp.os.ms-windows.misc": "computer",
        "comp.graphics": "computer",
        "comp.sys.mac.hardware": "computer",
    },
    inplace=True,
)
# replace to sales
all_categories_df["label"].replace({"misc.forsale": "sales"}, inplace=True)

# replace to science
all_categories_df["label"].replace(
    {
        "sci.crypt": "science",
        "sci.electronics": "science",
        "sci.med": "science",
        "sci.space": "science",
    },
    inplace=True,
)

Now we are left with 6 categories, which is much better.

[ ]:
all_categories_df["label"].value_counts()

Let’s calculate the number of words in each row.

[ ]:
all_categories_df["word_count"] = all_categories_df["text"].apply(lambda x: len(str(x).split()))
all_categories_df.head()

Let’s get basic statistics about the dataset.

[ ]:
all_categories_df["word_count"].describe()

We can see that the mean value is around 159 words. However, there are outliers, such as a text with 11351 words. Such outliers can make it harder for the model to achieve good performance, so we will drop those rows.

Let’s drop empty rows first.

[ ]:
no_text = all_categories_df[all_categories_df["word_count"] == 0]
print(len(no_text))

# drop these rows
all_categories_df.drop(no_text.index, inplace=True)

Next, let’s drop the rows that are longer than 256 words; this threshold is reasonably close to the mean word count and removes the long outliers, making it easier for the model to train.

[ ]:
long_text = all_categories_df[all_categories_df["word_count"] > 256]
print(len(long_text))

# drop these rows
all_categories_df.drop(long_text.index, inplace=True)
[ ]:
all_categories_df["label"].value_counts()

Let’s get basic statistics about the dataset after removing the outliers.

[ ]:
all_categories_df["word_count"].describe()

This looks much more balanced.

Now we drop the word_count column, as we will not need it anymore.

[ ]:
all_categories_df.drop(columns="word_count", axis=1, inplace=True)
[ ]:
all_categories_df

Let’s convert the categorical labels to integers in order to prepare the dataset for training.

[ ]:
categories = all_categories_df["label"].unique().tolist()
categories
[ ]:
categories.index("recreational")
[ ]:
all_categories_df["label"] = all_categories_df["label"].apply(lambda x: categories.index(x))
[ ]:
all_categories_df["label"].value_counts()

We partition the dataset into an 80% training set and a 20% test set, and save them to CSV files.

[ ]:
from sklearn.model_selection import train_test_split

train_df, test_df = train_test_split(all_categories_df, test_size=0.2)
[ ]:
train_df.to_csv("train.csv", index=None)
[ ]:
test_df.to_csv("test.csv", index=None)

Let’s inspect the label distribution in the training dataset

[ ]:
train_df["label"].value_counts()

Let’s inspect the label distribution in the test dataset

[ ]:
test_df["label"].value_counts()

Tokenization

A tokenizer is in charge of preparing the inputs for a model. The library contains tokenizers for all of the models. Most of the tokenizers are available in two flavors: a full Python implementation and a “Fast” implementation based on the Rust library tokenizers. The “Fast” implementations allow:

  • A significant speed-up in particular when doing batched tokenization.

  • Additional methods to map between the original string (characters and words) and the token space (e.g. getting the index of the token comprising a given character, or the span of characters corresponding to a given token).

[ ]:
from datasets import load_dataset
from transformers import AutoTokenizer

# tokenizer used in preprocessing
tokenizer_name = "distilbert-base-uncased"
[ ]:
# download tokenizer
tokenizer = AutoTokenizer.from_pretrained(tokenizer_name)
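
As a quick, optional illustration of the “Fast” tokenizer’s offset mapping mentioned above, the following sketch maps each token back to its character span in the original string (return_offsets_mapping is only supported by “Fast” tokenizers):

[ ]:
# Optional: illustrate the fast tokenizer's character <-> token mapping
encoded = tokenizer("serverless inference is convenient", return_offsets_mapping=True)
tokens = tokenizer.convert_ids_to_tokens(encoded["input_ids"])
for token, offsets in zip(tokens, encoded["offset_mapping"]):
    print(token, offsets)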

Load train and test datasets

Let’s create a Dataset from our local csv files for training and test we saved earlier.

[ ]:
dataset = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})
[ ]:
dataset
[ ]:
dataset["train"]
[ ]:
dataset["train"][0]
[ ]:
dataset["test"]
[ ]:
dataset["test"][0]
[ ]:
# tokenizer helper function
def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True)
[ ]:
train_dataset = dataset["train"]
test_dataset = dataset["test"]

Tokenize train and test datasets

Let’s tokenize the train dataset

[ ]:
train_dataset = train_dataset.map(tokenize, batched=True)

Let’s tokenize the test dataset

[ ]:
test_dataset = test_dataset.map(tokenize, batched=True)

Set format for PyTorch

[ ]:
train_dataset = train_dataset.rename_column("label", "labels")
train_dataset.set_format("torch", columns=["input_ids", "attention_mask", "labels"])
test_dataset = test_dataset.rename_column("label", "labels")
test_dataset.set_format("torch", columns=["input_ids", "attention_mask", "labels"])

Uploading data to sagemaker_session_bucket

Now that we have processed the datasets, we will upload them to S3.

[ ]:
import botocore
from datasets.filesystems import S3FileSystem

s3 = S3FileSystem()

# save train_dataset to s3
training_input_path = f"s3://{sess.default_bucket()}/{s3_prefix}/train"
train_dataset.save_to_disk(training_input_path, fs=s3)

# save test_dataset to s3
test_input_path = f"s3://{sess.default_bucket()}/{s3_prefix}/test"
test_dataset.save_to_disk(test_input_path, fs=s3)

print(training_input_path)
print(test_input_path)

Training the HuggingFace model for supervised text classification

In order to create a SageMaker training job, we need a HuggingFace Estimator. The Estimator handles end-to-end Amazon SageMaker training and deployment tasks. In the Estimator we define which fine-tuning script should be used as entry_point, which instance_type should be used, which hyperparameters are passed in, and so on:

huggingface_estimator = HuggingFace(entry_point='train.py',
                            source_dir='./code',
                            instance_type='ml.p3.2xlarge',
                            instance_count=1,
                            volume_size=256,
                            role=role,
                            transformers_version='4.6',
                            pytorch_version='1.7',
                            py_version='py36',
                            hyperparameters = {'epochs': 1,
                                               'model_name':'distilbert-base-uncased',
                                               'num_labels': 6
                                              })

When we create a SageMaker training job, SageMaker takes care of starting and managing all the required EC2 instances for us with the HuggingFace container, uploads the provided fine-tuning script train.py, and downloads the data from our sagemaker_session_bucket into the container at /opt/ml/input/data. Then it starts the training job by running:

/opt/conda/bin/python train.py --epochs 1 --model_name distilbert-base-uncased --num_labels 6

The hyperparameters you define in the HuggingFace estimator are passed in as named arguments.

SageMaker provides useful properties about the training environment through various environment variables, including the following (a sketch of how a training script can read them appears after this list):

  • SM_MODEL_DIR: A string that represents the path where the training job writes the model artifacts to. After training, artifacts in this directory are uploaded to S3 for model hosting.

  • SM_NUM_GPUS: An integer representing the number of GPUs available to the host.

  • SM_CHANNEL_XXXX: A string that represents the path to the directory that contains the input data for the specified channel. For example, if you specify two input channels in the HuggingFace estimator’s fit call, named train and test, the environment variables SM_CHANNEL_TRAIN and SM_CHANNEL_TEST are set.
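
The following is an illustrative sketch of how a training script such as train.py might read these hyperparameters and environment variables; the actual train.py shipped in ./code may be structured differently.

# Illustrative sketch only -- not the actual contents of ./code/train.py
import argparse
import os

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    # hyperparameters defined in the HuggingFace estimator arrive as named arguments
    parser.add_argument("--epochs", type=int, default=1)
    parser.add_argument("--model_name", type=str, default="distilbert-base-uncased")
    parser.add_argument("--num_labels", type=int, default=6)
    # paths and resources provided by SageMaker through environment variables
    parser.add_argument("--model_dir", type=str, default=os.environ.get("SM_MODEL_DIR"))
    parser.add_argument("--train_dir", type=str, default=os.environ.get("SM_CHANNEL_TRAIN"))
    parser.add_argument("--test_dir", type=str, default=os.environ.get("SM_CHANNEL_TEST"))
    parser.add_argument("--n_gpus", type=int, default=int(os.environ.get("SM_NUM_GPUS", 0)))
    args, _ = parser.parse_known_args()
    print(args)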

To run your training job locally, you can define instance_type='local', or instance_type='local_gpu' for GPU usage. Note: this does not work within SageMaker Studio.
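
For reference, here is an optional sketch of the same estimator configured for local mode; this assumes Docker is available on your machine and is not a required step in this notebook.

from sagemaker.huggingface import HuggingFace

huggingface_local_estimator = HuggingFace(entry_point='train.py',
                            source_dir='./code',
                            instance_type='local',   # or 'local_gpu' on a machine with a GPU
                            instance_count=1,
                            role=role,
                            transformers_version='4.6',
                            pytorch_version='1.7',
                            py_version='py36',
                            hyperparameters = {'epochs': 1,
                                               'model_name':'distilbert-base-uncased',
                                               'num_labels': 6
                                              })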

We create a metric_definitions list of regex-based definitions that will be used to parse the training job logs and extract metrics.

[ ]:
metric_definitions = [
    {"Name": "loss", "Regex": "'loss': ([0-9]+(.|e\-)[0-9]+),?"},
    {"Name": "learning_rate", "Regex": "'learning_rate': ([0-9]+(.|e\-)[0-9]+),?"},
    {"Name": "eval_loss", "Regex": "'eval_loss': ([0-9]+(.|e\-)[0-9]+),?"},
    {"Name": "eval_accuracy", "Regex": "'eval_accuracy': ([0-9]+(.|e\-)[0-9]+),?"},
    {"Name": "eval_f1", "Regex": "'eval_f1': ([0-9]+(.|e\-)[0-9]+),?"},
    {"Name": "eval_precision", "Regex": "'eval_precision': ([0-9]+(.|e\-)[0-9]+),?"},
    {"Name": "eval_recall", "Regex": "'eval_recall': ([0-9]+(.|e\-)[0-9]+),?"},
    {"Name": "eval_runtime", "Regex": "'eval_runtime': ([0-9]+(.|e\-)[0-9]+),?"},
    {
        "Name": "eval_samples_per_second",
        "Regex": "'eval_samples_per_second': ([0-9]+(.|e\-)[0-9]+),?",
    },
    {"Name": "epoch", "Regex": "'epoch': ([0-9]+(.|e\-)[0-9]+),?"},
]

Creating an Estimator and starting a training job

[ ]:
from sagemaker.huggingface import HuggingFace

# hyperparameters, which are passed into the training job
hyperparameters = {"epochs": 1, "model_name": "distilbert-base-uncased", "num_labels": 6}

Now, let’s define the SageMaker HuggingFace estimator with resource configurations and hyperparameters to train the text classification model on the 20 newsgroups dataset, running on an ml.p3.2xlarge instance.

[ ]:
huggingface_estimator = HuggingFace(
    entry_point="train.py",
    source_dir="./code",
    instance_type="ml.p3.2xlarge",
    instance_count=1,
    volume_size=256,
    role=role,
    transformers_version="4.6",
    pytorch_version="1.7",
    py_version="py36",
    hyperparameters=hyperparameters,
    metric_definitions=metric_definitions,
)
[ ]:
# starting the train job with our uploaded datasets as input
huggingface_estimator.fit({"train": training_input_path, "test": test_input_path})

Deployment

Serverless Configuration

Memory size - memory_size_in_mb

Your serverless endpoint has a minimum RAM size of 1024 MB (1 GB), and the maximum RAM size you can choose is 6144 MB (6 GB). The memory sizes you can select are 1024 MB, 2048 MB, 3072 MB, 4096 MB, 5120 MB, or 6144 MB. Serverless Inference auto-assigns compute resources proportional to the memory you select. If you select a larger memory size, your container has access to more vCPUs. Select your endpoint’s memory size according to your model size; generally, the memory size should be at least as large as your model size. You may need to benchmark in order to select the right memory size for your model based on your latency SLAs. The memory size increments have different pricing; see the Amazon SageMaker pricing page for more information.
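
As an optional sanity check (assuming the training job above completed), you can inspect the size of the trained model artifact to help guide the memory choice; huggingface_estimator.model_data points to the model.tar.gz produced by the training job:

[ ]:
# Optional: check the size of the trained model artifact (model.tar.gz) in S3
model_uri = huggingface_estimator.model_data
bucket_name, key = model_uri.replace("s3://", "").split("/", 1)
artifact_size_mb = boto3.client("s3").head_object(Bucket=bucket_name, Key=key)["ContentLength"] / 1024**2
print(f"model.tar.gz size: {artifact_size_mb:.0f} MB")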

Concurrent invocations - max_concurrency

Serverless Inference manages predefined scaling policies and quotas for the capacity of your endpoint. Serverless endpoints have a quota for how many concurrent invocations can be processed at the same time. If the endpoint is invoked before it finishes processing the first request, then it handles the second request concurrently. You can set the maximum concurrency for a single endpoint up to 200, and the total number of serverless endpoint variants you can host in a Region is 50. The total concurrency you can share between all serverless endpoints per Region in your account is 200. The maximum concurrency for an individual endpoint prevents that endpoint from taking up all the invocations allowed for your account, and any endpoint invocations beyond the maximum are throttled.

[ ]:
from sagemaker.serverless.serverless_inference_config import ServerlessInferenceConfig

serverless_config = ServerlessInferenceConfig(
    memory_size_in_mb=6144,
    max_concurrency=1,
)

Serverless Endpoint Creation

Now that we have a ServerlessInferenceConfig, we can create a serverless endpoint and deploy our model to it.

[ ]:
%%time

predictor = huggingface_estimator.deploy(serverless_inference_config=serverless_config)

Endpoint Invocation

Using a few samples, you can now invoke the SageMaker endpoint to get predictions.

[ ]:
def predict_sentence(sentence):
    result = predictor.predict({"inputs": sentence})
    index = int(result[0]["label"].split("LABEL_")[1])
    print(categories[index])
[ ]:
sentences = [
    "The modem is an internal AT/(E)ISA 8-bit card (just a little longer than a half-card).",
    "In the cage I usually wave to bikers.  They usually don't wave back.  My wife thinks it's strange but I don't care.",
    "Voyager has the unusual luck to be on a stable trajectory out of the solar system.",
]

# using the same processing logic that we used during data preparation for training
processed_sentences = process_text(sentences)

for sentence in processed_sentences:
    predict_sentence(sentence)
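
Because serverless endpoints can incur cold starts (as discussed in the introduction), you may also want to compare the latency of the first invocation with subsequent ones. A minimal, optional sketch:

[ ]:
import time

# The first request after an idle period typically includes cold start overhead
for i, sentence in enumerate(processed_sentences):
    start = time.perf_counter()
    predictor.predict({"inputs": sentence})
    print(f"request {i}: {time.perf_counter() - start:.2f} s")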

Clean up

Endpoints should be deleted when no longer in use. Unlike instance-based endpoints, serverless endpoints are billed per use (compute time spent processing requests) rather than for time deployed, but cleaning up unused endpoints is still good practice and frees up the serverless endpoint quota in your account.

[ ]:
predictor.delete_endpoint()
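
Optionally, you may also want to delete the SageMaker model resource that was created during deployment; a minimal sketch using the same predictor object:

[ ]:
# Optionally delete the SageMaker model created for this endpoint
predictor.delete_model()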

Conclusion

In this notebook you successfully ran a SageMaker Training Job with the HuggingFace framework to fine-tune a pre-trained transformer on text classification using the 20 newsgroups dataset. Then, you prepared the required Serverless Inference configuration and deployed your model to a SageMaker serverless endpoint. Finally, you invoked the serverless endpoint with sample data and got the prediction results.

As next steps, you can try running SageMaker Training Jobs with your own algorithm and your own data, and deploy the model to a SageMaker serverless endpoint.

Notebook CI Test Results

This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.

[CI badges for: us-east-1, us-east-2, us-west-1, ca-central-1, sa-east-1, eu-west-1, eu-west-2, eu-west-3, eu-central-1, eu-north-1, ap-southeast-1, ap-southeast-2, ap-northeast-1, ap-northeast-2, ap-south-1]