Introduction to JumpStart - Text Summarization
Note: This notebook was tested on an ml.t3.medium instance in Amazon SageMaker Studio with the Python 3 (Data Science) kernel and in an Amazon SageMaker Notebook instance with the conda_python3 kernel.
1. Set Up
[ ]:
!pip install sagemaker ipywidgets --upgrade --quiet
Permissions and environment variables
[ ]:
import sagemaker, boto3, json
from sagemaker import get_execution_role
aws_role = get_execution_role()
aws_region = boto3.Session().region_name
sess = sagemaker.Session()
2. Select a model
Here, we download the JumpStart model_manifest file from the JumpStart S3 bucket, filter out all the Text Summarization models, and select one for inference.
[ ]:
import ipywidgets as widgets

# Download the JumpStart model_manifest file.
boto3.client("s3").download_file(
    f"jumpstart-cache-prod-{aws_region}", "models_manifest.json", "models_manifest.json"
)
with open("models_manifest.json", "rb") as json_file:
    model_list = json.load(json_file)

# Filter out all the Text Summarization models from the manifest list.
text_summarization_models = []
for model in model_list:
    model_id = model["model_id"]
    if "-summarization-" in model_id and model_id not in text_summarization_models:
        text_summarization_models.append(model_id)

# Display the model IDs in a dropdown to select a model for inference.
model_dropdown = widgets.Dropdown(
    options=text_summarization_models,
    value="huggingface-summarization-distilbart-cnn-6-6",
    description="Select a model",
    style={"description_width": "initial"},
    layout={"width": "max-content"},
)
Choose a model for inference
[ ]:
display(model_dropdown)
[ ]:
# model_version="*" fetches the latest version of the model
model_id, model_version = model_dropdown.value, "*"
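If you are running this notebook non-interactively and the dropdown selection is not available, you can also set the model ID explicitly. A minimal sketch, using the dropdown's default value (any ID from text_summarization_models works here):
[ ]:
# Non-interactive alternative: pick a model ID directly from the filtered list.
# model_version="*" again fetches the latest version of the model.
model_id, model_version = "huggingface-summarization-distilbart-cnn-6-6", "*"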
3. Retrieve JumpStart Artifacts & Deploy an Endpoint
Using JumpStart, we can perform inference on the pre-trained model, even without fine-tuning it first on a new dataset. We start by retrieving the deploy_image_uri, deploy_source_uri, and model_uri for the pre-trained model. To host the pre-trained model, we create an instance of sagemaker.model.Model (https://sagemaker.readthedocs.io/en/stable/api/inference/model.html) and deploy it. This may take a few minutes.
[ ]:
from sagemaker import image_uris, model_uris, script_uris, hyperparameters
from sagemaker.model import Model
from sagemaker.predictor import Predictor
from sagemaker.utils import name_from_base

endpoint_name = name_from_base(f"jumpstart-example-infer-{model_id}")

inference_instance_type = "ml.p2.xlarge"

# Retrieve the inference docker container uri. This is the base HuggingFace container image for the default model above.
deploy_image_uri = image_uris.retrieve(
    region=None,
    framework=None,  # automatically inferred from model_id
    image_scope="inference",
    model_id=model_id,
    model_version=model_version,
    instance_type=inference_instance_type,
)

# Retrieve the inference script uri. This includes all dependencies and scripts for model loading, inference handling, etc.
deploy_source_uri = script_uris.retrieve(
    model_id=model_id, model_version=model_version, script_scope="inference"
)

# Retrieve the model uri. This includes the pre-trained model and parameters.
model_uri = model_uris.retrieve(
    model_id=model_id, model_version=model_version, model_scope="inference"
)

# Create the SageMaker model instance.
model = Model(
    image_uri=deploy_image_uri,
    source_dir=deploy_source_uri,
    model_data=model_uri,
    entry_point="inference.py",  # entry point file in source_dir and present in deploy_source_uri
    role=aws_role,
    predictor_cls=Predictor,
    name=endpoint_name,
)

# Deploy the Model. Note that we need to pass the Predictor class when we deploy the model
# through the Model class, so that we can run inference through the SageMaker API.
model_predictor = model.deploy(
    initial_instance_count=1,
    instance_type=inference_instance_type,
    predictor_cls=Predictor,
    endpoint_name=endpoint_name,
)
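Deployment may take a few minutes. Before querying, you can optionally confirm the endpoint is in service; a minimal sketch using the boto3 DescribeEndpoint API (this assumes the execution role is permitted to call it):
[ ]:
# Optional sanity check: the endpoint should report "InService" once deployment finishes.
status = boto3.client("sagemaker").describe_endpoint(EndpointName=endpoint_name)["EndpointStatus"]
print(f"Endpoint {endpoint_name}: {status}")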
4. Query endpoint and parse response
[ ]:
def query(model_predictor, text):
    """Query the model predictor."""

    encoded_text = text.encode("utf-8")

    query_response = model_predictor.predict(
        encoded_text,
        {
            "ContentType": "application/x-text",
            "Accept": "application/json",
        },
    )
    return query_response


def parse_response(query_response):
    """Parse the response and return the summary text."""

    model_predictions = json.loads(query_response)
    summary_text = model_predictions["summary_text"]
    return summary_text
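The Predictor above wraps SageMaker's low-level runtime API. For reference, the same request can be sent directly with boto3, which is handy from code that does not use the SageMaker Python SDK. A minimal sketch, assuming (as the functions above do) that the endpoint accepts application/x-text input and returns JSON containing a summary_text field:
[ ]:
# Equivalent low-level invocation through the SageMaker runtime client.
runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName=endpoint_name,
    ContentType="application/x-text",
    Accept="application/json",
    Body="JumpStart lets you deploy pre-trained models in a few lines of code.".encode("utf-8"),
)
print(json.loads(response["Body"].read())["summary_text"])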
[ ]:
newline, bold, unbold = "\n", "\033[1m", "\033[0m"
input_text = "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct."
query_response = query(model_predictor, input_text)
summary_text = parse_response(query_response)
print(f"Input text: {input_text}{newline}" f"Summary text: {bold}{summary_text}{unbold}{newline}")
5. Clean up the endpoint
[ ]:
# Delete the SageMaker model and endpoint
model_predictor.delete_model()
model_predictor.delete_endpoint()