Introduction to JumpStart - Named Entity Recognition
This notebook’s CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook.
Note: This notebook was tested on ml.t3.medium instance in Amazon SageMaker Studio with Python 3 (Data Science) kernel and in Amazon SageMaker Notebook instance with conda_python3 kernel.
1. Set Up
[ ]:
! pip install sagemaker ipywidgets --upgrade --quiet
Permissions and environment variables
[ ]:
import sagemaker, boto3, json
from sagemaker import get_execution_role
aws_role = get_execution_role()
aws_region = boto3.Session().region_name
sess = sagemaker.Session()
2. Select a model
Here, we download the JumpStart model_manifest file from the JumpStart S3 bucket, filter out the Named Entity Recognition models, and select a model for inference.
[ ]:
from ipywidgets import Dropdown
# download JumpStart model_manifest file.
boto3.client("s3").download_file(
    f"jumpstart-cache-prod-{aws_region}", "models_manifest.json", "models_manifest.json"
)
with open("models_manifest.json", "rb") as json_file:
    model_list = json.load(json_file)

# filter out the Named Entity Recognition models from the manifest list.
ner_models = []
for model in model_list:
    model_id = model["model_id"]
    if "-ner-" in model_id and model_id not in ner_models:
        ner_models.append(model_id)

# display the model ids in a dropdown to select a model for inference.
model_dropdown = Dropdown(
    options=ner_models,
    value="huggingface-ner-distilbert-base-cased-finetuned-conll03-english",
    description="Select a model",
    style={"description_width": "initial"},
    layout={"width": "max-content"},
)
Choose a model for inference
[ ]:
display(model_dropdown)
[ ]:
# model_version="*" fetches the latest version of the model
model_id, model_version = model_dropdown.value, "*"
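Because the manifest is plain JSON, the filtering step above can be sketched locally on a small mocked manifest. The model IDs below (other than the NER one used elsewhere in this notebook) are illustrative, not taken from the real bucket:

```python
import json

# A mocked manifest in the same shape as models_manifest.json: a list of
# dicts, each with a "model_id" key (entries here are illustrative only).
mock_manifest = json.loads(
    """
    [
        {"model_id": "huggingface-ner-distilbert-base-cased-finetuned-conll03-english"},
        {"model_id": "huggingface-translation-t5-small"},
        {"model_id": "huggingface-ner-distilbert-base-cased-finetuned-conll03-english"}
    ]
    """
)

# Same filter as above: keep model IDs containing "-ner-", dropping duplicates.
ner_models = []
for model in mock_manifest:
    model_id = model["model_id"]
    if "-ner-" in model_id and model_id not in ner_models:
        ner_models.append(model_id)

print(ner_models)
# → ['huggingface-ner-distilbert-base-cased-finetuned-conll03-english']
```

The duplicate entry is dropped by the `model_id not in ner_models` check, which is why the real manifest can safely contain one entry per model version.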
3. Retrieve JumpStart Artifacts & Deploy an Endpoint
[ ]:
from sagemaker.jumpstart.model import JumpStartModel
model = JumpStartModel(model_id=model_id, model_version=model_version)
model_predictor = model.deploy()
4. Query endpoint and parse response
[ ]:
def query(model_predictor, text):
    """Query the model predictor."""

    encoded_text = text.encode("utf-8")

    query_response = model_predictor.predict(
        encoded_text,
        {
            "ContentType": "application/x-text",
            "Accept": "application/json",
        },
    )
    return query_response


def parse_response(query_response):
    """Parse the response and return the predicted entities."""

    predicted_entities = query_response["predictions"]
    return predicted_entities
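To see what the response parsing does without a live endpoint, here is a hedged sketch on a mocked payload. The `{"predictions": ...}` schema matches what the parsing code above expects, but the exact shape of the entity list is an assumption for illustration and can vary by model:

```python
import json

# Hypothetical response body, assumed to follow the {"predictions": ...}
# schema used above; a real endpoint's entity format may differ.
raw_body = '{"predictions": [["B-PER", "Wolfgang"], ["B-LOC", "Berlin"]]}'
query_response = json.loads(raw_body)

# Mirror of the parsing step: pull the "predictions" field out of the dict.
predicted_entities = query_response["predictions"]
print(predicted_entities)
# → [['B-PER', 'Wolfgang'], ['B-LOC', 'Berlin']]
```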
[ ]:
newline, bold, unbold = "\n", "\033[1m", "\033[0m"
input_text = "My name is Wolfgang and I live in Berlin"
query_response = query(model_predictor, input_text)
model_predictions = parse_response(query_response)
print(
    f"Input text: {input_text}{newline}"
    f"Model prediction: {bold}{model_predictions}{unbold}{newline}"
)
5. Clean up the endpoint
[ ]:
# Delete the SageMaker endpoint
model_predictor.delete_model()
model_predictor.delete_endpoint()
This notebook was tested in multiple regions. The test results are shown below, except for us-west-2, which appears at the top of the notebook.