SageMaker JumpStart Foundation Models - BloomZ: Multilingual Text Classification, Question Answering, Code Generation, Paragraph Rephrasing, and More


This notebook’s CI test results for us-west-2 and other regions are summarized at the end of the notebook.


  1. Set Up

  2. Select a pre-trained model

  3. Retrieve Artifacts & Deploy an Endpoint

  4. Query endpoint and parse response

  5. Advanced features: How to use various advanced parameters to control the generated text

  6. Advanced features: How to use prompt engineering to solve different tasks

  7. Clean up the endpoint

Note: This notebook was tested on an ml.t3.medium instance in Amazon SageMaker Studio with the conda_pytorch_p39 kernel and in an Amazon SageMaker Notebook instance with the conda_python3 kernel.

1. Set Up

[ ]:
!pip install ipywidgets --quiet
!pip install --upgrade sagemaker --quiet
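
After upgrading, you can confirm the installed SDK version before continuing (a quick sanity check, not part of the original notebook; restart the kernel if the printed version does not reflect the upgrade):

[ ]:
import sagemaker

# Verify which SageMaker SDK version the kernel is using.
print(sagemaker.__version__)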

Permissions and environment variables

[ ]:
import sagemaker, boto3, json
from sagemaker.session import Session

sagemaker_session = Session()
aws_role = sagemaker_session.get_caller_identity_arn()
aws_region = boto3.Session().region_name

2. Select a pre-trained model


You can continue with the default model, or choose a different model from the dropdown generated upon running the next cell. A complete list of SageMaker pre-trained models can also be accessed at SageMaker pre-trained Models.

[ ]:
model_id, model_version = (
    "huggingface-textgeneration1-bloomz-7b1-fp16",
    "1.*",
)

[Optional] Select a different SageMaker pre-trained model. Here, we download the model_manifest file from the Built-In Algorithms S3 bucket, filter out all the Text Generation models, and select a model for inference.

[ ]:
import ipywidgets as widgets
from sagemaker.jumpstart.notebook_utils import list_jumpstart_models

# Retrieve all Text Generation models made available by SageMaker Built-In Algorithms.
filter_value = "task == textgeneration1"
text_generation_models = list_jumpstart_models(filter=filter_value)

# Display the model IDs in a dropdown to select a model for inference.
model_dropdown = widgets.Dropdown(
    options=text_generation_models,
    value=model_id,
    description="Select a model",
    style={"description_width": "initial"},
    layout={"width": "max-content"},
)

Choose a model for Inference

[ ]:
display(model_dropdown)
[ ]:
# model_version="*" fetches the latest version of the model
model_id, model_version = model_dropdown.value, "1.*"

3. Retrieve Artifacts & Deploy an Endpoint


Using SageMaker, we can perform inference on the pre-trained model, even without fine-tuning it first on a new dataset. We start by retrieving the deploy_image_uri and model_uri for the pre-trained model. To host the pre-trained model, we create an instance of sagemaker.model.Model (https://sagemaker.readthedocs.io/en/stable/api/inference/model.html) and deploy it. This may take a few minutes.


[ ]:
def get_sagemaker_session(local_download_dir) -> sagemaker.Session:
    """Return the SageMaker session."""

    sagemaker_client = boto3.client(
        service_name="sagemaker", region_name=boto3.Session().region_name
    )

    session_settings = sagemaker.session_settings.SessionSettings(
        local_download_dir=local_download_dir
    )

    # Route model artifact downloads to the local directory configured above.
    session = sagemaker.session.Session(
        sagemaker_client=sagemaker_client, settings=session_settings
    )

    return session

We need to create a directory to host the downloaded model.

[ ]:
!mkdir -p download_dir
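
The get_sagemaker_session helper above is not invoked anywhere else in this notebook. If you want model artifacts cached in the directory just created, one option (a sketch, assuming you then pass the session to the Model constructor below as sagemaker_session=local_session) is:

[ ]:
# Optional: a session that downloads model artifacts into ./download_dir.
local_session = get_sagemaker_session("download_dir")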
[ ]:
from sagemaker import image_uris, model_uris, script_uris, hyperparameters
from sagemaker.model import Model
from sagemaker.predictor import Predictor
from sagemaker.utils import name_from_base


endpoint_name = name_from_base(f"jumpstart-example-{model_id}")

inference_instance_type = "ml.g4dn.12xlarge"

# Retrieve the inference docker container uri. The BloomZ models are served with the
# DJL DeepSpeed container for large-model inference.
deploy_image_uri = image_uris.retrieve(
    framework="djl-deepspeed", region=sagemaker_session.boto_session.region_name, version="0.20.0"
)

# Retrieve the model uri.
model_uri = model_uris.retrieve(
    model_id=model_id, model_version=model_version, model_scope="inference"
)

model = Model(
    image_uri=deploy_image_uri,
    model_data=model_uri,
    role=aws_role,
    predictor_cls=Predictor,
    name=endpoint_name,
)

# Deploy the Model. Note that we need to pass the Predictor class when we deploy the model
# through the Model class, so that we can run inference through the SageMaker API.
model_predictor = model.deploy(
    initial_instance_count=1,
    instance_type=inference_instance_type,
    predictor_cls=Predictor,
    endpoint_name=endpoint_name,
)
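
model.deploy blocks until the endpoint is ready, but if you reconnect to the notebook later you can verify the endpoint status directly with the low-level SageMaker API (a small check, not part of the original notebook):

[ ]:
# Confirm the endpoint is InService before sending requests.
sm_client = boto3.client("sagemaker")
status = sm_client.describe_endpoint(EndpointName=endpoint_name)["EndpointStatus"]
print(f"Endpoint {endpoint_name} status: {status}")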

4. Query endpoint and parse response

[ ]:
newline, bold, unbold = "\n", "\033[1m", "\033[0m"


def query_endpoint(encoded_text, endpoint_name):
    client = boto3.client("runtime.sagemaker")
    response = client.invoke_endpoint(
        EndpointName=endpoint_name, ContentType="application/x-text", Body=encoded_text
    )
    return response


def parse_response(query_response):
    model_predictions = json.loads(query_response["Body"].read())
    generated_text = model_predictions[0]["generated_text"]
    return generated_text
[ ]:
text1 = "Translate to German:  My name is Arthur"
text2 = "A step by step recipe to make bolognese pasta:"


for text in [text1, text2]:
    query_response = query_endpoint(json.dumps(text).encode("utf-8"), endpoint_name=endpoint_name)
    generated_text = parse_response(query_response)
    print(
        f"Inference:{newline}"
        f"input text: {text}{newline}"
        f"generated text: {bold}{generated_text}{unbold}{newline}"
    )

5. Advanced features: How to use various advanced parameters to control the generated text


This model also supports many advanced parameters while performing inference. They include:

  • max_length: Model generates text until the output length (which includes the input context length) reaches max_length. If specified, it must be a positive integer.

  • num_return_sequences: Number of output sequences returned. If specified, it must be a positive integer.

  • num_beams: Number of beams used in beam search. If specified, it must be an integer greater than or equal to num_return_sequences.

  • no_repeat_ngram_size: Model ensures that no word sequence of length no_repeat_ngram_size is repeated in the output sequence. If specified, it must be an integer greater than 1.

  • temperature: Controls the randomness in the output. A higher temperature yields an output sequence with more low-probability words; a lower temperature yields an output sequence with more high-probability words. As temperature approaches 0, generation approaches greedy decoding. If specified, it must be a positive float.

  • early_stopping: If True, text generation is finished when all beam hypotheses reach the end-of-sentence token. If specified, it must be a boolean.

  • do_sample: If True, sample the next word according to its likelihood. If specified, it must be a boolean.

  • top_k: In each step of text generation, sample from only the top_k most likely words. If specified, it must be a positive integer.

  • top_p: In each step of text generation, sample from the smallest possible set of words whose cumulative probability is top_p. If specified, it must be a float between 0 and 1.

  • seed: Fix the randomized state for reproducibility. If specified, it must be an integer.

We may specify any subset of the parameters above when invoking an endpoint. Next, we show an example of how to invoke the endpoint with these arguments.


[ ]:
# Input must be a JSON-encoded dictionary
payload = {
    "text_inputs": ["Tell me the steps to make a pizza", "i work at new york"],
    "max_length": 50,
    "num_return_sequences": 3,
    "top_k": 50,
    "top_p": 0.95,
    "do_sample": True,
}


def query_endpoint_with_json_payload(encoded_json, endpoint_name):
    client = boto3.client("runtime.sagemaker")
    response = client.invoke_endpoint(
        EndpointName=endpoint_name, ContentType="application/json", Body=encoded_json
    )
    return response


query_response = query_endpoint_with_json_payload(
    json.dumps(payload).encode("utf-8"), endpoint_name=endpoint_name
)


def parse_response_multiple_texts(query_response):
    model_predictions = json.loads(query_response["Body"].read())
    generated_text = [x["generated_text"] for x in model_predictions[0]]
    return generated_text


generated_texts = parse_response_multiple_texts(query_response)
print(generated_texts)
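
The payload above exercises only the sampling parameters. As a further illustration (a sketch reusing the same endpoint and helper functions; the parameter values here are arbitrary), the following requests deterministic beam search with num_beams, blocks repeated trigrams with no_repeat_ngram_size, and fixes a seed:

[ ]:
payload = {
    "text_inputs": "Tell me the steps to make a pizza",
    "max_length": 50,
    "num_return_sequences": 2,
    "num_beams": 4,  # must be >= num_return_sequences
    "no_repeat_ngram_size": 3,  # no trigram appears twice in the output
    "early_stopping": True,
    "do_sample": False,
    "seed": 42,  # fix the random state for reproducibility
}

query_response = query_endpoint_with_json_payload(
    json.dumps(payload).encode("utf-8"), endpoint_name=endpoint_name
)
print(parse_response_multiple_texts(query_response))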

6. Advanced features: How to use prompt engineering to solve different tasks

Below we demonstrate solving key tasks with the BloomZ 7B1 model, including:

  • Multilingual text / sentiment classification

  • Multilingual question answering

  • Code generation

  • Paragraph rephrasing

  • Summarization

  • Common sense reasoning / natural language inference

  • Question answering

  • Sentence / sentiment classification

  • Translation

  • Pronoun resolution

  • Imaginary article generation based on title

  • Generate a title based on an article

6.1. Multilingual text / sentiment classification (Chinese to English)

[ ]:
text = """一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。"""
[ ]:
prompts = ["""{text} Would you rate the previous review as positive, neutral or negative?"""]

parameters = {
    "max_length": 500,
    "num_return_sequences": 1,
    "top_k": 50,
    "top_p": 0.95,
    "do_sample": False,
}

for each_prompt in prompts:
    input_text = each_prompt.replace("{text}", text)
    payload = {"text_inputs": input_text, **parameters}
    query_response = query_endpoint_with_json_payload(
        json.dumps(payload).encode("utf-8"), endpoint_name=endpoint_name
    )
    generated_texts = parse_response_multiple_texts(query_response)
    print(f"For input with prompt: {payload['text_inputs']}")
    print(f"{bold}Result{unbold}: {generated_texts}{newline}")

6.2. Multilingual question answering (English to Chinese)

[ ]:
text = """what is the backpropagation."""
[ ]:
prompts = ["""Explain to me in Traditional Chinese  {text}"""]

parameters = {
    "max_length": 500,
    "num_return_sequences": 1,
    "top_k": 50,
    "top_p": 0.95,
    "do_sample": False,
}

for each_prompt in prompts:
    input_text = each_prompt.replace("{text}", text)
    payload = {"text_inputs": input_text, **parameters}
    query_response = query_endpoint_with_json_payload(
        json.dumps(payload).encode("utf-8"), endpoint_name=endpoint_name
    )
    generated_texts = parse_response_multiple_texts(query_response)
    print(f"For input with prompt: {payload['text_inputs']}")
    print(f"{bold}Result{unbold}: {generated_texts}{newline}")

6.3. Code generation

[ ]:
text = "binary search tree"
code_start = """
def binary_search(a, x):
    low = 0
    high = len(a) - 1"""
[ ]:
prompts = [
    """Write a {text} with O(log(n)) computational complexity.
{code_start}"""
]
[ ]:
parameters = {
    "max_length": 500,
    "num_return_sequences": 1,
    "top_k": 50,
    "top_p": 0.95,
    "do_sample": False,
}

for each_prompt in prompts:
    input_text = each_prompt.replace("{text}", text)
    input_text = input_text.replace("{code_start}", code_start)
    payload = {"text_inputs": input_text, **parameters}
    query_response = query_endpoint_with_json_payload(
        json.dumps(payload).encode("utf-8"), endpoint_name=endpoint_name
    )
    generated_texts = parse_response_multiple_texts(query_response)
    print(f"For input with prompt: {payload['text_inputs']}")
    print(f"{bold} For prompt: '{each_prompt}'{unbold}{newline}")
    print(f"{bold}Result{unbold}: {generated_texts}{newline}")

6.4. Paragraph rephrasing

[ ]:
sentence = """Amazon Web Services (AWS) has announced nine major new updates for its cloud-based machine learning platform, SageMaker.
SageMaker aims to provide a machine learning service which can be used to build, train, and deploy ML models for virtually any use case.
During this year’s re:Invent conference, AWS made several announcements to further improve SageMaker’s capabilities."""
[ ]:
prompts = [
    """{sentence}\n\nHow would you rephrase that briefly using English?""",
    """{sentence}\nThe above sentence is very complicated. Please provide me a simplified synonymous version consisting of multiple sentences:""",
]

parameters = {
    "max_length": 50000,
    "num_return_sequences": 1,
    "top_k": 50,
    "top_p": 0.01,
    "do_sample": False,
}


for each_prompt in prompts:
    input_text = each_prompt.replace("{sentence}", sentence)
    print(f"{bold} For prompt{unbold}: '{input_text}'{newline}")
    payload = {"text_inputs": input_text, **parameters}
    query_response = query_endpoint_with_json_payload(
        json.dumps(payload).encode("utf-8"), endpoint_name=endpoint_name
    )
    generated_texts = parse_response_multiple_texts(query_response)
    print(f"{bold} The reasoning result is{unbold}: '{generated_texts}'{newline}")

6.5. Summarization

Define the text article you want to summarize.

[ ]:
text = """Amazon Comprehend uses natural language processing (NLP) to extract insights about the content of documents. It develops insights by recognizing the entities, key phrases, language, sentiments, and other common elements in a document. Use Amazon Comprehend to create new products based on understanding the structure of documents. For example, using Amazon Comprehend you can search social networking feeds for mentions of products or scan an entire document repository for key phrases.
You can access Amazon Comprehend document analysis capabilities using the Amazon Comprehend console or using the Amazon Comprehend APIs. You can run real-time analysis for small workloads or you can start asynchronous analysis jobs for large document sets. You can use the pre-trained models that Amazon Comprehend provides, or you can train your own custom models for classification and entity recognition.
All of the Amazon Comprehend features accept UTF-8 text documents as the input. In addition, custom classification and custom entity recognition accept image files, PDF files, and Word files as input.
Amazon Comprehend can examine and analyze documents in a variety of languages, depending on the specific feature. For more information, see Languages supported in Amazon Comprehend. Amazon Comprehend's Dominant language capability can examine documents and determine the dominant language for a far wider selection of languages."""
[ ]:
prompts = ["""{text}\n\n===\nWrite a summary of the previous text in English:"""]

num_return_sequences = 3
parameters = {
    "max_length": 500,
    "num_return_sequences": num_return_sequences,
    "top_k": 50,
    "top_p": 0.95,
    "do_sample": True,
}

print(f"{bold}Number of return sequences are set as {num_return_sequences}{unbold}{newline}")
for each_prompt in prompts:
    payload = {"text_inputs": each_prompt.replace("{text}", text), **parameters}
    query_response = query_endpoint_with_json_payload(
        json.dumps(payload).encode("utf-8"), endpoint_name=endpoint_name
    )
    generated_texts = parse_response_multiple_texts(query_response)
    print(f"{bold} For prompt: '{each_prompt}'{unbold}{newline}")
    print(f"{bold} The {num_return_sequences} summarized results are{unbold}:{newline}")
    for idx, each_generated_text in enumerate(generated_texts):
        print(f"{bold}Result {idx}{unbold}: {each_generated_text}{newline}")

6.6. Common sense reasoning / natural language inference

In common sense reasoning, you can design a prompt that combines the premise, hypothesis, and answer options, then send the combined text to the endpoint to get an answer. Examples are demonstrated below.

Define the premise, hypothesis, and options that you want the model to reason about.

[ ]:
premise = "The world cup has kicked off in Los Angeles, United States."
hypothesis = "The world cup takes place in United States."
options = """["yes", "no"]"""
[ ]:
prompts = [
    """Given that {premise} Does it follow that {hypothesis} Yes or no?""",
    """Suppose {premise} Can we infer that \"{hypothesis}\"? Yes or no?""",
    """{premise} Based on the previous passage, is it true that \"{hypothesis}\"? Yes or no?""",
    """{premise} Using only the above description and what you know about the world, is \"{hypothesis}\" definitely correct? Yes or no?""",
    """{premise}\nQuestion: {hypothesis} True or False?""",
]

parameters = {
    "max_length": 50,
    "num_return_sequences": 1,
    "top_k": 50,
    "top_p": 0.95,
    "do_sample": True,
}


for each_prompt in prompts:
    input_text = each_prompt.replace("{premise}", premise)
    input_text = input_text.replace("{hypothesis}", hypothesis)
    print(f"{bold} For prompt{unbold}: '{input_text}'{newline}")
    payload = {"text_inputs": input_text, **parameters}
    query_response = query_endpoint_with_json_payload(
        json.dumps(payload).encode("utf-8"), endpoint_name=endpoint_name
    )
    generated_texts = parse_response_multiple_texts(query_response)
    print(f"{bold} The reasoning result is{unbold}: '{generated_texts}'{newline}")

6.7. Question Answering

Now, let’s try another reasoning task with a different type of prompt template. You simply provide the context and the question, as shown below.

[ ]:
context = """The newest and most innovative Kindle yet lets you take notes on millions of books and documents, write lists and journals, and more.

For readers who have always wished they could write in their eBooks, Amazon’s new Kindle lets them do just that. The Kindle Scribe is the first Kindle for reading and writing and allows users to supplement their books and documents with notes, lists, and more.

Here’s everything you need to know about the Kindle Scribe, including frequently asked questions.

The Kindle Scribe makes it easy to read and write like you would on paper

The Kindle Scribe features a 10.2-inch, glare-free screen (the largest of all Kindle devices), crisp 300 ppi resolution, and 35 LED front lights that automatically adjust to your environment. Further personalize your experience with the adjustable warm light, font sizes, line spacing, and more.

It comes with your choice of the Basic Pen or the Premium Pen, which you use to write on the screen like you would on paper. They also attach magnetically to your Kindle and never need to be charged. The Premium Pen includes a dedicated eraser and a customizable shortcut button.

The Kindle Scribe has the most storage options of all Kindle devices: choose from 8 GB, 16 GB, or 32 GB to suit your level of reading and writing.
"""
question = "what are the key features of new Kindle?"
[ ]:
prompts = ["""question: \"{question}"\\n\nContext: \"{context}"\\n\nAnswer:"""]


parameters = {
    "max_length": 5000,
    "num_return_sequences": 1,
    "top_k": 50,
    "top_p": 0.95,
    "do_sample": True,
}


for each_prompt in prompts:
    input_text = each_prompt.replace("{context}", context)
    input_text = input_text.replace("{question}", question)
    print(f"{bold} For prompt{unbold}: '{each_prompt}'{newline}")

    payload = {"text_inputs": input_text, **parameters}
    query_response = query_endpoint_with_json_payload(
        json.dumps(payload).encode("utf-8"), endpoint_name=endpoint_name
    )
    generated_texts = parse_response_multiple_texts(query_response)
    print(f"{bold} The reasoning result is{unbold}: '{generated_texts}'{newline}")

6.8. Sentence / Sentiment Classification

Define the sentence you want to classify and the corresponding options.

[ ]:
sentence1 = "This moive is so great and once again dazzles and delights us"
options_ = """OPTIONS:\n-positive \n-negative """
[ ]:
prompts = [
    """Review:\n{sentence}\nIs this movie review sentence negative or positive?\n{options_}""",
    """Short movie review: {sentence}\nDid the critic think positively or negatively of the movie?\n{options_}""",
    """Sentence from a movie review: {sentence}\nWas the movie seen positively or negatively based on the preceding review? \n\n{options_}""",
    """\"{sentence}\"\nHow would the sentiment of this sentence be perceived?\n\n{options_}""",
    """Is the sentiment of the following sentence positive or negative?\n{sentence}\n{options_}""",
    """What is the sentiment of the following movie review sentence?\n{sentence}\n{options_}""",
]

parameters = {
    "max_length": 50,
    "num_return_sequences": 1,
    "top_k": 50,
    "top_p": 0.95,
    "do_sample": True,
}


for each_prompt in prompts:
    input_text = each_prompt.replace("{sentence}", sentence1)
    input_text = input_text.replace("{options_}", options_)
    print(f"{bold} For prompt{unbold}: '{input_text}'{newline}")
    payload = {"text_inputs": input_text, **parameters}
    query_response = query_endpoint_with_json_payload(
        json.dumps(payload).encode("utf-8"), endpoint_name=endpoint_name
    )
    generated_texts = parse_response_multiple_texts(query_response)
    print(f"{bold} The reasoning result is{unbold}: '{generated_texts}'{newline}")

Now let’s see whether the model can identify if a sentence is grammatically correct or not.

[ ]:
sentence = """study I mathematics."""
[ ]:
prompts = [
    """The following sentence is either \"acceptable\", meaning it is grammatically correct and makes sense, or \"unacceptable\". Which is it?\n{sentence}\n""",
    """{sentence}\nI'm worried that sentence didn't make any sense, or was grammatically incorrect. Was it correct?\n""",
    """{sentence}\nIs this example grammatically correct and sensible?\n""",
]


parameters = {
    "max_length": 50,
    "num_return_sequences": 1,
    "top_k": 50,
    "top_p": 0.95,
    "do_sample": True,
}


for each_prompt in prompts:
    input_text = each_prompt.replace("{sentence}", sentence)
    print(f"{bold} For prompt{unbold}: '{input_text}'{newline}")
    payload = {"text_inputs": input_text, **parameters}
    query_response = query_endpoint_with_json_payload(
        json.dumps(payload).encode("utf-8"), endpoint_name=endpoint_name
    )
    generated_texts = parse_response_multiple_texts(query_response)
    print(f"{bold} The reasoning result is{unbold}: '{generated_texts}'{newline}")

6.9. Translation

Define the sentence and the language you want to translate the sentence to.

[ ]:
sent1 = "My name is Arthur"
lang2 = """German"""
[ ]:
prompts = [
    """{sent1}\n\nTranslate to {lang2}""",
    """{sent1}\n\nCould you please translate this to {lang2}?""",
    """Translate to {lang2}:\n\n{sent1}""",
    """Translate the following sentence to {lang2}:\n{sent1}""",
    """How is \"{sent1}\" said in {lang2}?""",
    """Translate \"{sent1}\" to {lang2}?""",
]

parameters = {
    "max_length": 50,
    "num_return_sequences": 1,
    "top_k": 50,
    "top_p": 0.95,
    "do_sample": True,
}


for each_prompt in prompts:
    input_text = each_prompt.replace("{sent1}", sent1)
    input_text = input_text.replace("{lang2}", lang2)
    print(f"{bold} For prompt{unbold}: '{input_text}'{newline}")
    payload = {"text_inputs": input_text, **parameters}
    query_response = query_endpoint_with_json_payload(
        json.dumps(payload).encode("utf-8"), endpoint_name=endpoint_name
    )
    generated_texts = parse_response_multiple_texts(query_response)
    print(f"{bold} The translated result is{unbold}: '{generated_texts}'{newline}")

6.10. Imaginary article generation based on title

[ ]:
title = "University has new facility coming up"
[ ]:
prompts = [
    """Title: \"{title}\"\nGiven the above title of an imaginary article, imagine the article.\n"""
]


parameters = {
    "max_length": 50000,
    "num_return_sequences": 1,
    "top_k": 50,
    "top_p": 0.95,
    "do_sample": True,
}


for each_prompt in prompts:
    input_text = each_prompt.replace("{title}", title)
    print(f"{bold} For prompt{unbold}: '{input_text}'{newline}")
    payload = {"text_inputs": input_text, **parameters}
    query_response = query_endpoint_with_json_payload(
        json.dumps(payload).encode("utf-8"), endpoint_name=endpoint_name
    )
    generated_texts = parse_response_multiple_texts(query_response)
    print(f"{bold} The reasoning result is{unbold}: '{generated_texts}'{newline}")

6.11. Generate a title based on the article

[ ]:
article = """The newest and most innovative Kindle yet lets you take notes on millions of books and documents, write lists and journals, and more.

For readers who have always wished they could write in their eBooks, Amazon’s new Kindle lets them do just that. The Kindle Scribe is the first Kindle for reading and writing and allows users to supplement their books and documents with notes, lists, and more.

Here’s everything you need to know about the Kindle Scribe, including frequently asked questions.

The Kindle Scribe makes it easy to read and write like you would on paper

The Kindle Scribe features a 10.2-inch, glare-free screen (the largest of all Kindle devices), crisp 300 ppi resolution, and 35 LED front lights that automatically adjust to your environment. Further personalize your experience with the adjustable warm light, font sizes, line spacing, and more.

It comes with your choice of the Basic Pen or the Premium Pen, which you use to write on the screen like you would on paper. They also attach magnetically to your Kindle and never need to be charged. The Premium Pen includes a dedicated eraser and a customizable shortcut button.

The Kindle Scribe has the most storage options of all Kindle devices: choose from 8 GB, 16 GB, or 32 GB to suit your level of reading and writing."""
[ ]:
prompts = ["""'\'{article} \n\n \\n\\nGive me a good title for the article above."""]

parameters = {
    "max_length": 5000,
    "num_return_sequences": 1,
    "top_k": 50,
    "top_p": 0.95,
    "do_sample": True,
}


for each_prompt in prompts:
    input_text = each_prompt.replace("{article}", article)
    print(f"{bold} For prompt{unbold}: '{input_text}'{newline}")
    payload = {"text_inputs": input_text, **parameters}
    query_response = query_endpoint_with_json_payload(
        json.dumps(payload).encode("utf-8"), endpoint_name=endpoint_name
    )
    generated_texts = parse_response_multiple_texts(query_response)
    print(f"{bold} The reasoning result is{unbold}: '{generated_texts}'{newline}")

7. Clean up the endpoint

[ ]:
# Delete the SageMaker model and endpoint
model_predictor.delete_model()
model_predictor.delete_endpoint()
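
To confirm the deletion took effect, you can describe the endpoint and expect a client error once it is gone (a quick check, not part of the original notebook):

[ ]:
import botocore

# Describing a deleted endpoint raises a ValidationException.
try:
    boto3.client("sagemaker").describe_endpoint(EndpointName=endpoint_name)
    print("Endpoint still exists.")
except botocore.exceptions.ClientError:
    print("Endpoint deleted.")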

This notebook was also tested by CI in the following regions: us-east-1, us-east-2, us-west-1, ca-central-1, sa-east-1, eu-west-1, eu-west-2, eu-west-3, eu-central-1, eu-north-1, ap-southeast-1, ap-southeast-2, ap-northeast-1, ap-northeast-2, and ap-south-1.