SageMaker JumpStart Foundation Models - Fine-tuning text generation GPT-J 6B model on domain specific dataset


This notebook’s CI test result for us-west-2 is as follows. CI test results in other regions can be found at the end of the notebook.



  1. Set Up

  2. Select Text Generation Model GPT-J 6B

  3. Finetune the pre-trained model on a custom dataset

    • Set Training parameters

    • Start Training

    • Deploy & run Inference on the fine-tuned model

1. Set Up

Before executing the notebook, there are some initial steps required for setup.

[ ]:
!pip uninstall -y sagemaker
!pip install sagemaker --quiet

2. Select Text Generation Model GPT-J 6B

[ ]:
model_id = "huggingface-textgeneration1-gpt-j-6b"

3. Fine-tune the pre-trained model on a custom dataset

Fine-tuning refers to the process of taking a pre-trained language model and retraining it for a different but related task using specific data. This approach is also known as transfer learning, which involves transferring the knowledge learned from one task to another. Large language models (LLMs) like GPT-J 6B are trained on massive amounts of unlabeled data and can be fine-tuned on domain-specific datasets, making the model perform better on that specific domain.

We will use financial text from SEC filings to fine-tune the GPT-J 6B LLM for financial applications.

  • Input: A train and an optional validation directory. Each directory contains a CSV/JSON/TXT file.

    • For CSV/JSON files, the train or validation data is read from the column named ‘text’, or from the first column if no column named ‘text’ is found.

    • The number of files under train and validation (if provided) must each equal one.

  • Output: A trained model that can be deployed for inference. Below is an example of a TXT file for fine-tuning the Text Generation model. The TXT file used here contains Amazon’s SEC filings from 2021 to 2022.

The Amazon SEC filings data is downloaded from the publicly available EDGAR database. Instructions for accessing the data are shown here.
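The expected channel layout described above can be sketched locally. This is a hypothetical illustration (the file name `data.txt` and the placeholder text are made up, not part of the actual SEC dataset): one plain-text file under `train/` and one under `validation/`, satisfying the one-file-per-directory rule.

```python
from pathlib import Path
import tempfile

# Build a local mirror of the expected channel layout:
# one TXT file under train/ and one under validation/.
root = Path(tempfile.mkdtemp())
for channel, text in [
    ("train", "Amazon reported net sales of ..."),         # placeholder text
    ("validation", "The company's operating income ..."),  # placeholder text
]:
    channel_dir = root / channel
    channel_dir.mkdir()
    (channel_dir / "data.txt").write_text(text)

# Each channel directory must contain exactly one CSV/JSON/TXT file.
for channel in ("train", "validation"):
    files = list((root / channel).iterdir())
    assert len(files) == 1, f"{channel} must contain exactly one file"
```

A directory tree like this, once uploaded to S3, matches the train/validation prefixes used below.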

Set Training parameters

Now that we are done with all the setup that is needed, we are ready to fine-tune our model. To begin, let us create a `sagemaker.estimator.Estimator <https://sagemaker.readthedocs.io/en/stable/api/training/estimators.html>`__ object. This estimator will launch the training job.

There are two kinds of parameters that need to be set for training.

The first set are the parameters for the training job, such as the training data path: the S3 folder in which the input data is stored. The second set are the algorithm-specific training hyperparameters.

[ ]:
from sagemaker.jumpstart.estimator import JumpStartEstimator
from sagemaker.jumpstart.utils import get_jumpstart_content_bucket

# Sample training data is available in this bucket
data_bucket = get_jumpstart_content_bucket()
data_prefix = "training-datasets/sec_data"

training_dataset_s3_path = f"s3://{data_bucket}/{data_prefix}/train/"
validation_dataset_s3_path = f"s3://{data_bucket}/{data_prefix}/validation/"

Start Training


We start by creating the estimator object with all the required assets and then launch the training job. Since default hyperparameter values are model-specific, inspect estimator.hyperparameters() to view the defaults for your selected model.

[ ]:
estimator = JumpStartEstimator(
    model_id=model_id,
    hyperparameters={"epoch": "3", "per_device_train_batch_size": "4"},
)
[ ]:
# You can now fit the estimator by providing training data to the train channel

estimator.fit(
    {"train": training_dataset_s3_path, "validation": validation_dataset_s3_path}, logs=True
)

Deploy & run Inference on the fine-tuned model


A trained model does nothing on its own. We now want to use the model to perform inference.

[ ]:
# You can deploy the fine-tuned model to an endpoint directly from the estimator.
predictor = estimator.deploy()

Next, we query the fine-tuned model and print the predictions.

[ ]:
payload = {"inputs": "This Form 10-K report shows that", "parameters": {"max_new_tokens": 400}}
predictor.predict(payload)
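The payload is plain JSON. As a local sketch (no endpoint required), here is how the request body is serialized, along with a hypothetical response of the form `[{"generated_text": ...}]` — a shape assumed here for illustration, not captured from a live endpoint:

```python
import json

payload = {"inputs": "This Form 10-K report shows that", "parameters": {"max_new_tokens": 400}}

# The predictor serializes the payload to JSON before sending it to the endpoint.
body = json.dumps(payload)
assert json.loads(body)["parameters"]["max_new_tokens"] == 400

# Hypothetical response shape (assumed for illustration):
sample_response = [{"generated_text": "This Form 10-K report shows that net sales increased ..."}]
completion = sample_response[0]["generated_text"]
```

With a live endpoint, `predictor.predict(payload)` handles this serialization and deserialization for you.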
[ ]:
# Delete the SageMaker endpoint and the attached resources
predictor.delete_predictor()

Notebook CI Test Results

This notebook was tested in multiple regions. The test results are as follows, except for us-west-2 which is shown at the top of the notebook.
