Music Recommender Data Preparation with SageMaker Feature Store and SageMaker Data Wrangler





Background

This notebook is part of a notebook series that goes through the ML lifecycle and shows how we can build a Music Recommender System using a combination of SageMaker services and features. This notebook uses Amazon SageMaker Feature Store (Feature Store) to create feature groups, executes your Data Wrangler flow 01_music_dataprep.flow on the entire dataset using a SageMaker Processing job, and ingests the processed data into Feature Store. It is the second notebook in the series. You can choose to run this notebook by itself or in sequence with the other notebooks listed below. Please see the README.md for more information about the use case implemented by this sequence of notebooks.

  1. Music Recommender Data Exploration

  2. Music Recommender Data Preparation with SageMaker Feature Store and SageMaker Data Wrangler (current notebook)

  3. Train, Deploy, and Monitor the Music Recommender Model using SageMaker SDK


Contents

  1. Prereqs: Get Data

  2. Update the Data Source in the .flow File

  3. Create Feature Group

  4. Configure Feature Group

  5. Initialize & Create Feature Group

  6. Inputs and Outputs

  7. Upload Flow to S3

  8. Run Processing Job

  9. Fetch Data from Offline Feature Store

💡 Quick Start: To save your processed data to Feature Store, click here to create a feature group and follow the instructions to run a SageMaker processing job.

[ ]:
import sys
import pprint

sys.path.insert(1, "./code")
[ ]:
# update pandas to avoid data type issues in older 1.0 version
!pip install pandas --upgrade --quiet
import pandas as pd

print(pd.__version__)
[ ]:
import pandas as pd
import matplotlib.pyplot as plt

%matplotlib inline

import json
import sagemaker
import boto3
import os
from awscli.customizations.s3.utils import split_s3_bucket_key

# SageMaker session
sess = sagemaker.Session()
# get session bucket name
bucket = sess.default_bucket()
# bucket prefix or the subfolder for everything we produce
prefix = "music-recommendation"
# s3 client
s3_client = boto3.client("s3")

print(f"this is your default SageMaker Studio bucket name: {bucket}")

Prereqs: Get Data


Here we download the music data that we’ll be using for this demo from a public S3 bucket and upload it to your default S3 bucket, which was created for you when you initially created a SageMaker Studio workspace.

[ ]:
from demo_helpers import get_data, get_model, update_data_sources
[ ]:
# create data folder
!mkdir -p data
[ ]:
# public S3 bucket that contains our music data
s3_bucket_music_data = (
    f"s3://sagemaker-example-files-prod-{sess.boto_region_name}/datasets/tabular/synthetic-music"
)
[ ]:
new_data_paths = get_data(
    s3_client,
    [f"{s3_bucket_music_data}/tracks.csv", f"{s3_bucket_music_data}/ratings.csv"],
    bucket,
    prefix,
    sample_data=0.70,
)
print(new_data_paths)
[ ]:
# these are the new file paths located on your SageMaker Studio default s3 storage bucket
tracks_data_source = f"s3://{bucket}/{prefix}/tracks.csv"
ratings_data_source = f"s3://{bucket}/{prefix}/ratings.csv"

Update the Data Source in the .flow File


The 01_music_dataprep.flow file is a JSON file containing instructions for where to find your data sources and how to transform the data. We’ll update the objects that tell Data Wrangler where to find the input data on S3 so that they point to your default S3 bucket. After this update, the .flow file uses your new S3 bucket as the data source for SageMaker Data Wrangler.

Make sure the .flow file is closed before running this next step, or the new S3 file locations will not be written to the file.

[ ]:
update_data_sources("01_music_dataprep.flow", tracks_data_source, ratings_data_source)
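
For reference, a minimal sketch of the kind of edit update_data_sources performs on the flow JSON is shown below. The nested key path (parameters -> dataset_definition -> s3ExecutionContext -> s3Uri) is an assumption about the Data Wrangler .flow layout and may differ between Data Wrangler versions; the real helper lives in ./code/demo_helpers.py.

[ ]:
# Illustrative only -- key names below are assumptions about the .flow JSON layout.
import json


def point_flow_sources_to(flow_path, new_uris_by_source_name):
    """Rewrite the S3 URIs of the flow's source nodes to point at new datasets."""
    with open(flow_path) as f:
        flow_doc = json.load(f)
    for node in flow_doc.get("nodes", []):
        if node.get("type") == "SOURCE":
            dataset = node["parameters"]["dataset_definition"]
            if dataset.get("name") in new_uris_by_source_name:
                dataset["s3ExecutionContext"]["s3Uri"] = new_uris_by_source_name[dataset["name"]]
    with open(flow_path, "w") as f:
        json.dump(flow_doc, f)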

Create Feature Group

Amazon SageMaker Feature Store is a fully managed, purpose-built repository to store, update, retrieve, and share machine learning (ML) features. Features are the attributes or properties models use during training and inference to make predictions. For example, in an ML application that recommends a music playlist, features could include song ratings, which songs were listened to previously, and how long songs were listened to. The accuracy of an ML model is based on a precise set and composition of features. Often, these features are used repeatedly by multiple teams training multiple models. Whichever feature set was used to train the model also needs to be available to make real-time predictions (inference). Keeping a single source of features that is consistent and up-to-date across these different access patterns is a challenge, as most organizations keep two different feature stores, one for training and one for inference.

Amazon SageMaker Feature Store is a purpose-built repository where you can store and access features so it’s much easier to name, organize, and reuse them across teams. SageMaker Feature Store provides a unified store for features during training and real-time inference without the need to write additional code or create manual processes to keep features consistent. SageMaker Feature Store keeps track of the metadata of stored features (e.g. feature name or version number) so that you can query the features for the right attributes in batches or in real time using Amazon Athena, an interactive query service. SageMaker Feature Store also keeps features updated, because as new data is generated during inference, the single repository is updated so new features are always available for models to use during training and inference.

What is a feature group

A single feature corresponds to a column in your dataset. A feature group is a predefined schema for a collection of features - each feature in the feature group has a specified data type and name. A single record in a feature group corresponds to a row in your dataframe. A feature store is a collection of feature groups. To learn more about SageMaker Feature Store, see Amazon Feature Store Documentation.

Define Feature Group

Select Record identifier and Event time feature name. These are required parameters for feature group creation.

  1. Record identifier name is the name of the feature whose value uniquely identifies a Record defined in the feature group’s feature definitions.

  2. Event time feature name is the name of the EventTime feature of a Record in a FeatureGroup. An EventTime is a timestamp that represents the point in time when a new event occurs that corresponds to the creation or update of a Record in the FeatureGroup. All Records in the FeatureGroup must have a corresponding EventTime.

💡 Record identifier and Event time feature name are required for a feature group. After filling in the values, you can choose Run Selected Cell and All Below from the Run menu in the menu bar.

[ ]:
# feature group names; you can customize these
feature_group_names = [
    "track-features-music-rec",
    "user-5star-track-features-music-rec",
    "ratings-features-music-rec",
]
print(f"Feature Group Names: {feature_group_names}")

record_identifier_feature_names = {
    "track-features-music-rec": "trackId",
    "user-5star-track-features-music-rec": "userId",
    "ratings-features-music-rec": "ratingEventId",
}
event_time_feature_name = "EventTime"

Feature Definitions

The following is a list of the feature names and feature types of the final dataset that will be produced when your data flow is used to process your input dataset. These are automatically generated from the Custom Pyspark steps in the flow. To save the output from a different step, go to Data Wrangler and select a new step to export.

💡 Configurable Settings

  1. You can select a subset of the features. By default all columns of the result dataframe will be used as features.

  2. You can change the Data Wrangler data type to one of the Feature Store supported types (Integral, Fractional, or String). The default type is set to String. This means that, if a column in your dataset is not a float or long type, it will default to String in your Feature Store.

For Event Time features, make sure the format follows the feature store Event Time feature format
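
As additional context for the EventTime column that appears in each schema below, Feature Store accepts event times either as fractional Unix timestamps (seconds since the epoch, matching the float type used here) or as ISO-8601 strings for String-typed EventTime features. A minimal sketch of both forms:

[ ]:
import time
from datetime import datetime, timezone

# Fractional Unix timestamp (seconds since the epoch) -- the form used by the float EventTime columns below
event_time_fractional = time.time()

# ISO-8601 string form, usable when the EventTime feature is declared as a String
event_time_iso = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

print(event_time_fractional, event_time_iso)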


[ ]:
track_column_schemas = [
    {"name": "trackId", "type": "string"},
    {"name": "length", "type": "float"},
    {"name": "energy", "type": "float"},
    {"name": "acousticness", "type": "float"},
    {"name": "valence", "type": "float"},
    {"name": "speechiness", "type": "float"},
    {"name": "instrumentalness", "type": "float"},
    {"name": "liveness", "type": "float"},
    {"name": "tempo", "type": "float"},
    {"name": "genre_Folk", "type": "float"},
    {"name": "genre_Country", "type": "float"},
    {"name": "genre_Latin", "type": "float"},
    {"name": "genre_Jazz", "type": "float"},
    {"name": "genre_RnB", "type": "float"},
    {"name": "genre_Reggae", "type": "float"},
    {"name": "genre_Rap", "type": "float"},
    {"name": "genre_Pop_Rock", "type": "float"},
    {"name": "genre_Electronic", "type": "float"},
    {"name": "genre_Blues", "type": "float"},
    {"name": "danceability", "type": "float"},
    {"name": "EventTime", "type": "float"},
]

user_column_schemas = [
    {"name": "userId", "type": "long"},
    {"name": "energy_5star", "type": "float"},
    {"name": "acousticness_5star", "type": "float"},
    {"name": "valence_5star", "type": "float"},
    {"name": "speechiness_5star", "type": "float"},
    {"name": "instrumentalness_5star", "type": "float"},
    {"name": "liveness_5star", "type": "float"},
    {"name": "tempo_5star", "type": "float"},
    {"name": "danceability_5star", "type": "float"},
    {"name": "genre_Latin_5star", "type": "float"},
    {"name": "genre_Folk_5star", "type": "float"},
    {"name": "genre_Blues_5star", "type": "float"},
    {"name": "genre_Rap_5star", "type": "float"},
    {"name": "genre_Reggae_5star", "type": "float"},
    {"name": "genre_Jazz_5star", "type": "float"},
    {"name": "genre_RnB_5star", "type": "float"},
    {"name": "genre_Country_5star", "type": "float"},
    {"name": "genre_Electronic_5star", "type": "float"},
    {"name": "genre_Pop_Rock_5star", "type": "float"},
    {"name": "EventTime", "type": "float"},
]

rating_column_schemas = [
    {"name": "ratingEventId", "type": "string"},
    {"name": "ts", "type": "long"},
    {"name": "userId", "type": "long"},
    {"name": "trackId", "type": "string"},
    {"name": "sessionId", "type": "long"},
    {"name": "itemInSession", "type": "long"},
    {"name": "Rating", "type": "float"},
    {"name": "EventTime", "type": "float"},
]

column_schemas = {
    "track-features-music-rec": track_column_schemas,
    "user-5star-track-features-music-rec": user_column_schemas,
    "ratings-features-music-rec": rating_column_schemas,
}

Below we create the SDK input for those feature definitions. Some schema types in Data Wrangler are not supported by Feature Store; for those, the default_feature_type of String is used.

[ ]:
from sagemaker.feature_store.feature_definition import FeatureDefinition
from sagemaker.feature_store.feature_definition import FeatureTypeEnum

default_feature_type = FeatureTypeEnum.STRING
column_to_feature_type_mapping = {
    "float": FeatureTypeEnum.FRACTIONAL,
    "long": FeatureTypeEnum.INTEGRAL,
}

feature_definitions = {}
for feature_group_name in feature_group_names:
    feature_definition = [
        FeatureDefinition(
            feature_name=column_schema["name"],
            feature_type=column_to_feature_type_mapping.get(
                column_schema["type"], default_feature_type
            ),
        )
        for column_schema in column_schemas[feature_group_name]
    ]
    feature_definitions[feature_group_name] = feature_definition

Configure Feature Group


💡 Configurable Settings

  1. feature_group_name: name of the feature group.

  2. feature_store_offline_s3_uri: SageMaker FeatureStore writes the data in the OfflineStore of a FeatureGroup to an S3 location owned by you.

  3. enable_online_store: controls if online store is enabled. Enabling the online store allows quick access to the latest value for a Record via the GetRecord API.

  4. iam_role: IAM role for executing the processing job.

[ ]:
from time import gmtime, strftime
import uuid
[ ]:
# IAM role for executing the processing job.
iam_role = sagemaker.get_execution_role()

# flow name and a unique ID for this export (used later in the processing job name)
flow_name = "01_music_dataprep"
flow_export_id = f"{strftime('%d-%H-%M-%S', gmtime())}-{str(uuid.uuid4())[:8]}"
flow_export_name = f"flow-{flow_export_id}"

# SageMaker FeatureStore writes the data in the OfflineStore of a FeatureGroup to an
# S3 location owned by you.
feature_store_offline_s3_uri = "s3://" + bucket

# controls if online store is enabled. Enabling the online store allows quick access to
# the latest value for a Record via the GetRecord API.
enable_online_store = True

Initialize & Create Feature Group


[ ]:
# Initialize Boto3 session that is required to create feature group
import boto3
from sagemaker.session import Session

region = boto3.Session().region_name
boto_session = boto3.Session(region_name=region)

sagemaker_client = boto_session.client(service_name="sagemaker", region_name=region)
featurestore_runtime = boto_session.client(
    service_name="sagemaker-featurestore-runtime", region_name=region
)

feature_store_session = Session(
    boto_session=boto_session,
    sagemaker_client=sagemaker_client,
    sagemaker_featurestore_runtime_client=featurestore_runtime,
)
[ ]:
from sagemaker.feature_store.feature_group import FeatureGroup
import time


def wait_for_feature_group_creation_complete(feature_group):
    """Helper function to wait for the completions of creating a feature group"""
    status = feature_group.describe().get("FeatureGroupStatus")
    while status == "Creating":
        print("Waiting for Feature Group Creation")
        time.sleep(5)
        status = feature_group.describe().get("FeatureGroupStatus")
    if status != "Created":
        raise SystemExit(f"Failed to create feature group {feature_group.name}: {status}")
    print(f"FeatureGroup {feature_group.name} successfully created.")


def create_feature_group(feature_group_name, feature_store_session, feature_definitions):
    feature_group = FeatureGroup(
        name=feature_group_name,
        sagemaker_session=feature_store_session,
        feature_definitions=feature_definitions[feature_group_name],
    )

    # only create feature group if it doesn't already exist
    try:
        sagemaker_client.describe_feature_group(FeatureGroupName=feature_group_name)
        feature_group_exists = True
        print("Feature Group {0} already exists. Using {0}".format(feature_group_name))
    except Exception as e:
        error = e.response.get("Error").get("Code")
        if error == "ResourceNotFound":
            feature_group_exists = False
            print("Creating Feature Group {}".format(feature_group_name))
            feature_group.create(
                s3_uri=feature_store_offline_s3_uri,
                record_identifier_name=record_identifier_feature_names[feature_group_name],
                event_time_feature_name=event_time_feature_name,
                role_arn=iam_role,
                enable_online_store=enable_online_store,
            )
            # Invoke the Feature Store API to create the feature group and wait until it is ready
            wait_for_feature_group_creation_complete(feature_group=feature_group)
        if error == "ResourceInUse":
            feature_group_exists = True
            print("Feature Group {0} already exists. Using {0}".format(feature_group_name))

    return feature_group_exists

The feature groups are initialized and created below.

[ ]:
feature_group_existence = {}
for feature_group_name in feature_group_names:
    feature_group_exists = create_feature_group(
        feature_group_name, feature_store_session, feature_definitions
    )
    feature_group_existence[feature_group_name] = feature_group_exists

Now that the feature groups are created, you will use a processing job to process your data at scale and ingest the transformed data into these feature groups.

Inputs and Outputs


The below settings configure the inputs and outputs for the flow export.

💡 Configurable Settings

In Input - Source you can configure the data sources that will be used as input by Data Wrangler

  1. For S3 sources, configure the source attribute that points to the input S3 prefixes

  2. For all other sources, configure attributes like query_string and database in the source’s DatasetDefinition object (a hedged sketch follows the imports below).

If you modify the inputs, the provided data must have the same schema and format as the data used in the flow. You should also re-execute the cells in this section if you have modified the settings of any data sources.

[ ]:
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.dataset_definition.inputs import (
    AthenaDatasetDefinition,
    DatasetDefinition,
    RedshiftDatasetDefinition,
)

data_sources = []
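
Item 2 above refers to non-S3 sources. This flow only uses S3 inputs, but for illustration, a ProcessingInput backed by an AthenaDatasetDefinition might look like the hedged sketch below; the catalog, database, and query are hypothetical placeholders, not part of this use case.

[ ]:
# Hypothetical Athena-backed input (not used by this flow) -- catalog, database, and query are placeholders.
athena_input_example = ProcessingInput(
    input_name="example-athena-source",
    dataset_definition=DatasetDefinition(
        local_path="/opt/ml/processing/athena-example",
        data_distribution_type="FullyReplicated",
        athena_dataset_definition=AthenaDatasetDefinition(
            catalog="AwsDataCatalog",
            database="example_database",
            query_string="SELECT * FROM example_table",
            output_s3_uri=f"s3://{bucket}/{prefix}/athena-example/",
            output_format="PARQUET",
        ),
    ),
)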

Input - S3 Source: tracks.csv

[ ]:
data_sources.append(
    ProcessingInput(
        source=f"{tracks_data_source}",  # You could override this to point to another dataset on S3
        destination="/opt/ml/processing/tracks.csv",
        input_name="tracks.csv",
        s3_data_type="S3Prefix",
        s3_input_mode="File",
        s3_data_distribution_type="FullyReplicated",
    )
)

Input - S3 Source: ratings.csv

[ ]:
data_sources.append(
    ProcessingInput(
        source=f"{ratings_data_source}",  # You could override this to point to another dataset on S3
        destination="/opt/ml/processing/ratings.csv",
        input_name="ratings.csv",
        s3_data_type="S3Prefix",
        s3_input_mode="File",
        s3_data_distribution_type="FullyReplicated",
    )
)

Output: Feature Store

Below are the inputs required by the SageMaker Python SDK to launch a processing job with Feature Store as an output. Notice the output_names mapping below; each ID is found within the .flow file at the node up to which you want to capture transformations. The .flow file contains the instructions that tell SageMaker Data Wrangler where to look for data and how to transform it. Each data transformation action is associated with a node, and therefore a node ID. The associated node ID plus output name tells SageMaker up to what point in the transformation process you want to export to a feature store.
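
If you need to look these node IDs up yourself, one option is to read them straight from the flow JSON. A minimal sketch, assuming the .flow file keeps its nodes in a top-level "nodes" list with "node_id" and "type" keys (the exact layout can vary by Data Wrangler version):

[ ]:
# Inspect the flow file to list its node IDs; key names are assumptions about the .flow layout.
import json

with open("01_music_dataprep.flow") as f:
    flow_doc = json.load(f)

for node in flow_doc.get("nodes", []):
    print(node.get("node_id"), node.get("type"))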

[ ]:
from sagemaker.processing import FeatureStoreOutput

# Output name is auto-generated from the select node's ID + output name from the .flow file
output_names = {
    "track-features-music-rec": "19ad8e80-2002-4ee9-9753-fe9a384b1166.default",
    "user-5star-track-features-music-rec": "7a6dad19-2c80-43e3-b03d-ec23c3842ae9.default",
    "ratings-features-music-rec": "9a283380-91ca-478e-be99-6ba3bf57c680.default",
}

processing_job_outputs = {}

for feature_group_name in feature_group_names:
    processing_job_output = ProcessingOutput(
        output_name=output_names[feature_group_name],
        app_managed=True,
        feature_store_output=FeatureStoreOutput(feature_group_name=feature_group_name),
    )
    processing_job_outputs[feature_group_name] = processing_job_output

Upload Flow to S3


To use the Data Wrangler flow as an input to the processing job, first upload your flow file to Amazon S3.

[ ]:
import os
import json
import boto3

# name of the flow file which should exist in the current notebook working directory
flow_file_name = "01_music_dataprep.flow"

# Load .flow file from current notebook working directory
!echo "Loading flow file from current notebook working directory: $PWD"

with open(flow_file_name) as f:
    flow = json.load(f)

# Upload flow to S3
s3_client = boto3.client("s3")
s3_client.upload_file(
    flow_file_name, bucket, f"{prefix}/data_wrangler_flows/{flow_export_name}.flow"
)

flow_s3_uri = f"s3://{bucket}/{prefix}/data_wrangler_flows/{flow_export_name}.flow"

print(f"Data Wrangler flow {flow_file_name} uploaded to {flow_s3_uri}")

The Data Wrangler Flow is also provided to the Processing Job as an input source which we configure below.

[ ]:
## Input - Flow: 01_music_dataprep.flow
flow_input = ProcessingInput(
    source=flow_s3_uri,
    destination="/opt/ml/processing/flow",
    input_name="flow",
    s3_data_type="S3Prefix",
    s3_input_mode="File",
    s3_data_distribution_type="FullyReplicated",
)

Run Processing Job


Job Configurations

💡 Configurable Settings

You can configure the following settings for processing jobs. If you change any configuration, you will need to re-execute this cell and all cells below it by selecting Run Selected Cells and All Below from the Run menu above.

  1. IAM role for executing the processing job.

  2. A unique name of the processing job. Give a unique name every time you re-execute processing jobs

  3. Data Wrangler Container URL.

  4. Instance count, instance type and storage volume size in GB.

  5. Content type for each output. Data Wrangler supports CSV as default and Parquet.

  6. Network Isolation settings

[ ]:
# Data Wrangler Container URL.
container_uri = sagemaker.image_uris.retrieve(framework="data-wrangler", region=region)

# Processing Job Instance count and instance type.
instance_count = 2
instance_type = "ml.m5.4xlarge"

# Size in GB of the EBS volume to use for storing data during processing
volume_size_in_gb = 30

# Content type for each output. Data Wrangler supports CSV as default and Parquet.
output_content_type = "CSV"

# Network Isolation mode; default is off
enable_network_isolation = False

Create Processing Job

To launch a processing job, you will use the SageMaker Python SDK to create a Processor object.

[ ]:
from sagemaker.processing import Processor
from sagemaker.network import NetworkConfig

processor = Processor(
    role=iam_role,
    image_uri=container_uri,
    instance_count=instance_count,
    instance_type=instance_type,
    volume_size_in_gb=volume_size_in_gb,
    network_config=NetworkConfig(enable_network_isolation=enable_network_isolation),
    sagemaker_session=sess,
)

Job Status & S3 Output Location

Below you wait for each processing job to finish. If it finishes successfully, your feature groups should be populated with the transformed feature values. In addition, the raw parameters used by each processing job are printed.

[ ]:
%%time

for feature_group_name in feature_group_names:
    print(f"Processing {feature_group_name}")
    # Unique processing job name. Give a unique name every time you re-execute processing jobs
    processing_job_name = "dw-flow-proc-music-rec-tracks-{}-{}".format(
        flow_export_id, str(uuid.uuid4())[:8]
    )
    print(f"{processing_job_name}")

    # Output configuration used as processing job container arguments
    output_config = {output_names[feature_group_name]: {"content_type": output_content_type}}

    # Run the processing job only if the feature group did not already exist
    if feature_group_existence[feature_group_name]:
        print(
            "Feature Group {0} already exists therefore we will not run a processing job to create it again".format(
                feature_group_name
            )
        )
    else:
        print("Creating Processing Job: {}".format(feature_group_name))
        processor.run(
            inputs=[flow_input] + data_sources,
            outputs=[processing_job_outputs[feature_group_name]],
            arguments=[f"--output-config '{json.dumps(output_config)}'"],
            wait=False,
            logs=False,
            job_name=processing_job_name,
        )

        job_result = sess.wait_for_processing_job(processing_job_name)
        print(job_result)

You can view the newly created feature groups in Studio; refer to Use Amazon SageMaker Feature Store with Amazon SageMaker Studio for a detailed guide. Learn more about SageMaker Feature Store.
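
Before querying the offline store, you can optionally confirm the feature groups exist and, because the online store is enabled, look up a single record by its identifier via the GetRecord API. A minimal sketch using the clients created earlier; the record identifier value is a made-up placeholder, and the lookup returns an empty response until ingestion has completed:

[ ]:
# Optional sanity checks -- "some-track-id" is a placeholder identifier, not a value from the dataset.
for name in feature_group_names:
    description = sagemaker_client.describe_feature_group(FeatureGroupName=name)
    print(name, description["FeatureGroupStatus"])

response = featurestore_runtime.get_record(
    FeatureGroupName="track-features-music-rec",
    RecordIdentifierValueAsString="some-track-id",
)
print(response.get("Record"))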

Fetch Data from Offline Feature Store


There are three feature groups: one each for the ratings, tracks, and user preference data. We retrieve data from all three before joining them.

[ ]:
feature_groups = []
for name in feature_group_names:
    feature_group = FeatureGroup(name=name, sagemaker_session=feature_store_session)
    feature_groups.append(feature_group)
[ ]:
s3_client = boto3.client("s3")
account_id = boto3.client("sts").get_caller_identity()["Account"]

sagemaker_role = sagemaker.get_execution_role()

s3_output_path = "s3://" + bucket
[ ]:
feature_group_s3_prefixes = []
for feature_group in feature_groups:
    feature_group_table_name = (
        feature_group.describe().get("OfflineStoreConfig").get("DataCatalogConfig").get("TableName")
    )
    feature_group_s3_prefix = (
        f"{account_id}/sagemaker/{region}/offline-store/{feature_group_table_name}"
    )
    feature_group_s3_prefixes.append(feature_group_s3_prefix)
[ ]:
# wait for data to be added to offline feature store
def wait_for_offline_store(feature_group_s3_prefix):
    print(feature_group_s3_prefix)
    offline_store_contents = None
    while offline_store_contents is None:
        objects_in_bucket = s3_client.list_objects(Bucket=bucket, Prefix=feature_group_s3_prefix)
        if "Contents" in objects_in_bucket and len(objects_in_bucket["Contents"]) > 1:
            offline_store_contents = objects_in_bucket["Contents"]
        else:
            print("Waiting for data in offline store...")
            time.sleep(60)
    print("Data available.")


for s3_prefix in feature_group_s3_prefixes:
    wait_for_offline_store(s3_prefix)
[ ]:
tables = {
    "ratings": {"feature_group": feature_groups[2], "cols": ["userId", "trackid", "rating"]},
    "tracks": {
        "feature_group": feature_groups[0],
        "cols": [
            "trackid",
            "length",
            "energy",
            "acousticness",
            "valence",
            "speechiness",
            "instrumentalness",
            "liveness",
            "tempo",
            "danceability",
            "genre_latin",
            "genre_folk",
            "genre_blues",
            "genre_rap",
            "genre_reggae",
            "genre_jazz",
            "genre_rnb",
            "genre_country",
            "genre_electronic",
            "genre_pop_rock",
        ],
    },
    "user_5star_features": {
        "feature_group": feature_groups[1],
        "cols": [
            "userId",
            "energy_5star",
            "acousticness_5star",
            "valence_5star",
            "speechiness_5star",
            "instrumentalness_5star",
            "liveness_5star",
            "tempo_5star",
            "danceability_5star",
            "genre_latin_5star",
            "genre_folk_5star",
            "genre_blues_5star",
            "genre_rap_5star",
            "genre_reggae_5star",
            "genre_jazz_5star",
            "genre_rnb_5star",
            "genre_country_5star",
            "genre_electronic_5star",
            "genre_pop_rock_5star",
        ],
    },
}

Run an Athena query against each feature group’s offline store, join the results, and split them into training and validation sets.

[ ]:
def get_train_val():
    for k, v in tables.items():
        query = v["feature_group"].athena_query()
        joined_cols = ", ".join(v["cols"])
        # limit number of datapoints for training time
        query_string = 'SELECT {} FROM "{}" LIMIT 500000'.format(joined_cols, query.table_name)
        print(query_string, "\n")

        output_location = f"s3://{bucket}/{prefix}/query_results/"
        query.run(query_string=query_string, output_location=output_location)
        query.wait()

        tables[k]["df"] = query.as_dataframe()

    ratings = tables["ratings"]["df"]
    tracks = tables["tracks"]["df"]
    user_prefs = tables["user_5star_features"]["df"]

    print("Merging datasets...")
    print(f"Ratings: {ratings.shape}\nTracks: {tracks.shape}\nUser Prefs: {user_prefs.shape}\n")

    dataset = pd.merge(ratings, tracks, on="trackid", how="inner")
    dataset = pd.merge(dataset, user_prefs, on="userId", how="inner")
    dataset.drop_duplicates(inplace=True)
    dataset.drop(["userId", "trackid"], axis=1, inplace=True)

    # split data
    from sklearn.model_selection import train_test_split

    train, val = train_test_split(dataset, test_size=0.2, random_state=42)
    print(
        "Training dataset shape: {}\nValidation dataset shape: {}\n".format(train.shape, val.shape)
    )

    return train, val
[ ]:
%%time
import pandas as pd
import glob


print("Creating training and validation sets...\n")
train, val = get_train_val()
# Write to CSV locally without headers or an index column
train.to_csv("./data/train_data.csv", header=False, index=False)
val.to_csv("./data/val_data.csv", header=False, index=False)

pd.DataFrame({"ColumnName": train.columns}).to_csv(
    "./data/train_data_headers.csv", header=False, index=False
)
pd.DataFrame({"ColumnName": val.columns}).to_csv(
    "./data/val_data_headers.csv", header=False, index=False
)
