Fraud Detection with Amazon SageMaker FeatureStore




This notebook works well with the Python 3 (Data Science) kernel.

The following policies need to be attached to the execution role:

  - AmazonSageMakerFullAccess
  - AmazonS3FullAccess

Contents

  1. Background

  2. Setup SageMaker FeatureStore

  3. Inspect Dataset

  4. Ingest Data into FeatureStore

  5. Build Training Dataset

  6. Train and Deploy the Model

  7. SageMaker FeatureStore During Inference

  8. Cleanup Resources

Background

Amazon SageMaker FeatureStore is a new SageMaker capability that makes it easy for customers to create and manage curated data for machine learning (ML) development. SageMaker FeatureStore enables data ingestion via a high TPS API and data consumption via the online and offline stores.

This notebook provides an example for the APIs provided by SageMaker FeatureStore by walking through the process of training a fraud detection model. The notebook demonstrates how the dataset’s tables can be ingested into the FeatureStore, queried to create a training dataset, and quickly accessed during inference.

Terminology

A FeatureGroup is the main resource that contains the metadata for all the data stored in SageMaker FeatureStore. A FeatureGroup contains a list of FeatureDefinitions. A FeatureDefinition consists of a name and one of the data types Integral, String, or Fractional. The FeatureGroup also contains an OnlineStoreConfig and an OfflineStoreConfig that control where the data is stored. Enabling the online store allows quick access to the latest value for a Record via the GetRecord API. The offline store, which this notebook configures, stores historical data in your S3 bucket.

Once a FeatureGroup is created, data can be added as Records. A Record can be thought of as a row in a table. Each Record has a unique RecordIdentifier along with values for all the other FeatureDefinitions in the FeatureGroup.
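
To make this concrete, the sketch below is illustrative only (the feature values are hypothetical): it shows how a single Record is represented when calling the PutRecord and GetRecord APIs, namely a list of feature name/value pairs in which the record identifier and event time appear as ordinary features.

[ ]:
# Illustrative sketch only -- the shape of a Record as passed to PutRecord / returned by GetRecord.
# Here "TransactionID" acts as the RecordIdentifier and "EventTime" as the event time feature.
example_record = [
    {"FeatureName": "TransactionID", "ValueAsString": "2990130"},
    {"FeatureName": "TransactionAmt", "ValueAsString": "49.99"},
    {"FeatureName": "EventTime", "ValueAsString": "1614000000.0"},
]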

Setup SageMaker FeatureStore

Let’s start by setting up the SageMaker Python SDK and the boto3 clients. Note that this notebook requires a boto3 version above 1.17.21.

[ ]:
import boto3
import sagemaker

original_boto3_version = boto3.__version__
%pip install 'boto3>1.17.21'
[ ]:
from sagemaker.session import Session

region = boto3.Session().region_name

boto_session = boto3.Session(region_name=region)

sagemaker_client = boto_session.client(service_name="sagemaker", region_name=region)
featurestore_runtime = boto_session.client(
    service_name="sagemaker-featurestore-runtime", region_name=region
)

feature_store_session = Session(
    boto_session=boto_session,
    sagemaker_client=sagemaker_client,
    sagemaker_featurestore_runtime_client=featurestore_runtime,
)

SageMaker FeatureStore writes the data in the OfflineStore of a FeatureGroup to an S3 bucket owned by you. To write to your S3 bucket, SageMaker FeatureStore assumes an IAM role, also owned by you, that has access to the bucket. Note that the same bucket can be re-used across FeatureGroups; data in the bucket is partitioned by FeatureGroup.

Set the default S3 bucket name; it will be referenced throughout the notebook.

[ ]:
# You can modify the following to use a bucket of your choosing
default_s3_bucket_name = feature_store_session.default_bucket()
prefix = "sagemaker-featurestore-demo"

print(default_s3_bucket_name)

Set up the IAM role. This role gives SageMaker FeatureStore access to your S3 bucket.

Note: In this example we use the default SageMaker role, assuming it has both AmazonSageMakerFullAccess and AmazonSageMakerFeatureStoreAccess managed policies. If not, please make sure to attach them to the role before proceeding.

[ ]:
from sagemaker import get_execution_role

# You can modify the following to use a role of your choosing. See the documentation for how to create this.
role = get_execution_role()
print(role)

Inspect Dataset

The provided dataset is a synthetic dataset with two tables: identity and transactions. They can be joined on the TransactionID column. The transactions table contains information about each transaction, such as the amount and whether the card is credit or debit, while the identity table contains information about the user, such as device type and browser. Every transaction appears in the transactions table, but a matching row is not always available in the identity table.

The objective of the model is to predict if a transaction is fraudulent or not, given the transaction record.

[ ]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import io

s3_client = boto3.client("s3", region_name=region)

fraud_detection_bucket_name = f"sagemaker-example-files-prod-{region}"
identity_file_key = (
    "datasets/tabular/fraud_detection/synthethic_fraud_detection_SA/sampled_identity.csv"
)
transaction_file_key = (
    "datasets/tabular/fraud_detection/synthethic_fraud_detection_SA/sampled_transactions.csv"
)

identity_data_object = s3_client.get_object(
    Bucket=fraud_detection_bucket_name, Key=identity_file_key
)
transaction_data_object = s3_client.get_object(
    Bucket=fraud_detection_bucket_name, Key=transaction_file_key
)

identity_data = pd.read_csv(io.BytesIO(identity_data_object["Body"].read()))
transaction_data = pd.read_csv(io.BytesIO(transaction_data_object["Body"].read()))

identity_data = identity_data.round(5)
transaction_data = transaction_data.round(5)

identity_data = identity_data.fillna(0)
transaction_data = transaction_data.fillna(0)

# Feature transformations for this dataset are applied before ingestion into FeatureStore.
# One hot encode card4, card6
encoded_card_bank = pd.get_dummies(transaction_data["card4"], prefix="card_bank")
encoded_card_type = pd.get_dummies(transaction_data["card6"], prefix="card_type")

transformed_transaction_data = pd.concat(
    [transaction_data, encoded_card_type, encoded_card_bank], axis=1
)
# blank spaces are not allowed in feature names
transformed_transaction_data = transformed_transaction_data.rename(
    columns={"card_bank_american express": "card_bank_american_express"}
)
[ ]:
identity_data.head()
[ ]:
transformed_transaction_data.head()
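
As an optional sanity check (a sketch that is not part of the original walkthrough, assuming the isFraud and TransactionID column names used elsewhere in this notebook), we can look at the class balance of the fraud label and count how many transactions have no matching identity record.

[ ]:
# Optional sanity checks on the loaded data (illustrative sketch).
# Class balance of the fraud label.
print(transaction_data["isFraud"].value_counts(normalize=True))

# Number of transactions without a matching identity record.
missing_identity = ~transaction_data["TransactionID"].isin(identity_data["TransactionID"])
print(f"Transactions without identity data: {missing_identity.sum()} of {len(transaction_data)}")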

Ingest Data into FeatureStore

In this step we will create the FeatureGroups representing the transaction and identity tables.

[ ]:
from time import gmtime, strftime, sleep

identity_feature_group_name = "identity-feature-group-" + strftime("%d-%H-%M-%S", gmtime())
transaction_feature_group_name = "transaction-feature-group-" + strftime("%d-%H-%M-%S", gmtime())
[ ]:
from sagemaker.feature_store.feature_group import FeatureGroup

identity_feature_group = FeatureGroup(
    name=identity_feature_group_name, sagemaker_session=feature_store_session
)
transaction_feature_group = FeatureGroup(
    name=transaction_feature_group_name, sagemaker_session=feature_store_session
)
[ ]:
import time

current_time_sec = int(round(time.time()))


def cast_object_to_string(data_frame):
    for label in data_frame.columns:
        if data_frame.dtypes[label] == "object":
            data_frame[label] = data_frame[label].astype("str").astype("string")


# cast object dtype to string. The SageMaker FeatureStore Python SDK will then map the string dtype to String feature type.
cast_object_to_string(identity_data)
cast_object_to_string(transformed_transaction_data)

# record identifier and event time feature names
record_identifier_feature_name = "TransactionID"
event_time_feature_name = "EventTime"

# append EventTime feature
identity_data[event_time_feature_name] = pd.Series(
    [current_time_sec] * len(identity_data), dtype="float64"
)
transformed_transaction_data[event_time_feature_name] = pd.Series(
    [current_time_sec] * len(transaction_data), dtype="float64"
)

# load feature definitions to the feature group. SageMaker FeatureStore Python SDK will auto-detect the data schema based on input data.
identity_feature_group.load_feature_definitions(data_frame=identity_data)
# output is suppressed
transaction_feature_group.load_feature_definitions(data_frame=transformed_transaction_data)
# output is suppressed
[ ]:
def wait_for_feature_group_creation_complete(feature_group):
    status = feature_group.describe().get("FeatureGroupStatus")
    while status == "Creating":
        print("Waiting for Feature Group Creation")
        time.sleep(5)
        status = feature_group.describe().get("FeatureGroupStatus")
    if status != "Created":
        raise RuntimeError(f"Failed to create feature group {feature_group.name}")
    print(f"FeatureGroup {feature_group.name} successfully created.")


identity_feature_group.create(
    s3_uri=f"s3://{default_s3_bucket_name}/{prefix}",
    record_identifier_name=record_identifier_feature_name,
    event_time_feature_name=event_time_feature_name,
    role_arn=role,
    enable_online_store=True,
)

transaction_feature_group.create(
    s3_uri=f"s3://{default_s3_bucket_name}/{prefix}",
    record_identifier_name=record_identifier_feature_name,
    event_time_feature_name=event_time_feature_name,
    role_arn=role,
    enable_online_store=True,
)

wait_for_feature_group_creation_complete(feature_group=identity_feature_group)
wait_for_feature_group_creation_complete(feature_group=transaction_feature_group)

Confirm the FeatureGroups have been created by using the DescribeFeatureGroup and ListFeatureGroups APIs.

[ ]:
identity_feature_group.describe()
[ ]:
transaction_feature_group.describe()
[ ]:
sagemaker_client.list_feature_groups()  # use boto client to list FeatureGroups

After the FeatureGroups have been created, we can put data into them by using the PutRecord API. This API can handle high TPS and is designed to be called concurrently by different streams. The data from all of these Put requests is buffered and written to S3 in chunks; the files appear in the offline store within a few minutes of ingestion. For this example, we accelerate ingestion by using multiple workers in parallel. It takes roughly one minute to ingest the data for each of the two FeatureGroups.

[ ]:
identity_feature_group.ingest(data_frame=identity_data, max_workers=3, wait=True)
[ ]:
transaction_feature_group.ingest(data_frame=transformed_transaction_data, max_workers=5, wait=True)
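
Under the hood, ingest issues PutRecord calls for the rows of the dataframe. For reference, the sketch below shows roughly what a single equivalent PutRecord call looks like through the boto3 runtime client; it simply re-writes the first row of the identity dataframe and is not required for the walkthrough.

[ ]:
# Illustrative sketch: a single PutRecord call, roughly equivalent to what ingest() does per row.
sample_row = identity_data.iloc[0]
featurestore_runtime.put_record(
    FeatureGroupName=identity_feature_group_name,
    Record=[
        {"FeatureName": column, "ValueAsString": str(sample_row[column])}
        for column in identity_data.columns
    ],
)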

To confirm that data has been ingested, we can quickly retrieve a record from the online store:

[ ]:
record_identifier_value = str(2990130)

featurestore_runtime.get_record(
    FeatureGroupName=transaction_feature_group_name,
    RecordIdentifierValueAsString=record_identifier_value,
)

We can also retrieve a record from each FeatureGroup in a single call by using the BatchGetRecord API:

[ ]:
featurestore_runtime.batch_get_record(
    Identifiers=[
        {
            "FeatureGroupName": identity_feature_group_name,
            "RecordIdentifiersValueAsString": ["2990130"],
        },
        {
            "FeatureGroupName": transaction_feature_group_name,
            "RecordIdentifiersValueAsString": ["2990130"],
        },
    ]
)

The SageMaker Python SDK’s FeatureGroup class also provides functionality to generate Hive DDL commands. The table schema is generated from the feature definitions: columns are named after the features, and column data types are inferred from the feature types.

[ ]:
print(identity_feature_group.as_hive_ddl())
[ ]:
print(transaction_feature_group.as_hive_ddl())

Now let’s wait for the data to appear in our offline store before moving forward to creating a dataset. This will take approximately 5 minutes.

[ ]:
account_id = boto3.client("sts").get_caller_identity()["Account"]
print(account_id)

identity_feature_group_resolved_output_s3_uri = (
    identity_feature_group.describe()
    .get("OfflineStoreConfig")
    .get("S3StorageConfig")
    .get("ResolvedOutputS3Uri")
)
transaction_feature_group_resolved_output_s3_uri = (
    transaction_feature_group.describe()
    .get("OfflineStoreConfig")
    .get("S3StorageConfig")
    .get("ResolvedOutputS3Uri")
)

identity_feature_group_s3_prefix = identity_feature_group_resolved_output_s3_uri.replace(
    f"s3://{default_s3_bucket_name}/", ""
)
transaction_feature_group_s3_prefix = transaction_feature_group_resolved_output_s3_uri.replace(
    f"s3://{default_s3_bucket_name}/", ""
)

offline_store_contents = None
while offline_store_contents is None:
    objects_in_bucket = s3_client.list_objects(
        Bucket=default_s3_bucket_name, Prefix=transaction_feature_group_s3_prefix
    )
    if "Contents" in objects_in_bucket and len(objects_in_bucket["Contents"]) > 1:
        offline_store_contents = objects_in_bucket["Contents"]
    else:
        print("Waiting for data in offline store...\n")
        sleep(60)

print("Data available.")

SageMaker FeatureStore adds metadata for each record that’s ingested into the offline store.
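
To see both the partitioned layout and these extra columns, the optional sketch below lists a few of the object keys found above and reads one Parquet file (this assumes a Parquet engine such as pyarrow is available in the kernel); the appended metadata columns should include write_time, api_invocation_time, and is_deleted.

[ ]:
# Optional sketch: inspect the offline store output directly.
# The object keys show the event-time partitioning (year=/month=/day=/hour=).
for s3_object in offline_store_contents[:3]:
    print(s3_object["Key"])

# Read one Parquet file to see the metadata columns appended by FeatureStore.
sample_object = s3_client.get_object(
    Bucket=default_s3_bucket_name, Key=offline_store_contents[0]["Key"]
)
sample_df = pd.read_parquet(io.BytesIO(sample_object["Body"].read()))
print(sample_df.columns.tolist())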

Build Training Dataset

SageMaker FeatureStore automatically builds an AWS Glue Data Catalog for FeatureGroups (this can optionally be turned on or off when creating a FeatureGroup). In this example, we want to create one training dataset with FeatureValues from both the identity and transaction FeatureGroups. This is done by utilizing the auto-built Data Catalog: we run an Athena query that joins the data from the two FeatureGroups stored in the offline store in S3.

[ ]:
identity_query = identity_feature_group.athena_query()
transaction_query = transaction_feature_group.athena_query()

identity_table = identity_query.table_name
transaction_table = transaction_query.table_name

query_string = (
    'SELECT * FROM "'
    + transaction_table
    + '" LEFT JOIN "'
    + identity_table
    + '" ON "'
    + transaction_table
    + '".transactionid = "'
    + identity_table
    + '".transactionid'
)
print("Running " + query_string)

# run Athena query. The output is loaded to a Pandas dataframe.
# dataset = pd.DataFrame()
identity_query.run(
    query_string=query_string,
    output_location="s3://" + default_s3_bucket_name + "/" + prefix + "/query_results/",
)
identity_query.wait()
dataset = identity_query.as_dataframe()

dataset
[ ]:
# Prepare query results for training.
query_execution = identity_query.get_query_execution()
query_result = (
    "s3://"
    + default_s3_bucket_name
    + "/"
    + prefix
    + "/query_results/"
    + query_execution["QueryExecution"]["QueryExecutionId"]
    + ".csv"
)
print(query_result)

# Select useful columns for training with target column as the first.
dataset = dataset[
    [
        "isfraud",
        "transactiondt",
        "transactionamt",
        "card1",
        "card2",
        "card3",
        "card5",
        "card_type_credit",
        "card_type_debit",
        "card_bank_american_express",
        "card_bank_discover",
        "card_bank_mastercard",
        "card_bank_visa",
        "id_01",
        "id_02",
        "id_03",
        "id_04",
        "id_05",
    ]
]

# Write to csv in S3 without headers and index column.
dataset.to_csv("dataset.csv", header=False, index=False)
s3_client.upload_file("dataset.csv", default_s3_bucket_name, prefix + "/training_input/dataset.csv")
dataset_uri_prefix = "s3://" + default_s3_bucket_name + "/" + prefix + "/training_input/"

dataset

Train and Deploy the Model

Now it’s time to launch a training job to fit our model. We use the gradient boosting algorithm provided by the XGBoost library to fit our data. Retrieve the SageMaker XGBoost container image and construct a generic SageMaker estimator.

[ ]:
training_image = sagemaker.image_uris.retrieve("xgboost", region, "1.0-1")

Note: If the previous cell fails to retrieve the SageMaker XGBoost training image, this might be due to limited region support. To find available regions, see Docker Registry Paths for SageMaker Built-in Algorithms in the Amazon SageMaker developer guide. If you are in a region that is not listed in the documentation, you need to run the following code to manually pull the pre-built SageMaker XGBoost base image and push it to your Amazon Elastic Container Registry (ECR). You must run this manual Docker registration from a SageMaker Notebook instance, because SageMaker Studio apps themselves run in Docker containers and do not support running Docker.

Step 1: Pull, build, and push the SageMaker XGBoost container to your ECR account. The following bash script pulls the SageMaker XGBoost docker image from the us-east-2 region and pushes to your ECR.

%%bash
public_ecr=257758044811.dkr.ecr.us-east-2.amazonaws.com
image=sagemaker-xgboost
tag=1.0-1-cpu-py3

# Add the public ECR for XGBoost image to authenticated registries
aws ecr get-login-password --region us-east-2 | \
    docker login --username AWS --password-stdin $public_ecr

# Pull the XGBoost image
docker pull $public_ecr/$image:$tag

# Construct your ECR registry URI
my_region=$(aws configure get region)
my_account=$(aws sts get-caller-identity --query Account | tr -d '"')
my_ecr=$my_account.dkr.ecr.$my_region.amazonaws.com

# Authenticate your ECR
aws ecr get-login-password --region $my_region | \
    docker login --username AWS --password-stdin $my_ecr

# Create a repository in your ECR to host the XGBoost image
repository_name=sagemaker-xgboost

if aws ecr create-repository --repository-name $repository_name ; then
    echo "Repository $repository_name created!"
else
    echo "Repository $repository_name already exists!"
fi

# Push the image to your ECR
docker tag $public_ecr/$image:$tag $my_ecr/$image:$tag
docker push $my_ecr/$image:$tag

Step 2: Run the following code to construct the ECR image URI and set the training_image string.

import boto3
region = boto3.Session().region_name
account_id = boto3.client('sts').get_caller_identity()["Account"]
ecr = '{}.dkr.ecr.{}.amazonaws.com'.format(account_id, region)

training_image = ecr + '/' + 'sagemaker-xgboost:1.0-1-cpu-py3'
[ ]:
training_output_path = "s3://" + default_s3_bucket_name + "/" + prefix + "/training_output"

from sagemaker.estimator import Estimator

training_model = Estimator(
    training_image,
    role,
    instance_count=1,
    instance_type="ml.m5.2xlarge",
    volume_size=5,
    max_run=3600,
    input_mode="File",
    output_path=training_output_path,
    sagemaker_session=feature_store_session,
)
[ ]:
training_model.set_hyperparameters(objective="binary:logistic", num_round=50)

Specify the training dataset created in the Build Training Dataset section.

[ ]:
train_data = sagemaker.inputs.TrainingInput(
    dataset_uri_prefix,
    distribution="FullyReplicated",
    content_type="text/csv",
    s3_data_type="S3Prefix",
)
data_channels = {"train": train_data}
[ ]:
training_model.fit(inputs=data_channels, logs=True)

Set up Hosting for the Model

Once the training is done, we can deploy the trained model as an Amazon SageMaker real-time hosted endpoint. This will allow us to make predictions (or inference) from the model. Note that we don’t have to host on the same instance (or type of instance) that we used to train. The endpoint deployment can be accomplished as follows. This takes 8-10 minutes to complete.

[ ]:
predictor = training_model.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")

SageMaker FeatureStore During Inference

SageMaker FeatureStore can be useful for supplementing data in inference requests because of its low-latency GetRecord functionality. For this demo, we are given a TransactionID and query our online FeatureGroups for the corresponding transaction and identity data to build our inference request.

[ ]:
# Incoming inference request.
transaction_id = str(3450774)


# Helper to parse the feature value from the record.
def get_feature_value(record, feature_name):
    return str(list(filter(lambda r: r["FeatureName"] == feature_name, record))[0]["ValueAsString"])


transaction_response = featurestore_runtime.get_record(
    FeatureGroupName=transaction_feature_group_name, RecordIdentifierValueAsString=transaction_id
)
transaction_record = transaction_response["Record"]

transaction_test_data = [
    get_feature_value(transaction_record, "TransactionDT"),
    get_feature_value(transaction_record, "TransactionAmt"),
    get_feature_value(transaction_record, "card1"),
    get_feature_value(transaction_record, "card2"),
    get_feature_value(transaction_record, "card3"),
    get_feature_value(transaction_record, "card5"),
    get_feature_value(transaction_record, "card_type_credit"),
    get_feature_value(transaction_record, "card_type_debit"),
    get_feature_value(transaction_record, "card_bank_american_express"),
    get_feature_value(transaction_record, "card_bank_discover"),
    get_feature_value(transaction_record, "card_bank_mastercard"),
    get_feature_value(transaction_record, "card_bank_visa"),
]

identity_response = featurestore_runtime.get_record(
    FeatureGroupName=identity_feature_group_name, RecordIdentifierValueAsString=transaction_id
)
identity_record = identity_response["Record"]
id_test_data = [
    get_feature_value(identity_record, "id_01"),
    get_feature_value(identity_record, "id_02"),
    get_feature_value(identity_record, "id_03"),
    get_feature_value(identity_record, "id_04"),
    get_feature_value(identity_record, "id_05"),
]

# Join all pieces for inference request.
inference_request = []
inference_request.extend(transaction_test_data[:])
inference_request.extend(id_test_data[:])

inference_request
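
As an alternative (a sketch using the BatchGetRecord API shown earlier), both records can be fetched in a single round trip, which can reduce latency when an inference request needs features from several FeatureGroups.

[ ]:
# Alternative sketch: fetch both records in one call with BatchGetRecord.
batch_response = featurestore_runtime.batch_get_record(
    Identifiers=[
        {
            "FeatureGroupName": transaction_feature_group_name,
            "RecordIdentifiersValueAsString": [transaction_id],
        },
        {
            "FeatureGroupName": identity_feature_group_name,
            "RecordIdentifiersValueAsString": [transaction_id],
        },
    ]
)
# batch_response["Records"] contains one entry per FeatureGroup/record pair.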
[ ]:
import json

results = predictor.predict(",".join(inference_request), initial_args={"ContentType": "text/csv"})
prediction = json.loads(results)
print(prediction)
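
Because the model was trained with the binary:logistic objective, the endpoint returns a fraud probability between 0 and 1. A simple way to turn this into a decision is to apply a threshold; the 0.5 value below is illustrative, not tuned.

[ ]:
# Turn the returned probability into a binary decision (threshold is illustrative).
is_fraud = prediction > 0.5
print(f"Fraud probability: {prediction:.4f} -> flagged as fraud: {is_fraud}")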

Cleanup Resources

[ ]:
predictor.delete_endpoint()
[ ]:
identity_feature_group.delete()
transaction_feature_group.delete()
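
Note that deleting a FeatureGroup removes the resource itself but, to our knowledge, does not delete the offline-store objects already written to your S3 bucket. If you want a fully clean slate, the optional, commented-out sketch below removes everything written under the demo prefix (offline store data, query results, and training artifacts).

[ ]:
# Optional: permanently delete all demo objects written under the prefix in your bucket.
# Uncomment to run -- this is destructive.
# demo_bucket = boto3.resource("s3").Bucket(default_s3_bucket_name)
# demo_bucket.objects.filter(Prefix=f"{prefix}/").delete()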
[ ]:
# restore the original boto3 version recorded at the start of the notebook
%pip install boto3=={original_boto3_version}
