Preprocessing Text Data

The purpose of this notebook is to demonstrate how to preprocess text data for the next steps of feature engineering and training a machine learning model via Amazon SageMaker. In this notebook we focus on preprocessing our text data, and we use the text data ingested in the companion notebook to showcase text preprocessing methodologies. We discuss many possible methods to clean and enrich your text, but you do not need to run through every single step below. A useful rule of thumb: if you are dealing with very noisy text, such as social media posts or nursing notes, medium to heavy preprocessing effort is usually needed, and if it is a domain-specific corpus, text enrichment helps as well; if you are dealing with long, well-written documents such as news articles and papers, very light preprocessing is enough, and you can add some enrichment to the data to better capture sentence-to-sentence relationships and overall meaning.

Overview

Input Format

Labeled text data sometimes comes in a structured format. You might come across this when working on reviews for sentiment analysis, news headlines for topic modeling, or documents for text classification. One column of the dataset could be dedicated to the label, one column to the text, and other columns to additional attributes. You can process this format much like the tabular data you processed and ingested in the previous section. Text data, especially raw text data, often comes as unstructured data in .json or .txt format, and to work with it you will first need to extract the useful information from the original dataset.
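As a minimal sketch (assuming a hypothetical reviews.json file in JSON Lines format whose records contain reviewText and overall fields), extracting the label and text into a structured table might look like this:

import json

import pandas as pd

# hypothetical input: one JSON object per line, e.g.
# {"overall": 5.0, "reviewText": "Great product", "reviewerID": "A1"}
records = []
with open("reviews.json") as f:  # assumed file name
    for line in f:
        record = json.loads(line)
        records.append({"label": record.get("overall"), "text": record.get("reviewText", "")})

reviews_df = pd.DataFrame(records)  # now in the structured label/text format described above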

Use Cases

Text data contains rich information and it’s everywhere. Applicable use cases include Voice of Customer (VOC), fraud detection, warranty analysis, chatbot and customer service routing, audience analysis, and much more.

What’s the difference between preprocessing and feature engineering for text data?

In the preprocessing stage, you clean the text data and transform it from human language into a standard, machine-analyzable format for further processing. In feature engineering, you extract predictive factors (features) from the text. For example, for a task of matching equivalent question pairs, the features you can extract include word overlap, cosine similarity, inter-word relationships, parse tree structure similarity, TF-IDF (term frequency-inverse document frequency) scores, etc.; for some language models, such as topic models, word embeddings themselves can also serve as features.
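As a small illustration (a sketch, assuming scikit-learn is available in the environment), word overlap and TF-IDF cosine similarity for a hypothetical question pair could be computed like this:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

q1 = "how do i reset my password"  # hypothetical question pair
q2 = "how can i change my password"

# word overlap: shared tokens relative to the shorter question
tokens1, tokens2 = set(q1.split()), set(q2.split())
overlap = len(tokens1 & tokens2) / min(len(tokens1), len(tokens2))

# cosine similarity between the two TF-IDF vectors
tfidf = TfidfVectorizer().fit_transform([q1, q2])
similarity = cosine_similarity(tfidf[0], tfidf[1])[0, 0]

print(overlap, similarity)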

When is my text data ready for feature engineering?

When the data is ready to be vectorized and fit your specific use case.

Set Up Notebook

There are several python packages designed specifically for natural language processing (NLP) tasks. In this notebook, you will use the following packages:

  • nltk (Natural Language Toolkit), a leading platform that includes multiple text processing libraries and covers almost all aspects of preprocessing we will discuss in this section: tokenization, stemming, lemmatization, parsing, chunking, POS tagging, stop words, etc.

  • SpaCy (https://spacy.io/), which offers most of the functionality provided by nltk, plus pre-trained word vectors and models. It is scalable and designed for production usage.

  • Gensim (Generate Similar), “designed specifically for topic modeling, document indexing, and similarity retrieval with large corpora”.

  • TextBlob (https://textblob.readthedocs.io/en/dev/), which offers POS tagging, noun phrase extraction, sentiment analysis, classification, parsing, n-grams, and word inflection through a simple API for more advanced NLP tasks. It is an easy-to-use wrapper for libraries like nltk and Pattern. We will use this package for our enrichment tasks.

[69]:
%pip install -qU  'sagemaker>=2.15.0' spacy gensim textblob emot autocorrect
WARNING: You are using pip version 20.0.2; however, version 20.2.4 is available.
You should consider upgrading via the '/home/ec2-user/anaconda3/envs/python3/bin/python -m pip install --upgrade pip' command.
Note: you may need to restart the kernel to use updated packages.
[70]:
import nltk
import spacy
import gensim
from textblob import TextBlob
import re
import string
import glob
import sagemaker
[71]:
# Get SageMaker session & default S3 bucket
sagemaker_session = sagemaker.Session()
bucket = sagemaker_session.default_bucket()  # replace with your own bucket if you have one
s3 = sagemaker_session.boto_session.resource("s3")

prefix = "text_sentiment140/sentiment140"
filename = "training.1600000.processed.noemoticon.csv"

Downloading data from Online Sources

Text Data Sets: Twitter – sentiment140

The sentiment140 dataset contains 1.6M tweets that were extracted using the Twitter API. The tweets have been annotated with sentiment (0 = negative, 4 = positive) and topics (hashtags used to retrieve tweets). The dataset contains the following columns:

  • target: the polarity of the tweet (0 = negative, 4 = positive)
  • ids: the id of the tweet (2087)
  • date: the date of the tweet (Sat May 16 23:58:44 UTC 2009)
  • flag: the query (lyx). If there is no query, this value is NO_QUERY.
  • user: the user that tweeted (robotickilldozr)
  • text: the text of the tweet (Lyx is cool)

[72]:
# helper functions to upload data to s3
def write_to_s3(filename, bucket, prefix):
    # put one file in a separate folder. This is helpful if you read and prepare data with Athena
    filename_key = filename.split(".")[0]
    key = "{}/{}/{}".format(prefix, filename_key, filename)
    return s3.Bucket(bucket).upload_file(filename, key)


def upload_to_s3(bucket, prefix, filename):
    url = "s3://{}/{}/{}".format(bucket, prefix, filename)
    print("Writing to {}".format(url))
    write_to_s3(filename, bucket, prefix)
[73]:
# run this cell if you are in SageMaker Studio notebook
#!apt-get install unzip
[74]:
!wget http://cs.stanford.edu/people/alecmgo/trainingandtestdata.zip -O sentimen140.zip
# Uncompressing
!unzip -o sentimen140.zip -d sentiment140
URL transformed to HTTPS due to an HSTS policy
--2020-11-02 21:57:53--  https://cs.stanford.edu/people/alecmgo/trainingandtestdata.zip
Resolving cs.stanford.edu (cs.stanford.edu)... 171.64.64.64
Connecting to cs.stanford.edu (cs.stanford.edu)|171.64.64.64|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 81363704 (78M) [application/zip]
Saving to: ‘sentimen140.zip’

sentimen140.zip     100%[===================>]  77.59M  23.9MB/s    in 3.5s

2020-11-02 21:57:57 (22.1 MB/s) - ‘sentimen140.zip’ saved [81363704/81363704]

Archive:  sentimen140.zip
  inflating: sentiment140/testdata.manual.2009.06.14.csv
  inflating: sentiment140/training.1600000.processed.noemoticon.csv
[75]:
# upload the files to the S3 bucket
csv_files = glob.glob("sentiment140/*.csv")
for filename in csv_files:
    upload_to_s3(bucket, "text_sentiment140", filename)
Writing to s3://sagemaker-us-east-2-060356833389/text_sentiment140/sentiment140/training.1600000.processed.noemoticon.csv
Writing to s3://sagemaker-us-east-2-060356833389/text_sentiment140/sentiment140/testdata.manual.2009.06.14.csv

Read in Data

We will read the data in as .csv format since the text is embedded in a structured table.

Note: A frequent error when reading in text data is an encoding error. If reading with UTF-8 does not work, you can try different encoding options with pandas read_csv; see the Python encoding documentation for other encodings you may encounter.
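For example (a sketch with a hypothetical file name), you can fall back to a few common encodings until one succeeds:

import pandas as pd

# my_text_data.csv is a placeholder; try common encodings until one works
for encoding in ["utf-8", "ISO-8859-1", "cp1252"]:
    try:
        df = pd.read_csv("my_text_data.csv", encoding=encoding)
        print("Loaded with encoding:", encoding)
        break
    except UnicodeDecodeError:
        continue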

[76]:
import pandas as pd

prefix = "text_sentiment140/sentiment140"
filename = "training.1600000.processed.noemoticon.csv"
data_s3_location = "s3://{}/{}/{}".format(bucket, prefix, filename)  # S3 URL
# we will showcase with a smaller subset of data for demonstration purpose
text_data = pd.read_csv(
    data_s3_location, header=None, encoding="ISO-8859-1", low_memory=False, nrows=10000
)
text_data.columns = ["target", "tw_id", "date", "flag", "user", "text"]

Examine Your Text Data

Here you will explore common methods and steps for text preprocessing. Text preprocessing is highly specific to each individual corpus and different tasks, so it is important to examine your text data first and decide what steps are necessary.

First, look at your text data. It looks like there is whitespace to trim, plus URLs, smiley faces, numbers, abbreviations, misspellings, names, etc. Tweets are no longer than 140 characters, so there is less need for document segmentation and handling sentence dependencies.

[77]:
pd.set_option("display.max_colwidth", None)  # show full content in a column
text_data["text"][:5]
[77]:
0    @switchfoot http://twitpic.com/2y1zl - Awww, that's a bummer.  You shoulda got David Carr of Third Day to do it. ;D
1        is upset that he can't update his Facebook by texting it... and might cry as a result  School today also. Blah!
2                              @Kenichan I dived many times for the ball. Managed to save 50%  The rest go out of bounds
3                                                                        my whole body feels itchy and like its on fire
4        @nationwideclass no, it's not behaving at all. i'm mad. why am i here? because I can't see you all over there.
Name: text, dtype: object

Preprocessing

Step 1: Noise Removal

Start by removing noise from the text data. Removing noise is very task-specific, so you will usually pick and choose from the following steps to process your text data based on your needs:

  • Remove formatting (HTML, markup, metadata) – e.g. emails, web-scraped data
  • Extract the text from the full dataset – e.g. reviews, comments, labeled data from a nested JSON file or from structured data
  • Remove special characters
  • Remove emojis, or convert emojis to words – e.g. reviews, tweets, Instagram and Facebook comments, SMS messages
  • Remove URLs – e.g. reviews, web content, emails
  • Convert accented characters to ASCII characters – e.g. tweets, content that may contain foreign languages (see the sketch after the note below)

Note that preprocessing is an iterative process, so it is common to revisit any of these steps after you have cleaned and normalized your data.
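The accented-character conversion mentioned in the list above is not shown later in this notebook; a minimal sketch using Python's standard unicodedata module might look like this:

import unicodedata


def accented_to_ascii(text):
    # decompose accented characters (e.g. "é" -> "e" + combining accent),
    # then drop the non-ASCII combining marks
    return unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode("ascii")


print(accented_to_ascii("café naïve résumé"))  # cafe naive resume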

Here you will look at tweets and decide how you are going to process URL, emojis and emoticons.

Working with text often means dealing with regular expressions. To brush up on regex, or if you are new to it, Pythex is a helpful page with a cheatsheet and a tester for your expressions.

Noise Removal - Remove URLs

[78]:
def remove_urls(text):
    """
    This function takes a string and returns the string with URLs removed.
    Input(string): one tweet, may contain URLs
    Output(string): one tweet, URLs removed, everything else unchanged
    """
    url = re.compile(r"https?://\S+|www\.\S+")
    return url.sub(r"", text)

Let’s check if our code works with one example:

[79]:
print(text_data["text"][0])
print("Removed URL:" + remove_urls(text_data["text"][0]))
@switchfoot http://twitpic.com/2y1zl - Awww, that's a bummer.  You shoulda got David Carr of Third Day to do it. ;D
Removed URL:@switchfoot  - Awww, that's a bummer.  You shoulda got David Carr of Third Day to do it. ;D

Noise Removal - Remove emoticons, or convert emoticons to words

[80]:
from emot.emo_unicode import UNICODE_EMO, EMOTICONS
[81]:
def remove_emoticons(text):
    """
    This function takes strings containing emoticons and returns strings with emoticons removed.
    Input(string): one tweet, contains emoticons
    Output(string): one tweet, emoticons removed, everything else unchanged
    """
    emoticon = re.compile("(" + "|".join(k for k in EMOTICONS) + ")")
    return emoticon.sub(r"", text)
[82]:
def convert_emoticons(text):
    """
    This function takes strings containing emoticons and convert the emoticons to words that describe the emoticon.
    Input(string): one tweet, contains emoticons
    Output(string): one tweet, emoticons replaced with words describing the emoticon
    """
    for emot in EMOTICONS:
        text = re.sub("(" + emot + ")", " ".join(EMOTICONS[emot].replace(",", "").split()), text)
    return text

Let’s check the results with one example and decide if we should keep the emoticon:

[83]:
print("original text: " + remove_emoticons(text_data["text"][0]))
print("removed emoticons: " + convert_emoticons(text_data["text"][0]))
original text: @switchfoot http/twitpic.com/2y1zl - Awww, that's a bummer.  You shoulda got David Carr of Third Day to do it.
removed emoticons: @switchfoot httpSkeptical annoyed undecided uneasy or hesitant/twitpic.com/2y1zl - Awww, that's a bummer.  You shoulda got David Carr of Third Day to do it. Wink or smirk

Assuming our task is sentiment analysis, converting emoticons to words will be helpful. We will apply our remove_urls and convert_emoticons functions to the full dataset:

[84]:
text_data["cleaned_text"] = text_data["text"].apply(remove_urls).apply(convert_emoticons)
[85]:
text_data[["text", "cleaned_text"]][:1]
[85]:
text cleaned_text
0 @switchfoot http://twitpic.com/2y1zl - Awww, that's a bummer. You shoulda got David Carr of Third Day to do it. ;D @switchfoot - Awww, that's a bummer. You shoulda got David Carr of Third Day to do it. Wink or smirk

Step 2: Normalization

In the next step, we further process the text so that all words are put on a level playing field: all words should be in the same case, numbers should be treated as strings, abbreviations and chat words should be recognizable and replaced with the full words, and so on. This is important because we do not want two elements of our word list (dictionary) with the same meaning, such as “3” and “three”, “Our” and “our”, or “urs” and “yours”, to be treated by the machine as two unrelated words; once all words are converted to numbers (vectors), such duplicates become noise for our model. This process often includes the following steps:

Step 2.1 General Normalization

  • Convert all text to the same case
  • Remove punctuation
  • Convert numbers to words, or remove numbers, depending on your task
  • Remove white spaces
  • Convert abbreviations/slang/chat words to full words
  • Remove stop words (task-specific and general English words); you can also create your own list of stop words
  • Remove rare words
  • Spelling correction

Note: some normalization processes are better performed at the sentence and document level, while others are word-level and should happen after tokenization and segmentation, which we cover right after normalization.

Here you will convert the text to lower case, remove punctuation, remove numbers, remove white spaces, and complete other word-level processing steps after tokenizing the sentences.

Converting all text into the same case is usually a must for language preprocessing: “Word” and “word” would otherwise be treated as two different elements in the word representation, and we want words with the same meaning to be represented the same way in numbers (vectors).

[86]:
text_data["text_lower"] = text_data["cleaned_text"].str.lower()
text_data[["cleaned_text", "text_lower"]][:1]
[86]:
cleaned_text text_lower
0 @switchfoot - Awww, that's a bummer. You shoulda got David Carr of Third Day to do it. Wink or smirk @switchfoot - awww, that's a bummer. you shoulda got david carr of third day to do it. wink or smirk

Depending on your use case, you can either remove numbers or convert them into strings. If numbers are not important for your task (e.g. sentiment analysis) you can remove them; in other cases numbers are useful (e.g. dates), and you can tag them differently. In most pre-trained embeddings, numbers are treated as strings.
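If you prefer to keep the signal that a number was present, an alternative (not used below) is to replace numbers with a placeholder token; a minimal sketch:

import re


def tag_numbers(text, token="<num>"):
    # replace each run of digits with a placeholder token instead of deleting it
    return re.sub(r"\d+", token, text)


print(tag_numbers("meeting at 10 am on the 5th"))  # meeting at <num> am on the <num>th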

In this example, we are using Twitter data (tweets) and typically, numbers are not that important for understanding the meaning or content of a tweet. Therefore, we will remove the numbers.

[87]:
def remove_numbers(text):
    """
    This function takes strings containing numbers and returns strings with numbers removed.
    Input(string): one tweet, contains numbers
    Output(string): one tweet, numbers removed
    """
    return re.sub(r"\d+", "", text)
[88]:
# let's check the results of our function
remove_numbers(text_data["text_lower"][2])
[88]:
'@kenichan i dived many times for the ball. managed to save %  the rest go out of bounds'
[89]:
text_data["normalized_text"] = text_data["text_lower"].apply(remove_numbers)

We could simply remove the mentions in the tweets, but if our task is to monitor VOC, it is helpful to extract the mentions into a separate column.

[90]:
def remove_mentions(text):
    """
    This function takes strings containing mentions and returns strings with
    mentions (@ and the account name) removed.
    Input(string): one tweet, contains mentions
    Output(string): one tweet, mentions (@ and the account name mentioned) removed
    """
    mentions = re.compile(r"@\w+ ?")
    return mentions.sub(r"", text)
[91]:
print("original text: " + text_data["text_lower"][0])
print("removed mentions: " + remove_mentions(text_data["text_lower"][0]))
original text: @switchfoot  - awww, that's a bummer.  you shoulda got david carr of third day to do it. wink or smirk
removed mentions:  - awww, that's a bummer.  you shoulda got david carr of third day to do it. wink or smirk
[92]:
def extract_mentions(text):
    """
    This function takes strings containing mentions and returns strings with
    mentions (@ and the account name) extracted into a different element,
    and removes the mentions in the original sentence.
    Input(string): one sentence, contains mentions
    Output:
    one tweet (string): mentions (@ and the account name mentioned) removed
    mentions (string): (only the account name mentioned) extracted
    """
    mentions = [i[1:] for i in text.split() if i.startswith("@")]
    sentence = re.compile(r"@\w+ ?").sub(r"", text)
    return sentence, mentions
[93]:
text_data["normalized_text"], text_data["mentions"] = zip(
    *text_data["normalized_text"].apply(extract_mentions)
)
[94]:
text_data[["text", "normalized_text", "mentions"]].head(1)
[94]:
text normalized_text mentions
0 @switchfoot http://twitpic.com/2y1zl - Awww, that's a bummer. You shoulda got David Carr of Third Day to do it. ;D - awww, that's a bummer. you shoulda got david carr of third day to do it. wink or smirk [switchfoot]

We will use string.punctuation in Python to remove punctuation; it contains the following symbols: !"#$%&'()*+,-./:;<=>?@[\]^_`{|}~. You can add or remove symbols from your own list as needed.

[95]:
punc_list = string.punctuation  # you can self define list of punctuation to remove here


def remove_punctuation(text):
    """
    This function takes strings containing self defined punctuations and returns
    strings with punctuations removed.
    Input(string): one tweet, contains punctuations in the self-defined list
    Output(string): one tweet, self-defined punctuations removed
    """
    translator = str.maketrans("", "", punc_list)
    return text.translate(translator)
[96]:
remove_punctuation(text_data["normalized_text"][2])
[96]:
'i dived many times for the ball managed to save   the rest go out of bounds'
[97]:
text_data["normalized_text"] = text_data["normalized_text"].apply(remove_punctuation)

You can also use strip/trim functions to remove whitespace from the left and right, or handle extra spaces in the middle. Here we simply use the split function to extract all the words from our text (we already removed all special characters) and re-join them with a single whitespace.

[98]:
def remove_whitespace(text):
    """
    This function takes strings containing mentions and returns strings with
    whitespaces removed.
    Input(string): one tweet, contains whitespaces
    Output(string): one tweet, white spaces removed
    """
    return " ".join(text.split())
[99]:
print("original text: " + text_data["normalized_text"][2])
print("removed whitespaces: " + remove_whitespace(text_data["normalized_text"][2]))
original text: i dived many times for the ball managed to save   the rest go out of bounds
removed whitespaces: i dived many times for the ball managed to save the rest go out of bounds
[100]:
text_data["normalized_text"] = text_data["normalized_text"].apply(remove_whitespace)

Step 3: Tokenization and Segmentation

After extracting useful text data from the full dataset, we split large chunks of text (documents) into sentences, and sentences into words. Most of the time we use sentence-ending punctuation to split documents into sentences, but it can be ambiguous, especially when dealing with quoted conversations (“Are you alright?” said Ron), abbreviations (Dr. Fay would like to see Mr. Smith now.), and other special cases. There are Python libraries designed for this task (check textsplit), but you can take your own approach depending on your context.
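For longer documents (not needed for our tweets), a minimal sketch of sentence segmentation with nltk's punkt tokenizer, which handles many common abbreviations, could look like this:

import nltk
from nltk.tokenize import sent_tokenize

nltk.download("punkt")  # sentence tokenizer models

document = "Dr. Fay would like to see Mr. Smith now. Please wait in room 3."  # hypothetical document
print(sent_tokenize(document))
# expected: ['Dr. Fay would like to see Mr. Smith now.', 'Please wait in room 3.']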

Here, for Twitter data, we are only dealing with sentences shorter than 140 characters, so we just tokenize the tweets into words. Since we want to normalize sentences before tokenizing them into words, we introduced normalization first and now tokenize our tweets after normalizing the sentences.

Tokenizing Sentences into Words

[101]:
nltk.download("punkt")
[nltk_data] Downloading package punkt to /home/ec2-user/nltk_data...
[nltk_data]   Package punkt is already up-to-date!
[101]:
True
[102]:
from nltk.tokenize import word_tokenize


def tokenize_sent(text):
    """
    This function takes strings (a tweet) and returns tokenized words.
    Input(string): one tweet
    Output(list): list of words tokenized from the tweet
    """
    word_tokens = word_tokenize(text)
    return word_tokens
[103]:
text_data["tokenized_text"] = text_data["normalized_text"].apply(tokenize_sent)
[104]:
text_data[["normalized_text", "tokenized_text"]][:1]
[104]:
normalized_text tokenized_text
0 awww thats a bummer you shoulda got david carr of third day to do it wink or smirk [awww, thats, a, bummer, you, shoulda, got, david, carr, of, third, day, to, do, it, wink, or, smirk]

Continuing Word-level Normalization

Remove Stop Words

Stop words are common words that do not contribute to the meaning of a sentence, such as ‘the’, ‘a’, ‘his’. Most of the time we can remove these words without harming further analysis, but if you want to apply Part-of-Speech (POS) tagging later, be careful about what you remove in this step, as stop words can provide valuable information. You can also add stop words to the list based on your use case.

[105]:
nltk.download("stopwords")
from nltk.corpus import stopwords
[nltk_data] Downloading package stopwords to
[nltk_data]     /home/ec2-user/nltk_data...
[nltk_data]   Package stopwords is already up-to-date!
[106]:
stopwords_list = set(stopwords.words("english"))

One way to add words to your stop words list is to check the most frequent words, especially if you are working with a domain-specific corpus whose frequent words are not always covered by the general English stop words. You can also remove rare words from your text data.

Let’s check for the most common words in our data. All the words we see in the following example are covered in general English stop words, so we will not add any additional stop words.

[107]:
from collections import Counter

counter = Counter()
for word in [w for sent in text_data["tokenized_text"] for w in sent]:
    counter[word] += 1
counter.most_common(10)
[107]:
[('i', 5317),
 ('to', 4047),
 ('the', 3264),
 ('a', 2379),
 ('my', 2271),
 ('and', 1955),
 ('is', 1819),
 ('in', 1549),
 ('it', 1495),
 ('for', 1343)]

Let’s check the rarest words now. In this example, infrequently used words mostly consist of misspelled words, which we will correct later, but we can add them to our stop words list as well.

[108]:
# least frequent words
counter.most_common()[:-10:-1]
[108]:
[('rainboot', 1),
 ('colleague', 1),
 ('jaws', 1),
 ('windsor', 1),
 ('castiel', 1),
 ('georgous', 1),
 ('thingsss', 1),
 ('howwww', 1),
 ('christopher', 1)]
[109]:
top_n = 10
bottom_n = 10
stopwords_list |= set([word for (word, count) in counter.most_common(top_n)])
stopwords_list |= set([word for (word, count) in counter.most_common()[:-(bottom_n + 1):-1]])
stopwords_list |= {"thats"}


def remove_stopwords(tokenized_text):
    """
    This function takes a list of tokenized words from a tweet, removes self-defined stop words from the list,
    and returns the list of words with stop words removed
    Input(list): a list of tokenized words from a tweet, contains stop words
    Output(list): a list of words with stop words removed
    """
    filtered_text = [word for word in tokenized_text if word not in stopwords_list]
    return filtered_text
[110]:
print(text_data["tokenized_text"][2])
print(remove_stopwords(text_data["tokenized_text"][2]))
['i', 'dived', 'many', 'times', 'for', 'the', 'ball', 'managed', 'to', 'save', 'the', 'rest', 'go', 'out', 'of', 'bounds']
['dived', 'many', 'times', 'ball', 'managed', 'save', 'rest', 'go', 'bounds']
[111]:
text_data["tokenized_text"] = text_data["tokenized_text"].apply(remove_stopwords)

Convert Abbreviations, Slang, and Chat Words into Full Words

Sometimes you will need to develop your own abbreviation/slang-to-word mapping, for example for chat data or for domain-specific data where abbreviations often have different meanings from common usage.

[112]:
chat_words_map = {
    "idk": "i do not know",
    "btw": "by the way",
    "imo": "in my opinion",
    "u": "you",
    "oic": "oh i see",
}
chat_words_list = set(chat_words_map)
[113]:
def translator(text):
    """
    This function takes a list of tokenized words, finds the chat words in the self-defined chat words list,
    and replace the chat words with the mapped full expressions. It returns the list of tokenized words with
    chat words replaced.
    Input(list): a list of tokenized words from a tweet, contains chat words
    Output(list): a list of words with chat words replaced by full expressions
    """
    new_text = []
    for w in text:
        if w in chat_words_list:
            new_text = new_text + chat_words_map[w].split()
        else:
            new_text.append(w)
    return new_text
[114]:
print(text_data["tokenized_text"][13])
print(translator(text_data["tokenized_text"][13]))
['counts', 'idk', 'either', 'never', 'talk', 'anymore']
['counts', 'i', 'do', 'not', 'know', 'either', 'never', 'talk', 'anymore']
[115]:
text_data["tokenized_text"] = text_data["tokenized_text"].apply(translator)

Spelling Correction

Some common spelling correction packages include SpellChecker and autocorrect. Spell checking every sentence can take some time, so decide whether it is absolutely necessary. If you are dealing with documents (news, papers, articles), it is generally not necessary; if you are dealing with chat data, reviews, or notes, it might be a good idea to spell check your text.
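As an alternative sketch (assuming the pyspellchecker package is installed; it is not part of the pip install cell above), SpellChecker can be used in a similar way to the autocorrect approach below:

from spellchecker import SpellChecker

spell_checker = SpellChecker()


def spelling_correct_alt(tokenized_text):
    # only look up words the checker considers misspelled; keep the rest unchanged
    misspelled = spell_checker.unknown(tokenized_text)
    return [(spell_checker.correction(w) or w) if w in misspelled else w for w in tokenized_text]


print(spelling_correct_alt(["shoulda", "got", "davd", "carr"]))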

[116]:
from autocorrect import Speller
[117]:
spell = Speller(lang="en", fast=True)


def spelling_correct(tokenized_text):
    """
    This function takes a list of tokenized words from a tweet, spell checks every word, and returns the
    corrected words where applicable. Note that not every misspelled word will be identified, especially
    for tweets.
    Input(list): a list of tokenized words from a tweet, contains wrong-spelling words
    Output(list): a list of corrected words
    """
    corrected = [spell(word) for word in tokenized_text]
    return corrected
[118]:
print(text_data["tokenized_text"][0])
print(spelling_correct(text_data["tokenized_text"][0]))
['awww', 'bummer', 'shoulda', 'got', 'david', 'carr', 'third', 'day', 'wink', 'smirk']
['www', 'summer', 'should', 'got', 'david', 'carr', 'third', 'day', 'wink', 'smirk']
[119]:
text_data["tokenized_text"] = text_data["tokenized_text"].apply(spelling_correct)

Step 3.2 Stemming and Lemmatization

Stemming is the process of removing affixes from a word to get the word stem, while lemmatization can, in principle, select the appropriate lemma depending on the context. The difference is that a stemmer operates on a single word without knowledge of the context and therefore cannot discriminate between words that have different meanings depending on part of speech. However, stemmers are typically easier to implement and run faster, and the reduced accuracy may not matter for some applications.

Stemming

There are several stemming algorithms available; the most popular are Porter, Lancaster, and Snowball. Porter is the most common, Snowball is an improvement over Porter, and Lancaster is more aggressive. You can check the nltk documentation for more algorithms.
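As a quick sketch, you can compare the three stemmers on the same words to see the difference in aggressiveness:

from nltk.stem import LancasterStemmer, PorterStemmer, SnowballStemmer

words = ["running", "generously", "maximum", "organization"]
for name, algo in [
    ("porter", PorterStemmer()),
    ("snowball", SnowballStemmer("english")),
    ("lancaster", LancasterStemmer()),
]:
    # Lancaster typically produces the shortest, most aggressive stems
    print(name, [algo.stem(w) for w in words])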

[120]:
from nltk.stem import SnowballStemmer
from nltk.tokenize import word_tokenize

stemmer = SnowballStemmer("english")


def stem_text(tokenized_text):
    """
    This function takes a list of tokenized words from a tweet, and returns the stemmed words by your
    defined stemmer.
    Input(list): a list of tokenized words from a tweet
    Output(list): a list of stemmed words in its root form
    """
    stems = [stemmer.stem(word) for word in tokenized_text]
    return stems

Lemmatization

[121]:
nltk.download("wordnet")
[nltk_data] Downloading package wordnet to /home/ec2-user/nltk_data...
[nltk_data]   Package wordnet is already up-to-date!
[121]:
True
[122]:
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize

lemmatizer = WordNetLemmatizer()


def lemmatize_text(tokenized_text):
    """
    This function takes a list of tokenized words from a tweet, and returns the lemmatized words.
    you can also provide context for lemmatization, i.e. part-of-speech.
    Input(list): a list of tokenized words from a tweet
    Output(list): a list of lemmatized words in its base form
    """
    lemmas = [lemmatizer.lemmatize(word, pos="v") for word in tokenized_text]
    return lemmas

Let’s compare our stemming and lemmatization results:

Both processes return similar results, aside from some verbs being trimmed differently, so it is fine to go with stemming in this case if you are dealing with a lot of data and want better performance. You can also keep both and experiment with further feature engineering and modeling to see which one produces better results.

[123]:
print(text_data["tokenized_text"][2])
print(stem_text(text_data["tokenized_text"][2]))
print(lemmatize_text(text_data["tokenized_text"][2]))
['dived', 'many', 'times', 'ball', 'managed', 'save', 'rest', 'go', 'bounds']
['dive', 'mani', 'time', 'ball', 'manag', 'save', 'rest', 'go', 'bound']
['dive', 'many', 'time', 'ball', 'manage', 'save', 'rest', 'go', 'bound']

It seems that a stemmer can do the work for our tweets data. You can keep both and decide which one you want to use for feature engineering and modeling.

[124]:
text_data["stem_text"] = text_data["tokenized_text"].apply(stem_text)
text_data["lemma_text"] = text_data["tokenized_text"].apply(lemmatize_text)

Step 3.5: Re-examine the results

Take a pause here and examine the results from previous steps to decide if more noise removal/normalization is needed. In this case, you might want to add more words to the stop words list, spell-check more aggressively, or add more mappings to the abbreviation/slang to words list.

[125]:
text_data.sample(5)[["text", "stem_text", "lemma_text"]]
[125]:
text stem_text lemma_text
1972 feeling very poorly and sorry for myself. Can't swallow, ow Stupid glands. [feel, poor, sorri, cant, swallow, ow, stupid, gland] [feel, poorly, sorry, cant, swallow, ow, stupid, glands]
5625 @LorettaK @HeatherShorter Seriously though - there are 6 pairs of shoes in that fedex box, all bought recently [serious, though, pair, shoe, fedex, box, bought, recent] [seriously, though, pair, shoe, fedex, box, buy, recently]
7138 was late to work and hopes she is not in trouble... [late, work, hope, troubl] [late, work, hop, trouble]
1326 has to return the shirt she bought from Topshop bc she has $50 in her bank account that has to last her the rest of the month, life sucks [return, shirt, bought, topshop, bc, bank, account, last, rest, month, life, suck] [return, shirt, buy, topshop, bc, bank, account, last, rest, month, life, suck]
324 @ridley1013 I agree. The shapeshifting is a copout. I was so excited for Angela's ep, I thought it was this week. Noah was awesome tho! [agre, shapeshift, copout, excit, angel, ep, thought, week, noah, awesom, tho] [agree, shapeshifting, copout, excite, angels, ep, think, week, noah, awesome, tho]

Step 4: Enrichment and Augmentation

After you have cleaned and tokenized your text data into a standard form, you might want to enrich it with useful information that is not provided directly in the original text or its single-word form. For example:

  • Part-of-speech tagging
  • Extracting phrases
  • Named entity recognition
  • Dependency parsing
  • Word-level embeddings

Many Python packages, including nltk, SpaCy, and CoreNLP, provide these capabilities; here we will use TextBlob to illustrate some enrichment methods.

Part-of-Speech tagging assigns each word a tag according to its syntactic function (noun, verb, adjective, etc.).

[126]:
nltk.download("averaged_perceptron_tagger")
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data]     /home/ec2-user/nltk_data...
[nltk_data]   Package averaged_perceptron_tagger is already up-to-
[nltk_data]       date!
[126]:
True
[127]:
text_example = text_data.sample()["lemma_text"]
text_example
[127]:
5444    [workingggggggg, ughhh, phone, wont, let, twitter]
Name: lemma_text, dtype: object
[128]:
from textblob import TextBlob

result = TextBlob(" ".join(text_example.values[0]))
print(result.tags)
[('workingggggggg', 'NN'), ('ughhh', 'JJ'), ('phone', 'NN'), ('wont', 'NN'), ('let', 'NN'), ('twitter', 'NN')]

Sometimes words come in phrases (noun phrases, verb phrases, etc.) that carry a distinct grammatical meaning. In this case, extract those words as phrases rather than as separate words.

[129]:
nltk.download("brown")
[nltk_data] Downloading package brown to /home/ec2-user/nltk_data...
[nltk_data]   Package brown is already up-to-date!
[129]:
True
[130]:
# original text:
text_example = text_data.sample()["lemma_text"]
" ".join(text_example.values[0])
[130]:
'sad kutner kill far show house'
[131]:
# noun phrases that can be extracted from this sentence
result = TextBlob(" ".join(text_example.values[0]))
for nouns in result.noun_phrases:
    print(nouns)
sad kutner
show house

You can use pre-trained/pre-defined named entity recognition (NER) models to find named entities in text and classify them into pre-defined categories. You can also train your own NER model, especially if you are dealing with a domain-specific context.

[132]:
nltk.download("maxent_ne_chunker")
nltk.download("words")
[nltk_data] Downloading package maxent_ne_chunker to
[nltk_data]     /home/ec2-user/nltk_data...
[nltk_data]   Package maxent_ne_chunker is already up-to-date!
[nltk_data] Downloading package words to /home/ec2-user/nltk_data...
[nltk_data]   Package words is already up-to-date!
[132]:
True
[133]:
text_example_enr = text_data.sample()["lemma_text"].values[0]
print("original text: " + " ".join(text_example_enr))
original text: uh oh think get sick
[134]:
from nltk import pos_tag, ne_chunk

print(ne_chunk(pos_tag(text_example_enr)))
(S uh/JJ oh/MD think/VB get/VB sick/JJ)
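Dependency parsing and word-level embeddings from the enrichment list above are not covered by the TextBlob examples; a minimal sketch with spaCy (assuming the small English model has been downloaded with python -m spacy download en_core_web_sm) might look like this:

import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the model has been downloaded
doc = nlp("my whole body feels itchy and like its on fire")  # a tweet from the dataset above

# dependency parse: each token, its syntactic relation, and the head it attaches to
for token in doc:
    print(token.text, token.dep_, token.head.text)

# word-level embeddings are available via token.vector when using a model that ships
# with word vectors, e.g. en_core_web_md; the sm model does not include pretrained vectors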

Final Dataset ready for feature engineering and modeling

For this notebook you cleaned and normalized the data, kept mentions in a separate column, and produced both stemmed and lemmatized versions of the tokenized words. You can experiment with these two results to see which one gives you better model performance.

Twitter data is short and often does not have complex syntactic structure, so no enrichment (POS tagging, parsing, etc.) was applied at this time; you can experiment with enrichment when you have more complicated text data.

[135]:
text_data.head(2)
[135]:
target tw_id date flag user text cleaned_text text_lower normalized_text mentions tokenized_text stem_text lemma_text
0 0 1467810369 Mon Apr 06 22:19:45 PDT 2009 NO_QUERY _TheSpecialOne_ @switchfoot http://twitpic.com/2y1zl - Awww, that's a bummer. You shoulda got David Carr of Third Day to do it. ;D @switchfoot - Awww, that's a bummer. You shoulda got David Carr of Third Day to do it. Wink or smirk @switchfoot - awww, that's a bummer. you shoulda got david carr of third day to do it. wink or smirk awww thats a bummer you shoulda got david carr of third day to do it wink or smirk [switchfoot] [www, summer, should, got, david, carr, third, day, wink, smirk] [www, summer, should, got, david, carr, third, day, wink, smirk] [www, summer, should, get, david, carr, third, day, wink, smirk]
1 0 1467810672 Mon Apr 06 22:19:49 PDT 2009 NO_QUERY scotthamilton is upset that he can't update his Facebook by texting it... and might cry as a result School today also. Blah! is upset that he can't update his Facebook by texting it... and might cry as a result School today also. Blah! is upset that he can't update his facebook by texting it... and might cry as a result school today also. blah! is upset that he cant update his facebook by texting it and might cry as a result school today also blah [] [upset, cant, update, facebook, texting, might, cry, result, school, today, also, blah] [upset, cant, updat, facebook, text, might, cri, result, school, today, also, blah] [upset, cant, update, facebook, texting, might, cry, result, school, today, also, blah]

Save our final dataset to S3 for further processing

[136]:
filename_write_to = "processed_sentiment_140.csv"
text_data.to_csv(filename_write_to, index=False)
upload_to_s3(bucket, "text_sentiment140_processed", filename_write_to)
Writing to s3://sagemaker-us-east-2-060356833389/text_sentiment140_processed/processed_sentiment_140.csv

Conclusion

Congratulations! You cleaned and prepared your text data, and it is now ready to be vectorized or used for feature engineering. Now that your data is ready to be converted into a machine-readable format (numbers), we will cover extracting features and word embeddings in the next section, text data feature engineering.

[ ]: