Nathaniel Dake Blog

1. Working With Word Vectors

In the introduction section we went over some very basic NLP techniques and, more importantly, gained an understanding of how to apply basic machine learning models and APIs to a problem revolving around text data.

One of the things that we went over was the term-document matrix, and how it could be utilized to convert text data into numerical data that a model can understand. What we did not discuss is that there is a slight problem with that method, namely its simple counting process. Words such as "a", "the", "and", "in", "to", etc., tend to have a high count for ALL documents, no matter what the category is! This is a large amount of noise that will often overshadow the meaningful words.

These words are known as stopwords and one common technique is to just remove them from the dataset before doing any machine learning.
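
For instance, a minimal sketch of stopword removal might look like the following (the stopword set here is just a small illustrative sample; libraries such as NLTK ship much more complete lists):

stopwords = {'a', 'the', 'and', 'in', 'to', 'of', 'is'}

def remove_stopwords(document):
    # Keep only the tokens that are not in our stopword set
    return [token for token in document.lower().split() if token not in stopwords]

print(remove_stopwords('The cat sat in the sun and slept'))  # ['cat', 'sat', 'sun', 'slept']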

1.1 TF-IDF

However, there is another technique that we can utilize: Term Frequency-Inverse Document Frequency (TF-IDF). We will not go into the full details of TF-IDF; the gist is as follows: we know that words that appear in many documents are probably less meaningful. With this in mind, we can weight each vector component (in this case a word) by something related to how many documents that word appears in. So, intuitively speaking, we may do something like:

$$\frac{\text{raw word count}}{\text{document count}}$$

So, the numerator tells us how many times this word appears in this document, and the denominator tells us how many documents this word appears in, in total. In practice we apply some transformations on top of this, like taking the log of the counts, smoothing, and so on. However, the specific implementation isn't nearly as important for this course as the general understanding behind the process.
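
As a quick illustration of that intuition, here is a toy sketch with a made-up count matrix, and without the log scaling and smoothing that a real implementation (such as scikit-learn's TfidfVectorizer) would apply:

import numpy as np

# Toy term-document count matrix: rows = words (V), columns = documents (D).
# The vocabulary and counts are made up purely for illustration.
counts = np.array([
    [10, 12, 9],   # 'the'  -> appears many times in every document
    [ 4,  0, 0],   # 'cat'  -> appears only in document 0
    [ 0,  5, 1],   # 'dog'  -> appears in documents 1 and 2
])

# Document frequency: in how many documents does each word appear?
doc_freq = (counts > 0).sum(axis=1, keepdims=True)

# The intuitive weighting from above: raw word count / document count.
# The counts for 'the' are divided by 3, while 'cat' keeps its full count.
tfidf_naive = counts / doc_freq
print(tfidf_naive)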

1.2 Key Point

One of the most important things to keep in mind during all subsequent posts is that no matter what technique we are using, we are always interested in a matrix of size $(V \times D)$, where $V$ is the vocabulary size (the number of total words) and $D$ is the vector dimensionality. If we are doing something like counting up the total number of times a word appears in a set of books, $D$ is the total number of books.
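
To make that concrete, here is a tiny sketch (with made-up numbers) of the kind of matrix we will always be working with:

import numpy as np

V, D = 5, 3  # 5 words in the vocabulary, 3 "books" (documents)

# Made-up counts: entry (v, d) = how many times word v appears in book d.
word_matrix = np.random.randint(0, 10, size=(V, D))
print(word_matrix.shape)  # (5, 3) -> one D-dimensional vector per word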

1.3 Word Embeddings

A final thing to note: we are going to encounter the term word embedding quite a bit. This is just a fancy name for an old and relatively straightforward concept: a feature vector that represents a word. In other words, we can take a categorical object (a word in this case) and map it to a list of numbers (in other words, a vector). We say that we have embedded this word into a vector space, and that is why we call them word embeddings.

2. Word Analogy

One of the most popular applications of word embeddings is word analogies. This is where the famous king - man = queen - woman comes from.

So, we are now going to focus on two main questions:

  1. What are word analogies?
  2. How can we calculate them?

First, however, a few examples. We can start with:

King - Queen ~= Prince - Princess

Above on each side, we have a male member of the royal family minus a female member of the royal family, who is from the same generation.

France - Paris ~= Germany - Berlin

Now, on each side we have a country minus a famous city from that country.

Japan - Japanese ~= China - Chinese

Here, we have a country minus the term used to refer to the people of that country.

Brother - Sister ~= Uncle - Aunt

Now, we have a male member of the family, subtracting a close female member of the family.

Walk - Walking ~= Swim - Swimming

Finally, we can see that we were able to learn something about verb tense.

2.1 Visualizing Analogies

So, how can we actually visualize these analogies? As usual, it is very helpful to think of things geometrically. First, recall that word embedding just means word vector. In other words, if we have a grid, each word is just represented by a dot on the grid. So, what happens when we subtract one vector from another? That just yields the difference vector pointing from one to the other. If the difference of the two vectors on the left is approximately equal to the difference of the two vectors on the right, then we know that the two difference vectors are approximately the same.

Now, we know that a vector has two components: a direction and a magnitude. So, when we say that these two vectors are approximately the same, what we are really saying is that their magnitude and direction are very close to one another. This can be visualized below.
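
A rough version of that picture can be sketched with matplotlib; the 2D coordinates below are completely made up just to illustrate the geometry (real embeddings live in a much higher-dimensional space):

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical 2D "embeddings", chosen only so that king - queen ~= man - woman.
words = {
    'king':  np.array([3.0, 3.0]),
    'queen': np.array([2.0, 1.0]),
    'man':   np.array([1.5, 2.8]),
    'woman': np.array([0.5, 0.8]),
}

fig, ax = plt.subplots()
for word, (x, y) in words.items():
    ax.scatter(x, y)
    ax.annotate(word, (x, y))

# Draw the two difference vectors; they should be roughly parallel
# and of roughly equal length.
for male, female in [('king', 'queen'), ('man', 'woman')]:
    ax.annotate('', xy=words[male], xytext=words[female],
                arrowprops=dict(arrowstyle='->'))

plt.show()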

2.2 How to find analogies?

With that said, how do we actually find word analogies? Well, we know that there are four words in every analogy. So, what we can do is take three of these words and try to find the fourth word. In other words, we have an input of three words and an output of the fourth word. Notice that because we are dealing entirely with vectors, we can just rearrange our equation as follows:

$$\text{King - Man = ? - Woman}$$

$$\downarrow$$

$$\text{King - Man + Woman = ?}$$

We know that the $?$ represents Queen; however, we will refer to it as SomeVector for the time being.

$$\text{King - Man + Woman = SomeVector}$$

So, our job is to find the word that is most closely associated with SomeVector. We will do this utilizing distance. In pseudocode this may look like:

closest_distance = float('inf')
best_word = None
test_vector = king - man + woman
for word, vector in vocabulary.items():
    distance = get_distance(test_vector, vector)
    if distance < closest_distance:
        closest_distance = distance
        best_word = word

Note that utilizing a for loop will be very slow, and we will of course want to vectorize this process utilizing numpy.
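
As a quick illustration of what that vectorization might look like, here is a minimal numpy sketch. It assumes an embedding matrix of shape $(V \times D)$ and an idx2word list mapping a row index back to its word (we build exactly these structures when we load GloVe below), and for concreteness it uses squared Euclidean distance; the choice of distance function is discussed next.

import numpy as np

def find_closest_word(test_vector, embedding, idx2word):
    # Squared Euclidean distance from test_vector to every row of the
    # (V, D) embedding matrix, computed in a single vectorized operation.
    distances = ((embedding - test_vector) ** 2).sum(axis=1)
    return idx2word[distances.argmin()]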

Now, we did not define get_distance in the above pseudocode. We have a variety of options when deciding how to calculate distance. Sometimes, we will simply use Euclidean Distance:

$$\text{Euclidean Distance: } ||a - b||$$

(Its square, $||a - b||^2$, is also commonly used and gives the same nearest-neighbor ordering.)

It is also common to use the cosine distance:

$$\text{Cosine Distance: } cosine\_distance(a, b) = 1 - \frac{a^Tb}{||a|| \; ||b||}$$

In this latter form, only the angle between the two vectors matters, because:

$$a^Tb = ||a|| \; ||b|| \cos(\theta)$$

where $\theta$ is the angle between $a$ and $b$.

During training, we normalize all of the word vectors so that their length is 1. Recall that:

$$\cos(0) = 1, \; \cos(90^\circ) = 0, \; \cos(180^\circ) = -1$$

When two vectors point in similar directions (the angle between them is small), $\cos(\theta)$ is bigger. So, we want our distance to be:

$$\text{Distance} = 1 - \cos(\theta)$$

At this point we can say that all of the word embeddings lie on the unit sphere.
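
As a small numerical check of how these two distances relate (a sketch with toy vectors; the versions we actually use in code appear below): for unit-length vectors, the squared Euclidean distance equals $2(1 - \cos\theta)$, so the two metrics produce the same nearest-neighbor ordering once the vectors are normalized.

import numpy as np

def euclidean_distance(a, b):
    return np.linalg.norm(a - b)

def cosine_distance(a, b):
    return 1 - a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy vectors, normalized to unit length as described above.
a = np.array([1.0, 2.0]); a /= np.linalg.norm(a)
b = np.array([2.0, 1.0]); b /= np.linalg.norm(b)

print(cosine_distance(a, b))          # ~0.2
print(euclidean_distance(a, b) ** 2)  # ~0.4 = 2 * (1 - cos(theta))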

2.3 Why is this so cool?

One pretty interesting fact about neural word embedding algorithms is that they can find these analogies at all. Once we have covered these algorithms, specifically word2vec and GloVe, it will become clear that they have no concept of analogies; in other words, what they optimize is totally unrelated to word analogies. So, the fact that word analogies emerge out of the training process is very intriguing. Remember, in all cases we are still dealing with a $V \times D$ word embedding matrix. This is the case whether we are using raw word counts, TF-IDF, or word2vec. Yet, raw word counts and TF-IDF do not give us good analogies. So, the fact that good word analogies emerge from the model and training process of word2vec is a very interesting research area.

3. Pretrained word vectors from GloVe

We are now going to take a look at some pretrained word vectors found by researchers at Stanford, who also created the GloVe algorithm. We will be studying GloVe shortly, but I want to give some motivation and intuition before we do. We start with GloVe, rather than Word2Vec, because the vectors are provided in plain text and with a smaller vocabulary than the pretrained Word2Vec vectors. This has some advantages:


  1. It will allow us to parse the file using plain Python, so we can actually look at the file and check out the format for ourselves.
  2. Because we are loading in the word vectors manually, we will be able to store them in a way that is convenient for our purposes.

The pretrained word vectors can be found here.

In [1]:
import numpy as np
from sklearn.metrics.pairwise import pairwise_distances
In [13]:
def dist1(a, b):
    return np.linalg.norm(a - b)

def dist2(a, b):
    return 1 - a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Select a distance type
dist, metric = dist2, 'cosine'
# dist, metric = dist1, 'euclidean'

def find_analogies(w1, w2, w3):
    """Slower implementation."""
    for w in (w1, w2, w3):
        if w not in word2vec:
            print(f'{w} not in dictionary.')
            return
    king = word2vec[w1]
    man = word2vec[w2]
    woman = word2vec[w3]
    v0 = king - man + woman
    
    min_dist = float('inf')
    best_word = ''
    for word, v1 in word2vec.items():
        if word not in (w1, w2, w3):
            d = dist(v0, v1)
            if d < min_dist:
                min_dist = d
                best_word = word
    print(w1, '-', w2, '=', best_word, '-', w3)
    

def find_analogies(w1, w2, w3):
    """Faster implementation."""
    for w in (w1, w2, w3):
        if w not in word2vec:
            print("%s not in dictionary" % w)
            return

    king = word2vec[w1]
    man = word2vec[w2]
    woman = word2vec[w3]
    v0 = king - man + woman

    # Utilize scikit-learn pairwise_distances
    distances = pairwise_distances(v0.reshape(1, D), embedding, metric=metric).reshape(V)
    idxs = distances.argsort()[:4]
    for idx in idxs:
        word = idx2word[idx]
        if word not in (w1, w2, w3): 
            best_word = word
            break

    return (f'{w1} - {w2} = {best_word} - {w3}')    

def nearest_neighbors(w, n=5):
    if w not in word2vec:
        print(f'{w} not in dictionary')
        return

    v = word2vec[w]
    distances = pairwise_distances(v.reshape(1, D), embedding, metric=metric).reshape(V)
    idxs = distances.argsort()[1:n+1]
    print(f'Neighbors of: {w}')
    
    for idx in idxs:
        print(f'\t {idx2word[idx]}')
    return ''

Now we can load in the pretrained word vectors:

In [3]:
print('Loading word vectors...')
word2vec = {}
embedding = []
idx2word = []
with open('../../data/nlp/glove.6B/glove.6B.50d.txt', encoding='utf-8') as f:
    # Is just a space-separated text file in the format:
    #  - word vec[0] vec[1] vec[2] ...
    for line in f:
        values = line.split()
        word = values[0]
        vec = np.asarray(values[1:], dtype='float32')
        word2vec[word] = vec
        embedding.append(vec)
        idx2word.append(word)
print('Found %s word vectors.' % len(word2vec))
embedding = np.array(embedding)
V, D = embedding.shape
Loading word vectors...
Found 400000 word vectors.
In [14]:
print(find_analogies('king', 'man', 'woman'))
print(find_analogies('france', 'paris', 'london'))
print(find_analogies('france', 'paris', 'rome'))
print(find_analogies('paris', 'france', 'italy'))
print(find_analogies('france', 'french', 'english'))
print(find_analogies('japan', 'japanese', 'chinese'))
print(find_analogies('japan', 'japanese', 'italian'))
print(find_analogies('japan', 'japanese', 'australian'))
print(find_analogies('december', 'november', 'june'))
print(find_analogies('miami', 'florida', 'texas'))
print(find_analogies('einstein', 'scientist', 'painter'))
print(find_analogies('china', 'rice', 'bread'))
print(find_analogies('man', 'woman', 'she'))
print(find_analogies('man', 'woman', 'aunt'))
print(find_analogies('man', 'woman', 'sister'))
print(find_analogies('man', 'woman', 'wife'))
print(find_analogies('man', 'woman', 'actress'))
print(find_analogies('man', 'woman', 'mother'))
print(find_analogies('heir', 'heiress', 'princess'))
print(find_analogies('nephew', 'niece', 'aunt'))
print(find_analogies('france', 'paris', 'tokyo'))
print(find_analogies('france', 'paris', 'beijing'))
print(find_analogies('february', 'january', 'november'))
print(find_analogies('france', 'paris', 'rome'))
print(find_analogies('paris', 'france', 'italy'))

print(nearest_neighbors('king'))
print(nearest_neighbors('france'))
print(nearest_neighbors('japan'))
print(nearest_neighbors('einstein'))
print(nearest_neighbors('woman'))
print(nearest_neighbors('nephew'))
print(nearest_neighbors('february'))
print(nearest_neighbors('rome'))
king - man = queen - woman
france - paris = britain - london
france - paris = italy - rome
paris - france = rome - italy
france - french = england - english
japan - japanese = china - chinese
japan - japanese = italy - italian
japan - japanese = australia - australian
december - november = july - june
miami - florida = houston - texas
einstein - scientist = matisse - painter
china - rice = chinese - bread
man - woman = he - she
man - woman = uncle - aunt
man - woman = brother - sister
man - woman = friend - wife
man - woman = actor - actress
man - woman = father - mother
heir - heiress = queen - princess
nephew - niece = uncle - aunt
france - paris = japan - tokyo
france - paris = china - beijing
february - january = october - november
france - paris = italy - rome
paris - france = rome - italy
Neighbors of: king
	 prince
	 queen
	 ii
	 emperor
	 son

Neighbors of: france
	 french
	 belgium
	 paris
	 spain
	 netherlands

Neighbors of: japan
	 japanese
	 china
	 korea
	 tokyo
	 taiwan

Neighbors of: einstein
	 relativity
	 bohr
	 physics
	 heisenberg
	 freud

Neighbors of: woman
	 girl
	 man
	 mother
	 her
	 boy

Neighbors of: nephew
	 cousin
	 brother
	 grandson
	 son
	 uncle

Neighbors of: february
	 october
	 december
	 january
	 august
	 september

Neighbors of: rome
	 naples
	 venice
	 italy
	 turin
	 pope

So, from the above we can see that for the most part, these vectors make a lot of sense and could prove to be very powerful!

4. Text Classification with word vectors

We are now going to see how we can apply pretrained word vectors to text classification. Unlike the remaining walkthroughs in section 2, we are going to use a bag-of-words model. What this means is that "toy dog" would be represented in the same way as "dog toy", even though we know they have different meanings. In other words, we do not consider the order of the words. A few examples of bag-of-words models that we have seen already are word counting and TF-IDF.

4.1 Bag-of-Words

So, how can we apply word vectors (such as those we would get from word2vec or GloVe) to a bag-of-words model? Well, it is quite simple: we would like to represent a sentence, or document, as a single vector. First, for each word in the sentence, we can use our list of word vectors to get a vector. Then, we can just take the average of all of those word vectors; in other words, we add up all the word vectors and divide by the number of vectors.

For example, let's say that our sentence is:

"I like eggs."

Our feature vector would then be:

Feature_vector = [vec('I') + vec('like') + vec('eggs')] / 3

Mathematically we can write this as:

$$feature = \frac{1}{|sentence|} \sum_{w \in sentence} vec(w)$$
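
A minimal sketch of this averaging, assuming word2vec is a dictionary mapping words to vectors, like the GloVe dictionary we loaded above (the helper name sentence_vector is just for illustration):

import numpy as np

def sentence_vector(sentence, word2vec):
    # Average the vectors of the words we actually have embeddings for.
    vecs = [word2vec[w] for w in sentence.lower().split() if w in word2vec]
    return np.mean(vecs, axis=0) if vecs else None

# e.g. sentence_vector('I like eggs', word2vec) -> a single 50-d feature vector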

4.2 Perform Classification

At this point we are able to convert each document in our dataset into a single feature vector. Now what? Well, it is actually very simple! We can take this representation and plug it into any supervised machine learning classifier, such as Naive Bayes, Logistic Regression, Random Forest, Extra Trees (found in Scikit-Learn), etc.! This is due to the standard classification API that scikit-learn provides, namely:

model = Model()
model.fit(X_train, Y_train)
model.score(X_test, Y_test)
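
For example, here is a self-contained toy instance of that API, with random data standing in for the averaged word-vector features we will actually build below:

import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.random.randn(100, 50)           # 100 "documents", 50-d features
y = np.random.randint(0, 2, size=100)  # made-up binary labels
model = LogisticRegression(max_iter=1000)
model.fit(X, y)
print(model.score(X, y))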

4.3 What is the "meaning" of this?

One thing that you may have realized at this point is that in order for this to work, our feature vectors must be good, or else we won't get an acceptable classification accuracy. In order for our feature vectors to be good, the individual word vectors that they depend on must make sense! So, this process of performing classification is actually a method used to evaluate how good a word embedding algorithm is!

There are other methods for evaluating how good a word embedding algorithm is, such as word analogies and similarity. Of course, these evaluations are just a proxy for how well each algorithm will work on your particular problem with your particular dataset.

4.4 Text Classification in Code

For this next portion we will be utilizing a text data set that can be found here.

In [18]:
import sys
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from gensim.models import KeyedVectors
In [20]:
train = pd.read_csv('../../data/nlp/r8-train-all-terms.txt', header=None, sep='\t')
test = pd.read_csv('../../data/nlp/r8-test-all-terms.txt', header=None, sep='\t')
train.columns = ['label', 'content']
test.columns = ['label', 'content']
In [24]:
class GloveVectorizer:
    """Glove Vectorizer Fit Transform class"""
    def __init__(self):
        print('Loading word vectors...')
        word2vec = {}
        embedding = []
        idx2word = []
        with open('../../data/nlp/glove.6B/glove.6B.50d.txt', encoding='utf-8') as f:
            # Is just a space-separated text file in the format:
            #  - word vec[0] vec[1] vec[2] ...
            for line in f:
                values = line.split()
                word = values[0]
                vec = np.asarray(values[1:], dtype='float32')
                word2vec[word] = vec
                embedding.append(vec)
                idx2word.append(word)
        print(f'Found {len(word2vec)} word vectors.')
        
        self.word2vec = word2vec
        self.embedding = np.array(embedding)
        self.word2idx = {v:k for k,v in enumerate(idx2word)}
        self.V, self.D = self.embedding.shape
        
    def fit(self, data):
        pass
    
    def transform(self, data):
        X = np.zeros((len(data), self.D)) # initialize data matrix X
        n = 0  # index of the current sample (row of X)
        emptycount = 0 # how many sentences had no words we could find vectors for
        for sentence in data: # Loop through each sentence in the data
            tokens = sentence.lower().split()
            vecs = [] # stores all word vectors we encounter for this document (sentence)
            for word in tokens: # Loop through each word
                if word in self.word2vec:  # if word is in vocabulary, append its vector to vecs
                    vec = self.word2vec[word]
                    vecs.append(vec)
            if len(vecs) > 0:            # Check if vecs has any vectors. If yes, assign mean to X[n]
                vecs = np.array(vecs)
                X[n] = vecs.mean(axis=0)
            else:
                emptycount += 1
            n += 1
        print(f'Number of samples with no words found: {emptycount} / {len(data)}')
        return X
    
    def fit_transform(self, data):
        self.fit(data)
        return self.transform(data)        
In [25]:
vectorizer = GloveVectorizer() # Instantiate vectorizer

Xtrain = vectorizer.fit_transform(train.content)
Ytrain = train.label

Xtest = vectorizer.transform(test.content)
Ytest = test.label
Loading word vectors...
Found 400000 word vectors.
Number of samples with no words found: 0 / 5485
Number of samples with no words found: 0 / 2189
In [26]:
# create the model, train it, print scores
model = RandomForestClassifier(n_estimators=200)
model.fit(Xtrain, Ytrain)
print("train score:", model.score(Xtrain, Ytrain))
print("test score:", model.score(Xtest, Ytest))
train score: 0.9992707383773929
test score: 0.9346733668341709

We can see that our GloVe vectors ended up performing well in the final Random Forest classification! This suggests that the individual word vectors themselves are of good quality.


© 2018 Nathaniel Dake