Word Embedding (Prediction-Based Vectors)


Word2Vec: we hear this buzzword in the Data Science world very frequently. While researching Word2Vec, I came across a lot of resources of varying usefulness, so I thought I’d share my collection of links and notes on what they contain.

What does Word2Vec mean?
Word2vec is a group of related models that are used to produce word embeddings.
Actually, Word2Vec is not a single algorithm; it is a combination of two techniques:

  1. CBOW(Continuous bag of words) and
  2. Skip-gram model.

Both of these are shallow neural networks which map word(s) to a target variable that is also a word (or words). Both of these techniques learn weights which act as word vector representations. Let us discuss both methods separately and gain some intuition into how they work.

So, what exactly is Skip-gram?
Let’s start with a high-level insight about where we’re going. We’re going to train a simple neural network with a single hidden layer to perform a certain task, but then we’re not actually going to use that neural network for the task we trained it on! Instead, the goal is actually just to learn the weights of the hidden layer–we’ll see that these weights are actually the “word vectors” that we’re trying to learn.

Start with The Fake Task:
So now we need to talk about this “fake” task that we’re going to build the neural network to perform, and then we’ll come back later to how this indirectly gives us those word vectors that we are actually after.

We’re going to train the neural network to do the following. Given a specific word in the middle of a sentence (the input word), look at the words nearby and pick one at random. The network is going to tell us the probability for every word in our vocabulary of being the “nearby word” that we chose.

Note: When I say "nearby", there is actually a "window size" parameter to the algorithm. A typical window size might be 5, meaning 5 words behind and 5 words ahead (10 in total).

The output probabilities are going to relate to how likely it is to find each vocabulary word near our input word. For example, if you gave the trained network the input word “Soviet”, the output probabilities are going to be much higher for words like “Union” and “Russia” than for unrelated words like “watermelon” and “kangaroo”.

We’ll train the neural network to do this by feeding it word pairs found in our training documents. The example below shows some of the training samples (word pairs) we would take from the sentence “The quick brown fox jumps over the lazy dog.”, using a small window size of 2 just for the example; the first word of each pair is the input word.
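
Here’s a minimal Python sketch of how those (input word, nearby word) pairs could be generated; the `generate_pairs` helper is just a name I’ve made up for this illustration:

```python
# Sketch: generate (input word, nearby word) training pairs with a window size of 2.

def generate_pairs(sentence, window_size=2):
    words = sentence.lower().replace(".", "").split()
    pairs = []
    for i, center in enumerate(words):
        # look window_size words behind and window_size words ahead of the center word
        for j in range(max(0, i - window_size), min(len(words), i + window_size + 1)):
            if j != i:
                pairs.append((center, words[j]))
    return pairs

print(generate_pairs("The quick brown fox jumps over the lazy dog."))
# [('the', 'quick'), ('the', 'brown'), ('quick', 'the'), ('quick', 'brown'), ('quick', 'fox'), ...]
```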

The network is going to learn the statistics from the number of times each pairing shows up. So, for example, the network is probably going to get many more training samples of (“Soviet”, “Union”) than it is of (“Soviet”, “Sasquatch”). When the training is finished, if you give it the word “Soviet” as input, then it will output a much higher probability for “Union” or “Russia” than it will for “Sasquatch”.

So how is this all represented?
First of all, we know you can’t feed a word just as a text string to a neural network, so we need a way to represent the words to the network. To do this, we first build a vocabulary of words from our training documents–let’s say we have a vocabulary of 10,000 unique words.

We’re going to represent an input word like “ants” as a one-hot vector. This vector will have 10,000 components (one for every word in our vocabulary) and we’ll place a “1” in the position corresponding to the word “ants”, and 0s in all of the other positions.

The output of the network is a single vector (also with 10,000 components) containing, for every word in our vocabulary, the probability that a randomly selected nearby word is that vocabulary word.

There is no activation function on the hidden layer neurons (which means, by default, a linear activation), but the output neurons use softmax.

When training this network on word pairs, the input is a one-hot vector representing the input word and the training output is also a one-hot vector representing the output word. But when you evaluate the trained network on an input word, the output vector will actually be a probability distribution (i.e., a bunch of floating point values, not a one-hot vector).
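
To make the shapes concrete, here’s a rough NumPy sketch of that forward pass. The 10,000-word vocabulary matches the running example; the 300-dimensional hidden layer and the random (untrained) weights are just placeholder choices of mine:

```python
import numpy as np

V, N = 10000, 300                      # vocabulary size; hidden-layer size (a hyper-parameter)
rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.01, size=(V, N))    # input -> hidden weights (rows become word vectors)
W_out = rng.normal(scale=0.01, size=(N, V))   # hidden -> output weights

def softmax(z):
    e = np.exp(z - z.max())            # shift for numerical stability
    return e / e.sum()

# One-hot input vector for a word such as "ants" (index 42 is arbitrary here)
x = np.zeros(V)
x[42] = 1.0

hidden = x @ W_in                      # linear hidden layer: simply selects row 42 of W_in
probs = softmax(hidden @ W_out)        # probability of each vocab word being the "nearby" word

print(probs.shape, round(probs.sum(), 4))   # (10000,) 1.0
```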

Ok, are we ready for an exciting bit of insight into this network?
If two different words have very similar “contexts” (that is, what words are likely to appear around them), then our model needs to output very similar results for these two words. And one way for the network to output similar context predictions for these two words is if the word vectors are similar. So, if two words have similar contexts, then our network is motivated to learn similar word vectors for these two words!
Ta da!

And what does it mean for two words to have similar contexts?
I think you could expect that synonyms like “intelligent” and “smart” would have very similar contexts. Or that words that are related, like “engine” and “transmission”, would probably have similar contexts as well.

This can also handle stemming for us – the network will likely learn similar word vectors for the words “ant” and “ants” because these should have similar contexts.
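
As a side note, “similar word vectors” is usually measured with cosine similarity; here’s a tiny sketch of that computation (the two vectors are random stand-ins, not real trained embeddings):

```python
import numpy as np

def cosine_similarity(a, b):
    # 1.0 means the vectors point in the same direction, ~0.0 means unrelated
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stand-in vectors only; real ones for "intelligent" and "smart" would come from the trained weights
v_intelligent = np.array([0.80, 0.10, 0.30])
v_smart       = np.array([0.75, 0.15, 0.28])
print(cosine_similarity(v_intelligent, v_smart))   # close to 1.0 for similar words
```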

Hmm, I have a doubt: how should we understand the “skip” in the name “skip-gram model” literally?
Before this there was the bigram model, which uses only the most adjacent word to train the model. But in this case the word can be any word inside the window, so you can use any of the words inside the window, skipping over the most adjacent one. Hence, skip-gram.

I'm not sure though :P

Advantages of Skip-Gram Model:

  • The skip-gram model can capture two semantics for a single word, i.e., it will have two vector representations of “Apple”: one for the company and another for the fruit.

So, what exactly does CBOW (Continuous Bag of Words) do differently from skip-gram?
CBOW works the other way around: it tends to predict the probability of a word given a context. A context may be a single word or a group of words.

Suppose the corpus is C = “Hey, this is sample corpus using only one context word.” and we define a context window of 1. This corpus can be converted into a training set for a CBOW model by pairing each target word with its neighbouring context word and one-hot encoding both the input (context) and the target.
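
A rough Python sketch of that conversion, assuming we simply pair each target word with its immediate neighbours and one-hot encode both sides (punctuation is stripped, and the helper names are mine):

```python
import numpy as np

corpus = "Hey this is sample corpus using only one context word"   # punctuation stripped
words = corpus.lower().split()
vocab = sorted(set(words))
word2idx = {w: i for i, w in enumerate(vocab)}
V = len(vocab)                                     # V = 10 unique words

def one_hot(word):
    vec = np.zeros(V)
    vec[word2idx[word]] = 1.0
    return vec

window = 1                                         # context window of 1
training_set = []
for i, target in enumerate(words):
    for j in range(max(0, i - window), min(len(words), i + window + 1)):
        if j != i:
            training_set.append((one_hot(words[j]), one_hot(target)))   # (context, target)

print(V, len(training_set))                        # 10 vocabulary words, 18 (context, target) pairs
```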

The flow of the CBOW algorithm is as follows:

  • The input layer and the target are both one-hot encoded, of size [1 x V]. Here V = 10 in the above
    example.
  • There are two sets of weights: one between the input and the hidden layer, and a second between the hidden and output layers.
    Input-hidden layer matrix size = [V x N],
    Hidden-output layer matrix size = [N x V],
    where N is the number of dimensions we choose to represent our word in. It is arbitrary and a hyper-parameter for the neural network.
    N is also the number of neurons in the hidden layer. Here, N = 4.
  • There is no activation function on the hidden layer (i.e., by default a linear activation); as in skip-gram, the output layer uses a softmax to produce probabilities.
  • The input is multiplied by the input-hidden weights; the result is called the hidden activation. It is simply the corresponding row of the input-hidden matrix.
  • The hidden activation is multiplied by the hidden-output weights and the output is calculated.
  • The error between the output and the target is calculated and propagated back to re-adjust the weights.
  • The weights between the hidden layer and the output layer are taken as the word vector representation of the word.

We saw the above steps for a single context word.
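
Putting those steps together, here’s a minimal NumPy sketch of one CBOW training step for a single context word, with V = 10 and N = 4 as in the example above; the softmax at the output, the learning rate, and the random initialisation are my own choices for the sketch:

```python
import numpy as np

V, N = 10, 4                            # vocabulary size and hidden-layer size from the example
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(V, N)) # input -> hidden weights  [V x N]
W2 = rng.normal(scale=0.1, size=(N, V)) # hidden -> output weights [N x V]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train_step(x, y, W1, W2, lr=0.05):
    """One CBOW update: x is the one-hot context word, y is the one-hot target word."""
    h = x @ W1                          # hidden activation: the row of W1 for the context word
    y_hat = softmax(h @ W2)             # predicted probability of every vocabulary word
    e = y_hat - y                       # error between output and target
    # back-propagate the error and re-adjust both weight matrices in place
    grad_W2 = np.outer(h, e)
    grad_W1 = np.outer(x, e @ W2.T)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
    return y_hat

# Example: predict the word at index 2 from the context word at index 1
x = np.zeros(V); x[1] = 1.0
y = np.zeros(V); y[2] = 1.0
train_step(x, y, W1, W2)
# After many such updates, the rows of W1 (or columns of W2) are read off as the word vectors.
```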

Advantages of CBOW:

  • Being probabilistic in nature, it is supposed to perform better than deterministic methods (generally).
  • It is low on memory. It does not have the huge RAM requirements of a co-occurrence matrix, which needs to store three huge matrices.

Disadvantages of CBOW:

  • CBOW takes the average of the contexts of a word. For example, “Apple” can be both a fruit and a company, but CBOW takes an average of both contexts and places the word somewhere between the cluster for fruits and the cluster for companies.
  • Training a CBOW from scratch can take forever if not properly optimized.