Recurrent Neural Networks cheatsheet
Architecture of a traditional RNN ― Recurrent neural networks, also known as RNNs, are a class of neural networks that allow previous outputs to be used as inputs while maintaining hidden states.
For each timestep $t$, the activation $a^{<t>}$ and the output $y^{<t>}$ are expressed as follows:

$$a^{<t>}=g_1(W_{aa}a^{<t-1>}+W_{ax}x^{<t>}+b_a)\quad\textrm{and}\quad y^{<t>}=g_2(W_{ya}a^{<t>}+b_y)$$

where $W_{ax}, W_{aa}, W_{ya}, b_a, b_y$ are coefficients that are shared temporally and $g_1, g_2$ are activation functions.
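As a sketch of the two equations above, here is a minimal NumPy forward step for a vanilla RNN cell, assuming $g_1 = \tanh$ and $g_2 = \textrm{softmax}$ (a common but not mandatory choice); all names are illustrative:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def rnn_step(x_t, a_prev, Waa, Wax, Wya, ba, by):
    """One timestep of a vanilla RNN: returns the new hidden state and output."""
    a_t = np.tanh(Waa @ a_prev + Wax @ x_t + ba)   # g1 = tanh
    y_t = softmax(Wya @ a_t + by)                  # g2 = softmax
    return a_t, y_t
```

The same weight matrices `Waa`, `Wax`, `Wya` are reused at every timestep, which is exactly what "shared temporally" means.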
The pros and cons of a typical RNN architecture are summed up in the table below:
| Advantages | Drawbacks |
|---|---|
| • Possibility of processing input of any length<br>• Model size not increasing with size of input<br>• Computation takes into account historical information<br>• Weights are shared across time | • Computation being slow<br>• Difficulty of accessing information from a long time ago<br>• Cannot consider any future input for the current state |
Applications of RNNs ― RNN models are mostly used in the fields of natural language processing and speech recognition. The different applications are summed up in the table below:
| Type of RNN | Example |
|---|---|
| One-to-one | Traditional neural network |
| Many-to-many | Named entity recognition |
Loss function ― In the case of a recurrent neural network, the loss function $\mathcal{L}$ of all time steps is defined based on the loss at every time step as follows:

$$\mathcal{L}(\widehat{y},y)=\sum_{t=1}^{T_y}\mathcal{L}(\widehat{y}^{<t>},y^{<t>})$$
Backpropagation through time ― Backpropagation is done at each point in time. At timestep $T$, the derivative of the loss $\mathcal{L}$ with respect to the weight matrix $W$ is expressed as follows:

$$\frac{\partial\mathcal{L}^{(T)}}{\partial W}=\sum_{t=1}^{T}\left.\frac{\partial\mathcal{L}^{(T)}}{\partial W}\right|_{(t)}$$
Handling long term dependencies
Commonly used activation functions ― The most common activation functions used in RNN modules are described below:

| Sigmoid | Tanh | RELU |
|---|---|---|
| $g(z)=\dfrac{1}{1+e^{-z}}$ | $g(z)=\dfrac{e^{z}-e^{-z}}{e^{z}+e^{-z}}$ | $g(z)=\max(0,z)$ |
Vanishing/exploding gradient ― The vanishing and exploding gradient phenomena are often encountered in the context of RNNs. The reason why they happen is that it is difficult to capture long term dependencies because of multiplicative gradient that can be exponentially decreasing/increasing with respect to the number of layers.
Gradient clipping ― It is a technique used to cope with the exploding gradient problem sometimes encountered when performing backpropagation. By capping the maximum value of the gradient, this phenomenon is controlled in practice.
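A minimal sketch of gradient clipping by global norm, one common variant of the technique (clipping each component to a fixed range is another); the function name is illustrative:

```python
import numpy as np

def clip_gradients(grads, max_norm):
    """Rescale a list of gradient arrays so their global L2 norm
    does not exceed max_norm; gradients below the cap are unchanged."""
    total_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if total_norm > max_norm:
        scale = max_norm / total_norm
        grads = [g * scale for g in grads]
    return grads
```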
Types of gates ― In order to remedy the vanishing gradient problem, specific gates are used in some types of RNNs and usually have a well-defined purpose. They are usually noted $\Gamma$ and are equal to:

$$\Gamma=\sigma(Wx^{<t>}+Ua^{<t-1>}+b)$$

where $W, U, b$ are coefficients specific to the gate and $\sigma$ is the sigmoid function. The main ones are summed up in the table below:
| Type of gate | Role | Used in |
|---|---|---|
| Update gate $\Gamma_u$ | How much past should matter now? | GRU, LSTM |
| Relevance gate $\Gamma_r$ | Drop previous information? | GRU, LSTM |
| Forget gate $\Gamma_f$ | Erase a cell or not? | LSTM |
| Output gate $\Gamma_o$ | How much to reveal of a cell? | LSTM |
GRU/LSTM ― Gated Recurrent Unit (GRU) and Long Short-Term Memory units (LSTM) deal with the vanishing gradient problem encountered by traditional RNNs, with LSTM being a generalization of GRU. Below is a table summing up the characterizing equations of each architecture:
| Characterization | Gated Recurrent Unit (GRU) | Long Short-Term Memory (LSTM) |
|---|---|---|
| $\tilde{c}^{<t>}$ | $\textrm{tanh}(W_c[\Gamma_r\star a^{<t-1>},x^{<t>}]+b_c)$ | $\textrm{tanh}(W_c[\Gamma_r\star a^{<t-1>},x^{<t>}]+b_c)$ |
| $c^{<t>}$ | $\Gamma_u\star\tilde{c}^{<t>}+(1-\Gamma_u)\star c^{<t-1>}$ | $\Gamma_u\star\tilde{c}^{<t>}+\Gamma_f\star c^{<t-1>}$ |
| $a^{<t>}$ | $c^{<t>}$ | $\Gamma_o\star c^{<t>}$ |
Remark: the sign $\star$ denotes the element-wise multiplication between two vectors.
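As a sketch of a single GRU timestep in NumPy, assuming separate $(W, U, b)$ coefficients per gate rather than the concatenated form used in the equations (the two are equivalent); all parameter names are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, a_prev, params):
    """One GRU timestep. params maps gate name -> (W, U, b);
    in a GRU the hidden state a and cell c coincide."""
    Wr, Ur, br = params["r"]   # relevance gate
    Wu, Uu, bu = params["u"]   # update gate
    Wc, Uc, bc = params["c"]   # candidate cell
    gamma_r = sigmoid(Wr @ x_t + Ur @ a_prev + br)
    gamma_u = sigmoid(Wu @ x_t + Uu @ a_prev + bu)
    c_tilde = np.tanh(Wc @ x_t + Uc @ (gamma_r * a_prev) + bc)
    # Interpolate between the candidate and the previous cell state
    return gamma_u * c_tilde + (1 - gamma_u) * a_prev
```

With all parameters at zero, both gates output 0.5 and the new state is half the previous one, which makes the interpolation role of $\Gamma_u$ easy to see.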
Variants of RNNs ― The table below sums up the other commonly used RNN architectures:
• Bidirectional (BRNN)
• Deep (DRNN)
Learning word representation
In this section, we note $V$ the vocabulary and $|V|$ its size.
Motivation and notations
Representation techniques ― The two main ways of representing words are summed up in the table below:
| 1-hot representation | Word embedding |
|---|---|
| • Noted $o_w$<br>• Naive approach, no similarity information | • Noted $e_w$<br>• Takes into account words similarity |
Embedding matrix ― For a given word $w$, the embedding matrix $E$ is a matrix that maps its 1-hot representation $o_w$ to its embedding $e_w$ as follows:

$$e_w=Eo_w$$
Remark: learning the embedding matrix can be done using target/context likelihood models.
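A minimal numeric sketch of the mapping $e_w = Eo_w$, using a hypothetical 5-word vocabulary with 3-dimensional embeddings:

```python
import numpy as np

V, n = 5, 3                                      # vocabulary size, embedding dimension
E = np.arange(V * n, dtype=float).reshape(n, V)  # embedding matrix, one column per word

w = 2                             # index of the word in the vocabulary
o_w = np.zeros(V)
o_w[w] = 1.0                      # 1-hot representation of word w
e_w = E @ o_w                     # multiplying by a 1-hot vector selects column w
```

In practice the matrix product is never computed: frameworks implement this as a direct column (or row) lookup, `E[:, w]`.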
Word2vec ― Word2vec is a framework aimed at learning word embeddings by estimating the likelihood that a given word is surrounded by other words. Popular models include skip-gram, negative sampling and CBOW.
Skip-gram ― The skip-gram word2vec model is a supervised learning task that learns word embeddings by assessing the likelihood of any given target word $t$ happening with a context word $c$. By noting $\theta_t$ a parameter associated with $t$, the probability $P(t|c)$ is given by:

$$P(t|c)=\frac{\exp(\theta_t^Te_c)}{\displaystyle\sum_{j=1}^{|V|}\exp(\theta_j^Te_c)}$$
Remark: summing over the whole vocabulary in the denominator of the softmax part makes this model computationally expensive. CBOW is another word2vec model using the surrounding words to predict a given word.
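The softmax above can be sketched as follows; note that the denominator requires a sum over all $|V|$ rows of `theta`, which is exactly the cost the remark refers to (all names are illustrative):

```python
import numpy as np

def skipgram_prob(theta, e_c):
    """P(t | c) for every target word t via a softmax over the vocabulary.

    theta: (V, n) matrix whose row t is the parameter theta_t
    e_c:   (n,) embedding of the context word c
    """
    scores = theta @ e_c           # theta_t^T e_c for every t at once
    scores -= scores.max()         # numerical stability
    p = np.exp(scores)
    return p / p.sum()             # normalizing sum over the whole vocabulary
```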
Negative sampling ― It is a set of binary classifiers using logistic regression that aim at assessing how likely a given context and a given target word are to appear together, with the models being trained on sets of $k$ negative examples and 1 positive example. Given a context word $c$ and a target word $t$, the prediction is expressed by:

$$P(y=1|c,t)=\sigma(\theta_t^Te_c)$$
Remark: this method is less computationally expensive than the skip-gram model.
GloVe ― The GloVe model, short for global vectors for word representation, is a word embedding technique that uses a co-occurrence matrix $X$ where each $X_{i,j}$ denotes the number of times that a target $i$ occurred with a context $j$. Its cost function $J$ is as follows:

$$J(\theta)=\frac{1}{2}\sum_{i,j=1}^{|V|}f(X_{ij})\left(\theta_i^Te_j+b_i+b_j'-\log(X_{ij})\right)^2$$

where $f$ is a weighting function such that $f(X_{i,j})=0$ whenever $X_{i,j}=0$.
Given the symmetric roles that $e$ and $\theta$ play in this model, the final word embedding $e_w^{(\textrm{final})}$ is given by:

$$e_w^{(\textrm{final})}=\frac{e_w+\theta_w}{2}$$
Remark: the individual components of the learned word embeddings are not necessarily interpretable.
Cosine similarity ― The cosine similarity between words $w_1$ and $w_2$ is expressed as follows:

$$\textrm{similarity}=\frac{w_1\cdot w_2}{||w_1||\,||w_2||}=\cos(\theta)$$

Remark: $\theta$ is the angle between words $w_1$ and $w_2$.
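The formula above translates directly to NumPy:

```python
import numpy as np

def cosine_similarity(w1, w2):
    """cos(theta) between two word vectors, in [-1, 1]."""
    w1, w2 = np.asarray(w1, float), np.asarray(w2, float)
    return (w1 @ w2) / (np.linalg.norm(w1) * np.linalg.norm(w2))
```

Orthogonal vectors score 0, identical directions score 1, opposite directions score -1.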
$t$-SNE ― $t$-SNE ($t$-distributed Stochastic Neighbor Embedding) is a technique aimed at reducing high-dimensional embeddings into a lower dimensional space. In practice, it is commonly used to visualize word vectors in 2D space.
Language model

Overview ― A language model aims at estimating the probability of a sentence $P(y)$.
$n$-gram model ― This model is a naive approach aiming at quantifying the probability that an expression appears in a corpus by counting its number of appearances in the training data.
Perplexity ― Language models are commonly assessed using the perplexity metric, also known as PP, which can be interpreted as the inverse probability of the dataset normalized by the number of words $T$. The perplexity is such that the lower, the better, and is defined as follows:

$$\textrm{PP}=\prod_{t=1}^{T}\left(\frac{1}{\sum_{j=1}^{|V|}y_j^{(t)}\cdot\widehat{y}_j^{(t)}}\right)^{\frac{1}{T}}$$
Remark: PP is commonly used in $t$-SNE.
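Given the probability the model assigned to each word of the dataset, the definition above reduces to the exponential of the average negative log-probability; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def perplexity(word_probs):
    """Perplexity of a model that assigned probability word_probs[t]
    to the t-th word: PP = (prod_t 1/p_t)^(1/T), computed in log space
    for numerical stability."""
    word_probs = np.asarray(word_probs, float)
    return float(np.exp(-np.mean(np.log(word_probs))))
```

A model that assigns uniform probability $1/k$ to every word has perplexity exactly $k$, which is the usual sanity check.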
Machine translation

Overview ― A machine translation model is similar to a language model except it has an encoder network placed beforehand. For this reason, it is sometimes referred to as a conditional language model.
The goal is to find a sentence $y$ such that:

$$y=\underset{y^{<1>},...,y^{<T_y>}}{\textrm{arg max}}\;P(y^{<1>},...,y^{<T_y>}|x)$$
Beam search ― It is a heuristic search algorithm used in machine translation and speech recognition to find the likeliest sentence $y$ given an input $x$.

• Step 1: Find the top $B$ likely words $y^{<1>}$
• Step 2: Compute the conditional probabilities $y^{<k>}|x,y^{<1>},...,y^{<k-1>}$
• Step 3: Keep the top $B$ combinations $x,y^{<1>},...,y^{<k>}$
Remark: if the beam width is set to 1, then this is equivalent to a naive greedy search.
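The steps above can be sketched as a toy beam search over a fixed vocabulary; `step_probs` is a hypothetical callback standing in for the decoder network, which returns $P(\textrm{word}|x,\textrm{prefix})$ for each candidate next word:

```python
import math

def beam_search(step_probs, B, T):
    """Keep the B highest log-probability prefixes at each of T steps.

    step_probs(prefix) -> dict {word: P(word | x, prefix)}
    Returns the final beams as (prefix, log-probability) pairs, best first.
    """
    beams = [((), 0.0)]                       # start from the empty prefix
    for _ in range(T):
        candidates = []
        for prefix, logp in beams:
            for word, p in step_probs(prefix).items():
                candidates.append((prefix + (word,), logp + math.log(p)))
        # Step 3: keep only the top B combinations
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:B]
    return beams
```

Setting `B = 1` makes each step keep only the single best extension, which is exactly the naive greedy search mentioned in the remark.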
Beam width ― The beam width $B$ is a parameter for beam search. Large values of $B$ yield better results but with slower performance and increased memory. Small values of $B$ lead to worse results but are less computationally intensive. A standard value for $B$ is around 10.
Length normalization ― In order to improve numerical stability, beam search is usually applied on the following normalized objective, often called the normalized log-likelihood objective, defined as:

$$\textrm{Objective}=\frac{1}{T_y^{\alpha}}\sum_{t=1}^{T_y}\log\big[p(y^{<t>}|x,y^{<1>},...,y^{<t-1>})\big]$$

Remark: the parameter $\alpha$ can be seen as a softener, and its value is usually between 0.5 and 1.
Error analysis ― When obtaining a predicted translation $\widehat{y}$ that is bad, one can wonder why we did not get a good translation $y^*$ by performing the following error analysis:

| Case | $P(y^*\vert x)>P(\widehat{y}\vert x)$ | $P(y^*\vert x)\leqslant P(\widehat{y}\vert x)$ |
|---|---|---|
| Root cause | Beam search faulty | RNN faulty |
| Remedies | Increase beam width | • Try different architecture<br>• Regularize<br>• Get more data |
Bleu score ― The bilingual evaluation understudy (bleu) score quantifies how good a machine translation is by computing a similarity score based on $n$-gram precision. It is defined as follows:

$$\textrm{bleu score}=\exp\left(\frac{1}{n}\sum_{k=1}^{n}\log p_k\right)$$

where $p_k$ is the bleu score on $k$-grams only, defined as the clipped $k$-gram precision of the predicted translation $\widehat{y}$ against the reference.
Remark: a brevity penalty may be applied to short predicted translations to prevent an artificially inflated bleu score.
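A minimal sketch of the $p_k$ precisions and their geometric-mean combination, for a single reference and with the brevity penalty deliberately omitted (function names are illustrative):

```python
import math
from collections import Counter

def ngram_precision(candidate, reference, n):
    """Clipped n-gram precision p_n: each candidate n-gram counts at most
    as many times as it appears in the reference."""
    cand = Counter(tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
    ref = Counter(tuple(reference[i:i + n]) for i in range(len(reference) - n + 1))
    matched = sum(min(c, ref[g]) for g, c in cand.items())
    return matched / max(sum(cand.values()), 1)

def bleu(candidate, reference, N=4):
    """Geometric mean of p_1..p_N (no brevity penalty in this sketch)."""
    ps = [ngram_precision(candidate, reference, n) for n in range(1, N + 1)]
    if min(ps) == 0:
        return 0.0                # log undefined; score collapses to 0
    return math.exp(sum(math.log(p) for p in ps) / N)
```

A candidate identical to the reference scores 1, and a candidate sharing no words scores 0.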
Attention model ― This model allows an RNN to pay attention to specific parts of the input that are considered important, which improves the performance of the resulting model in practice. By noting $\alpha^{<t,t'>}$ the amount of attention that the output $y^{<t>}$ should pay to the activation $a^{<t'>}$ and $c^{<t>}$ the context at time $t$, we have:

$$c^{<t>}=\sum_{t'}\alpha^{<t,t'>}a^{<t'>}\quad\textrm{with}\quad\sum_{t'}\alpha^{<t,t'>}=1$$
Remark: the attention scores are commonly used in image captioning and machine translation.
Attention weight ― The amount of attention that the output $y^{<t>}$ should pay to the activation $a^{<t'>}$ is given by $\alpha^{<t,t'>}$, computed as follows:

$$\alpha^{<t,t'>}=\frac{\exp(e^{<t,t'>})}{\displaystyle\sum_{t''=1}^{T_x}\exp(e^{<t,t''>})}$$
Remark: computation complexity is quadratic with respect to $T_x$.
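The softmax over the scores $e^{<t,t'>}$ and the resulting context vector $c^{<t>}$ can be sketched as follows, for a single output position $t$ (names are illustrative):

```python
import numpy as np

def attention_context(scores, activations):
    """Attention weights and context vector for one output position t.

    scores:      (Tx,) raw alignment scores e^{<t,t'>}
    activations: (Tx, n) encoder activations a^{<t'>}
    Returns (alpha, c) where alpha sums to 1 and c = sum_t' alpha_t' a^{<t'>}.
    """
    e = np.exp(scores - scores.max())   # stable softmax over the Tx positions
    alpha = e / e.sum()                 # attention weights
    return alpha, alpha @ activations   # weighted sum of activations
```

With uniform scores the weights are all $1/T_x$ and the context is simply the mean of the encoder activations.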