<h1 id="interpreting-regularization-as-a-bayesian-prior">Interpreting Regularization as a Bayesian Prior</h1>
<p>2017-08-24</p>
<p><img src="https://raw.githubusercontent.com/rohan-varma/rohan-blog/gh-pages/images/reg.png" alt="img" /></p>
<h3 id="introductionbackground">Introduction/Background</h3>
<p>In machine learning, we often start by writing down a probabilistic model that describes our data. We then write down a likelihood or some other loss function, which we optimize to obtain the best settings of the parameters we seek to estimate. Along the way, techniques such as regularization, hyperparameter tuning, and cross-validation help ensure that we don’t overfit on our training dataset and that our model generalizes well to unseen data.</p>
<p>Specifically, we have a few key functions and variables: the underlying probability distribution <script type="math/tex">p(x, y)</script> which generates our training examples (pairs of features and labels), a training set <script type="math/tex">\{(x_i, y_i)\}_{i = 1}^{N}</script> of <script type="math/tex">N</script> examples which we observe, and a model <script type="math/tex">h(x) : x \rightarrow{} y</script> which we wish to learn in order to produce a mapping from <script type="math/tex">x</script> to <script type="math/tex">y</script>. This function <script type="math/tex">h</script> is selected from a larger function space <script type="math/tex">H</script>.</p>
<p>For example, if we are in the context of linear regression models, then all functions in the function space of <script type="math/tex">H</script> will take on the form <script type="math/tex">y_i = x_{i}^T \beta</script> where a particular setting of our parameters <script type="math/tex">\beta</script> will result in a particular <script type="math/tex">h(x)</script>. We also have some function <script type="math/tex">L(h(x), y)</script> that takes in our predictions and labels, and quantifies how accurate our model is across some data.</p>
<p>Ideally, we’d like to minimize the risk function</p>
<script type="math/tex; mode=display">R[h(x)] = \sum_{(x, y)} L( h(x), y) p(x, y)</script>
<p>across all possible <script type="math/tex">(x, y)</script> pairs. However, this is impossible since we don’t know the underlying probability distribution that describes our dataset, so instead we seek to approximate the risk function by minimizing a loss function across the data that we have observed:</p>
<script type="math/tex; mode=display">\frac{1}{N} \sum_{i = 1}^{N} L(h(x_i), y_i)</script>
<h3 id="linear-models">Linear Models</h3>
<p>If we assume that our data are roughly linear, then we can write a relationship between our features and real-valued outputs: <script type="math/tex">y_i = x_i^T \beta + \epsilon</script> where <script type="math/tex">\epsilon \sim N(0, \sigma^2)</script>. This essentially means that our data has a linear relationship that is corrupted by random Gaussian noise that has zero mean and constant variance.</p>
<p>This has the implication that <script type="math/tex">y_i</script> is a Gaussian random variable, and we can compute its expectation and variance:</p>
<script type="math/tex; mode=display">E[y_i] = E[x_i^T \beta + \epsilon] = x_i^T \beta</script>
<script type="math/tex; mode=display">Var[y_i] = Var[x_i^T \beta + \epsilon] = \sigma^2</script>
<p>We can now write down the probability of observing a value <script type="math/tex">y_i</script> given a certain set of features <script type="math/tex">x</script>:</p>
<script type="math/tex; mode=display">p(y_i | x_i) = N(y_i | x_i^T \beta, \sigma^2)</script>
<p>Next, we can write down the probability of observing the entire dataset of <script type="math/tex">(x, y)</script> pairs. This is known as the likelihood, and it’s simply the product of observing each of the individual feature, label pairs:</p>
<script type="math/tex; mode=display">L(x,y) = \prod_{i = 1}^{n} N(y_i | x_i \beta, \sigma^2)</script>
<p>As a note, writing down the likelihood this way does assume that our training data are independent and identically distributed, meaning that we are assuming that each of the training samples have the same probability distribution, and are mutually independent.</p>
<p>If we want to find the <script type="math/tex">\hat{\beta}</script> that maximizes the chance of us observing the training examples that we observed, then it makes sense to maximize the above likelihood. This is known as <strong>maximum likelihood estimation</strong>, and is a common approach to many machine learning problems such as linear and logistic regression.</p>
<p>In other words, we want to find</p>
<script type="math/tex; mode=display">\hat{\beta} = argmax_{\beta} \prod_{i = 1}^{n} N(y_i | x_i \beta, \sigma^2)</script>
<p>To simplify this a little bit, we can write out the normal distribution, and also take the log of the function, since the <script type="math/tex">\hat{\beta}</script> that maximizes <script type="math/tex">L</script> will also maximize <script type="math/tex">log(L)</script>. We end up with</p>
<script type="math/tex; mode=display">\hat{\beta} = argmax_{\beta} log \prod_{i = 1}^{n} \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-\frac{(y_i - x_i \beta)^2}{2 \sigma^2}}</script>
<p>Distributing the log and dropping constants (since they don’t affect the value of our parameter which maximizes the expression), we obtain</p>
<script type="math/tex; mode=display">\hat{\beta} = argmax_{\beta} \sum_{i = 1}^{N} -(y_i - x_i \beta)^2</script>
<p>Since minimizing the opposite of a function is the same as maximizing it, we can turn the above into a minimization problem:</p>
<script type="math/tex; mode=display">\hat{\beta} = argmin_{\beta} \sum_{i = 1}^{N} (y_i - x_i \beta)^2</script>
<p>This is the familiar least squares estimator, which says that the optimal parameter is the one that minimizes the <script type="math/tex">L2</script> squared norm between the predictions and actual values. We can use gradient descent with some initial setting of <script type="math/tex">\beta</script> and be guaranteed to get to a global minimum (since the function is convex) or we can explicitly solve for <script type="math/tex">\beta</script> and obtain the same answer.</p>
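As a quick numerical sanity check, the equivalence between the closed-form solution and gradient descent on the convex squared loss can be verified directly. This is an illustrative sketch with simulated data (the data, learning rate, and iteration count below are arbitrary choices, not from the post):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated linear data: y = X beta + Gaussian noise (illustrative values).
X = rng.normal(size=(100, 3))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=100)

# Closed-form least squares estimate: beta_hat = (X^T X)^{-1} X^T y.
beta_closed = np.linalg.solve(X.T @ X, X.T @ y)

# Gradient descent on the same squared loss converges to the same minimum,
# since the objective is convex.
beta_gd = np.zeros(3)
lr = 0.01
for _ in range(2000):
    grad = -2 * X.T @ (y - X @ beta_gd)   # gradient of the squared loss
    beta_gd -= lr * grad / len(y)

print(np.allclose(beta_closed, beta_gd, atol=1e-4))
```

Either route recovers (approximately) the parameters that generated the data.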
<p>Right now is a good time to think about the assumptions of this linear regression model. Like many models, it assumes that the data are drawn independently from the same data generating distribution. Furthermore, it assumes that this distribution is normal with a linear mean and constant variance. It also has a more implicit assumption: that the parameter <script type="math/tex">\beta</script> which we wish to estimate is not a random variable itself, and we will show how relaxing this assumption leads to a regularized linear model.</p>
<h3 id="regularization">Regularization</h3>
<p>Regularization is a popular approach to reducing a model’s predisposition to overfit on the training data and thus hopefully increasing the generalization ability of the model. Previously, we sought to learn the optimal <script type="math/tex">h(x)</script> from the space of functions <script type="math/tex">H</script>. However, if the whole function space can be explored, and our samples were observed with some amount of noise, then the model will likely select a function that overfits on the observed data. One way we can combat this is by limiting our search to a subspace within <script type="math/tex">H</script>, and this is exactly what regularization does.</p>
<p>To regularize a model, we take our loss function and add a regularizer to it. Regularizers take the form <script type="math/tex">\lambda R(\beta)</script> where <script type="math/tex">R(\beta)</script> is some function of our parameters, and <script type="math/tex">\lambda</script> is a hyperparameter describing our regularization constant. Using this rule, we can write out a regularized version of our loss function above, giving us a model known as ridge regression:</p>
<script type="math/tex; mode=display">\hat{\beta} = argmin_{\beta} \sum_{i = 1}^{N} (y_i - x_i \beta)^2 + \lambda \sum_{j} \beta_j^2</script>
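To make the effect of the regularizer concrete, here is a small NumPy sketch (toy data; the specific <script type="math/tex">\lambda</script> values are arbitrary) showing that the ridge estimate shrinks the coefficients relative to ordinary least squares, and recovers least squares at <script type="math/tex">\lambda = 0</script>:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
y = X @ np.array([1.0, 0.0, -2.0, 3.0]) + rng.normal(scale=0.5, size=50)

def ridge(X, y, lam):
    """Minimize ||y - X beta||^2 + lam * ||beta||^2 in closed form."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

beta_ols = ridge(X, y, 0.0)    # lam = 0 recovers plain least squares
beta_reg = ridge(X, y, 10.0)

# A larger lambda shrinks the coefficient vector toward zero.
print(np.linalg.norm(beta_reg) < np.linalg.norm(beta_ols))
```

The shrinkage is guaranteed: in the eigenbasis of <script type="math/tex">X^TX</script>, each ridge coefficient is the least squares coefficient scaled by a factor strictly less than one.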
<p>What’s interesting about regularization is that it can be more deeply understood if we reconsider our original probabilistic model. In our original model, we conditioned our outputs on a linear function of the parameter which we wish to learn, <script type="math/tex">\beta</script>. It turns out we often want to also consider <script type="math/tex">\beta</script> itself as a random variable, and impose a probability distribution on it. This is known as the <strong>prior</strong> probability distribution, because we assign <script type="math/tex">\beta</script> some probability without having observed the associated <script type="math/tex">(x, y)</script> pairs. Imposing a prior would be especially useful if we had some information about the parameter before observing any of the training data (possibly from domain knowledge), but it turns out that imposing a Gaussian prior even in the absence of actual prior knowledge leads to interesting properties. In particular, we can model <script type="math/tex">\beta</script> as a Gaussian with 0 mean and constant variance [1]:</p>
<script type="math/tex; mode=display">\beta \sim N(0, \lambda^{-1})</script>
<p>As a consequence, we must adjust our probability of observing a particular <script type="math/tex">(x, y)</script> pair to accommodate the probability of observing the parameter that generated this pair. We obtain a new expression for our likelihood:</p>
<script type="math/tex; mode=display">L(x,y) = N(\beta | 0, \lambda^{-1}) \prod_{i = 1}^{n} N(y_i | x_i \beta, \sigma^2)</script>
<p>Similar to the previously discussed method of maximum likelihood estimation, we can estimate the parameter <script type="math/tex">\beta</script> to be the <script type="math/tex">\hat{\beta}</script> that maximizes the above function:</p>
<script type="math/tex; mode=display">\hat{\beta} = argmax_{\beta} \sum_{i = 1}^{N} log N(y_i | x_i \beta, \sigma^2) + log N(\beta | 0, \lambda^{-1})</script>
<p>This is the maximum a posteriori estimate of <script type="math/tex">\beta</script>, and it only differs from the maximum likelihood estimate in that the former takes into account previous information, or a prior distribution, on the parameter <script type="math/tex">\beta</script>. In fact, the maximum likelihood estimate of the parameter can be seen as a special case of the maximum a posteriori estimate, where we take the prior probability distribution on the parameter to just be a constant.</p>
<p>Since (dropping unneeded constants) <script type="math/tex">N(\beta | 0, \lambda^{-1}) \propto exp(\frac{- \beta^{2}}{2 \lambda^{-1}})</script>, after taking the log and minimizing the negative of the above function, we obtain the familiar regularizer <script type="math/tex">\frac{1}{2} \lambda \beta^2</script>, while our squared loss function <script type="math/tex">\sum_{i = 1}^{N} (y_i - x_i \beta)^2</script> is the same as the loss function we obtained without regularization. In this way, <script type="math/tex">L2</script> regularization on a linear model can be thought of as imposing a Bayesian prior on the underlying parameters which we wish to estimate.</p>
<h3 id="aside-interpreting-regularization-in-the-context-of-bias-and-variance">Aside: interpreting regularization in the context of bias and variance</h3>
<p>The error of a statistical model can be decomposed into three distinct sources of error: error due to bias, error due to variance, and irreducible error. They are related as follows:</p>
<script type="math/tex; mode=display">Err(x) = Bias(x)^2 + Var(x) + \epsilon</script>
<p>Given a constant error, this means that there will always be a tradeoff between bias and variance. Having too much bias or too much variance isn’t good for a model, but for different reasons. A high bias, low variance model will likely end up being inaccurate across both the training and testing datasets, and its predictions will likely not deviate too much based on the data sample it is trained on. On the other hand, a low-bias, high-variance model will likely give good results on a training dataset, but fail to generalize as well on a testing dataset.</p>
<p>The Gauss-Markov theorem states that in a linear regression problem, the least squares estimator has the lowest variance among all unbiased linear estimators. However, if we consider biased estimators such as the estimator given by ridge regression, we can arrive at a lower-variance, higher-bias solution. In particular, the expectation of the ridge estimator (derived <a href="http://math.bu.edu/people/cgineste/classes/ma575/p/w14_1.pdf">here</a>) is given by:</p>
<script type="math/tex; mode=display">\beta - \lambda (X^TX + \lambda I)^{-1} \beta</script>
<p>The bias of an estimator is defined as the difference between the estimator’s expected value and the true parameter <script type="math/tex">\beta</script>: <script type="math/tex">bias(\hat{\beta}) = E[\hat{\beta}] - \beta</script></p>
<p>As you can see, the bias is proportional to <script type="math/tex">\lambda</script> and <script type="math/tex">\lambda = 0</script> gives us the unbiased least squares estimator since <script type="math/tex">E[\hat{\beta}] = \beta</script>. Therefore, assuming a constant total error for the least squares estimator and the ridge estimator, the variance for the ridge estimator is lower. A more complete discussion, including formal calculations for the bias and variance of the ridge estimator compared to the least squares estimator, is given <a href="http://math.bu.edu/people/cgineste/classes/ma575/p/w14_1.pdf">here</a>.</p>
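The expectation formula above implies that the bias vanishes at <script type="math/tex">\lambda = 0</script> and grows in magnitude with <script type="math/tex">\lambda</script>, which we can check numerically (toy design matrix and parameters below are made up; this is a sketch, not a derivation):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
beta = np.array([1.0, -2.0, 0.5])   # the "true" parameter, for illustration

def ridge_bias(X, beta, lam):
    # bias = E[beta_hat] - beta = -lam * (X^T X + lam I)^{-1} beta
    p = X.shape[1]
    return -lam * np.linalg.solve(X.T @ X + lam * np.eye(p), beta)

# Bias is zero at lam = 0, and its magnitude grows with lam.
norms = [np.linalg.norm(ridge_bias(X, beta, lam)) for lam in (0.0, 1.0, 10.0, 100.0)]
print(norms[0] == 0.0 and norms == sorted(norms))
```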
<h3 id="a-linear-algebra-perspective">A linear algebra perspective</h3>
<p>To see why regularization makes sense from a linear algebra perspective, we can write down our least squares estimate in vectorized form:</p>
<script type="math/tex; mode=display">argmin_{\beta} { (y - X\beta)^T (y - X \beta) }</script>
<p>Next, we can expand this and simplify a little bit:</p>
<script type="math/tex; mode=display">argmin_{\beta} (y^T - \beta^TX^T)(y - X\beta)</script>
<script type="math/tex; mode=display">= argmin_{\beta} -2y^TX\beta + \beta^TX^TX\beta</script>
<p>where we have dropped the terms that are not a factor of <script type="math/tex">\beta</script> since they will zero out when we differentiate.</p>
<p>To minimize, we differentiate with respect to <script type="math/tex">\beta</script>:</p>
<script type="math/tex; mode=display">\frac{\delta L}{\delta \beta} = -2 X^Ty + 2X^TX\beta</script>
<p>Setting the derivative equal to zero gives us the closed form solution of <script type="math/tex">\beta</script> which is the least-squares estimate [2]:</p>
<script type="math/tex; mode=display">\hat{\beta} = (X^TX)^{-1} X^Ty</script>
<p>As we can see, in order to actually compute this quantity the matrix <script type="math/tex">X^T X</script> must be invertible. Since <script type="math/tex">X^T X</script> is always positive semidefinite, it is invertible exactly when it is positive definite, meaning that the scalar quantity <script type="math/tex">z^T X^T X z > 0</script> for any real, non-zero vector <script type="math/tex">z</script>. However, without further assumptions on <script type="math/tex">X</script>, the best we can do is show that <script type="math/tex">X^T X</script> is positive semidefinite.</p>
<p>To show that <script type="math/tex">X^TX</script> is positive semidefinite, we must show that the quantity <script type="math/tex">z^T X^T X z \geq 0</script> for any real, non-zero vector <script type="math/tex">z</script>.</p>
<p>If we expand out the quantity <script type="math/tex">X^T X</script>, we obtain <script type="math/tex">\sum_{i = 1}^{N} x_i x_i^T</script>, and it follows that <script type="math/tex">z^T (\sum_{i = 1}^{N} x_i x_i^T) z = \sum_{i = 1}^{N} (x_i^Tz)^2 \geq 0</script>. This means that in situations where this quantity is exactly <script type="math/tex">0</script> for some non-zero <script type="math/tex">z</script>, the matrix <script type="math/tex">X^T X</script> cannot be inverted and a closed-form least squares solution cannot be computed.</p>
<p>On the other hand, expanding out our ridge estimate, which has an extra regularization term <script type="math/tex">\lambda \sum_{i} \beta_i^2</script>, we obtain the derivative</p>
<script type="math/tex; mode=display">\frac{\delta L}{\delta \beta} = -2 X^Ty + 2X^TX\beta + 2 \lambda \beta</script>
<p>Setting this quantity equal to zero, and rewriting <script type="math/tex">\lambda \beta</script> as <script type="math/tex">\lambda I \beta</script> (using the property of multiplication with the identity matrix), we now obtain</p>
<script type="math/tex; mode=display">(X^TX + \lambda I) \beta = X^Ty</script>
<p>giving us the ridge estimate</p>
<script type="math/tex; mode=display">\hat{\beta}_{ridge} = (X^TX + \lambda I)^{-1} X^Ty</script>
<p>The only difference in this closed-form solution is the addition of the <script type="math/tex">\lambda I</script> term to the quantity that gets inverted, so we are now sure that this quantity is positive definite if <script type="math/tex">\lambda > 0</script>. In other words, even when the matrix <script type="math/tex">X^T X</script> is not invertible, we can still compute a ridge estimate from our data [3].</p>
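A small NumPy example makes this concrete: with a duplicated column, <script type="math/tex">X^T X</script> is singular and least squares has no unique closed-form solution, but adding <script type="math/tex">\lambda I</script> restores invertibility (the numbers here are arbitrary):

```python
import numpy as np

# A design matrix with a duplicated column, so X^T X is singular.
X = np.array([[1.0, 2.0, 2.0],
              [2.0, 1.0, 1.0],
              [3.0, 4.0, 4.0],
              [4.0, 3.0, 3.0]])
y = np.array([1.0, 2.0, 3.0, 4.0])

A = X.T @ X
print(np.linalg.matrix_rank(A))  # rank 2, not 3: X^T X cannot be inverted

# Adding lambda * I makes the matrix positive definite, hence invertible.
lam = 0.1
beta_ridge = np.linalg.solve(A + lam * np.eye(3), X.T @ y)
print(beta_ridge)
```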
<h3 id="regularizers-in-neural-networks">Regularizers in neural networks</h3>
<p>While techniques such as L2 regularization can be used while training a neural network, employing techniques such as dropout, which randomly discards some proportion of the activations at a per-layer level during training, has been shown to be much more successful. There is also a different type of regularizer that takes into account the idea that a neural network should have sparse activations for any particular input. There are several theoretical reasons for why sparsity is important, a topic covered very well by Glorot et al. in a <a href="http://proceedings.mlr.press/v15/glorot11a/glorot11a.pdf">2011 paper</a>.</p>
<p>Since sparsity is important in neural networks, we can introduce a constraint that can guarantee us some degree of sparsity. Specifically, we can constrain the average activation of a particular neuron in a particular hidden layer.</p>
<p>In particular, the average activation of a neuron in a particular layer, weighted by the input into the neuron, can be given by summing over all of the activation-input pairs: <script type="math/tex">\hat{\rho} = \frac{1}{N} \sum_{i = 1}^{N} x_i a_i^2</script>. Next, we can choose a hyperparameter <script type="math/tex">\rho</script> for this particular neuron, which represents the average activation we want it to have - for example, if we wanted this neuron to activate sparsely, we might set <script type="math/tex">\rho = 0.05</script>. In order to ensure that our model learns neurons which sparsely activate, we must incorporate some function of <script type="math/tex">\hat{\rho}</script> and <script type="math/tex">\rho</script> into our cost function.</p>
<p>One way to do this is with the <a href="https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence">KL divergence</a>, which measures how much one probability distribution (in this case, the one given by our current average activation <script type="math/tex">\hat\rho</script>) diverges from another, expected distribution (the one given by <script type="math/tex">\rho</script>). If we minimize the KL divergence for each of our neurons’ activations, then our model will learn sparse activations. The cost function may be:</p>
<script type="math/tex; mode=display">J_{sparse} (W, b) = J(W, b) + \lambda \sum_{i = 1}^{M} KL(\rho_i || \hat{\rho_i})</script>
<p>where <script type="math/tex">J(W, b)</script> is a regular cost function used in neural networks, such as the cross-entropy loss. The hyperparameter <script type="math/tex">\lambda</script> indicates how important sparsity is to us - as <script type="math/tex">\lambda \rightarrow{} \infty</script>, we disregard the actual loss function and only aim to learn a sparse representation, and as <script type="math/tex">\lambda \rightarrow{} 0</script> we disregard the importance of sparse activations and only minimize the original loss function. Additional details on this type of regularization with application to sparse autoencoders are given <a href="http://ufldl.stanford.edu/wiki/index.php/Autoencoders_and_Sparsity">here</a>.</p>
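Treating <script type="math/tex">\rho</script> and <script type="math/tex">\hat\rho</script> as the means of Bernoulli distributions, the per-neuron penalty term can be computed directly. A minimal sketch (the target and measured activations below are made-up values):

```python
import numpy as np

def kl_bernoulli(rho, rho_hat):
    """KL divergence between Bernoulli distributions with means rho and rho_hat."""
    return (rho * np.log(rho / rho_hat)
            + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))

rho = 0.05                            # target average activation
rho_hat = np.array([0.05, 0.2, 0.5])  # hypothetical measured activations

penalty = kl_bernoulli(rho, rho_hat)
# Zero penalty when a neuron matches the target, growing as it deviates.
print(penalty)
```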
<h3 id="recap">Recap</h3>
<p>As we have seen, regularization can be interpreted in several different ways, each of which gives us additional insight into what exactly regularization accomplishes. A few of the different interpretations are:</p>
<p>1) As a Bayesian prior on the parameters which we are trying to learn.</p>
<p>2) As a term added to the loss function of our model which penalizes some function of our parameters, thereby introducing a tradeoff between minimizing the original loss function and ensuring our weights do not deviate too much from what we want them to be.</p>
<p>3) As a constraint on the model which we are trying to learn. This means we can take the original optimization problem and frame it in a constrained fashion, thereby ensuring that the magnitude of our weights never exceed a certain threshold (in the case of <script type="math/tex">L2</script> regularization).</p>
<p>4) As a method of reducing the function search space <script type="math/tex">H</script> to a new function search space <script type="math/tex">H'</script> that is smaller than <script type="math/tex">H</script>. Without regularization, we may search for our optimal function <script type="math/tex">h</script> in a much larger space, and constraining this to a smaller subspace can lead us to select models with better generalization ability.</p>
<p>Overall, regularization is a useful technique that is often employed to reduce the overall variance of a model, thereby improving its generalization capability. Of course, there’s tradeoffs in using regularization, most notably having to tune the hyperparameter <script type="math/tex">\lambda</script> which can be costly in terms of computational time. Thanks for reading!</p>
<h3 id="sources">Sources</h3>
<ol>
<li>
<p><a href="http://math.bu.edu/people/cgineste/classes/ma575/p/w14_1.pdf">Boston University Linear Models Course by Cedric Ginestet</a></p>
</li>
<li>
<p><a href="http://ufldl.stanford.edu/wiki/index.php/Autoencoders_and_Sparsity">Autoencoders and Sparsity, Stanford UFDL</a></p>
</li>
<li>
<p><a href="https://math.stackexchange.com/questions/1582348/simple-example-of-maximum-a-posteriori/1582407">Explanation of MAP Estimation</a></p>
</li>
</ol>
<p>[1] Imposing different prior distributions on the parameter leads to different types of regularization. A normal distribution with zero mean and constant variance leads to <script type="math/tex">L2</script> regularization, while a Laplace prior leads to <script type="math/tex">L1</script> regularization.</p>
<p>[2] Technically, we’ve only shown that the <script type="math/tex">\hat{\beta}</script> we’ve found is a local optimum. We actually want to verify that this is indeed a global minimum, which can be done by showing that the function we are minimizing is convex.</p>
<p>[3] For completeness, it is worth mentioning that there are other solutions if the inverse of the matrix <script type="math/tex">X^T X</script> does not exist. One common workaround is to use the <a href="https://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_pseudoinverse">Moore-Penrose pseudoinverse</a>, which can be computed using the singular value decomposition of the matrix being pseudo-inverted. This is commonly used in implementations of PCA algorithms.</p>
<h1 id="language-models-word2vec-and-efficient-softmax-approximations">Language Models, Word2Vec, and Efficient Softmax Approximations</h1>
<p>2017-07-02</p>
<p><img src="https://raw.githubusercontent.com/rohan-varma/paper-analysis/master/word2vec-papers/models.png" alt="img" /></p>
<h3 id="introduction">Introduction</h3>
<p>The Word2Vec model has become a standard method for representing words as dense vectors. This is typically done as a preprocessing step, after which the learned vectors are fed into a discriminative model (typically an RNN) to perform tasks such as predicting movie review sentiment, doing machine translation, or even generating text, <a href="https://github.com/karpathy/char-rnn">character by character</a>.</p>
<h3 id="previous-language-models">Previous Language Models</h3>
<p>Previously, the bag of words model was commonly used to represent words and sentences as numerical vectors, which could then be fed into a classifier (for example Naive Bayes) to produce output predictions. Given a vocabulary of <script type="math/tex">V</script> words and a document of <script type="math/tex">N</script> words, a <script type="math/tex">V</script>-dimensional vector would be created to represent the document, where index <script type="math/tex">i</script> denotes the number of times the <script type="math/tex">i</script>th word in the vocabulary occurred in the document.</p>
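As a tiny illustration of this representation (the vocabulary and document here are made up; words outside the vocabulary are simply ignored):

```python
from collections import Counter

vocabulary = ["the", "quick", "brown", "fox", "dog", "lazy"]
document = "the quick brown fox jumped over the lazy dog".split()

# V-dimensional count vector: entry i counts occurrences of vocabulary word i.
counts = Counter(document)
bow = [counts[word] for word in vocabulary]
print(bow)  # [2, 1, 1, 1, 1, 1]
```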
<p>This model represented words as atomic units, assuming that all words were independent of each other. It had success in several fields such as document classification, spam detection, and even sentiment analysis, but its independence assumptions were too strong for more powerful and accurate models. A model that aimed to relax some of the strong assumptions of the traditional bag of words model was the n-gram model.</p>
<h3 id="n-gram-models-and-markov-chains">N-gram models and Markov Chains</h3>
<p>Language models seek to predict the probability of observing the <script type="math/tex">t + 1</script>th word <script type="math/tex">w_{t + 1}</script> given the previous <script type="math/tex">t</script> words:</p>
<script type="math/tex; mode=display">p(w_{t + 1} | w_1, w_2, ... w_t)</script>
<p>Using the chain rule of probability, we can compute the probability of observing an entire sentence:</p>
<script type="math/tex; mode=display">p(w_1, w_2, ... w_t) = p(w_1)p(w_2 | w_1)...p(w_t | w_{t -1}, ... w_1)</script>
<p>Computing these probabilities has many applications, for example in speech recognition, spelling correction, and automatic sentence completion. However, estimating these probabilities can be tough. We can use the maximum likelihood estimate:</p>
<script type="math/tex; mode=display">p(x_{t + 1} | x_1, ... x_t) = \frac{count(x_1, x_2, ... x_t, x_{t + 1})}{count(x_1, x_2, ... x_t)}</script>
<p>However, computing this is quite unrealistic - we will generally not observe enough data from a corpus to obtain realistic counts for any sequence of <script type="math/tex">t</script> words for any nontrivial value of <script type="math/tex">t</script>, so we instead invoke the Markov assumption: the probability of observing a word at a given time depends only on the word observed at the previous time step, and is independent of the words observed at all earlier time steps:</p>
<script type="math/tex; mode=display">p(x_{t + 1} | x_1, x_2, ... x_t) = p(x_{t + 1} | x_t)</script>
<p>Therefore, the probability of a sentence can be given by</p>
<script type="math/tex; mode=display">p(w_1, w_2, ... w_t) = p(w_1)\prod_{i = 2}^{t} p(w_i | w_{i - 1})</script>
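A minimal sketch of a bigram model built from maximum likelihood counts on a toy corpus (the corpus and test sentence are made up; real models need smoothing so that unseen bigrams don’t zero out the whole product):

```python
from collections import Counter

corpus = "the cat sat . the cat ran . the dog sat .".split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def bigram_prob(sentence):
    """p(w_1) * prod over i of p(w_i | w_{i-1}), using maximum likelihood counts."""
    words = sentence.split()
    p = unigrams[words[0]] / len(corpus)
    for prev, cur in zip(words, words[1:]):
        p *= bigrams[(prev, cur)] / unigrams[prev]
    return p

# p(the) * p(cat | the) * p(sat | cat) = (3/12) * (2/3) * (1/2) = 1/12
print(bigram_prob("the cat sat"))
```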
<p>The Markov assumption can be extended to condition the probability of the <script type="math/tex">t</script>th word on the previous two, three, four, and so on words. This is where the name of the n-gram model comes in - <script type="math/tex">n</script> is the number of previous timesteps we condition the current timestep on. The unigram and bigram models, respectively, are given below.</p>
<script type="math/tex; mode=display">p(x_{t + 1} | x_{1}, x_{2}, ... x_{t}) = p(x_{t + 1})</script>
<script type="math/tex; mode=display">p(x_{t + 1} | x_{1}, x_{2}, ... x_{t}) = p(x_{t + 1} | x_{t})</script>
<p>There is a lot more to the n-gram model such as linear interpolation and smoothing techniques, which <a href="https://web.stanford.edu/class/cs124/lec/languagemodeling.pdf">these slides</a> explain very well.</p>
<h3 id="the-skip-gram-and-continuous-bag-of-words-models">The Skip-Gram and Continuous Bag of Words Models</h3>
<p>Word vectors, or word embeddings, or distributed representations of words, generally refer to a dense vector representation of a word, as compared to a sparse (i.e. one-hot) traditional representation. There are actually two different models that learn dense representations of words: the Skip-Gram model and the Continuous Bag of Words model. Both of these models learn dense vector representations of words, based on the words that surround them (i.e., their <em>context</em>).</p>
<p>The difference is that the skip-gram model predicts context (surrounding) words given the current word, whereas the continuous bag of words model predicts the current word based on several surrounding words.</p>
<p>This notion of “surrounding” words is best described by considering a center (or current) word and a window of words around it. For example, if we consider the sentence “The quick brown fox jumped over the lazy dog”, and a window size of 2, we’d have the following pairs for the skip-gram model:</p>
<p><img src="http://mccormickml.com/assets/word2vec/training_data.png" alt="img" /></p>
<p>Figure 1: Training Samples <a href="http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/">(Source)</a></p>
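The window construction shown in Figure 1 can be sketched in a few lines (a toy helper, not the reference implementation):

```python
def skipgram_pairs(tokens, window=2):
    """(center, context) pairs for every word within `window` of the center."""
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

sentence = "the quick brown fox jumped over the lazy dog".split()
print(skipgram_pairs(sentence)[:4])
```

Each pair becomes one training example; for CBOW you would instead group the context words per center word and reverse the direction of prediction.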
<p>In contrast, for the CBOW model, we’ll input the context words within the window (such as “the”, “brown”, “fox”) and aim to predict the target word “quick” (simply reversing the input to prediction pipeline from the skip-gram model).</p>
<p>The following is a visualization of the skip-gram and CBOW models:</p>
<p><img src="https://raw.githubusercontent.com/rohan-varma/paper-analysis/master/word2vec-papers/models.png" alt="img" /></p>
<p>Figure 2: CBOW vs Skip-gram models. <a href="https://arxiv.org/pdf/1301.3781.pdf">(Source)</a></p>
<p>In this <a href="https://arxiv.org/pdf/1301.3781.pdf">paper</a>, the overall recommendation was to use the skip-gram model, since it had been shown to perform better on analogy-related tasks than the CBOW model. Overall, if you understand one model, it is pretty easy to understand the other: just reverse the inputs and predictions. Since both papers focused on the skip-gram model, this post will do the same.</p>
<h3 id="learning-with-the-skip-gram-model">Learning with the Skip-Gram Model</h3>
<p>Our goal is to find word representations that are useful for predicting the surrounding words given a current word.
In particular, we wish to maximize the average log probability across our entire corpus:</p>
<script type="math/tex; mode=display">argmax_{\theta} \frac{1}{T} \sum_{t=1}^{T} \sum_{-c \leq j \leq c, j \neq 0} log \, p(w_{t + j} | w_{t} ; \theta)</script>
<p>This equation essentially says that there is some probability <script type="math/tex">p</script> of observing a particular word that’s within a window of size <script type="math/tex">c</script> of the current word <script type="math/tex">w_t</script>. This probability is conditioned on the current word (<script type="math/tex">w_t</script>) and some setting of parameters <script type="math/tex">\theta</script> (determined by our model). We wish to set these parameters <script type="math/tex">\theta</script> so that this probability is maximized across our entire corpus.</p>
<h3 id="basic-parametrization-softmax-model">Basic Parametrization: Softmax Model</h3>
<p>The basic skip-gram model defines the probability <script type="math/tex">p</script> through the softmax function. If we consider <script type="math/tex">w_i</script> to be a one-hot encoded vector with dimension <script type="math/tex">N</script> and <script type="math/tex">\theta</script> to be an <script type="math/tex">N \times K</script> embedding matrix (here, we have <script type="math/tex">N</script> words in our vocabulary and our learned embeddings have dimension <script type="math/tex">K</script>), then we can define</p>
<script type="math/tex; mode=display">p(w_{i} \mid w_{t} ; \theta) = \frac{\exp(\theta w_i)}{\sum_{j=1}^{N} \exp(\theta w_j)}</script>
<p>It is worth noting that after learning, the matrix <script type="math/tex">\theta</script> can be thought of as an embedding lookup matrix. If you have a word that is represented with the <script type="math/tex">k</script>th index of a vector being hot, then the learned embedding for that word will be the <script type="math/tex">k</script>th row of <script type="math/tex">\theta</script>. This parametrization has a major disadvantage that limits its usefulness for very large corpora. Specifically, to compute a single forward pass of our model, we must sum across the entire vocabulary in order to evaluate the softmax function. This is prohibitively expensive on large datasets, so we look to approximations of this model for the sake of computational efficiency.</p>
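<p>The lookup-table interpretation is easy to check numerically. A small sketch (shapes assumed: <script type="math/tex">\theta</script> stored as <script type="math/tex">N \times K</script>, so a one-hot row vector selects a row):</p>

```python
import numpy as np

# Illustrative sketch: multiplying a one-hot row vector by an N x K
# embedding matrix just selects one row, so after training the matrix
# theta acts as an embedding lookup table.
N, K = 5, 3                        # vocabulary size, embedding dimension
rng = np.random.default_rng(0)
theta = rng.normal(size=(N, K))    # stand-in for a learned embedding matrix

k = 2
w = np.zeros(N)
w[k] = 1.0                         # one-hot vector for word k
embedding = w @ theta              # equivalent to theta[k]
```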
<h3 id="hierarchical-softmax">Hierarchical Softmax</h3>
<p>As discussed, the traditional softmax approach can become prohibitively expensive on large corpora, and the hierarchical softmax is a common alternative approach that approximates the softmax computation, but has logarithmic time complexity in the number of words in the vocabulary, as opposed to linear time complexity.</p>
<p>This is done by representing the softmax layer as a binary tree where the words are leaf nodes of the tree, and the probabilities are computed by a walk from the root of the binary tree to the particular leaf. An example of the binary tree of the hierarchical layer is given below:</p>
<p><img src="https://raw.githubusercontent.com/rohan-varma/paper-analysis/master/word2vec-papers/hierarchical.png" alt="img" /></p>
<p>Figure 3: Hierarchical Softmax Tree. <a href="https://www.youtube.com/watch?v=B95LTf2rVWM">(Source)</a></p>
<p>At each node in the tree starting from the root, we would like to predict the probability of branching right given the observed context. Therefore, in the above tree, if we would like to compute the probability of observing the word “cat” given a certain context, we would define it as the product of going left at node 1, then going right at node 2, and then again going right at node 5 (conditioned on the context).</p>
<p>The actual computation to determine the probability of a word is done by taking the output of the previous layer, applying a set of node-specific weights and biases to it, and running that result through a non-linearity (often sigmoidal). The following image is an illustration of the process of computing the probability of the word “cat” given an observed context:</p>
<p><img src="https://raw.githubusercontent.com/rohan-varma/paper-analysis/master/word2vec-papers/hierarchical2.png" alt="img" /></p>
<p>Figure 4: Hierarchical Softmax Computation. <a href="https://www.youtube.com/watch?v=B95LTf2rVWM">(Source)</a></p>
<p>Here, <script type="math/tex">V</script> is our matrix of weights connecting the outputs of our previous layer (denoted by <script type="math/tex">h(x)</script>) to our hierarchical layer, and the probability of branching right at node <script type="math/tex">n</script> is given by <script type="math/tex">\sigma(h(x)W_n + b_n)</script>, where <script type="math/tex">W_n</script> and <script type="math/tex">b_n</script> are the weights (a row of <script type="math/tex">V</script>) and bias for that node. The probability of observing a particular word is then just the product of the branch probabilities along the path that leads to it.</p>
<p>In the above image, we also notice that in a vocabulary of 8 words, we only needed 3 computations to approximate the softmax computation as opposed to 8. More generally, hierarchical softmax reduces our computation time to <script type="math/tex">O(\log_2 n)</script> where <script type="math/tex">n</script> is our vocabulary size, compared to linear time for the traditional softmax approach. However, this speedup is only realized during training, when we don’t need to know the full probability distribution. In settings where we wish to emit the most likely word given a context (for example, in sentence generation), we’d still need to compute the probability of all of the words given the context, resulting in no speedup (although some methods, such as pruning when the probability of a certain word quickly tends to zero, can certainly increase efficiency).</p>
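<p>The path-product computation can be sketched as follows (names and shapes are assumptions, not the paper’s implementation): each internal node on a word’s path contributes either the sigmoid branch probability or its complement, depending on which way the path turns.</p>

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative sketch of hierarchical softmax: the probability of a leaf
# word is the product, over the internal nodes on its path, of the
# probability of taking the chosen branch at each node.
def word_probability(h, path):
    """h: hidden activation vector; path: list of (w_n, b_n, go_right)."""
    p = 1.0
    for w_n, b_n, go_right in path:
        p_right = sigmoid(h @ w_n + b_n)        # probability of branching right
        p *= p_right if go_right else (1.0 - p_right)
    return p
```

<p>As a sanity check, in a two-word tree with a single root node, the probabilities of the two leaves sum to one, which is what makes this a valid (approximate) softmax.</p>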
<h3 id="negative-sampling-and-noise-contrastive-estimation">Negative Sampling and Noise Contrastive Estimation</h3>
<p>Multinomial softmax regression is expensive when we are computing softmax across many different classes (each word essentially denotes a separate class). The core idea of Noise Contrastive Estimation (NCE) is to convert a multiclass classification problem into one of binary classification via logistic regression, while still retaining the quality of word vectors learned. With NCE, word vectors are no longer learned by attempting to predict the context words from the target word. Instead we learn word vectors by learning how to distinguish true pairs of (target, context) words from corrupted (target, random word from vocabulary) pairs. The idea is that if a model can distinguish between actual pairs of target and context words from random noise, then good word vectors will be learned.</p>
<p>Specifically, for each positive sample (i.e., a true target/context pair) we present the model with <script type="math/tex">k</script> negative samples drawn from a noise distribution. For small to average-sized training datasets, a value for <script type="math/tex">k</script> between 5 and 20 was recommended, while for very large datasets a smaller value of <script type="math/tex">k</script> between 2 and 5 suffices. Our model only has a single output node, which predicts whether the pair was just random noise or actually a valid target/context pair. The noise distribution itself is a free parameter, but the paper found that the unigram distribution raised to the power <script type="math/tex">3/4</script> worked better than other distributions, such as the unigram and uniform distributions.</p>
<p>The main difference between NCE and negative sampling is the choice of distribution: the paper used a distribution (discussed above) that samples less frequently occurring words more often. Moreover, NCE approximately maximizes the log probability of the softmax (so it is a good approximation of softmax regression); this guarantee does not hold for negative sampling, although negative sampling still learns quality word vectors.</p>
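<p>Drawing negatives from the smoothed unigram distribution can be sketched as below (the word counts are hypothetical; the sampling routine is an illustration, not the paper’s implementation). Raising counts to the <script type="math/tex">3/4</script> power flattens the distribution, so rare words are sampled relatively more often than under the raw unigram distribution.</p>

```python
import numpy as np

# Illustrative sketch: build the unigram^(3/4) noise distribution and
# draw k negative samples from it.
def noise_distribution(counts):
    p = np.asarray(counts, dtype=float) ** 0.75   # smooth the unigram counts
    return p / p.sum()                            # normalize to probabilities

def sample_negatives(counts, k, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    return rng.choice(len(counts), size=k, p=noise_distribution(counts))

counts = [100, 10, 1]              # hypothetical word frequencies
probs = noise_distribution(counts)
```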
<h3 id="practical-considerations">Practical Considerations</h3>
<p><strong>Implementing Softmax</strong>: If you’re implementing your own softmax function, it’s important to consider overflow issues. Specifically, the computation <script type="math/tex">\sum_i e^{z_i}</script> can easily overflow, leading to <code class="highlighter-rouge">NaN</code> values while training. To resolve this issue, we can instead compute the equivalent <script type="math/tex">\frac{e^{z_i + k}}{\sum_i e^{z_i + k}}</script> and set <script type="math/tex">k = -\max_i z_i</script> so that the largest exponent is zero, avoiding overflow issues.</p>
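<p>A stable implementation of this trick is only a few lines:</p>

```python
import numpy as np

# Numerically stable softmax: shifting by max(z) leaves the result
# mathematically unchanged but keeps the largest exponent at zero,
# so exp() never overflows.
def softmax(z):
    z = np.asarray(z, dtype=float)
    shifted = z - z.max()          # equivalent to adding k = -max(z)
    e = np.exp(shifted)
    return e / e.sum()

result = softmax([1000.0, 1000.0])   # naive exp(1000) would overflow to inf
```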
<p><strong>Subsampling of frequent words</strong>: We don’t get much information from very frequent words such as “the”, “it”, and the like. There will be many more pairs of (the, French) as opposed to (France, French) but we’re more interested in the latter pair. Therefore, it would be useful to subsample some of the more frequent words. We would also like to do this proportionally: very common words are sampled out with high probability, and uncommon words are not sampled out.</p>
<p>In order to do this, the paper defines the probability of discarding a particular word as <script type="math/tex">p(w_i) = 1 - \sqrt{\frac{t}{freq(w_i)}}</script> where <script type="math/tex">t</script> is a chosen threshold, taken in the paper to be <script type="math/tex">10^{-5}</script>. This discarding function will cause words that appear with a frequency greater than <script type="math/tex">t</script> to be sampled out with a high probability, while words that appear with a frequency of less than or equal to <script type="math/tex">t</script> will not be sampled out. For example, if <script type="math/tex">t = 10^{-5}</script> and a particular word covers <script type="math/tex">0.1\%</script> of the corpus, then each instance of that word will be discarded from the training corpus with probability <script type="math/tex">0.9</script>.</p>
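<p>A short sketch of this rule, reproducing the worked example above (clamping at zero for words at or below the threshold):</p>

```python
import numpy as np

# Illustrative sketch of the subsampling rule with threshold t = 1e-5:
# a word covering 0.1% of the corpus (freq = 0.001) is discarded with
# probability 1 - sqrt(1e-5 / 1e-3) = 0.9.
def discard_probability(freq, t=1e-5):
    return max(0.0, 1.0 - np.sqrt(t / freq))
```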
<h3 id="conclusion">Conclusion</h3>
<p>We have discussed language models including the bag of words model, the n-gram model, and the word2vec model along with changes to the softmax layer in order to more efficiently compute word embeddings. The paper presented empirical results that indicated that negative sampling outperforms hierarchical softmax and (slightly) outperforms NCE on analogical reasoning tasks. Overall, word2vec is one of the most commonly used models for learning dense word embeddings to represent words, and these vectors have several interesting properties (such as additive compositionality). Once these word vectors are learned, they can be a more powerful representation than the typical one-hot encodings when used as inputs into RNNs/LSTMs for applications such as machine translation or sentiment analysis. Thanks for reading!</p>
<h3 id="sources">Sources</h3>
<ul>
<li><a href="https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf">Distributed Representations of Words and Phrases</a> - the main paper discussed.</li>
<li><a href="https://www.youtube.com/watch?v=B95LTf2rVWM">Hierarchical Output Layer Video by Hugo Larochelle</a> - an excellent video going into great detail about hierarchical softmax.</li>
<li><a href="https://arxiv.org/pdf/1402.3722v1.pdf">Word2Vec explained</a> - a meta-paper explaining the word2vec paper</li>
<li><a href="http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/">Chris McCormick’s Word2Vec Tutorial</a></li>
<li><a href="https://www.quora.com/Word2vec-How-can-hierarchical-soft-max-training-method-of-CBOW-guarantee-its-self-consistence">Stephan Gouws’s Quora answer on Hierarchical Softmax</a> - an insightful answer about the hierarchical output layer</li>
<li><a href="http://sebastianruder.com/word-embeddings-1/">Word Embeddings Post by Sebastian Ruder</a> - an informative post covering word embeddings and language modelling.</li>
<li><a href="https://arxiv.org/pdf/1301.3781.pdf">Efficient estimation of word representations</a> another key word2vec paper discussing the differences (both from an architecture perspective and empirical results) of the bag of words, skip-gram, and word2vec models.</li>
</ul>
<h2 id="creating-neural-networks-in-tensorflow">Creating Neural Networks in Tensorflow (2017-05-16)</h2>
<p>This is a write-up and code tutorial that I wrote for an AI workshop given at UCLA, at which I gave a talk on neural networks and implementing them in Tensorflow. It’s part of a series on machine learning with Tensorflow, and the tutorials for the rest of them are available <a href="https://github.com/uclaacmai/tf-workshop-series">here</a>.</p>
<h3 id="recap-the-learning-problem">Recap: The Learning Problem</h3>
<p>We have a large dataset of <script type="math/tex">(x, y)</script> pairs where <script type="math/tex">x</script> denotes a vector of features and <script type="math/tex">y</script> denotes the label for that feature vector. We want to learn a function <script type="math/tex">h(x)</script> that maps features to labels, with good generalization accuracy. We do this by minimizing a loss function computed on our dataset: <script type="math/tex">\sum_{i=1}^{N} L(y_i, h(x_i))</script>. There are many loss functions we can choose. We have gone over the cross-entropy loss and variants of the squared error loss functions in previous workshops, and we will once again consider those today.</p>
<h3 id="review-a-single-neuron-aka-the-perceptron">Review: A Single “Neuron”, aka the Perceptron</h3>
<p><img src="https://raw.githubusercontent.com/rohan-varma/rohan-blog/gh-pages/images/perceptron.png" alt="perceptron" /></p>
<p>A single perceptron first calculates a <strong>weighted sum</strong> of our inputs. This means that we multiply each of our features <script type="math/tex">(x_1, x_2, ... x_n) \in x</script> with an associated weight <script type="math/tex">(w_1, w_2, ... w_n)</script>. We then take the sign of this linear combination, which tells us whether to classify this instance as a positive or negative example.</p>
<script type="math/tex; mode=display">h(x) = sign(w^Tx + b)</script>
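<p>As a minimal sketch (the weights below are made up for illustration), the perceptron’s prediction is just the sign of the weighted sum:</p>

```python
import numpy as np

# Minimal sketch of a perceptron prediction: the sign of w^T x + b.
def perceptron_predict(x, w, b):
    return 1 if np.dot(w, x) + b >= 0 else -1

w = np.array([2.0, -1.0])   # hypothetical learned weights
b = 0.5                     # hypothetical learned bias
label = perceptron_predict(np.array([1.0, 1.0]), w, b)   # 2 - 1 + 0.5 = 1.5 -> +1
```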
<p>We then moved on to logistic regression, where we changed our sign function to instead be a sigmoid (<script type="math/tex">\sigma</script>) function. As a reminder, here’s the sigmoid function:</p>
<p><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/8/88/Logistic-curve.svg/600px-Logistic-curve.svg.png" alt="sigmoid" /></p>
<p>Therefore, the function we compute for logistic regression is <script type="math/tex">h(x) = \sigma (w^Tx + b)</script>.</p>
<p>The sigmoid function is commonly referred to as an “activation” function. When we say that a “neuron computes an activation function”, it means that a standard linear combination is calculated (<script type="math/tex">w^Tx + b</script>) and then we apply a <em>non linear</em> function to it, such as the sigmoid function.</p>
<p>Here are a few other common activation functions:</p>
<p><img src="http://www.dplot.com/functions/tanh.png" alt="tanh" />
<img src="https://i.stack.imgur.com/8CGlM.png" alt="relu" /></p>
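<p>For reference, the three activation functions pictured above can be written directly with NumPy:</p>

```python
import numpy as np

# The three common activation functions shown above.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # squashes to (0, 1)

def tanh(z):
    return np.tanh(z)                 # squashes to (-1, 1)

def relu(z):
    return np.maximum(0.0, z)         # zero for negative inputs, identity otherwise
```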
<h3 id="review-from-binary-to-multi-class-classification">Review: From binary to multi-class classification</h3>
<p>The most important change in moving from a binary (negative/positive) classification model to one that can classify training instances into many different classes (say, 10, for MNIST) is that our vector of weights <script type="math/tex">w</script> changes into a matrix <script type="math/tex">W</script>.</p>
<p>Each row of weights we learn represents the parameters for a certain class:</p>
<p><img src="https://raw.githubusercontent.com/rohan-varma/rohan-blog/gh-pages/images/imagemap.jpg" alt="weights" /></p>
<p>We also want to take our output and normalize the results so that they all sum to one, so that we can interpret them as probabilities. This is commonly done using the <em>softmax</em> function, which takes in a vector and returns another vector whose elements sum to 1, with each element proportional in scale to what it was in the original vector. In binary classification we used the sigmoid function to compute probabilities. Now, since we have a vector, we use the softmax function.</p>
<p>Here is our current model of learning, then:</p>
<p><script type="math/tex">h(x) = softmax(Wx + b)</script>.</p>
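<p>This model can be sketched in a few lines of NumPy (random weights and MNIST-like shapes are assumed purely for illustration):</p>

```python
import numpy as np

# Illustrative sketch of the multi-class linear model h(x) = softmax(Wx + b),
# with 784 input features and 10 classes (MNIST-like shapes assumed).
def softmax(z):
    e = np.exp(z - z.max())   # stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(10, 784))   # one row of weights per class
b = np.zeros(10)
x = rng.normal(size=784)                      # a stand-in input vector
probs = softmax(W @ x + b)                    # class probabilities, sum to 1
```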
<h3 id="building-up-the-neural-network">Building up the neural network</h3>
<p>Now that we’ve figured out how to linearly model multi-class classification, we can create a basic neural network. Consider what happens when we combine the idea of artificial neurons with our softmax classifier. Instead of computing a linear function <script type="math/tex">Wx + b</script> and immediately passing the output to a softmax function, we have an intermediate step: pass the output of our linear combination to a vector of artificial neurons, which each compute a nonlinear function.</p>
<p>The output of this “layer” of neurons can be multiplied with a matrix of weights again, and we can apply our softmax function to this result to produce our predictions.</p>
<p><strong>Original function</strong>: <script type="math/tex">h(x) = softmax(Wx + b)</script></p>
<p><strong>Neural Network function</strong>: <script type="math/tex">h(x) = softmax(W_2(nonlin(W_1x + b_1)) + b_2)</script></p>
<p>The key differences are that we have more biases and weights, as well as a larger composition of functions. This function is harder to optimize, and introduces a few interesting ideas about learning the weights with an algorithm known as backpropagation.</p>
<p>This “intermediate step” is actually known as a hidden layer, and we have complete control over it, meaning that among other things, we can vary the number of parameters or connections between weights and neurons to obtain an optimal network. It’s also important to notice that we can stack an arbitrary amount of these hidden layers between the input and output of our network, and we can tune these layers individually. This lets us make our network as deep as we want it. For example, here’s what a neural network with two hidden layers would look like:</p>
<p><img src="https://raw.githubusercontent.com/rohan-varma/rohan-blog/gh-pages/images/neuralnet.png" alt="neuralnet" /></p>
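<p>The composed network function above can be sketched as a plain NumPy forward pass (the layer sizes and random weights are arbitrary choices for illustration, not a trained model):</p>

```python
import numpy as np

# Illustrative sketch of the composed function
# h(x) = softmax(W2 @ relu(W1 @ x + b1) + b2), with arbitrary shapes.
def relu(z):
    return np.maximum(0.0, z)

def softmax(z):
    e = np.exp(z - z.max())   # stable softmax
    return e / e.sum()

def forward(x, W1, b1, W2, b2):
    hidden = relu(W1 @ x + b1)          # hidden-layer activations
    return softmax(W2 @ hidden + b2)    # output class probabilities

rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.1, size=(50, 784)), np.zeros(50)
W2, b2 = rng.normal(scale=0.1, size=(10, 50)), np.zeros(10)
y = forward(rng.normal(size=784), W1, b1, W2, b2)
```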
<p>We’re now ready to start implementing a basic neural network in Tensorflow. First, let’s start off with the standard <code class="highlighter-rouge">import</code> statements, and visualize a few examples from our training dataset.</p>
<div class="language-python highlighter-rouge"><pre class="highlight"><code><span class="kn">import</span> <span class="nn">tensorflow</span> <span class="kn">as</span> <span class="nn">tf</span>
<span class="kn">import</span> <span class="nn">numpy</span> <span class="kn">as</span> <span class="nn">np</span>
<span class="kn">import</span> <span class="nn">matplotlib.pyplot</span> <span class="kn">as</span> <span class="nn">plt</span>
<span class="kn">from</span> <span class="nn">tensorflow.examples.tutorials.mnist</span> <span class="kn">import</span> <span class="n">input_data</span>
<span class="n">mnist</span> <span class="o">=</span> <span class="n">input_data</span><span class="o">.</span><span class="n">read_data_sets</span><span class="p">(</span><span class="s">'MNIST_data'</span><span class="p">,</span> <span class="n">one_hot</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span> <span class="c"># reads in the MNIST dataset</span>
<span class="c"># a function that shows examples from the dataset. If num is specified (between 0 and 9), then only pictures with those labels will beused</span>
<span class="k">def</span> <span class="nf">show_pics</span><span class="p">(</span><span class="n">mnist</span><span class="p">,</span> <span class="n">num</span> <span class="o">=</span> <span class="bp">None</span><span class="p">):</span>
<span class="n">to_show</span> <span class="o">=</span> <span class="nb">list</span><span class="p">(</span><span class="nb">range</span><span class="p">(</span><span class="mi">10</span><span class="p">))</span> <span class="k">if</span> <span class="ow">not</span> <span class="n">num</span> <span class="k">else</span> <span class="p">[</span><span class="n">num</span><span class="p">]</span><span class="o">*</span><span class="mi">10</span> <span class="c"># figure out which numbers we should show</span>
<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">100</span><span class="p">):</span>
<span class="n">batch</span> <span class="o">=</span> <span class="n">mnist</span><span class="o">.</span><span class="n">train</span><span class="o">.</span><span class="n">next_batch</span><span class="p">(</span><span class="mi">1</span><span class="p">)</span> <span class="c"># gets some examples</span>
<span class="n">pic</span><span class="p">,</span> <span class="n">label</span> <span class="o">=</span> <span class="n">batch</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">batch</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span>
<span class="k">if</span> <span class="n">np</span><span class="o">.</span><span class="n">argmax</span><span class="p">(</span><span class="n">label</span><span class="p">)</span> <span class="ow">in</span> <span class="n">to_show</span><span class="p">:</span>
<span class="c"># use matplotlib to plot it</span>
<span class="n">pic</span> <span class="o">=</span> <span class="n">pic</span><span class="o">.</span><span class="n">reshape</span><span class="p">((</span><span class="mi">28</span><span class="p">,</span><span class="mi">28</span><span class="p">))</span>
<span class="n">plt</span><span class="o">.</span><span class="n">title</span><span class="p">(</span><span class="s">"Label: {}"</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">argmax</span><span class="p">(</span><span class="n">label</span><span class="p">)))</span>
<span class="n">plt</span><span class="o">.</span><span class="n">imshow</span><span class="p">(</span><span class="n">pic</span><span class="p">,</span> <span class="n">cmap</span> <span class="o">=</span> <span class="s">'binary'</span><span class="p">)</span>
<span class="n">plt</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
<span class="n">to_show</span><span class="o">.</span><span class="n">remove</span><span class="p">(</span><span class="n">np</span><span class="o">.</span><span class="n">argmax</span><span class="p">(</span><span class="n">label</span><span class="p">))</span>
<span class="c">#show_pics(mnist)</span>
<span class="n">show_pics</span><span class="p">(</span><span class="n">mnist</span><span class="p">,</span> <span class="mi">2</span><span class="p">)</span>
</code></pre>
</div>
<div class="highlighter-rouge"><pre class="highlight"><code>Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
</code></pre>
</div>
<p><img src="https://raw.githubusercontent.com/uclaacmai/tf-workshop-series/master/week6-neural-nets/Neural%20Network%20Tensorflow_files/Neural%20Network%20Tensorflow_1_1.png" alt="png" /></p>
<p><img src="https://raw.githubusercontent.com/uclaacmai/tf-workshop-series/master/week6-neural-nets/Neural%20Network%20Tensorflow_files/Neural%20Network%20Tensorflow_1_2.png" alt="png" /></p>
<p><img src="https://raw.githubusercontent.com/uclaacmai/tf-workshop-series/master/week6-neural-nets/Neural%20Network%20Tensorflow_files/Neural%20Network%20Tensorflow_1_3.png" alt="png" /></p>
<p><img src="https://raw.githubusercontent.com/uclaacmai/tf-workshop-series/master/week6-neural-nets/Neural%20Network%20Tensorflow_files/Neural%20Network%20Tensorflow_1_4.png" alt="png" /></p>
<p><img src="https://raw.githubusercontent.com/uclaacmai/tf-workshop-series/master/week6-neural-nets/Neural%20Network%20Tensorflow_files/Neural%20Network%20Tensorflow_1_5.png" alt="png" /></p>
<p><img src="https://raw.githubusercontent.com/uclaacmai/tf-workshop-series/master/week6-neural-nets/Neural%20Network%20Tensorflow_files/Neural%20Network%20Tensorflow_1_6.png" alt="png" /></p>
<p>As usual, we would like to define several variables to represent our weight matrices and our biases. We will also need to create placeholders to hold our actual data. Anytime we want to create variables or placeholders, we must have a sense of the <strong>shape</strong> of our data so that Tensorflow has no issues in carrying out the numerical computations.</p>
<p>In addition, neural networks rely on various hyperparameters, some of which will be defined below. Two important ones are the <strong>learning rate</strong> and the number of neurons in our hidden layer. Depending on these settings, the accuracy of the network may greatly change.</p>
<div class="language-python highlighter-rouge"><pre class="highlight"><code><span class="c"># some functions for quick variable creation</span>
<span class="k">def</span> <span class="nf">weight_variable</span><span class="p">(</span><span class="n">shape</span><span class="p">):</span>
<span class="k">return</span> <span class="n">tf</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">truncated_normal</span><span class="p">(</span><span class="n">shape</span><span class="p">,</span> <span class="n">stddev</span> <span class="o">=</span> <span class="mf">0.1</span><span class="p">))</span>
<span class="k">def</span> <span class="nf">bias_variable</span><span class="p">(</span><span class="n">shape</span><span class="p">):</span>
<span class="k">return</span> <span class="n">tf</span><span class="o">.</span><span class="n">Variable</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">constant</span><span class="p">(</span><span class="mf">0.1</span><span class="p">,</span> <span class="n">shape</span> <span class="o">=</span> <span class="n">shape</span><span class="p">))</span>
<span class="c"># hyperparameters we will use</span>
<span class="n">learning_rate</span> <span class="o">=</span> <span class="mf">0.1</span>
<span class="n">hidden_layer_neurons</span> <span class="o">=</span> <span class="mi">50</span>
<span class="n">num_iterations</span> <span class="o">=</span> <span class="mi">5000</span>
<span class="c"># placeholder variables</span>
<span class="n">x</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">placeholder</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">,</span> <span class="n">shape</span> <span class="o">=</span> <span class="p">[</span><span class="bp">None</span><span class="p">,</span> <span class="mi">784</span><span class="p">])</span> <span class="c"># none = the size of that dimension doesn't matter. why is that okay here? </span>
<span class="n">y_</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">placeholder</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">,</span> <span class="n">shape</span> <span class="o">=</span> <span class="p">[</span><span class="bp">None</span><span class="p">,</span> <span class="mi">10</span><span class="p">])</span>
</code></pre>
</div>
<p>We will now actually create all of the variables we need, and define our neural network as a series of function computations.</p>
<p>In our first layer, we take our inputs that have dimension <script type="math/tex">n * 784</script>, and multiply them with weights that have dimension <script type="math/tex">784 * k</script>, where <script type="math/tex">k</script> is the number of neurons in the hidden layer. We then add the biases to this result, which also have a dimension of <script type="math/tex">k</script>.</p>
<p>Finally, we apply a nonlinearity to our result. There are, as discussed, several choices, three of which are tanh, sigmoid, and rectifier. We have chosen to use the rectifier (also known as relu, standing for Rectified Linear Unit), since it has been shown in both research and practice that they tend to outperform and learn faster than other activation functions.</p>
<p>Therefore, the “activations” of our hidden layer are given by <script type="math/tex">h_1 = relu(Wx + b)</script>.</p>
<p>We follow a similar procedure for our output layer. Our activations have a shape <script type="math/tex">n * k</script>, where <script type="math/tex">n</script> is the number of training examples we input into our network and <script type="math/tex">k</script> is the number of neurons in our hidden layer.</p>
<p>We want our final outputs to have dimension <script type="math/tex">n * 10</script> (in the case of MNIST) since we have 10 classes. Therefore, it makes sense for our second matrix of weights to have dimension <script type="math/tex">k * 10</script> and the bias to have dimension <script type="math/tex">10</script>.</p>
<p>After taking the linear combination <script type="math/tex">W_2(h_1) + b</script>, we would then apply the softmax function. However, applying the softmax function and then writing out the cross-entropy loss ourselves could result in numerical instability, so we will instead use a library call that computes both the softmax outputs and the cross entropy loss.</p>
<div class="language-python highlighter-rouge"><pre class="highlight"><code><span class="c"># create our weights and biases for our first hidden layer</span>
<span class="n">W_1</span><span class="p">,</span> <span class="n">b_1</span> <span class="o">=</span> <span class="n">weight_variable</span><span class="p">([</span><span class="mi">784</span><span class="p">,</span> <span class="n">hidden_layer_neurons</span><span class="p">]),</span> <span class="n">bias_variable</span><span class="p">([</span><span class="n">hidden_layer_neurons</span><span class="p">])</span>
<span class="c"># compute activations of the hidden layer</span>
<span class="n">h_1</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">relu</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">matmul</span><span class="p">(</span><span class="n">x</span><span class="p">,</span> <span class="n">W_1</span><span class="p">)</span> <span class="o">+</span> <span class="n">b_1</span><span class="p">)</span>
<span class="c"># add a second hidden layer with 30 neurons</span>
<span class="n">W_2_hidden</span> <span class="o">=</span> <span class="n">weight_variable</span><span class="p">([</span><span class="n">hidden_layer_neurons</span><span class="p">,</span> <span class="mi">30</span><span class="p">])</span>
<span class="n">b_2_hidden</span> <span class="o">=</span> <span class="n">bias_variable</span><span class="p">([</span><span class="mi">30</span><span class="p">])</span>
<span class="n">h_2</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">relu</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">matmul</span><span class="p">(</span><span class="n">h_1</span><span class="p">,</span> <span class="n">W_2_hidden</span><span class="p">)</span> <span class="o">+</span> <span class="n">b_2_hidden</span><span class="p">)</span>
<span class="c"># create our weights and biases for our output layer</span>
<span class="n">W_2</span><span class="p">,</span> <span class="n">b_2</span> <span class="o">=</span> <span class="n">weight_variable</span><span class="p">([</span><span class="mi">30</span><span class="p">,</span> <span class="mi">10</span><span class="p">]),</span> <span class="n">bias_variable</span><span class="p">([</span><span class="mi">10</span><span class="p">])</span>
<span class="c"># compute the logits of the output layer</span>
<span class="n">y</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">matmul</span><span class="p">(</span><span class="n">h_2</span><span class="p">,</span><span class="n">W_2</span><span class="p">)</span> <span class="o">+</span> <span class="n">b_2</span>
</code></pre>
</div>
<p>The cross entropy loss function is a commonly used loss function. For a single prediction/label pair, it is given by <script type="math/tex">C(h(x), y) = -\sum_i y_i \log(h(x)_i)</script>.</p>
<p>Here, <script type="math/tex">y</script> is a specific one-hot encoded label vector, meaning that it is a column vector that has a 1 at the index corresponding to its label, and is zero everywhere else. <script type="math/tex">h(x)</script> is the output of our prediction function whose elements sum to 1. As an example, we may have:</p>
<script type="math/tex; mode=display">y = \begin{bmatrix}
1 \\
0 \\
0
\end{bmatrix}, h(x) = \begin{bmatrix}
0.2 \\
0.7 \\
0.1
\end{bmatrix} \longrightarrow{} C(y, h(x)) = -\sum_{i=1}^{3}y_i\log(h(x)_i) = -\log(0.2) \approx 1.61</script>
<p>The contribution of this pair to the loss over the entire training data was about 1.61. To contrast, we can swap the first two probabilities in our softmax vector. We then end up with a lower loss:</p>
<script type="math/tex; mode=display">y = \begin{bmatrix}
1 \\
0 \\
0
\end{bmatrix}, h(x) = \begin{bmatrix}
0.7 \\
0.2 \\
0.1
\end{bmatrix} \longrightarrow{} C(y, h(x)) = -\sum_{i=1}^{3}y_i\log(h(x)_i) = -\log(0.7) \approx 0.36</script>
<p>So our cross-entropy loss makes intuitive sense: it is lower when our softmax vector has a high probability at the index of the true label, and it is higher when our probabilities indicate a wrong or uncertain choice.</p>
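<p>As a quick check on the arithmetic above, the two losses can be reproduced in a few lines of plain Python (a minimal sketch using the natural logarithm; <code class="highlighter-rouge">cross_entropy</code> is a hypothetical helper, not part of the tutorial code):</p>

```python
import math

def cross_entropy(y, h):
    # y: one-hot label vector; h: predicted probability vector (sums to 1).
    # Only the term at the true label's index contributes, since y_i = 0 elsewhere.
    return -sum(y_i * math.log(h_i) for y_i, h_i in zip(y, h))

wrong = cross_entropy([1, 0, 0], [0.2, 0.7, 0.1])   # wrong class favored
right = cross_entropy([1, 0, 0], [0.7, 0.2, 0.1])   # true class favored
print(round(wrong, 2), round(right, 2))             # 1.61 0.36
```

<p>As expected, the loss is lower when the softmax vector places high probability on the true label.</p>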
<p><strong>Sanity check: why do we need the negative sign outside the sum?</strong></p>
<div class="language-python highlighter-rouge"><pre class="highlight"><code><span class="c"># define our loss function as the cross entropy loss</span>
<span class="n">cross_entropy_loss</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">reduce_mean</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">nn</span><span class="o">.</span><span class="n">softmax_cross_entropy_with_logits</span><span class="p">(</span><span class="n">labels</span> <span class="o">=</span> <span class="n">y_</span><span class="p">,</span> <span class="n">logits</span> <span class="o">=</span> <span class="n">y</span><span class="p">))</span>
<span class="c"># create an optimizer to minimize our cross entropy loss</span>
<span class="n">optimizer</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">train</span><span class="o">.</span><span class="n">GradientDescentOptimizer</span><span class="p">(</span><span class="n">learning_rate</span><span class="p">)</span><span class="o">.</span><span class="n">minimize</span><span class="p">(</span><span class="n">cross_entropy_loss</span><span class="p">)</span>
<span class="c"># functions that allow us to gauge accuracy of our model</span>
<span class="n">correct_predictions</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">equal</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">argmax</span><span class="p">(</span><span class="n">y</span><span class="p">,</span> <span class="mi">1</span><span class="p">),</span> <span class="n">tf</span><span class="o">.</span><span class="n">argmax</span><span class="p">(</span><span class="n">y_</span><span class="p">,</span> <span class="mi">1</span><span class="p">))</span> <span class="c"># creates a vector where each element is T or F, denoting whether our prediction was right</span>
<span class="n">accuracy</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">reduce_mean</span><span class="p">(</span><span class="n">tf</span><span class="o">.</span><span class="n">cast</span><span class="p">(</span><span class="n">correct_predictions</span><span class="p">,</span> <span class="n">tf</span><span class="o">.</span><span class="n">float32</span><span class="p">))</span> <span class="c"># maps the boolean values to 1.0 or 0.0 and calculates the accuracy</span>
<span class="c"># we will need to run this in our session to initialize our weights and biases. </span>
<span class="n">init</span> <span class="o">=</span> <span class="n">tf</span><span class="o">.</span><span class="n">global_variables_initializer</span><span class="p">()</span>
</code></pre>
</div>
<p>With all of our variables created and computation graph defined, we can now launch the graph in a session and begin training. It is important to remember that since we declared the <script type="math/tex">x</script> and <script type="math/tex">y</script> variables as placeholders, we will need to feed in data to run our optimizer that minimizes the cross entropy loss.</p>
<p>The data we will feed in (by passing a dictionary via the <em>feed_dict</em> argument) will come from the MNIST dataset. To randomly sample 100 training examples, we can use a wrapper provided by Tensorflow: <code class="highlighter-rouge">mnist.train.next_batch(100)</code>.</p>
<p>When we run the optimizer with the call <code class="highlighter-rouge">optimizer.run(..)</code> Tensorflow calculates a forward pass for us (essentially propagating our data through the graph we have described), and then uses the loss function we created to evaluate the loss, and then computes partial derivatives with respect to each set of weights and updates the weights according to the partial derivatives. This is called the backpropagation algorithm, and it involves significant application of the chain rule. CS 231N provides an <a href="http://cs231n.github.io/optimization-2/">excellent explanation</a> of backpropagation.</p>
<div class="language-python highlighter-rouge"><pre class="highlight"><code><span class="c"># launch a session to run our graph defined above. </span>
<span class="k">with</span> <span class="n">tf</span><span class="o">.</span><span class="n">Session</span><span class="p">()</span> <span class="k">as</span> <span class="n">sess</span><span class="p">:</span>
<span class="n">sess</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">init</span><span class="p">)</span> <span class="c"># initializes our variables</span>
<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">num_iterations</span><span class="p">):</span>
<span class="c"># get a sample of the dataset and run the optimizer, which calculates a forward pass and then runs the backpropagation algorithm to improve the weights</span>
<span class="n">batch</span> <span class="o">=</span> <span class="n">mnist</span><span class="o">.</span><span class="n">train</span><span class="o">.</span><span class="n">next_batch</span><span class="p">(</span><span class="mi">100</span><span class="p">)</span>
<span class="n">optimizer</span><span class="o">.</span><span class="n">run</span><span class="p">(</span><span class="n">feed_dict</span> <span class="o">=</span> <span class="p">{</span><span class="n">x</span><span class="p">:</span> <span class="n">batch</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">y_</span><span class="p">:</span> <span class="n">batch</span><span class="p">[</span><span class="mi">1</span><span class="p">]})</span>
<span class="c"># every 100 iterations, print out the accuracy</span>
<span class="k">if</span> <span class="n">i</span> <span class="o">%</span> <span class="mi">100</span> <span class="o">==</span> <span class="mi">0</span><span class="p">:</span>
<span class="c"># accuracy and loss are both functions that take (x, y) pairs as input, and run a forward pass through the network to obtain a prediction, and then compares the prediction with the actual y.</span>
<span class="n">acc</span> <span class="o">=</span> <span class="n">accuracy</span><span class="o">.</span><span class="nb">eval</span><span class="p">(</span><span class="n">feed_dict</span> <span class="o">=</span> <span class="p">{</span><span class="n">x</span><span class="p">:</span> <span class="n">batch</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">y_</span><span class="p">:</span> <span class="n">batch</span><span class="p">[</span><span class="mi">1</span><span class="p">]})</span>
<span class="n">loss</span> <span class="o">=</span> <span class="n">cross_entropy_loss</span><span class="o">.</span><span class="nb">eval</span><span class="p">(</span><span class="n">feed_dict</span> <span class="o">=</span> <span class="p">{</span><span class="n">x</span><span class="p">:</span> <span class="n">batch</span><span class="p">[</span><span class="mi">0</span><span class="p">],</span> <span class="n">y_</span><span class="p">:</span> <span class="n">batch</span><span class="p">[</span><span class="mi">1</span><span class="p">]})</span>
<span class="k">print</span><span class="p">(</span><span class="s">"Epoch: {}, accuracy: {}, loss: {}"</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">i</span><span class="p">,</span> <span class="n">acc</span><span class="p">,</span> <span class="n">loss</span><span class="p">))</span>
<span class="c"># evaluate our testing accuracy </span>
<span class="n">acc</span> <span class="o">=</span> <span class="n">accuracy</span><span class="o">.</span><span class="nb">eval</span><span class="p">(</span><span class="n">feed_dict</span> <span class="o">=</span> <span class="p">{</span><span class="n">x</span><span class="p">:</span> <span class="n">mnist</span><span class="o">.</span><span class="n">test</span><span class="o">.</span><span class="n">images</span><span class="p">,</span> <span class="n">y_</span><span class="p">:</span> <span class="n">mnist</span><span class="o">.</span><span class="n">test</span><span class="o">.</span><span class="n">labels</span><span class="p">})</span>
<span class="k">print</span><span class="p">(</span><span class="s">"testing accuracy: {}"</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="n">acc</span><span class="p">))</span>
</code></pre>
</div>
<div class="highlighter-rouge"><pre class="highlight"><code>Epoch: 0, accuracy: 0.07999999821186066, loss: 2.2931833267211914
Epoch: 100, accuracy: 0.8399999737739563, loss: 0.6990350484848022
Epoch: 200, accuracy: 0.8700000047683716, loss: 0.35569435358047485
Epoch: 300, accuracy: 0.9300000071525574, loss: 0.26591774821281433
Epoch: 400, accuracy: 0.8999999761581421, loss: 0.3307000696659088
Epoch: 500, accuracy: 0.9399999976158142, loss: 0.23977749049663544
Epoch: 600, accuracy: 0.9800000190734863, loss: 0.09397666901350021
Epoch: 700, accuracy: 0.9200000166893005, loss: 0.2931550145149231
Epoch: 800, accuracy: 0.9399999976158142, loss: 0.20180968940258026
Epoch: 900, accuracy: 0.949999988079071, loss: 0.18461622297763824
Epoch: 1000, accuracy: 0.9700000286102295, loss: 0.18968147039413452
Epoch: 1100, accuracy: 0.9599999785423279, loss: 0.14828498661518097
Epoch: 1200, accuracy: 0.949999988079071, loss: 0.1613173633813858
Epoch: 1300, accuracy: 0.9800000190734863, loss: 0.10008890926837921
Epoch: 1400, accuracy: 0.9900000095367432, loss: 0.07440848648548126
Epoch: 1500, accuracy: 0.9599999785423279, loss: 0.1167958676815033
Epoch: 1600, accuracy: 0.9100000262260437, loss: 0.1591644138097763
Epoch: 1700, accuracy: 0.9599999785423279, loss: 0.10022231936454773
Epoch: 1800, accuracy: 0.9700000286102295, loss: 0.1086776852607727
Epoch: 1900, accuracy: 0.9700000286102295, loss: 0.15659521520137787
Epoch: 2000, accuracy: 0.9599999785423279, loss: 0.09391114860773087
Epoch: 2100, accuracy: 0.9800000190734863, loss: 0.09786181151866913
Epoch: 2200, accuracy: 0.9700000286102295, loss: 0.11428779363632202
Epoch: 2300, accuracy: 0.9900000095367432, loss: 0.07231700420379639
Epoch: 2400, accuracy: 0.9700000286102295, loss: 0.09908157587051392
Epoch: 2500, accuracy: 0.9599999785423279, loss: 0.15657338500022888
Epoch: 2600, accuracy: 0.9900000095367432, loss: 0.07787769287824631
Epoch: 2700, accuracy: 0.9800000190734863, loss: 0.07373256981372833
Epoch: 2800, accuracy: 0.9700000286102295, loss: 0.062044695019721985
Epoch: 2900, accuracy: 0.9700000286102295, loss: 0.12512363493442535
Epoch: 3000, accuracy: 0.9900000095367432, loss: 0.11000598967075348
Epoch: 3100, accuracy: 0.9700000286102295, loss: 0.20609986782073975
Epoch: 3200, accuracy: 0.9800000190734863, loss: 0.09811186045408249
Epoch: 3300, accuracy: 0.9700000286102295, loss: 0.09816547483205795
Epoch: 3400, accuracy: 0.9700000286102295, loss: 0.10826745629310608
Epoch: 3500, accuracy: 0.9900000095367432, loss: 0.0645124614238739
Epoch: 3600, accuracy: 0.9700000286102295, loss: 0.1555529236793518
Epoch: 3700, accuracy: 0.9700000286102295, loss: 0.06963416188955307
Epoch: 3800, accuracy: 0.9900000095367432, loss: 0.08054723590612411
Epoch: 3900, accuracy: 0.9800000190734863, loss: 0.06120322644710541
Epoch: 4000, accuracy: 0.9900000095367432, loss: 0.06058483570814133
Epoch: 4100, accuracy: 0.9700000286102295, loss: 0.11490124464035034
Epoch: 4200, accuracy: 0.9700000286102295, loss: 0.10046141594648361
Epoch: 4300, accuracy: 0.9800000190734863, loss: 0.04671316221356392
Epoch: 4400, accuracy: 0.9900000095367432, loss: 0.052477456629276276
Epoch: 4500, accuracy: 0.9800000190734863, loss: 0.08245706558227539
Epoch: 4600, accuracy: 0.9900000095367432, loss: 0.041497569531202316
Epoch: 4700, accuracy: 0.9900000095367432, loss: 0.050769224762916565
Epoch: 4800, accuracy: 0.9900000095367432, loss: 0.039090484380722046
Epoch: 4900, accuracy: 0.9900000095367432, loss: 0.0564178042113781
testing accuracy: 0.9653000235557556
</code></pre>
</div>
<h3 id="questions-to-ponder">Questions to Ponder</h3>
<ul>
<li>Why is the test accuracy lower than the (final) training accuracy?</li>
<li>Why is there only a nonlinearity in our hidden layer, and not in the output layer?</li>
<li>How can we tune our hyperparameters? In practice, is it okay to continually search for the best performance on the test dataset?</li>
<li>Why do we use only 100 examples in each iteration, as opposed to the entire dataset of 50,000 examples?</li>
</ul>
<h3 id="exercises">Exercises</h3>
<ol>
<li>Using different activation functions. Consult the Tensorflow documentation on <code class="highlighter-rouge">tanh</code> and <code class="highlighter-rouge">sigmoid</code>, and use that as the activation function instead of <code class="highlighter-rouge">relu</code>. Gauge the resulting changes in accuracy.</li>
<li>Varying the number of neurons - as mentioned, we have complete control over the number of neurons in our hidden layer. How does the testing accuracy change with a small number of neurons versus a large number? What about the generalization gap (the difference between training and testing accuracy)?</li>
<li>Using different loss functions - we have discussed the cross entropy loss. Another common loss function used in neural networks is the MSE (mean squared error) loss. Consult the Tensorflow documentation and implement the loss with <code class="highlighter-rouge">tf.losses.mean_squared_error</code>.</li>
<li>Addition of another hidden layer - We can create a deeper neural network with additional hidden layers. Similar to how we created our original hidden layer, you will have to figure out the dimensions for the weights (and biases) by looking at the dimension of the previous layer, and deciding on the number of neurons you would like to use. Once you have decided this, you can simply insert another layer into the network with only a few lines of code:
<ol>
<li>Use <code class="highlighter-rouge">weight_variable()</code> and <code class="highlighter-rouge">bias_variable()</code> to create new variables for the additional layer (remember to specify the shape correctly).</li>
<li>Similar to computing the activations for the first layer, <code class="highlighter-rouge">h_1 = tf.nn.relu(...)</code>, compute the activations for your additional hidden layer.</li>
<li>Remember to change your output weight dimensions in order to reflect the number of neurons in the previous layer.</li>
</ol>
</li>
</ol>
<h3 id="more">More</h3>
<ol>
<li>Adding dropout</li>
<li>Using momentum optimization or other optimizers</li>
<li>Decaying learning rate</li>
<li>L2-regularization</li>
</ol>
<p>*Technical note: the way this loss function is presented, activations corresponding to a label of zero are not penalized at all. The full form of the cross-entropy loss is given by <script type="math/tex">C(y, h(x)) = -\sum_i \left[ y_i \log(h(x)_i) + (1 - y_i)\log(1 - h(x)_i) \right]</script>. However, the previously presented function works just as well when training on large datasets for many epochs (passes through the dataset), which is typically the case for neural networks.</p>
<p>This is a write-up and code tutorial that I wrote for an AI workshop given at UCLA, at which I gave a talk on neural networks and implementing them in Tensorflow. It’s part of a series on machine learning with Tensorflow, and the tutorials for the rest of them are available here.</p>
<p><strong>Paper Analysis - Training on corrupted labels</strong> (2017-04-07, http://rohan-varma.github.io/Noisy-Labels)</p>
<p><a href="https://arxiv.org/pdf/1703.08774.pdf">Link to paper</a></p>
<h3 id="abstract-and-intro">Abstract and Intro</h3>
<p>This paper talks about an innovative way to use labels assigned to medical images by many different doctors. Large medical datasets are generally labelled by a variety of doctors: each doctor labels a small fraction of the dataset, and many different doctors label the same image. Often, their labels disagree. When creating training and testing labels, this “disagreement” is usually resolved through a majority vote or modelled with a probability distribution.</p>
<p>As an example, if a specific medical image is labelled as malignant by 5 doctors and benign by 4, then with the majority vote method the label will be malignant, and with the probability distribution method the label will be malignant with probability 5/9. This is equivalent to sampling a Bernoulli distribution with parameter 5/9.</p>
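<p>Both conventional approaches can be sketched in a few lines of Python (a hypothetical illustration, not code from the paper):</p>

```python
import random

votes = ["malignant"] * 5 + ["benign"] * 4   # nine doctors label one image

# majority vote: take the most common label
majority_label = max(set(votes), key=votes.count)   # "malignant"

# probabilistic method: sample a label in proportion to the votes,
# i.e. a Bernoulli draw with p = 5/9 for "malignant"
rng = random.Random(0)
sampled_label = rng.choice(votes)
```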
<p>However, there could be potentially useful information in this disagreement that other methods could model better. For example, we could take into account which expert produced which label, and the reliability of that expert. One way to do this is to model each expert individually and weight each label by the expert’s reliability.</p>
<p>This paper first showed that the assumption that training label accuracy is an upper bound on a neural net’s accuracy is false, and then showed that there are better ways of modelling the opinions of several experts.</p>
<h3 id="motivation">Motivation</h3>
<p>The main motivation was to show that a neural network could “perform better than its teacher”, i.e., attain a test accuracy that is better than the accuracy of the labels in the testing dataset. An example of this was shown with MNIST.</p>
<p>The researchers trained a (relatively shallow) convolutional network with 2 conv layers and a single fully connected layer followed by a 10-way softmax. It was trained with stochastic gradient descent (SGD) with minibatch learning; SGD is explained further in the next section. When the researchers introduced noise into the data, corrupting the true label with a random label from another class with probability <script type="math/tex">0.5</script>, the network still achieved only 2.29% error. However, as the corruption probability increased beyond roughly <script type="math/tex">0.83</script>, the network failed to learn, and its error matched the corruption probability.</p>
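<p>The kind of label corruption described above can be sketched as follows (a minimal sketch of uniform label noise; the exact procedure in the paper may differ):</p>

```python
import random

def corrupt_labels(labels, p, num_classes=10, seed=0):
    # with probability p, replace the true label with a random *different* class
    rng = random.Random(seed)
    corrupted = []
    for y in labels:
        if rng.random() < p:
            y = rng.choice([c for c in range(num_classes) if c != y])
        corrupted.append(y)
    return corrupted

true_labels = [i % 10 for i in range(1000)]
noisy = corrupt_labels(true_labels, p=0.5)
frac_flipped = sum(t != n for t, n in zip(true_labels, noisy)) / len(true_labels)
# roughly half the training labels are now wrong, yet a network trained on
# such data can still achieve a low test error
```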
<h3 id="stochastic-gradient-descent">Stochastic Gradient Descent</h3>
<p>As an aside, stochastic gradient descent is a method for approximating the true gradient that full-batch gradient descent computes. Consider the typical gradient descent algorithm, which takes derivatives of a loss function <script type="math/tex">J(\theta)</script> with respect to the parameters and then updates the parameters in the direction opposite the gradient:</p>
<script type="math/tex; mode=display">\theta_i \leftarrow \theta_i - \alpha \nabla_{\theta_i} J(\theta, X), \quad \forall i \in [1 \dots m]</script>
<p>where there are <script type="math/tex">m</script> parameters that we need to learn. This is regular gradient descent, without any techniques such as momentum or Adagrad applied. The main point is that computing the partial derivatives requires the entire training set <script type="math/tex">X</script>.</p>
<p>If the training set is extremely large, this can be computationally prohibitive. The main idea behind SGD is then to use only a small portion of the training dataset to compute the updates, which are approximations of the true gradient. For example, the researchers used minibatches of 200 samples instead of the entire training set of 50,000 examples. These minibatch samples need to be drawn randomly. Even though each individual approximation may not be very accurate, in the long run we get a very good approximation of the true gradient.</p>
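<p>The contrast between full-batch and minibatch updates can be illustrated with a toy one-parameter least-squares problem (a hypothetical sketch, not the researchers’ setup):</p>

```python
import random

# toy dataset: y = 3x exactly, so the optimal slope is 3.0
data = [(i / 100, 3.0 * i / 100) for i in range(1, 101)]

def sgd_step(theta, batch, lr):
    # gradient of the mean squared error over just this minibatch,
    # an approximation of the gradient over the full dataset
    grad = sum(-2 * x * (y - theta * x) for x, y in batch) / len(batch)
    return theta - lr * grad

rng = random.Random(0)
theta = 0.0
for _ in range(2000):
    batch = rng.sample(data, 10)   # random minibatch instead of all 100 points
    theta = sgd_step(theta, batch, lr=0.05)
# theta ends up very close to the true slope of 3.0
```

<p>Even though each minibatch gradient is only an approximation, the updates pull the parameter toward the same minimum the full-batch gradient would.</p>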
<h3 id="better-use-of-noisy-labels">Better use of Noisy Labels</h3>
<ul>
<li>The paper pointed out that there are many differences in how doctors label the same data, due to the different training they received and the biases that every human has. Doctors only agreed with each other 70% of the time, and sometimes they even changed their own opinion from a previous reading.</li>
<li>This is pretty common in medicine. There’s usually no single right answer, and a lot of the time doctors rely on previous experience and intuition to diagnose their patients. I was reminded of a talk given by Vinod Khosla at Stanford MedicineX, where he said that the “practice” of medicine could become a more robust science if we use artificially intelligent agents to aid diagnosis.</li>
<li>This paper trained the neural network to model each of the individual doctors who were labelling data, instead of training the network on an average of the doctors’ labels.</li>
<li>Previously, deep learning methods have been very successful in diabetic retinopathy detection, with some networks attaining high sensitivity and specificity (97.55% and 93.4%, respectively).</li>
</ul>
<h3 id="accuracy-sensitivityrecall-specifity-and-precision">Accuracy, Sensitivity/Recall, Specifity, and Precision</h3>
<ul>
<li>Accuracy is not always the best way to measure the ability of a model, and sometimes it can be completely misleading. Consider a scenario where you have 98 spam emails and 2 non-spam emails in a testing dataset. A model that gets 95% accuracy is not useful, as it performs worse than simply predicting the majority label (which yields 98%). Always be wary of accuracy percentages that are not contextualized.</li>
<li>To understand sensitivity (the same as recall), specificity, and precision, we first consider the following diagram, from a blog post by <a href="http://yuvalg.com/blog/2012/01/01/precision-recall-sensitivity-and-specificity/">Yuval Greenfield</a>:</li>
</ul>
<p><img src="http://i.imgur.com/cJDJU.png" alt="Measurement methods" /></p>
<ul>
<li>Let’s define some terms. Consider a binary classification system that outputs a positive or negative label. A true positive is an outcome where the classifier correctly predicts a positive label. A false positive is when the classifier incorrectly predicts a positive label, and similarly for true and false negatives.</li>
<li>Accuracy, intuitively, is just the number of instances that we classified correctly over all the instances (both the instances we classified correctly and incorrectly). This means that <script type="math/tex">acc = \frac{TP + TN}{TP + FP + TN + FN}</script>.</li>
<li>Recall is defined as the proportion of correct positive classifications over the total number of positives. Therefore, we have the recall <script type="math/tex">r = \frac{TP}{TP + FN}</script>, where the sum <script type="math/tex">TP + FN</script> gives us all instances that are actually positive. Recall measures the proportion of actual positives that we predicted as positive. The term recall is interchangeable with sensitivity.</li>
<li>Precision measures a different quantity than recall, but the two are easy to mix up. Precision measures the proportion of our positive predictions that are actually positive. This means that the precision <script type="math/tex">p = \frac{TP}{TP + FP}</script>. Note how this differs from recall: recall measures how many positives we “found” out of all the actual positives, while precision measures the proportion of our positive predictions that were correct.</li>
<li>Specificity is like recall, but for negatives - it measures the proportion of correct negative classifications over all of the negatives, giving us the ratio of how many negatives we found to all of the existing negatives. This means that the specificity <script type="math/tex">s = \frac{TN}{TN + FP}</script>.</li>
</ul>
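<p>The four definitions above translate directly into code from the confusion-matrix counts:</p>

```python
def classification_metrics(tp, fp, tn, fn):
    # tp/fp/tn/fn: true positive, false positive, true negative, false negative counts
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),  # all correct over all instances
        "recall": tp / (tp + fn),                     # a.k.a. sensitivity
        "precision": tp / (tp + fp),                  # predicted positives that were right
        "specificity": tn / (tn + fp),                # actual negatives we found
    }

m = classification_metrics(tp=40, fp=10, tn=45, fn=5)
# accuracy = 0.85, precision = 0.8, recall = 40/45, specificity = 45/55
```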
<h3 id="methods">Methods</h3>
<ul>
<li>The researchers trained several different models of varying complexity on the diabetic retinopathy dataset.</li>
<li>As a baseline, the Inception-v3 architecture was used. Inception-v3 is a deep CNN with layers of “inception modules” that are composed of a concatenation of pooling, conv, and 1x1 conv steps. This is explained further in the next section.</li>
<li>
<p>The other networks used include the “Doctor Net”, which is extended to model the opinion of each doctor, and the “Weighted Doctor Net”, which trains individual models for each doctor and then combines their predictions through weighted averaging.</p>
</li>
<li>The cross entropy loss function was used to quantify the loss in all the models. The main difference between the several models that the researchers trained can be seen in the inputs to this loss function. The usual inputs to the cross-entropy loss are the predictions for a certain image along with the true label. This was replaced with, for example, the target distribution (essentially probabilistic labels) and averaged predictions.</li>
</ul>
<h3 id="inception-modules-in-convolutional-architectures">Inception Modules in Convolutional Architectures</h3>
<ul>
<li>At each step in a convolutional neural network’s architecture, you’re faced with many different possible choices. If you’re adding a convolutional layer, you’ll have to select the stride length, the kernel size, and whether you want to pad the edges or not. Alternatively, you may want to add a pooling layer, whether that’s max or average pooling.</li>
<li>The idea behind the inception module is that you don’t have to choose, and can instead apply all of these different options to your image/image transformation.</li>
<li>For example, you could have a 5 x 5 convolution followed by max pooling, as well as a 3 x 3 convolution followed by a 1 x 1 convolution, and simply concatenate the outputs of these operations at the end. The following image, from the Udacity Course on Deep Learning, gives a good visualization of this:</li>
</ul>
<p><img src="https://raw.githubusercontent.com/rohan-varma/paper-analysis/master/noise-labels-paper/incmod.png" alt="Inception Module" /></p>
<ul>
<li>The main idea behind inception modules is that a 5 x 5 kernel and 3 x 3 kernel followed by a 1 x 1 convolution may both be beneficial to the modelling power of your architecture, so we could just use both, and the model will often perform better than using a single convolution. <a href="https://www.youtube.com/watch?v=VxhSouuSZDY">This video</a> explains the inception module in more detail.</li>
</ul>
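<p>One bookkeeping consequence of this design is that, since every branch preserves the spatial size, the module’s output depth is just the sum of the channels produced by the parallel branches (a small sketch with made-up branch sizes):</p>

```python
def inception_output_shape(height, width, branch_channels):
    # each branch is padded/strided so its spatial size matches the input;
    # the branch outputs are then concatenated along the channel (depth) axis
    return (height, width, sum(branch_channels))

# e.g. four parallel branches: 1x1 conv, 3x3 conv, 5x5 conv, pooling + 1x1 conv
shape = inception_output_shape(28, 28, [64, 128, 32, 32])   # (28, 28, 256)
```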
<h3 id="modelling-label-noise-through-probabilistic-methods">Modelling label noise through probabilistic methods</h3>
<ul>
<li>
<p>The label noise was modelled by first assuming that a true label m is generated from an image s with some conditional probability: <script type="math/tex">p(m \vert{} s)</script>. Usually any form of deep neural network (and supervised ML in general) tries to learn this underlying probability distribution. Several learning algorithms such as binary logistic regression, softmax regression, and linear regression have a probabilistic interpretation of trying to model some underlying distribution. Here are a few examples:</p>
<ul>
<li>Binary logistic regression tries to model a Bernoulli distribution by conditioning the label <script type="math/tex">y_n</script> on the input <script type="math/tex">x</script> and the weights of the model <script type="math/tex">w</script>: <script type="math/tex">p(y_n = 1 \vert{} x_n; w) = h_w(x_n)</script> where <script type="math/tex">h</script> is our model that we learn. More generally, we have the likelihood <script type="math/tex">L = \prod_n h_w(x_n)^{y_n} (1 - h_w(x_n))^{1-y_n}</script>. We can then maximize the likelihood (or more typically, minimize the negative log likelihood) by applying gradient descent.</li>
<li>Linear regression can be interpreted as the real-valued output <script type="math/tex">y</script> being a linear function of the input <script type="math/tex">x</script> with Gaussian noise <script type="math/tex">\epsilon \sim N(0, \sigma^2)</script> added to it. Then we can write the log likelihood as <script type="math/tex">l(\theta) = \sum_n \log p(y_n \vert{} x_n; \theta, \sigma^2) = -\frac{1}{2\sigma^2}\sum_n (y_n - \theta^T x_n)^2 - \frac{N}{2}\log(2\pi\sigma^2)</script>.</li>
<li>What these probabilistic interpretations let us do is see the assumptions our models make, which is key if we want to simulate the real world. For example, these probability distributions show us that a key assumption is that our data points are independent of each other. For typical linear regression, we additionally assume that the noise is drawn from a normal distribution with constant variance around a mean that is linear in the inputs.</li>
</ul>
</li>
<li>This paper tries to model a similar probability distribution <script type="math/tex">p(m \vert{} s)</script> but with deep neural networks. It further takes that probability distribution over labels and adds a corrupting probability: the ideal label was <script type="math/tex">m</script>, but we observe, in our training set, a noisy label <script type="math/tex">\hat{m}</script> with probability <script type="math/tex">p(\hat{m} \vert{} m)</script>.</li>
<li>These probabilities can be drawn from any distribution; the researchers chose an asymmetric binary one. This allows us to account for the fact that even doctors disagree on the true label, so we better model real-world scenarios.</li>
</ul>
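<p>An asymmetric binary noise model can be sketched as follows (the flip probabilities here are made up for illustration, not taken from the paper):</p>

```python
import random

def observe_noisy_label(m, p_flip_pos=0.2, p_flip_neg=0.05, rng=random):
    # asymmetric: a true positive flips to negative with a different
    # probability than a true negative flips to positive
    p_flip = p_flip_pos if m == 1 else p_flip_neg
    return 1 - m if rng.random() < p_flip else m

rng = random.Random(0)
noisy = [observe_noisy_label(1, rng=rng) for _ in range(10000)]
flip_rate = 1 - sum(noisy) / len(noisy)   # close to p_flip_pos = 0.2
```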
<h3 id="training-the-model">Training the Model</h3>
<ul>
<li>The training was done with TensorFlow across several different workers and GPUs. The model was pre-initialized with weights learned by the inception-v3 architecture on the ImageNet dataset.</li>
<li>This method of “transfer learning”, or transferring knowledge from one task to another, has recently gained popularity. The idea is that with learning parameters on ImageNet first, the model learns weights that aid with basic object recognition. Then, the model is trained on a more specific dataset to adjust its later layers, which model higher-level features of what we desire to learn.</li>
</ul>
<h3 id="results">Results</h3>
<ul>
<li>The results support the researchers’ thesis that generalization accuracy improves if the amount of information in the desired outputs is increased.</li>
<li>Training was done with a 5-class loss. Results reported included the 5-class error, binary AUC, and specificity.</li>
<li>The hyperparameters were tuned with grid search. Methods used to avoid overfitting include L1 and L2 regularization as well as dropout throughout the networks. More information about regularization methods to prevent overfitting is in one of my blog posts here.</li>
<li>The “Weighted Doctor Network” (the network that computes a weighted average of predictions given by several different models, each learned for a particular doctor) performed best with a 5-class error of 20.58%, beating out the baseline inception net and the expectation-maximization algorithm, which had 23.83% and 23.74% error respectively.</li>
</ul>
<h3 id="grid-search">Grid Search</h3>
<ul>
<li>Grid search is a common method for tuning the hyperparameters of a deep model. Deep neural networks often require careful hyperparameter tuning; for example, a learning rate that is too large or that does not decay as training goes on may cause the algorithm to overshoot the minima and start to diverge. Therefore, we look at all possible combinations of hyperparameter values and pick the one that performs best.</li>
<li>Specifically, we enumerate values for our hyperparameters:
<ul>
<li>learning_rates = [0.0001, 0.001, 0.01, 0.1]</li>
<li>momentum_consts = [0.1, 0.5, 1.0]</li>
<li>dropout_probs = [0.1, 0.5, 0.8]</li>
</ul>
</li>
<li>Next, we do a search over all possible combinations. To evaluate performance, it is important to use a validation set or k-fold cross-validation; never touch the test set during training:
    <div class="highlighter-rouge"><pre class="highlight"><code>for lr in learning_rates:
    for momentum in momentum_consts:
        for dropout in dropout_probs:
            model = trained_model(X_train, y_train, lr, momentum, dropout) # trains a model with these hyperparameters
            cv_error = cross_validate(model, X_train, y_train)
            # update the best cv error and hyperparameters if a lower error is found
</code></pre>
    </div>
    <p>Finally, train a model with the selected hyperparameters.</p>
  </li>
<li>As you may have noticed, this method can get expensive as the number of different hyperparameters, or the number of values for each, goes up. For <script type="math/tex">n</script> different parameters with <script type="math/tex">k</script> possibilities each, we have to consider <script type="math/tex">k^n</script> different tuples.</li>
</ul>
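<p>The loop above can be written as a short, runnable sketch using <code class="highlighter-rouge">itertools.product</code>. The <code class="highlighter-rouge">cross_validation_error</code> stand-in below is hypothetical; in a real run it would be replaced by actually training a model and computing its cross-validation error:</p>

```python
from itertools import product

learning_rates = [0.0001, 0.001, 0.01, 0.1]
momentum_consts = [0.1, 0.5, 1.0]
dropout_probs = [0.1, 0.5, 0.8]

def cross_validation_error(lr, momentum, dropout):
    """Hypothetical stand-in for training a model and returning its k-fold
    CV error. A real version would call trained_model() and cross_validate().
    Here we pretend (0.01, 0.5, 0.5) is the best setting."""
    return abs(lr - 0.01) + abs(momentum - 0.5) + abs(dropout - 0.5)

best_error, best_params = float("inf"), None
for lr, momentum, dropout in product(learning_rates, momentum_consts, dropout_probs):
    err = cross_validation_error(lr, momentum, dropout)
    if err < best_error:
        best_error, best_params = err, (lr, momentum, dropout)

print(best_params)  # the hyperparameter tuple with the lowest CV error
```

Note that <code class="highlighter-rouge">product</code> enumerates all <script type="math/tex">k^n</script> tuples, which is exactly why grid search scales poorly.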
<h3 id="conclusion">Conclusion</h3>
<ul>
<li>The paper showed that there are more effective ways to use the noise in labels than collapsing it into a probability distribution or using a voting method. The network in this paper models the labels given by each individual doctor and learns how to weight them optimally.</li>
</ul>
<h3 id="future-application">Future Application</h3>
<ul>
<li>This new method of modelling noise in training datasets is pretty cool. I think it better models real-world datasets, where “predictions”, or diagnoses, are made by experts with varying levels of experience, biases, and predispositions. For deep learning to advance in the medical field, modelling this aspect of medicine well will be essential. It also applies to other fields where noisy labels exist in any fashion. <a href="https://github.com/rohan-varma/paper-analysis/blob/master/tf-implementation.py">Here</a> is an example TensorFlow implementation of training on corrupted labels.</li>
</ul>Link to paperImplementing a Neural Network in Python2017-02-10T00:00:00+00:002017-02-10T00:00:00+00:00http://rohan-varma.github.io/Neural-Net<p>Recently, I spent some time writing the code for a neural network in Python from scratch, without using any machine learning libraries. It proved to be a pretty enriching experience and taught me a lot about how neural networks work, and what we can do to make them work better. I thought I’d share some of my thoughts in this post.</p>
<h3 id="defining-the-learning-problem">Defining the Learning Problem</h3>
<p>In supervised learning problems, we’re given a training dataset that contains pairs of input instances and their corresponding labels. For example, in the MNIST dataset, our input instances are images of handwritten digits, and each label is a single digit indicating the number written in the image. To input this training data to a computer, we need to represent our data numerically. Each image in the MNIST dataset is a 28 x 28 grayscale image, so we can represent each image as a vector <script type="math/tex">\vec{x} \in R^{784}</script>. The elements of the vector <script type="math/tex">x</script> are known as features, and in this case they’re values between 0 and 255. Our labels are commonly denoted <script type="math/tex">y</script>, and as mentioned, are between 0 and 9. Here’s an example from the MNIST dataset [1]:</p>
<p><img src="https://raw.githubusercontent.com/rohan-varma/rohan-blog/master/images/mnistimg.png" alt="image" /></p>
<p>We can think of this dataset as a sample from some probability distribution over the feature/label space, known as the data generating distribution. Specifically, this distribution gives us the probability of observing any particular <script type="math/tex">(x, y)</script> pair in the cartesian product <script type="math/tex">X \times Y</script>. Intuitively, we would expect the pair consisting of an image of a handwritten 2 and the label 2 to have a high probability, and a pair consisting of a handwritten 2 and the label 9 to have a low probability.</p>
<p>Unfortunately, we don’t know the parameters of this data generating distribution, and this is where machine learning comes in: we aim to learn a function <script type="math/tex">h</script> that maps feature vectors to labels as accurately as possible, and in doing so, come up with estimates of the true underlying parameters. This function should generalize well: we don’t just want a function that produces a flawless mapping on our training set; it needs to generalize to unseen examples from the distribution. With this, we can introduce the idea of the loss function, a function that quantifies how far off our prediction is from the true value. The loss function gives us a good idea of our model’s performance, so over the entire population of (feature vector, label) pairs, we’d want the expectation of the loss to be as low as possible. Therefore, we want to find <script type="math/tex">h(x)</script> that minimizes the following function:</p>
<script type="math/tex; mode=display">E[L(y, h(x))] = \sum_{(x, y) \in D} p(x, y)L(y, h(x))</script>
<p>However, there’s a problem here: we can’t compute <script type="math/tex">p(x, y)</script>, so we have to resort to approximations of the loss function based on the training data that we do have access to. To approximate our loss, it is common to sum the loss function’s output across our training data, and then divide it by the number of training examples to obtain an average loss, known as the training loss:</p>
<script type="math/tex; mode=display">\frac{1}{N} \sum_{i=1}^{N} L(y_i, h(x_i))</script>
<p>There are several different loss functions that we can use in our neural network to give us an idea of how well it is doing. The function that I ended up using was the cross-entropy loss, which will be discussed a bit later.</p>
<p>In the space of neural networks, the function <script type="math/tex">h(x)</script> we find will consist of repeated matrix multiplications, each followed by a nonlinearity. The basic idea is that we need to find the parameters of this function that both produce a low training loss and generalize well to unseen data. With our learning problem defined, we can get on to the theory behind neural networks:</p>
<h3 id="precursor-a-single-neuron">Precursor: A single Neuron</h3>
<p>In the special case of binary classification, we can model an artificial neuron as receiving a linear combination of our inputs <script type="math/tex">w^{T} \cdot x</script>, and then computing a function that returns either 0 or 1, which is the predicted label of the input.</p>
<p>The weights are applied to the inputs, which are just the features of the training instance. Then, as a simple example of a function an artificial neuron can compute, we take the sign of the resulting number, and map that to a prediction. So the following is the neural model of learning [2]:</p>
<p><img src="https://raw.githubusercontent.com/rohan-varma/rohan-blog/gh-pages/images/perceptron.png" alt="Perceptron" /></p>
<p>There are a few evident limitations to this kind of learning: for one, it can only do binary classification. Moreover, a single neuron can only linearly separate data, so this model assumes that the data is indeed linearly separable. Deep neural networks, in contrast, are capable of learning representations that model the nonlinearity inherent in many data samples. The idea is that neural networks are just made up of layers of these neurons, which by themselves are pretty simple, but extremely powerful when combined.</p>
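<p>The thresholding neuron described above can be sketched in a few lines; the weight and input values here are made up for illustration:</p>

```python
import numpy as np

def neuron_predict(w, x):
    """A single artificial neuron for binary classification: take a linear
    combination w.x of the inputs, then threshold on its sign."""
    return 1 if np.dot(w, x) >= 0 else 0

# a toy weight vector and two inputs on either side of the decision boundary
w = np.array([1.0, -1.0])
print(neuron_predict(w, np.array([2.0, 1.0])))  # 1, since w.x = 1 >= 0
print(neuron_predict(w, np.array([1.0, 2.0])))  # 0, since w.x = -1 < 0
```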
<h3 id="from-binary-classification-to-multinomial-classfication">From Binary Classification to Multinomial Classification</h3>
<p>In the context of our MNIST problem, we’re interested in producing more than a binary classification - we want to predict one label out of a possible ten. One intuitive way of doing this is simply training several classifiers - a one classifier, a two classifier, and so on. We don’t want to train multiple models separately, though; we’d like a single model to learn all the possible classifications.</p>
<p>If we consider our basic model of a neuron, we see that it has one vector of weights that it applies to determine a label. What if we had multiple vectors - a matrix - of weights instead? Then, each row of weights could represent a separate classifier. To see this clearly, we can start off with a simple linear mapping:</p>
<script type="math/tex; mode=display">a = W^{T}x + b</script>
<p>For our MNIST problem, x is a vector with 784 components, W was originally a single vector of 784 values, and the bias, b, was a single number. However, if we modify W to be a matrix instead, we get multiple rows of weights, each of which can be applied to the input x via a matrix multiplication. Since we want to be able to predict 10 different labels, we can let W be a 10 x 784 matrix, and the matrix product <script type="math/tex">Wx</script> will produce a column vector of values that represent the outputs of 10 separate classifiers, where the weights for each classifier are given by the rows of W. The bias term is now a 10-dimensional vector that adds a bias to each classifier’s output. The core idea, however, is that this matrix of weights represents different classifiers, and now we can predict more than just binary labels. An image from Stanford’s CS 231n course shows this clearly [3]:</p>
<p><img src="https://raw.githubusercontent.com/rohan-varma/rohan-blog/gh-pages/images/imagemap.jpg" alt="Multi-class classification" /></p>
<p>Now that we have a vector of outputs that roughly correspond to scores for each predicted class, we’d like to figure out the most likely label. To do this, we can map our 10-dimensional vector to another 10-dimensional vector in which each value is in the range (0, 1) and all the values sum to 1. This is known as the softmax function. We can use the output of this function to represent a probability distribution: each value gives us the probability of the input x mapping to a particular label y. The softmax function’s input and output are both vectors, and it can be defined as <script type="math/tex">S(z)_i = \frac{e^{z_i}}{\sum_{j=1}^{N} e^{z_j}}</script>.</p>
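<p>As a sketch, the softmax function takes only a few lines of numpy; subtracting the maximum score first is a standard numerical-stability trick (it does not change the result), not part of the definition:</p>

```python
import numpy as np

def softmax(z):
    """Map a vector of raw class scores to a probability distribution."""
    e = np.exp(z - np.max(z))  # shift for numerical stability
    return e / np.sum(e)

scores = np.array([3.0, 1.0, 0.2])
probs = softmax(scores)
# probs is positive, sums to 1, and preserves the ordering of the scores
```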
<p>Next, we can use our loss function discussed previously to evaluate how well our classifier is doing. Specifically, we use the cross-entropy loss, which for a single prediction/label pair, is given by <script type="math/tex">C(S,L) = - \sum_{i}L_{i}log(S_{i})</script>.</p>
<p>Here, <script type="math/tex">L</script> is a specific one-hot encoded label vector, meaning that it is a column vector that has a 1 at the index corresponding to its label, and is zero everywhere else. <script type="math/tex">S</script> is a prediction vector whose elements sum to 1. As an example, we may have:</p>
<script type="math/tex; mode=display">L = \begin{bmatrix}
1 \\
0 \\
0
\end{bmatrix}, S = \begin{bmatrix}
0.2 \\
0.7 \\
0.1
\end{bmatrix} \longrightarrow{} C(S, L) = - \sum_{i=1}^{N}L_ilog(S_i) = -log(0.2) = 0.70</script>
<p>The contribution of this pair to the entire training data’s loss was 0.70. To contrast, we can swap the first two probabilities in our softmax vector. We then end up with a lower loss:</p>
<script type="math/tex; mode=display">L = \begin{bmatrix}
1 \\
0 \\
0
\end{bmatrix}, S = \begin{bmatrix}
0.7 \\
0.2 \\
0.1
\end{bmatrix} \longrightarrow{} C(S, L) = - \sum_{i=1}^{N}L_ilog(S_i) = -log(0.7) = 0.15</script>
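<p>We can sanity-check both worked examples numerically. Note that the numbers above use base-10 logarithms; in practice the natural log (numpy’s <code class="highlighter-rouge">np.log</code>) is the more common choice:</p>

```python
import numpy as np

def cross_entropy(S, L, log=np.log10):
    """Cross-entropy of a prediction vector S against a one-hot label L.
    Defaults to base-10 logs to match the worked examples above."""
    return -np.sum(L * log(S))

L = np.array([1.0, 0.0, 0.0])
print(round(cross_entropy(np.array([0.2, 0.7, 0.1]), L), 2))  # 0.7
print(round(cross_entropy(np.array([0.7, 0.2, 0.1]), L), 2))  # 0.15
```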
<p>So our cross-entropy loss makes intuitive sense: it is lower when our softmax vector has a high probability at the index of the true label, and it is higher when our probabilities indicate a wrong or uncertain choice. The average cross entropy loss is given by plugging into the average training loss function given above. A large part of training our neural network will be finding parameters that make the value of this function as small as possible, but still ensuring that our parameters generalize well to unseen data. For the linear softmax classifier, the training loss can be written as:</p>
<script type="math/tex; mode=display">L = - \frac{1}{N}\sum_{j} C( S(Wx_j + b), L_j)</script>
<p>This is the function that we seek to minimize. Using the gradient descent algorithm, we can learn a particular matrix of weights that performs well and produces a low training loss. The assumption is that a low training loss will correspond to a low expected loss across all samples in the population of data, but this is a risky assumption that can lead to overfitting. Therefore, a lot of research into machine learning is directed towards figuring out how to minimize training loss while also retaining the ability to generalize.</p>
<p>Now that we’ve figured out how to linearly model multilabel classification, we can create a basic neural network. Consider what happens when we combine the idea of artificial neurons with our logistic classifier. Instead of computing a linear function <script type="math/tex">Wx + b</script> and immediately passing the result to a softmax function, we can add an intermediate step: pass the output of our linear combination to a vector of artificial neurons that each compute a nonlinear function. Then, we take a linear combination of these outputs with a new vector of weights, and pass that into our softmax function.</p>
<p>Our previous linear function was given by:</p>
<script type="math/tex; mode=display">\hat{y} = softmax(W_1x + b)</script>
<p>And our new function is not too different:</p>
<script type="math/tex; mode=display">\hat{y} = softmax(W_2(nonlin(W_1x + b_1)) + b_2)</script>
<p>The key differences are that we have more biases and weights, as well as a larger composition of functions. This function is harder to optimize, and introduces a few interesting ideas about learning the weights with an algorithm known as backpropagation.</p>
<p>This “intermediate step” is actually known as a hidden layer, and we have complete control over it, meaning that among other things, we can vary the number of parameters or connections between weights and neurons to obtain an optimal network. It’s also important to notice that we can stack an arbitrary amount of these hidden layers between the input and output of our network, and we can tune these layers individually. This lets us make our network as deep as we want it. For example, here’s what a neural network with two hidden layers would look like [4]:</p>
<p><img src="https://raw.githubusercontent.com/rohan-varma/rohan-blog/gh-pages/images/neuralnet.png" alt="neural network" /></p>
<h3 id="implementing-the-neural-network">Implementing the Neural Network</h3>
<p>With a bit of background out of the way, we can actually begin implementing our network. If we’re going to implement a neural network with one hidden layer of arbitrary size, we need to initialize two matrices of weights: one to multiply with our inputs to feed into the hidden layer, and one to multiply with the outputs of our hidden layer, to feed into the softmax layer. Here’s how we can initialize our weights:</p>
<div class="language-python highlighter-rouge"><pre class="highlight"><code><span class="kn">import</span> <span class="nn">numpy</span> <span class="kn">as</span> <span class="nn">np</span>
<span class="k">def</span> <span class="nf">init_weights</span><span class="p">(</span><span class="n">num_input_features</span><span class="p">,</span> <span class="n">num_hidden_units</span><span class="p">,</span> <span class="n">num_output_units</span><span class="p">):</span>
<span class="s">"""initialize weights uniformly randomly with small values"""</span>
<span class="n">w1</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="o">-</span><span class="mf">1.0</span><span class="p">,</span> <span class="mf">1.0</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="n">num_hidden_units</span><span class="o">*</span><span class="p">(</span><span class="n">num_input_features</span> <span class="o">+</span> <span class="mi">1</span><span class="p">)</span>
<span class="p">)</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="n">num_hidden_units</span><span class="p">,</span> <span class="n">num_input_features</span> <span class="o">+</span> <span class="mi">1</span><span class="p">)</span>
<span class="n">w2</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">uniform</span><span class="p">(</span><span class="o">-</span><span class="mf">1.0</span><span class="p">,</span> <span class="mf">1.0</span><span class="p">,</span> <span class="n">size</span><span class="o">=</span><span class="n">num_output_units</span><span class="o">*</span><span class="p">(</span><span class="n">num_hidden_units</span><span class="o">+</span><span class="mi">1</span><span class="p">))</span><span class="o">.</span><span class="n">reshape</span><span class="p">(</span><span class="n">num_output_units</span><span class="p">,</span> <span class="n">num_hidden_units</span><span class="o">+</span> <span class="mi">1</span><span class="p">)</span>
<span class="k">return</span> <span class="n">w1</span><span class="p">,</span> <span class="n">w2</span>
<span class="n">w1</span><span class="p">,</span> <span class="n">w2</span> <span class="o">=</span> <span class="n">init_weights</span><span class="p">(</span><span class="mi">784</span><span class="p">,</span> <span class="mi">30</span><span class="p">,</span> <span class="mi">10</span><span class="p">)</span>
<span class="k">print</span> <span class="n">w1</span><span class="o">.</span><span class="n">shape</span> <span class="c"># expect (30, 785)</span>
<span class="k">print</span> <span class="n">w2</span><span class="o">.</span><span class="n">shape</span> <span class="c"># expect (10, 31)</span>
</code></pre>
</div>
<div class="highlighter-rouge"><pre class="highlight"><code>(30, 785)
(10, 31)
</code></pre>
</div>
<p>An important preprocessing step is to one-hot encode all of our labels. This is a typical process in machine learning and deep learning problems that involve modeling more than two labels. We begin with a 1-dimensional vector <script type="math/tex">y</script> with <em>m</em> elements, where element <script type="math/tex">y_i \in [0...N]</script>, and turn it into an <em>N x M</em> matrix <em>Y</em>. Then, the <em>ith</em> column of <em>Y</em> represents the <em>ith</em> training label (the element at index <em>i</em> of <script type="math/tex">y</script>). For this column, the label is given by the element <em>j</em> for which <script type="math/tex">Y[j][i] = 1</script>.</p>
<p>In other words, we’ve taken a vector in which a label <em>j</em> is given by <script type="math/tex">y[i] = j</script> and changed it into the matrix where the label would be <em>j</em> for the <em>ith</em> training example if <script type="math/tex">Y[j][i] = 1</script>. From this, we can implement a one-hot encoding:</p>
<div class="language-python highlighter-rouge"><pre class="highlight"><code><span class="k">def</span> <span class="nf">encode_labels</span><span class="p">(</span><span class="n">y</span><span class="p">,</span> <span class="n">num_labels</span><span class="p">):</span>
<span class="s">""" Encode labels into a one-hot representation
Params:
y: numpy array of num_samples, contains the target class labels for each training example.
For example, y = [2, 1, 3, 3] -> 4 training samples, and the ith sample has label y[i]
num_labels: number of output labels
returns: onehot, a matrix of labels by samples. For each column, the ith index will be
"hot", or 1, to represent that index being the label.
"""</span>
<span class="n">onehot</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">zeros</span><span class="p">((</span><span class="n">num_labels</span><span class="p">,</span> <span class="n">y</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">0</span><span class="p">]))</span>
<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="n">y</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">0</span><span class="p">]):</span>
<span class="n">onehot</span><span class="p">[</span><span class="n">y</span><span class="p">[</span><span class="n">i</span><span class="p">],</span> <span class="n">i</span><span class="p">]</span> <span class="o">=</span> <span class="mf">1.0</span>
<span class="k">return</span> <span class="n">onehot</span>
<span class="n">y_train</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array</span><span class="p">([</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">8</span><span class="p">,</span><span class="mi">7</span><span class="p">,</span><span class="mi">4</span><span class="p">,</span><span class="mi">5</span><span class="p">,</span><span class="mi">0</span><span class="p">,</span><span class="mi">1</span><span class="p">,</span><span class="mi">4</span><span class="p">])</span>
<span class="n">Y</span> <span class="o">=</span> <span class="n">encode_labels</span><span class="p">(</span><span class="n">y_train</span><span class="p">,</span><span class="mi">9</span><span class="p">)</span>
<span class="n">Y</span>
</code></pre>
</div>
<div class="highlighter-rouge"><pre class="highlight"><code>array([[ 1., 0., 0., 0., 0., 0., 1., 0., 0.],
[ 0., 1., 0., 0., 0., 0., 0., 1., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 1., 0., 0., 0., 1.],
[ 0., 0., 0., 0., 0., 1., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 1., 0., 0., 0., 0., 0.],
[ 0., 0., 1., 0., 0., 0., 0., 0., 0.]])
</code></pre>
</div>
<p>With that out of the way, we’re ready to start implementing the bread and butter of the neural network: the <code class="highlighter-rouge">fit()</code> function. Fitting a function to our data requires two key steps: the forward propagation, where we make a prediction for a specific training example, and the backpropagation algorithm, where we update each of our weights by calculating the weight’s impact on our prediction error. The prediction error is quantified by the average training loss discussed above.</p>
<p>The first step in implementing the entire fit function will be to implement forward propagation. I decided to use the tanh function as the nonlinearity; other popular choices include the sigmoid and ReLU functions. The forward propagation code passes our inputs to the hidden layer via a matrix multiplication with weights, and the output of the hidden layer is multiplied with a different set of weights, the result of which is passed into the softmax layer, from which we obtain our predictions.</p>
<p>It’s also useful to save and return these intermediate values instead of only returning the prediction, since we’ll need these values later for backpropagation.</p>
<div class="language-python highlighter-rouge"><pre class="highlight"><code><span class="k">def</span> <span class="nf">forward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">X</span><span class="p">,</span> <span class="n">w1</span><span class="p">,</span> <span class="n">w2</span><span class="p">,</span> <span class="n">do_dropout</span> <span class="o">=</span> <span class="bp">True</span><span class="p">):</span>
<span class="s">""" Compute feedforward step """</span>
<span class="c">#the activation of the input layer is simply the input matrix plus bias unit, added for each sample.</span>
<span class="n">a1</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">add_bias_unit</span><span class="p">(</span><span class="n">X</span><span class="p">)</span>
<span class="c">#the input of the hidden layer is obtained by applying our weights to our inputs. We essentially take a linear combination of our inputs</span>
<span class="n">z2</span> <span class="o">=</span> <span class="n">w1</span><span class="o">.</span><span class="n">dot</span><span class="p">(</span><span class="n">a1</span><span class="o">.</span><span class="n">T</span><span class="p">)</span>
<span class="c">#applies the tanh function to map the input to values between -1 and 1</span>
<span class="n">a2</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">tanh</span><span class="p">(</span><span class="n">z2</span><span class="p">)</span>
<span class="c">#add a bias unit to activation of the hidden layer.</span>
<span class="n">a2</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">add_bias_unit</span><span class="p">(</span><span class="n">a2</span><span class="p">,</span> <span class="n">column</span><span class="o">=</span><span class="bp">False</span><span class="p">)</span>
<span class="c"># compute input of output layer in exactly the same manner.</span>
<span class="n">z3</span> <span class="o">=</span> <span class="n">w2</span><span class="o">.</span><span class="n">dot</span><span class="p">(</span><span class="n">a2</span><span class="p">)</span>
<span class="c"># the activation of our output layer is just the softmax function.</span>
<span class="n">a3</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">softmax</span><span class="p">(</span><span class="n">z3</span><span class="p">)</span>
<span class="k">return</span> <span class="n">a1</span><span class="p">,</span> <span class="n">z2</span><span class="p">,</span> <span class="n">a2</span><span class="p">,</span> <span class="n">z3</span><span class="p">,</span> <span class="n">a3</span>
</code></pre>
</div>
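<p>The forward pass relies on a few helpers (<code class="highlighter-rouge">add_bias_unit</code>, <code class="highlighter-rouge">tanh</code>, <code class="highlighter-rouge">softmax</code>) that aren’t defined in this post. One possible sketch, written as standalone functions and chosen to be consistent with the weight shapes initialized earlier, is:</p>

```python
import numpy as np

def add_bias_unit(X, column=True):
    """Prepend a column (or row) of ones so the bias can be folded into the
    weight matrices; this is why the weight shapes above have a +1 term."""
    if column:  # X is (samples, features): add a bias column
        return np.hstack([np.ones((X.shape[0], 1)), X])
    # X is (features, samples): add a bias row
    return np.vstack([np.ones((1, X.shape[1])), X])

def tanh(z):
    return np.tanh(z)

def softmax(z):
    """Column-wise softmax for a (classes, samples) score matrix."""
    e = np.exp(z - np.max(z, axis=0))  # shift for numerical stability
    return e / np.sum(e, axis=0)
```

With these shapes, <code class="highlighter-rouge">a1</code> is (samples, 785), <code class="highlighter-rouge">z2</code> is (30, samples), <code class="highlighter-rouge">a2</code> becomes (31, samples) after the bias row, and <code class="highlighter-rouge">a3</code> is (10, samples).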
<p>Since these operations are all vectorized, we generally run forward propagation on the entire matrix of training data at once. Next, we want to quantify how “off” our weights are, based on what was predicted. The cost function is given by <script type="math/tex">-\sum_{i,j} L_{i,j}log(S_{i,j})</script> , where <script type="math/tex">L</script> is the one-hot encoded label for a particular example and <script type="math/tex">S</script> is the output of the softmax function in the final layer of our neural network. In code, it can be implemented as follows:</p>
<div class="language-python highlighter-rouge"><pre class="highlight"><code><span class="k">def</span> <span class="nf">get_cost</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">y_enc</span><span class="p">,</span> <span class="n">output</span><span class="p">,</span> <span class="n">w1</span><span class="p">,</span> <span class="n">w2</span><span class="p">):</span>
<span class="s">""" Compute the cost function."""</span>
<span class="n">cost</span> <span class="o">=</span> <span class="o">-</span> <span class="n">np</span><span class="o">.</span><span class="nb">sum</span><span class="p">(</span><span class="n">y_enc</span><span class="o">*</span><span class="n">np</span><span class="o">.</span><span class="n">log</span><span class="p">(</span><span class="n">output</span><span class="p">))</span>
<span class="k">return</span> <span class="n">cost</span><span class="o">/</span><span class="n">y_enc</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">1</span><span class="p">]</span> <span class="c">#average cost</span>
</code></pre>
</div>
<h3 id="learning-weights-with-gradient-descent">Learning Weights with Gradient Descent</h3>
<p>Now we’re at a stage where our neural network can make predictions given training data, compare them to the actual labels, and quantify the error across our entire training dataset. Our network isn’t able to learn quite yet, however. The actual “learning” happens with the gradient descent algorithm. Gradient descent works by computing the partial derivative of the cost with respect to each of our weights. The vector of these partial derivatives is the gradient, which can be shown mathematically to point in the direction of fastest increase of our loss function. Then, we update each of our weights by the negative of its partial derivative (hence the “descent” in gradient descent). This can be seen as taking a “step” toward a minimum. The size of this step is given by a hyperparameter known as the learning rate, which turns out to be extremely important in getting gradient descent to work. In general, the gradient descent algorithm can be given as follows:</p>
<p><em>while not converged</em>:</p>
<script type="math/tex; mode=display">\delta_i = \frac{\partial L}{\partial w_i} \quad \forall w_i \in W</script>
<script type="math/tex; mode=display">w_i := w_i - \alpha \delta_i</script>
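<p>As a toy illustration of this update rule, here is gradient descent on the one-dimensional convex function <script type="math/tex">L(w) = (w - 3)^2</script> (chosen arbitrarily for this sketch; its gradient is <script type="math/tex">2(w - 3)</script> and its minimum is at <script type="math/tex">w = 3</script>):</p>

```python
# Gradient descent on L(w) = (w - 3)**2, whose gradient is 2*(w - 3).
w = 0.0      # initial weight
alpha = 0.1  # learning rate
for _ in range(100):
    grad = 2 * (w - 3)   # partial derivative of the loss w.r.t. w
    w = w - alpha * grad # step in the direction of steepest descent
print(w)  # converges toward 3.0
```

Each iteration shrinks the distance to the minimum by a constant factor here; with a learning rate above 1.0, the same loop would diverge, which is the overshooting behavior described below.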
<p>Gradient descent seeks to find the weights that bring our cost function to a global minimum. Intuitively, this makes sense, as we’d like our cost function to be as low as possible (while still taking care not to overfit on our training data). However, the functions that quantify the loss for most machine learning algorithms tend not to have an explicit solution to <script type="math/tex">\frac{\partial L}{\partial W} = 0</script>, so we must use numerical optimization algorithms such as gradient descent to hopefully reach a local minimum. It turns out that we’re not always guaranteed to reach a global minimum either. Gradient descent converges to a global minimum only if our cost function is <strong>convex</strong>, and while the cost functions for algorithms such as logistic regression are convex, the cost function for our single-hidden-layer neural network is not.</p>
<p>We can still use gradient descent and get to a reasonably good set of weights, however. The art of doing this is an active area of deep learning research. Currently, a common method for implementing gradient descent for deep learning seems to be:</p>
<p>1) Initializing your weights sensibly. This often involves some experimentation. If your network is not very deep, initializing the weights randomly with small values and low variance usually works. For deeper networks, the scale of the initialization matters more. <a href="http://andyljones.tumblr.com/post/110998971763/an-explanation-of-xavier-initialization">Xavier initialization</a> is a useful algorithm that determines the weight initialization scale with respect to the net’s size.</p>
<p>2) Choosing an optimal learning rate. If the learning rate is too large, gradient descent could end up actually diverging, or skipping over the minimum entirely since it takes steps that are too large. Likewise, if the learning rate is too small, gradient descent will converge much more slowly. In general, it is advisable to start off with a small learning rate and decay it over time as your function begins to converge.</p>
<p>3) Use minibatch gradient descent. Instead of computing the loss and weight updates across the entire set of training examples, <strong>randomly</strong> choose a subset of your training examples and use that to update your weights. While each individual update is noisier, it is much more efficient, so we end up winning by a lot. We essentially approximate the gradient over the entire training set from a sample of it.</p>
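<p>One common way to implement the random subset selection is to shuffle the example indices once per epoch and slice them into batches; the array names and sizes below are illustrative:</p>

```python
import numpy as np

# One epoch of minibatch sampling: shuffle the example indices, then
# slice them into batches. Each batch would drive one gradient update,
# e.g. on X_train[batch] and y_train[batch] instead of the full set.
num_examples, batch_size = 1000, 32
indices = np.random.permutation(num_examples)
batches = [indices[i:i + batch_size] for i in range(0, num_examples, batch_size)]
```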
<p>4) Use the momentum method. This involves remembering the previous gradients and factoring their direction into the current update. It has proven quite successful, as Geoffrey Hinton discusses in <a href="https://www.youtube.com/watch?v=8yg2mRJx-z4">this video</a>.</p>
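<p>In its simplest form, the momentum method keeps a running “velocity” that blends the previous update direction into the current one. A minimal scalar sketch (real updates apply this per weight; the hyperparameter values are illustrative):</p>

```python
def momentum_step(w, grad, velocity, lr=0.1, momentum=0.9):
    """Blend the previous update direction (velocity) into the current update."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

w, v = 1.0, 0.0
w, v = momentum_step(w, grad=2.0, velocity=v)  # first step: a plain gradient step
w, v = momentum_step(w, grad=2.0, velocity=v)  # second step is larger: velocity builds up
```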
<p>As a side note, Ilya Sutskever, co-founder of OpenAI, writes more about training deep neural networks with stochastic gradient descent <a href="http://yyue.blogspot.com/2015/01/a-brief-overview-of-deep-learning.html">here</a>.</p>
<p>Here’s an implementation of the fit() function:</p>
<div class="language-python highlighter-rouge"><pre class="highlight"><code><span class="k">def</span> <span class="nf">fit</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">X</span><span class="p">,</span> <span class="n">y</span><span class="p">,</span> <span class="n">print_progress</span><span class="o">=</span><span class="bp">True</span><span class="p">):</span>
<span class="s">""" Learn weights from training data """</span>
<span class="n">X_data</span><span class="p">,</span> <span class="n">y_data</span> <span class="o">=</span> <span class="n">X</span><span class="o">.</span><span class="n">copy</span><span class="p">(),</span> <span class="n">y</span><span class="o">.</span><span class="n">copy</span><span class="p">()</span>
<span class="n">y_enc</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">encode_labels</span><span class="p">(</span><span class="n">y</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">n_output</span><span class="p">)</span>
<span class="c"># init previous gradients</span>
<span class="n">prev_grad_w1</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">zeros</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">w1</span><span class="o">.</span><span class="n">shape</span><span class="p">)</span>
<span class="n">prev_grad_w2</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">zeros</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">w2</span><span class="o">.</span><span class="n">shape</span><span class="p">)</span>
<span class="c">#pass through the dataset</span>
<span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="bp">self</span><span class="o">.</span><span class="n">epochs</span><span class="p">):</span>
<span class="bp">self</span><span class="o">.</span><span class="n">learning_rate</span> <span class="o">/=</span> <span class="p">(</span><span class="mi">1</span> <span class="o">+</span> <span class="bp">self</span><span class="o">.</span><span class="n">decay_rate</span><span class="o">*</span><span class="n">i</span><span class="p">)</span>
<span class="c"># use minibatches</span>
<span class="n">mini</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">array_split</span><span class="p">(</span><span class="nb">range</span><span class="p">(</span><span class="n">y_data</span><span class="o">.</span><span class="n">shape</span><span class="p">[</span><span class="mi">0</span><span class="p">]),</span> <span class="bp">self</span><span class="o">.</span><span class="n">minibatch_size</span><span class="p">)</span>
<span class="k">for</span> <span class="n">idx</span> <span class="ow">in</span> <span class="n">mini</span><span class="p">:</span>
<span class="c">#feed feedforward</span>
<span class="n">a1</span><span class="p">,</span> <span class="n">z2</span><span class="p">,</span> <span class="n">a2</span><span class="p">,</span> <span class="n">z3</span><span class="p">,</span> <span class="n">a3</span><span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">forward</span><span class="p">(</span><span class="n">X_data</span><span class="p">[</span><span class="n">idx</span><span class="p">],</span> <span class="bp">self</span><span class="o">.</span><span class="n">w1</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">w2</span><span class="p">)</span>
<span class="n">cost</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">get_cost</span><span class="p">(</span><span class="n">y_enc</span><span class="o">=</span><span class="n">y_enc</span><span class="p">[:,</span> <span class="n">idx</span><span class="p">],</span> <span class="n">output</span><span class="o">=</span><span class="n">a3</span><span class="p">,</span> <span class="n">w1</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">w1</span><span class="p">,</span> <span class="n">w2</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">w2</span><span class="p">)</span>
<span class="c">#compute gradient via backpropagation</span>
<span class="n">grad1</span><span class="p">,</span> <span class="n">grad2</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">backprop</span><span class="p">(</span><span class="n">a1</span><span class="o">=</span><span class="n">a1</span><span class="p">,</span> <span class="n">a2</span><span class="o">=</span><span class="n">a2</span><span class="p">,</span> <span class="n">a3</span><span class="o">=</span><span class="n">a3</span><span class="p">,</span> <span class="n">z2</span><span class="o">=</span><span class="n">z2</span><span class="p">,</span> <span class="n">y_enc</span><span class="o">=</span><span class="n">y_enc</span><span class="p">[:,</span> <span class="n">idx</span><span class="p">],</span> <span class="n">w1</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">w1</span><span class="p">,</span> <span class="n">w2</span><span class="o">=</span><span class="bp">self</span><span class="o">.</span><span class="n">w2</span><span class="p">)</span>
<span class="c"># update parameters, multiplying by learning rate + momentum constants</span>
<span class="c"># gradient update: w += -alpha * gradient.</span>
<span class="n">w1_update</span><span class="p">,</span> <span class="n">w2_update</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">learning_rate</span><span class="o">*</span><span class="n">grad1</span><span class="p">,</span> <span class="bp">self</span><span class="o">.</span><span class="n">learning_rate</span><span class="o">*</span><span class="n">grad2</span>
<span class="c"># gradient update: w += -alpha * gradient.</span>
<span class="c"># use momentum - add in previous gradient mutliplied by a momentum hyperparameter.</span>
<span class="n">momentum_factor_w1</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">momentum_const</span> <span class="o">*</span> <span class="n">prev_grad_w1</span>
<span class="n">momentum_factor_w2</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">momentum_const</span> <span class="o">*</span> <span class="n">prev_grad_w2</span>
<span class="c">#update</span>
<span class="bp">self</span><span class="o">.</span><span class="n">w1</span> <span class="o">+=</span> <span class="o">-</span><span class="p">(</span><span class="n">w1_update</span> <span class="o">+</span> <span class="n">momentum_factor_w1</span><span class="p">)</span>
<span class="bp">self</span><span class="o">.</span><span class="n">w2</span> <span class="o">+=</span> <span class="o">-</span><span class="p">(</span><span class="n">w2_update</span> <span class="o">+</span> <span class="n">momentum_factor_w2</span><span class="p">)</span>
<span class="c"># save current gradients</span>
<span class="n">prev_grad_w1</span><span class="p">,</span> <span class="n">prev_grad_w2</span> <span class="o">=</span> <span class="n">w1_update</span><span class="p">,</span> <span class="n">w2_update</span>
<span class="k">if</span> <span class="n">print_progress</span> <span class="ow">and</span> <span class="p">(</span><span class="n">i</span><span class="o">+</span><span class="mi">1</span><span class="p">)</span> <span class="o">%</span> <span class="mi">50</span><span class="o">==</span><span class="mi">0</span><span class="p">:</span>
<span class="k">print</span> <span class="s">"Epoch: "</span> <span class="o">+</span> <span class="nb">str</span><span class="p">(</span><span class="n">i</span><span class="o">+</span><span class="mi">1</span><span class="p">)</span>
<span class="k">print</span> <span class="s">"Loss: "</span> <span class="o">+</span> <span class="nb">str</span><span class="p">(</span><span class="n">cost</span><span class="p">)</span>
<span class="n">acc</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">training_acc</span><span class="p">(</span><span class="n">X</span><span class="p">,</span> <span class="n">y</span><span class="p">)</span>
<span class="k">print</span> <span class="s">"Training Accuracy: "</span> <span class="o">+</span> <span class="nb">str</span><span class="p">(</span><span class="n">acc</span><span class="p">)</span>
<span class="k">return</span> <span class="bp">self</span>
</code></pre>
</div>
<p>To compute the actual gradients, we use the backpropagation algorithm, which works backwards from the outputs of our feedforward step to calculate the gradients we need to update our weights. Essentially, we repeatedly apply the chain rule starting from our outputs until we end up with values for <script type="math/tex">\frac{\delta L}{\delta W_1}</script> and <script type="math/tex">\frac{\delta L}{\delta W_2}</script>. CS 231N provides an <a href="http://cs231n.github.io/optimization-2/">excellent explanation</a> of backprop.</p>
<p>Our forward pass was given by:</p>
<div class="language-python highlighter-rouge"><pre class="highlight"><code><span class="n">a1</span> <span class="o">=</span> <span class="n">X</span>
<span class="n">z2</span> <span class="o">=</span> <span class="n">w1</span> <span class="o">*</span> <span class="n">a1</span><span class="o">.</span><span class="n">T</span>
<span class="n">a2</span> <span class="o">=</span> <span class="n">tanh</span><span class="p">(</span><span class="n">z2</span><span class="p">)</span>
<span class="n">z3</span> <span class="o">=</span> <span class="n">w2</span> <span class="o">*</span> <span class="n">a2</span>
<span class="n">a3</span> <span class="o">=</span> <span class="n">softmax</span><span class="p">(</span><span class="n">z3</span><span class="p">)</span>
</code></pre>
</div>
<p>Using these values, our backwards pass can be given by:</p>
<div class="language-python highlighter-rouge"><pre class="highlight"><code><span class="n">s3</span> <span class="o">=</span> <span class="n">a3</span> <span class="o">-</span> <span class="n">y_actual</span>
<span class="n">s2</span> <span class="o">=</span> <span class="n">w2</span><span class="o">.</span><span class="n">T</span> <span class="o">*</span> <span class="n">s3</span> <span class="o">*</span> <span class="n">tanh</span><span class="p">(</span><span class="n">z2</span><span class="p">,</span> <span class="n">deriv</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>
<span class="n">grad_w1</span> <span class="o">=</span> <span class="n">s2</span> <span class="o">*</span> <span class="n">a1</span>
<span class="n">grad_w2</span> <span class="o">=</span> <span class="n">s3</span> <span class="o">*</span> <span class="n">a2</span><span class="o">.</span><span class="n">T</span>
</code></pre>
</div>
<p>The results of our backwards pass were used in the fit() function to update our weights. That covers the essential parts of implementing a neural network, and training this vanilla network on MNIST for 1000 epochs gave me about 95% accuracy on test data. There are still a few bells and whistles we can add to our network to make it generalize better to unseen data, however. These techniques reduce overfitting; two common ones are L2-regularization and dropout.</p>
<h3 id="l2-regularization">L2-regularization</h3>
<p>Using L2-regularization in neural networks is the most common way to address the issue of overfitting. L2 regularization adds a term to the cost function which we seek to minimize.</p>
<p>Previously, our cost function was given by <script type="math/tex">- \sum_{i,j} L_{i,j} log(S_{i,j})</script></p>
<p>Now, we tack on an additional regularization term: <script type="math/tex">0.5 \lambda \|W\|^{2}</script>. Essentially, we impose a penalty on large weight values: large weights are indicative of overfitting, so keeping the weights relatively small pushes us towards a simpler model. To see why this is, consider the classic case of overfitting, where our learning algorithm essentially memorizes the training data [5]:</p>
<p><img src="https://raw.githubusercontent.com/rohan-varma/rohan-blog/gh-pages/images/overfitting.png" alt="overfitting" /></p>
<p>The fitted coefficients of the degree 9 polynomial are much larger than those of the degree 3 polynomial:</p>
<p><img src="https://raw.githubusercontent.com/rohan-varma/rohan-blog/gh-pages/images/overfitting2.png" alt="overfitting values" /></p>
<p>With regularization, when we minimize the cost function, we have two separate goals. Minimizing the first term picks weight values that give us the smallest training error. Minimizing the second term picks weight values that are as small as possible. The value of the hyperparameter <script type="math/tex">\lambda</script> controls how much we penalize large weights: if <script type="math/tex">\lambda</script> is 0, we don’t regularize at all, and if <script type="math/tex">\lambda</script> is very large, the cross-entropy term is effectively ignored and we prioritize small weight values over fitting the training data.</p>
<p>Adding the L2-regularization term to the cost function does not change gradient descent very much. The derivative of the regularization term <script type="math/tex">0.5 \lambda W^2</script> with respect to <script type="math/tex">W</script> is simply <script type="math/tex">\lambda W</script>, so we just add that term while computing the gradient. The result of this extra term is that each time we update our weights, the weights additionally decay towards zero.</p>
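<p>In code, adding L2 regularization amounts to one extra term in the cost and one in the gradient (a sketch with illustrative names, not the network’s actual functions):</p>

```python
import numpy as np

def l2_cost_and_grad(data_cost, data_grad, W, lam):
    """Add 0.5 * lam * ||W||^2 to the cost and lam * W to the gradient."""
    cost = data_cost + 0.5 * lam * np.sum(W ** 2)
    grad = data_grad + lam * W
    return cost, grad

W = np.array([1.0, -2.0])
cost, grad = l2_cost_and_grad(data_cost=1.0, data_grad=np.zeros(2), W=W, lam=0.1)
# cost = 1.0 + 0.5 * 0.1 * (1 + 4) = 1.25, grad = [0.1, -0.2]
```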
<p>While L2-regularization is quite popular, a few other forms of regularization are used as well. Another common method is L1-regularization, in which we add on the L1-norm of our weights, multiplied by the regularization hyperparameter: <script type="math/tex">\lambda \|W\|_{1}</script>.</p>
<p>With L1-regularization, we penalize weights that are non-zero, thus leading our network to learn sparse vectors of weights (vectors where many of the weight entries are zero). Therefore, our neurons will only fire when the most important features (whatever they may be) are detected in our training examples. This helps with feature selection.</p>
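<p>The gradient contribution of the L1 term is <script type="math/tex">\lambda \, sign(W)</script>: a constant-size push toward zero regardless of a weight’s magnitude, which is why small weights get driven exactly to zero. A sketch (the helper name is illustrative):</p>

```python
import numpy as np

def l1_grad_term(W, lam):
    """Subgradient of lam * ||W||_1: a constant-size push toward zero."""
    return lam * np.sign(W)

W = np.array([0.5, -3.0, 0.0])
g = l1_grad_term(W, lam=0.1)
# the push is the same size for the small weight 0.5 and the large weight -3.0
```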
<h3 id="dropout">Dropout</h3>
<p>Dropout is a more recently introduced, but very effective, technique for reducing overfitting in neural networks. Generally, every neuron in a particular layer is connected to all the neurons in the next layer; this is called a “fully-connected” or “dense” layer, since all activations are passed through the layer in the network. Dropout randomly zeroes a subset of a layer’s activations, so the neurons in the next layer don’t receive any activations from the dropped neurons in the previous layer. This process is random, meaning that a different set of activations is discarded on each iteration of learning. Here’s a visualization of what happens when dropout is in use [6]:</p>
<p><img src="https://raw.githubusercontent.com/rohan-varma/rohan-blog/master/images/dropout.jpeg" alt="dropout" /></p>
<p>When dropout is used, each neuron is forced to learn redundant representations of its features, meaning that it is less likely to fire only when an extremely specific set of features is seen. This leads to better generalization. Alternatively, dropout can be seen as training several different neural network architectures at once (since a different subset of neurons is dropped on each iteration). When the network is tested, we don’t discard any activations, so testing is similar to averaging the predictions of many different (though not independent) neural network architectures.</p>
<p>Dropout is very effective, often yielding better results than state-of-the-art regularization and early stopping (halting training when the error on a validation dataset starts to rise). In a <a href="http://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf">paper describing dropout</a>, researchers were able to train a 65-million parameter network on MNIST (which has 60,000 training examples) to only 0.95% error using dropout; overfitting would have been a huge issue if such a large network relied only on regularization methods.</p>
<p>To implement dropout, we can set some of the activations computed to 0, and then pass that vector of results to the next layer. Forward propagation changes slightly:</p>
<div class="language-python highlighter-rouge"><pre class="highlight"><code><span class="k">def</span> <span class="nf">forward</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">X</span><span class="p">,</span> <span class="n">w1</span><span class="p">,</span> <span class="n">w2</span><span class="p">,</span> <span class="n">do_dropout</span> <span class="o">=</span> <span class="bp">True</span><span class="p">):</span>
<span class="s">""" Compute feedforward step """</span>
<span class="n">a1</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">add_bias_unit</span><span class="p">(</span><span class="n">X</span><span class="p">)</span>
<span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">dropout</span> <span class="ow">and</span> <span class="n">do_dropout</span><span class="p">:</span> <span class="n">a1</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">compute_dropout</span><span class="p">(</span><span class="n">a1</span><span class="p">)</span> <span class="c"># dropout</span>
<span class="c">#the input of the hidden layer is obtained by applying our weights to our inputs. We essentially take a linear combination of our inputs</span>
<span class="n">z2</span> <span class="o">=</span> <span class="n">w1</span><span class="o">.</span><span class="n">dot</span><span class="p">(</span><span class="n">a1</span><span class="o">.</span><span class="n">T</span><span class="p">)</span>
<span class="c">#apply the tanh function to map the input to values between -1 and 1</span>
<span class="n">a2</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">tanh</span><span class="p">(</span><span class="n">z2</span><span class="p">)</span>
<span class="c">#add a bias unit to activation of the hidden layer.</span>
<span class="n">a2</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">add_bias_unit</span><span class="p">(</span><span class="n">a2</span><span class="p">,</span> <span class="n">column</span><span class="o">=</span><span class="bp">False</span><span class="p">)</span>
<span class="k">if</span> <span class="bp">self</span><span class="o">.</span><span class="n">dropout</span> <span class="ow">and</span> <span class="n">do_dropout</span><span class="p">:</span> <span class="n">a2</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">compute_dropout</span><span class="p">(</span><span class="n">a2</span><span class="p">)</span> <span class="c"># dropout</span>
<span class="c"># compute input of output layer in exactly the same manner.</span>
<span class="n">z3</span> <span class="o">=</span> <span class="n">w2</span><span class="o">.</span><span class="n">dot</span><span class="p">(</span><span class="n">a2</span><span class="p">)</span>
<span class="c"># the activation of our output layer is just the softmax function.</span>
<span class="n">a3</span> <span class="o">=</span> <span class="bp">self</span><span class="o">.</span><span class="n">softmax</span><span class="p">(</span><span class="n">z3</span><span class="p">)</span>
<span class="k">return</span> <span class="n">a1</span><span class="p">,</span> <span class="n">z2</span><span class="p">,</span> <span class="n">a2</span><span class="p">,</span> <span class="n">z3</span><span class="p">,</span> <span class="n">a3</span>
</code></pre>
</div>
<p>In order to actually compute the dropout, we drop each activation independently with probability p, by sampling a binary mask from a binomial distribution. The probability p is yet another hyperparameter that must be tuned:</p>
<div class="language-python highlighter-rouge"><pre class="highlight"><code><span class="k">def</span> <span class="nf">compute_dropout</span><span class="p">(</span><span class="bp">self</span><span class="p">,</span> <span class="n">activations</span><span class="p">,</span> <span class="n">p</span><span class="o">=</span><span class="mf">0.5</span><span class="p">):</span>
<span class="s">"""Sets a proportion p of the activations to zero"""</span>
<span class="n">mult</span> <span class="o">=</span> <span class="n">np</span><span class="o">.</span><span class="n">random</span><span class="o">.</span><span class="n">binomial</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">1</span><span class="o">-</span><span class="n">p</span><span class="p">,</span> <span class="n">size</span> <span class="o">=</span> <span class="n">activations</span><span class="o">.</span><span class="n">shape</span><span class="p">)</span>
<span class="n">activations</span><span class="o">*=</span><span class="n">mult</span>
<span class="k">return</span> <span class="n">activations</span>
</code></pre>
</div>
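<p>A common variant, not used in the implementation above, is “inverted” dropout: the surviving activations are rescaled by <script type="math/tex">1/(1-p)</script> at training time, so the expected value of each activation is unchanged and no rescaling is needed at test time. A sketch of this alternative:</p>

```python
import numpy as np

def inverted_dropout(activations, p, seed=0):
    """Drop each activation with probability p, rescaling survivors by 1/(1-p)
    so the expected value of each activation is unchanged."""
    rng = np.random.RandomState(seed)
    mask = rng.binomial(1, 1 - p, size=activations.shape) / (1 - p)
    return activations * mask

a = np.ones(1000)
out = inverted_dropout(a, p=0.5)
# roughly half the entries are 0, the rest are 2.0; the mean stays near 1.0
```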
<p>With these modifications, our neural network is less prone to overfitting and generalizes better. The full source code for the neural network can be found <a href="https://github.com/rohan-varma/neuralnets/blob/master/neuralnetwork/NeuralNetwork.py">here</a>, along with an <a href="https://github.com/rohan-varma/neuralnets/blob/master/neuralnetwork/NeuralNetDemo.ipynb">IPython notebook</a> demonstrating the network on the MNIST dataset.</p>
<p><strong>References</strong></p>
<p>[1] <a href="http://yann.lecun.com/exdb/mnist/">The MNIST Database of Handwritten Digits</a></p>
<p>[2] <a href="https://blog.dbrgn.ch/2013/3/26/perceptrons-in-python/">Programming a Perceptron in Python</a> by Danilo Bargen</p>
<p>[3] <a href="http://cs231n.github.io/linear-classify/">Stanford CS 231N</a></p>
<p>[4] <a href="http://ufldl.stanford.edu/tutorial/unsupervised/Autoencoders/">Stanford Deep Learning Tutorial</a></p>
<p>[5] <a href="http://web.cs.ucla.edu/~ameet/teaching/winter17/cs260/lectures/lec09.pdf">Ameet Talwalkar, UCLA CS 260</a></p>
<p>[6] <a href="http://www.cs.toronto.edu/~rsalakhu/papers/srivastava14a.pdf">Srivastava, Hinton, et. al, Dropout: A simple way to prevent Neural Networks from Overfitting</a></p>
<p>[7] <a href="https://github.com/rasbt/python-machine-learning-book/blob/master/code/ch12/ch12.ipynb">Sebastian Raschka, Python Machine Learning, Chapter 12 Neural Networks</a> for code samples</p>Recently, I spent some time writing out the code for a neural network in Python from scratch, without using any machine learning libraries. It proved to be a pretty enriching experience and taught me a lot about how neural networks work, and what we can do to make them work better. I thought I’d share some of my thoughts in this post.Building and testing an API with Express, Mongo, and Chai2017-01-03T00:00:00+00:002017-01-03T00:00:00+00:00http://rohan-varma.github.io/Express-API<p>Recently, I’ve been going through the Express, Mongoose, and Chai docs in order to help build out and test an API that’s going to be used for ACM Hack, a committee of UCLA’s CS club that focuses on teaching students new technologies and frameworks, as well as fostering/building an environment of hackers and makers at UCLA. We’re completely revamping Hack for the next quarter with regular events, projects, and additional content in terms of blog posts and tutorials for our users. To do this, we needed to revamp the Hack website.</p>
<p>Specifically, a few backend tasks were required, in the form of creating a functional API to support the needs of our front-end developers and users:</p>
<ul>
<li>Create, update, get, and delete Events (an Event, for example, could be an “Android Workshop Session”)</li>
<li>Create, update, get, and delete Showcase Projects (these are projects that our Hack members submit to us; we showcase the coolest and most innovative ones)</li>
<li>Secure this API through the use of tokens, to make sure that requests cannot be spammed.</li>
<li>Create an email list API endpoint that allows users to subscribe to our mailing list, which notifies them about new events and important updates.</li>
<li>Create Mongoose schemas for all of the above data types.</li>
</ul>
<h3 id="tools-used">Tools Used</h3>
<p>On the backend, we decided to use MongoDB for our database, Express.js for our web framework, and Mocha/Chai for unit tests. The first order of business was to create database schemas for all of the above data types. We used <code class="highlighter-rouge">mongoose</code> to interact with our MongoDB database. <a href="http://mongoosejs.com/index.html">Mongoose</a> allows us to define object models that we can save and retrieve from our database. From the <a href="http://mongoosejs.com/docs/api.html">MongooseJS docs</a>, models are compiled from their schema definitions and represent specific documents in our database. The models also handle document creation and retrieval.</p>
<p>To take the example of creating our mailing list API endpoint, it would be useful to have an email schema that contains both the user’s email address as well as the user’s name. Moreover, we’d like to be able to retrieve all emails in a single request. Here’s the schema that we defined for emails:</p>
<script src="https://gist.github.com/rohan-varma/1cde65d7e093ddfc24d048a28dcc4af0.js"></script>
<p>We defined a <code class="highlighter-rouge">getAll</code> function in our schema to support querying for the entire mailing list. From the MongooseJS docs, each model has <code class="highlighter-rouge">find</code>, <code class="highlighter-rouge">findById</code>, <code class="highlighter-rouge">findOne</code> and a few other useful functions that we can use to retrieve particular documents. We primarily used the <code class="highlighter-rouge">find</code> function, that has a few interesting use cases:</p>
<script src="https://gist.github.com/rohan-varma/20889e90b5bc7f7d348d214753397a05.js"></script>
<p>We used the latter to return all email documents, thus providing us with our mailing list.</p>
<p>Next, we created a <code class="highlighter-rouge">mongoose</code> instance and connected it to MongoDB. There are several ways to create your own MongoDB instance, a popular choice being <a href="https://mlab.com">MongoLab</a>. We also exported our schemas so that they can be instantiated in other areas of our application, namely, in our API where these models will be created and accessed. The following code connects the <code class="highlighter-rouge">mongoose</code> instance and exports the schemas:</p>
<script src="https://gist.github.com/rohan-varma/ad8eb415c940d359e31159fc6ee4d327.js"></script>
<h3 id="defining-our-api-endpoint-with-express">Defining Our API Endpoint with Express</h3>
<p>The next step was to set up the Express framework and begin to define routes and endpoints for our application. <a href="http://expressjs.com/">Express</a> is a minimal web framework that is essentially composed of two things: routing and middleware functions. At a high level, <a href="https://expressjs.com/en/guide/routing.html">routing</a> defines endpoints for your application that can be accessed to perform certain actions (i.e., GET or POST certain data). In other words, it defines the structure that is used for interaction with the backend of your web app. An Express route essentially maps a URL to a specific set of functions, called <a href="https://expressjs.com/en/guide/writing-middleware.html">middleware functions</a>. Middleware functions are quite powerful, and are capable of the following actions:</p>
<ul>
<li>Execute any code on the server</li>
<li>Modify the request (req) and response (res) object</li>
<li>Access the next middleware function on the stack, denoted by <code class="highlighter-rouge">next()</code></li>
<li>End the API call.</li>
</ul>
<p>For example, we can create a route for obtaining and sending data to our mailing list. To do this, we will create a router that maps the URL <code class="highlighter-rouge">/api/v1/email/:email?</code> to a set of functions. The last part of the URL, <code class="highlighter-rouge">:email?</code>, is an optional URL parameter. First, we can define middleware functions for this URL, which will also take care of the behavior of the endpoint without the optional argument:</p>
<script src="https://gist.github.com/rohan-varma/5ff1f324e9524332468f77ec9233a4c1.js"></script>
<p>In other files in our <code class="highlighter-rouge">api</code> directory of our application, we can tell Express to use certain routers for specific API endpoints. This way, routers can be composed: the <code class="highlighter-rouge">/api</code> endpoint can have routes for each API version, and each API version can have routes for its several endpoints that access data such as the mailing list or upcoming events:</p>
<div class="language-javascript highlighter-rouge"><pre class="highlight"><code><span class="c1">//require the routers implemented for each data type</span>
<span class="nx">router</span><span class="p">.</span><span class="nx">use</span><span class="p">(</span><span class="s1">'/event'</span><span class="p">,</span> <span class="nx">require</span><span class="p">(</span><span class="s1">'./event'</span><span class="p">).</span><span class="nx">router</span><span class="p">);</span>
<span class="nx">router</span><span class="p">.</span><span class="nx">use</span><span class="p">(</span><span class="s1">'/email'</span><span class="p">,</span> <span class="nx">require</span><span class="p">(</span><span class="s1">'./email'</span><span class="p">).</span><span class="nx">router</span><span class="p">);</span>
<span class="nx">router</span><span class="p">.</span><span class="nx">use</span><span class="p">(</span><span class="s1">'/showcase'</span><span class="p">,</span> <span class="nx">require</span><span class="p">(</span><span class="s1">'./showcase'</span><span class="p">).</span><span class="nx">router</span><span class="p">);</span>
<span class="nx">module</span><span class="p">.</span><span class="nx">exports</span> <span class="o">=</span> <span class="p">{</span><span class="nx">router</span><span class="p">};</span>
</code></pre>
</div>
<div class="language-javascript highlighter-rouge"><pre class="highlight"><code><span class="c1">//require routers for each version of the API implemented</span>
<span class="kd">const</span> <span class="nx">router</span> <span class="o">=</span> <span class="nx">require</span><span class="p">(</span><span class="s1">'express'</span><span class="p">).</span><span class="nx">Router</span><span class="p">();</span>
<span class="nx">router</span><span class="p">.</span><span class="nx">use</span><span class="p">(</span><span class="s1">'/v1'</span><span class="p">,</span> <span class="nx">require</span><span class="p">(</span><span class="s1">'./v1'</span><span class="p">).</span><span class="nx">router</span><span class="p">);</span>
<span class="nx">module</span><span class="p">.</span><span class="nx">exports</span> <span class="o">=</span> <span class="p">{</span><span class="nx">router</span><span class="p">};</span>
</code></pre>
</div>
<p>With this setup, access to our application’s data was organized into several different API endpoints. Next, we had to actually implement each middleware function for each of our API endpoints. To do this, we had to think about our API’s design at a granular level: what fields will we require for particular requests? Which requests will need token authentication? What will the response body look like in the case of success and in the case of failure?</p>
<p>We decided that our response objects will have two high level fields: <code class="highlighter-rouge">success</code>, a boolean value that indicates the status of the request, and <code class="highlighter-rouge">errors</code>, a string that indicates the errors (if any) that were encountered during the request (such as an invalid ID or unauthorized token). Here’s an example implementation of a <code class="highlighter-rouge">get</code> request:</p>
<script src="https://gist.github.com/rohan-varma/7d045f555f659f92f9bf394fbf2d7247.js"></script>
<p>As indicated above, we can require certain requests to carry a valid <code class="highlighter-rouge">token</code> in order to return successfully. We also pass an anonymous function taking two parameters into the <code class="highlighter-rouge">getAll</code> function defined in our Email model. As discussed previously in the email schema's implementation, <code class="highlighter-rouge">getAll</code> retrieves all emails and then calls the provided callback; in this case, that callback returns a response object back to the user.</p>
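<p>As a minimal sketch (not our exact implementation), such a token-checked handler could look like the following. Here <code class="highlighter-rouge">isValidToken</code> and the hard-coded email list are illustrative stand-ins for our real token check and the Email model's <code class="highlighter-rouge">getAll</code>:</p>

```javascript
// Sketch of a GET handler enforcing the {success, errors} response shape.
// isValidToken and the hard-coded emails are placeholders for illustration.
const isValidToken = (token) => token === 'valid-token';

const getEmails = (req, res) => {
  if (!isValidToken(req.body.token)) {
    // Unauthorized request: indicate failure and return no data
    return res.json({ success: false, errors: 'invalid token', emails: [] });
  }
  // The real handler would call Email.getAll(...) and respond in its callback
  return res.json({ success: true, errors: '', emails: ['member@ucla.edu'] });
};
```

<p>In an Express app, <code class="highlighter-rouge">getEmails</code> would be registered on a router with something like <code class="highlighter-rouge">router.get('/', getEmails)</code>.</p>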
<h3 id="testing-the-api-using-mocha-and-chai">Testing the API using Mocha and Chai</h3>
<p>Next, we moved on to testing our API endpoints to make sure they work well, especially in edge cases such as malformed or unauthorized requests. At first, we manually tested our API using <a href="https://www.getpostman.com/">Postman</a>, which is a useful tool for quickly querying your endpoint to make sure it works correctly. However, as our API and overall application began to change rapidly and increase in size, we decided to use unit testing in order to make sure that our core functionality doesn’t break as a result of an erroneous commit.</p>
<p>Unit tests let us automatically detect problems in our codebase as they happen, and requiring every test to pass during the build step ensures we don’t push a broken build. We used two JavaScript unit testing libraries: <a href="https://mochajs.org/">Mocha.js</a>, which actually runs the unit tests, and <a href="http://chaijs.com/">Chai.js</a>, which contains several useful helper functions for writing testing code. Using a few more add-ons such as chai-http (to create and send HTTP requests) and Chai’s should interface (to write clean assert statements), we can efficiently create a testing setup for our API.</p>
<p>First, we describe a test and what it should do, and have an anonymous function running the actual test. The test for an API makes a request to that endpoint with some data, and then we verify that the response object looks like it should. As an example, to test our email API endpoint, we did the following:</p>
<ul>
<li>Create a valid GET request with a valid token in the body. Verify that the response object contains the relevant status fields and returns the mailing list.</li>
<li>Create an invalid GET request that is missing a valid token. Verify that the response object indicates failure and provides no emails.</li>
<li>Create a valid POST request that has a body indicating the user’s name and email address. Verify that the response object indicates that the request executed successfully.</li>
<li>Create a valid POST request that has a body that is missing optional fields. Ensure that missing these optional fields doesn’t cause the request to fail.</li>
</ul>
<p>Here’s an example of a single test case:</p>
<script src="https://gist.github.com/rohan-varma/aaf8f1f74633334e5e6f6b95072bd07d.js"></script>
<p>To easily run our tests, we just need to add the line <code class="highlighter-rouge">"test": "mocha"</code> to our <code class="highlighter-rouge">package.json</code> file. Then, the unit tests can be run with a single command: <code class="highlighter-rouge">npm test</code>. Chai and Mocha allow the developer to create and define tests so that the end result of running the tests is descriptive of what tests were run, and how they should behave:</p>
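<p>For reference, the relevant part of the <code class="highlighter-rouge">package.json</code> might look like this (other fields omitted):</p>

```json
{
  "scripts": {
    "test": "mocha"
  }
}
```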
<p><img src="https://raw.githubusercontent.com/rohan-varma/rohan-blog/master/images/chaitest.png" alt="chai-test" title="unit tests" /></p>
<p>And that’s it! We now have a well-organized, reliable, and reusable setup for creating and testing a robust API. In the coming months, we hope to expand on this and push out even more interesting features for UCLA’s CS community.</p>
<h3 id="projectcode-contributors">Project/Code Contributors:</h3>
<ul>
<li><a href="https://github.com/nkansal96">Nikhil Kansal</a></li>
<li><a href="https://github.com/yvonneCh">Yvonne Chen</a></li>
<li><a href="https://github.com/hsykwon">Justin Liu</a></li>
<li><a href="https://github.com/akhilnadendla">Akhil Nadendla</a></li>
</ul>Recently, I’ve been going through the Express, Mongoose, and Chai docs in order to help build out and test an API that’s going to be used for ACM Hack, a committee of UCLA’s CS club that focuses on teaching students new technologies and frameworks, as well as fostering/building an environment of hackers and makers at UCLA. We’re completely revamping Hack for the next quarter with regular events, projects, and additional content in terms of blog posts and tutorials for our users. To do this, we needed to revamp the Hack website.Training Production-Grade Machine Learning Pipelines2016-10-01T00:00:00+00:002016-10-01T00:00:00+00:00http://rohan-varma.github.io/ML-Production<p>A few thoughts on how machine learning models can be scaled, stored, and used in production applications.</p>
<p>Choosing, training, and testing the right machine learning classifier is a difficult task: you have to preprocess
and analyze your dataset’s features, possibly extract new features, tune hyperparameters, and perform cross-validation, just to name a few components of a typical machine learning problem.
After you’ve trained and tested a reliable classifier, it’s ready to be deployed to serve new predictions at scale.
These machine learning systems, trained on massive amounts of data from a variety of sources, can be hard to maintain and scale up. This post contains a few of my thoughts on deploying a machine learning architecture, specifically using Amazon Web Services.</p>
<h3 id="the-multi-model-architecture">The Multi-Model Architecture</h3>
<p>Our machine learning system has to be capable of a few different tasks:</p>
<ul>
<li>It needs to efficiently store data, as well as pull data from several different sources.</li>
<li>It should be capable of automatically re-training and testing itself. Since new data is always flowing to our system, it’s probably not a good idea to train our model only once on an initial dataset.</li>
<li>The time-consuming training phase should occur offline. When the model is trained, it should be deployed such that any arbitrary event can trigger it.</li>
<li>A user-friendly interface is essential for developers to manage the training, testing, and deployment phases of the machine learning system.</li>
</ul>
<p>For the above reasons, I’ve found the tools and infrastructure offered by AWS to be very helpful. Specifically, I’ll be talking about how we can use EC2, RDS, S3, and Lambda to build out a production-grade architecture.</p>
<h3 id="the-architecture">The Architecture</h3>
<p>Our architecture is composed of many pieces that interact with each other to train, deploy, and store our machine learning models. Here’s an overview of how our architecture could work, with details to follow:</p>
<p><img src="https://raw.githubusercontent.com/rohan-varma/rohan-blog/master/images/model.png" alt="model" /></p>
<p>Let’s review this model piece by piece.</p>
<h3 id="storage-components">Storage Components</h3>
<p>This model uses two storage components: RDS and S3. RDS (Relational Database Service) is a relational database hosted in the cloud, and acts as our data warehouse: we can efficiently query for data when we are testing or training our model. S3 (Simple Storage Service) will store our machine learning models as serialized data transfer objects. We’ll send these objects to other components when they need to be used or updated. Here’s how a serializable neural network object could be represented, using C#’s <code class="highlighter-rouge">DataContract</code> paradigm:
<script src="https://gist.github.com/rohan-varma/92b6a07db23399cfdb98f348cca9370c.js"></script></p>
<h3 id="offline-training">Offline Training</h3>
<p>Training highly accurate machine learning algorithms with a lot of data can take a really long time. The training phase should occur offline (i.e., separate from our application’s use of it) and on separate hardware. This is because training is typically a CPU/GPU-intensive process, and dedicated hardware can result in faster training times, as well as separating the training concern from your application. Amazon EC2 (Elastic Compute Cloud) provides compute power on the cloud as a service - you can recruit new instances when you need them, and terminate them when finished (such as when all your models are trained). EC2 allows you to quickly scale your compute resources and configure additional instances as needed.</p>
<p>We can delegate the process of training our machine learning model to EC2. EC2 will be responsible for pulling data from RDS, training a model, testing and validating it, and sending that model to be stored in S3. Additionally, we’ll need to retrain our model as new data becomes available. To do this, we can use a popular queue-based paradigm to manage the training jobs we need to get done - this is the “Training Request Queue” in our model above. Requests for training or re-training a model can be generated by our application when enough new data becomes available. Here’s what a serializable request object might look like:</p>
<script src="https://gist.github.com/rohan-varma/ad7306b3628a98db712d2b504c7d15fa.js"></script>
<p>These requests are lined up in a queue that a pool of EC2 instances can pull from. Then, an instance can parse the training request, which involves obtaining the needed data from RDS and information about the particular type of classifier required. After training, the instance sends the new object to S3, and is ready to pull another training request. If there are no more training requests, we can easily terminate the instance so as to not waste compute power.</p>
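<p>To make that loop concrete, here's a toy sketch of the worker's logic. The in-memory array and the stubbed train/store steps stand in for the real queue service, the RDS queries, and the S3 upload:</p>

```javascript
// Toy sketch of the training worker loop. The array stands in for the real
// request queue; training and storage steps are stubbed out with comments.
const trainingQueue = [
  { modelId: 'model-1', classifierType: 'neural-net' },
];

const processNextRequest = (queue) => {
  const request = queue.shift();
  if (!request) {
    return null; // queue drained: the EC2 instance can terminate
  }
  // 1. Pull the training data for this request from RDS (stubbed)
  // 2. Train and cross-validate the requested classifier type (stubbed)
  // 3. Serialize the trained model and store it in S3 (stubbed)
  return { modelId: request.modelId, status: 'trained' };
};
```

<p>A worker instance would simply call <code class="highlighter-rouge">processNextRequest</code> until it returns null, then shut down.</p>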
<h3 id="making-predictions-at-scale-with-lambda">Making Predictions at Scale with Lambda</h3>
<p>We’ve discussed storing the relevant data and objects we need, as well as training our classifier using EC2. Now, it’s time to use our trained classifiers to serve prediction requests at scale. Lambda is a great option for this. Lambda employs a serverless architecture - you can run code without having to manage any servers or a backend service. All you have to do is upload your code and define when it should be executed, and Lambda will take care of the compute resources needed to run and scale your code.</p>
<p>Our Lambda function can simply be the relevant prediction function from our trained machine learning classifier - a function that takes our classifier’s weights, applies them to the input features, and returns the predicted label. It’ll be responsible for loading the serialized model from S3, deserializing it, and outputting the prediction. If we’re training several different machine learning classifiers, we can deploy independent Lambda functions and invoke the relevant one. This way, each function represents a single model that solves a single problem.</p>
<p>Along with writing the code for our function, we’ll have to define <code class="highlighter-rouge">triggers</code> that invoke our function. These can be nearly anything - API requests, updates from S3, or explicit calls. This makes it easy to turn our machine learning applications into several reusable microservices.</p>
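<p>As an illustrative sketch, a Lambda-style prediction function for a toy linear model might look like the following. The JSON string stands in for the serialized model that would really be fetched from S3, and the field names (<code class="highlighter-rouge">weights</code>, <code class="highlighter-rouge">bias</code>, <code class="highlighter-rouge">features</code>) are made up for the example:</p>

```javascript
// Sketch of a prediction handler for a toy linear classifier. The JSON
// string below stands in for a serialized model object fetched from S3.
const serializedModel = JSON.stringify({ weights: [0.5, -0.25], bias: 0.1 });

const handler = async (event) => {
  const model = JSON.parse(serializedModel); // deserialize the stored model
  // Apply the model's weights to the input features and add the bias term
  const score = model.weights.reduce(
    (sum, w, i) => sum + w * event.features[i],
    model.bias
  );
  return { label: score > 0 ? 1 : 0, score };
};
// In a Lambda deployment this would be exported: exports.handler = handler;
```

<p>Each such function is then wired up to whatever triggers should invoke it.</p>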
<p>And that’s it! Having a well-defined machine learning infrastructure to use in production makes it easier to scale up, encapsulate different tasks, and quickly track problems when something’s not working. There’s definitely a lot more to doing machine learning at scale well - such as extracting the right features, preprocessing your dataset, and choosing the right classifier for the task. Thanks for reading!</p>A few thoughts on how machine learning models can be scaled, stored, and used in production applications.Working With React and Flux2016-09-20T00:00:00+00:002016-09-20T00:00:00+00:00http://rohan-varma.github.io/React-Flux<p><img src="http://no-kill-switch.ghost.io/content/images/2015/06/react.png" alt="React and Flux" /></p>
<p>Recently, I’ve been learning a lot about React and Flux while developing a full-stack web application.
<a href="https://facebook.github.io/react/">React</a> is a JavaScript library for building user interfaces, and revolves around the idea of writing your components
in a very declarative way. It can be thought of as the “View” portion of the Model-View-Controller pattern.
<a href="https://facebook.github.io/flux/">Flux</a> is a pattern also developed by Facebook for building web interfaces - it utilizes
the concept of unidirectional data flow, which complements React’s components quite well. In this post, we’ll take a quick
look at how React and Flux can work together to create the components of a dynamic, responsive web application. I’ve learned a lot of this through the excellent <a href="https://facebook.github.io/react/docs/tutorial.html">React Tutorial</a> and applied my knowledge through following along a <a href="http://sahatyalkabov.com/create-a-character-voting-app-using-react-nodejs-mongodb-and-socketio/">tutorial to create a full-stack app</a>, the latter being the inspiration for this post.</p>
<h3 id="react-overview">React Overview</h3>
<p>React is a powerful UI library developed at Facebook that uses an innovative diff algorithm to efficiently re-render components when
data changes. With React, we don’t interact with the DOM directly - instead, React uses the concept of a Virtual DOM, which is just an abstract,
lightweight representation of the actual DOM. This virtual DOM can be manipulated and then synced to the real DOM tree. A major advantage of React
is that this is done in an efficient way through the diffing algorithm used under the hood. The algorithm calculates the minimum number of elements
it needs to update, and then efficiently re-renders the component by only applying these changes to the actual DOM. Calculating the diff between two trees (or more specifically, the “edit distance” between two trees) is an O(n^3) problem, but React uses heuristics based on a few practical use-case assumptions to bring it down to O(n). For more on that, check out the <a href="https://facebook.github.io/react/docs/reconciliation.html">Docs</a>.</p>
<p>React also has a few other notable features that make it pretty useful:</p>
<ul>
<li><em>Server-side rendering of components</em>: Since React doesn’t require the actual DOM tree as a dependency, you can render your components on the server as opposed to the client-side, and then just send the resulting HTML instead of having the client download and execute additional JavaScript. This could reduce perceived page load times.</li>
<li><em>Declarative style</em>: components and elements allow you to write your component’s render() function in a declarative way.</li>
<li><em>Reusability and composability</em>: React’s components naturally lend themselves to be reusable if they are designed well (for example, ensuring each component has only a single responsibility), and are therefore easy to compose with other components to quickly build complex user interfaces.</li>
</ul>
<h3 id="flux-overview">Flux Overview</h3>
<p>Flux is a pattern that complements React and the idea of unidirectional data flow. It’s used internally at Facebook and is commonly paired with React. It’s composed of four components: Actions, Dispatcher, Store, and Controller Views, which manage the flow of data through an application and define what picks it up along the way. There are many implementations of Flux, and the one I’ve been using is Alt.js.</p>
<h2 id="react--flux-example">React + Flux Example</h2>
<p>Let’s create a simple React component, along with actions and a store for it. The store will be responsible for listening for actions and updating the state of our component accordingly. We’ll subscribe our React component to the store so that it knows about changes in the store, and can update its own state accordingly. Also, we’ll define a few actions that fetch data and notify the store about whether the data fetch was successful or not. Let’s get started with these actions first, which are placed into a file called <code class="highlighter-rouge">MyComponentActions.js</code>:</p>
<script src="https://gist.github.com/rohan-varma/c76af8ce80cc1e99597c3521339a8aa4.js"></script>
<p>Here, we’ve defined three actions, one of which requests data from our backend, and two of which notify our store about the request’s success or failure. Note that we haven’t yet handled these two actions - that’ll be done when we define our handlers in the store. The store will also bind our actions to their handlers - sort of like a mapping from an action to the action handler. We’ll revisit this when we define our <code class="highlighter-rouge">MyComponentStore</code> class. The last line of the above code simply exports our actions so that they can be imported elsewhere.</p>
<h3 id="defining-the-component-store">Defining the Component Store</h3>
<p>Now, we can move on to defining a store for our React component. The store will be responsible for handling the actions we’ve defined and updating the state accordingly, so that our component can listen for state changes. Let’s put this code into a file called <code class="highlighter-rouge">MyComponentStore.js</code>:
<script src="https://gist.github.com/rohan-varma/e580bd6ce605c838e5ed77454d9a540e.js"></script></p>
<p>Here, <code class="highlighter-rouge">bindActions</code> is an Alt function that binds actions to their action handlers, with a specific naming convention. As an example, an action with name <code class="highlighter-rouge">doAction</code> will bind to <code class="highlighter-rouge">onDoAction</code> or just <code class="highlighter-rouge">doAction</code> (but not both). In this case, when our <code class="highlighter-rouge">getMyMembersSuccess</code> action occurs, the code in the handler <code class="highlighter-rouge">onGetMyMembersSuccess</code> will be executed and the members field of the state will be updated. With our actions and store defined, we’re ready to define our actual React component.</p>
<h3 id="creating-the-react-component">Creating the React Component</h3>
<p>Our React component will fire off actions (such as, in our example, getting members from the backend) and listen to the store for state changes. When our component is initially rendered, it sets its initial state to the store’s state, and also subscribes a listener to the store to listen for changes so that its state can be updated accordingly (the <code class="highlighter-rouge">OnChange</code> function in the code below). Additionally, we can remove our store listener when the component is unmounted. Here’s some basic boilerplate code that could be used to design this component:</p>
<script src="https://gist.github.com/rohan-varma/719c4d36d1660710fc20e87e379d5be2.js"></script>
<p>And that’s it! Hopefully, this was a good example to introduce how React and the Flux pattern work together to achieve unidirectional data flow, and how firing and handling actions update our component’s store. <a href="http://sahatyalkabov.com/create-a-character-voting-app-using-react-nodejs-mongodb-and-socketio/">This full-stack tutorial</a> is an excellent resource for learning more about React, Flux, and using Node to develop a full-stack app.</p>
<h3 id="sources">Sources</h3>
<ul>
<li><a href="https://www.fullstackreact.com/articles/react-tutorial-cloning-yelp/">Cloning Yelp with React</a></li>
<li><a href="http://sahatyalkabov.com/create-a-character-voting-app-using-react-nodejs-mongodb-and-socketio/">Full-stack React/Node/Mongo tutorial</a></li>
<li><a href="https://facebook.github.io/react/docs/getting-started.html">Getting started with React</a></li>
<li><a href="https://scotch.io/tutorials/getting-to-know-flux-the-react-js-architecture">Getting to Know Flux</a></li>
<li><a href="http://no-kill-switch.ghost.io/my-adventure-with-react-flux-setting-sails/">Adventures With React and Flux</a></li>
</ul>Exploring GraphQL2016-08-30T00:00:00+00:002016-08-30T00:00:00+00:00http://rohan-varma.github.io/Exploring-GraphQL<p>Taking a look at how GraphQL can improve upon the REST paradigm.</p>
<p><img src="https://raw.githubusercontent.com/rohan-varma/rohan-blog/gh-pages/images/graphql.png" alt="GraphQL" title="GraphQL" /></p>
<p>In this post, we’ll take a look at <a href="https://facebook.github.io/graphql/">GraphQL</a>, a query language created by Facebook which was open-sourced in July 2015. We’ll learn about the querying powers that come with it, the benefits of using it, and compare it to traditional methods of querying RESTful APIs.</p>
<h3 id="what-is-graphql">What is GraphQL?</h3>
<p>GraphQL is a data query language that’s designed with the data needs of the client-side in mind. It presents an alternative paradigm to REST, and we’ll look at the differences between GraphQL and the REST paradigm throughout this post. GraphQL, at its core, consists of a type system that describes what data is available on the server-side, as well as a query language for the client to ask for the data it needs.</p>
<h3 id="exploring-graphql">Exploring GraphQL</h3>
<p>The best way to learn about GraphQL is to get our hands dirty with it. In this post, we’ll implement a basic GraphQL Schema for Pokemon Go, consisting of two types: Pokemon and Moves. We’ll simply store our data in JSON files to simulate our database (normally, the data being queried for would lie in a database, a layer in your backend, or in another service of your application). At the end, we should be able to query for some Pokemon and their moves. In addition, we’ll have learned how to define GraphQL types, queries, and schemas, as well as its advantages over a traditional REST approach.</p>
<h3 id="why-should-you-use-graphql-a-quick-overview">Why should you use GraphQL? A quick overview</h3>
<ul>
<li>GraphQL is <em>client-centric</em>: With GraphQL, the client can ask the server for exactly the data it needs. It’s driven by the requirements of the front-end, rather than the server defining the data returned.</li>
<li><em>Strong-typing and introspection</em>: With GraphQL, it’s really easy to make sure a query is syntactically correct. Better yet, you can query the GraphQL type system itself, meaning that GraphQL is self-documenting: no need to write pages and pages of documentation for your API endpoints. We’ll see this in action with the schema we are about to create.</li>
</ul>
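<p>As a taste of that introspection, a client can ask a GraphQL server to describe its own schema. For instance, a standard introspection query along these lines lists the fields available on the root query type:</p>

```graphql
{
  __schema {
    queryType {
      fields {
        name
        description
      }
    }
  }
}
```

<p>Against the schema we build below, the response would enumerate the query fields (such as our pokemon and move fields), which is why a GraphQL API is largely self-documenting.</p>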
<h3 id="getting-started">Getting Started</h3>
<p>Note: The full code for this tutorial is available as a <a href="https://github.com/rohan-varma/pokemongo-graphQL">github repository</a>. If you want to be able to create and execute queries to see them in action, I recommend you to take a moment and set it up. Alternatively, you could use <a href="https://pokemongo-graphql-senopusxhk.now.sh/graphql">this link</a> to view the code running live (hosted by <a href="http://now.sh">now.sh</a>, which by the way, is excellent for deploying your Node.js projects).</p>
<p>First, let’s create our types. Using GraphQL’s type system, we can describe the types of objects that can be returned by our server. Our first type will be a <code class="highlighter-rouge">moveType</code>, and it will be responsible for resolving all of our queries for moves. We’ll use the <a href="https://github.com/graphql/graphql-js">JS implementation</a> of GraphQL to describe our types:</p>
<script src="https://gist.github.com/rohan-varma/fbe7eae88afff97f5a7dd266974431bb.js"></script>
<p>Next, let’s also create a basic <code class="highlighter-rouge">pokemonType</code> to represent some data about Pokemon: namely, their id, name, and thumbnail (which for now will just be a dummy string).</p>
<script src="https://gist.github.com/rohan-varma/74d0aa483d6cf24fa432070daae10de3.js"></script>
<p>Now, we can define a <code class="highlighter-rouge">Query</code>, which is another GraphQL object type. Our Query type will be responsible for communicating to the client what data is available on the server, and what arguments can be passed in to retrieve certain data (in our case, the <code class="highlighter-rouge">id</code> of a move or a pokemon). Moreover, it will implement a <code class="highlighter-rouge">resolve</code> function, which fulfills requests from our client. In our case, it’ll be pretty simple: just use the <code class="highlighter-rouge">id</code> argument passed in as a key to get the pokemon or move from our JSON files.</p>
<script src="https://gist.github.com/rohan-varma/1f897987fa4d965513cd8a2e5024b4e4.js"></script>
<p>We’ve now got a <code class="highlighter-rouge">Query</code> that defines the two fields available on the server, and tells the client what arguments can be passed in to query for the data the client may need. Moreover, we’ve implemented <code class="highlighter-rouge">resolve</code> methods to fulfill these requests. To finish up our schema, all that’s left is to actually define our Schema object and export it:</p>
<script src="https://gist.github.com/rohan-varma/656eb511797902e59a6c8d022722ea61.js"></script>
<p>That’s it! A basic schema is now set up that describes the data available on the server, how to query for it, and how these queries are fulfilled. Let’s take it a bit further, and explore some of the advantages of GraphQL.</p>
<h3 id="comparison-to-the-restful-paradigm">Comparison to the RESTful Paradigm</h3>
<p>The <code class="highlighter-rouge">pokemonType</code> on our server is defined to have an <code class="highlighter-rouge">id</code>, a <code class="highlighter-rouge">name</code>, and a <code class="highlighter-rouge">thumbnail</code>. If you were a developer who needed this data on your front-end, you could make this example <code class="highlighter-rouge">GET</code> request using the REST paradigm:
<code class="highlighter-rouge">GET http://website.com/api/v1/pokemon/1</code>.</p>
<p>Alternatively, if you had implemented a GraphQL Schema for your server data, you could make a GraphQL request, in the form of a query, to your backend. A query is just a string sent to the server, and the returned JSON mirrors the shape of the query, making it easy to predict the shape of the data returned. Here’s an example query to get some pokemon data:</p>
<div class="language-javascript highlighter-rouge"><pre class="highlight"><code><span class="p">{</span>
<span class="nx">pokemon</span><span class="p">(</span><span class="nx">id</span><span class="err">:</span> <span class="s2">"1"</span><span class="p">)</span> <span class="p">{</span>
<span class="nx">name</span>
<span class="nx">thumbnail</span>
<span class="nx">id</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre>
</div>
<p>Both requests would give you back the name of the pokemon, its id, and its thumbnail. In our case, the thumbnail is just a string that actually represents the thumbnail URL, but what if the thumbnail was an image that had to be created on the server, which is a typically expensive, CPU-bound process? If we don’t actually need the thumbnail on our front-end for a certain view, we certainly shouldn’t take time to ask our server for it. In our RESTful paradigm, we’d have to define another API endpoint to send a request to, something like: <code class="highlighter-rouge">GET http://website.com/api/v1/pokemonlightweight/1</code>. You can see how in a large application with a lot of (potentially expensive to ask for) data available on the server, we could end up creating a bunch of different API endpoints, which we would need to maintain, version, and write documentation for.</p>
<p>Fortunately, the GraphQL solution is much simpler: simply change your query:</p>
<div class="language-javascript highlighter-rouge"><pre class="highlight"><code><span class="p">{</span>
<span class="nx">pokemon</span><span class="p">(</span><span class="nx">id</span><span class="err">:</span> <span class="s2">"1"</span><span class="p">)</span> <span class="p">{</span>
<span class="nx">name</span>
<span class="nx">id</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre>
</div>
<p>There. Now we’re not asking for the thumbnail, so our server doesn’t have to spend time creating it. This is the beauty of GraphQL: the client can define exactly what data it needs, and receive exactly that from a single point of access to your data.</p>
<h3 id="graph-query-language">“Graph” Query Language</h3>
<p>Let’s take a look at another advantage GraphQL offers: graph-based querying. As an example, let’s define our <code class="highlighter-rouge">pokemonType</code> to also have a field called <code class="highlighter-rouge">bestFriend</code>, which is another pokemon. In other words, we want a connection to exist from one pokemon to another. In addition, let’s add some information about our pokemon’s favorite move to our server - a connection from a <code class="highlighter-rouge">pokemonType</code> to a <code class="highlighter-rouge">moveType</code>. To do this, we can modify our <code class="highlighter-rouge">pokemonType</code> object created earlier to have a <code class="highlighter-rouge">favoriteMove</code> and <code class="highlighter-rouge">bestFriend</code> field:</p>
<script src="https://gist.github.com/rohan-varma/48bbe9ca6ab23f9049263506595d0d87.js"></script>
<p>Now that we’ve added connections from one pokemon to another and from a pokemon to a move, we can easily query for this data from our front end. Here’s an example query that our schema supports:</p>
<div class="language-javascript highlighter-rouge"><pre class="highlight"><code><span class="p">{</span>
<span class="nx">pokemon</span><span class="p">(</span><span class="nx">id</span><span class="err">:</span><span class="s2">"1"</span><span class="p">)</span> <span class="p">{</span>
<span class="nx">name</span>
<span class="nx">favoriteMove</span> <span class="p">{</span>
<span class="nx">name</span>
<span class="p">}</span>
<span class="nx">bestFriend</span> <span class="p">{</span>
<span class="nx">name</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre>
</div>
<p>And a response to the above query:</p>
<div class="language-javascript highlighter-rouge"><pre class="highlight"><code><span class="p">{</span>
<span class="s2">"data"</span><span class="err">:</span> <span class="p">{</span>
<span class="s2">"pokemon"</span><span class="err">:</span> <span class="p">{</span>
<span class="s2">"name"</span><span class="err">:</span> <span class="s2">"Pikachu"</span><span class="p">,</span>
<span class="s2">"favoriteMove"</span><span class="err">:</span> <span class="p">{</span>
<span class="s2">"name"</span><span class="err">:</span> <span class="s2">"Thunderbolt"</span>
<span class="p">},</span>
<span class="s2">"bestFriend"</span><span class="err">:</span> <span class="p">{</span>
<span class="s2">"name"</span><span class="err">:</span> <span class="s2">"Charmander"</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre>
</div>
<p>So, we’ve asked our server for the name of a certain pokemon, the name of its favorite move, and the name of its best friend, and got back exactly that data. No need to create different API endpoints to get a particular set of data (just add or remove fields in the query!), and no need to use a request’s response to trigger another GET request. For reference, this is what a RESTful solution may have looked like:</p>
<script src="https://gist.github.com/rohan-varma/4b5ec89548cf1f849bc4669cd9f526ca.js"></script>
<p>As you can see, such parsing can get complicated very quickly if your connections are nested deeply - imagine how ugly the above code would get if we asked for the favorite move of a friend of a friend of a friend of a certain pokemon, for example. Of course, this is not the only solution under the RESTful paradigm - an alternative would be to create different API endpoints on your backend that each return a particular set of data. But with GraphQL, you only have <strong>one endpoint to access your data</strong>, and can easily query for connections defined on your server.</p>
<p>Essentially, GraphQL enhances the <strong>separation of concerns</strong> between the front end and the back end. Developers on the front end no longer need to worry about parsing server responses to get the specific set of data they need, and developers on the back end no longer need to worry about creating different API endpoints that retrieve very similar data.</p>
<h3 id="one-last-cool-thing-type-introspection">One last cool thing: Type Introspection</h3>
<p>Using GraphQL’s query syntax, we can ask our server what queries it supports. For example, to see what types are defined on our server, we can use the following query:</p>
<div class="language-javascript highlighter-rouge"><pre class="highlight"><code><span class="p">{</span>
<span class="nx">__schema</span> <span class="p">{</span>
<span class="nx">types</span> <span class="p">{</span>
<span class="nx">name</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre>
</div>
<p>From this result, we can see one of the types is named <code class="highlighter-rouge">pokemon</code>. We can use another query to get additional documentation about the pokemon type:</p>
<div class="language-javascript highlighter-rouge"><pre class="highlight"><code><span class="p">{</span>
<span class="nx">__type</span><span class="p">(</span><span class="nx">name</span><span class="err">:</span><span class="s2">"pokemon"</span><span class="p">)</span> <span class="p">{</span>
<span class="nx">fields</span> <span class="p">{</span>
<span class="nx">name</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre>
</div>
<p>A response would contain the fields this type has:</p>
<div class="language-javascript highlighter-rouge"><pre class="highlight"><code><span class="p">{</span>
<span class="s2">"data"</span><span class="err">:</span> <span class="p">{</span>
<span class="s2">"__type"</span><span class="err">:</span> <span class="p">{</span>
<span class="s2">"fields"</span><span class="err">:</span> <span class="p">[</span>
<span class="p">{</span>
<span class="s2">"name"</span><span class="p">:</span> <span class="s2">"id"</span>
<span class="p">},</span>
<span class="p">{</span>
<span class="s2">"name"</span><span class="p">:</span> <span class="s2">"name"</span>
<span class="p">},</span>
<span class="p">{</span>
<span class="s2">"name"</span><span class="p">:</span> <span class="s2">"thumbnail"</span>
<span class="p">},</span>
<span class="p">{</span>
<span class="s2">"name"</span><span class="p">:</span> <span class="s2">"favoriteMove"</span>
<span class="p">},</span>
<span class="p">{</span>
<span class="s2">"name"</span><span class="p">:</span> <span class="s2">"bestFriend"</span>
<span class="p">}</span>
<span class="p">]</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre>
</div>
<p>The introspection system also allows you to query for additional information such as field descriptions and deprecation status. This makes your endpoint for accessing data self-documenting - you can ask your schema itself all about its queries, types, and fields. Additionally, tools such as <code class="highlighter-rouge">GraphiQL</code> and the Chrome extension <code class="highlighter-rouge">ChromeiQL</code> offer features such as auto-completion and a documentation tab, providing an environment for you to test your schema.</p>
<p>And that’s it! There’s definitely a lot more to learn about GraphQL (such as posting new data via a <a href="https://medium.com/@HurricaneJames/graphql-mutations-fb3ad5ae73c4#.5378parnj">mutation</a>), but hopefully this post has covered the basics. The <a href="https://facebook.github.io/graphql/">GraphQL specification</a> is an excellent place to learn more, and so is the <a href="https://github.com/graphql/graphql-js">reference JS implementation</a>.</p>
<h3 id="sources">Sources:</h3>
<ul>
<li><a href="http://graphql.org/docs/getting-started/">GraphQL docs</a></li>
<li><a href="https://github.com/graphql/graphql-js">JS implementation of GraphQL</a></li>
<li><a href="https://facebook.github.io/graphql/">GraphQL Spec</a></li>
<li><a href="https://speakerdeck.com/jpshelley/learning-graphql-for-mobile">Learning GraphQL for Mobile, by John Shelley</a></li>
<li><a href="https://medium.freecodecamp.com/introduction-to-graphql-1d8011b80159#.guh55srwp">An Introduction to GraphQL, by Guido Schmitz</a></li>
</ul>
<h2 id="applying-neural-networks-to-natural-language-processing-tasks">Applying Neural Networks to Natural Language Processing Tasks</h2>
<p><em>2016-08-11 · <a href="http://rohan-varma.github.io/Neural-NLP">http://rohan-varma.github.io/Neural-NLP</a></em></p>
<p><img src="http://deeplearning.stanford.edu/wiki/images/thumb/8/85/STL_Logistic_Classifier.png/380px-STL_Logistic_Classifier.png" alt="Inputs into a neuron and its output." title="Inputs into a neuron and its output." /></p>
<h3 id="bringing-deep-learning-into-the-field-of-nlp">Bringing Deep Learning into the field of NLP</h3>
<p>Recently, there’s been a lot of advancement in using neural networks and other deep learning algorithms to obtain high performance on a variety of NLP tasks. Traditionally, the bag-of-words model, along with classifiers built on it such as the maximum entropy classifier, has been successfully leveraged to make very accurate predictions on NLP tasks such as sentiment analysis. However, with the advent of deep learning research and its applications to NLP, these methods have been improved upon in primarily two ways: stacking several layers of logistic functions into a neural network, and using unsupervised learning to learn good features as a pre-training step.</p>
<h3 id="how-can-neural-networks-and-other-deep-learning-algorithms-help">How can Neural Networks and other Deep Learning algorithms help?</h3>
<p>At its core, deep learning (and neural networks in particular) is all about giving the computer some data and letting it figure out how to use that data to come up with features and models that accurately represent complex tasks - such as analyzing a movie review for its sentiment. With more common machine learning algorithms, human-designed features are generally used to model the problem, and prediction becomes a task of optimizing weights to minimize a cost function. However, hand-crafting features is time consuming, and these human-made features tend either to become too specific to the problem at hand or to remain incomplete over the entire problem space.</p>
<h3 id="supervised-learning-from-regression-to-a-neural-network">Supervised Learning: From Regression to a Neural Network</h3>
<p>The Max Entropy classifier, commonly abbreviated to Maxent classifier, is a common probabilistic model used in NLP. Given some contextual information from a document (in the form of unigrams, bigrams, and other multisets of features), this classifier attempts to predict a class label (positive, negative, neutral) for it. The same model also appears in neural networks, where it’s known as the softmax layer - the final (and sometimes only) layer in the network, used for classification. So, we can model a single neuron in a neural network as computing the same function as a max entropy classifier:</p>
<p><img src="https://raw.githubusercontent.com/rohan-varma/rohan-blog/gh-pages/images/NLPfirst.png" alt="Inputs into a neuron and its output." title="Inputs into a neuron and its output." /></p>
<p>Here, <em>x</em> is our vector of inputs; the neuron computes the maximum entropy function with parameters <em>w</em> and <em>b</em> and outputs a single result, <em>h</em>.</p>
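<p>As a concrete sketch (with illustrative weights, not ones from any real model), here is the max entropy / softmax computation a neuron like this performs, turning raw scores <em>w<sub>k</sub> · x + b<sub>k</sub></em> into class probabilities:</p>

```javascript
// Dot product of a weight vector with the input vector.
function dot(w, x) {
  return w.reduce((sum, wi, i) => sum + wi * x[i], 0);
}

// Softmax: exponentiate scores and normalize so they sum to 1.
function softmax(scores) {
  const max = Math.max(...scores); // subtract the max for numerical stability
  const exps = scores.map(s => Math.exp(s - max));
  const total = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / total);
}

// Max entropy classification: one weight vector and bias per class.
function maxentClassify(weights, biases, x) {
  const scores = weights.map((w, k) => dot(w, x) + biases[k]);
  return softmax(scores);
}

// Toy example: 2 classes (e.g. positive / negative), 3 input features.
const probs = maxentClassify(
  [[1.0, -0.5, 0.2], [-1.0, 0.5, -0.2]],
  [0.1, -0.1],
  [0.3, 0.7, 0.5],
);
```

<p>The output is a probability distribution over class labels, which is exactly what the softmax layer of a neural network produces.</p>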
<p>Then, a neural network with multiple neurons can simply be thought of as feeding input to several different classification functions at the same time. A given vector of inputs (<em>x</em> in our above picture) is run through many functions (as opposed to a single one), where each neuron represents a different regression function. As a result, we obtain a vector of outputs:</p>
<p><img src="https://raw.githubusercontent.com/rohan-varma/rohan-blog/gh-pages/images/NLP2nd.png" alt="Feeding output vectors to the next layer. " title="Feeding output vectors to the next layer." /></p>
<p>…And you can feed this vector of outputs to another layer of logistic regression functions (or a single function), until you obtain your output, which is the probability that your vector belongs to a certain class:</p>
<p><img src="https://raw.githubusercontent.com/rohan-varma/rohan-blog/gh-pages/images/NLP3rd.png" alt="Output layer of the neural net. " title="Output layer of the neural net." /></p>
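<p>The layered structure described above can be sketched in a few lines. This is a hypothetical two-layer forward pass with made-up weights: a hidden layer of logistic units whose output vector feeds a final classification unit.</p>

```javascript
const sigmoid = z => 1 / (1 + Math.exp(-z));
const dot = (w, x) => w.reduce((s, wi, i) => s + wi * x[i], 0);

// One layer = several "regression functions" applied to the same input:
// each row of W is one neuron's weight vector.
const layer = (W, b, x, activation) =>
  W.map((w, j) => activation(dot(w, x) + b[j]));

const x = [0.5, -1.0, 2.0]; // input vector

// Hidden layer: two logistic neurons produce a vector of outputs...
const h = layer([[0.2, 0.4, -0.1], [-0.3, 0.1, 0.5]], [0.0, 0.1], x, sigmoid);

// ...which is fed to a final single-unit layer for classification.
const out = layer([[1.0, -1.0]], [0.0], h, sigmoid);
```

<p>Stacking more calls to <code class="highlighter-rouge">layer</code> gives a deeper network; the composition of these functions is what the network learns during training.</p>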
<h3 id="applying-neural-networks-to-unsupervised-problems-in-nlp">Applying Neural Networks to Unsupervised Problems in NLP</h3>
<p>In NLP, words and their surrounding contexts are pretty important: a word surrounded by relevant context is valuable, while a word surrounded by seemingly irrelevant context is not very valuable. Each word is mapped to a vector defined by its features (which in turn relate to the word’s surrounding context), and neural networks can be used to learn which features maximize a word vector’s score.</p>
<p>A valuable pre-training step for any supervised learning task in NLP (such as classifying restaurant reviews) would be to generate feature vectors that represent words well - as discussed in the beginning of this post, these features are often human-designed. Instead, a neural network can be used to learn these features.</p>
<p>The input to such a neural network would be a matrix defined by, for example, a sentence’s word vectors. Consider the following phrase and its associated matrix:</p>
<p><img src="https://raw.githubusercontent.com/rohan-varma/rohan-blog/gh-pages/images/NLP4th.png" alt="Each word has a corresponding word vector, resulting in a unique sentence matrix. " title="Each word has a corresponding word vector, resulting in a unique sentence matrix." /></p>
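<p>The word-to-vector lookup can be sketched as follows. The embedding table here is a toy example with arbitrary numbers; a real one would be learned from data.</p>

```javascript
// Hypothetical toy embedding table: each word maps to a 3-dimensional
// feature vector (the numbers are arbitrary, for illustration only).
const embeddings = {
  the: [0.1, 0.0, 0.2],
  cat: [0.9, 0.3, 0.1],
  sat: [0.2, 0.8, 0.4],
};

// A sentence becomes a matrix with one row per word vector.
const sentenceMatrix = 'the cat sat'
  .split(' ')
  .map(word => embeddings[word]);
```

<p>This matrix is what gets passed through the network’s layers during training.</p>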
<p>Our neural network can then be composed of several layers, where each layer sends the previous layer’s output through a function. Training is achieved through backpropagation: taking derivatives using the chain rule with respect to the weights in order to optimize them. From this, the ideal weights that define our function (which is a composition of many functions) are learned. After training, we have a method of extracting the ideal feature vector that a given word is mapped to.</p>
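<p>The chain-rule update described above can be illustrated with a deliberately tiny toy problem: a single weight, a single input, and a squared loss. (The setup is invented for illustration; a real network applies the same rule to every weight in every layer.)</p>

```javascript
// Toy problem: minimize L(w) = (sigmoid(w * x) - y)^2 for one weight w.
const sigmoid = z => 1 / (1 + Math.exp(-z));
const loss = (w, x, y) => (sigmoid(w * x) - y) ** 2;

function gradientStep(w, x, y, lr) {
  const a = sigmoid(w * x);
  // Chain rule: dL/dw = dL/da * da/dz * dz/dw = 2(a - y) * a(1 - a) * x
  const grad = 2 * (a - y) * a * (1 - a) * x;
  return w - lr * grad; // move the weight against the gradient
}

let w = 0;
const before = loss(w, 1.0, 1.0);
for (let step = 0; step < 1000; step++) {
  w = gradientStep(w, 1.0, 1.0, 1.0);
}
const after = loss(w, 1.0, 1.0);
```

<p>After training, the loss has decreased: the weight has been pushed in the direction that makes the prediction closer to the target, which is exactly what backpropagation does at scale.</p>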
<p>This unsupervised neural network is powerful, especially when considered in the context of traditional supervised softmax models. Running this unsupervised network on a large text collection allows input features to be learned rather than human-designed, often yielding better performance when these features are fed into a traditional, supervised neural network for classification.</p>
<h3 id="recursive-neural-networks">Recursive Neural Networks</h3>
<p>Researchers are currently investigating the use of recursive neural networks to learn how sentences break down into tree structures. Such a recursive deep learning network can then learn to map similar sentences into the same vector space, even when they are composed of entirely different words.</p>
<p>If you want to learn about deep learning and neural networks for NLP in detail, I’d highly recommend Stanford’s course on it: <a href="http://cs224d.stanford.edu/">Deep Learning for Natural Language Processing</a>.</p>
<h3 id="sources">Sources</h3>
<p><a href="http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/">Hidden Layer Neural Networks: Deep Learning, NLP, and Representations</a></p>
<p><a href="http://blog.datumbox.com/machine-learning-tutorial-the-max-entropy-text-classifier/">Machine Learning Tutorial: The Max Entropy Text Classifier</a></p>
<p><a href="http://nlp.stanford.edu/courses/NAACL2013/NAACL2013-Socher-Manning-DeepLearning.pdf">Stanford’s Deep Learning Tutorial</a></p>
<p><a href="http://cs224d.stanford.edu/index.html">CS224d: Deep Learning for Natural Language Processing</a></p>