Pattern Recognition and Machine Learning: Chapter 1

1.2 Probability Theory

The multivariate Gaussian is given by

$$\mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma}) = \frac{1}{(2\pi)^{D/2}} \frac{1}{|\boldsymbol{\Sigma}|^{1/2}} \exp\left\{ -\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^{\mathrm{T}} \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right\}$$

where $\mathbf{x}$ is a $D$-dimensional vector of continuous variables, $\boldsymbol{\mu}$ is the $D$-dimensional mean, $\boldsymbol{\Sigma}$ is a $D \times D$ covariance matrix, which is symmetric and positive definite, and $|\boldsymbol{\Sigma}|$ denotes the determinant of the covariance matrix.
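As a quick numerical sanity check, here is a minimal sketch (assuming NumPy and SciPy are available; the values of `mu`, `Sigma`, and `x` are arbitrary) that evaluates this density directly from the formula and compares it against `scipy.stats.multivariate_normal`:

```python
import numpy as np
from scipy.stats import multivariate_normal

def gaussian_pdf(x, mu, Sigma):
    """Evaluate the multivariate Gaussian density N(x | mu, Sigma) from the formula above."""
    D = x.shape[0]
    diff = x - mu
    # Normalization constant: (2*pi)^(D/2) * |Sigma|^(1/2)
    norm_const = (2 * np.pi) ** (D / 2) * np.sqrt(np.linalg.det(Sigma))
    # Quadratic form in the exponent: -(1/2) (x - mu)^T Sigma^{-1} (x - mu)
    exponent = -0.5 * diff @ np.linalg.solve(Sigma, diff)
    return np.exp(exponent) / norm_const

mu = np.array([0.0, 1.0])
Sigma = np.array([[2.0, 0.3],
                  [0.3, 1.0]])
x = np.array([0.5, 0.5])

print(gaussian_pdf(x, mu, Sigma))             # manual formula
print(multivariate_normal(mu, Sigma).pdf(x))  # SciPy reference, should match
```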

Example: Maximum Likelihood with Gaussian Distribution

Given $N$ observations $\mathbf{x} = (x_1, \ldots, x_N)^{\mathrm{T}}$ of a scalar $x$ under the IID assumption, where each $x_n$ is drawn from a Gaussian distribution with mean $\mu$ and variance $\sigma^2$, we can write down the likelihood:

$$p(\mathbf{x} \mid \mu, \sigma^2) = \prod_{n=1}^{N} \mathcal{N}(x_n \mid \mu, \sigma^2)$$

The log-likelihood is then

$$\ln p(\mathbf{x} \mid \mu, \sigma^2) = -\frac{1}{2\sigma^2} \sum_{n=1}^{N} (x_n - \mu)^2 - \frac{N}{2} \ln \sigma^2 - \frac{N}{2} \ln(2\pi)$$
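To make this concrete, here is a small sketch (assuming NumPy and SciPy; the data and parameter values are arbitrary) that evaluates the closed form above and checks it against a per-point sum of `scipy.stats.norm.logpdf`:

```python
import numpy as np
from scipy.stats import norm

def gaussian_log_likelihood(x, mu, sigma2):
    """Log-likelihood of IID scalar data under N(mu, sigma2), using the closed form above."""
    N = x.shape[0]
    return (-np.sum((x - mu) ** 2) / (2 * sigma2)
            - N / 2 * np.log(sigma2)
            - N / 2 * np.log(2 * np.pi))

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=100)

print(gaussian_log_likelihood(x, mu=1.0, sigma2=4.0))
print(norm(loc=1.0, scale=2.0).logpdf(x).sum())  # SciPy reference, should match
```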

Maximizing the log-likelihood with respect to $\mu$ and $\sigma^2$ (setting the partial derivatives to zero) gives us:

$$\mu_{\mathrm{ML}} = \frac{1}{N} \sum_{n=1}^{N} x_n$$

and

$$\sigma^2_{\mathrm{ML}} = \frac{1}{N} \sum_{n=1}^{N} (x_n - \mu_{\mathrm{ML}})^2$$

i.e., the familiar sample mean and (uncorrected) sample variance. If we apply Bessel’s correction and multiply the sample variance by $\frac{N}{N-1}$, then we obtain an unbiased estimate of the variance.
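The following sketch (assuming NumPy; the data are arbitrary) computes both estimates. Note that `np.var` uses `ddof=0` (the ML estimate) by default, while `ddof=1` applies Bessel’s correction:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=1.0, scale=2.0, size=50)

mu_ml = x.mean()                        # mu_ML: sample mean
sigma2_ml = np.mean((x - mu_ml) ** 2)   # sigma^2_ML: uncorrected sample variance

assert np.isclose(sigma2_ml, x.var(ddof=0))          # NumPy's default is the ML estimate
sigma2_unbiased = sigma2_ml * len(x) / (len(x) - 1)  # Bessel's correction
assert np.isclose(sigma2_unbiased, x.var(ddof=1))
```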

Downsides of simple maximum likelihood estimation

  • We can show that while the mean obtained through this estimation is unbiased, the variance is a biased estimate of the true distribution variance.
    • Biased in this context means that $\mathbb{E}[\sigma^2_{\mathrm{ML}}] \neq \sigma^2$.
    • This further means that if you take a large number of datasets (tending towards infinity) and compute $\sigma^2_{\mathrm{ML}}$ for each one, then the difference between the average estimate and the actual parameter will be $\mathbb{E}[\sigma^2_{\mathrm{ML}}] - \sigma^2 = -\frac{\sigma^2}{N}$ (a simulation of this averaging appears after this list).
    • Practically, this doesn’t say much since:
      • It doesn’t make any statement about the difference between a single point estimate and the true parameter.
      • In statistical learning, the underlying parameter is often unknown, so this quantity is impossible to compute.
      • But it does provide some insight into why model averaging and ensemble learning work well.
  • We have $\mathbb{E}[\mu_{\mathrm{ML}}] = \mu$ but $\mathbb{E}[\sigma^2_{\mathrm{ML}}] = \frac{N-1}{N}\sigma^2$, so the maximum-likelihood estimator systematically underestimates the true variance.
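As an empirical check of this bias, here is a minimal simulation sketch (assuming NumPy; the sample size, dataset count, and true variance are arbitrary choices) that averages $\sigma^2_{\mathrm{ML}}$ over many synthetic datasets and compares the result with $\frac{N-1}{N}\sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 5                  # small sample size, so the bias is clearly visible
num_datasets = 200_000
true_sigma2 = 4.0

# Draw many independent datasets and compute the ML variance estimate for each one.
data = rng.normal(loc=0.0, scale=np.sqrt(true_sigma2), size=(num_datasets, N))
sigma2_ml = data.var(axis=1, ddof=0)  # ddof=0 gives the ML (uncorrected) estimate

print(sigma2_ml.mean())           # ~ (N - 1) / N * true_sigma2 = 3.2
print((N - 1) / N * true_sigma2)  # expected value of the ML estimator
```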