
Maximum Likelihood Estimation

In statistics, maximum likelihood estimation (MLE) is a method of estimating the parameters of an assumed probability distribution, given some observed data [1]. In other words, it asks: which value of the population parameter makes the observed data most probable?

Maximum likelihood estimation is closely related to Bayesian statistics: the same likelihood function appears in Bayes' rule, connecting the prior to the posterior.

Let's say we flip a coin 10 times and heads appears 7 times. Before looking at the data, we assume the probability of heads is p = 0.5. This p is called the population parameter.

Head of a coin: p = 0.5

Binomial Distribution

Under the binomial distribution, the probability of 7 heads in 10 trials with p = 0.5 is

$$P(X = 7 \mid n = 10,\, p = 0.5) = \binom{10}{7}(0.5)^7(1 - 0.5)^3 \approx 0.1172$$
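
As a quick check, here is a minimal Python sketch (standard library only) that evaluates this binomial probability:

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k successes in n Bernoulli(p) trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Probability of 7 heads in 10 flips of a fair coin
print(binom_pmf(7, 10, 0.5))  # 0.1171875 ~ 0.1172
```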

In one particular run of 10 flips, we observed 7 heads. Can we still say that p = 0.5? Here the data are fixed: n = 10 trials, 7 heads. We then view the binomial probability as a function of p, which gives the likelihood function below.

Maximum likelihood estimation for the binomial distribution:

$$L(p) = \binom{10}{7}\, p^7 (1 - p)^3$$

We can conclude that the likelihood reaches its highest value when the population parameter is p = 0.7; for the binomial model, the MLE is simply the sample proportion, 7/10 = 0.7. In other words, p = 0.7 is the parameter value that best explains observing 7 heads out of 10 trials.
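
To see this numerically, here is a small sketch that evaluates the likelihood on a grid of candidate values of p and picks the maximizer (the grid step of 0.001 is an arbitrary choice):

```python
from math import comb

def likelihood(p: float, n: int = 10, k: int = 7) -> float:
    """Binomial likelihood of k heads in n flips, as a function of p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Evaluate L(p) on a grid over [0, 1] and take the argmax
grid = [i / 1000 for i in range(1001)]
p_hat = max(grid, key=likelihood)
print(p_hat)              # 0.7
print(likelihood(p_hat))  # ~0.2668, the maximum likelihood value
```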

Reference

  1. Wikipedia contributors. (2022, August 14). Maximum likelihood estimation. Wikipedia. https://en.wikipedia.org/wiki/Maximum_likelihood_estimation

  2. Truth in Engineering (Korean blog): https://m.blog.naver.com/mykepzzang/221568285099


Maximum A Posteriori
Posterior Probability
  • Why is the posterior probability hard to calculate in deep learning?

    $$\text{posterior} = \frac{\text{likelihood} \times \text{prior}}{\text{evidence}}, \qquad P(\theta \mid D) = \frac{P(D \mid \theta)\, P(\theta)}{P(D)}$$

    To calculate the evidence $P(D)$, all of the model's parameters must be integrated out: $P(D) = \int P(D \mid \theta)\, P(\theta)\, d\theta$. The more parameters there are, the harder this integral becomes, and in deep learning the dimension is enormous. That is why Markov Chain Monte Carlo (MCMC) is used to estimate the posterior instead, as sketched below.
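
As an illustration, here is a minimal sketch of the Metropolis algorithm (the simplest MCMC method) applied to the coin example above, with a uniform prior on p. The key point is that it only ever evaluates prior × likelihood: the intractable evidence term cancels in the acceptance ratio and never needs to be computed. The step size of 0.1 is an arbitrary choice.

```python
import random
from math import comb

def unnormalized_posterior(p: float, n: int = 10, k: int = 7) -> float:
    """prior * likelihood; the evidence P(D) is deliberately never computed."""
    if not 0.0 < p < 1.0:
        return 0.0  # uniform prior on (0, 1)
    return comb(n, k) * p**k * (1 - p)**(n - k)

def metropolis(n_samples: int = 50_000, step: float = 0.1) -> list[float]:
    p = 0.5  # arbitrary starting point
    samples = []
    for _ in range(n_samples):
        proposal = p + random.uniform(-step, step)  # symmetric random-walk proposal
        # Accept with probability min(1, posterior ratio); the evidence cancels here
        if random.random() < unnormalized_posterior(proposal) / unnormalized_posterior(p):
            p = proposal
        samples.append(p)
    return samples

samples = metropolis()
print(sum(samples) / len(samples))  # posterior mean, ~0.67 (i.e., (k+1)/(n+2))
```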

Reference

  1. https://m.blog.naver.com/PostView.naver?isHttpsRedirect=true&blogId=junimnje&logNo=221508500551
