Posterior

Parameter Estimation

Introduction

The goal in parameter estimation is to summarize observations (samples) and assumptions (priors) in the form of a model that is itself a distribution. Depending on the kind of distribution, various parameters can be adjusted. This is done so that the model fits the observations as well as possible and, from a Bayesian perspective, also respects the prior assumptions.

The kind of model is itself an (often discrete) hyper-parameter and can be chosen to best fit the observations and domain knowledge.
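Fitting a model distribution to the samples alone corresponds to maximum-likelihood estimation. As a minimal sketch with hypothetical sample values: for a normal model, the maximum-likelihood estimates are simply the sample mean and the (biased) sample standard deviation.

```python
import math

# Hypothetical measurements, standing in for points placed on the axis.
samples = [4.1, 5.0, 5.3, 4.7, 5.9]

# Maximum-likelihood estimates for a normal model:
# the sample mean and the (biased) sample standard deviation.
n = len(samples)
mu = sum(samples) / n
sigma = math.sqrt(sum((x - mu) ** 2 for x in samples) / n)

print(mu, sigma)  # estimated parameters of the fitted normal distribution
```

Other model families have their own estimators, but the principle is the same: choose the parameter values under which the observed samples are most probable.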

Click on the yellow shaded area below the horizontal axis on the right to place some sample points as you like. These represent some measurements you have taken. Then select a model distribution and adjust its parameters to fit the sample points as well as possible.

Instead of only fitting the sample points, you can also introduce your own assumptions (prior knowledge) into the model building by setting a prior distribution for each parameter. These prior distributions describe how strongly you expect a parameter value to lie in a specific range. For the best effect, configure your priors before you place your sample points.

Then, when adjusting the model parameters, try not only to fit the sample points but also to achieve a high probability under your prior distributions.
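Balancing the fit to the sample points against the priors corresponds to maximum a posteriori (MAP) estimation. The sketch below, with hypothetical sample values and hypothetical prior parameters, maximizes the sum of log-likelihood and log-prior over a coarse parameter grid:

```python
import math

def log_normal_pdf(x, mu, sigma):
    # Log-density of a normal distribution with mean mu and std dev sigma.
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

samples = [4.1, 5.0, 5.3, 4.7, 5.9]  # hypothetical measurements

def log_posterior(mu, sigma):
    log_likelihood = sum(log_normal_pdf(x, mu, sigma) for x in samples)
    # Hypothetical priors: mu ~ Normal(4.0, 0.5), sigma ~ Normal(1.0, 0.5).
    log_prior = log_normal_pdf(mu, 4.0, 0.5) + log_normal_pdf(sigma, 1.0, 0.5)
    return log_likelihood + log_prior

# Coarse grid search for the maximum a posteriori (MAP) estimate.
grid_mu = [3.0 + 0.1 * i for i in range(40)]
grid_sigma = [0.3 + 0.1 * i for i in range(20)]
mu_map, sigma_map = max(
    ((m, s) for m in grid_mu for s in grid_sigma),
    key=lambda p: log_posterior(*p),
)
print(mu_map, sigma_map)
```

Note how the prior on μ pulls the MAP estimate away from the pure sample mean and toward the prior's center; the stronger (narrower) the prior, the stronger the pull.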


μ Prior Distribution:
𝜎 Prior Distribution:

Samples

Click on the light orange bar below the horizontal axis to create samples. Samples are required to determine the likelihood for the current parameter values.

Current Prior

You have not specified any prior distributions for the parameters. Not specifying any priors corresponds to a frequentist approach to statistics.

From this perspective, the collected data points (samples) are the only influence on the choice of model parameters.

Try to specify a prior distribution for each parameter to describe your assumptions regarding that parameter's value.


More Educational Tools