Update deep ppl authored by briss_01
The central problem of PPLs is that computing the posterior distribution of latent ...
Use Variational Inference
**How to Use Variational Inference?**
The goal is to find q<sup>*</sup>(θ) from a family of simpler distributions Q over the latent variable θ that is closest to the true posterior p(θ|x<sub>1</sub>, ..., x<sub>n</sub>).
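In the standard variational-inference formulation, "closest" is measured by the Kullback–Leibler divergence, so the search over the family Q becomes an optimization problem (a standard result, stated here for completeness):

```latex
q^{*}(\theta) = \operatorname*{arg\,min}_{q \in Q} \; \mathrm{KL}\big(q(\theta) \,\|\, p(\theta \mid x_1, \ldots, x_n)\big)
```

In practice this KL is minimized indirectly by maximizing the evidence lower bound (ELBO), since the true posterior itself is intractable.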
This is done by:
1. Define the model (e.g. a multi-layer perceptron, MLP) and initialise the priors
2. Specify guides (simple probability distributions over the latent variable θ)
3. Run inference, i.e. learn distributions that fit the observed data -> a posterior distribution for θ
4. Sample concrete weights and biases from θ (this yields a concrete instance of the MLP)
5. (Optional) Repeat the weight and bias sampling to generate multiple concrete model instances
6. (Optional) Use voting to pick the best model (e.g. MLP) among the multiple sampled instances
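The loop of "define model and prior, specify a guide, optimise it against the data" can be sketched end to end on a toy conjugate Gaussian model rather than a full MLP. Everything below is a hypothetical illustration (the model, the parameter names `m` and `log_s`, and the gradient formulas are not from the text): the prior is θ ~ N(0, 1), observations are x<sub>i</sub> ~ N(θ, 1), and the guide is q(θ) = N(m, s²), fitted by gradient ascent on the ELBO, which for this model has closed-form gradients.

```python
import math

def fit_gaussian_vi(data, steps=2000, lr=0.01):
    """Toy mean-field VI: prior theta ~ N(0, 1), likelihood x_i ~ N(theta, 1).
    The guide q(theta) = N(m, s^2) is fitted by gradient ascent on the ELBO.
    (Illustrative sketch only; a real PPL would derive these gradients for you.)"""
    n = len(data)
    m, log_s = 0.0, 0.0                      # variational parameters; log_s keeps s > 0
    for _ in range(steps):
        s = math.exp(log_s)
        # Closed-form ELBO gradients for this conjugate Gaussian model:
        grad_m = sum(x - m for x in data) - m        # d ELBO / d m
        grad_log_s = (1.0 / s - (n + 1) * s) * s     # d ELBO / d s, chained through log_s
        m += lr * grad_m
        log_s += lr * grad_log_s
    return m, math.exp(log_s)

data = [1.2, 0.8, 1.0, 1.1, 0.9]
m, s = fit_gaussian_vi(data)
# The exact posterior here is N(n*xbar/(n+1), 1/(n+1)), so m and s should
# approach n*xbar/(n+1) and sqrt(1/(n+1)); sampling theta from N(m, s^2)
# then plays the role of step 4 (drawing concrete parameters).
```

Because this model is conjugate, the fitted guide can be checked against the exact posterior, which is what makes it a useful sanity test; for an MLP the same optimization runs over a guide per weight and bias, with no closed form to compare against.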
**Why Deep Learning and PPL?**