This paper shows, through a series of examples, how probabilistic languages can be integrated with deep learning. In particular, it shows how to model a multilayer perceptron (MLP) and a variational autoencoder (VAE).
**Probabilistic Programming Languages**
Probabilistic programming languages (PPLs) aim to express a probabilistic model as a program. This abstraction is motivated by the idea that probabilistic models can become very difficult and involve intractable integrals, whereas writing programs is a standardised and therefore comparatively easy task.
As illustrated below, the user-specified probabilistic program describes how to generate output data by sampling from a latent probability distribution. The compiler processes this program by checking for type errors and translating the program into an inference procedure.
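For concreteness, here is a minimal sketch of such a probabilistic program, written in Pyro (a Python deep PPL). The model, the toy data, and the variational guide are illustrative assumptions, not taken from the paper: a latent mean is sampled from a prior, observations are generated from it, and stochastic variational inference recovers an approximate posterior.

```python
import torch
import pyro
import pyro.distributions as dist
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam

# Generative model: sample a latent mean, then generate the observed data.
def model(data):
    mu = pyro.sample("mu", dist.Normal(0., 1.))  # latent variable
    with pyro.plate("data", len(data)):
        pyro.sample("obs", dist.Normal(mu, 1.), obs=data)

# Variational guide: a learnable approximate posterior over the latent mean.
def guide(data):
    loc = pyro.param("loc", torch.tensor(0.))
    scale = pyro.param("scale", torch.tensor(1.),
                       constraint=dist.constraints.positive)
    pyro.sample("mu", dist.Normal(loc, scale))

# Toy data and an SVI loop standing in for the "inference procedure".
data = torch.tensor([1.2, 0.8, 1.1, 0.9])
svi = SVI(model, guide, Adam({"lr": 0.01}), loss=Trace_ELBO())
for step in range(1000):
    svi.step(data)
```

After training, the parameters `loc` and `scale` describe the approximate posterior over the latent mean; the user only wrote the generative story, while the framework derived the inference.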
**Conceptual Categorisation of Deep Probabilistic Programming Languages (PPL)**
As illustrated in the Figure below, deep PPLs combine three existing concepts into one new concept: "programming languages", "Bayesian statistics" and "deep learning".
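To make this combination concrete, the sketch below writes a tiny Bayesian multilayer perceptron in Pyro; the layer sizes, priors, and noise scale are illustrative assumptions rather than the paper's model. Placing priors over the network weights turns an ordinary deep-learning component into a Bayesian model expressed as a plain program, which is exactly the overlap of the three concepts.

```python
import torch
import pyro
import pyro.distributions as dist

# A tiny Bayesian MLP: weights are random variables with priors instead
# of fixed parameters, so deep learning, Bayesian statistics, and
# ordinary programming meet in one function. Shapes and priors are
# illustrative assumptions, not the paper's.
def bayesian_mlp(x, y=None):
    w1 = pyro.sample("w1", dist.Normal(torch.zeros(4, 16), 1.).to_event(2))
    b1 = pyro.sample("b1", dist.Normal(torch.zeros(16), 1.).to_event(1))
    w2 = pyro.sample("w2", dist.Normal(torch.zeros(16, 1), 1.).to_event(2))
    b2 = pyro.sample("b2", dist.Normal(torch.zeros(1), 1.).to_event(1))
    hidden = torch.tanh(x @ w1 + b1)        # hidden layer
    mean = (hidden @ w2 + b2).squeeze(-1)   # network output
    with pyro.plate("data", x.shape[0]):
        return pyro.sample("obs", dist.Normal(mean, 0.1), obs=y)
```

Inference over the weight posteriors could then proceed with, e.g., `pyro.infer.autoguide.AutoNormal` and the same SVI loop shown earlier.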