
Commit 54bf13e

Author: Jaan Altosaar
Update README.md
1 parent c78cb3f

File tree: 1 file changed, +43 -3 lines


README.md

@@ -1,16 +1,16 @@
 # Variational Autoencoder / Deep Latent Gaussian Model in tensorflow and pytorch
 Reference implementation for a variational autoencoder in TensorFlow and PyTorch.
 
-I recommend the PyTorch version.
+I recommend the PyTorch version. It includes an example of a more expressive variational family (the [inverse autoregressive flow](https://arxiv.org/abs/1606.04934)).
 
-Mean-field variational inference is used to fit the model to binarized MNIST handwritten digit images. An inference network (encoder) is used to amortize the inference and share parameters across datapoints. The likelihood is parameterized by a generative network (decoder).
+Variational inference is used to fit the model to binarized MNIST handwritten digit images. An inference network (encoder) is used to amortize the inference and share parameters across datapoints. The likelihood is parameterized by a generative network (decoder).
 
 Blog post: https://jaan.io/what-is-variational-autoencoder-vae-tutorial/
 
 Example output with importance sampling for estimating the marginal likelihood on Hugo Larochelle's Binary MNIST dataset. Final marginal likelihood on the test set: `-97.10` nats.
 
 ```
-$ python train_variational_autoencoder_pytorch.py
+$ python train_variational_autoencoder_pytorch.py --variational mean-field
 step: 0 train elbo: -558.69
 step: 0 valid elbo: -391.84 valid log p(x): -363.25
 step: 5000 train elbo: -116.09
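An aside on the amortized setup the diff describes: the encoder and decoder map directly onto a small PyTorch model. Below is a minimal sketch of the mean-field case, with a diagonal Gaussian inference network and a Bernoulli likelihood trained by maximizing the single-sample ELBO. All module names and layer sizes here are illustrative assumptions, not the repository's actual code.

```python
# Minimal sketch of amortized variational inference (mean-field case).
# Illustrative only: names and sizes are assumptions, not the repo's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Inference network q(z|x): diagonal Gaussian mean and log-variance."""
    def __init__(self, x_dim=784, z_dim=64, h_dim=256):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mean = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)

    def forward(self, x):
        h = self.hidden(x)
        return self.mean(h), self.logvar(h)

class Decoder(nn.Module):
    """Generative network p(x|z): Bernoulli logits over pixels."""
    def __init__(self, x_dim=784, z_dim=64, h_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim))

    def forward(self, z):
        return self.net(z)

def elbo(x, encoder, decoder):
    """Single-sample ELBO with the reparameterization trick."""
    mean, logvar = encoder(x)
    std = torch.exp(0.5 * logvar)
    z = mean + std * torch.randn_like(std)  # reparameterized sample from q(z|x)
    log_px_z = -F.binary_cross_entropy_with_logits(
        decoder(z), x, reduction="none").sum(-1)  # log p(x|z) over pixels
    # Analytic KL(q(z|x) || N(0, I)) for a diagonal Gaussian.
    kl = 0.5 * (mean.pow(2) + logvar.exp() - 1.0 - logvar).sum(-1)
    return (log_px_z - kl).mean()
```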
@@ -41,3 +41,43 @@ step: 65000 train elbo: -96.19
 step: 65000 valid elbo: -104.46 valid log p(x): -97.43
 step: 65000 test elbo: -103.31 test log p(x): -97.10
 ```
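The `log p(x)` figures in these logs are importance-sampling estimates of the marginal likelihood, with the variational posterior as the proposal distribution. A common form of that estimator, sketched here against the hypothetical encoder/decoder interface from the previous example rather than the repository's exact code, is a log-mean-exp over importance weights:

```python
import math
import torch
import torch.nn.functional as F

def log_marginal_estimate(x, encoder, decoder, num_samples=100):
    """Importance-sampled estimate of the marginal likelihood:
    log p(x) ~= logmeanexp_k [log p(x|z_k) + log p(z_k) - log q(z_k|x)],
    with z_k drawn from the proposal q(z|x)."""
    mean, logvar = encoder(x)
    q = torch.distributions.Normal(mean, torch.exp(0.5 * logvar))
    prior = torch.distributions.Normal(
        torch.zeros_like(mean), torch.ones_like(mean))
    log_weights = []
    for _ in range(num_samples):
        z = q.rsample()  # sample from the proposal
        log_px_z = -F.binary_cross_entropy_with_logits(
            decoder(z), x, reduction="none").sum(-1)
        log_weights.append(
            log_px_z + prior.log_prob(z).sum(-1) - q.log_prob(z).sum(-1))
    stacked = torch.stack(log_weights)  # [num_samples, batch]
    return torch.logsumexp(stacked, dim=0) - math.log(num_samples)
```

With more samples the estimate tightens toward the true log marginal, which is why the reported `log p(x)` values sit above the corresponding ELBOs.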
+
+
+Using a non-mean-field, more expressive variational posterior approximation, the test marginal log-likelihood improves to `-95.33` nats:
+
+```
+$ python train_variational_autoencoder_pytorch.py --variational flow
+step: 0 train elbo: -578.35
+step: 0 valid elbo: -407.06 valid log p(x): -367.88
+step: 10000 train elbo: -106.63
+step: 10000 valid elbo: -110.12 valid log p(x): -104.00
+step: 20000 train elbo: -101.51
+step: 20000 valid elbo: -105.02 valid log p(x): -99.11
+step: 30000 train elbo: -98.70
+step: 30000 valid elbo: -103.76 valid log p(x): -97.71
+step: 40000 train elbo: -104.31
+step: 40000 valid elbo: -103.71 valid log p(x): -97.27
+step: 50000 train elbo: -97.20
+step: 50000 valid elbo: -102.97 valid log p(x): -96.60
+step: 60000 train elbo: -97.50
+step: 60000 valid elbo: -102.82 valid log p(x): -96.49
+step: 70000 train elbo: -94.68
+step: 70000 valid elbo: -102.63 valid log p(x): -96.22
+step: 80000 train elbo: -92.86
+step: 80000 valid elbo: -102.53 valid log p(x): -96.09
+step: 90000 train elbo: -93.83
+step: 90000 valid elbo: -102.33 valid log p(x): -96.00
+step: 100000 train elbo: -93.91
+step: 100000 valid elbo: -102.48 valid log p(x): -95.92
+step: 110000 train elbo: -94.34
+step: 110000 valid elbo: -102.81 valid log p(x): -96.09
+step: 120000 train elbo: -88.63
+step: 120000 valid elbo: -102.53 valid log p(x): -95.80
+step: 130000 train elbo: -96.61
+step: 130000 valid elbo: -103.56 valid log p(x): -96.26
+step: 140000 train elbo: -94.92
+step: 140000 valid elbo: -102.81 valid log p(x): -95.86
+step: 150000 train elbo: -97.84
+step: 150000 valid elbo: -103.06 valid log p(x): -95.92
+step: 150000 test elbo: -101.64 test log p(x): -95.33
+```
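On the `--variational flow` option: the inverse autoregressive flow linked in the diff transforms a base Gaussian sample through autoregressive affine steps. Because each output dimension depends only on earlier dimensions, the Jacobian is triangular and the density correction reduces to a sum of log scales. A single step is sketched below, assuming some autoregressive network `ar_net` (e.g. MADE-style) rather than the repository's implementation.

```python
import torch

def iaf_step(z, log_q, ar_net):
    """One inverse autoregressive flow step (Kingma et al., 2016).

    Assumes `ar_net` is autoregressive: its outputs (m, s) at dimension i
    depend only on z[..., :i], so the Jacobian of the update is triangular.
    """
    m, s = ar_net(z)
    sigma = torch.sigmoid(s)  # gated update from the IAF paper, stable to train
    z = sigma * z + (1.0 - sigma) * m
    log_q = log_q - torch.log(sigma).sum(-1)  # log|det J| = sum(log sigma)
    return z, log_q
```

Stacking a few such steps, typically reversing the dimension ordering between them, gives the kind of more expressive posterior behind the improvement from `-97.10` to `-95.33` nats reported above.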
