I keep coming back to this ICML 2015 paper by Rezende and Mohamed (arXiv version). This is not due to any particular novelty of the paper's contents; rather, I agree that the suggested approach is very promising for any inference approach, be it VI or adaptive Monte Carlo. The paper adopts the term *normalizing flow* for the plain old change-of-variables formula for integrals, with the minor change of view that one can see this as a flow, and the correct but slightly alien reference to a flow defined by the Langevin SDE or the Fokker-Planck equation, both attributed only to the ML/stats literature in the paper.

The theoretical contribution feels a little like a strawman: it simply states that, as Langevin and Hamiltonian dynamics can be seen as infinitesimal normalizing flows, and both approximate the posterior when the step size goes to zero, normalizing flows can approximate the posterior arbitrarily well. This is of course nothing that was derived in the paper, nor is it news, nor does it say anything about the practical approach suggested.

The suggested invertible maps have practical merit, however: the planar transformation (plotted on the right of the image) allows “splitting” a mode into two, while the radial transformation allows “attracting/repulsing” probability mass around a point. For both invertible maps, the Jacobian correction is computable in time that is linear in the number of dimensions.
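To make the linear-time Jacobian concrete, here is a minimal numpy sketch of the planar transformation $f(z) = z + u\,\tanh(w^\top z + b)$ with its log-Jacobian computed in $O(d)$ via the matrix determinant lemma; the variable names and the invertibility projection for $u$ are my own choices, not taken from the paper.

```python
import numpy as np

def planar_flow(z, u, w, b):
    """Planar flow f(z) = z + u * tanh(w.z + b) and its log |det Jacobian|.

    The Jacobian is I + u psi(z)^T with psi(z) = tanh'(w.z + b) * w, so by
    the matrix determinant lemma det = 1 + psi(z).u -- an O(d) computation
    instead of the usual O(d^3) determinant.
    """
    a = np.tanh(w @ z + b)
    psi = (1.0 - a**2) * w            # derivative of tanh(w.z + b) w.r.t. z
    f = z + u * a
    log_det = np.log(np.abs(1.0 + psi @ u))
    return f, log_det

rng = np.random.default_rng(0)
d = 3
z = rng.normal(size=d)
w = rng.normal(size=d)
u = rng.normal(size=d)
b = 0.3
# invertibility requires w.u >= -1; project u if necessary (my own choice
# of projection, the paper's appendix discusses an equivalent construction)
if w @ u < -1:
    u = u + ((-1 - w @ u) + 1e-2) * w / (w @ w)
f, log_det = planar_flow(z, u, w, b)
```

Stacking several such maps and summing the log-determinants gives the density of the transformed samples by the change-of-variables formula.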


# A non-parametric ensemble transform method for Bayesian inference

This 2013 paper by Sebastian Reich in the SIAM Journal on Scientific Computing introduces an approach called the *Ensemble Transform Particle Filter (ETPF)*. The main innovation of the ETPF, compared to SMC-like filtering methods, lies in the resampling step, which is

- based on an optimal transport idea and
- completely deterministic.

No rejuvenation step is used, contrary to the standard in SMC. While the notation is unfamiliar to me, coming from an SMC background, I’ll adopt it here: denote by $x^f_1, \dots, x^f_M$ samples from the prior with density $\pi_f$ (the $f$, meaning forecast, is probably owed to Reich having done a lot of Bayesian weather prediction). The idea is to transform these into samples $x^a_1, \dots, x^a_M$ that follow the posterior density $\pi_a$ (the $a$ meaning analyzed), preferably without introducing unequal weights. Let the likelihood term be denoted by $\pi(y|x)$, where $y$ is the data, and let $w_i = \pi(y|x^f_i) / \sum_{j=1}^M \pi(y|x^f_j)$ be the normalized importance weight. The normalization in the denominator stems from the fact that in Bayesian inference we can often only evaluate an unnormalized version of the posterior, $\pi_a(x) \propto \pi(y|x)\,\pi_f(x)$.

Then the optimal transport idea enters. Given the discrete realizations $x^f_1, \dots, x^f_M$, $\pi_f$ is approximated by assigning the discrete probability vector $(1/M, \dots, 1/M)$, while $\pi_a$ is approximated by the probability vector $(w_1, \dots, w_M)$. Now we construct a joint probability between the discrete random variables distributed according to $\pi_f$ and those distributed according to $\pi_a$, i.e. a matrix $T$ with non-negative entries summing to 1 which has row sums $1/M$ and column sums $w_j$ (another view would be that $T$ is a discrete copula which has prior and posterior as marginals). Let $\pi_T$ be the joint pmf induced by $T$. To qualify as optimal transport, we now seek the $T^*$ minimizing the expected transport cost $\sum_{i,j=1}^M T_{ij}\,\|x^f_i - x^f_j\|^2$ under the additional constraint of cyclical monotonicity. This boils down to a linear programming problem. For a fixed prior sample $x^f_i$, the row $M\,T^*_{i\cdot}$ induces a conditional distribution over the discretely approximated posterior given the discretely approximated prior.

We could now simply sample from this conditional to obtain equally weighted posterior samples, one for each $x^f_i$. Instead, the paper proposes a deterministic transformation using the expected value, $x^a_i = \sum_{j=1}^M M\,T^*_{ij}\,x^f_j$. Reich proves that the mapping induced by this transformation converges weakly to the optimal transport map between $\pi_f$ and $\pi_a$ for $M \to \infty$. In other words, if the ensemble size $M$ goes to infinity, we indeed get samples from the posterior.
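In one dimension the linear program has a closed-form solution: with a quadratic cost, the optimal coupling between two discrete measures on the line is the monotone coupling obtained by sorting the samples and matching cumulative masses. Here is a small numpy sketch of the resulting deterministic resampling step in 1-D (function and variable names are mine, not Reich's):

```python
import numpy as np

def etpf_transform_1d(x_f, w):
    """Deterministic ETPF resampling step for 1-D samples.

    Builds the optimal coupling T between the uniform prior weights
    (1/M each) and the importance weights w; in 1-D with quadratic cost
    this is the monotone coupling on the sorted samples, so no LP solver
    is needed. Returns equally weighted posterior samples
    x_a[i] = sum_j M * T[i, j] * x_f[j].
    """
    M = len(x_f)
    order = np.argsort(x_f)               # OT in 1-D is monotone in x
    T = np.zeros((M, M))
    a = np.full(M, 1.0 / M)               # prior marginal (row sums)
    b = w[order].astype(float).copy()     # posterior marginal (column sums)
    i = j = 0
    while i < M and j < M:
        m = min(a[i], b[j])               # couple as much mass as possible
        T[order[i], order[j]] = m
        a[i] -= m
        b[j] -= m
        if a[i] <= 1e-15:
            i += 1
        if b[j] <= 1e-15:
            j += 1
    return M * T @ x_f                    # expected value under M * T[i, :]

# toy example: standard normal prior samples, likelihood favouring larger x
rng = np.random.default_rng(0)
x_f = rng.normal(size=5)
w = np.exp(x_f)
w /= w.sum()                              # normalized importance weights
x_a = etpf_transform_1d(x_f, w)
```

Note that the empirical mean of the transformed ensemble equals the importance-weighted mean $\sum_j w_j x^f_j$ exactly, since the column sums of $T$ are the $w_j$.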

Overall I think this is a very interesting approach. The construction of an optimal transport map based on the discrete approximations of prior and posterior is indeed novel compared to standard SMC. My one objection is that, as it stands, the method will only work if the prior support covers all relevant regions of the posterior, as taking the expected value over prior samples will always lead to a contraction: every transformed sample is a convex combination of the prior samples.

Of course, this is not a problem when $M$ is infinite, but my intuition would be that it has a rather strong effect in our finite world. One remedy would of course be to introduce a rejuvenation step as in SMC, for example moving each particle using MCMC steps that leave the posterior invariant.

# Accelerating Stochastic Gradient Descent using Predictive Variance Reduction

During the super nice International Conference on Monte Carlo techniques at the beginning of July in Paris at Université Descartes, which featured many outstanding talks, one by Tong Zhang particularly caught my interest. He talked about several variants of Stochastic Gradient Descent (SGD) that basically use variance reduction techniques from Monte Carlo algorithms in order to improve the convergence rate over vanilla SGD, even though some of the papers mentioned in the talk do not always point out the connection to Monte Carlo variance reduction techniques.

One of the first works in this line, *Accelerating Stochastic Gradient Descent using Predictive Variance Reduction* by Johnson and Zhang, suggests using control variates to lower the variance of the gradient estimate. Let $f_j(\theta)$ be the loss for the parameter $\theta$ at the $j$th of $n$ data points; then the usual batch gradient descent update is $\theta^{(t)} = \theta^{(t-1)} - \eta\,\frac{1}{n}\sum_{j=1}^n \nabla f_j(\theta^{(t-1)})$ with step size $\eta$.

In naive SGD one instead picks a data point index $j$ uniformly at random and uses the update $\theta^{(t)} = \theta^{(t-1)} - \eta_t\,\nabla f_j(\theta^{(t-1)})$, usually with a decreasing step size $\eta_t$ to guarantee convergence. The expected update resulting from this Monte Carlo estimate of the batch gradient is exactly the batch procedure update. However, the variance of the estimate is very high, resulting in slow convergence of SGD after the first steps (even in minibatch variants).

The authors choose a well-known solution to this, namely the introduction of a control variate. Keeping a version of the parameter that is close to the optimum, say $\tilde\theta$, observe that $\nabla f_j(\tilde\theta) - \frac{1}{n}\sum_{k=1}^n \nabla f_k(\tilde\theta)$ has an expected value of 0 over the uniformly drawn index $j$ and is thus a possible control variate, giving the update $\theta^{(t)} = \theta^{(t-1)} - \eta\left(\nabla f_j(\theta^{(t-1)}) - \nabla f_j(\tilde\theta) + \frac{1}{n}\sum_{k=1}^n \nabla f_k(\tilde\theta)\right)$. The possible downside is that whenever $\tilde\theta$ is updated, one has to go over the complete dataset.
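A minimal numpy sketch of the resulting algorithm on a least-squares toy problem; the step size, inner-loop length, and last-iterate snapshot update are illustrative choices on my part, not values prescribed by the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 5
X = rng.normal(size=(n, d))
theta_true = rng.normal(size=d)
y = X @ theta_true                    # noiseless regression targets

def grad_j(theta, j):
    # gradient of the per-point squared loss 0.5 * (x_j.theta - y_j)^2
    return (X[j] @ theta - y[j]) * X[j]

def full_grad(theta):
    # batch gradient -- needed once per snapshot update
    return X.T @ (X @ theta - y) / n

eta, m = 0.02, 2 * n                  # constant step size, inner-loop length
theta_snap = np.zeros(d)              # snapshot parameter, theta tilde
for epoch in range(30):
    mu = full_grad(theta_snap)        # one full pass over the data
    theta = theta_snap.copy()
    for _ in range(m):
        j = rng.integers(n)
        # variance-reduced gradient: unbiased, and its variance vanishes
        # as theta and theta_snap approach the optimum
        g = grad_j(theta, j) - grad_j(theta_snap, j) + mu
        theta -= eta * g
    theta_snap = theta                # last-iterate snapshot update
```

Because the control variate makes the gradient noise shrink as both iterates approach the optimum, a constant step size suffices here, unlike in plain SGD.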

The contribution, apart from the novel combination of existing ideas, is the proof that this improves convergence. The proof assumes smoothness and strong convexity of the overall loss function and convexity of the individual $f_j$, and then shows that the proposed procedure (termed stochastic variance reduced gradient, or SVRG) enjoys geometric convergence. The proof, however, uses a slightly odd version of the algorithm, namely one where $\tilde\theta$ is chosen uniformly at random from the previous iterates rather than set to the last iterate. Simply setting $\tilde\theta$ to the last iterate should intuitively improve convergence, but the authors could not report a result for that. Overall a very nice idea, and one that has since been discussed quite a bit in further papers, among others by Simon Lacoste-Julien and Francis Bach.