This article in Le Monde (in French) covers Emmanuel Macron’s speech in Lyon yesterday. I must admit that I know little more about Macron’s program than what the article reports; one of the most interesting parts is that Macron is decidedly pro-European. What caught my eye, though, was his metaphor for the populist right wing’s spreading of fear and its method of speaking almost exclusively to the population’s anger. Macron called this the leprosy of democracy.
I couldn’t agree more, and I must say that I marvel at the strength of this image. While he was speaking about the French Front National, the exact same argument applies to the German AfD, as well as to countries where egocentric white men have made voters believe they would act in the public interest, like the US, Hungary, and Poland.
This NIPS 2016 paper by Ranganath et al. is concerned with Variational Inference using objective functions other than the KL-divergence between a target density $\pi$ and a proposal density $q$. It’s called Operator VI as a fancy way to say that one is flexible in constructing how exactly the objective function uses $\pi$, $q$ and test functions from some family $\mathcal{F}$. I completely agree with the motivation: the KL-divergence in the form $\mathrm{KL}(q\|\pi)$ indeed underestimates the variance of $\pi$ and approximates only one mode. Using KL the other way around, $\mathrm{KL}(\pi\|q)$ takes all modes into account, but still tends to underestimate variance.
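For concreteness, the two directions I mean are (in my notation, with $q$ the proposal, $\pi$ the target and $Z$ the latent variable):

$$\mathrm{KL}(q\,\|\,\pi) = \mathbb{E}_{q}\!\left[\log \frac{q(Z)}{\pi(Z)}\right], \qquad \mathrm{KL}(\pi\,\|\,q) = \mathbb{E}_{\pi}\!\left[\log \frac{\pi(Z)}{q(Z)}\right],$$

the first being the usual variational objective, cheap because it only needs expectations under $q$, the second taking expectations under $\pi$ itself.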
As a particular case, the authors suggest an objective using what they call the Langevin-Stein operator, which does not make use of the proposal density $q$ at all but uses test functions exclusively. The only requirement is that we be able to draw samples from the proposal. The authors claim that assuming access to the density of $q$ limits the applicability of an objective/operator. This claim is not substantiated, however. The example they give in equation (10) is that it is not possible to find a Jacobian correction for a certain transformation of a standard normal random variable to a bimodal distribution. However, their method is not the only one to get bimodality by transforming a standard normal variable, and actually the Jacobian correction can be computed even for their suggested transformation! The problem they encounter really is that they throw away one dimension of the standard normal input, which makes the transformation lose injectivity. By not throwing that variable away, we keep injectivity, and it is possible to compute the density of the transformed variables. I thus find the reasons for not accessing the density rather unconvincing.
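If I remember the paper correctly, the Langevin-Stein operator has the standard Stein-operator form

$$(\mathcal{O}^{\pi} f)(z) = \nabla_z \log \pi(z)^{\top} f(z) + \nabla_z^{\top} f(z), \qquad f \in \mathcal{F},$$

which indeed touches $\pi$ only through $\nabla \log \pi$ and never needs the density of $q$. And to illustrate the injectivity point with a toy example of my own (not the paper’s equation (10)): let $\varepsilon = (\varepsilon_1, \varepsilon_2) \sim \mathcal{N}(0, I_2)$ and set $z = \varepsilon_1 + c\,\mathrm{sign}(\varepsilon_2)$. The marginal of $z$ is the bimodal mixture $\tfrac{1}{2}\mathcal{N}(-c, 1) + \tfrac{1}{2}\mathcal{N}(c, 1)$. Discarding $\varepsilon_2$ makes the map non-injective, but the full map $(\varepsilon_1, \varepsilon_2) \mapsto (z, \varepsilon_2)$ is injective with unit Jacobian almost everywhere, so the joint density, and hence the marginal of $z$, is perfectly computable.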
To compute expectations with respect to $q$, the authors suggest Monte Carlo sums, where every summand uses an evaluation of $\pi$ or its gradient. As that is often the most computationally costly part in MCMC and SMC, I am very curious whether the method performs any better computationally than modern adaptive Monte Carlo methods.
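To make the cost structure concrete, here is a minimal numpy sketch of my own (not the paper’s code) of such a Monte Carlo sum for the Langevin-Stein objective, assuming a standard normal target and the hypothetical test function $f(z) = z$; note that every summand pays for one gradient evaluation of $\log \pi$:

```python
import numpy as np

def grad_log_pi(z):
    # grad log pi for the toy target pi = N(0, I); in general this
    # evaluation is the expensive part of each summand
    return -z

def langevin_stein_summand(z):
    f = z            # hypothetical test function f(z) = z
    div_f = z.size   # divergence of f(z) = z is just the dimension d
    return grad_log_pi(z) @ f + div_f

rng = np.random.default_rng(0)
d, n = 2, 10_000
zs = rng.normal(size=(n, d))  # samples from the proposal q = N(0, I)
est = np.mean([langevin_stein_summand(z) for z in zs])
print(est)  # close to 0 here, since the proposal q equals pi
```

The estimate uses $n$ evaluations of $\nabla \log \pi$, exactly the budget one would otherwise spend on gradient-based MCMC steps.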
The attack yesterday afternoon took place at one of my favorite places in Berlin, the Kaiser-Wilhelm-Gedächtniskirche at Breitscheidplatz. The church is a most beautiful symbol of starting from scratch after a devastating war: the historic, destroyed tower still stands and was complemented by a modernist church in the 1950s (snapshot above).
Three hours before the attack I bought presents at Breitscheidplatz. Now let’s hope the police will find the terrorist.
Yesterday I gave a talk on Reproducing Kernel Hilbert Spaces (RKHSs) in machine learning, in the Uncertainty Quantification seminar organized by Tim Sullivan. In earlier meetings, Tim himself and Han Cheng Lie gave talks on Vladimir Bogachev’s use of RKHSs in his book on Gaussian measures, which does not seem to mention where the “Reproducing Kernel” part comes from; that is why I decided to start out with, and concentrate on, kernels. I pointed out the equivalence of a very simple classification algorithm using the dot product in an RKHS with the use of KDEs for classification (at least for a certain class of positive definite kernels that are also densities).
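A minimal sketch of the equivalence I had in mind, using the Gaussian kernel (which is both positive definite and a density); the function names and toy data are mine, purely for illustration:

```python
import numpy as np

def gauss_kernel(x, y, h=1.0):
    # The Gaussian kernel is positive definite AND a density in (x - y).
    return np.exp(-(x - y) ** 2 / (2 * h ** 2)) / np.sqrt(2 * np.pi * h ** 2)

def classify(x, class0, class1, h=1.0):
    # Compare the dot products <phi(x), mean embedding of each class> in the
    # RKHS; with a kernel that is a density, the two scores are exactly the
    # KDE estimates of the class-conditional densities at x.
    score0 = np.mean(gauss_kernel(x, class0, h))
    score1 = np.mean(gauss_kernel(x, class1, h))
    return int(score1 > score0)

rng = np.random.default_rng(1)
class0 = rng.normal(-2.0, 1.0, size=50)  # toy 1D training samples
class1 = rng.normal(+2.0, 1.0, size=50)
print(classify(0.5, class0, class1))     # expected: 1
```

Averaging kernel evaluations over a class is at once the RKHS mean embedding and the KDE of that class, which is the whole equivalence in one line of code.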
After one year, my PostDoc in Paris is now over, and tomorrow I’m starting at SFB 1114 at FU Berlin. As I already told one of my new colleagues, it’s been quite a thing for me not being Xian’s office mate any more. One reason, obviously, is that with respect to work it’s a great luxury to be able to ask a senior researcher questions at almost any time. My second major reason is that the last time an office mate was so pleasant on a personal level was about six years ago, when both Felix (my office mate at the time) and I were new fathers and just generally got along very well. Christian told me last year how he was very thankful to Jim Berger for taking him as a PostDoc “and basically for no reason” (his words). Christian’s publication record wasn’t great, but Jim Berger didn’t care. My response was that that was exactly how I felt, very thankful, and for the same reasons.
It was rather easy to feel at home scientifically in Paris in general. Especially in domains with a strong math component, it’s hard to find a better place when taking the union of all Paris universities. Dauphine had a nice atmosphere with many young researchers, especially probabilists. This also made me reflect more on Germany, and I can’t help but feel that the situation in the most prosperous country in the EU is much worse than in France. Rather, especially with respect to young researchers, it is comparable to some EU countries that are currently fighting low tax revenue. But there is hope: if the EU dissolves, there will be fewer comparisons…
The hard part of this year was family life and commuting back to Berlin almost every weekend. It gave my wife the life of a single mother during weekdays; another reason to be very thankful for being supported like that. I’m happy to have been here when my son entered elementary school, and I hope that was it in terms of long-term family separation.