My paper on improving black-box variational inference (BBVI) using the James-Stein estimator was recently accepted by the Springer Nature journal Artificial Life and Robotics, and is now available online. It is my first published paper of my PhD studies, and I’m very happy to see the first fruits of what is now a three-year endeavor.
This is the second paper I’ve seen through to publication this year, although our Semiparametric Topic Model paper was already more than a year into the peer-review process when I started my PhD. I have another PhD paper currently awaiting peer review, and a third that I’m preparing for submission. At this point I think I’m starting to get a sense of how the system works, but that isn’t to say it’s gotten easier.
I’ve already talked at length about the ideas contained in the paper. By way of a quick précis: the paper provides a straightforward proof that BBVI can be made to behave more stably as it approaches convergence, using an old trick from multivariate statistics. We treat stochastic gradient ascent as an estimation problem, and obtain better convergence by using lower-MSE gradient estimates via the bias-variance tradeoff.
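To make the idea concrete, here’s a minimal sketch of the trick in Python. It is not the estimator from the paper itself, just an illustration of the general mechanism: instead of averaging a batch of noisy Monte Carlo gradient estimates with the plain sample mean, we apply positive-part James-Stein shrinkage to the mean, accepting a small bias in exchange for lower variance (and hence lower MSE). The function name `james_stein_mean` and the toy quadratic objective are mine, chosen for illustration.

```python
import numpy as np

def james_stein_mean(samples, shrink_to=0.0):
    """Positive-part James-Stein estimate of the mean of `samples`.

    samples: (n, d) array of i.i.d. gradient estimates, d >= 3.
    Shrinks the sample mean toward `shrink_to`, trading a small
    bias for a reduction in variance, which lowers the MSE.
    """
    n, d = samples.shape
    xbar = samples.mean(axis=0)
    # Estimated per-coordinate variance of the sample mean,
    # pooled across dimensions.
    sigma2 = samples.var(axis=0, ddof=1).mean() / n
    diff = xbar - shrink_to
    norm2 = float(np.dot(diff, diff))
    if norm2 == 0.0:
        return xbar
    # Positive-part James-Stein shrinkage factor.
    shrink = max(0.0, 1.0 - (d - 2) * sigma2 / norm2)
    return shrink_to + shrink * diff

# Toy stochastic gradient ascent on f(theta) = -||theta - theta*||^2 / 2,
# with noisy per-sample gradients standing in for BBVI's Monte Carlo
# gradient estimates.
rng = np.random.default_rng(0)
theta_star = rng.normal(size=10)   # hypothetical optimum
theta = np.zeros(10)
for step in range(500):
    grads = (theta_star - theta) + rng.normal(scale=2.0, size=(8, 10))
    theta += 0.1 * james_stein_mean(grads)  # JS-shrunk gradient step
```

Near convergence the true gradient is close to zero, so shrinking the batch mean toward zero is exactly the regime where this kind of estimator pays off, which matches the intuition of making BBVI more stable late in optimization.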
For those interested in going through the code and data, whether for replication or extension, I’ve put up a public repository on my GitHub.