Geometric Bayesian Inference
Background Bayesian neural networks provide a principled way to learn the function mapping inputs to outputs while also quantifying uncertainty in the predictions. Exact inference is typically computationally intractable, and among the various approximate methods, the Laplace approximation is a simple yet effective approach: it fits a Gaussian centred at a mode of the posterior, with covariance given by the inverse Hessian of the negative log-posterior at that mode. Recently, a geometric extension based on Riemannian manifolds has been proposed that allows the Laplace approximation to adapt to the local structure of the posterior [1]. This Riemannian Laplace approximation is both effective and meaningful, and it opens up several potential research directions. ...
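To make the baseline concrete, the following is a minimal sketch of the standard (Euclidean) Laplace approximation on a toy one-parameter linear model, not the Riemannian variant of [1]: find the MAP estimate, measure the curvature of the negative log-posterior there, and report the resulting Gaussian. The model, the finite-difference optimiser, and all names here are illustrative assumptions.

```python
import numpy as np

def neg_log_posterior(w, x, y, prior_var=1.0):
    # Gaussian likelihood y ~ N(w * x, 1) with a zero-mean Gaussian prior on w.
    residual = y - w * x
    return 0.5 * np.sum(residual ** 2) + 0.5 * w ** 2 / prior_var

def laplace_approximation(x, y, w0=0.0, lr=0.01, steps=2000, eps=1e-4):
    # 1) Find the MAP estimate by gradient descent (finite-difference gradient).
    w = w0
    for _ in range(steps):
        grad = (neg_log_posterior(w + eps, x, y)
                - neg_log_posterior(w - eps, x, y)) / (2 * eps)
        w -= lr * grad
    # 2) Curvature at the MAP: second derivative via central differences.
    hess = (neg_log_posterior(w + eps, x, y)
            - 2 * neg_log_posterior(w, x, y)
            + neg_log_posterior(w - eps, x, y)) / eps ** 2
    # 3) Laplace approximation: posterior ~ N(w_map, 1 / hess).
    return w, 1.0 / hess

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2.0 * x + rng.normal(size=50)  # data generated with true weight 2.0
w_map, var = laplace_approximation(x, y)
print(w_map, var)
```

The Riemannian extension replaces this fixed Gaussian with a distribution transported along geodesics of a metric derived from the posterior, so it can bend to follow curved high-density regions that a single Euclidean Gaussian misses.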