Geometric Bayesian Inference
Background Bayesian neural networks provide a principled way to learn the function that maps inputs to outputs while quantifying the uncertainty of the problem. Due to the computational complexity, exact inference is intractable, and among the several approximate approaches the Laplace approximation is a simple yet effective method [1]. Recently, a geometric extension relying on Riemannian manifolds has been proposed that enables the Laplace approximation to adapt to the local structure of the posterior [2]. This Riemannian Laplace approximation is effective and meaningful, but it comes with an increased computational cost. In this project, we will consider techniques to: 1) improve the computational efficiency of the Riemannian Laplace approximation, and 2) provide a relaxation of the basic approach that is potentially fast while retaining the geometric characteristics. ...
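
To make the baseline concrete, below is a minimal sketch of a standard (Euclidean) Laplace approximation on a toy logistic-regression posterior: find the MAP estimate, then approximate the posterior by a Gaussian whose covariance is the inverse Hessian of the negative log posterior at that mode. The toy data, prior variance, and helper names are illustrative assumptions rather than details from [1] or [2], and the sketch deliberately omits the Riemannian extension discussed above.

    # Minimal sketch of a (Euclidean) Laplace approximation on a toy
    # logistic-regression posterior. All data and names are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy binary-classification data (illustrative).
    X = rng.normal(size=(100, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=100) > 0).astype(float)

    prior_var = 10.0  # Gaussian prior N(0, prior_var * I) on the weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def grad_and_hessian(w):
        """Gradient and Hessian of the negative log posterior."""
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) + w / prior_var
        hess = X.T @ (X * (p * (1 - p))[:, None]) + np.eye(len(w)) / prior_var
        return grad, hess

    # 1) MAP estimate via Newton's method.
    w_map = np.zeros(X.shape[1])
    for _ in range(25):
        grad, hess = grad_and_hessian(w_map)
        w_map -= np.linalg.solve(hess, grad)

    # 2) Laplace approximation: Gaussian centred at the MAP estimate with
    #    covariance equal to the inverse Hessian of the negative log posterior.
    _, hess = grad_and_hessian(w_map)
    posterior_cov = np.linalg.inv(hess)

    # 3) Draw approximate posterior samples to quantify predictive uncertainty.
    samples = rng.multivariate_normal(w_map, posterior_cov, size=1000)
    pred_probs = sigmoid(samples @ X[:5].T)  # predictions for 5 inputs
    print("mean prediction:", pred_probs.mean(axis=0))
    print("predictive std: ", pred_probs.std(axis=0))

The geometric approach of [2] replaces this fixed Gaussian with an approximation that follows the local geometry of the posterior, which is where the additional computational cost this project targets comes from.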