Geometric Bayesian Inference

Background Bayesian neural networks are a principled approach to learning the function that maps inputs to outputs while quantifying the uncertainty of the predictions. Exact inference is computationally prohibitive, and among the many approximate alternatives the Laplace approximation is a simple yet effective method [1]. Recently, a geometric extension relying on Riemannian manifolds has been proposed that enables the Laplace approximation to adapt to the local structure of the posterior [2]. This Riemannian Laplace approximation is effective and meaningful, but it comes with an increased computational cost. In this project, we will consider techniques to: 1) improve the computational efficiency of the Riemannian Laplace approximation, and 2) provide a relaxation of the basic approach that is potentially fast while retaining its geometric characteristics. ...
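
For context, here is a minimal sketch of the standard (Euclidean) Laplace approximation that the Riemannian version extends: find the MAP estimate and approximate the posterior with a Gaussian whose covariance is the inverse Hessian of the negative log-posterior at that point. The toy logistic-regression data and all names are illustrative assumptions, not part of the project.

```python
import torch

torch.manual_seed(0)

# Toy 1D binary-classification data (purely illustrative).
x = torch.linspace(-2.0, 2.0, 50)
y = (x + 0.3 * torch.randn(50) > 0).float()

def neg_log_posterior(w):
    # Bernoulli negative log-likelihood plus a unit Gaussian prior on w.
    logits = w[0] * x + w[1]
    nll = torch.nn.functional.binary_cross_entropy_with_logits(logits, y, reduction="sum")
    return nll + 0.5 * (w ** 2).sum()

# 1) Find the MAP estimate by gradient descent.
w = torch.zeros(2, requires_grad=True)
opt = torch.optim.Adam([w], lr=0.1)
for _ in range(500):
    opt.zero_grad()
    neg_log_posterior(w).backward()
    opt.step()

# 2) Laplace approximation: Gaussian centred at the MAP with covariance equal
#    to the inverse Hessian of the negative log-posterior.
w_map = w.detach()
H = torch.autograd.functional.hessian(neg_log_posterior, w_map)
cov = torch.linalg.inv(H)

# 3) Draw weight samples from the Gaussian to propagate uncertainty to predictions.
posterior = torch.distributions.MultivariateNormal(w_map, covariance_matrix=cov)
samples = posterior.sample((100,))
print("MAP:", w_map)
print("Posterior covariance:\n", cov)
```

The Riemannian extension replaces this fixed Gaussian with samples transported along geodesics of a metric derived from the posterior, which is where the extra computational cost arises.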

May 27, 2024 · Georgios Arvanitidis

Subnetwork Learning for Laplace Approximations

Background The Laplace approximation is a promising approach to posterior approximation that can address some core issues of deep learning, such as poor calibration. Scaling this method to large parameter spaces is intractable because the covariance matrix grows quadratically with the number of neural network parameters and hence cannot be stored in memory. A proposed solution is to treat only a subset of the parameters as stochastic [1, 2, 3] and the rest as deterministic. However, how to select this subnetwork is still an open problem. In this project we will explore the possibility of learning an optimal subnetwork structure by instantiating the small covariance matrix and backpropagating through a Bayesian loss function (ELBO, marginal likelihood, or predictive posterior distribution). ...
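
As a point of reference, the sketch below (illustrative PyTorch, not the project's method) shows the subnetwork idea in its simplest fixed-selection form: score the parameters, keep the top-k as the stochastic subnetwork, and instantiate only the small k x k covariance over them while the remaining weights stay at the MAP. The project would instead learn this selection by backpropagating through a Bayesian objective; the scoring rule, k, and Fisher-style covariance here are assumptions made to keep the example compact.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data and a tiny MLP (all sizes are illustrative).
x = torch.randn(128, 2)
y = (x[:, 0] * x[:, 1] > 0).float().unsqueeze(1)
model = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))
loss_fn = nn.BCEWithLogitsLoss(reduction="sum")
params = list(model.parameters())

# 1) Train to (approximately) the MAP under a unit Gaussian prior.
opt = torch.optim.Adam(params, lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    loss = loss_fn(model(x), y) + 0.5 * sum((p ** 2).sum() for p in params)
    loss.backward()
    opt.step()

# 2) Score every parameter, here by its squared gradient on the training loss
#    (a crude Fisher-diagonal proxy), and keep the k highest-scoring ones.
opt.zero_grad()
loss_fn(model(x), y).backward()
scores = torch.cat([(p.grad ** 2).flatten() for p in params])
k = 20
sub_idx = scores.topk(k).indices  # indices into the flattened parameter vector

# 3) Instantiate the small k x k covariance over the subnetwork only, e.g. from
#    an empirical Fisher (per-example gradient outer products) plus the prior
#    precision; all other parameters stay fixed at their MAP values.
fisher = torch.zeros(k, k)
for i in range(x.shape[0]):
    model.zero_grad()
    loss_fn(model(x[i:i + 1]), y[i:i + 1]).backward()
    g = torch.cat([p.grad.flatten() for p in params])[sub_idx]
    fisher += torch.outer(g, g)
sub_cov = torch.linalg.inv(fisher + torch.eye(k))
print("Subnetwork size:", k, "covariance shape:", tuple(sub_cov.shape))
```

Because the covariance is only k x k, it fits in memory even for large networks; the open question the project targets is how to choose (or learn) which k parameters to include.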

May 24, 2024 · Hrittik Roy