Geometric Bayesian Inference

Background Bayesian neural networks are a principled approach to learning the function that maps inputs to outputs while quantifying the uncertainty of the predictions. Due to its computational complexity, exact inference is intractable; among the many approximate-inference schemes, the Laplace approximation is a simple yet effective choice [1]. Recently, a geometric extension relying on Riemannian manifolds has been proposed that enables the Laplace approximation to adapt to the local structure of the posterior [2]....
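
For context, a minimal sketch of the standard (Euclidean) Laplace approximation that the geometric version builds on, in notation of my own rather than the post's: a Gaussian is fitted around a MAP estimate $\theta_*$, with covariance given by the inverse Hessian of the negative log-posterior at that point,

$$
p(\theta \mid \mathcal{D}) \approx \mathcal{N}\!\big(\theta \mid \theta_*,\, H^{-1}\big),
\qquad
H = -\nabla_\theta^2 \log p(\theta \mid \mathcal{D}) \,\big|_{\theta = \theta_*}.
$$

The Riemannian extension of [2] keeps a Gaussian-like approximation but, as the teaser describes, lets it adapt to the local structure of the posterior.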

May 27, 2024 · Georgios Arvanitidis

Subnetwork Learning for Laplace Approximations

Background The Laplace approximation is a promising approach to posterior approximation that can address some core issues of deep learning, such as poor calibration. Scaling the method to large parameter spaces is intractable because the covariance matrix grows quadratically with the number of neural network parameters and hence cannot be stored in memory. A proposed solution is to treat only a subset of the parameters as stochastic [1, 2, 3] and the rest as deterministic....
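
To make the memory argument concrete (a back-of-the-envelope sketch, not taken from the post): with $d$ parameters the full covariance has $d^2$ entries, so a network with $d = 10^7$ parameters would already need roughly $4 \times 10^{14}$ bytes at 32-bit precision. The subnetwork variant instead fixes the deterministic parameters $\theta_{\bar{S}}$ at their MAP values and places a Gaussian only over the stochastic subset $\theta_S$,

$$
p(\theta_S \mid \mathcal{D}) \approx \mathcal{N}\!\big(\theta_S \mid \theta_{S,*},\, H_S^{-1}\big),
\qquad
H_S = -\nabla_{\theta_S}^2 \log p(\theta \mid \mathcal{D}) \,\big|_{\theta = \theta_*},
$$

shrinking covariance storage from $O(d^2)$ to $O(|S|^2)$.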

May 24, 2024 · Hrittik Roy