Geometric Analysis of Deep Representations

Background Modern deep neural networks, especially those in the overparameterized regime with far more parameters than training examples, perform impressively well. Traditional learning theory fails to explain these empirical results, motivating new approaches that aim to understand why deep learning generalizes. A common belief is that flat minima [1] in the parameter space lead to models with good generalization characteristics. For instance, such models may learn to extract high-quality features from the data, known as representations....

May 27, 2024 · Georgios Arvanitidis

Geometric Bayesian Inference

Background Bayesian neural networks are a principled technique for learning the function that maps input to output while quantifying the uncertainty of the problem. Due to the computational complexity, exact inference is intractable, and among several approaches the Laplace approximation is a simple yet effective method for approximate inference [1]. Recently, a geometric extension relying on Riemannian manifolds has been proposed that enables the Laplace approximation to adapt to the local structure of the posterior [2]....

May 27, 2024 · Georgios Arvanitidis
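
The standard (Euclidean) Laplace approximation mentioned above can be sketched in a few lines: find the MAP estimate of the weights, then approximate the posterior by a Gaussian whose covariance is the inverse Hessian of the negative log posterior at that point. The sketch below is a hypothetical illustration on Bayesian logistic regression (not the geometric variant of [2], and not the authors' code); all function names and the prior precision `alpha` are assumptions for this example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neg_log_posterior_grad(w, X, y, alpha):
    # Gradient of logistic loss (labels y in {-1, +1}) plus
    # a Gaussian prior term with precision alpha.
    p = sigmoid(-y * (X @ w))
    return -(X.T @ (y * p)) + alpha * w

def fit_map(X, y, alpha=1.0, lr=0.1, steps=500):
    # Plain gradient descent to the MAP estimate w*.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * neg_log_posterior_grad(w, X, y, alpha)
    return w

def laplace_covariance(w, X, alpha=1.0):
    # Hessian of the negative log posterior at the MAP;
    # the Laplace posterior is N(w*, H^{-1}).
    p = sigmoid(X @ w)
    r = p * (1.0 - p)
    H = X.T @ (X * r[:, None]) + alpha * np.eye(X.shape[1])
    return np.linalg.inv(H)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sign(X[:, 0] + 0.5 * rng.normal(size=200))
w_map = fit_map(X, y)
Sigma = laplace_covariance(w_map, X)
```

The geometric extension of [2] replaces the flat Euclidean metric implicit in this Gaussian with a Riemannian metric, so the approximation can follow the curved local structure of the posterior rather than assuming it is isotropic around the mode.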