Auto-tune: PAC-Bayes Optimization over Prior and Posterior for Neural Networks
Speaker: Prof. Rongrong Wang
Time: 16:00 (BST), May 28, 2025.

Abstract
Training neural networks requires solving a large-scale optimization problem that is usually highly overparameterized, with many more model parameters than training samples. The choice of training algorithm therefore critically determines how well the model generalizes to unseen test data. While traditional Adam-based training can potentially lead to optimal generalization, it often relies heavily on techniques such as weight decay, dropout, smaller mini-batch sizes, data augmentation, and pruning. Implementing these regularization strategies makes the Adam approach not only more cumbersome and slower but also heavily dependent on meticulous hyperparameter tuning. Inspired by the PAC-Bayes bound, we introduce a PAC-training framework that directly minimizes the generalization error and exhibits performance comparable to, or even better than, traditional methods.
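For context, the PAC-Bayes bound referenced in the abstract is typically stated as follows (this is one commonly cited McAllester-style form, not necessarily the exact version used in the talk): for a bounded loss, an i.i.d. sample S of size m, a prior P chosen independently of the data, and any posterior Q, with probability at least 1 - \delta,

\mathbb{E}_{h \sim Q}\big[L(h)\big] \;\le\; \mathbb{E}_{h \sim Q}\big[\hat{L}_S(h)\big] \;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{m}}{\delta}}{2m}},

where L and \hat{L}_S denote the population and empirical risks. PAC-Bayes training in this spirit minimizes the right-hand side directly; how the Auto-tune framework additionally optimizes over the prior is the subject of the talk and is not specified here.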
Our Speaker
Rongrong Wang is an Associate Professor at Michigan State University with a joint appointment in the Department of Computational Mathematics, Science and Engineering and the Department of Mathematics. She received her Ph.D. from the University of Maryland, College Park, and subsequently held a postdoctoral position at the University of British Columbia. Her research focuses on modeling and optimization for data-driven computation, developing learning algorithms, optimization formulations, and scalable numerical methods with theoretical guarantees. Her work has applications in signal processing, machine learning, and inverse problems.


