Understanding ensemble learning: Our first step
Posted: 2024-01-16
  • Speaker: Qiang Sun (University of Toronto)
  • Time: January 17, 2024, 10:30 (Beijing time)
  • Venue: Lecture Hall 1417, Administration Building, Zijingang Campus
  • Host: Center for Data Science, Zhejiang University

Abstract:

Overparameterized models, also known as interpolators, are unstable. For example, the minimum-norm least squares interpolator exhibits unbounded test error when the data are noisy. In this talk, we study how ensembling stabilizes, and thus improves, the generalization performance of an individual interpolator, measured by the out-of-sample prediction risk. We focus on bagged linear interpolators, as bagging is a popular randomization-based ensemble method that can be implemented in parallel. We introduce the multiplier-bootstrap-based bagged least squares estimator, which can be formulated as an average of sketched least squares estimators. The proposed multiplier bootstrap encompasses the classical bootstrap with replacement as a special case, along with a more intriguing variant that we call the Bernoulli bootstrap.
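
As a rough illustration of the construction above, here is a minimal NumPy sketch (ours, not the speaker's implementation) of a bagged least squares estimator with Bernoulli multipliers: each bag keeps rows with probability p_keep, fits the minimum-norm least squares estimator on the retained rows, and the final estimator averages over bags. All names and parameter values here are illustrative assumptions.

    # Minimal sketch of a multiplier-bootstrap bagged least squares
    # estimator with Bernoulli multipliers (an assumed instantiation;
    # other multiplier laws recover the classical bootstrap).
    import numpy as np

    def min_norm_ls(X, y):
        # Minimum-norm least squares fit; np.linalg.pinv covers both the
        # underparameterized and overparameterized (interpolating) cases.
        return np.linalg.pinv(X) @ y

    def bagged_ls(X, y, n_bags=100, p_keep=0.7, seed=0):
        # Each bag draws Bernoulli(p_keep) multipliers, i.e., keeps a random
        # subsample (a row sketch) of the data; the bagged estimator is the
        # average of the per-bag minimum-norm fits.
        rng = np.random.default_rng(seed)
        fits = []
        for _ in range(n_bags):
            keep = rng.random(X.shape[0]) < p_keep
            fits.append(min_norm_ls(X[keep], y[keep]))
        return np.mean(fits, axis=0)

Note that each bag retains roughly p_keep times the original sample size, so subsampling changes the effective aspect ratio of the design, which is the sketching effect discussed next.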


Focusing on the proportional regime, where the sample size scales proportionally with the feature dimension, we investigate the out-of-sample prediction risks of the sketched and bagged least squares estimators in both the underparameterized and overparameterized regimes. Our results reveal the statistical roles of sketching and bagging. In particular, sketching modifies the aspect ratio and shifts the interpolation threshold of the minimum-norm estimator; however, the risk of the sketched estimator remains unbounded around the interpolation threshold due to excessive variance. In stark contrast, bagging effectively mitigates this variance, leading to a bounded limiting out-of-sample prediction risk. To further understand this stability improvement, we establish that bagging acts as a form of implicit regularization, substantiated by the equivalence of the bagged estimator with an explicitly regularized counterpart. We also discuss several extensions.
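
The following small self-contained simulation (our own illustration under a simple Gaussian design, not a result from the talk) shows the qualitative behavior described above near the interpolation threshold n = d: the single minimum-norm interpolator's test risk blows up, while the bagged estimator stays bounded and behaves like an explicitly ridge-regularized fit. The subsampling rate and the ridge level lam are arbitrary assumed values, so the exact numbers will vary.

    # Toy comparison near the interpolation threshold (n = d): single
    # interpolator vs. Bernoulli-bagged estimator vs. explicit ridge.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, sigma = 200, 200, 1.0
    beta = rng.normal(size=d) / np.sqrt(d)
    X = rng.normal(size=(n, d))
    y = X @ beta + sigma * rng.normal(size=n)
    X_test = rng.normal(size=(5000, d))

    def risk(b):
        # Out-of-sample prediction risk relative to the true signal.
        return np.mean((X_test @ (b - beta)) ** 2)

    single = np.linalg.pinv(X) @ y                     # minimum-norm interpolator
    masks = [rng.random(n) < 0.7 for _ in range(100)]  # Bernoulli multipliers
    bagged = np.mean([np.linalg.pinv(X[m]) @ y[m] for m in masks], axis=0)
    lam = 0.5                                          # illustrative ridge level
    ridge = np.linalg.solve(X.T @ X + lam * n * np.eye(d), X.T @ y)
    print(f"risk: single={risk(single):.2f}, bagged={risk(bagged):.2f}, "
          f"ridge={risk(ridge):.2f}")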



Bio:

Qiang is currently an Associate Professor of Statistics at the University of Toronto (UofT), where he leads the StatsLE group. Motivated by challenges in the industrial sector, he is broadly interested in deep learning, ensemble learning, generative AI, reinforcement learning, transfer learning, and trustworthy AI. He currently serves as an Associate Editor (AE) for the Electronic Journal of Statistics (EJS) and as an Area Chair (AC) for various ML conferences.


Previously, he was an associate research scholar at Princeton University, and then an assistant professor at UofT. He received his PhD from the University of North Carolina at Chapel Hill (UNC-CH), and his BS from the University of Science and Technology of China (USTC).