Towards Differentially Private Deep Learning under Hidden State Assumption
Date: 2024-07-30
  • Speaker: Chen Liu (City University of Hong Kong)
  • Time: August 2, 2024, 10:30 (Beijing time)
  • Venue: Conference Room, 13th Floor, Administration Building, Zijingang Campus, Zhejiang University
  • Host: Center for Data Science, Zhejiang University

Abstract: We present a novel approach called differentially private stochastic block coordinate descent (DP-SBCD) for training neural networks with provable guarantees of differential privacy under the hidden state assumption. Our method incorporates Lipschitz neural networks and decomposes training into sub-problems, each corresponding to the training of a specific layer. By doing so, we extend the analysis of differential privacy under the hidden state assumption to non-convex problems. Furthermore, in contrast to existing methods, we inject calibrated noise sampled from adaptive distributions, yielding improved empirical trade-offs between utility and privacy.
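The abstract's core idea, layer-wise block coordinate descent with clipped gradients and adaptively scaled noise, can be sketched as follows. This is a hypothetical illustration, not the paper's actual DP-SBCD algorithm: the function name `dp_sbcd_step`, the per-layer `sigma_schedule`, and the Gaussian noise mechanism are all assumptions made for exposition.

```python
import numpy as np

def dp_sbcd_step(layers, grads, lr=0.1, clip=1.0, sigma_schedule=None, rng=None):
    """Illustrative sketch of one DP-SBCD-style step (hypothetical, simplified).

    Each layer's parameters form one coordinate block. The block gradient is
    clipped to bound sensitivity, then Gaussian noise is added; the noise scale
    sigma may differ per block, standing in for the paper's adaptive
    distributions.
    """
    rng = rng or np.random.default_rng(0)
    new_layers = []
    for i, (w, g) in enumerate(zip(layers, grads)):
        norm = np.linalg.norm(g)
        g_clipped = g / max(1.0, norm / clip)            # bound gradient sensitivity
        sigma = sigma_schedule[i] if sigma_schedule else 1.0
        noise = rng.normal(0.0, sigma * clip, size=w.shape)
        new_layers.append(w - lr * (g_clipped + noise))  # noisy block update
    return new_layers
```

With `sigma_schedule=[0.0, 0.0]` the update reduces to plain per-block clipped gradient descent, which makes the clipping behavior easy to check in isolation.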



Bio: Chen Liu is currently an assistant professor in the Department of Computer Science, City University of Hong Kong. He obtained his Ph.D. degree in 2022 from EPFL under the supervision of Prof. Sabine Süsstrunk and Dr. Mathieu Salzmann. His research focuses on machine learning, optimization, adversarial robustness, and differential privacy. More information is available on his website: liuchen1993.cn.