【Paper Sharing】VLDB 2024 | A Differentially Private Gradient Descent Algorithm Based on Selective Update and Release
Paper: 《DPSUR: Accelerating Differentially Private Stochastic Gradient Descent Using Selective Update and Release》

Abstract: Machine learning models are known to memorize private data to reduce their training loss, which can be inadvertently exploited by privacy attacks such as model inversion and membership inference. To protect against these attacks, differential privacy (DP) has become the de facto standard for privacy-preserving machine learning, particularly for popular training algorithms using stochastic gradient descent, such as DPSGD. Nonetheless, DPSGD still suffers from severe utility loss due to its slow convergence. This is caused partly by random sampling, which brings bias and variance to the gradient, and partly by the Gaussian noise, which leads to fluctuation of gradient updates. Our key idea to address these issues is to apply selective updates to the model training, discarding those updates that are useless or even harmful. Motivated by this, this paper proposes DPSUR, a Differentially Private training framework based on Selective Updates and Release, where the gradient from each iteration is evaluated based on a validation test, and only those updates leading to convergence are applied to the model. As such, DPSUR keeps the training moving in the right direction and thus can achieve faster convergence than DPSGD. The main challenges lie in two aspects: privacy concerns arising from gradient evaluation, and the gradient selection strategy for model updates. To address these challenges, DPSUR introduces a clipping strategy for update randomization and a threshold mechanism for gradient selection. Experiments conducted on the MNIST, FMNIST, CIFAR-10, and IMDB datasets show that DPSUR significantly outperforms previous works in terms of convergence speed and model utility.
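The selective-update idea in the abstract — compute a clipped, noised gradient, then accept the candidate update only if a (noisy) validation test says it helps — can be sketched as follows. This is a minimal toy illustration on synthetic linear regression, not the paper's implementation: the hyperparameter names (`clip`, `sigma`, `sigma_v`, `tau`) are my own, and the real DPSUR must also account for the privacy budget consumed by the validation test, which this sketch glosses over.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem (stand-in for the paper's image/text benchmarks).
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=200)
X_val, y_val = X[:50], y[:50]      # held-out split used for the validation test
X_tr, y_tr = X[50:], y[50:]

def loss(w, X, y):
    return np.mean((X @ w - y) ** 2)

def noisy_clipped_grad(w, X, y, clip=1.0, sigma=1.0, batch=32):
    """DPSGD-style gradient: per-example clipping, then Gaussian noise."""
    idx = rng.choice(len(X), size=batch, replace=False)
    per_ex = 2 * (X[idx] @ w - y[idx])[:, None] * X[idx]   # per-example grads
    norms = np.linalg.norm(per_ex, axis=1, keepdims=True)
    per_ex *= np.minimum(1.0, clip / np.maximum(norms, 1e-12))  # clip to norm <= clip
    g = per_ex.sum(0) + sigma * clip * rng.normal(size=w.shape)  # Gaussian noise
    return g / batch

w = np.zeros(5)
lr, tau, sigma_v = 0.1, 0.0, 0.05   # step size, accept threshold, test noise
accepted = 0
for _ in range(300):
    g = noisy_clipped_grad(w, X_tr, y_tr)
    w_cand = w - lr * g
    # Selective release: the validation-loss change is itself randomized so
    # that the accept/reject decision does not leak the validation data.
    delta = loss(w_cand, X_val, y_val) - loss(w, X_val, y_val)
    delta += sigma_v * rng.normal()
    if delta < tau:                 # apply only updates that (noisily) help
        w = w_cand
        accepted += 1

print(f"accepted {accepted}/300 updates, final train loss {loss(w, X_tr, y_tr):.3f}")
```

The accept/reject test is what distinguishes this loop from plain DPSGD: harmful noise-dominated updates are simply discarded instead of being applied, which is the mechanism the abstract credits for faster convergence.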
【Group Meeting】Adaptive differentially private deep learning – 《An Adaptive and Fast Convergent Approach to DP DL》
【Paper Sharing】Differential privacy – 《Deep Learning with Differential Privacy》 – the key idea of the Moments Accountant
【Paper Sharing】DASFAA 2019 | FedSel: a local differential privacy federated learning algorithm based on Top-k dimension selection
【Textbook Sharing】The Laplace mechanism? The Gaussian mechanism? Pure differential privacy? Relaxed (approximate) differential privacy?
【Paper Sharing】USENIX 2023 | Differentially private graph data publishing using community information
【Paper Sharing】《Rényi Differential Privacy》
【Paper Sharing】Invited talk by a Xidian University PhD student: 《Local and Central DP for Robustness and Privacy in FL》
【Paper Sharing】《A Practical Local Differential Privacy Mechanism for Distribution Estimation》 | USENIX 2019
【Study Notes】《A Survey of Local Differential Privacy》 – LDP
【Paper Sharing】CAS PhD student Bochao Liu presents 《DPGEN: Differentially Private Generative Energy-Guided Network for Natural Image Synthesis》
【Textbook Sharing】Differential privacy – 《Differential Privacy: From Theory to Practice》 – Chapters 1 and 2
【Paper Sharing】《Collecting and Analyzing Multidimensional Data with Local Differential Privacy》 | ICDE 2019
【ASTAPLE DP Group】SIGMOD 2022 | LDP-IDS: Local Differential Privacy for Infinite Data Streams - Rong Du
【Paper Discussion】《Learning Differentially Private Language Models》 – client-level FL-DP
【论文分享】《Locally Differentially Private Protocols for Frequency Estimation》
【Textbook Sharing】Convergence-rate proofs for gradient descent in the strongly convex / convex / non-convex settings
【ASTAPLE DP Group】CSF 2017 | Rényi Differential Privacy - Jianping Cai
【Paper Sharing】A differentially private (exponential mechanism) collaborative filtering recommendation algorithm based on K-means clustering
【Group Meeting Report】Differential privacy – 《Semi-Supervised Knowledge Transfer for Deep Learning》 (PATE)
【Paper Sharing】《Graph Node Classification with DPSGD》 | ICLR 2023 (if you work on graph neural networks or graph federated learning with DP, feel free to get in touch)
【Paper Sharing】《Layer-Wise Model Aggregation for Personalized Federated Learning》 | CVPR 2022
【Paper Sharing】《Adaptive Federated Learning in Resource-Constrained Edge Computing Systems》 | JSAC (CCF A)
Neural networks, gradient descent, backpropagation: the three pillars of AI. Geoffrey Hinton, the godfather of AI, and his long road to recognition – Legends of the Masters
【Paper Report】Invited talk by Zhejiang University PhD student Haozhe Feng (Zhihu influencer "捡到一束光"): 《KD3A: A Decentralized Unsupervised Domain Adaptation Paradigm with Privacy Protection》 [ICML 2021]
【Group Meeting】《The Privacy Blanket of the Shuffle Model》
【Paper Sharing】Zhengliang Jiang of Hunan University presents 《Efficient Differentially Private Secure Aggregation for Federated Learning via the LWE Hardness Problem》 | USENIX Security '22
【ASTAPLE DP Group】Advanced Composition in Differential Privacy - Xun Ran
【Group Meeting】《Gaussian Differential Privacy》
【Reposted Recording】Zhejiang University Distinguished Research Fellow Chaochao Chen: 《Privacy-Preserving Machine Learning》
【Academic Writing】Experience sharing on writing English academic conference papers - Jeff
【Group Meeting】《Comprehensive Privacy Analysis of Deep Learning》 – passive and active membership inference attacks in centralized and federated settings
【Group Meeting Notes】《User-Level Privacy-Preserving Federated Learning: Analysis and Perform》
【Group Meeting Report】《Hiding Among the Clones: A Simple and Nearly Optimal Analysis of Privacy》
【ASTAPLE DP Group】SIGMOD 2020 | Numerical distribution estimation under local differential privacy - Liantong Yu
Deriving gradient descent from Taylor's formula
【Repost】Synthesizing Relational Data with Differential Privacy | Dr. Xiaokui Xiao
(Chinese/English subtitles) Absolutely the most complete and detailed 【Deep Learning Tutorial】 of 2024, taught by Stanford's Andrew Ng himself – worth bookmarking!
【Paper Discussion】《Distributed Gaussian Differential Privacy via Shuffling》
【转载】Three Flavors of Differentially Private Federated Learning | Dr. Yang Cao
【转载】Differential Privacy: Potential and Limitations | Prof. Ninghui Li