Privacy

Universal Exact Compression of Differentially Private Mechanisms
1481 words · 7 mins
AI Theory Privacy 🏢 Stanford University
Poisson Private Representation (PPR) enables exact compression of any local differential privacy mechanism, achieving order-wise optimal trade-offs between communication, accuracy, and privacy.
Unified Mechanism-Specific Amplification by Subsampling and Group Privacy Amplification
4228 words · 20 mins
AI Generated AI Theory Privacy 🏢 Technical University of Munich
This paper presents a novel framework for achieving tighter differential privacy guarantees via mechanism-specific amplification using subsampling.
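As background for the subsampling idea, the classical, mechanism-agnostic bound says that running an (ε, δ)-DP mechanism on a Poisson subsample that includes each record with probability q yields (log(1 + q(e^ε − 1)), qδ)-DP. The paper's contribution is a tighter, mechanism-specific analysis; the helper below is illustrative and computes only the generic bound.

```python
import math

def amplified_epsilon(eps: float, q: float) -> float:
    """Classical amplification by subsampling: an (eps, delta)-DP
    mechanism run on a Poisson subsample with inclusion probability q
    satisfies (log(1 + q*(exp(eps) - 1)), q*delta)-DP."""
    return math.log(1.0 + q * (math.exp(eps) - 1.0))

# Example: eps = 1.0 shrinks to about 0.017 at sampling rate q = 0.01.
print(amplified_epsilon(1.0, 0.01))
```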
Truthful High Dimensional Sparse Linear Regression
282 words · 2 mins
AI Theory Privacy 🏢 King Abdullah University of Science and Technology
This paper presents a novel, truthful, and privacy-preserving mechanism for high-dimensional sparse linear regression, incentivizing data contribution while safeguarding individual privacy.
Trap-MID: Trapdoor-based Defense against Model Inversion Attacks
3599 words · 17 mins
AI Generated AI Theory Privacy 🏢 National Taiwan University
Trap-MID: Outsmarting model inversion attacks with cleverly placed 'trapdoors'!
The Limits of Differential Privacy in Online Learning
440 words · 3 mins
AI Theory Privacy 🏢 Hong Kong University of Science and Technology
This paper reveals fundamental limits of differential privacy in online learning, demonstrating a clear separation between pure, approximate, and non-private settings.
Scalable DP-SGD: Shuffling vs. Poisson Subsampling
2155 words · 11 mins
AI Generated AI Theory Privacy 🏢 Google Research
This paper reveals significant privacy gaps in shuffling-based DP-SGD, proposes a scalable Poisson subsampling method, and demonstrates its superior utility for private model training.
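For context, Poisson subsampling, the sampling model most DP-SGD privacy accountants assume, includes each example in a step's batch independently with probability q, so batch sizes are random; shuffling instead fixes batch sizes, which is exactly where the privacy gap the paper identifies arises. A minimal sketch of Poisson batch selection (illustrative, not the paper's implementation):

```python
import numpy as np

def poisson_batches(n: int, q: float, steps: int, seed: int = 0):
    """Yield index arrays where each of n examples is included
    independently with probability q; batch sizes are Binomial(n, q)."""
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        yield np.flatnonzero(rng.random(n) < q)

# Expected batch size is n*q, e.g. ~100 for n=10_000 and q=0.01.
for batch in poisson_batches(10_000, 0.01, steps=3):
    print(len(batch))
```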
Sample-Efficient Private Learning of Mixtures of Gaussians
256 words · 2 mins
AI Theory Privacy 🏢 McMaster University
Researchers achieve a breakthrough in privacy-preserving machine learning by developing sample-efficient algorithms for learning Gaussian Mixture Models, significantly reducing the data needed while maintaining privacy guarantees.
Revisiting Differentially Private ReLU Regression
1421 words · 7 mins
AI Theory Privacy 🏢 KAUST
The differentially private ReLU regression algorithms DP-GLMtron and DP-TAGLMtron achieve comparable utility, paying only an additional O(log N) factor in the utility upper bound.
Reimagining Mutual Information for Enhanced Defense against Data Leakage in Collaborative Inference
1566 words · 8 mins
AI Theory Privacy 🏢 Department of Electrical and Computer Engineering, Duke University
InfoScissors defends collaborative inference from data leakage by cleverly reducing the mutual information between model outputs and sensitive device data, ensuring robust privacy without compromising model utility.
Reconstruction Attacks on Machine Unlearning: Simple Models are Vulnerable
2340 words · 11 mins
AI Theory Privacy 🏢 Amazon
Deleting data from machine learning models exposes individuals to highly accurate reconstruction attacks, even when the models are simple, as this research demonstrates.
Public-data Assisted Private Stochastic Optimization: Power and Limitations
337 words · 2 mins
AI Generated AI Theory Privacy 🏢 Meta
Leveraging public data enhances differentially private (DP) learning, but its limits are unclear. This paper establishes tight theoretical bounds for DP stochastic convex optimization, revealing when public data provably helps and when it cannot.
Pseudo-Private Data Guided Model Inversion Attacks
4550 words · 22 mins
AI Generated AI Theory Privacy 🏢 University of Texas at Austin
Pseudo-Private Data Guided Model Inversion (PPDG-MI) significantly improves model inversion attacks by dynamically tuning the generative model to increase the sampling probability of actual private data.
PrivCirNet: Efficient Private Inference via Block Circulant Transformation
3185 words · 15 mins
AI Theory Privacy 🏢 Peking University
PrivCirNet accelerates private deep learning inference by cleverly transforming DNN weights into circulant matrices, converting matrix-vector multiplications into efficient 1D convolutions suitable for homomorphic encryption.
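The linear-algebra fact behind the speedup is standard: a circulant matrix is diagonalized by the discrete Fourier transform, so a circulant matrix-vector product is a circular convolution costing O(n log n) instead of O(n²). A minimal plaintext NumPy sketch of that identity only (not the paper's private-inference protocol):

```python
import numpy as np

def circulant_matvec(c: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Multiply by the circulant matrix whose first column is c, using
    the diagonalization identity C @ x = IFFT(FFT(c) * FFT(x))."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

rng = np.random.default_rng(0)
n = 8
c, x = rng.standard_normal(n), rng.standard_normal(n)
C = np.stack([np.roll(c, k) for k in range(n)], axis=1)  # explicit circulant
assert np.allclose(C @ x, circulant_matvec(c, x))  # O(n log n) vs O(n^2)
```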
Private Stochastic Convex Optimization with Heavy Tails: Near-Optimality from Simple Reductions
397 words · 2 mins
AI Theory Privacy 🏢 Apple
Achieving near-optimal rates for differentially private stochastic convex optimization with heavy-tailed gradients is possible using simple reduction-based techniques.
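One flavor of simple reduction in this setting (a sketch under my own assumptions, not necessarily the paper's construction): clip each per-sample gradient to an L2 bound so the batch sum has bounded sensitivity even under heavy tails, then add Gaussian noise calibrated to that bound.

```python
import numpy as np

def clipped_noisy_mean(grads, clip, sigma, rng=None):
    """Clip each per-sample gradient to L2 norm <= clip, bounding the
    sum's sensitivity at clip even for heavy-tailed gradients, then
    add Gaussian noise scaled to clip. Illustrative sketch only; the
    calibration here is not the paper's."""
    rng = rng or np.random.default_rng(0)
    norms = np.maximum(np.linalg.norm(grads, axis=1, keepdims=True), 1e-12)
    clipped = grads * np.minimum(1.0, clip / norms)
    noise = rng.normal(0.0, sigma * clip, size=grads.shape[1])
    return (clipped.sum(axis=0) + noise) / len(grads)
```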
Private Online Learning via Lazy Algorithms
475 words · 3 mins
AI Generated AI Theory Privacy 🏢 Apple
A new transformation converts lazy, low-switching online learning algorithms into differentially private ones, yielding improved rates for private online learning.
Private Geometric Median
1335 words · 7 mins
AI Theory Privacy 🏢 Khoury College of Computer Sciences, Northeastern University
This paper introduces new differentially private algorithms to compute the geometric median, achieving improved accuracy by scaling with the effective data diameter instead of a known radius.
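For reference, the geometric median is the point minimizing the sum of Euclidean distances to the data, classically computed by Weiszfeld's iteratively reweighted averaging. Below is a minimal non-private baseline; the paper's contribution is the DP version, which this sketch does not implement.

```python
import numpy as np

def weiszfeld(points: np.ndarray, iters: int = 100) -> np.ndarray:
    """Weiszfeld's algorithm for argmin_z sum_i ||z - x_i||_2
    (non-private baseline only)."""
    z = points.mean(axis=0)
    for _ in range(iters):
        d = np.linalg.norm(points - z, axis=1)
        w = 1.0 / np.maximum(d, 1e-12)  # guard against z hitting a point
        z = (w[:, None] * points).sum(axis=0) / w.sum()
    return z

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [10.0, 10.0]])
print(weiszfeld(pts))  # stays near the cluster, unlike the mean
```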
Private Edge Density Estimation for Random Graphs: Optimal, Efficient and Robust
261 words · 2 mins
AI Theory Privacy 🏢 ETH Zurich
This paper delivers a groundbreaking polynomial-time algorithm for optimally estimating edge density in random graphs while ensuring node privacy and robustness against data corruption.
Privacy without Noisy Gradients: Slicing Mechanism for Generative Model Training
1432 words · 7 mins
AI Theory Privacy 🏢 MIT-IBM Watson AI Lab, IBM Research
Train high-quality generative models with strong differential privacy using a novel slicing mechanism that injects noise into random low-dimensional data projections, avoiding noisy gradients.
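A loose sketch of the slicing idea described above, under my own simplifying assumptions (random unit directions, unscaled Gaussian noise; the paper's actual mechanism and privacy calibration differ): privatize low-dimensional projections of the data once, then fit the generator to those noisy statistics rather than training with noisy per-step gradients.

```python
import numpy as np

def noisy_random_projections(data, n_slices, sigma, rng):
    """Project d-dimensional records onto random unit directions and add
    Gaussian noise to each 1-D projection. Illustrative sketch of the
    slicing idea; not the paper's exact mechanism or noise calibration."""
    dirs = rng.standard_normal((n_slices, data.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit-norm slices
    proj = data @ dirs.T                                 # shape (n, n_slices)
    return proj + rng.normal(0.0, sigma, size=proj.shape), dirs

rng = np.random.default_rng(0)
noisy_proj, dirs = noisy_random_projections(
    rng.standard_normal((1000, 16)), n_slices=32, sigma=0.5, rng=rng)
```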
Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models
1757 words · 9 mins
AI Theory Privacy 🏢 Google DeepMind
Researchers reveal ‘privacy backdoors,’ a new attack that exploits pre-trained models to leak user training data, highlighting critical vulnerabilities and prompting stricter model security measures.
Prior-itizing Privacy: A Bayesian Approach to Setting the Privacy Budget in Differential Privacy
1883 words · 9 mins
AI Theory Privacy 🏢 Department of Statistical Science
This paper introduces a Bayesian approach to setting the privacy budget in differential privacy, enabling agencies to balance data utility and confidentiality by customizing risk profiles.