Privacy
OSLO: One-Shot Label-Only Membership Inference Attacks
·2719 words·13 mins·
AI Theory
Privacy
🏢 University of Massachusetts Amherst
One-shot label-only attack (OSLO) achieves high membership inference accuracy with only one query, surpassing existing methods by a large margin.
Oracle-Efficient Differentially Private Learning with Public Data
·293 words·2 mins·
AI Theory
Privacy
🏢 MIT
This paper introduces computationally efficient algorithms for differentially private learning by leveraging public data, overcoming previous computational limitations and enabling broader practical a…
On the Computational Complexity of Private High-dimensional Model Selection
·1501 words·8 mins·
AI Theory
Privacy
🏢 University of Michigan
This paper proposes a computationally efficient, differentially private best subset selection method for high-dimensional sparse linear regression, achieving both strong statistical utility and provab…
On the Benefits of Public Representations for Private Transfer Learning under Distribution Shift
·1927 words·10 mins·
AI Generated
AI Theory
Privacy
🏢 Carnegie Mellon University
Public data boosts private AI accuracy even with extreme distribution shifts, improving private model training by up to 67% in three tasks. This is due to shared low-dimensional representations betwe…
On the Ability of Developers' Training Data Preservation of Learnware
·449 words·3 mins·
AI Theory
Privacy
🏢 Nanjing University
Learnware systems enable model reuse; this paper proves RKME specifications protect developers’ training data while enabling effective model identification.
On provable privacy vulnerabilities of graph representations
·4156 words·20 mins·
AI Theory
Privacy
🏢 Ant Group
Graph representation learning’s structural vulnerabilities are proven and mitigated via noisy aggregation, revealing crucial privacy-utility trade-offs.
On Differentially Private U Statistics
·403 words·2 mins·
AI Theory
Privacy
🏢 UC San Diego
New algorithms achieve near-optimal differentially private U-statistic estimation, significantly improving accuracy over existing methods.
On Differentially Private Subspace Estimation in a Distribution-Free Setting
·418 words·2 mins·
AI Theory
Privacy
🏢 Georgetown University
This paper presents novel measures quantifying data easiness for DP subspace estimation, supporting them with improved upper and lower bounds and a practical algorithm.
Noisy Dual Mirror Descent: A Near Optimal Algorithm for Jointly-DP Convex Resource Allocation
·2148 words·11 mins·
AI Generated
AI Theory
Privacy
🏢 Nanyang Business School, Nanyang Technological University
A near-optimal algorithm for jointly differentially private convex resource allocation is introduced, achieving improved accuracy and privacy guarantees.
Noise-Aware Differentially Private Regression via Meta-Learning
·3336 words·16 mins·
AI Generated
AI Theory
Privacy
🏢 University of Helsinki
Meta-learning and differential privacy combine to enable accurate, well-calibrated private regression, even with limited data, via the novel DPConvCNP model.
Nimbus: Secure and Efficient Two-Party Inference for Transformers
·3036 words·15 mins·
AI Generated
AI Theory
Privacy
🏢 Shanghai Jiao Tong University
Nimbus achieves a 2.7-4.7x speedup in BERT-base inference using novel two-party computation techniques for efficient matrix multiplication and non-linear layer approximation.
Nearly Tight Black-Box Auditing of Differentially Private Machine Learning
·1819 words·9 mins·
AI Theory
Privacy
🏢 University College London
This paper presents a new auditing method for DP-SGD that provides substantially tighter black-box privacy analyses than previous methods, yielding significantly closer empirical estimates to theoreti…
Locally Private and Robust Multi-Armed Bandits
·1621 words·8 mins·
AI Theory
Privacy
🏢 Wayne State University
This research unveils a fundamental interplay between local differential privacy (LDP) and robustness against data corruption and heavy-tailed rewards in multi-armed bandits, offering a tight characte…
Langevin Unlearning: A New Perspective of Noisy Gradient Descent for Machine Unlearning
·1899 words·9 mins·
AI Theory
Privacy
🏢 Georgia Institute of Technology
Langevin unlearning offers a novel, privacy-preserving machine unlearning framework based on noisy gradient descent, handling both convex and non-convex problems efficiently.
Instance-Specific Asymmetric Sensitivity in Differential Privacy
·1985 words·10 mins·
AI Theory
Privacy
🏢 Mozilla
New algorithm improves differentially private estimations by adapting to dataset hardness, enhancing accuracy for variance, classification, and regression tasks.
Instance-Optimal Private Density Estimation in the Wasserstein Distance
·338 words·2 mins·
AI Theory
Privacy
🏢 Apple
This paper introduces instance-optimal private density estimation algorithms that adapt to data characteristics for improved accuracy in the Wasserstein distance.
Faster Differentially Private Top-$k$ Selection: A Joint Exponential Mechanism with Pruning
·1673 words·8 mins·
AI Theory
Privacy
🏢 University of Waterloo
Faster differentially private top-k selection achieved via a novel joint exponential mechanism with pruning, reducing time complexity from O(dk) to O(d + (k²/ε) ln d).
Faster Algorithms for User-Level Private Stochastic Convex Optimization
·1097 words·6 mins·
AI Theory
Privacy
🏢 University of Wisconsin-Madison
Faster algorithms achieve optimal excess risk in user-level private stochastic convex optimization, overcoming limitations of prior methods without restrictive assumptions.
Extracting Training Data from Molecular Pre-trained Models
·2322 words·11 mins·
AI Generated
AI Theory
Privacy
🏢 Zhejiang University
Researchers reveal a high risk of training data extraction from molecular pre-trained models, challenging the assumption that model sharing alone adequately protects against data theft.
Exactly Minimax-Optimal Locally Differentially Private Sampling
·1615 words·8 mins·
AI Theory
Privacy
🏢 KAIST
This paper provides the first exact minimax-optimal mechanisms for locally differentially private sampling, applicable across all f-divergences.