Privacy
Efficient and Private Marginal Reconstruction with Local Non-Negativity
·1932 words·10 mins·
AI Generated
AI Theory
Privacy
🏢 University of Massachusetts, Amherst
Efficiently and privately reconstructing marginal queries from noisy data using residuals improves the accuracy of existing differential privacy mechanisms.
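As a rough illustration of why local non-negativity helps, the NumPy sketch below (toy counts, illustrative noise scale) adds Gaussian noise to a sparse marginal and projects the answer onto the non-negative orthant; the paper's residual-based reconstruction is considerably more sophisticated than this projection.

```python
# Hedged sketch: reconstructing a noisy marginal with local non-negativity.
# Illustrates the general idea only; the paper's residual-based machinery
# goes well beyond this toy projection.
import numpy as np

rng = np.random.default_rng(0)

# A true 1-way marginal (histogram counts) over 8 categories.
true_marginal = np.array([120, 40, 0, 0, 65, 3, 0, 290], dtype=float)

# Gaussian-mechanism measurement (sigma chosen for illustration only).
sigma = 10.0
noisy = true_marginal + rng.normal(scale=sigma, size=true_marginal.shape)

# Local non-negativity: project the answer onto the non-negative orthant.
projected = np.clip(noisy, 0.0, None)

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

print(f"RMSE without projection: {rmse(noisy, true_marginal):.2f}")
print(f"RMSE with projection:    {rmse(projected, true_marginal):.2f}")
```

The projection pays off exactly when many true counts are zero, which is the common case for high-dimensional marginals.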
DOPPLER: Differentially Private Optimizers with Low-pass Filter for Privacy Noise Reduction
·2545 words·12 mins·
AI Theory
Privacy
🏢 University of Southern California
DOPPLER, a novel low-pass filter, significantly enhances differentially private (DP) optimizer performance by reducing the impact of privacy noise, bridging the gap between DP and non-DP training.
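The filtering idea can be sketched with the simplest possible low-pass filter, an exponential moving average over noisy clipped gradients. DOPPLER designs its filter in the frequency domain, so the sketch below (plain NumPy, assumed coefficients) is only a stand-in for the general principle.

```python
# Hedged sketch: smoothing DP noise with a first-order low-pass filter.
# DOPPLER's filter is designed in the frequency domain; the exponential
# moving average below is only the simplest low-pass stand-in.
import numpy as np

rng = np.random.default_rng(1)

def noisy_grad(w):
    # Toy clipped gradient of f(w) = 0.5 * ||w||^2 plus DP-style noise.
    g = w / max(1.0, np.linalg.norm(w))  # clip to norm 1
    return g + rng.normal(scale=0.5, size=w.shape)

w = np.ones(10)
filtered = np.zeros_like(w)
beta, lr = 0.9, 0.1  # filter coefficient and step size (assumed values)

for _ in range(200):
    g = noisy_grad(w)
    filtered = beta * filtered + (1 - beta) * g  # low-pass: damp high-freq noise
    w -= lr * filtered

print(f"final ||w|| = {np.linalg.norm(w):.3f}")  # should end up small
```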
Dimension-free Private Mean Estimation for Anisotropic Distributions
·233 words·2 mins·
AI Generated
AI Theory
Privacy
🏢 UC Berkeley
Dimension-free private mean estimation is achieved for anisotropic data, breaking the curse of dimensionality in privacy-preserving high-dimensional analysis.
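A minimal sketch of the anisotropic idea, assuming per-coordinate clipping with the Laplace mechanism (the paper's estimator and analysis are more refined): scale clip thresholds and noise to each coordinate's spread, so the error tracks the summed scales rather than dimension times the largest one.

```python
# Hedged sketch: per-coordinate noise for anisotropic private mean estimation.
import numpy as np

rng = np.random.default_rng(2)
n, d = 10_000, 100
scales = 1.0 / (np.arange(1, d + 1) ** 1.5)  # strongly anisotropic spreads
X = rng.normal(scale=scales, size=(n, d))

eps = 1.0
clip = 3.0 * scales                      # assumed per-coordinate clip radius
Xc = np.clip(X, -clip, clip)
sens = 2 * clip.sum() / n                # L1 sensitivity of the clipped mean
noise = rng.laplace(scale=sens / eps, size=d)
private_mean = Xc.mean(axis=0) + noise

print(f"L2 error: {np.linalg.norm(private_mean):.4f}")  # true mean is 0
```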
Differentially Private Stochastic Gradient Descent with Fixed-Size Minibatches: Tighter RDP Guarantees with or without Replacement
·2045 words·10 mins·
AI Theory
Privacy
🏢 Texas State University
Tighter Rényi differential privacy (RDP) guarantees are achieved for DP-SGD with fixed-size minibatches, improving private deep learning model training.
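To make the sampling scheme concrete, here is a hedged sketch of one DP-SGD step with a fixed-size minibatch drawn without replacement, with per-example clipping and Gaussian noise. The paper's contribution is the tighter RDP accounting for exactly this scheme, which the sketch does not reproduce.

```python
# Hedged sketch: DP-SGD with fixed-size minibatches sampled without
# replacement (vs. Poisson sampling). The RDP accounting is omitted.
import numpy as np

rng = np.random.default_rng(3)

def dp_sgd_step(w, X, y, batch_size, clip_norm, sigma, lr):
    idx = rng.choice(len(X), size=batch_size, replace=False)
    grads = []
    for i in idx:
        # Per-example gradient of squared loss for a linear model.
        g = (X[i] @ w - y[i]) * X[i]
        g *= min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))  # clip
        grads.append(g)
    noise = rng.normal(scale=sigma * clip_norm, size=w.shape)
    return w - lr * (np.sum(grads, axis=0) + noise) / batch_size

X = rng.normal(size=(1000, 5))
w_true = np.arange(5.0)
y = X @ w_true
w = np.zeros(5)
for _ in range(500):
    w = dp_sgd_step(w, X, y, batch_size=64, clip_norm=1.0, sigma=1.0, lr=0.5)
print(np.round(w, 2))  # approaches w_true = [0, 1, 2, 3, 4]
```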
Differentially Private Set Representations
·1424 words·7 mins·
AI Generated
AI Theory
Privacy
🏢 Google
Differentially private set representations achieve optimal privacy-utility tradeoffs with exponentially smaller error than prior histogram methods.
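For contrast, a classic baseline for private set representations is a Bloom filter with randomized response on its bits (RAPPOR-style); the paper's construction achieves exponentially smaller error than approaches in this family. A hedged sketch of the baseline interface, with assumed parameters M and EPS:

```python
# Hedged sketch: Bloom filter + randomized response, a classic baseline
# for private set representations. Per-bit budget only; composition over
# the k hash positions of one item is glossed over here.
import hashlib
import numpy as np

rng = np.random.default_rng(4)
M, EPS = 4096, 2.0  # bit-array size and per-bit budget (assumed)

def positions(item, k=4):
    return [int.from_bytes(hashlib.sha256(f"{item}:{i}".encode()).digest()[:4],
                           "big") % M for i in range(k)]

def privatize(items):
    bits = np.zeros(M, dtype=bool)
    for it in items:
        bits[positions(it)] = True
    # Randomized response: keep each bit with prob e^eps / (1 + e^eps).
    keep = rng.random(M) < np.exp(EPS) / (1 + np.exp(EPS))
    return np.where(keep, bits, ~bits)

noisy_bits = privatize({"alice", "bob", "carol"})

def score(item):
    # Fraction of the item's hash positions set; near 1.0 suggests membership.
    return np.mean(noisy_bits[positions(item)])

print("alice:", score("alice"), " mallory:", score("mallory"))
```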
Differentially Private Reinforcement Learning with Self-Play
·347 words·2 mins·
AI Theory
Privacy
🏢 UC San Diego
This paper presents DP-Nash-VI, a novel algorithm ensuring trajectory-wise privacy in multi-agent reinforcement learning, achieving near-optimal regret bounds under both joint and local differential privacy.
Differentially Private Optimization with Sparse Gradients
·1282 words·7 mins·
AI Theory
Privacy
🏢 Google Research
This paper presents new, nearly optimal differentially private algorithms for handling sparse gradients, significantly improving efficiency and scalability in large embedding models.
Differentially Private Graph Diffusion with Applications in Personalized PageRanks
·1969 words·10 mins·
AI Theory
Privacy
🏢 Georgia Institute of Technology
This paper introduces a novel differentially private graph diffusion framework ensuring edge-level privacy, significantly improving utility-privacy trade-offs for personalized PageRank computation.
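A hedged sketch of the underlying pattern, noisy diffusion for personalized PageRank: run the usual power iteration and perturb each step. The paper's framework calibrates this noise through a dedicated edge-level sensitivity analysis; the sigma below is purely illustrative.

```python
# Hedged sketch: personalized PageRank via noisy power iteration.
import numpy as np

rng = np.random.default_rng(5)

# A small directed graph as a row-stochastic transition matrix.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
P = A / A.sum(axis=1, keepdims=True)

alpha, sigma, T = 0.15, 0.01, 50
e = np.zeros(4)
e[0] = 1.0                            # personalization on node 0
r = e.copy()
for _ in range(T):
    r = alpha * e + (1 - alpha) * (P.T @ r)
    r += rng.normal(scale=sigma, size=r.shape)  # per-step privacy noise
    r = np.clip(r, 0, None)
    r /= r.sum()                                # keep r a distribution

print(np.round(r, 3))
```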
Differentially Private Equivalence Testing for Continuous Distributions and Applications
·583 words·3 mins·
AI Generated
AI Theory
Privacy
🏢 Bar-Ilan University
First differentially private algorithm for testing equivalence between continuous distributions, enabling privacy-preserving comparisons of sensitive data.
Differential Privacy in Scalable General Kernel Learning via $K$-means Nyström Random Features
·1468 words·7 mins·
AI Generated
AI Theory
Privacy
🏢 KAIST
Differentially private scalable kernel learning is achieved via a novel DP K-means Nyström method, enabling efficient and accurate model training for general kernels while safeguarding privacy.
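A minimal sketch of the pipeline, with plain (non-private) K-means from SciPy standing in for the paper's DP K-means step:

```python
# Hedged sketch: Nystroem random features with K-means landmarks.
# The paper makes the K-means step differentially private; plain
# (non-private) K-means stands in for it here.
import numpy as np
from scipy.cluster.vq import kmeans
from scipy.linalg import sqrtm

rng = np.random.default_rng(6)
X = rng.normal(size=(500, 10))
gamma, m = 0.5, 20

landmarks, _ = kmeans(X, m, seed=7)   # would be DP K-means in the paper

def rbf(A, B):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

W = rbf(landmarks, landmarks)                           # m x m landmark kernel
Z = rbf(X, landmarks) @ np.linalg.pinv(sqrtm(W).real)   # feature map

# Z @ Z.T approximates the full kernel matrix rbf(X, X).
err = np.abs(Z @ Z.T - rbf(X, X)).mean()
print(f"mean absolute kernel approximation error: {err:.4f}")
```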
Debiasing Synthetic Data Generated by Deep Generative Models
·3364 words·16 mins·
AI Theory
Privacy
🏢 Ghent University Hospital - SYNDARA
Debiasing synthetic data generated by deep generative models enhances statistical convergence rates, yielding reliable results for specific analyses.
Credit Attribution and Stable Compression
·299 words·2 mins·
AI Theory
Privacy
🏢 Tel Aviv University
New definitions of differential privacy enable machine learning algorithms to credit sources appropriately, balancing data utility and copyright compliance.
Continual Counting with Gradual Privacy Expiration
·2038 words·10 mins·
AI Generated
AI Theory
Privacy
🏢 Basic Algorithms Research Copenhagen
Continual counting with gradual privacy expiration: A new algorithm achieves optimal accuracy with exponentially decaying privacy!
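For background, the standard baseline here is the binary-tree (tree aggregation) mechanism for continual counting, sketched below with an assumed per-node Laplace scale; the paper's mechanism instead lets the guarantee for old items decay gradually over time.

```python
# Hedged sketch: the classic binary-tree mechanism for continual counting,
# the baseline this paper builds on.
import numpy as np

rng = np.random.default_rng(8)
T, eps = 64, 1.0
levels = int(np.log2(T))
# Each item appears in log2(T) + 1 tree nodes, so split the budget.
scale = (levels + 1) / eps            # Laplace scale per tree node

stream = rng.integers(0, 2, size=T)   # bits arriving one per step
noisy_node = {}                       # (level, index) -> noisy partial sum

def node_sum(level, index):
    lo, hi = index * 2**level, (index + 1) * 2**level
    if (level, index) not in noisy_node:
        noisy_node[(level, index)] = stream[lo:hi].sum() + rng.laplace(scale=scale)
    return noisy_node[(level, index)]

def running_count(t):
    # Decompose [0, t) into O(log T) dyadic intervals, sum their noisy nodes.
    total, pos = 0.0, 0
    for level in reversed(range(levels + 1)):
        if pos + 2**level <= t:
            total += node_sum(level, pos // 2**level)
            pos += 2**level
    return total

print(f"true count:  {stream[:37].sum()}")
print(f"noisy count: {running_count(37):.1f}")
```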
Certified Machine Unlearning via Noisy Stochastic Gradient Descent
·2364 words·12 mins·
AI Generated
AI Theory
Privacy
🏢 Georgia Institute of Technology
This paper introduces a novel machine unlearning method using projected noisy stochastic gradient descent, providing the first approximate unlearning guarantee under convexity and significantly improving over prior approaches.
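A hedged sketch of the mechanic, assuming a convex squared loss and illustrative constants: train with projected noisy SGD, delete records, then continue the same dynamics on the retained data until the model distribution is close to retraining from scratch. How many continuation steps suffice is exactly what the paper certifies.

```python
# Hedged sketch: unlearning by continuing projected noisy SGD on the
# retained data. Step counts and noise scale are illustrative only.
import numpy as np

rng = np.random.default_rng(9)

def noisy_projected_sgd(w, X, y, steps, lr=0.1, sigma=0.05, radius=10.0):
    for _ in range(steps):
        i = rng.integers(len(X))
        grad = (X[i] @ w - y[i]) * X[i]            # convex squared loss
        w = w - lr * grad + rng.normal(scale=sigma, size=w.shape)
        norm = np.linalg.norm(w)                   # project onto a ball
        if norm > radius:
            w *= radius / norm
    return w

X = rng.normal(size=(200, 5))
y = X @ np.ones(5)
w = noisy_projected_sgd(np.zeros(5), X, y, steps=2000)   # initial training

keep = np.arange(len(X)) >= 20            # delete the first 20 examples
w_unlearned = noisy_projected_sgd(w, X[keep], y[keep], steps=300)
print(np.round(w_unlearned, 2))
```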
Can Graph Neural Networks Expose Training Data Properties? An Efficient Risk Assessment Approach
·1856 words·9 mins·
AI Theory
Privacy
🏢 Zhejiang University
A new, efficient attack reveals properties of a GNN's training data, enabling practical privacy risk assessment.
Banded Square Root Matrix Factorization for Differentially Private Model Training
·4880 words·23 mins·
AI Theory
Privacy
🏢 Institute of Science and Technology Austria (ISTA)
This paper introduces BSR, a novel banded square root matrix factorization for differentially private model training. Unlike existing methods, BSR avoids computationally expensive optimization, enabling efficient private training at scale.
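The square root of the prefix-sum workload matrix has a closed form (a lower-triangular Toeplitz matrix with entries $\binom{2k}{k}/4^k$), which is what makes BSR cheap to compute. A sketch, with the banding width as an assumed parameter; the paper's utility analysis of this banding is the actual contribution:

```python
# Hedged sketch: square root of the prefix-sum workload and a banded
# version of it, as used in matrix-factorization DP training.
import numpy as np
from scipy.special import comb

n, bands = 8, 3
# A is the prefix-sum workload: A @ x returns all running sums of x.
A = np.tril(np.ones((n, n)))

# Closed-form square root of A: Toeplitz with f(k) = C(2k, k) / 4^k.
f = np.array([comb(2 * k, k) / 4.0**k for k in range(n)])
idx = np.subtract.outer(np.arange(n), np.arange(n))
B = np.where(idx >= 0, f[np.clip(idx, 0, None)], 0.0)

assert np.allclose(B @ B, A)          # B is indeed a square root of A

B_banded = np.where((idx >= 0) & (idx < bands), B, 0.0)  # keep `bands` diagonals
C = np.linalg.solve(B_banded, A)      # factorization A = B_banded @ C;
print(np.round(B_banded, 3))          # noise enters as B_banded @ z
```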
Auditing Privacy Mechanisms via Label Inference Attacks
·1460 words·7 mins·
AI Theory
Privacy
🏢 Google Research
New metrics audit label privatization, revealing differentially private schemes often outperform heuristic methods in the privacy-utility tradeoff.
Attack-Aware Noise Calibration for Differential Privacy
·2558 words·13 mins·
AI Generated
AI Theory
Privacy
🏢 Lausanne University Hospital
This research boosts model accuracy in privacy-preserving applications by introducing noise calibration methods that directly target a desired attack risk level, bypassing conventional privacy-budget calibration.
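The simplest instance of attack-aware calibration can be sketched with Gaussian differential privacy: for a $\mu$-GDP Gaussian mechanism the best membership-inference advantage (TPR minus FPR) is $2\Phi(\mu/2) - 1$, so a target advantage can be inverted directly into a noise scale. The paper covers a broader family of attacks and risk measures; this is only the one-release Gaussian case.

```python
# Hedged sketch: calibrate Gaussian noise to a target membership-inference
# advantage via Gaussian DP, instead of picking (epsilon, delta) first.
from scipy.stats import norm

def sigma_for_advantage(target_adv, sensitivity=1.0):
    # For a mu-GDP Gaussian mechanism, the best attack's advantage
    # (TPR - FPR) is 2 * Phi(mu / 2) - 1. Invert for mu, then sigma.
    mu = 2.0 * norm.ppf((target_adv + 1.0) / 2.0)
    return sensitivity / mu

for adv in (0.01, 0.05, 0.2):
    print(f"advantage <= {adv:<4} -> sigma = {sigma_for_advantage(adv):.2f}")
```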
A Huber Loss Minimization Approach to Mean Estimation under User-level Differential Privacy
·334 words·2 mins·
AI Generated
AI Theory
Privacy
🏢 Zhejiang Lab
Huber loss minimization ensures accurate and robust mean estimation under user-level differential privacy, especially for imbalanced datasets and heavy-tailed distributions.
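A hedged sketch of the recipe, assuming SciPy and an illustrative (not rigorously derived) sensitivity bound: average within each user, pool the per-user means through a Huber minimizer so heavy-tailed users cannot drag the estimate, then add noise scaled to user-level sensitivity.

```python
# Hedged sketch: user-level private mean estimation via Huber loss.
# The noise scale below is illustrative, not the paper's calibrated bound.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(10)

# 50 users with imbalanced sample counts and heavy-tailed observations.
users = [rng.standard_t(df=2, size=rng.integers(1, 200)) + 5.0
         for _ in range(50)]
user_means = np.array([u.mean() for u in users])

def huber_loss(theta, x, delta=1.0):
    r = np.abs(x - theta)
    return np.where(r <= delta, 0.5 * r**2, delta * (r - 0.5 * delta)).sum()

theta = minimize_scalar(lambda t: huber_loss(t, user_means)).x
eps, sensitivity = 1.0, 2.0 / len(users)   # assumed sensitivity bound
private_mean = theta + rng.laplace(scale=sensitivity / eps)
print(f"private mean estimate: {private_mean:.3f}  (true mean = 5)")
```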