Robustness
Wide Two-Layer Networks can Learn from Adversarial Perturbations
·2045 words·10 mins·
AI Theory
Robustness
🏢 University of Tokyo
Wide two-layer neural networks can generalize well from mislabeled adversarial examples because adversarial perturbations surprisingly contain sufficient class-specific features.
Unveiling the Hidden Structure of Self-Attention via Kernel Principal Component Analysis
·2602 words·13 mins·
AI Theory
Robustness
🏢 National University of Singapore
Self-attention, a key component of transformers, is revealed to be a projection of query vectors onto the principal components of the key matrix, derived from kernel PCA. This novel perspective leads…
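As an illustrative sketch of the kernel-PCA reading of attention (not the paper's derivation), the snippet below uses toy random matrices to show that each softmax row of standard scaled dot-product attention is exactly a normalized vector of exponential-kernel evaluations between one query and all keys; all variable names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4
Q = rng.standard_normal((n, d))  # queries
K = rng.standard_normal((n, d))  # keys
V = rng.standard_normal((n, d))  # values

# Standard scaled dot-product self-attention.
scores = Q @ K.T / np.sqrt(d)
A = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # row softmax
out = A @ V

# Kernel view: with the exponential kernel k(q, k_j) = exp(q . k_j / sqrt(d)),
# each softmax row is the normalized vector of kernel evaluations between one
# query and all keys, i.e. attention maps queries into the feature space
# spanned by the keys -- the starting point of the kernel-PCA perspective.
kernel_evals = np.exp(Q @ K.T / np.sqrt(d))
A_kernel = kernel_evals / kernel_evals.sum(axis=1, keepdims=True)
assert np.allclose(A, A_kernel)
```

The identity between the softmax rows and normalized kernel evaluations is what lets the key matrix play the role of a kernel-PCA basis.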
Understanding and Improving Adversarial Collaborative Filtering for Robust Recommendation
·2042 words·10 mins·
AI Generated
AI Theory
Robustness
🏢 Chinese Academy of Sciences
PamaCF, a novel personalized adversarial collaborative filtering technique, significantly improves recommendation robustness and accuracy against poisoning attacks by dynamically adjusting perturbatio…
Treatment of Statistical Estimation Problems in Randomized Smoothing for Adversarial Robustness
·1858 words·9 mins·
AI Theory
Robustness
🏢 Tübingen AI Center, University of Tübingen
This paper optimizes randomized smoothing, a crucial certified defense against adversarial attacks, by introducing novel statistical methods that drastically reduce the computational cost, leading to …
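For context, a minimal Monte-Carlo certification loop in the standard randomized-smoothing style (Cohen et al.), whose statistical estimation step is what this paper optimizes. This sketch uses a crude Hoeffding lower bound rather than the paper's improved estimators, and the function names and parameters are hypothetical.

```python
import numpy as np
from math import sqrt, log
from statistics import NormalDist

def certify(base_classifier, x, sigma=0.5, n=1000, alpha=0.001, rng=None):
    """Estimate the smoothed classifier's top class under Gaussian noise,
    lower-bound its probability, and convert it to an l2 certified radius
    sigma * Phi^{-1}(p_lower). Illustrative sketch only."""
    rng = rng or np.random.default_rng(0)
    noise = rng.standard_normal((n, x.size)) * sigma
    preds = np.array([base_classifier(x + eps) for eps in noise])
    top = int(np.bincount(preds).argmax())
    p_hat = float((preds == top).mean())
    # Hoeffding lower confidence bound; a real implementation would use a
    # tighter binomial bound such as Clopper-Pearson.
    p_lower = p_hat - sqrt(log(1 / alpha) / (2 * n))
    radius = sigma * NormalDist().inv_cdf(p_lower) if p_lower > 0.5 else 0.0
    return top, radius

# Toy base classifier: predicts 1 iff the first coordinate is positive.
f = lambda z: int(z[0] > 0)
label, r = certify(f, np.array([2.0, 0.0]))
```

Each certification spends `n` forward passes on the confidence bound, which is why sharper statistical estimators translate directly into lower computational cost.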
Transferability Bound Theory: Exploring Relationship between Adversarial Transferability and Flatness
·1926 words·10 mins·
AI Generated
AI Theory
Robustness
🏢 East China Normal University
Challenging common assumptions, researchers prove that flatter adversarial examples don’t guarantee better transferability and introduce TPA, a theoretically-grounded attack creating more transferable…
The Price of Implicit Bias in Adversarially Robust Generalization
·3000 words·15 mins·
AI Generated
AI Theory
Robustness
🏢 New York University
Optimization’s implicit bias in robust machine learning hurts generalization; this work reveals how algorithm/architecture choices impact robustness, suggesting better optimization strategies are need…
The Implicit Bias of Gradient Descent toward Collaboration between Layers: A Dynamic Analysis of Multilayer Perceptrons
·1405 words·7 mins·
AI Theory
Robustness
🏢 Department of Computer Science, University of Exeter
Deep learning models’ success hinges on understanding gradient descent’s implicit bias. This study shows how that bias shapes layer collaboration, revealing a decreasing trend in adversarial rob…
SuperDeepFool: a new fast and accurate minimal adversarial attack
·4315 words·21 mins·
AI Generated
AI Theory
Robustness
🏢 EPFL
SuperDeepFool: a fast, accurate algorithm generating minimal adversarial perturbations, significantly improving deep learning model robustness evaluation and adversarial training.
Statistical Multicriteria Benchmarking via the GSD-Front
·2103 words·10 mins·
AI Theory
Robustness
🏢 Ludwig-Maximilians-Universität München
Researchers can now reliably benchmark classifiers using multiple quality metrics via the GSD-front, a new information-efficient technique that accounts for statistical uncertainty and deviations from…
Stability and Generalization of Adversarial Training for Shallow Neural Networks with Smooth Activation
·201 words·1 min·
AI Generated
AI Theory
Robustness
🏢 Johns Hopkins University
This paper provides novel theoretical guarantees for adversarial training of shallow neural networks, improving generalization bounds via early stopping and Moreau’s envelope smoothing.
Score-based generative models are provably robust: an uncertainty quantification perspective
·293 words·2 mins·
AI Theory
Robustness
🏢 Université Côte d'Azur
Score-based generative models are provably robust to multiple error sources, as shown via a novel Wasserstein uncertainty propagation theorem.
Sample and Computationally Efficient Robust Learning of Gaussian Single-Index Models
·262 words·2 mins·
AI Generated
AI Theory
Robustness
🏢 University of Wisconsin, Madison
This paper presents a computationally efficient algorithm for robustly learning Gaussian single-index models under adversarial label noise, achieving near-optimal sample complexity.
Robust Sparse Regression with Non-Isotropic Designs
·239 words·2 mins·
AI Theory
Robustness
🏢 National Taiwan University
New algorithms achieve near-optimal error rates for sparse linear regression, even under adversarial data corruption and heavy-tailed noise distributions.
Robust Neural Contextual Bandit against Adversarial Corruptions
·1411 words·7 mins·
AI Generated
AI Theory
Robustness
🏢 University of Illinois at Urbana-Champaign
R-NeuralUCB, a robust neural contextual bandit algorithm, uses context-aware gradient descent training to defend against adversarial reward corruptions, achieving better performance with theoretical …
Robust Mixture Learning when Outliers Overwhelm Small Groups
·2570 words·13 mins·
AI Generated
AI Theory
Robustness
🏢 ETH Zurich
Outlier-robust mixture learning gets order-optimal error guarantees, even when outliers massively outnumber small groups, via a novel meta-algorithm leveraging mixture structure.
Robust Graph Neural Networks via Unbiased Aggregation
·2885 words·14 mins·
AI Theory
Robustness
🏢 North Carolina State University
RUNG: a novel GNN architecture boasting superior robustness against adaptive attacks by employing an unbiased aggregation technique.
Robust Gaussian Processes via Relevance Pursuit
·2238 words·11 mins·
Machine Learning
Robustness
🏢 Meta
Robust Gaussian Processes via Relevance Pursuit tackles noisy data by cleverly inferring data-point-specific noise levels, leading to more accurate predictions.
Rethinking Weight Decay for Robust Fine-Tuning of Foundation Models
·1703 words·8 mins·
AI Theory
Robustness
🏢 Georgia Institute of Technology
Selective Projection Decay (SPD) enhances robust fine-tuning of foundation models by selectively applying weight decay, improving generalization and out-of-distribution robustness.
Relational Verification Leaps Forward with RABBit
·1822 words·9 mins·
AI Theory
Robustness
🏢 University of Illinois Urbana-Champaign
RABBit: A novel Branch-and-Bound verifier for precise relational verification of Deep Neural Networks, achieving substantial precision gains over current state-of-the-art baselines.
Provable Editing of Deep Neural Networks using Parametric Linear Relaxation
·1758 words·9 mins·
AI Theory
Robustness
🏢 UC Davis
PREPARED efficiently edits DNNs to provably satisfy properties by relaxing the problem to a linear program, minimizing parameter changes.