Robustness
Enhancing Robustness of Graph Neural Networks on Social Media with Explainable Inverse Reinforcement Learning
·1977 words·10 mins·
AI Theory
Robustness
🏢 Hong Kong University of Science and Technology
MoE-BiEntIRL: A novel explainable inverse reinforcement learning method enhances GNN robustness against diverse social media attacks by reconstructing attacker policies and generating more robust trai…
Energy-based Epistemic Uncertainty for Graph Neural Networks
·4139 words·20 mins·
AI Theory
Robustness
🏢 Technical University of Munich
GEBM: a novel graph-based energy model for robust GNN uncertainty estimation.
Elliptical Attention
·3508 words·17 mins·
AI Generated
AI Theory
Robustness
🏢 FPT Software AI Center
Elliptical Attention enhances transformers by using a Mahalanobis distance metric, stretching the feature space to focus on contextually relevant information, thus improving robustness and reducing re…
ECLipsE: Efficient Compositional Lipschitz Constant Estimation for Deep Neural Networks
·2852 words·14 mins·
AI Theory
Robustness
🏢 Purdue University
ECLipsE: A novel compositional approach drastically accelerates Lipschitz constant estimation for deep neural networks, achieving speedups of thousands of times compared to the state-of-the-art while …
Diversity Is Not All You Need: Training A Robust Cooperative Agent Needs Specialist Partners
·1922 words·10 mins·
AI Theory
Robustness
🏢 VISTEC
Training robust cooperative AI agents requires diverse and specialized training partners, but existing methods often produce overfit partners. This paper proposes novel methods using reinforcement and…
Diffusion Models are Certifiably Robust Classifiers
·1735 words·9 mins·
AI Theory
Robustness
🏢 Tsinghua University
Diffusion models are certifiably robust classifiers due to their inherent O(1) Lipschitzness, a property further enhanced by generalizing to noisy data, achieving over 80% certified robustness on CIFA…
DiffHammer: Rethinking the Robustness of Diffusion-Based Adversarial Purification
·3686 words·18 mins·
AI Theory
Robustness
🏢 Hong Kong University of Science and Technology
DiffHammer unveils weaknesses in diffusion-based adversarial defenses by introducing a novel attack bypassing existing evaluation limitations, leading to more robust security solutions.
Detecting Brittle Decisions for Free: Leveraging Margin Consistency in Deep Robust Classifiers
·2604 words·13 mins·
AI Theory
Robustness
🏢 IID-Université Laval
Deep learning models’ robustness can be efficiently evaluated with a novel method, margin consistency, which leverages the correlation between input and logit margins for faster, accurate vulnerabili…
DAT: Improving Adversarial Robustness via Generative Amplitude Mix-up in Frequency Domain
·3721 words·18 mins·
AI Generated
AI Theory
Robustness
🏢 State Key Laboratory of Internet of Things for Smart City, University of Macau
Boost AI model robustness against adversarial attacks by creatively mixing a training sample’s frequency amplitude with that of distractor images, focusing model learning on phase patterns, thus enhancing accur…
Corruption-Robust Linear Bandits: Minimax Optimality and Gap-Dependent Misspecification
·375 words·2 mins·
AI Theory
Robustness
🏢 University of Virginia
This paper presents novel algorithms for linear bandits that are robust to corrupted rewards, achieving minimax optimality and optimal scaling for gap-dependent misspecification, extending to reinforc…
Constrained Adaptive Attack: Effective Adversarial Attack Against Deep Neural Networks for Tabular Data
·2605 words·13 mins·
AI Theory
Robustness
🏢 University of Luxembourg
Constrained Adaptive Attack (CAA) significantly improves adversarial attacks on deep learning models for tabular data by combining gradient and search-based methods, achieving up to 96.1% accuracy dro…
Computational Aspects of Bayesian Persuasion under Approximate Best Response
·1555 words·8 mins·
AI Generated
AI Theory
Robustness
🏢 UC Berkeley
This paper presents efficient algorithms for Bayesian persuasion under approximate best response, offering polynomial-time solutions for specific cases and a quasi-polynomial-time approximation scheme…
Certified Robustness for Deep Equilibrium Models via Serialized Random Smoothing
·3902 words·19 mins·
AI Theory
Robustness
🏢 North Carolina State University
Accelerate DEQ certification up to 7x with Serialized Random Smoothing (SRS), achieving certified robustness on large-scale datasets without sacrificing accuracy.
CausalDiff: Causality-Inspired Disentanglement via Diffusion Model for Adversarial Defense
·1963 words·10 mins·
AI Theory
Robustness
🏢 Institute of Computing Technology, CAS
CausalDiff leverages causal inference and diffusion models to create a robust AI defense against unseen adversarial attacks, significantly outperforming state-of-the-art methods.
Adversarially Robust Dense-Sparse Tradeoffs via Heavy-Hitters
·388 words·2 mins·
AI Generated
AI Theory
Robustness
🏢 Carnegie Mellon University
Improved adversarially robust streaming algorithms for L_p estimation are presented, surpassing previous state-of-the-art space bounds and disproving the existence of inherent barriers.
Adversarially Robust Decision Transformer
·2778 words·14 mins·
AI Theory
Robustness
🏢 University College London
Adversarially Robust Decision Transformer (ARDT) enhances offline RL robustness against powerful adversaries by conditioning policies on minimax returns, achieving superior worst-case performance.
Achieving Domain-Independent Certified Robustness via Knowledge Continuity
·2020 words·10 mins·
AI Theory
Robustness
🏢 Carnegie Mellon University
Certifying neural network robustness across diverse domains, this paper introduces knowledge continuity—a novel framework ensuring model stability independent of input type, norms, and distribution.
Achievable distributional robustness when the robust risk is only partially identified
·1876 words·9 mins·
AI Generated
AI Theory
Robustness
🏢 ETH Zurich
This paper introduces a novel framework for evaluating the robustness of machine learning models when the true data distribution is only partially known. It defines a new risk measure (‘identifiable r…