AI Theory
Detecting Brittle Decisions for Free: Leveraging Margin Consistency in Deep Robust Classifiers
·2604 words·13 mins·
AI Theory
Robustness
🏢 IID-Université Laval
Deep learning models’ robustness can be efficiently evaluated using a novel method, margin consistency, which leverages the correlation between input and logit margins for faster, accurate vulnerabili…
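A minimal sketch of the logit-margin idea this teaser alludes to, assuming a generic trained PyTorch classifier `model` and an input batch `x_batch` (both assumptions, not the paper's exact procedure): under margin consistency, a small gap between the top two logits flags likely non-robust samples without running any attack.

```python
# Minimal sketch: using the logit margin as a cheap proxy for robustness.
# Assumes `model` is any trained torch classifier and `x` a batch of inputs.
import torch

def logit_margin(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Difference between the top logit and the runner-up logit per sample.

    Under margin consistency (logit margin correlated with the input-space
    margin), small values flag samples that are likely non-robust.
    """
    with torch.no_grad():
        logits = model(x)                   # shape: (batch, num_classes)
    top2 = logits.topk(2, dim=1).values     # two largest logits per sample
    return top2[:, 0] - top2[:, 1]          # margin >= 0

# Example usage (assumed names):
# margins = logit_margin(model, x_batch)
# brittle = margins < margins.quantile(0.1)  # bottom 10% by logit margin
```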
Detecting and Measuring Confounding Using Causal Mechanism Shifts
·1590 words·8 mins·
AI Theory
Causality
🏢 Indian Institute of Technology Hyderabad
This paper proposes novel measures to detect and quantify confounding biases from observational data using causal mechanism shifts, even with unobserved confounders.
Derivatives of Stochastic Gradient Descent in parametric optimization
·1733 words·9 mins·
AI Generated
AI Theory
Optimization
🏢 Université Paul Sabatier
Stochastic gradient descent’s derivatives, crucial for hyperparameter optimization, converge to the solution mapping derivative; rates depend on step size, exhibiting O(log(k)²/k) convergence with van…
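A minimal sketch of the object being studied, on an assumed toy ridge-regression problem: differentiating the SGD iterates with respect to a problem parameter (here the penalty `lam`) by keeping the updates inside the autograd graph. This only illustrates what "derivatives of SGD" means; the paper's contribution is the convergence of these derivatives to the derivative of the solution mapping.

```python
# Minimal sketch (assumed setup): unrolled differentiation of SGD iterates
# with respect to a hyperparameter, here a ridge penalty `lam`.
import torch

torch.manual_seed(0)
X = torch.randn(100, 5)
y = torch.randn(100)

lam = torch.tensor(0.1, requires_grad=True)  # parameter we differentiate through
w = torch.zeros(5)                           # SGD iterate, kept in the graph
step = 0.01                                  # constant step size, for simplicity

for k in range(500):
    i = torch.randint(0, 100, (1,))
    resid = X[i] @ w - y[i]
    grad = resid * X[i].squeeze() + lam * w  # stochastic gradient of ridge loss
    w = w - step * grad                      # functional update keeps w differentiable in lam

# d w_k / d lam, which should track the derivative of the solution mapping lam -> w*(lam)
dw_dlam = torch.autograd.grad(w.sum(), lam)[0]
print(dw_dlam)
```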
Derandomizing Multi-Distribution Learning
·204 words·1 min·
AI Theory
Optimization
🏢 Aarhus University
Derandomizing multi-distribution learning is computationally hard, but a structural condition allows efficient black-box conversion of randomized predictors to deterministic ones.
Denoising Diffusion Path: Attribution Noise Reduction with An Auxiliary Diffusion Model
·2911 words·14 mins·
AI Generated
AI Theory
Interpretability
🏢 School of Computer Science, Fudan University
Denoising Diffusion Path (DDPath) uses diffusion models to dramatically reduce noise in attribution methods for deep neural networks, leading to clearer explanations and improved quantitative results.
DeNetDM: Debiasing by Network Depth Modulation
·2848 words·14 mins·
AI Generated
AI Theory
Fairness
🏢 University of Surrey
DeNetDM uses network depth modulation to automatically debias image classifiers without bias annotations or data augmentation, improving accuracy by 5%.
Deep linear networks for regression are implicitly regularized towards flat minima
·2602 words·13 mins·
AI Generated
AI Theory
Optimization
🏢 Institute of Mathematics
Deep linear networks implicitly regularize towards flat minima, with sharpness (Hessian’s largest eigenvalue) of minimizers linearly increasing with depth but bounded by a constant times the lower bou…
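A minimal sketch of the sharpness quantity named in the teaser: the largest eigenvalue of the loss Hessian, estimated here by power iteration on Hessian-vector products. The function and argument names are assumptions; it assumes `loss` was computed from `params` so second derivatives are available.

```python
# Minimal sketch (assumed names): sharpness = largest Hessian eigenvalue,
# estimated via power iteration on Hessian-vector products.
import torch

def sharpness(loss: torch.Tensor, params: list, iters: int = 50) -> float:
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    v = torch.randn_like(flat_grad)
    v = v / v.norm()
    eig = 0.0
    for _ in range(iters):
        # Hessian-vector product: grad of (grad . v) w.r.t. the parameters.
        hv = torch.autograd.grad(flat_grad @ v, params, retain_graph=True)
        hv = torch.cat([h.reshape(-1) for h in hv])
        eig = (v @ hv).item()           # Rayleigh quotient estimate
        v = hv / (hv.norm() + 1e-12)    # power-iteration step
    return eig
```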
Deep Homomorphism Networks
·1657 words·8 mins·
AI Theory
Generalization
🏢 Roku, Inc.
Deep Homomorphism Networks (DHNs) boost graph neural network (GNN) expressiveness by efficiently detecting subgraph patterns using a novel graph homomorphism layer.
Decision-Focused Learning with Directional Gradients
·1724 words·9 mins·
AI Theory
Optimization
🏢 UC Los Angeles
New Perturbation Gradient losses connect expected decisions with directional derivatives, enabling Lipschitz continuous surrogates for predict-then-optimize, asymptotically yielding best-in-class poli…
Debiasing Synthetic Data Generated by Deep Generative Models
·3364 words·16 mins·
AI Theory
Privacy
🏢 Ghent University Hospital - SYNDARA
Debiasing synthetic data generated by deep generative models enhances statistical convergence rates, yielding reliable results for specific analyses.
Data-faithful Feature Attribution: Mitigating Unobservable Confounders via Instrumental Variables
·1976 words·10 mins·
AI Theory
Interpretability
🏢 Zhejiang University
Data-faithful feature attribution tackles misinterpretations from unobservable confounders by using instrumental variables to train confounder-free models, leading to more robust and accurate feature …
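As background for the instrumental-variable idea the teaser mentions, a minimal sketch of the classic two-stage least-squares (2SLS) estimator on synthetic data; this is the textbook IV construction, not the paper's exact procedure. The instrument `z` affects the feature `x` but not the outcome `y` directly, which lets the stage-1 projection strip out the unobserved confounder `u`.

```python
# Minimal sketch (not the paper's method): two-stage least squares with a
# single feature, a single instrument, and an unobserved confounder.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
u = rng.normal(size=n)                  # unobserved confounder
z = rng.normal(size=n)                  # instrument: affects x, not y directly
x = 1.5 * z + u + rng.normal(size=n)    # feature contaminated by the confounder
y = 2.0 * x + 3.0 * u + rng.normal(size=n)

# Stage 1: project the feature onto the instrument (confounder-free part of x).
x_hat = z * (np.dot(z, x) / np.dot(z, z))
# Stage 2: fit the outcome on the projected feature.
beta_iv = np.dot(x_hat, y) / np.dot(x_hat, x_hat)

beta_ols = np.dot(x, y) / np.dot(x, x)  # biased by the confounder
print(f"OLS: {beta_ols:.2f}, IV: {beta_iv:.2f}, true effect: 2.00")
```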
Data subsampling for Poisson regression with pth-root-link
·657 words·4 mins·
AI Generated
AI Theory
Optimization
🏢 University of Potsdam
Sublinear coresets for Poisson regression are developed, offering 1±ε approximation guarantees, with complexity analyzed using a novel parameter and domain shifting.
Data Distribution Valuation
·3717 words·18 mins·
AI Theory
Valuation
🏢 Carnegie Mellon University
This paper proposes a novel MMD-based method for data distribution valuation, enabling theoretically principled comparison of data distributions from limited samples, outperforming existing methods in…
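A minimal sketch of the sample-based distribution comparison the teaser refers to: an unbiased estimator of the squared maximum mean discrepancy (MMD) with an RBF kernel. The function name, the kernel bandwidth `gamma`, and the usage comment are assumptions.

```python
# Minimal sketch: unbiased MMD^2 between two samples with an RBF kernel.
import numpy as np

def mmd2_unbiased(x: np.ndarray, y: np.ndarray, gamma: float = 1.0) -> float:
    """Unbiased MMD^2 between samples x (n, d) and y (m, d)."""
    def rbf(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    k_xx, k_yy, k_xy = rbf(x, x), rbf(y, y), rbf(x, y)
    n, m = len(x), len(y)
    # Drop diagonal terms for the unbiased estimate.
    term_xx = (k_xx.sum() - np.trace(k_xx)) / (n * (n - 1))
    term_yy = (k_yy.sum() - np.trace(k_yy)) / (m * (m - 1))
    return term_xx + term_yy - 2.0 * k_xy.mean()

# Example usage (assumed names): a data vendor whose sample is closer in MMD
# to a trusted reference sample would receive a higher value.
# score = -mmd2_unbiased(vendor_sample, reference_sample)
```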
DAT: Improving Adversarial Robustness via Generative Amplitude Mix-up in Frequency Domain
·3721 words·18 mins·
AI Generated
AI Theory
Robustness
🏢 State Key Laboratory of Internet of Things for Smart City, University of Macau
Boost AI model robustness against adversarial attacks by mixing a training sample’s frequency amplitude with that of distractor images, focusing model learning on phase patterns, thus enhancing accur…
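A minimal sketch of the frequency-domain amplitude mix-up described in the teaser, assuming same-sized float image arrays and an assumed mixing weight `alpha`: the augmented image takes a blend of the two Fourier amplitudes while keeping the original phase, so learning is pushed toward phase patterns.

```python
# Minimal sketch (assumed shapes and mixing weight): blend Fourier amplitudes
# of a training image and a distractor, keep the training image's phase.
import numpy as np

def amplitude_mixup(img: np.ndarray, distractor: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """img, distractor: float arrays of identical shape, e.g. (H, W) or (H, W, C)."""
    f_img = np.fft.fft2(img, axes=(0, 1))
    f_dis = np.fft.fft2(distractor, axes=(0, 1))

    amp = (1 - alpha) * np.abs(f_img) + alpha * np.abs(f_dis)  # mixed amplitude
    phase = np.angle(f_img)                                    # original phase kept

    mixed = np.fft.ifft2(amp * np.exp(1j * phase), axes=(0, 1)).real
    return mixed
```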
CSPG: Crossing Sparse Proximity Graphs for Approximate Nearest Neighbor Search
·2426 words·12 mins·
AI Theory
Optimization
🏢 Fudan University
CSPG: a novel framework boosting Approximate Nearest Neighbor Search speed by 1.5-2x, using sparse proximity graphs and an efficient two-stage search.
Cryptographic Hardness of Score Estimation
·386 words·2 mins·
AI Generated
AI Theory
Optimization
🏢 University of Washington
Score estimation, crucial for diffusion models, is computationally hard even with polynomial sample complexity unless strong distributional assumptions are made.
Credit Attribution and Stable Compression
·299 words·2 mins·
AI Theory
Privacy
🏢 Tel Aviv University
New definitions of differential privacy enable machine learning algorithms to credit sources appropriately, balancing data utility and copyright compliance.
Credal Learning Theory
·2051 words·10 mins·
AI Generated
AI Theory
Generalization
🏢 University of Manchester
Credal Learning Theory uses convex sets of probabilities to model data distribution variability, providing theoretical risk bounds for machine learning models in dynamic environments.
Covariate Shift Corrected Conditional Randomization Test
·2259 words·11 mins·
AI Generated
AI Theory
Causality
🏢 Harvard University
A new Covariate Shift Corrected Pearson Chi-squared Conditional Randomization (csPCR) test accurately assesses conditional independence even when data distributions vary between source and target popu…
Counterfactual Fairness by Combining Factual and Counterfactual Predictions
·2056 words·10 mins·
AI Theory
Fairness
🏢 Purdue University
This paper proposes a novel method to achieve optimal counterfactual fairness in machine learning models while minimizing predictive performance degradation.