AI Theory
Wasserstein Distributionally Robust Optimization through the Lens of Structural Causal Models and Individual Fairness
·2363 words·12 mins·
AI Generated
AI Theory
Fairness
🏢 Max Planck Institute for Intelligent Systems
This paper introduces Causally Fair DRO, a novel framework for robust optimization that addresses individual fairness concerns by incorporating causal structures and sensitive attributes, providing th…
Wasserstein convergence of Čech persistence diagrams for samplings of submanifolds
·1477 words·7 mins·
AI Theory
Representation Learning
🏢 Université Paris-Saclay, Inria
This paper proves that Čech persistence diagrams converge to the true underlying shape precisely when using Wasserstein distances with p > m, where m is the submanifold dimension, significantly advanc…
Warm-starting Push-Relabel
·1936 words·10 mins·
AI Theory
Optimization
🏢 UC Berkeley
This research introduces the first theoretical guarantees for warm-starting the celebrated Push-Relabel network flow algorithm, improving its speed using a predicted flow, while maintaining worst-case…
Variance estimation in compound decision theory under boundedness
·323 words·2 mins·
AI Theory
Optimization
🏢 University of Chicago
Unlocking the optimal variance estimation rate in compound decision theory under bounded means, this paper reveals a surprising (log log n/log n)² rate and introduces a rate-optimal cumulant-based est…
Validating Climate Models with Spherical Convolutional Wasserstein Distance
·2133 words·11 mins·
AI Theory
Optimization
🏢 University of Illinois Urbana-Champaign
Researchers developed Spherical Convolutional Wasserstein Distance (SCWD) to more accurately validate climate models by considering spatial variability and local distributional differences.
Utilizing Human Behavior Modeling to Manipulate Explanations in AI-Assisted Decision Making: The Good, the Bad, and the Scary
·3625 words·18 mins·
AI Generated
AI Theory
Interpretability
🏢 Purdue University
AI explanations can be subtly manipulated to influence human decisions, highlighting the urgent need for more robust and ethical AI explanation design.
Using Noise to Infer Aspects of Simplicity Without Learning
·2004 words·10 mins·
AI Theory
Interpretability
🏢 Department of Computer Science, Duke University
Noise in data surprisingly simplifies machine learning models, improving their interpretability without sacrificing accuracy; this paper quantifies this effect across various hypothesis spaces.
User-item fairness tradeoffs in recommendations
·3047 words·15 mins·
AI Theory
Fairness
🏢 Cornell University
Recommendation systems must balance user satisfaction with fair item exposure. This research provides a theoretical model and empirical validation showing that user preference diversity can significan…
User-Creator Feature Polarization in Recommender Systems with Dual Influence
·2172 words·11 mins·
AI Theory
Optimization
🏢 Harvard University
Recommender systems, when influenced by both users and creators, inevitably polarize; however, prioritizing efficiency through methods like top-k truncation can surprisingly enhance diversity.
Unveiling User Satisfaction and Creator Productivity Trade-Offs in Recommendation Platforms
·1440 words·7 mins·
AI Theory
Optimization
🏢 University of Virginia
Recommendation algorithms on UGC platforms face a critical trade-off: prioritizing user satisfaction reduces creator engagement, jeopardizing long-term content diversity. This research introduces a ga…
Unveiling the Potential of Robustness in Selecting Conditional Average Treatment Effect Estimators
·1533 words·8 mins·
AI Generated
AI Theory
Causality
🏢 Hong Kong Polytechnic University
A new, nuisance-free Distributionally Robust Metric (DRM) is proposed for selecting robust Conditional Average Treatment Effect (CATE) estimators, improving the reliability of personalized decision-ma…
Unveiling the Hidden Structure of Self-Attention via Kernel Principal Component Analysis
·2602 words·13 mins·
AI Theory
Robustness
🏢 National University of Singapore
Self-attention, a key component of transformers, is revealed to be a projection of query vectors onto the principal components of the key matrix, derived from kernel PCA. This novel perspective leads…
Unrolled denoising networks provably learn to perform optimal Bayesian inference
·2411 words·12 mins·
AI Generated
AI Theory
Optimization
🏢 Harvard University
Unrolled neural networks, trained via gradient descent, provably achieve optimal Bayesian inference for compressed sensing, surpassing prior-aware counterparts.
Unraveling the Gradient Descent Dynamics of Transformers
·1273 words·6 mins·
AI Theory
Optimization
🏢 University of Minnesota, Twin Cities
This paper reveals how large embedding dimensions and appropriate initialization guarantee convergence in Transformer training, highlighting Gaussian attention’s superior landscape over Softmax.
Universality of AdaGrad Stepsizes for Stochastic Optimization: Inexact Oracle, Acceleration and Variance Reduction
·1717 words·9 mins·
AI Theory
Optimization
🏢 CISPA
Adaptive gradient methods using AdaGrad stepsizes achieve optimal convergence rates for convex composite optimization problems, handling inexact oracles, acceleration, and variance reduction without n…
Universal Exact Compression of Differentially Private Mechanisms
·1481 words·7 mins·
AI Theory
Privacy
🏢 Stanford University
Poisson Private Representation (PPR) enables exact compression of any local differential privacy mechanism, achieving order-wise optimal trade-offs between communication, accuracy, and privacy.
Unified Mechanism-Specific Amplification by Subsampling and Group Privacy Amplification
·4228 words·20 mins·
AI Generated
AI Theory
Privacy
🏢 Technical University of Munich
This paper presents a novel framework for achieving tighter differential privacy guarantees via mechanism-specific amplification using subsampling.
Unified Covariate Adjustment for Causal Inference
·1452 words·7 mins·
loading
·
loading
AI Theory
Causality
🏢 Purdue University
Unified Covariate Adjustment (UCA) offers a scalable, doubly robust estimator for a wide array of causal estimands beyond standard methods.
Unelicitable Backdoors via Cryptographic Transformer Circuits
·1600 words·8 mins·
AI Theory
Safety
🏢 Contramont Research
Researchers unveil unelicitable backdoors in language models built from cryptographic transformer circuits, defying conventional detection methods and raising crucial AI safety concerns.
Understanding the Expressive Power and Mechanisms of Transformer for Sequence Modeling
·1911 words·9 mins·
AI Generated
AI Theory
Generalization
🏢 Peking University
This work systematically investigates the approximation properties of Transformer networks for sequence modeling, revealing the distinct roles of key components (self-attention, positional encoding, f…