AI Theory
Analytically deriving Partial Information Decomposition for affine systems of stable and convolution-closed distributions
·1956 words·10 mins
AI Generated
AI Theory
Causality
🏢 Carnegie Mellon University
This paper presents novel theoretical results enabling the analytical calculation of Partial Information Decomposition for various probability distributions, including those relevant to neuroscience, …
An In-depth Investigation of Sparse Rate Reduction in Transformer-like Models
·2521 words·12 mins
AI Theory
Representation Learning
🏢 School of Computing and Data Science, University of Hong Kong
Sparse Rate Reduction (SRR) improves the interpretability of transformer-like models, yields better generalization, and offers principled model design.
An Equivalence Between Static and Dynamic Regret Minimization
·321 words·2 mins
AI Generated
AI Theory
Optimization
🏢 Università degli Studi di Milano
Dynamic regret minimization equals static regret in an extended space; this equivalence reveals a trade-off between loss variance and comparator variability, leading to a new algorithm achieving impro…
An engine not a camera: Measuring performative power of online search
·2609 words·13 mins
AI Generated
AI Theory
Causality
🏢 Max Planck Institute for Intelligent Systems
New research quantifies how search engines steer web traffic by subtly changing results, offering a powerful method for antitrust investigations and digital market analysis.
An Efficient High-dimensional Gradient Estimator for Stochastic Differential Equations
·1548 words·8 mins
AI Generated
AI Theory
Optimization
🏢 Stanford University
New unbiased gradient estimator for high-dimensional SDEs drastically reduces computation time without sacrificing estimation accuracy.
An effective framework for estimating individualized treatment rules
·3345 words·16 mins
AI Generated
AI Theory
Causality
🏢 University of Wisconsin-Madison
This paper introduces a unified ITR estimation framework using covariate balancing weights, achieving significant gains in robustness and effectiveness compared to existing methods.
An Analysis of Elo Rating Systems via Markov Chains
·2046 words·10 mins
AI Generated
AI Theory
Optimization
🏢 University of Warwick
Elo rating system’s convergence rigorously analyzed via Markov chains under the Bradley-Terry-Luce model, demonstrating competitive learning rates and informing efficient tournament design.
Amortized Eigendecomposition for Neural Networks
·2211 words·11 mins
AI Generated
AI Theory
Optimization
🏢 Sea AI Lab
Accelerate neural network training using ‘amortized eigendecomposition’, a novel method replacing expensive eigendecomposition with faster QR decomposition while preserving accuracy.
Almost-Linear RNNs Yield Highly Interpretable Symbolic Codes in Dynamical Systems Reconstruction
·3184 words·15 mins
AI Theory
Interpretability
🏢 Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty, Heidelberg University, Germany
Almost-linear RNNs (AL-RNNs) offer highly interpretable symbolic codes for dynamical systems reconstruction, simplifying the analysis of complex systems.
Almost Surely Asymptotically Constant Graph Neural Networks
·1976 words·10 mins
AI Theory
Generalization
🏢 University of Oxford
Many graph neural networks (GNNs) surprisingly converge to constant outputs with increasing graph size, limiting their expressiveness.
Almost Free: Self-concordance in Natural Exponential Families and an Application to Bandits
·359 words·2 mins
AI Theory
Optimization
🏢 University of Alberta
Generalized linear bandits with subexponential reward distributions are self-concordant, enabling second-order regret bounds free of exponential dependence on problem parameters.
Aligning Model Properties via Conformal Risk Control
·1981 words·10 mins
AI Generated
AI Theory
Safety
🏢 Stanford University
Post-processing pre-trained models for alignment using conformal risk control and property testing guarantees better alignment, even when training data is biased.
Aggregating Quantitative Relative Judgments: From Social Choice to Ranking Prediction
·2425 words·12 mins
AI Theory
Optimization
🏢 Carnegie Mellon University
This paper introduces Quantitative Relative Judgment Aggregation (QRJA), a novel social choice model, and applies it to ranking prediction, yielding effective and interpretable results on various real…
Adversarially Robust Dense-Sparse Tradeoffs via Heavy-Hitters
·388 words·2 mins
AI Generated
AI Theory
Robustness
🏢 Carnegie Mellon University
Improved adversarially robust streaming algorithms for L_p estimation are presented, surpassing previous state-of-the-art space bounds and disproving the existence of inherent barriers.
Adversarially Robust Decision Transformer
·2778 words·14 mins
AI Theory
Robustness
🏢 University College London
Adversarially Robust Decision Transformer (ARDT) enhances offline RL robustness against powerful adversaries by conditioning policies on minimax returns, achieving superior worst-case performance.
Adjust Pearson's $r$ to Measure Arbitrary Monotone Dependence
·1286 words·7 mins
AI Generated
AI Theory
Optimization
🏢 Beijing University of Posts and Telecommunications
Researchers refine Pearson’s correlation coefficient to precisely measure arbitrary monotone dependence, expanding its applicability beyond linear relationships.
Addressing Bias in Online Selection with Limited Budget of Comparisons
·2019 words·10 mins
AI Theory
Optimization
🏢 ENSAE
This paper introduces efficient algorithms for online selection under a budget constraint, where comparing candidates from different groups incurs a cost, improving both fairness and efficiency.
Adaptive Proximal Gradient Method for Convex Optimization
·1541 words·8 mins
AI Theory
Optimization
🏢 University of Vienna
Adaptive gradient descent methods are improved by leveraging local curvature information, yielding entirely adaptive algorithms without added computational cost and proving convergence with only local Lipschit…
Adaptive Experimentation When You Can't Experiment
·1383 words·7 mins
AI Theory
Causality
🏢 University of Arizona
Adaptive experimentation tackles confounding in online A/B tests using encouragement designs and a novel linear bandit approach, achieving near-optimal sample complexity.
Adaptive and Optimal Second-order Optimistic Methods for Minimax Optimization
·1754 words·9 mins
AI Generated
AI Theory
Optimization
🏢 University of Texas at Austin
New adaptive second-order optimistic methods for minimax optimization achieve optimal convergence without line search, simplifying updates and improving efficiency.