AI Theory
Improved Algorithms for Contextual Dynamic Pricing
·515 words·3 mins·
AI Generated
AI Theory
Optimization
🏢 CREST, ENSAE
New algorithms achieve optimal regret bounds for contextual dynamic pricing under minimal assumptions, improving revenue management with better price adjustments.
Implicit Regularization Paths of Weighted Neural Representations
·1797 words·9 mins·
AI Theory
Generalization
🏢 Carnegie Mellon University
Weighted pretrained features implicitly regularize models, and this paper reveals equivalent paths between weighting schemes and ridge regularization, enabling efficient hyperparameter tuning.
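The weighting-ridge correspondence can be illustrated in its simplest (uniform) case: scaling every feature by a constant w and solving ridge at level λ gives exactly the same fit as plain ridge at level λ/w², so weighting moves you along the ridge path. A minimal numpy sketch with toy data standing in for pretrained features (not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 8))    # toy stand-in for pretrained features
y = rng.standard_normal(50)

def ridge_fit(X, y, lam):
    """Closed-form ridge solution (X^T X + lam I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w, lam = 2.0, 1.0
beta_w = ridge_fit(w * X, y, lam)   # ridge on uniformly weighted features
beta = ridge_fit(X, y, lam / w**2)  # plain ridge at the rescaled penalty

# the two fitted-value vectors coincide exactly
assert np.allclose((w * X) @ beta_w, X @ beta)
```

Non-uniform weighting schemes are where the paper's analysis is needed; the uniform case above is the one-line sanity check.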
Implicit Regularization of Decentralized Gradient Descent for Sparse Regression
·1887 words·9 mins·
AI Generated
AI Theory
Optimization
🏢 Pennsylvania State University
Decentralized Gradient Descent achieves statistically optimal sparse model learning via implicit regularization, even with communication-efficient truncation.
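The two ingredients, gossip averaging over a network plus local gradient steps with hard truncation of each iterate, can be sketched on toy data. This is a hypothetical ring network with made-up step size and iteration count, not the paper's exact algorithm or rates:

```python
import numpy as np

rng = np.random.default_rng(1)
n_agents, n, d, k = 4, 100, 20, 3
beta_star = np.zeros(d)
beta_star[:k] = 1.0                                   # sparse ground truth
Xs = [rng.standard_normal((n, d)) for _ in range(n_agents)]
ys = [X @ beta_star + 0.01 * rng.standard_normal(n) for X in Xs]

# doubly stochastic mixing matrix for a 4-agent ring
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

def truncate(v, k):
    """Keep the k largest-magnitude entries, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

B = np.zeros((n_agents, d))                           # one iterate per agent
eta = 0.05
for _ in range(300):
    B = W @ B                                         # gossip averaging
    grads = np.stack([X.T @ (X @ b - y) / n
                      for X, y, b in zip(Xs, ys, B)])
    B = np.stack([truncate(b, k) for b in B - eta * grads])

assert np.linalg.norm(B[0] - beta_star) < 0.1         # near the sparse truth
```

Truncation keeps each transmitted iterate k-sparse, which is what makes the communication cheap.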
Implicit Bias of Mirror Flow on Separable Data
·1523 words·8 mins·
AI Theory
Optimization
🏢 EPFL
Mirror flow’s implicit bias on separable data is formally characterized, revealing convergence towards a maximum-margin classifier determined by the potential’s ‘horizon function’.
If You Want to Be Robust, Be Wary of Initialization
·2056 words·10 mins·
AI Theory
Robustness
🏢 KTH
Proper weight initialization significantly boosts Graph Neural Network (GNN) and Deep Neural Network (DNN) robustness against adversarial attacks, highlighting a critical, often-overlooked factor.
Identifying General Mechanism Shifts in Linear Causal Representations
·3163 words·15 mins·
AI Generated
AI Theory
Representation Learning
🏢 University of Texas at Austin
Researchers can now pinpoint the sources of data shifts in complex linear causal systems using a new algorithm, even with limited perfect interventions, opening exciting possibilities for causal disco…
Identifying Functionally Important Features with End-to-End Sparse Dictionary Learning
·6707 words·32 mins·
AI Generated
AI Theory
Interpretability
🏢 Apollo Research
End-to-end sparse autoencoders revolutionize neural network interpretability by learning functionally important features, outperforming traditional methods in efficiency and accuracy.
Identifying Equivalent Training Dynamics
·2147 words·11 mins·
AI Theory
Optimization
🏢 University of California, Santa Barbara
New framework uses Koopman operator theory to identify equivalent training dynamics in deep neural networks, enabling quantitative comparison of different architectures and optimization methods.
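The Koopman idea can be sketched with dynamic mode decomposition: stack snapshots of a trajectory, fit the linear operator that advances one snapshot to the next, and compare its eigenvalues across runs; matching spectra indicate equivalent dynamics. Here a hypothetical two-dimensional linear system stands in for logged network parameters (an illustration of the operator-spectrum idea, not the paper's framework):

```python
import numpy as np

rng = np.random.default_rng(3)
A = np.diag([0.9, 0.5])                # ground-truth linear dynamics
x = rng.standard_normal(2)
traj = [x]
for _ in range(30):
    traj.append(A @ traj[-1])
Z = np.array(traj).T                   # snapshots as columns

# DMD: least-squares operator advancing each snapshot to the next
K = Z[:, 1:] @ np.linalg.pinv(Z[:, :-1])
eigs = np.sort(np.linalg.eigvals(K).real)
assert np.allclose(eigs, [0.5, 0.9], atol=1e-6)
```

Two training runs whose fitted operators share a spectrum would, in this view, be exhibiting equivalent dynamics.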
Identifying Causal Effects Under Functional Dependencies
·1446 words·7 mins·
AI Theory
Causality
🏢 University of California, Los Angeles
Unlocking identifiability of causal effects: this paper leverages functional dependencies in causal graphs to improve identifiability, reducing the number of variables that must be observed.
Identification of Analytic Nonlinear Dynamical Systems with Non-asymptotic Guarantees
·1307 words·7 mins·
AI Theory
Optimization
🏢 Coordinated Science Laboratory
This paper proves that non-active exploration suffices for identifying linearly parameterized nonlinear systems with real-analytic features, providing non-asymptotic guarantees for least-squares and s…
Identification and Estimation of the Bi-Directional MR with Some Invalid Instruments
·2386 words·12 mins·
AI Theory
Causality
🏢 Beijing Technology and Business University
PReBiM algorithm accurately estimates bi-directional causal effects from observational data, even with invalid instruments, using a novel cluster fusion approach.
Identifiability Guarantees for Causal Disentanglement from Purely Observational Data
·1394 words·7 mins·
AI Theory
Causality
🏢 MIT
This paper provides identifiability guarantees for causal disentanglement from purely observational data using nonlinear additive Gaussian noise models, addressing a major challenge in causal represen…
Hybrid Top-Down Global Causal Discovery with Local Search for Linear and Nonlinear Additive Noise Models
·2421 words·12 mins·
AI Theory
Causality
🏢 Cornell University
Hybrid causal discovery algorithm efficiently learns unique causal graphs from observational data by leveraging local substructures and topological sorting, outperforming existing methods in accuracy …
How to Boost Any Loss Function
·3432 words·17 mins·
AI Generated
AI Theory
Optimization
🏢 Google Research
Boosting, traditionally limited by assumptions about loss functions, is proven in this paper to efficiently optimize any loss function regardless of differentiability or convexity.
How does PDE order affect the convergence of PINNs?
·2124 words·10 mins·
AI Generated
AI Theory
Optimization
🏢 University of California, Los Angeles
Higher-order PDEs hinder Physics-Informed Neural Network (PINN) convergence; this paper provides theoretical explanation and proposes variable splitting for improved accuracy.
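The splitting idea in one toy case: rewrite the fourth-order equation u'''' = f as the second-order system u'' = v, v'' = f, so each residual involves only second derivatives. A numpy sketch using finite differences in place of a PINN's autodiff, on a hypothetical 1-D example (not the paper's method):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 201)
h = x[1] - x[0]
u = x**4 / 24.0                     # solves u'''' = 1
v = x**2 / 2.0                      # auxiliary variable v = u''
f = np.ones_like(x)

def d2(g, h):
    """Central second difference on interior points."""
    return (g[:-2] - 2.0 * g[1:-1] + g[2:]) / h**2

res_u = d2(u, h) - v[1:-1]          # residual of u'' = v
res_v = d2(v, h) - f[1:-1]          # residual of v'' = f

# both split residuals need only second-order derivatives,
# and vanish (to discretization accuracy) on the exact solution
assert np.max(np.abs(res_u)) < 1e-5
assert np.max(np.abs(res_v)) < 1e-8
```

In a PINN, u and v would each be a network and the two residuals would be training losses; the point is that no fourth derivative ever has to be differentiated through.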
How does Gradient Descent Learn Features --- A Local Analysis for Regularized Two-Layer Neural Networks
·1326 words·7 mins·
AI Theory
Optimization
🏢 University of Washington
Neural networks learn features effectively through gradient descent, not just at the beginning, but also at the end of training, even with carefully regularized objectives.
How Does Black-Box Impact the Learning Guarantee of Stochastic Compositional Optimization?
·1242 words·6 mins·
AI Theory
Optimization
🏢 Huazhong Agricultural University
This study reveals how black-box settings affect the learning guarantee of stochastic compositional optimization, offering sharper generalization bounds and novel learning guarantees for derivative-fr…
Honor Among Bandits: No-Regret Learning for Online Fair Division
·357 words·2 mins·
AI Theory
Fairness
🏢 Harvard University
Online fair division algorithm achieves Õ(T²ᐟ³) regret while guaranteeing envy-freeness or proportionality in expectation, a result proven tight.
Higher-Order Causal Message Passing for Experimentation with Complex Interference
·1660 words·8 mins·
AI Theory
Causality
🏢 Stanford University
Higher-Order Causal Message Passing (HO-CMP) accurately estimates treatment effects in complex systems with unknown interference by using observed data to learn the system’s dynamics over time.
High-probability complexity bounds for stochastic non-convex minimax optimization
·1500 words·8 mins·
AI Theory
Optimization
🏢 Université Côte d'Azur
This paper establishes the first high-probability complexity guarantees for solving stochastic nonconvex minimax problems with a single-loop method.