AI Theory

Loss Landscape Characterization of Neural Networks without Over-Parametrization
·2958 words·14 mins
AI Theory Optimization 🏢 University of Basel
Deep learning optimization is revolutionized by a new function class, enabling convergence guarantees without over-parameterization and accommodating saddle points.
Looks Too Good To Be True: An Information-Theoretic Analysis of Hallucinations in Generative Restoration Models
·2111 words·10 mins
AI Generated AI Theory Optimization 🏢 Verily AI (Google Life Sciences)
Generative image restoration models face a critical trade-off: higher perceptual quality often leads to increased hallucinations (unreliable predictions).
Lookback Prophet Inequalities
·467 words·3 mins
AI Theory Optimization 🏢 ENSAE, Ecole Polytechnique
This paper enhances prophet inequalities by allowing lookback, improving competitive ratios and providing algorithms for diverse observation orders, thereby bridging theory and real-world online selection.
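As background for the entry above, here is a minimal sketch of the classical single-threshold prophet inequality baseline (the setting without lookback): accepting the first value that reaches T = E[max_i X_i] / 2 recovers at least half of the prophet's expected reward. The distributions, sizes, and function names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def prophet_threshold_rule(samples: np.ndarray) -> np.ndarray:
    """Accept the first value >= T with T = E[max_i X_i] / 2.

    `samples` has shape (n_trials, n_items); each row is one arrival sequence.
    Returns the value collected in each trial (0 if nothing is accepted).
    In the idealized setting T comes from the known distributions; here it is
    estimated by Monte Carlo for simplicity.
    """
    threshold = samples.max(axis=1).mean() / 2.0
    accept = samples >= threshold
    picked = np.zeros(len(samples))
    has_pick = accept.any(axis=1)
    first_idx = accept.argmax(axis=1)
    picked[has_pick] = samples[np.arange(len(samples)), first_idx][has_pick]
    return picked

# Each position has its own (non-identical) reward distribution.
n_trials, n_items = 200_000, 10
scales = rng.uniform(0.5, 2.0, size=n_items)
samples = rng.exponential(scales, size=(n_trials, n_items))

algo = prophet_threshold_rule(samples).mean()
prophet = samples.max(axis=1).mean()
print(f"algorithm {algo:.3f} vs prophet {prophet:.3f} (ratio {algo / prophet:.3f}, guaranteed >= 0.5)")
```

Lookback, as studied in the paper, relaxes the irrevocability of these accept/reject decisions; the sketch above keeps them irrevocable.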
Logical characterizations of recurrent graph neural networks with reals and floats
·334 words·2 mins
AI Theory Representation Learning 🏢 Tampere University
Recurrent Graph Neural Networks (GNNs) with real and floating-point numbers are precisely characterized by rule-based and infinitary modal logics, respectively, enabling a deeper understanding of their expressive power.
Log-concave Sampling from a Convex Body with a Barrier: a Robust and Unified Dikin Walk
·1308 words·7 mins
AI Theory Optimization 🏢 New York University
This paper introduces robust Dikin walks for log-concave sampling, achieving faster mixing times and lower iteration costs than existing methods, particularly for high-dimensional settings.
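For context on the entry above, a minimal sketch of the standard Gaussian Dikin walk for uniformly sampling a polytope {x : Ax <= b}, where proposals use the log-barrier Hessian as a local metric and a Metropolis-Hastings filter keeps the chain exact. This is the textbook walk, not the paper's robust/unified variant; general barriers, log-concave (non-uniform) targets are not handled, and the step-size and instance choices below are illustrative assumptions.

```python
import numpy as np

def dikin_walk(A, b, x0, n_steps=5000, r=0.5, rng=None):
    """Gaussian Dikin walk targeting the uniform distribution on {x : Ax <= b}.

    Proposals are N(x, (r^2 / d) * H(x)^{-1}), where H is the Hessian of the
    log-barrier at x; a Metropolis-Hastings correction accounts for the
    state-dependent proposal so the uniform distribution is stationary.
    """
    rng = rng or np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    d = len(x)

    def hessian(x):
        s = b - A @ x                        # constraint slacks (must stay > 0)
        return A.T @ (A / s[:, None] ** 2)   # sum_i a_i a_i^T / s_i^2

    def log_proposal(x_from, x_to, H_from):
        # log density (up to a constant) of proposing x_to from x_from
        _, logdet = np.linalg.slogdet(H_from)
        diff = x_to - x_from
        return 0.5 * logdet - (d / (2 * r * r)) * diff @ H_from @ diff

    samples, Hx = [], hessian(x)
    for _ in range(n_steps):
        cov = (r * r / d) * np.linalg.inv(Hx)
        z = rng.multivariate_normal(x, cov)
        if np.any(b - A @ z <= 0):           # proposal left the polytope: stay
            samples.append(x.copy())
            continue
        Hz = hessian(z)
        log_acc = log_proposal(z, x, Hz) - log_proposal(x, z, Hx)
        if np.log(rng.uniform()) < min(0.0, log_acc):
            x, Hx = z, Hz
        samples.append(x.copy())
    return np.array(samples)

# Uniform samples from the box [-1, 1]^2 written as Ax <= b.
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.ones(4)
chain = dikin_walk(A, b, x0=np.zeros(2))
print(chain[2000:].mean(axis=0))             # should be close to (0, 0)
```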
Locally Private and Robust Multi-Armed Bandits
·1621 words·8 mins
AI Theory Privacy 🏢 Wayne State University
This research unveils a fundamental interplay between local differential privacy (LDP) and robustness against data corruption and heavy-tailed rewards in multi-armed bandits, offering a tight characterization.
Localized Adaptive Risk Control
·2386 words·12 mins
AI Generated AI Theory Fairness 🏢 University of Cambridge
Localized Adaptive Risk Control (L-ARC) improves fairness and reliability of online prediction by providing localized statistical risk guarantees, surpassing existing methods in high-stakes applications.
Linear Causal Representation Learning from Unknown Multi-node Interventions
·418 words·2 mins
AI Theory Causality 🏢 Carnegie Mellon University
Unlocking Causal Structures: New algorithms identify latent causal relationships from interventions, even when multiple variables are affected simultaneously.
Linear Causal Bandits: Unknown Graph and Soft Interventions
·1964 words·10 mins
AI Theory Causality 🏢 Rensselaer Polytechnic Institute
This paper solves causal bandits with unknown graphs and soft interventions, establishing novel upper and lower regret bounds along with a computationally efficient algorithm.
Least Squares Regression Can Exhibit Under-Parameterized Double Descent
·3874 words·19 mins
AI Generated AI Theory Generalization 🏢 Applied Math, Yale University
Under-parameterized linear regression models can surprisingly exhibit double descent, contradicting traditional bias-variance assumptions.
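The classical double-descent picture that the entry above contrasts with is easy to reproduce for minimum-norm least squares: test error spikes as the number of fitted features crosses the number of training samples. A small sketch under assumed (illustrative) sizes and noise levels follows; the paper's point is that a related phenomenon can already appear in the under-parameterized regime, which this demo does not construct.

```python
import numpy as np

rng = np.random.default_rng(1)

def min_norm_ls_test_error(n_train=40, n_test=2000, d_total=80, noise=0.5):
    """Test MSE of (minimum-norm) least squares as the number of used features grows.

    One latent linear model over d_total features generates the data; each fit
    only sees the first p features. The error typically spikes near p = n_train,
    the interpolation threshold of the classical double-descent curve.
    """
    beta = rng.normal(size=d_total) / np.sqrt(d_total)
    X_tr = rng.normal(size=(n_train, d_total))
    X_te = rng.normal(size=(n_test, d_total))
    y_tr = X_tr @ beta + noise * rng.normal(size=n_train)
    y_te = X_te @ beta + noise * rng.normal(size=n_test)

    errors = []
    for p in range(1, d_total + 1):
        beta_hat = np.linalg.pinv(X_tr[:, :p]) @ y_tr   # min-norm solution when p > n_train
        errors.append(np.mean((X_te[:, :p] @ beta_hat - y_te) ** 2))
    return errors

errs = min_norm_ls_test_error()
print(f"test error is largest at p = {int(np.argmax(errs)) + 1} (n_train = 40)")
```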
Learning-Augmented Priority Queues
·2516 words·12 mins
AI Theory Optimization 🏢 ENSAE, Ecole Polytechnique
This paper introduces learning-augmented priority queues, using predictions to boost efficiency and optimality, achieving significant performance gains over traditional methods.
Learning-Augmented Dynamic Submodular Maximization
·387 words·2 mins
AI Theory Optimization 🏢 Indian Institute of Technology Bombay
Leveraging predictions, this paper presents a novel algorithm for dynamic submodular maximization achieving significantly faster update times (O(poly(log n, log w, log k)) amortized) compared to existing approaches.
Learning-Augmented Approximation Algorithms for Maximum Cut and Related Problems
·249 words·2 mins
AI Theory Optimization 🏢 Google Research
This paper shows how noisy predictions about optimal solutions can improve approximation algorithms for NP-hard problems like MAX-CUT, exceeding classical hardness bounds.
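To make the "noisy advice" idea concrete, here is a toy warm-start experiment: a 1-flip local search for MAX-CUT started from a corrupted copy of a planted cut versus from a random assignment. This is only an illustration of using predictions as a starting point; it is not the paper's algorithm and carries none of its approximation guarantees, and the instance parameters are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(2)

def local_search_cut(adj, x):
    """Greedy 1-flip local search for MAX-CUT from an assignment x in {0, 1}^n."""
    x = x.copy()
    improved = True
    while improved:
        improved = False
        for v in range(len(x)):
            same = adj[v] @ (x == x[v])      # edge weight to v's own side
            across = adj[v] @ (x != x[v])    # edge weight across the cut
            if same > across:                # flipping v strictly increases the cut
                x[v] ^= 1
                improved = True
    return x

def cut_value(adj, x):
    mask = x[:, None] != x[None, :]
    return int((adj * mask).sum() // 2)

# Graph with a planted cut: cross edges are more likely than same-side edges.
n, flip_prob = 60, 0.2
planted = rng.integers(0, 2, size=n)
cross = planted[:, None] != planted[None, :]
adj = (rng.random((n, n)) < np.where(cross, 0.3, 0.1)).astype(int)
adj = np.triu(adj, 1)
adj = adj + adj.T                            # symmetric, zero diagonal

noisy_prediction = planted ^ (rng.random(n) < flip_prob)   # advice: ~20% of labels corrupted
random_start = rng.integers(0, 2, size=n)

print("local search from noisy prediction:", cut_value(adj, local_search_cut(adj, noisy_prediction)))
print("local search from random start:    ", cut_value(adj, local_search_cut(adj, random_start)))
```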
Learning-Augmented Algorithms with Explicit Predictors
·3004 words·15 mins
AI Generated AI Theory Optimization 🏢 Bocconi University
This paper introduces a novel framework for learning-augmented algorithms that improves performance by integrating the learning process into the algorithm itself, rather than treating the predictor as a black box.
Learning-Augmented Algorithms for the Bahncard Problem
·3280 words·16 mins
AI Theory Optimization 🏢 Zhejiang University
PFSUM, a novel learning-augmented algorithm, leverages short-term predictions to achieve superior performance in solving the Bahncard problem, outperforming existing methods with improved consistency and robustness.
Learning with Fitzpatrick Losses
·2495 words·12 mins
AI Generated AI Theory Optimization 🏢 Ecole Des Ponts
Tighter losses than Fenchel-Young losses are presented, refining Fenchel-Young inequalities using the Fitzpatrick function to improve model accuracy while preserving prediction link functions.
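The "tighter than Fenchel-Young" claim can be made precise with two standard definitions; below is a sketch in LaTeX using the textbook Fenchel-Young loss and the Fitzpatrick function of A = ∂Ω (the subdifferential of the regularizer Ω); the paper's exact construction and notation may differ.

```latex
% Fenchel--Young loss generated by a convex regularizer $\Omega$:
\[
  L_{\Omega}(\theta; y) \;=\; \Omega^{*}(\theta) + \Omega(y) - \langle \theta, y \rangle \;\ge\; 0.
\]
% Fitzpatrick function of the (maximally monotone) operator $A = \partial \Omega$:
\[
  F_{A}(y, \theta) \;=\; \sup_{(y', \theta') \in \operatorname{gra} A}
      \big( \langle y, \theta' \rangle + \langle y', \theta \rangle - \langle y', \theta' \rangle \big),
\]
% which is sandwiched between the pairing and the Fenchel sum:
\[
  \langle y, \theta \rangle \;\le\; F_{\partial \Omega}(y, \theta) \;\le\; \Omega(y) + \Omega^{*}(\theta),
\]
% so $F_{\partial \Omega}(y, \theta) - \langle y, \theta \rangle$ is a nonnegative loss
% that lower-bounds (i.e., is tighter than) $L_{\Omega}(\theta; y)$.
```

Equality in the left inequality holds exactly on the graph of ∂Ω, so the tighter loss vanishes at the same points as the Fenchel-Young loss, which is consistent with the summary's note that prediction link functions are preserved.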
Learning to Understand: Identifying Interactions via the Möbius Transform
·2143 words·11 mins
AI Theory Interpretability 🏢 UC Berkeley
Unlocking complex models’ secrets: New algorithm identifies input interactions using the Möbius Transform, boosting interpretability with surprising speed and accuracy.
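For reference, the interactions mentioned above are exactly the Möbius coefficients of a set function; a brute-force O(3^n) sketch is below. The paper's contribution is a much faster sparse algorithm, which this sketch does not attempt, and the toy function is invented for illustration.

```python
from itertools import chain, combinations

def subsets(s):
    """All subsets of s, as frozensets."""
    items = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(items, k) for k in range(len(items) + 1))]

def mobius_transform(f, n):
    """Brute-force Moebius coefficients of a set function f on {0, ..., n-1}.

    a(S) = sum over T subset of S of (-1)^(|S| - |T|) * f(T), so that
    f(S) = sum over T subset of S of a(T).  Nonzero a(S) with |S| >= 2 are
    the interaction terms.
    """
    ground = frozenset(range(n))
    return {S: sum((-1) ** (len(S) - len(T)) * f(T) for T in subsets(S))
            for S in subsets(ground)}

# Toy model: two inputs only matter together (a pure AND interaction).
f = lambda S: 1.0 if {0, 1} <= S else 0.0
coeffs = mobius_transform(f, n=3)
print({tuple(sorted(S)): a for S, a in coeffs.items() if abs(a) > 1e-12})
# -> {(0, 1): 1.0}: the whole function is a single pairwise interaction.
```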
Learning to Solve Quadratic Unconstrained Binary Optimization in a Classification Way
·3329 words·16 mins
AI Theory Optimization 🏢 National University of Defence Technology
Researchers developed the Value Classification Model (VCM), a neural solver that swiftly solves quadratic unconstrained binary optimization (QUBO) problems by directly generating solutions using a classification approach.
Learning to Mitigate Externalities: the Coase Theorem with Hindsight Rationality
·314 words·2 mins
AI Theory Optimization 🏢 University of California, Berkeley
Externalities can be resolved efficiently even when players lack perfect information, with bargaining and online learning maximizing social welfare.
Learning to Handle Complex Constraints for Vehicle Routing Problems
·3237 words·16 mins
AI Theory Optimization 🏢 Nanyang Technological University
The Proactive Infeasibility Prevention (PIP) framework significantly improves neural methods for solving complex Vehicle Routing Problems by proactively preventing infeasible solutions and enhancing constraint satisfaction.