AI Theory
Strategic Littlestone Dimension: Improved Bounds on Online Strategic Classification
·389 words·2 mins
AI Theory
Optimization
🏢 Toyota Technological Institute at Chicago
This paper introduces the Strategic Littlestone Dimension, a novel complexity measure for online strategic classification, proving instance-optimal mistake bounds in the realizable setting and improve…
Strategic Linear Contextual Bandits
·1349 words·7 mins
AI Theory
Optimization
🏢 Alan Turing Institute
A novel mechanism tackles strategic agents gaming recommender systems, incentivizing truthful behavior while minimizing regret and resolving a key challenge in online learning.
Stochastic Zeroth-Order Optimization under Strong Convexity and Lipschitz Hessian: Minimax Sample Complexity
·361 words·2 mins
AI Theory
Optimization
🏢 UC Santa Barbara
Stochastic zeroth-order optimization of strongly convex functions with Lipschitz Hessian achieves optimal sample complexity, as proven by matching upper and lower bounds with a novel two-stage algorit…
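A minimal sketch of the primitive underlying methods like this one: a two-point gradient estimator built purely from noisy function evaluations. The quadratic objective, step size, and smoothing radius below are illustrative assumptions, not the paper's two-stage algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Noisy zeroth-order oracle for a strongly convex quadratic."""
    return 0.5 * x @ x + 1e-4 * rng.standard_normal()

def zo_grad(x, mu=1e-2):
    """Two-point gradient estimate from function values alone."""
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

x = np.ones(10)
for _ in range(2000):
    x -= 0.05 * zo_grad(x)          # plain ZO gradient descent

print(np.linalg.norm(x))            # approaches the minimizer at 0
```

In expectation the estimator equals the true gradient (since E[uuᵀ] = I), which is what lets sample-complexity analyses of this kind go through.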
Stochastic Optimization Algorithms for Instrumental Variable Regression with Streaming Data
·1722 words·9 mins
AI Theory
Causality
🏢 UC Davis
New streaming algorithms for instrumental variable regression achieve fast convergence rates, solving the problem efficiently without matrix inversions or mini-batches, enabling real-time causal analy…
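As a flavor of inversion-free streaming IV estimation (not the paper's algorithms), here is a one-sample stochastic-approximation update for the moment condition E[z(y − xᵀβ)] = 0; the data-generating process and step-size schedule are assumptions of the demo.

```python
import numpy as np

rng = np.random.default_rng(1)
beta_true = np.array([1.0, -2.0, 0.5])
beta = np.zeros(3)

for t in range(200_000):
    z = rng.standard_normal(3)                        # instrument
    u = rng.standard_normal()                         # unobserved confounder
    x = z + 0.5 * u + 0.1 * rng.standard_normal(3)    # endogenous regressor
    y = x @ beta_true + u                             # outcome: error correlates with x
    eta = 1.0 / (t + 100)                             # decaying step size
    beta += eta * z * (y - x @ beta)                  # moment update, no matrix inversion

print(beta)   # near beta_true despite endogeneity (OLS would be biased)
```

Each update costs O(d) per sample and touches each observation once, which is the streaming regime the paper targets.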
Stochastic Optimal Control Matching
·1801 words·9 mins
AI Theory
Optimization
🏢 Meta AI
Stochastic Optimal Control Matching (SOCM) significantly reduces errors in stochastic optimal control by learning a matching vector field using a novel iterative diffusion optimization technique.
Stochastic Optimal Control and Estimation with Multiplicative and Internal Noise
·2159 words·11 mins
AI Theory
Optimization
🏢 Pompeu Fabra University
A novel algorithm significantly improves stochastic optimal control by accurately modeling sensorimotor noise, achieving substantially lower costs than current state-of-the-art solutions, particularly…
Stochastic Newton Proximal Extragradient Method
·1769 words·9 mins
AI Generated
AI Theory
Optimization
🏢 University of Texas at Austin
Stochastic Newton Proximal Extragradient (SNPE) achieves faster global and local convergence rates for strongly convex functions, improving upon existing stochastic Newton methods by requiring signifi…
Stochastic Extragradient with Flip-Flop Shuffling & Anchoring: Provable Improvements
·1705 words·9 mins
AI Generated
AI Theory
Optimization
🏢 KAIST
Stochastic extragradient with flip-flop shuffling & anchoring achieves provably faster convergence in minimax optimization.
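To make the two ingredients concrete, here is a toy stochastic-extragradient loop on a finite-sum bilinear game with flip-flop shuffling (each epoch replays the permutation in reverse); the anchoring step is simplified to averaging with the epoch start, an assumption of this sketch rather than the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
A_i = [rng.standard_normal((2, 2)) for _ in range(n)]   # components of the min-max game

def F(w, M):
    """Operator of the bilinear game x^T M y at w = (x, y)."""
    x, y = w[:2], w[2:]
    return np.concatenate([M @ y, -M.T @ x])

w, eta = np.ones(4), 0.1
for epoch in range(500):
    perm, w0 = rng.permutation(n), w.copy()
    for order in (perm, perm[::-1]):          # flip-flop: forward, then reversed
        for i in order:
            w_half = w - eta * F(w, A_i[i])   # extrapolation step
            w = w - eta * F(w_half, A_i[i])   # update step
    w = 0.5 * (w0 + w)                        # simplified anchoring toward epoch start

print(np.linalg.norm(w))   # distance to the unique equilibrium at 0
```

The reversed pass cancels the leading-order shuffling bias, which is the intuition behind the provable speedup.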
Stochastic Concept Bottleneck Models
·2532 words·12 mins
AI Generated
AI Theory
Interpretability
🏢 ETH Zurich
Stochastic Concept Bottleneck Models (SCBMs) revolutionize interpretable ML by efficiently modeling concept dependencies, drastically improving intervention effectiveness and enabling CLIP-based conce…
Stochastic Amortization: A Unified Approach to Accelerate Feature and Data Attribution
·2842 words·14 mins
AI Theory
Interpretability
🏢 Stanford University
Stochastic Amortization accelerates feature and data attribution by training amortized models using noisy, yet unbiased, labels, achieving order-of-magnitude speedups over existing methods.
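The core trick is that regression on noisy but unbiased labels shares its minimizer with regression on exact labels, so cheap one-sample attribution estimates suffice as training targets. A minimal sketch with a linear "explained" model and a linear amortized attributor, both assumptions of the demo:

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 5, 50_000
w = rng.standard_normal(d)              # model being explained: f(x) = w @ x

def noisy_attribution(x):
    """Unbiased one-sample estimate of the exact attribution w * x
    (standing in for a one-permutation Monte Carlo Shapley estimate)."""
    return w * x + rng.standard_normal(d)

theta = np.zeros(d)                     # amortized attributor: x -> theta * x
for t in range(n):
    x = rng.standard_normal(d)
    grad = (theta * x - noisy_attribution(x)) * x   # squared-loss gradient
    theta -= grad / (t + 100)                       # decaying step size

print(np.max(np.abs(theta - w)))   # small: the label noise averages out
```

After training, one forward pass replaces an expensive Monte Carlo attribution run per input, which is where the order-of-magnitude speedup comes from.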
Stepwise Alignment for Constrained Language Model Policy Optimization
·2517 words·12 mins
AI Theory
Safety
🏢 University of Tsukuba
Stepwise Alignment for Constrained Policy Optimization (SACPO) efficiently aligns LLMs with human values, prioritizing both helpfulness and harmlessness via a novel stepwise approach.
Statistical-Computational Trade-offs for Density Estimation
·433 words·3 mins
AI Theory
Optimization
🏢 MIT
Density estimation algorithms face inherent trade-offs: reducing sample needs often increases query time. This paper proves these trade-offs are fundamental, showing limits to how much improvement is…
Statistical Multicriteria Benchmarking via the GSD-Front
·2103 words·10 mins
AI Theory
Robustness
🏢 Ludwig-Maximilians-Universität München
Researchers can now reliably benchmark classifiers using multiple quality metrics via the GSD-front, a new information-efficient technique that accounts for statistical uncertainty and deviations from…
Statistical Estimation in the Spiked Tensor Model via the Quantum Approximate Optimization Algorithm
·1516 words·8 mins
AI Theory
Optimization
🏢 University of California, Los Angeles
Quantum Approximate Optimization Algorithm (QAOA) achieves weak recovery in spiked tensor models matching classical methods, but with potential constant factor advantages for certain parameters.
Statistical and Geometrical properties of the Kernel Kullback-Leibler divergence
·1547 words·8 mins
AI Theory
Optimization
🏢 CREST, ENSAE, IP Paris
Regularized Kernel Kullback-Leibler divergence solves the original KKL’s disjoint support limitation, enabling comparison of any probability distributions with a closed-form solution and efficient gra…
Stable Minima Cannot Overfit in Univariate ReLU Networks: Generalization by Large Step Sizes
·2167 words·11 mins
AI Theory
Generalization
🏢 University of California, San Diego
Deep ReLU networks trained with large, constant learning rates avoid overfitting in univariate regression due to minima stability, generalizing well even with noisy labels.
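The linear-stability mechanism is easy to see on a quadratic: gradient descent with step size η can only remain at a minimum whose curvature is below 2/η, so large steps filter out sharp minima. A toy check (the quadratic and thresholds are illustrative, not the paper's ReLU setting):

```python
import numpy as np

def gd_on_quadratic(curvature, lr, steps=100, x0=0.1):
    """Run GD on f(x) = 0.5 * curvature * x^2 and return the final iterate."""
    x = x0
    for _ in range(steps):
        x -= lr * curvature * x
    return x

lr = 0.5                       # large constant step size
for curvature in (1.0, 3.9, 4.1, 8.0):
    x_final = gd_on_quadratic(curvature, lr)
    status = "stable" if abs(x_final) <= 0.1 else "escapes"
    print(f"curvature {curvature:>4}: |x_final| = {abs(x_final):.2e}  ({status})")
# Only minima with curvature < 2/lr = 4 are stable: large step sizes
# can only settle in flat minima, the mechanism behind the paper's
# generalization argument.
```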
Stability and Generalization of Asynchronous SGD: Sharper Bounds Beyond Lipschitz and Smoothness
·1414 words·7 mins
AI Theory
Generalization
🏢 National University of Defense Technology
Sharper ASGD generalization bounds are achieved by leveraging on-average model stability, even without Lipschitz and smoothness assumptions; the results are validated on diverse machine learning models.
Stability and Generalization of Adversarial Training for Shallow Neural Networks with Smooth Activation
·201 words·1 min
AI Generated
AI Theory
Robustness
🏢 Johns Hopkins University
This paper provides novel theoretical guarantees for adversarial training of shallow neural networks, improving generalization bounds via early stopping and Moreau’s envelope smoothing.
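As background, adversarial training alternates an inner attack with an outer descent step; for a linear stand-in model the worst-case ℓ∞ attack is closed-form, δ = −ε·y·sign(w). A sketch under that simplification (not the paper's shallow-network or Moreau-envelope analysis):

```python
import numpy as np

rng = np.random.default_rng(4)
d, n, eps = 10, 5000, 0.1
w_true = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = np.sign(X @ w_true)

w = np.zeros(d)
for t in range(n):
    x, yt = X[t], y[t]
    x_adv = x - eps * yt * np.sign(w)   # worst-case l_inf attack on a linear model
    if yt * (x_adv @ w) < 1:            # hinge-loss subgradient step on the attacked point
        w += 0.01 * yt * x_adv

cos = w @ w_true / (np.linalg.norm(w) * np.linalg.norm(w_true) + 1e-12)
print(cos)   # cosine similarity to the ground-truth direction stays high under attack
```

For genuinely shallow networks the inner maximization has no closed form, which is exactly where the paper's smoothing and early-stopping analysis does its work.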
Spectral Graph Pruning Against Over-Squashing and Over-Smoothing
·4594 words·22 mins
AI Generated
AI Theory
Representation Learning
🏢 Universität des Saarlandes
Spectral graph pruning simultaneously mitigates over-squashing and over-smoothing in GNNs via edge deletion, improving generalization.
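A minimal sketch of the spectral criterion: score each edge by how deleting it changes the spectral gap (second-smallest eigenvalue) of the normalized Laplacian; edges whose removal raises the gap are pruning candidates, a Braess-like effect. The toy graph and scoring loop are illustrative assumptions.

```python
import numpy as np

def spectral_gap(A):
    """Second-smallest eigenvalue of the symmetric normalized Laplacian."""
    deg = A.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    L = np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    return np.sort(np.linalg.eigvalsh(L))[1]

# Toy graph: two triangles joined by a bottleneck edge plus a chord
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3), (1, 3)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

base = spectral_gap(A)
for i, j in edges:                       # score every single-edge deletion
    A[i, j] = A[j, i] = 0.0
    print(f"delete ({i},{j}): gap {spectral_gap(A):.3f}  (base {base:.3f})")
    A[i, j] = A[j, i] = 1.0              # restore the edge
```

A larger gap means faster mixing, which counteracts over-squashing without adding the extra edges that drive over-smoothing.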
Solving Inverse Problems via Diffusion Optimal Control
·2106 words·10 mins
AI Theory
Optimization
🏢 Yale University
Revolutionizing inverse problem solving, this paper introduces diffusion optimal control, a novel framework converting signal recovery into a discrete optimal control problem, surpassing limitations o…