Optimization
Statistical and Geometrical properties of the Kernel Kullback-Leibler divergence
·1547 words·8 mins·
AI Theory
Optimization
🏢 CREST, ENSAE, IP Paris
Regularized Kernel Kullback-Leibler divergence solves the original KKL’s disjoint-support limitation, enabling comparison of arbitrary probability distributions with a closed-form solution and efficient gra…
Solving Sparse & High-Dimensional-Output Regression via Compression
·2050 words·10 mins·
AI Generated
Machine Learning
Optimization
🏢 National University of Singapore
SHORE: a novel two-stage framework efficiently solves sparse & high-dimensional-output regression, boosting interpretability and scalability.
Solving Inverse Problems via Diffusion Optimal Control
·2106 words·10 mins·
AI Theory
Optimization
🏢 Yale University
Revolutionizing inverse problem solving, this paper introduces diffusion optimal control, a novel framework converting signal recovery into a discrete optimal control problem, surpassing limitations o…
Smoothed Online Classification can be Harder than Batch Classification
·302 words·2 mins·
AI Generated
AI Theory
Optimization
🏢 University of Michigan
Smoothed online classification can be harder than batch classification when label spaces are unbounded, challenging existing assumptions in machine learning.
Small coresets via negative dependence: DPPs, linear statistics, and concentration
·402 words·2 mins·
Optimization
🏢 University of Lille
DPPs produce smaller yet more accurate coresets than existing methods, improving machine learning efficiency.
Slack-Free Spiking Neural Network Formulation for Hypergraph Minimum Vertex Cover
·1466 words·7 mins·
AI Theory
Optimization
🏢 Intel Labs
A novel slack-free spiking neural network efficiently solves the Hypergraph Minimum Vertex Cover problem on neuromorphic hardware, outperforming CPU-based methods in both speed and energy efficiency.
SkipPredict: When to Invest in Predictions for Scheduling
·2285 words·11 mins·
AI Theory
Optimization
🏢 Harvard University
SkipPredict optimizes scheduling by prioritizing cheap predictions and using expensive ones only when necessary, achieving cost-effective performance.
Single-Loop Stochastic Algorithms for Difference of Max-Structured Weakly Convex Functions
·1750 words·9 mins·
AI Generated
Machine Learning
Optimization
🏢 Texas A&M University
SMAG, a novel single-loop stochastic algorithm, achieves state-of-the-art convergence for solving non-smooth non-convex optimization problems involving differences of max-structured weakly convex func…
Shuffling Gradient-Based Methods for Nonconvex-Concave Minimax Optimization
·337 words·2 mins·
AI Generated
AI Theory
Optimization
🏢 IBM Research
New shuffling gradient methods achieve state-of-the-art oracle complexity for nonconvex-concave minimax optimization problems, offering improved performance and efficiency.
Sharpness-Aware Minimization Activates the Interactive Teaching's Understanding and Optimization
·1829 words·9 mins·
AI Theory
Optimization
🏢 School of Artificial Intelligence, Jilin University
Sharpness Reduction Interactive Teaching (SRIT) integrates SAM’s sharpness-aware updates into interactive teaching, improving both model accuracy and generalization.
Shaping the distribution of neural responses with interneurons in a recurrent circuit model
·1538 words·8 mins·
AI Theory
Optimization
🏢 Center for Computational Neuroscience, Flatiron Institute
Researchers developed a recurrent neural circuit model that efficiently transforms sensory signals into neural representations by dynamically adjusting interneuron connectivity and activation function…
SGD vs GD: Rank Deficiency in Linear Networks
·381 words·2 mins·
AI Theory
Optimization
🏢 EPFL
SGD surprisingly diminishes network rank, unlike GD, due to a repulsive force between eigenvalues, offering insights into deep learning generalization.
Sequential Probability Assignment with Contexts: Minimax Regret, Contextual Shtarkov Sums, and Contextual Normalized Maximum Likelihood
·217 words·2 mins·
AI Theory
Optimization
🏢 University of Toronto
This paper introduces contextual Shtarkov sums, a new complexity measure characterizing minimax regret in sequential probability assignment with contexts, and derives the minimax optimal algorithm, co…
Separation and Bias of Deep Equilibrium Models on Expressivity and Learning Dynamics
·2192 words·11 mins·
AI Generated
AI Theory
Optimization
🏢 Peking University
Deep Equilibrium Models (DEQs) outperform standard neural networks but lack theoretical understanding. This paper provides general separation results showing DEQs’ superior expressivity and character…
Semidefinite Relaxations of the Gromov-Wasserstein Distance
·2209 words·11 mins·
AI Theory
Optimization
🏢 National University of Singapore
This paper introduces a novel, tractable semidefinite program (SDP) relaxation for the Gromov-Wasserstein distance, enabling the computation of globally optimal transportation plans.
Semi-Random Matrix Completion via Flow-Based Adaptive Reweighting
·349 words·2 mins·
AI Theory
Optimization
🏢 MIT
New nearly-linear-time algorithm achieves high-accuracy semi-random matrix completion, overcoming previous limitations on accuracy and noise tolerance.
Scaling Laws in Linear Regression: Compute, Parameters, and Data
·1352 words·7 mins·
AI Theory
Optimization
🏢 UC Berkeley
Deep learning’s neural scaling laws defy conventional wisdom; this paper uses infinite-dimensional linear regression to theoretically explain this phenomenon, showing that implicit regularization of S…
Scalable Neural Network Verification with Branch-and-bound Inferred Cutting Planes
·2551 words·12 mins·
AI Generated
AI Theory
Optimization
🏢 University of Illinois Urbana-Champaign
BICCOS: Scalable neural network verification via branch-and-bound inferred cutting planes.
Scalable Bayesian Optimization via Focalized Sparse Gaussian Processes
·2563 words·13 mins·
AI Generated
Machine Learning
Optimization
🏢 Tsinghua University
FOCALBO, a hierarchical Bayesian optimization algorithm using focalized sparse Gaussian processes, efficiently tackles high-dimensional problems with massive datasets, achieving state-of-the-art perfo…
Sample-Efficient Geometry Reconstruction from Euclidean Distances using Non-Convex Optimization
·1912 words·9 mins·
AI Theory
Optimization
🏢 University of North Carolina at Charlotte
Reconstructing geometry from minimal Euclidean distance samples: A novel algorithm achieves state-of-the-art data efficiency with theoretical guarantees.