AI Theory
No-regret Learning in Harmonic Games: Extrapolation in the Face of Conflicting Interests
·354 words·2 mins
AI Theory
Optimization
🏢 University of Oxford
Extrapolated FTRL ensures Nash equilibrium convergence in harmonic games, defying standard no-regret learning limitations.
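To see why extrapolation matters, here is a minimal sketch of the extragradient idea on a bilinear zero-sum game, where plain gradient descent-ascent cycles but a "look ahead, then update" step converges to the Nash equilibrium. This is a generic extragradient illustration, not the paper's exact extrapolated-FTRL method; the function names are ours.

```python
import numpy as np

# Bilinear zero-sum game: min_x max_y  f(x, y) = x * y.
# Plain gradient descent-ascent orbits away from the equilibrium (0, 0);
# an extrapolation (extragradient) step restores convergence.

def grad(x, y):
    # gradients of f(x, y) = x * y for the min player (x) and max player (y)
    return y, x

def extragradient(x, y, lr=0.1, steps=2000):
    for _ in range(steps):
        gx, gy = grad(x, y)
        # extrapolation ("leading") step
        x_half, y_half = x - lr * gx, y + lr * gy
        # the actual update uses the gradient at the extrapolated point
        gx, gy = grad(x_half, y_half)
        x, y = x - lr * gx, y + lr * gy
    return x, y

x, y = extragradient(1.0, 1.0)
# (x, y) contracts toward the Nash equilibrium (0, 0)
```

One step works out to a linear map with spectral radius below 1 for small learning rates, which is exactly the contraction that plain descent-ascent lacks on this game.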
No-Regret Learning for Fair Multi-Agent Social Welfare Optimization
·277 words·2 mins
AI Theory
Fairness
🏢 University of Iowa
This paper solves the open problem of achieving no-regret learning in online multi-agent Nash social welfare maximization.
No Free Lunch Theorem and Black-Box Complexity Analysis for Adversarial Optimisation
·532 words·3 mins
AI Generated
AI Theory
Optimization
🏢 University of Birmingham
No free lunch for adversarial optimization: this paper proves that no single algorithm universally outperforms others at finding Nash equilibria, and introduces black-box complexity analysis for adversarial optimisation.
No Free Delivery Service: Epistemic limits of passive data collection in complex social systems
·2178 words·11 mins
AI Theory
Generalization
🏢 Meta AI
Passive data collection in complex social systems invalidates standard AI model validation; new methods are needed.
Nimbus: Secure and Efficient Two-Party Inference for Transformers
·3036 words·15 mins
AI Generated
AI Theory
Privacy
🏢 Shanghai Jiao Tong University
Nimbus achieves 2.7-4.7x speedup in BERT base inference using novel two-party computation techniques for efficient matrix multiplication and non-linear layer approximation.
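The standard building block behind two-party matrix multiplication of this kind is additive secret sharing with a Beaver triple. Below is a toy single-process simulation of that primitive (the paper's actual protocol adds further optimizations for transformers; `share` and the variable names are ours).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy simulation of secure two-party matrix multiplication with additive
# secret sharing and a Beaver triple (A, B, C = A @ B).

def share(M):
    """Split M into two additive shares M = M0 + M1."""
    M0 = rng.integers(-100, 100, M.shape)
    return M0, M - M0

# Private inputs, held as shares by parties 0 and 1
X = rng.integers(-5, 5, (2, 3))
Y = rng.integers(-5, 5, (3, 2))
X0, X1 = share(X)
Y0, Y1 = share(Y)

# A trusted dealer distributes shares of a random triple with C = A @ B
A = rng.integers(-5, 5, X.shape)
B = rng.integers(-5, 5, Y.shape)
C = A @ B
A0, A1 = share(A); B0, B1 = share(B); C0, C1 = share(C)

# Each party opens its masked inputs; E = X - A and F = Y - B become public
E = (X0 - A0) + (X1 - A1)
F = (Y0 - B0) + (Y1 - B1)

# Local computation of output shares; only party 0 adds the public E @ F term
Z0 = C0 + E @ B0 + A0 @ F + E @ F
Z1 = C1 + E @ B1 + A1 @ F

# Reconstruction: Z0 + Z1 == X @ Y, yet neither party saw X or Y in the clear
```

The identity being exploited is `XY = C + A F + E B + E F` with `E = X - A`, `F = Y - B`: everything a party sees is either a uniformly random share or a value masked by one.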
Neural Pfaffians: Solving Many Many-Electron Schrödinger Equations
·2649 words·13 mins
AI Theory
Optimization
🏢 Technical University of Munich
Neural Pfaffians revolutionize many-electron Schrödinger equation solutions by using fully learnable neural wave functions based on Pfaffians, achieving unprecedented accuracy and generalizability.
Neural Persistence Dynamics
·2242 words·11 mins
AI Theory
Representation Learning
🏢 University of Salzburg
Neural Persistence Dynamics learns collective behavior from topological features, accurately predicting parameters of governing equations without tracking individual entities.
Neural Network Reparametrization for Accelerated Optimization in Molecular Simulations
·2783 words·14 mins
AI Generated
AI Theory
Optimization
🏢 IBM Research
Accelerate molecular simulations using neural network reparametrization! This flexible method adjusts system complexity, enhances optimization, and maintains continuous access to fine-grained modes.
Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit
·455 words·3 mins
AI Generated
AI Theory
Generalization
🏢 Princeton University
SGD can train neural networks to learn low-dimensional polynomials near the information-theoretic limit, surpassing previous correlational statistical query lower bounds.
Neural Model Checking
·2039 words·10 mins
AI Theory
Safety
🏢 University of Birmingham
Neural networks revolutionize hardware model checking by generating formal proof certificates, outperforming state-of-the-art techniques in speed and scalability.
Neural Combinatorial Optimization for Robust Routing Problem with Uncertain Travel Times
·2186 words·11 mins
AI Theory
Optimization
🏢 Sun Yat-Sen University
Neural networks efficiently solve robust routing problems with uncertain travel times, minimizing worst-case deviations from optimal routes under the min-max regret criterion.
Neural collapse vs. low-rank bias: Is deep neural collapse really optimal?
·2988 words·15 mins
AI Generated
AI Theory
Optimization
🏢 Institute of Science and Technology Austria
Deep neural collapse, previously believed optimal, is shown suboptimal in multi-class, multi-layer networks due to a low-rank bias, yielding even lower-rank solutions.
Neur2BiLO: Neural Bilevel Optimization
·2909 words·14 mins
AI Theory
Optimization
🏢 University of Toronto
NEUR2BILO: a neural network-based heuristic solves mixed-integer bilevel optimization problems extremely fast, achieving high-quality solutions for diverse applications.
Nesterov acceleration despite very noisy gradients
·2415 words·12 mins
AI Generated
AI Theory
Optimization
🏢 University of Pittsburgh
AGNES, a novel accelerated gradient descent algorithm, achieves accelerated convergence even with very noisy gradients, significantly improving training efficiency for machine learning models.
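For intuition, here is generic Nesterov momentum run against a noisy gradient oracle on a simple quadratic: acceleration still drives the iterate into a small noise-floor neighborhood of the minimizer. This is only a hedged illustration with an additive-noise oracle; AGNES itself modifies the step-size/momentum coupling to handle noise, which is not shown here, and the function names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad(x, sigma=0.01):
    # gradient of f(x) = 0.5 * ||x||^2, corrupted by Gaussian noise
    return x + sigma * rng.normal(size=x.shape)

def nesterov(x0, lr=0.05, momentum=0.9, steps=500):
    x, v = x0.copy(), np.zeros_like(x0)
    for _ in range(steps):
        lookahead = x + momentum * v          # extrapolate before evaluating
        v = momentum * v - lr * noisy_grad(lookahead)
        x = x + v
    return x

x = nesterov(np.ones(10))
# x ends up close to the minimizer 0 despite the noisy oracle
```

On this quadratic the deterministic part of the recursion is a linear map with spectral radius well below 1, so the distance to the optimum decays geometrically until the noise floor (of order step size times noise level) takes over.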
Nearly Tight Black-Box Auditing of Differentially Private Machine Learning
·1819 words·9 mins
AI Theory
Privacy
🏢 University College London
This paper presents a new auditing method for DP-SGD that provides substantially tighter black-box privacy analyses than previous methods, yielding empirical estimates significantly closer to theoretical bounds.
Nearly Optimal Approximation of Matrix Functions by the Lanczos Method
·1646 words·8 mins
AI Theory
Optimization
🏢 University of Washington
Lanczos-FA, a simple algorithm for approximating matrix functions, surprisingly outperforms newer methods; this paper proves its near-optimality for rational functions, explaining its practical success.
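The algorithm in question is compact enough to sketch: run k Lanczos steps to build an orthonormal basis Q_k and tridiagonal T_k, then approximate f(A)b ≈ ||b|| Q_k f(T_k) e_1. A minimal numpy version (f = exp for concreteness; the paper's guarantees concern rational f, and `lanczos_fa` is our illustrative name):

```python
import numpy as np

rng = np.random.default_rng(0)

def lanczos_fa(A, b, k, f=np.exp):
    """Approximate f(A) @ b for symmetric A via k steps of Lanczos."""
    n = len(b)
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    Q[:, 0] = b / np.linalg.norm(b)
    q_prev = np.zeros(n)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w               # diagonal of T_k
        w -= alpha[j] * Q[:, j] + (beta[j - 1] * q_prev if j > 0 else 0)
        q_prev = Q[:, j]
        if j < k - 1:
            beta[j] = np.linalg.norm(w)      # off-diagonal of T_k
            Q[:, j + 1] = w / beta[j]
    # apply f to the small tridiagonal T_k via its eigendecomposition
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, evecs = np.linalg.eigh(T)
    fT_e1 = evecs @ (f(evals) * evecs[0])    # f(T_k) e_1
    return np.linalg.norm(b) * Q @ fT_e1

A = rng.normal(size=(50, 50)); A = (A + A.T) / 10   # symmetric test matrix
b = rng.normal(size=50)
approx = lanczos_fa(A, b, k=20)
```

Only matrix-vector products with A are needed, which is why the method scales to matrices far too large to decompose directly.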
Nearly Minimax Optimal Submodular Maximization with Bandit Feedback
·384 words·2 mins
AI Theory
Optimization
🏢 University of Washington
This research establishes the first minimax optimal algorithm for submodular maximization with bandit feedback, achieving a regret bound matching the lower bound.
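The full-information baseline behind this setting is the classic greedy rule for monotone submodular maximization, which the bandit regret is measured against. A minimal sketch on max coverage (the bandit algorithm itself, which must estimate marginal gains from noisy feedback, is not shown; `greedy_max_coverage` is our name):

```python
# Offline greedy for monotone submodular maximization, shown on max
# coverage: repeatedly pick the set with the largest marginal gain.
# This achieves a (1 - 1/e)-approximation to the best size-k solution.
def greedy_max_coverage(sets, k):
    covered, chosen = set(), []
    for _ in range(k):
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
chosen, covered = greedy_max_coverage(sets, k=2)
# picks the size-4 set first, then the set covering 3 new elements
```

Marginal gain (`len(sets[i] - covered)`) is exactly the quantity a bandit learner must estimate from noisy evaluations, which is where the regret analysis enters.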
Nearly Minimax Optimal Regret for Multinomial Logistic Bandit
·1353 words·7 mins
AI Theory
Optimization
🏢 Seoul National University
This paper presents OFU-MNL+, a constant-time algorithm achieving nearly minimax optimal regret for contextual multinomial logistic bandits, closing the gap between existing upper and lower bounds.
Near-Optimal Streaming Heavy-Tailed Statistical Estimation with Clipped SGD
·397 words·2 mins
AI Generated
AI Theory
Optimization
🏢 Stanford University
Clipped SGD achieves near-optimal sub-Gaussian rates for high-dimensional heavy-tailed statistical estimation in streaming settings, improving upon existing state-of-the-art results.
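The mechanism is simple to sketch for streaming mean estimation: each sample is used once, its gradient is norm-clipped, and a decaying-step SGD update is taken. A hedged illustration (the paper's step-size and clipping schedules, which yield the near-optimal rates, are more carefully tuned; `clipped_sgd_mean` is our name):

```python
import numpy as np

rng = np.random.default_rng(0)

def clipped_sgd_mean(samples, clip=5.0, lr0=1.0):
    """Streaming mean estimate via clipped SGD on 0.5 * ||theta - x||^2."""
    theta = np.zeros(samples.shape[1])
    for t, x in enumerate(samples, start=1):
        g = theta - x                      # per-sample gradient
        norm = np.linalg.norm(g)
        if norm > clip:                    # clip heavy-tailed gradients
            g *= clip / norm
        theta -= (lr0 / t) * g             # decaying step size
    return theta

# Heavy-tailed data: Student's t with 2.5 degrees of freedom, true mean = 1
samples = 1.0 + rng.standard_t(df=2.5, size=(20000, 3))
est = clipped_sgd_mean(samples)
```

Without clipping and with step size 1/t this is exactly the running sample mean; clipping is what tames the rare huge samples and buys sub-Gaussian-style concentration.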
Navigable Graphs for High-Dimensional Nearest Neighbor Search: Constructions and Limits
·495 words·3 mins
AI Generated
AI Theory
Optimization
🏢 New York University
Sparse navigable graphs enable efficient nearest neighbor search, but their construction and limits in high dimensions remain unclear. This paper presents an efficient method to construct navigable graphs.
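Navigability is easy to demonstrate in low dimension: a graph is navigable if the greedy walk (hop to whichever neighbor is closest to the query) always terminates at the true nearest neighbor. The 2D Delaunay graph classically has this property; a sketch assuming `scipy.spatial.Delaunay` (the paper studies how sparse such graphs can be in high dimensions, where Delaunay graphs become dense):

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)

points = rng.normal(size=(40, 2))
tri = Delaunay(points)
# adjacency lists of the Delaunay graph
indptr, indices = tri.vertex_neighbor_vertices
neighbors = [indices[indptr[i]:indptr[i + 1]] for i in range(len(points))]

def greedy_search(query, start=0):
    """Greedy routing: hop to the neighbor closest to the query."""
    cur = start
    while True:
        nbrs = neighbors[cur]
        best = nbrs[np.argmin(np.linalg.norm(points[nbrs] - query, axis=1))]
        if np.linalg.norm(points[best] - query) >= np.linalg.norm(points[cur] - query):
            return cur            # no neighbor is closer: cur is the answer
        cur = best

query = rng.normal(size=2)
found = greedy_search(query)      # matches brute-force nearest neighbor
```

Each hop strictly decreases the distance to the query, so the walk terminates; navigability of the graph is what guarantees it terminates at the true nearest neighbor rather than a local minimum.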