AI Theory

Learning Discrete Concepts in Latent Hierarchical Models
·2302 words·11 mins
AI Theory Interpretability 🏢 Carnegie Mellon University
This paper introduces a novel framework for learning discrete concepts from high-dimensional data, establishing theoretical conditions for identifying underlying hierarchical causal structures and pro…
Learning diffusion at lightspeed
·1990 words·10 mins
AI Theory Optimization 🏢 ETH Zurich
JKOnet* learns diffusion processes at unprecedented speed and accuracy by directly minimizing a simple quadratic loss function, bypassing complex bilevel optimization problems.
Learning Cut Generating Functions for Integer Programming
·1721 words·9 mins
AI Generated AI Theory Optimization 🏢 Johns Hopkins University
This research develops data-driven methods for selecting optimal cut generating functions in integer programming, providing theoretical guarantees and empirical improvements over existing techniques.
Learning Better Representations From Less Data For Propositional Satisfiability
·2124 words·10 mins
AI Theory Representation Learning 🏢 CISPA Helmholtz Center for Information Security
NeuRes, a novel neuro-symbolic approach, achieves superior SAT solving accuracy using significantly less training data than existing methods by combining certificate-driven learning with expert iterat…
Learning a Single Neuron Robustly to Distributional Shifts and Adversarial Label Noise
·235 words·2 mins
AI Theory Robustness 🏢 University of Wisconsin-Madison
This work presents a computationally efficient algorithm that robustly learns a single neuron despite adversarial label noise and distributional shifts, providing provable approximation guarantees.
Learnability of high-dimensional targets by two-parameter models and gradient flow
·2386 words·12 mins
AI Generated AI Theory Optimization 🏢 Skoltech
Two-parameter models can surprisingly learn high-dimensional targets with near-perfect accuracy using gradient flow, challenging the need for high-dimensional models.
Latent Neural Operator for Solving Forward and Inverse PDE Problems
·2797 words·14 mins
AI Theory Optimization 🏢 Institute of Automation, Chinese Academy of Sciences
Latent Neural Operator (LNO) dramatically improves solving PDEs by using a latent space, boosting accuracy and reducing computation costs.
Last-Iterate Convergence for Generalized Frank-Wolfe in Monotone Variational Inequalities
·1879 words·9 mins
AI Generated AI Theory Optimization 🏢 Purdue IE
Generalized Frank-Wolfe algorithm achieves fast last-iterate convergence for constrained monotone variational inequalities, even with noisy data.
Langevin Unlearning: A New Perspective of Noisy Gradient Descent for Machine Unlearning
·1899 words·9 mins
AI Theory Privacy 🏢 Georgia Institute of Technology
Langevin unlearning offers a novel, privacy-preserving machine unlearning framework based on noisy gradient descent that handles both convex and non-convex problems efficiently.
John Ellipsoids via Lazy Updates
·311 words·2 mins
AI Theory Optimization 🏢 Carnegie Mellon University
Faster John ellipsoid computation achieved via lazy updates and fast matrix multiplication, improving efficiency and enabling low-space streaming algorithms.
Iterative Methods via Locally Evolving Set Process
·3065 words·15 mins
AI Theory Optimization 🏢 Fudan University
This paper proposes a novel framework, the locally evolving set process, to develop faster localized iterative methods for solving large-scale graph problems, achieving significant speedup over existi…
Is Score Matching Suitable for Estimating Point Processes?
·1651 words·8 mins
AI Theory Optimization 🏢 Center for Applied Statistics and School of Statistics, Renmin University of China
Weighted score matching offers a consistent, efficient solution for estimating parameters in point processes, overcoming the limitations of previous methods.
Is O(log N) practical? Near-Equivalence Between Delay Robustness and Bounded Regret in Bandits and RL
·403 words·2 mins
AI Theory Robustness 🏢 University of Washington
Zero Graves-Lai constant ensures both bounded regret and delay robustness in online decision-making, particularly for linear models.
Is Knowledge Power? On the (Im)possibility of Learning from Strategic Interactions
·385 words·2 mins
AI Theory Optimization 🏢 UC Berkeley
In strategic settings, repeated interactions alone may not enable uninformed players to achieve optimal outcomes, highlighting the persistent impact of information asymmetry.
Is Cross-validation the Gold Standard to Estimate Out-of-sample Model Performance?
·1790 words·9 mins
AI Theory Optimization 🏢 Columbia University
Cross-validation isn’t always superior: simple plug-in methods often estimate out-of-sample model performance just as well, especially once computational costs are considered.
IPM-LSTM: A Learning-Based Interior Point Method for Solving Nonlinear Programs
·2991 words·15 mins
AI Generated AI Theory Optimization 🏢 Xi'an Jiaotong University
IPM-LSTM accelerates nonlinear program solving by up to 70% using LSTM networks to approximate linear system solutions within the interior point method.
Invariant subspaces and PCA in nearly matrix multiplication time
·336 words·2 mins
AI Theory Optimization 🏢 IBM Research
Generalized eigenvalue problems can now be solved in nearly matrix multiplication time, yielding new, faster PCA algorithms.
Intruding with Words: Towards Understanding Graph Injection Attacks at the Text Level
·5345 words·26 mins
AI Theory Robustness 🏢 Renmin University of China
Researchers unveil text-level graph injection attacks, revealing a new vulnerability in GNNs and highlighting the importance of text interpretability in attack success.
Intrinsic Robustness of Prophet Inequality to Strategic Reward Signaling
·248 words·2 mins
AI Generated AI Theory Robustness 🏢 Chinese University of Hong Kong
Strategic players can manipulate reward signals, but simple threshold policies still achieve a surprisingly good approximation to the optimal prophet value, even in this more realistic setting.
Interventionally Consistent Surrogates for Complex Simulation Models
·1862 words·9 mins
AI Generated AI Theory Causality 🏢 University of Oxford
This paper introduces a novel framework for creating interventionally consistent surrogate models for complex simulations, addressing computational limitations and ensuring accurate policy evaluation.