
AI Theory

Random Function Descent
·1682 words·8 mins
AI Theory Optimization 🏢 University of Mannheim
Random Function Descent (RFD) replaces the classical convex function framework with a random function approach, providing a scalable gradient descent method with inherent scale invariance and a theore…
Random Cycle Coding: Lossless Compression of Cluster Assignments via Bits-Back Coding
·1446 words·7 mins
AI Theory Optimization 🏢 University of Toronto
Random Cycle Coding (RCC) optimally compresses cluster assignments in large datasets, saving up to 70% storage in vector databases by eliminating the need for integer IDs.
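The headline saving comes from order-invariance: cluster membership carries no ordering information, so any bits spent picking one particular ordering are wasted, and bits-back coding can reclaim them. A minimal back-of-the-envelope sketch of that generic saving, not the paper's actual cycle-based construction (`universe_bits` is an assumed ID width):

```python
import math

def ordered_cost_bits(n: int, universe_bits: int = 64) -> float:
    """Cost of storing n element IDs as an explicit ordered list."""
    return n * universe_bits

def order_invariance_savings_bits(n: int) -> float:
    """An unordered collection of n distinct elements can be coded in
    log2(n!) fewer bits than any ordered encoding, since the ordering
    carries no information; bits-back methods reclaim this surplus."""
    return math.lgamma(n + 1) / math.log(2)  # log2(n!) via lgamma

n = 1_000_000
total = ordered_cost_bits(n)
saved = order_invariance_savings_bits(n)
print(f"ordered list: {total:.3e} bits")
print(f"reclaimable:  {saved:.3e} bits ({100 * saved / total:.0f}%)")
```

The exact fraction saved depends on the ID width and cluster sizes; the 70% figure above is the paper's reported saving in its vector-database setting.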
QWO: Speeding Up Permutation-Based Causal Discovery in LiGAMs
·1522 words·8 mins
AI Theory Causality 🏢 College of Management of Technology, EPFL
QWO: a novel method dramatically speeds up permutation-based causal discovery in linear Gaussian models, enabling the analysis of larger datasets and advancing causal inference.
Queueing Matching Bandits with Preference Feedback
·1365 words·7 mins
AI Generated AI Theory Optimization 🏢 Seoul National University
Novel algorithms stabilize multi-server queueing systems with unknown service rates, achieving sublinear regret by learning server preferences via preference feedback.
Query-Efficient Correlation Clustering with Noisy Oracle
·1484 words·7 mins
AI Theory Optimization 🏢 CENTAI Institute
Novel algorithms for query-efficient correlation clustering with noisy oracles achieve a balance between query complexity and solution quality, offering theoretical guarantees and outperforming baseli…
Quantum Algorithms for Non-smooth Non-convex Optimization
·360 words·2 mins
AI Theory Optimization 🏢 Chinese University of Hong Kong
Quantum algorithms achieve speedups in non-smooth, non-convex optimization, outperforming classical methods by a factor of ε⁻²/³ in query complexity for finding (δ,ε)-Goldstein stationary points.
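For context, (δ,ε)-Goldstein stationarity is the standard target in non-smooth non-convex optimization; this is the textbook definition, not anything specific to the paper's quantum construction. A point x qualifies when some convex combination of gradients taken within a δ-ball around x has norm at most ε:

```latex
\[
  \min_{g \,\in\, \operatorname{conv}\{\nabla f(y) \;:\; \|y - x\| \le \delta\}}
  \|g\| \;\le\; \epsilon ,
\]
```

where the convex hull is taken over gradients at nearby points at which f is differentiable.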
Quantum algorithm for large-scale market equilibrium computation
·643 words·4 mins
AI Generated AI Theory Optimization 🏢 Centre for Quantum Technologies, National University of Singapore
Quantum speedup achieved for large-scale market equilibrium computation!
Quantifying Aleatoric Uncertainty of the Treatment Effect: A Novel Orthogonal Learner
·2359 words·12 mins
AI Theory Causality 🏢 LMU Munich
New orthogonal learner quantifies treatment effect’s randomness, providing sharper insights beyond average effects.
Qualitative Mechanism Independence
·1560 words·8 mins
AI Theory Causality 🏢 Cornell University
Researchers introduce QIM-compatibility, a novel framework for modeling qualitative relationships in probability distributions using directed hypergraphs, significantly expanding beyond standard condi…
Quadratic Quantum Variational Monte Carlo
·1669 words·8 mins
AI Theory Optimization 🏢 University of Texas at Austin
Q2VMC, a novel quantum chemistry algorithm, drastically boosts the efficiency and accuracy of solving the Schrödinger equation using a quadratic update mechanism and neural network ansatzes.
Putting Gale & Shapley to Work: Guaranteeing Stability Through Learning
·1809 words·9 mins
AI Theory Optimization 🏢 Penn State University
Researchers improve two-sided matching market algorithms by prioritizing stability through novel bandit-learning algorithms, providing theoretical bounds on sample complexity and demonstrating intrigu…
Public-data Assisted Private Stochastic Optimization: Power and Limitations
·337 words·2 mins
AI Generated AI Theory Privacy 🏢 Meta
Leveraging public data enhances differentially private (DP) learning, but its limits are unclear. This paper establishes tight theoretical bounds for DP stochastic convex optimization, revealing when …
Pseudo-Private Data Guided Model Inversion Attacks
·4550 words·22 mins
AI Generated AI Theory Privacy 🏢 University of Texas at Austin
Pseudo-Private Data Guided Model Inversion (PPDG-MI) significantly improves model inversion attacks by dynamically tuning the generative model to increase the sampling probability of actual private da…
Proximal Causal Inference With Text Data
·4077 words·20 mins
AI Theory Causality 🏢 Johns Hopkins University
Unmeasured confounders hinder causal inference; this paper introduces a novel method using two pre-treatment text instances and zero-shot models to infer proxies for unobserved confounders, enabling p…
Proving Theorems Recursively
·2409 words·12 mins
AI Theory Optimization 🏢 University of Edinburgh
POETRY: a recursive neural theorem prover achieving a 5.1% higher success rate and solving substantially longer proofs.
Provably Safe Neural Network Controllers via Differential Dynamic Logic
·2824 words·14 mins
AI Theory Safety 🏢 Karlsruhe Institute of Technology
Verifiably safe AI controllers are created via a novel framework, VerSAILLE, which uses differential dynamic logic and open-loop NN verification to prove safety for unbounded time horizons.
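For readers new to differential dynamic logic (dL), unbounded-horizon safety claims of this kind are typically phrased as an invariance property of a repeated control-plant loop. A generic dL contract shape, illustrative only and not VerSAILLE's concrete model:

```latex
% init:  assumptions on the initial state
% ctrl:  the (neural network) controller, plant: the continuous dynamics
% [.]* : "after every run of arbitrarily many loop iterations"
\mathit{init} \;\rightarrow\; [\,\{\mathit{ctrl};\; \mathit{plant}\}^{*}\,]\;\mathit{safe}
```

Because the loop may repeat any number of times, proving such a formula yields safety for an unbounded time horizon rather than up to a fixed bound.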
Provably Optimal Memory Capacity for Modern Hopfield Models: Transformer-Compatible Dense Associative Memories as Spherical Codes
·1812 words·9 mins
AI Theory Representation Learning 🏢 Northwestern University
Researchers achieve provably optimal memory capacity in transformer-compatible Hopfield models by framing the problem as an optimal spherical code arrangement, resulting in a novel sublinear time algo…
Provable Tempered Overfitting of Minimal Nets and Typical Nets
·1386 words·7 mins
AI Theory Generalization 🏢 Technion
Deep learning’s generalization ability defies conventional wisdom; this paper proves that overfitting in deep neural networks is ‘tempered’, neither catastrophic nor perfectly benign, for both minimal…
Provable Editing of Deep Neural Networks using Parametric Linear Relaxation
·1758 words·9 mins
AI Theory Robustness 🏢 UC Davis
PREPARED efficiently edits DNNs to provably satisfy properties by relaxing the problem to a linear program, minimizing parameter changes.
Provable Benefits of Complex Parameterizations for Structured State Space Models
·1827 words·9 mins
AI Generated AI Theory Optimization 🏢 Tel Aviv University
Complex numbers boost neural network performance! This study proves that complex parameterizations in structured state space models (SSMs) enable more efficient and practical learning of complex mappi…