
AI Theory

Revisiting Differentially Private ReLU Regression
·1421 words·7 mins
AI Theory Privacy 🏢 KAUST
Differentially private ReLU regression algorithms, DP-GLMtron and DP-TAGLMtron, achieve comparable performance with only an additional factor of O(log N) in the utility upper bound compared to the con…
Rethinking Weight Decay for Robust Fine-Tuning of Foundation Models
·1703 words·8 mins
AI Theory Robustness 🏢 Georgia Institute of Technology
Selective Projection Decay (SPD) enhances robust fine-tuning of foundation models by selectively applying weight decay, improving generalization and out-of-distribution robustness.
Rethinking the Capacity of Graph Neural Networks for Branching Strategy
·1678 words·8 mins
AI Generated AI Theory Optimization 🏢 MIT
This paper proves that higher-order GNNs can universally approximate strong branching in MILP solvers, whereas simpler GNNs can approximate it accurately only for a specific class of problems.
Rethinking Parity Check Enhanced Symmetry-Preserving Ansatz
·2377 words·12 mins
AI Theory Optimization 🏢 Shanghai Jiao Tong University
Enhanced VQAs via a Hamming-weight-preserving ansatz and parity checks achieve superior performance on quantum chemistry and combinatorial problems, showcasing quantum advantage potential in the NISQ era.
Reshuffling Resampling Splits Can Improve Generalization of Hyperparameter Optimization
·4058 words·20 mins
AI Theory Optimization 🏢 Munich Center for Machine Learning (MCML)
Reshuffling data splits during hyperparameter optimization surprisingly improves model generalization, offering a computationally cheaper alternative to standard methods.
Replicable Uniformity Testing
·268 words·2 mins
AI Generated AI Theory Optimization 🏢 UC San Diego
This paper presents the first replicable uniformity tester with nearly linear dependence on the replicability parameter, enhancing the reliability of scientific studies using distribution testing algo…
Replicability in Learning: Geometric Partitions and KKM-Sperner Lemma
·301 words·2 mins
AI Theory Optimization 🏢 Sandia National Laboratories
This paper reveals near-optimal relationships between geometric partitions and replicability in machine learning, establishing the optimality of existing algorithms and introducing a new neighborhood …
ReLIZO: Sample Reusable Linear Interpolation-based Zeroth-order Optimization
·2192 words·11 mins
AI Theory Optimization 🏢 Shanghai Jiao Tong University
ReLIZO boosts zeroth-order optimization by cleverly reusing past queries, drastically cutting computation costs while maintaining gradient estimation accuracy.
Reliable Learning of Halfspaces under Gaussian Marginals
·265 words·2 mins
AI Theory Optimization 🏢 University of Wisconsin-Madison
A new algorithm reliably learns Gaussian halfspaces with significantly improved sample and computational complexity compared to existing methods, offering strong computational separation from standard a…
Relational Verification Leaps Forward with RABBit
·1822 words·9 mins
AI Theory Robustness 🏢 University of Illinois Urbana-Champaign
RABBit: A novel Branch-and-Bound verifier for precise relational verification of Deep Neural Networks, achieving substantial precision gains over current state-of-the-art baselines.
Reimagining Mutual Information for Enhanced Defense against Data Leakage in Collaborative Inference
·1566 words·8 mins
AI Theory Privacy 🏢 Department of Electrical and Computer Engineering, Duke University
InfoScissors defends collaborative inference from data leakage by cleverly reducing the mutual information between model outputs and sensitive device data, thus ensuring robust privacy without comprom…
Regression under demographic parity constraints via unlabeled post-processing
·1578 words·8 mins
AI Generated AI Theory Fairness 🏢 IRT SystemX, Université Gustave Eiffel
Ensuring fair regression predictions without using sensitive attributes? This paper presents a novel post-processing algorithm, achieving demographic parity with strong theoretical guarantees and comp…
RegExplainer: Generating Explanations for Graph Neural Networks in Regression Tasks
·2208 words·11 mins
AI Theory Interpretability 🏢 New Jersey Institute of Technology
RegExplainer unveils a novel method for interpreting graph neural networks in regression tasks, bridging the explanation gap by addressing distribution shifts and tackling continuously ordered decisio…
Refusal in Language Models Is Mediated by a Single Direction
·4093 words·20 mins
AI Theory Safety 🏢 Independent
LLM refusal is surprisingly mediated by a single, easily manipulated direction in the model’s activation space.
ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution
·3978 words·19 mins
AI Theory Optimization 🏢 Peking University
ReEvo, a novel integration of evolutionary search and LLM reflections, generates state-of-the-art heuristics for combinatorial optimization problems, demonstrating superior sample efficiency.
Recurrent neural networks: vanishing and exploding gradients are not the end of the story
·2602 words·13 mins
AI Theory Optimization 🏢 ETH Zurich
Recurrent neural networks struggle with long-term memory due to a newly identified ‘curse of memory’: parameter sensitivity that grows with memory length. This work provides insights into RNN optimiza…
Reconstruction Attacks on Machine Unlearning: Simple Models are Vulnerable
·2340 words·11 mins
AI Theory Privacy 🏢 Amazon
This research demonstrates that deleting data from machine learning models exposes individuals to highly accurate reconstruction attacks, even when the models are simple.
RashomonGB: Analyzing the Rashomon Effect and Mitigating Predictive Multiplicity in Gradient Boosting
·2640 words·13 mins
AI Theory Fairness 🏢 JPMorgan Chase Global Technology Applied Research
RashomonGB tackles predictive multiplicity in gradient boosting by introducing a novel inference technique to efficiently identify and mitigate conflicting model predictions, improving model selection…
Randomized Truthful Auctions with Learning Agents
·324 words·2 mins
AI Generated AI Theory Optimization 🏢 Google Research
Randomized truthful auctions outperform deterministic ones when bidders employ learning algorithms, maximizing revenue in repeated interactions.
Randomized Strategic Facility Location with Predictions
·1312 words·7 mins
AI Theory Optimization 🏢 Columbia University
Randomized strategies improve truthful learning-augmented mechanisms for strategic facility location, achieving better approximations than deterministic methods.