AI Theory

Computational Aspects of Bayesian Persuasion under Approximate Best Response
·1555 words·8 mins
AI Generated AI Theory Robustness 🏢 UC Berkeley
This paper presents efficient algorithms for Bayesian persuasion under approximate best response, offering polynomial-time solutions for specific cases and a quasi-polynomial-time approximation scheme…
Compositional PAC-Bayes: Generalization of GNNs with persistence and beyond
·2208 words·11 mins
AI Theory Generalization 🏢 ETH Zurich
Novel compositional PAC-Bayes framework delivers data-dependent generalization bounds for persistence-enhanced Graph Neural Networks, improving model design and performance.
Compact Proofs of Model Performance via Mechanistic Interpretability
·4006 words·19 mins
AI Theory Interpretability 🏢 MIT
Researchers developed a novel method using mechanistic interpretability to create compact formal proofs for AI model performance, improving AI safety and reliability.
Community Detection Guarantees using Embeddings Learned by Node2Vec
·2609 words·13 mins
AI Generated AI Theory Representation Learning 🏢 Columbia University
Node2Vec, a popular network embedding method, is proven to consistently recover community structure in stochastic block models, paving the way for more reliable unsupervised community detection.
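The consistency result above concerns embeddings of stochastic block models. As a hedged illustration only (not the paper's proof or exact pipeline), the sketch below shows the standard embed-then-cluster recipe it analyzes: uniform random walks (node2vec with p = q = 1, i.e. DeepWalk-style), skip-gram embeddings, and k-means against the planted blocks. All parameter values are illustrative assumptions.

```python
import random
import networkx as nx
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

# Two-block stochastic block model: dense within blocks, sparse across.
sizes, probs = [100, 100], [[0.10, 0.01], [0.01, 0.10]]
G = nx.stochastic_block_model(sizes, probs, seed=0)
truth = [G.nodes[v]["block"] for v in G.nodes]

def random_walks(graph, num_walks=10, walk_length=40, seed=0):
    """Uniform random walks from every node (node2vec with p = q = 1)."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for start in graph.nodes:
            walk = [start]
            while len(walk) < walk_length:
                nbrs = list(graph.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append([str(v) for v in walk])  # Word2Vec expects string tokens
    return walks

# Skip-gram embeddings of the walks, then k-means on the embedding matrix.
model = Word2Vec(random_walks(G), vector_size=32, window=5, min_count=1, sg=1, seed=0)
X = np.array([model.wv[str(v)] for v in G.nodes])
pred = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("adjusted Rand index vs. planted blocks:", adjusted_rand_score(truth, pred))
```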
Communication Bounds for the Distributed Experts Problem
·2565 words·13 mins
AI Theory Optimization 🏢 Carnegie Mellon University
This paper presents communication-efficient protocols for the distributed experts problem, achieving near-optimal regret with theoretical and empirical validation.
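For context only: the paper studies the distributed, communication-bounded variant of the experts problem, whereas the sketch below is the classical centralized Hedge (multiplicative-weights) baseline whose regret such protocols aim to approach. It is a minimal sketch under assumed losses in [0, 1], not the paper's protocol.

```python
import numpy as np

def hedge(loss_matrix, eta=None):
    """Hedge baseline for the experts problem.

    loss_matrix: (T, K) array of per-round losses in [0, 1] for K experts.
    Returns the learner's cumulative loss and regret vs. the best fixed expert.
    """
    T, K = loss_matrix.shape
    if eta is None:
        eta = np.sqrt(2.0 * np.log(K) / T)        # standard learning-rate tuning
    weights = np.ones(K)
    learner_loss = 0.0
    for t in range(T):
        p = weights / weights.sum()               # play the weighted mixture
        learner_loss += p @ loss_matrix[t]        # expected loss this round
        weights *= np.exp(-eta * loss_matrix[t])  # multiplicative update
    best_expert = loss_matrix.sum(axis=0).min()
    return learner_loss, learner_loss - best_expert
```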
Collaboration! Towards Robust Neural Methods for Routing Problems
·2519 words·12 mins
AI Theory Optimization 🏢 Eindhoven University of Technology
A novel Collaborative Neural Framework (CNF) enhances the robustness of neural vehicle routing methods against adversarial attacks by collaboratively training multiple models and intelligently distrib…
Coherence-free Entrywise Estimation of Eigenvectors in Low-rank Signal-plus-noise Matrix Models
·1535 words·8 mins
AI Theory Optimization 🏢 University of Wisconsin-Madison
New method for eigenvector estimation achieves optimal rates without coherence dependence, improving low-rank matrix denoising and related tasks.
Coded Computing for Resilient Distributed Computing: A Learning-Theoretic Framework
·2345 words·12 mins
AI Generated AI Theory Optimization 🏢 University of Minnesota
LeTCC: A novel learning-theoretic framework for resilient distributed computing, achieving faster convergence and higher accuracy than existing methods by integrating learning theory principles with c…
Clustering in Causal Attention Masking
·1455 words·7 mins
AI Theory Causality 🏢 MIT
Researchers strengthen understanding of transformer self-attention by proving asymptotic convergence to single clusters under causal masking, linking it to the Rényi parking problem.
Class Distribution Shifts in Zero-Shot Learning: Learning Robust Representations
·2470 words·12 mins
AI Theory Representation Learning 🏢 Hebrew University of Jerusalem
Zero-shot learning models often fail in real-world scenarios due to unseen class distribution shifts. This work introduces a novel algorithm that learns robust representations by creating synthetic d…
ChronoEpilogi: Scalable Time Series Selection with Multiple Solutions
·2554 words·12 mins
AI Theory Causality 🏢 University of Cergy Paris
ChronoEpilogi efficiently finds all minimal sets of time-series variables optimally predicting a target, improving forecasting while providing crucial insights for knowledge discovery and causal model…
Challenges of Generating Structurally Diverse Graphs
·2126 words·10 mins
AI Theory Optimization 🏢 HSE University
Researchers developed novel algorithms to generate structurally diverse graphs, improving graph algorithm testing and neural network evaluation.
Certified Robustness for Deep Equilibrium Models via Serialized Random Smoothing
·3902 words·19 mins
AI Theory Robustness 🏢 North Carolina State University
Serialized Random Smoothing (SRS) accelerates DEQ certification by up to 7x, achieving certified robustness on large-scale datasets without sacrificing accuracy.
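For background only, the sketch below shows vanilla Gaussian randomized-smoothing certification (Cohen et al.-style), which SRS accelerates for DEQs; it is not the paper's serialized scheme. `predict` is a hypothetical placeholder for any classifier returning a class index, and all parameter values are assumptions.

```python
import numpy as np
from scipy.stats import norm
from statsmodels.stats.proportion import proportion_confint

def certify_l2(predict, x, sigma=0.25, n=1000, alpha=0.001, num_classes=10, seed=0):
    """Monte-Carlo certification of a Gaussian-smoothed classifier.

    predict: callable mapping an input array to a class index (placeholder).
    Returns (predicted_class, certified_l2_radius), or (None, 0.0) on abstention.
    """
    rng = np.random.default_rng(seed)
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n):
        counts[predict(x + sigma * rng.standard_normal(x.shape))] += 1
    top = int(counts.argmax())
    # One-sided lower confidence bound on the top-class probability.
    p_lower, _ = proportion_confint(counts[top], n, alpha=2 * alpha, method="beta")
    if p_lower <= 0.5:
        return None, 0.0                     # abstain: no certificate
    return top, sigma * norm.ppf(p_lower)    # certified L2 radius
```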
Certified Machine Unlearning via Noisy Stochastic Gradient Descent
·2364 words·12 mins
AI Generated AI Theory Privacy 🏢 Georgia Institute of Technology
This paper introduces a novel machine unlearning method using projected noisy stochastic gradient descent, providing the first approximate unlearning guarantee under convexity, significantly improving…
CausalDiff: Causality-Inspired Disentanglement via Diffusion Model for Adversarial Defense
·1963 words·10 mins
AI Theory Robustness 🏢 Institute of Computing Technology, CAS
CausalDiff leverages causal inference and diffusion models to create a robust AI defense against unseen adversarial attacks, significantly outperforming state-of-the-art methods.
Causal vs. Anticausal merging of predictors
·304 words·2 mins
AI Theory Causality 🏢 Amazon
Causal assumptions drastically alter predictor merging: CMAXENT yields logistic regression in the causal direction and LDA in the anticausal direction.
Causal Temporal Representation Learning with Nonstationary Sparse Transition
·2158 words·11 mins
AI Theory Representation Learning 🏢 Carnegie Mellon University
CtrlNS: A novel framework for causal temporal representation learning tackles the challenge of nonstationary time series by leveraging sparse transition assumptions, achieving improved accuracy in ide…
Causal Inference in the Closed-Loop: Marginal Structural Models for Sequential Excursion Effects
·2206 words·11 mins
AI Theory Causality 🏢 Carnegie Mellon University
Researchers introduce a non-parametric causal inference framework to analyze closed-loop optogenetics designs, revealing previously hidden causal effects of neural circuit manipulations on behavior.
Causal Effect Identification in a Sub-Population with Latent Variables
·1896 words·9 mins
AI Theory Causality 🏢 ETH Zurich
This paper introduces a novel algorithm to accurately compute causal effects within specific sub-populations, even when hidden factors influence the data, advancing causal inference significantly.
Causal Discovery from Event Sequences by Local Cause-Effect Attribution
·2331 words·11 mins
AI Theory Causality 🏢 CISPA Helmholtz Center for Information Security
CASCADE algorithm unveils hidden causal structures in event sequences by minimizing description length, surpassing existing Granger causality-based methods.