AI Theory

Interventional Causal Discovery in a Mixture of DAGs
·1892 words·9 mins
AI Generated AI Theory Causality 🏢 Carnegie Mellon University
This study presents CADIM, an adaptive algorithm using interventions to learn true causal relationships from mixtures of DAGs, achieving near-optimal intervention sizes and providing quantifiable opti…
Intervention and Conditioning in Causal Bayesian Networks
·296 words·2 mins
AI Theory Causality 🏢 Cornell University
Researchers uniquely estimate probabilities in Causal Bayesian Networks using simple independence assumptions, enabling analysis from observational data and simplifying counterfactual probability calculations.
Interpretable Concept-Based Memory Reasoning
·2660 words·13 mins
AI Theory Interpretability 🏢 KU Leuven
CMR: A novel Concept-Based Memory Reasoner delivers human-understandable, verifiable AI task predictions by using a neural selection mechanism over a set of human-understandable logic rules, achievin…
Interpolating Item and User Fairness in Multi-Sided Recommendations
·1620 words·8 mins
AI Theory Fairness 🏢 MIT
The FAIR framework and the FORM algorithm achieve flexible multi-stakeholder fairness in online recommendation systems, balancing platform revenue with user and item fairness.
Instance-Specific Asymmetric Sensitivity in Differential Privacy
·1985 words·10 mins
AI Theory Privacy 🏢 Mozilla
New algorithm improves differentially private estimations by adapting to dataset hardness, enhancing accuracy for variance, classification, and regression tasks.
Instance-Optimal Private Density Estimation in the Wasserstein Distance
·338 words·2 mins
AI Theory Privacy 🏢 Apple
This paper introduces instance-optimal private density estimation algorithms that adapt to data characteristics for improved accuracy under the Wasserstein distance.
Injecting Undetectable Backdoors in Obfuscated Neural Networks and Language Models
·372 words·2 mins
AI Theory Robustness 🏢 Yale University
Researchers developed a novel method to inject undetectable backdoors into obfuscated neural networks and language models, even with white-box access, posing significant security risks.
Information-theoretic Limits of Online Classification with Noisy Labels
·481 words·3 mins
AI Theory Optimization 🏢 CSOI, Purdue University
This paper unveils the information-theoretic limits of online classification with noisy labels, showing that the minimax risk is tightly characterized by the Hellinger gap of the noisy label distributions.
Information-theoretic Generalization Analysis for Expected Calibration Error
·1937 words·10 mins
AI Theory Generalization 🏢 Osaka University
New theoretical analysis reveals optimal binning strategies for minimizing bias in expected calibration error (ECE), improving machine learning model calibration evaluation.
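For context, the quantity whose binning bias the paper analyzes is the standard equal-width-binned ECE estimator; a minimal sketch of that common definition (not the paper's analysis) looks like this:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Equal-width-binned ECE: the weighted average gap between
    mean confidence and empirical accuracy within each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        gap = abs(confidences[mask].mean() - correct[mask].mean())
        ece += mask.mean() * gap  # weight by the bin's sample fraction
    return ece
```

The choice of `n_bins` (and of equal-width vs. equal-mass bins) is exactly the kind of design decision whose bias the paper's theory addresses.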
Inference via Interpolation: Contrastive Representations Provably Enable Planning and Inference
·1693 words·8 mins
AI Theory Representation Learning 🏢 Princeton University
Contrastive learning enables efficient probabilistic inference in high-dimensional time series by creating Gaussian representations that form a Gauss-Markov chain, allowing for closed-form solutions t…
Inexact Augmented Lagrangian Methods for Conic Optimization: Quadratic Growth and Linear Convergence
·1589 words·8 mins
AI Theory Optimization 🏢 UC San Diego
This paper proves that inexact ALMs applied to SDPs achieve linear convergence for both primal and dual iterates, contingent solely on strict complementarity and a bounded solution set, thus resol…
Incorporating Surrogate Gradient Norm to Improve Offline Optimization Techniques
·2087 words·10 mins
AI Theory Optimization 🏢 Washington State University
IGNITE improves offline optimization by incorporating the surrogate gradient norm to reduce model sharpness, boosting performance by up to 9.6%.
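The blurb does not give IGNITE's exact formulation, but the general idea of penalizing a surrogate's gradient norm to discourage sharpness can be sketched generically; everything below (the toy linear surrogate, the penalty weight `lam`) is a hypothetical stand-in, not the paper's method:

```python
import numpy as np

def surrogate(w, x):
    # toy linear surrogate model (hypothetical stand-in)
    return float(x @ w)

def grad_norm_penalized_loss(w, x, y, lam=0.1, eps=1e-4):
    """Task loss plus the norm of the surrogate's input gradient,
    a generic sharpness-reduction regularizer."""
    pred = surrogate(w, x)
    loss = (pred - y) ** 2
    # central finite-difference estimate of the input gradient
    g = np.array([
        (surrogate(w, x + eps * e) - surrogate(w, x - eps * e)) / (2 * eps)
        for e in np.eye(len(x))
    ])
    return loss + lam * np.linalg.norm(g)
```

A flatter surrogate (smaller gradient norm) is penalized less, nudging offline optimization away from sharp, overfit regions of the design space.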
Improving Subgroup Robustness via Data Selection
·1691 words·8 mins
AI Theory Robustness 🏢 MIT
Data Debiasing with Datamodels (D3M) efficiently improves machine learning model robustness by identifying and removing specific training examples that disproportionately harm minority groups' accuracy.
Improving Decision Sparsity
·4802 words·23 mins
AI Generated AI Theory Interpretability 🏢 Duke University
Boosting machine learning model interpretability, this paper introduces cluster-based and tree-based Sparse Explanation Values (SEV) for generating more meaningful and credible explanations by optimiz…
Improving Alignment and Robustness with Circuit Breakers
·2515 words·12 mins
AI Theory Safety 🏢 Gray Swan AI
AI systems are made safer by 'circuit breakers' that directly control harmful internal representations, significantly improving alignment and robustness against adversarial attacks with minimal impact.
Improving Adversarial Robust Fairness via Anti-Bias Soft Label Distillation
·2396 words·12 mins
AI Generated AI Theory Robustness 🏢 Institute of Artificial Intelligence, Beihang University
Boosting adversarial robust fairness in deep neural networks, Anti-Bias Soft Label Distillation (ABSLD) adaptively adjusts soft-label smoothness to reduce the error gap between classes.
Improving Adaptivity via Over-Parameterization in Sequence Models
·2081 words·10 mins
AI Generated AI Theory Generalization 🏢 Tsinghua University
Over-parameterized gradient descent dynamically adapts to signal structure, improving sequence model generalization and outperforming fixed-kernel methods.
Improved Regret for Bandit Convex Optimization with Delayed Feedback
·324 words·2 mins
AI Theory Optimization 🏢 Zhejiang University
A novel algorithm, D-FTBL, achieves improved regret bounds for bandit convex optimization with delayed feedback, tightly matching existing lower bounds in worst-case scenarios.
Improved Guarantees for Fully Dynamic $k$-Center Clustering with Outliers in General Metric Spaces
·1694 words·8 mins
AI Theory Optimization 🏢 Eindhoven University of Technology
A novel fully dynamic algorithm achieves a (4+ε)-approximate solution for the k-center clustering problem with outliers in general metric spaces, boasting an efficient update time.
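The paper's fully dynamic, outlier-aware algorithm is not reproduced in this teaser; as background only, the classic static k-center baseline it improves on is Gonzalez's greedy farthest-point heuristic, a 2-approximation without outliers:

```python
import math

def gonzalez_k_center(points, k):
    """Greedy farthest-point heuristic: a classic 2-approximation
    for static k-center (no outliers, no dynamic updates)."""
    centers = [points[0]]
    # distance from each point to its nearest chosen center so far
    dist = [math.dist(p, centers[0]) for p in points]
    while len(centers) < k:
        i = max(range(len(points)), key=dist.__getitem__)
        centers.append(points[i])
        dist = [min(d, math.dist(p, points[i])) for d, p in zip(dist, points)]
    return centers
```

Handling insertions, deletions, and a budget of outliers while keeping a (4+ε) guarantee and fast updates is what the dynamic algorithm adds on top of this static picture.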
Improved Analysis for Bandit Learning in Matching Markets
·707 words·4 mins
AI Generated AI Theory Optimization 🏢 Shanghai Jiao Tong University
A new algorithm, AOGS, achieves significantly lower regret in two-sided matching markets by cleverly integrating exploration and exploitation, thus removing the dependence on the number of arms (K) in…