AI Theory

Understanding and Improving Adversarial Collaborative Filtering for Robust Recommendation
·2042 words·10 mins
AI Generated AI Theory Robustness 🏒 Chinese Academy of Sciences
PamaCF, a novel personalized adversarial collaborative filtering technique, significantly improves recommendation robustness and accuracy against poisoning attacks by dynamically adjusting perturbatio…
Uncovering, Explaining, and Mitigating the Superficial Safety of Backdoor Defense
·3401 words·16 mins
AI Theory Safety 🏒 Hong Kong University of Science and Technology
Current backdoor defenses, while effective at reducing attack success rates, are vulnerable to rapid re-learning. This work unveils this superficial safety, proposes a novel attack, and introduces a p…
Ultrafast classical phylogenetic method beats large protein language models on variant effect prediction
·2536 words·12 mins
AI Generated AI Theory Optimization 🏒 UC Berkeley
A revolutionary ultrafast phylogenetic method outperforms protein language models in variant effect prediction by efficiently estimating amino acid substitution rates from massive datasets.
UDC: A Unified Neural Divide-and-Conquer Framework for Large-Scale Combinatorial Optimization Problems
·5789 words·28 mins
AI Generated AI Theory Optimization 🏒 School of System Design and Intelligent Manufacturing, Southern University of Science and Technology
A unified neural divide-and-conquer framework (UDC) achieves superior performance on large-scale combinatorial optimization problems by employing a novel Divide-Conquer-Reunion training method and a h…
Truthfulness of Calibration Measures
·337 words·2 mins
AI Theory Optimization 🏒 UC Berkeley
Researchers developed Subsampled Smooth Calibration Error (SSCE), a new truthful calibration measure for sequential prediction, solving the problem of existing measures being easily gamed.
Truthful High Dimensional Sparse Linear Regression
·282 words·2 mins
AI Theory Privacy 🏒 King Abdullah University of Science and Technology
This paper presents a novel, truthful, and privacy-preserving mechanism for high-dimensional sparse linear regression, incentivizing data contribution while safeguarding individual privacy.
Treatment of Statistical Estimation Problems in Randomized Smoothing for Adversarial Robustness
·1858 words·9 mins
AI Theory Robustness 🏒 Tübingen AI Center, University of Tübingen
This paper optimizes randomized smoothing, a crucial certified defense against adversarial attacks, by introducing novel statistical methods that drastically reduce the computational cost, leading to …
Trap-MID: Trapdoor-based Defense against Model Inversion Attacks
·3599 words·17 mins
AI Generated AI Theory Privacy 🏒 National Taiwan University
Trap-MID: Outsmarting model inversion attacks with cleverly placed 'trapdoors'!
Transformation-Invariant Learning and Theoretical Guarantees for OOD Generalization
·541 words·3 mins
AI Theory Generalization 🏒 Yale University
This paper introduces a novel theoretical framework for robust machine learning under distribution shifts, offering learning rules and guarantees, highlighting the game-theoretic viewpoint of distribu…
Transferability Bound Theory: Exploring Relationship between Adversarial Transferability and Flatness
·1926 words·10 mins
AI Generated AI Theory Robustness 🏒 East China Normal University
Challenging common assumptions, researchers prove that flatter adversarial examples don't guarantee better transferability and introduce TPA, a theoretically grounded attack creating more transferable…
Transductive Learning is Compact
·325 words·2 mins
AI Theory Optimization 🏒 USC
Supervised learning’s sample complexity is compact: a hypothesis class is learnable if and only if all its finite projections are learnable, simplifying complexity analysis.
Transcendence: Generative Models Can Outperform The Experts That Train Them
·2384 words·12 mins
AI Theory Generalization 🏒 OpenAI
Generative models can outperform their human trainers: a groundbreaking study shows how autoregressive transformers, trained on chess game data, can achieve higher game ratings than any of the human…
Training for Stable Explanation for Free
·2565 words·13 mins
AI Theory Interpretability 🏒 Hong Kong University of Science and Technology
R2ET: training robust ranking explanations with an effective regularizer.
Trading Place for Space: Increasing Location Resolution Reduces Contextual Capacity in Hippocampal Codes
·1950 words·10 mins
AI Theory Representation Learning 🏒 University of Pennsylvania
Boosting hippocampal spatial resolution surprisingly shrinks its contextual memory capacity, revealing a crucial trade-off between precision and context storage.
Trading off Consistency and Dimensionality of Convex Surrogates for Multiclass Classification
·1544 words·8 mins
AI Theory Optimization 🏒 Harvard University
Researchers achieve a balance between accuracy and efficiency in multiclass classification by introducing partially consistent surrogate losses and novel methods.
Trade-Offs of Diagonal Fisher Information Matrix Estimators
·2685 words·13 mins
AI Theory Optimization 🏒 Australian National University
This paper examines the trade-offs between two popular diagonal Fisher Information Matrix (FIM) estimators in neural networks, deriving variance bounds and highlighting the importance of considering e…
Trace is the Next AutoDiff: Generative Optimization with Rich Feedback, Execution Traces, and LLMs
·2686 words·13 mins
AI Theory Optimization 🏒 Microsoft Research
Trace: automating AI workflow design with LLMs.
Towards the Dynamics of a DNN Learning Symbolic Interactions
·1849 words·9 mins
AI Theory Interpretability 🏒 Shanghai Jiao Tong University
DNNs learn interactions in two phases: they first discard complex interactions, then gradually learn higher-order ones, which leads to overfitting.
Towards Harmless Rawlsian Fairness Regardless of Demographic Prior
·3255 words·16 mins
AI Generated AI Theory Fairness 🏒 School of Computer Science and Engineering, Beihang University
VFair achieves harmless Rawlsian fairness in regression tasks without relying on sensitive demographic data by minimizing the variance of training losses.
Towards Estimating Bounds on the Effect of Policies under Unobserved Confounding
·1610 words·8 mins
AI Theory Causality 🏒 Google DeepMind
This paper presents a novel framework for estimating bounds on policy effects under unobserved confounding, offering tighter bounds and robust estimators for higher-dimensional data.