🏢 Duke University
What does guidance do? A fine-grained analysis in a simple setting
·3498 words·17 mins·
AI Theory
Optimization
🏢 Duke University
Diffusion guidance, a common generative modeling technique, is shown not to sample from its intended distribution; instead, it heavily biases samples towards the boundary of the conditional distributi…
Toward Efficient Inference for Mixture of Experts
·2411 words·12 mins·
Natural Language Processing
Machine Translation
🏢 Duke University
Unlocking the speed and efficiency of Mixture-of-Expert models, this research unveils novel optimization techniques, achieving dramatic improvements in inference throughput and resource usage.
Randomized Exploration in Cooperative Multi-Agent Reinforcement Learning
·3372 words·16 mins·
AI Generated
Machine Learning
Reinforcement Learning
🏢 Duke University
Provably efficient randomized exploration in cooperative MARL is achieved via a novel unified algorithmic framework, CoopTS, using Thompson Sampling with PHE and LMC exploration strategies.
On Neural Networks as Infinite Tree-Structured Probabilistic Graphical Models
·2116 words·10 mins·
AI Theory
Interpretability
🏢 Duke University
DNNs are powerful but lack the clear semantics of PGMs. This paper innovatively constructs infinite tree-structured PGMs that exactly correspond to DNNs, revealing that DNN forward propagation approxi…
Navigating the Effect of Parametrization for Dimensionality Reduction
·3077 words·15 mins·
Machine Learning
Dimensionality Reduction
🏢 Duke University
ParamRepulsor, a novel parametric dimensionality reduction method, achieves state-of-the-art local structure preservation by mining hard negatives and using a tailored loss function.
Minimax Optimal and Computationally Efficient Algorithms for Distributionally Robust Offline Reinforcement Learning
·1835 words·9 mins·
AI Generated
Machine Learning
Reinforcement Learning
🏢 Duke University
Minimax-optimal, computationally efficient algorithms are proposed for distributionally robust offline reinforcement learning, addressing challenges posed by function approximation and model uncertain…
Interpretable Generalized Additive Models for Datasets with Missing Values
·2769 words·13 mins·
Machine Learning
Interpretability
🏢 Duke University
M-GAM: interpretable generalized additive models that handle missing values directly, achieving superior accuracy and sparsity.
Inflationary Flows: Calibrated Bayesian Inference with Diffusion-Based Models
·3134 words·15 mins·
Machine Learning
Deep Learning
🏢 Duke University
Calibrated Bayesian inference is achieved via novel diffusion-based models that uniquely map high-dimensional data to lower-dimensional Gaussian distributions.
Improving Decision Sparsity
·4802 words·23 mins·
AI Generated
AI Theory
Interpretability
🏢 Duke University
Boosting machine learning model interpretability, this paper introduces cluster-based and tree-based Sparse Explanation Values (SEV) for generating more meaningful and credible explanations by optimiz…
GUIDE: Real-Time Human-Shaped Agents
·2015 words·10 mins·
Machine Learning
Reinforcement Learning
🏢 Duke University
GUIDE: real-time human-shaped AI agents achieve up to 30% higher success rates using continuous human feedback, boosted by a parallel training model that mimics human input to enable continued improvement.
A Combinatorial Algorithm for the Semi-Discrete Optimal Transport Problem
·1938 words·10 mins·
AI Theory
Optimization
🏢 Duke University
A new combinatorial algorithm dramatically speeds up semi-discrete optimal transport calculations, offering an efficient solution for large datasets and higher dimensions.