
Machine Learning

Dual Critic Reinforcement Learning under Partial Observability
·2549 words·12 mins
AI Generated Machine Learning Reinforcement Learning 🏢 Tsinghua University
DCRL, a Dual Critic Reinforcement Learning framework, effectively mitigates high variance in reinforcement learning under partial observability by synergistically combining an oracle critic (with full observability) and a standard critic (with partial observations).
Dual Cone Gradient Descent for Training Physics-Informed Neural Networks
·3668 words·18 mins
AI Generated Machine Learning Deep Learning 🏢 Artificial Intelligence Graduate School UNIST
Dual Cone Gradient Descent (DCGD) enhances Physics-Informed Neural Network (PINN) training by resolving gradient imbalance issues, leading to more accurate and stable solutions for complex partial differential equations.
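The gradient imbalance it targets is concrete: the PDE-residual loss and the boundary-condition loss can pull a PINN's weights in conflicting directions, so the naively summed gradient may increase one of them. As a rough illustration of conflict removal, here is a PCGrad-style projection (Yu et al., 2020) on two hypothetical flattened gradient vectors; this is a stand-in, not the paper's dual cone update.

```python
import numpy as np

def combine_without_conflict(g_pde, g_bc):
    """PCGrad-style merge of two loss gradients: when they conflict
    (negative inner product), strip from each the component opposing
    the other, then sum. Illustrative stand-in only; DCGD's dual cone
    projection is defined differently in the paper."""
    a, b = g_pde.astype(float), g_bc.astype(float)
    if a @ g_bc < 0:                                  # g_pde opposes g_bc
        a = a - (a @ g_bc) / (g_bc @ g_bc) * g_bc
    if b @ g_pde < 0:                                 # g_bc opposes g_pde
        b = b - (b @ g_pde) / (g_pde @ g_pde) * g_pde
    return a + b                                      # non-conflicting update
```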
DU-Shapley: A Shapley Value Proxy for Efficient Dataset Valuation
·1646 words·8 mins
Machine Learning Federated Learning 🏢 Inria
DU-Shapley efficiently estimates the Shapley value for dataset valuation, enabling fair compensation in collaborative machine learning by leveraging the problem’s structure for faster computation.
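For context on what is being approximated: a dataset's Shapley value is its average marginal contribution to a utility (e.g. validation accuracy of a model trained on a coalition of datasets) over all join orders, which is exponential to compute exactly. Below is a minimal Monte Carlo permutation estimator, assuming a hypothetical caller-supplied `utility(coalition)`; DU-Shapley's point is to exploit the problem's structure so far fewer utility evaluations are needed.

```python
import random

def shapley_monte_carlo(datasets, utility, n_perms=200, seed=0):
    """Permutation-sampling estimate of each dataset's Shapley value.
    `utility(coalition)` (hypothetical) returns the value of a model
    trained on the union of the datasets in `coalition`."""
    rng = random.Random(seed)
    phi = {d: 0.0 for d in datasets}
    for _ in range(n_perms):
        perm = list(datasets)
        rng.shuffle(perm)
        coalition, prev = set(), utility(frozenset())
        for d in perm:
            coalition.add(d)
            cur = utility(frozenset(coalition))
            phi[d] += (cur - prev) / n_perms   # running mean of marginals
            prev = cur
    return phi
```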
Drift-Resilient TabPFN: In-Context Learning Temporal Distribution Shifts on Tabular Data
·5519 words·26 mins
AI Generated Machine Learning Generalization 🏢 Technical University of Munich
Drift-Resilient TabPFN uses in-context learning to handle temporal distribution shifts on tabular data.
Doubly Mild Generalization for Offline Reinforcement Learning
·2279 words·11 mins
AI Generated Machine Learning Reinforcement Learning 🏢 Tsinghua University
Doubly Mild Generalization (DMG) improves offline reinforcement learning by selectively leveraging generalization beyond training data, achieving state-of-the-art results.
Don't Compress Gradients in Random Reshuffling: Compress Gradient Differences
·2058 words·10 mins
Machine Learning Federated Learning 🏢 King Abdullah University of Science and Technology
Boost federated learning efficiency! This paper introduces novel algorithms that cleverly combine gradient compression with random reshuffling, significantly reducing communication complexity and impr…
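The title's advice can be made concrete with a DIANA-style scheme (Mishchenko et al., 2019): each worker transmits a compressed difference between its fresh gradient and a locally maintained reference that the server mirrors, so the quantity being compressed shrinks as training converges. A generic sketch with a top-k compressor; not the paper's exact algorithms or compressor choice.

```python
import numpy as np

def topk(v, k):
    """Keep only the k largest-magnitude entries (sparse message)."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

class GradDiffCompressor:
    """DIANA-style: compress grad - h, then advance the reference h.
    Generic illustration, not the paper's specific methods."""
    def __init__(self, dim, k):
        self.h = np.zeros(dim)   # reference, kept in sync on the server
        self.k = k

    def transmit(self, grad):
        delta = topk(grad - self.h, self.k)  # the only thing sent
        self.h = self.h + delta              # reference update (step size 1)
        return self.h                        # server-side gradient estimate
```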
DOFEN: Deep Oblivious Forest ENsemble
·6861 words·33 mins
Machine Learning Deep Learning 🏢 Sinopac Holdings
DOFEN: Deep Oblivious Forest Ensemble achieves state-of-the-art performance on tabular data by using a novel DNN architecture inspired by oblivious decision trees, surpassing other DNNs.
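For reference, the inductive bias being borrowed: an oblivious decision tree shares a single (feature, threshold) test across all nodes at the same depth, so a sample's leaf index is just a d-bit code and evaluation vectorizes trivially. A minimal sketch of that classical building block (not DOFEN's DNN relaxation of it):

```python
import numpy as np

class ObliviousTree:
    """Depth-d oblivious tree: one shared (feature, threshold) test per
    level, so each sample maps to one of 2**d leaves via a d-bit code."""
    def __init__(self, features, thresholds, leaf_values):
        self.f = np.asarray(features)           # shape (d,)
        self.t = np.asarray(thresholds)         # shape (d,)
        self.leaves = np.asarray(leaf_values)   # shape (2**d,)

    def predict(self, X):
        bits = (X[:, self.f] > self.t).astype(int)   # (n, d) comparisons
        idx = bits @ (1 << np.arange(len(self.f)))   # bit code -> leaf id
        return self.leaves[idx]

# e.g. a depth-2 tree over features 0 and 3 with four leaf values:
# ObliviousTree([0, 3], [0.5, -1.0], [0.1, 0.4, -0.2, 0.9]).predict(X)
```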
Does Worst-Performing Agent Lead the Pack? Analyzing Agent Dynamics in Unified Distributed SGD
·1640 words·8 mins
AI Generated Machine Learning Federated Learning 🏢 North Carolina State University
A few high-performing agents using efficient sampling strategies can significantly boost the overall convergence speed of distributed machine learning algorithms, surpassing the performance of many mo…
Does Egalitarian Fairness Lead to Instability? The Fairness Bounds in Stable Federated Learning Under Altruistic Behaviors
·1528 words·8 mins
Machine Learning Federated Learning 🏢 Southern University of Science and Technology
Achieving egalitarian fairness in federated learning without sacrificing stability is possible; this paper derives optimal fairness bounds considering clients’ altruism and network topology.
Do's and Don'ts: Learning Desirable Skills with Instruction Videos
·2781 words·14 mins
AI Generated Machine Learning Reinforcement Learning 🏢 KAIST
DoDont, a novel algorithm, uses instruction videos to guide unsupervised skill discovery, effectively learning desirable behaviors while avoiding undesirable ones in complex continuous control tasks.
Divide-and-Conquer Posterior Sampling for Denoising Diffusion Priors
·3064 words·15 mins
Machine Learning Deep Learning 🏢 CMAP, Ecole Polytechnique
Divide-and-Conquer Posterior Sampling (DCPS) efficiently samples complex posterior distributions from denoising diffusion models (DDMs) for Bayesian inverse problems, significantly improving accuracy …
Distributionally Robust Reinforcement Learning with Interactive Data Collection: Fundamental Hardness and Near-Optimal Algorithms
·518 words·3 mins
Machine Learning Reinforcement Learning 🏢 Stanford University
Provably sample-efficient robust RL via interactive data collection is achieved by introducing the vanishing minimal value assumption to mitigate the curse of support shift, enabling near-optimal algorithms.
Distributional Successor Features Enable Zero-Shot Policy Optimization
·2834 words·14 mins
AI Generated Machine Learning Reinforcement Learning 🏢 University of Washington
DiSPOs: a novel model for zero-shot policy optimization in reinforcement learning, enabling quick adaptation to new tasks by learning a distribution of successor features and avoiding compounding errors.
Distributional Reinforcement Learning with Regularized Wasserstein Loss
·2196 words·11 mins
Machine Learning Reinforcement Learning 🏢 University of Alberta
Sinkhorn distributional RL (SinkhornDRL) improves distributional reinforcement learning by adopting an entropy-regularized Wasserstein (Sinkhorn) loss between return distributions.
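The loss in question is the entropy-regularized optimal-transport cost, computable with Sinkhorn's matrix-scaling iterations. A minimal NumPy sketch for two equal-weight samples of returns follows; SinkhornDRL's exact formulation (cost function, debiasing) may differ.

```python
import numpy as np

def sinkhorn_loss(x, y, eps=0.1, n_iters=200):
    """Entropy-regularized Wasserstein cost between two 1-D empirical
    return distributions x and y (uniform weights). Illustrative only."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    C = (x[:, None] - y[None, :]) ** 2       # squared-distance cost matrix
    K = np.exp(-C / eps)                     # Gibbs kernel
    a = np.full(len(x), 1.0 / len(x))
    b = np.full(len(y), 1.0 / len(y))
    u = np.ones_like(a)
    for _ in range(n_iters):                 # Sinkhorn fixed-point updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]          # entropic transport plan
    return float((P * C).sum())              # transport cost under P
```

For small `eps` the kernel underflows; log-domain updates are the usual remedy in practice.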
Distributed Least Squares in Small Space via Sketching and Bias Reduction
·1322 words·7 mins
Machine Learning Optimization 🏢 University of Michigan
Researchers developed a novel sparse sketching method for distributed least squares regression, achieving near-unbiased estimates with optimal space and time complexity.
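A generic baseline helps locate the contribution: classical sketch-and-solve compresses (A, b) with a sparse random sketch and solves the small problem, but the resulting estimate is biased, so naively averaging such solutions across machines stalls. The sketch below is that baseline with a CountSketch-style matrix (one ±1 per row of A); the paper's estimator adds the bias reduction that makes distributed averaging effective.

```python
import numpy as np

def countsketch_lstsq(A, b, m, seed=0):
    """Sketch-and-solve least squares with a CountSketch-style sketch:
    hash each row of A into one of m buckets with a random sign, then
    solve the m-row problem. Baseline only; not the paper's estimator."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    rows = rng.integers(0, m, size=n)        # bucket for each input row
    signs = rng.choice([-1.0, 1.0], size=n)  # random sign flips
    SA, Sb = np.zeros((m, d)), np.zeros(m)
    np.add.at(SA, rows, signs[:, None] * A)  # SA = S @ A without forming S
    np.add.at(Sb, rows, signs * b)
    x, *_ = np.linalg.lstsq(SA, Sb, rcond=None)
    return x
```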
Dissect Black Box: Interpreting for Rule-Based Explanations in Unsupervised Anomaly Detection
·1770 words·9 mins
Machine Learning Unsupervised Learning 🏢 Tsinghua University
SCD-Tree & GBD extract interpretable, rule-based explanations from black-box unsupervised anomaly detectors.
Disentangling Interpretable Factors with Supervised Independent Subspace Principal Component Analysis
·3550 words·17 mins
AI Generated Machine Learning Representation Learning 🏢 Columbia University
Supervised Independent Subspace PCA (sisPCA) disentangles interpretable factors in high-dimensional data by leveraging supervision to maximize subspace dependence on target variables while minimizing dependence across subspaces.
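A standard way to quantify "subspace dependence on target variables" is the Hilbert-Schmidt Independence Criterion (HSIC); treating it as sisPCA's exact objective is an assumption here, but the biased estimator below shows the kind of quantity such a method maximizes between a learned projection and the targets (and minimizes between subspaces).

```python
import numpy as np

def hsic(X, Y, sigma=1.0):
    """Biased HSIC estimate between paired samples X (n, p) and Y (n, q)
    using Gaussian kernels. Assumed objective, for illustration only."""
    def gram(Z):
        sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2.0 * sigma ** 2))
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    K, L = gram(X), gram(Y)
    return float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2
```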
Disentangling and mitigating the impact of task similarity for continual learning
·2158 words·11 mins
Machine Learning Transfer Learning 🏢 Washington University in St. Louis
This study reveals that high input similarity paired with low output similarity is detrimental to continual learning, whereas the opposite scenario is relatively benign, offering insights into mitigating its impact.
Disentangled Unsupervised Skill Discovery for Efficient Hierarchical Reinforcement Learning
·1850 words·9 mins
Machine Learning Reinforcement Learning 🏢 University of Texas at Austin
DUSDi: A novel method for learning disentangled skills in unsupervised reinforcement learning, enabling efficient reuse for diverse downstream tasks.
Discrete-state Continuous-time Diffusion for Graph Generation
·2084 words·10 mins
Machine Learning Deep Learning 🏢 University of Illinois Urbana-Champaign
DISCO: a novel discrete-state continuous-time diffusion model for flexible and efficient graph generation, outperforming state-of-the-art methods.