
Machine Learning

Hyperbolic Embeddings of Supervised Models
·2703 words·13 mins
Machine Learning Representation Learning 🏢 Google Research
This paper presents a novel approach for embedding supervised models in hyperbolic space, linking loss functions to hyperbolic distances and introducing monotonic decision trees for unambiguous visual…
Hyper-opinion Evidential Deep Learning for Out-of-Distribution Detection
·2165 words·11 mins
Machine Learning Deep Learning 🏢 Tongji University
Hyper-opinion Evidential Deep Learning (HEDL) enhances out-of-distribution detection by integrating sharp and vague evidence for superior uncertainty estimation and classification accuracy.
HYDRA-FL: Hybrid Knowledge Distillation for Robust and Accurate Federated Learning
·2450 words·12 mins
AI Generated Machine Learning Federated Learning 🏢 University of Massachusetts, Amherst
HYDRA-FL: A novel hybrid knowledge distillation method makes federated learning robust against poisoning attacks while maintaining accuracy!
Hybrid Reinforcement Learning Breaks Sample Size Barriers In Linear MDPs
·1464 words·7 mins
Machine Learning Reinforcement Learning 🏢 University of Pennsylvania
Hybrid RL algorithms achieve sharper error/regret bounds than existing offline/online RL methods in linear MDPs, improving sample efficiency without stringent assumptions on behavior policy quality.
How Transformers Utilize Multi-Head Attention in In-Context Learning? A Case Study on Sparse Linear Regression
·2236 words·11 mins
AI Generated Machine Learning Few-Shot Learning 🏢 University of Hong Kong
Multi-head transformers utilize distinct attention patterns across layers: multiple heads are essential for initial data preprocessing, while a single head suffices for subsequent optimization steps, o…
How to Solve Contextual Goal-Oriented Problems with Offline Datasets?
·2005 words·10 mins
Machine Learning Reinforcement Learning 🏢 Microsoft Research
CODA: A novel method for solving contextual goal-oriented problems with offline datasets, using unlabeled trajectories and context-goal pairs to create a fully labeled dataset, outperforming other bas…
How Sparse Can We Prune A Deep Network: A Fundamental Limit Perspective
·2596 words·13 mins
Machine Learning Deep Learning 🏢 Huazhong University of Science and Technology
Deep network pruning’s fundamental limits are characterized, revealing how weight magnitude and network sharpness determine the maximum achievable sparsity.
How many classifiers do we need?
·1821 words·9 mins
AI Generated Machine Learning Deep Learning 🏢 UC Berkeley
Boost ensemble accuracy by predicting performance with fewer classifiers using a novel polarization law and refined error bounds.
How Does Variance Shape the Regret in Contextual Bandits?
·334 words·2 mins
Machine Learning Reinforcement Learning 🏢 MIT
Low reward variance drastically improves contextual bandit regret, defying minimax assumptions and highlighting the crucial role of eluder dimension.
How Does Message Passing Improve Collaborative Filtering?
·1963 words·10 mins
Machine Learning Representation Learning 🏢 University of California, Riverside
TAG-CF boosts collaborative filtering accuracy by up to 39.2% on cold users, using only a single message-passing step at test time, avoiding costly training-time computations.
How does Inverse RL Scale to Large State Spaces? A Provably Efficient Approach
·1501 words·8 mins
Machine Learning Reinforcement Learning 🏢 Politecnico Di Milano
CATY-IRL: A novel, provably efficient algorithm solves Inverse Reinforcement Learning’s scalability issues for large state spaces, improving upon state-of-the-art methods.
HORSE: Hierarchical Representation for Large-Scale Neural Subset Selection
·1821 words·9 mins
Machine Learning Deep Learning 🏢 Chinese University of Hong Kong
HORSE: A novel attention-based neural network significantly improves large-scale neural subset selection by up to 20%, addressing limitations in existing methods.
Higher-Rank Irreducible Cartesian Tensors for Equivariant Message Passing
·3519 words·17 mins
AI Generated Machine Learning Deep Learning 🏢 NEC Laboratories Europe
Higher-rank irreducible Cartesian tensors boost accuracy and efficiency in equivariant message-passing neural networks for atomistic simulations.
High-dimensional (Group) Adversarial Training in Linear Regression
·1556 words·8 mins
AI Generated Machine Learning Optimization 🏢 Georgia Institute of Technology
Adversarial training achieves minimax-optimal prediction error in high-dimensional linear regression under l∞-perturbation, improving upon existing methods.
Hierarchical Programmatic Option Framework
·5774 words·28 mins
AI Generated Machine Learning Reinforcement Learning 🏢 National Taiwan University
Hierarchical Programmatic Option framework (HIPO) uses human-readable programs as options in reinforcement learning to solve long, repetitive tasks with improved interpretability and generalization.
Hierarchical Hybrid Sliced Wasserstein: A Scalable Metric for Heterogeneous Joint Distributions
·2222 words·11 mins
Machine Learning Deep Learning 🏢 University of Texas at Austin
Hierarchical Hybrid Sliced Wasserstein (H2SW) solves the challenge of comparing complex, heterogeneous joint distributions by introducing novel slicing operators, leading to a scalable and statistical…
Hierarchical Federated Learning with Multi-Timescale Gradient Correction
·2189 words·11 mins
Machine Learning Federated Learning 🏢 Purdue University
MTGC introduces multi-timescale gradient correction to counter model drift across the multiple aggregation levels of hierarchical federated learning.
HHD-GP: Incorporating Helmholtz-Hodge Decomposition into Gaussian Processes for Learning Dynamical Systems
·1903 words·9 mins
Machine Learning Deep Learning 🏢 University of Hong Kong
HHD-GP leverages Helmholtz-Hodge decomposition within Gaussian Processes to learn physically meaningful components of dynamical systems, enhancing prediction accuracy and interpretability.
HGDL: Heterogeneous Graph Label Distribution Learning
·2697 words·13 mins
Machine Learning Semi-Supervised Learning 🏢 Florida Atlantic University
HGDL: Heterogeneous Graph Label Distribution Learning, a new framework that leverages graph topology and content to enhance label distribution prediction.
Heterogeneity-Guided Client Sampling: Towards Fast and Efficient Non-IID Federated Learning
·2418 words·12 mins
Machine Learning Federated Learning 🏢 University of Texas at Austin
HiCS-FL: A novel federated learning client sampling method that leverages data heterogeneity for faster, more efficient global model training in non-IID settings.