Machine Learning

LaSCal: Label-Shift Calibration without target labels
·3140 words·15 mins
Machine Learning Unsupervised Learning 🏢 ESAT-PSI, KU Leuven
LaSCal, a novel label-free calibration method, ensures reliable model predictions under label shift by using a consistent calibration error estimator, achieving effective and robust unsupervised calibration.
Large Stepsize Gradient Descent for Non-Homogeneous Two-Layer Networks: Margin Improvement and Fast Optimization
·2166 words·11 mins
AI Generated Machine Learning Deep Learning 🏢 UC Berkeley
Large stepsize GD on non-homogeneous neural networks shows monotonic risk reduction after an initial oscillating phase, demonstrating implicit bias and optimization gains.
Large Scale Transfer Learning for Tabular Data via Language Modeling
·2834 words·14 mins
Machine Learning Transfer Learning 🏢 University of Washington
TABULA-8B, a novel language model for tabular prediction, achieves state-of-the-art zero-shot and few-shot performance across various benchmarks, exceeding existing methods by 5-15 percentage points.
Large Pre-trained time series models for cross-domain Time series analysis tasks
·1870 words·9 mins
Machine Learning Self-Supervised Learning 🏢 Georgia Institute of Technology
Large Pre-trained Time-series Models (LPTM) achieve superior forecasting and time-series classification results using a novel adaptive segmentation method, requiring up to 40% less data and 50% less …
Label Noise: Ignorance Is Bliss
·3504 words·17 mins
AI Generated Machine Learning Semi-Supervised Learning 🏢 University of Michigan
Ignorance is bliss: A new framework shows ignoring label noise in multi-class classification can achieve state-of-the-art performance, especially when using self-supervised feature extraction.
Label Delay in Online Continual Learning
·4705 words·23 mins
AI Generated Machine Learning Continual Learning 🏢 University of Oxford
Bridging the accuracy gap in online continual learning caused by label delays, a new framework with Importance Weighted Memory Sampling prioritizes relevant memory samples, significantly outperforming…
Knowledge Graph Completion by Intermediate Variables Regularization
·2107 words·10 mins
AI Generated Machine Learning Deep Learning 🏢 Fudan University
Novel intermediate variables regularization boosts knowledge graph completion!
Kernel-Based Function Approximation for Average Reward Reinforcement Learning: An Optimist No-Regret Algorithm
·311 words·2 mins
Machine Learning Reinforcement Learning 🏢 MediaTek Research
Novel optimistic RL algorithm using kernel methods achieves no-regret performance in the challenging infinite-horizon average-reward setting.
Kernel PCA for Out-of-Distribution Detection
·2628 words·13 mins
AI Generated Machine Learning Deep Learning 🏢 Shanghai Jiao Tong University
Boosting Out-of-Distribution Detection with Kernel PCA!
KALM: Knowledgeable Agents by Offline Reinforcement Learning from Large Language Model Rollouts
·3188 words·15 mins
Machine Learning Reinforcement Learning 🏢 National Key Laboratory for Novel Software Technology, Nanjing University, China
KALM: Knowledgeable agents learn complex tasks from LLMs via offline RL using imaginary rollouts, significantly outperforming baselines.
Kaleidoscope: Learnable Masks for Heterogeneous Multi-agent Reinforcement Learning
·2281 words·11 mins
Machine Learning Reinforcement Learning 🏢 Hong Kong University of Science and Technology
Kaleidoscope achieves high sample efficiency and policy diversity in heterogeneous MARL by using learnable masks for adaptive partial parameter sharing.
Iteratively Refined Early Interaction Alignment for Subgraph Matching based Graph Retrieval
·3157 words·15 mins
Machine Learning Deep Learning 🏢 UC San Diego
IsoNet++ iteratively refines subgraph matching via early interaction GNNs and node-pair partner interactions, significantly boosting graph retrieval accuracy.
Iteratively Refined Behavior Regularization for Offline Reinforcement Learning
·2346 words·12 mins
Machine Learning Reinforcement Learning 🏢 Shanxi University
Iteratively Refined Behavior Regularization boosts offline reinforcement learning by progressively refining the reference policy, ensuring robust and effective control policy learning.
Is Value Learning Really the Main Bottleneck in Offline RL?
·2601 words·13 mins
Machine Learning Reinforcement Learning 🏢 UC Berkeley
Offline RL’s performance often lags behind imitation learning, but this paper reveals that policy learning and generalization, not value function learning, are often the main bottlenecks.
Is Mamba Compatible with Trajectory Optimization in Offline Reinforcement Learning?
·3014 words·15 mins
Machine Learning Reinforcement Learning 🏢 National University of Defence Technology
Decision Mamba (DeMa) outperforms Decision Transformer (DT) in offline RL trajectory optimization with 30% fewer parameters on Atari and a quarter of the parameters on MuJoCo, demonstrating the efficacy of Mamba’s line…
Inversion-based Latent Bayesian Optimization
·4093 words·20 mins
AI Generated Machine Learning Optimization 🏢 Korea University
InvBO: Inversion-based Latent Bayesian Optimization solves the misalignment problem in LBO, boosting optimization accuracy and efficiency.
Inverse M-Kernels for Linear Universal Approximators of Non-Negative Functions
·1416 words·7 mins
Machine Learning Deep Learning 🏢 NTT Corporation
Unlocking efficient non-negative function approximation: This paper introduces inverse M-kernels, enabling flexible, linear universal approximators for one-dimensional inputs.
Inverse Factorized Soft Q-Learning for Cooperative Multi-agent Imitation Learning
·3040 words·15 mins
Machine Learning Reinforcement Learning 🏢 Singapore Management University
New multi-agent imitation learning algorithm (MIFQ) leverages inverse soft Q-learning and factorization for stable, efficient training, achieving state-of-the-art results on challenging benchmarks.
Introducing Spectral Attention for Long-Range Dependency in Time Series Forecasting
·3194 words·15 mins
Machine Learning Deep Learning 🏢 Seoul National University
Spectral Attention boosts long-range dependency capture in time series forecasting, achieving state-of-the-art results across various models and datasets.
IntraMix: Intra-Class Mixup Generation for Accurate Labels and Neighbors
·2763 words·13 mins
AI Generated Machine Learning Semi-Supervised Learning 🏢 Massive Data Computing Lab, Harbin Institute of Technology
IntraMix: Boost GNN accuracy by cleverly generating high-quality labels and enriching node neighborhoods using intra-class Mixup.