Machine Learning
Adaptive Labeling for Efficient Out-of-distribution Model Evaluation
·1748 words·9 mins·
Machine Learning
Active Learning
🏢 Columbia University
Adaptive labeling reduces uncertainty in out-of-distribution model evaluation by strategically selecting which data points to label, yielding more efficient and reliable assessments.
Adaptive Exploration for Data-Efficient General Value Function Evaluations
·2591 words·13 mins·
Machine Learning
Reinforcement Learning
🏢 McGill University
GVFExplorer: An adaptive behavior policy efficiently learns multiple GVFs by minimizing return variance, optimizing data usage and reducing prediction errors.
Adaptive Depth Networks with Skippable Sub-Paths
·3982 words·19 mins·
AI Generated
Machine Learning
Deep Learning
🏢 Incheon National University
Adaptive Depth Networks with Skippable Sub-Paths: Train once, deploy efficiently! This paper proposes a novel training method to create adaptive-depth networks, enabling on-demand model depth selectio…
Adaptive $Q$-Aid for Conditional Supervised Learning in Offline Reinforcement Learning
·3193 words·15 mins·
Machine Learning
Reinforcement Learning
🏢 KAIST
Q-Aided Conditional Supervised Learning (QCS) effectively combines the stability of return-conditioned supervised learning with the stitching ability of Q-functions, achieving superior offline reinfor…
Adapting to Unknown Low-Dimensional Structures in Score-Based Diffusion Models
·393 words·2 mins·
Machine Learning
Deep Learning
🏢 Chinese University of Hong Kong
Score-based diffusion models are improved by a novel coefficient design, enabling efficient adaptation to unknown low-dimensional data structures and achieving a convergence rate of O(k²/√T).
Adam on Local Time: Addressing Nonstationarity in RL with Relative Adam Timesteps
·2522 words·12 mins·
AI Generated
Machine Learning
Reinforcement Learning
🏢 University of Oxford
Adam-Rel: A novel optimizer for RL, dramatically improves performance by resetting Adam’s timestep to 0 after target network updates, preventing large, suboptimal changes.
Ada-MSHyper: Adaptive Multi-Scale Hypergraph Transformer for Time Series Forecasting
·3027 words·15 mins·
Machine Learning
Deep Learning
🏢 Zhejiang University
Ada-MSHyper: A novel adaptive multi-scale hypergraph transformer significantly boosts time series forecasting accuracy by modeling group-wise interactions and handling complex temporal variations.
ActSort: An active-learning accelerated cell sorting algorithm for large-scale calcium imaging datasets
·2928 words·14 mins·
Machine Learning
Active Learning
🏢 Stanford University
ActSort: Active learning dramatically accelerates cell sorting in massive calcium imaging datasets, minimizing human effort and improving accuracy.
Active, anytime-valid risk controlling prediction sets
·1276 words·6 mins·
Machine Learning
Active Learning
🏢 Carnegie Mellon University
This paper introduces anytime-valid risk-controlling prediction sets for active learning, guaranteeing low risk even with adaptive data collection and limited label budgets.
Active Set Ordering
·1737 words·9 mins·
AI Generated
Machine Learning
Active Learning
🏢 Applied Artificial Intelligence Institute, Deakin University
Active Set Ordering: Efficiently discover input subsets (maximizers, top-k) of expensive black-box functions via pairwise comparisons, using a novel Mean Prediction algorithm with theoretical guarante…
Active preference learning for ordering items in- and out-of-sample
·2112 words·10 mins·
Machine Learning
Active Learning
🏢 Chalmers University of Technology
Active preference learning efficiently orders items using contextual attributes, minimizing the number of comparisons needed and improving out-of-sample generalization.
Active Learning of General Halfspaces: Label Queries vs Membership Queries
·262 words·2 mins·
Machine Learning
Active Learning
🏢 University of Wisconsin-Madison
Active learning of general halfspaces is surprisingly hard with label queries alone; membership queries are the key to query-efficient learning.
Active Learning for Derivative-Based Global Sensitivity Analysis with Gaussian Processes
·3450 words·17 mins·
AI Generated
Machine Learning
Active Learning
🏢 Stanford University
Boost global sensitivity analysis efficiency by 10x with novel active learning methods targeting derivative-based measures for expensive black-box functions!
Active design of two-photon holographic stimulation for identifying neural population dynamics
·1672 words·8 mins·
Machine Learning
Active Learning
🏢 UC Berkeley
Researchers developed an active learning method using two-photon holographic optogenetics to efficiently identify neural population dynamics, achieving up to a two-fold reduction in data needed for ac…
Activation Map Compression through Tensor Decomposition for Deep Learning
·2035 words·10 mins·
Machine Learning
Deep Learning
🏢 Telecom Paris
Slash deep learning’s memory footprint! This paper introduces a novel activation map compression technique via tensor decomposition, significantly boosting on-device training efficiency for edge AI.
Action Gaps and Advantages in Continuous-Time Distributional Reinforcement Learning
·1595 words·8 mins·
Machine Learning
Reinforcement Learning
🏢 McGill University
Distributional RL's sensitivity to high-frequency decision-making is characterized, and new algorithms resolve the resulting performance issues in continuous-time RL.
Achieving Tractable Minimax Optimal Regret in Average Reward MDPs
·1775 words·9 mins·
Machine Learning
Reinforcement Learning
🏢 Univ. Grenoble Alpes
First tractable algorithm achieves minimax optimal regret in average-reward MDPs, solving a major computational challenge in reinforcement learning.
Achieving Near-Optimal Convergence for Distributed Minimax Optimization with Adaptive Stepsizes
·2347 words·12 mins·
AI Generated
Machine Learning
Federated Learning
🏢 ETH Zurich
D-AdaST: A novel distributed adaptive minimax optimization method achieves near-optimal convergence by tracking stepsizes, solving the inconsistency problem hindering existing adaptive methods.
Achieving Linear Convergence with Parameter-Free Algorithms in Decentralized Optimization
·1448 words·7 mins·
AI Generated
Machine Learning
Optimization
🏢 Innopolis University
A novel parameter-free decentralized optimization algorithm achieves linear convergence for strongly convex, smooth objectives, eliminating the need for hyperparameter tuning and improving scalability…
Achieving Constant Regret in Linear Markov Decision Processes
·1852 words·9 mins·
AI Generated
Machine Learning
Reinforcement Learning
🏢 MIT
Cert-LSVI-UCB achieves constant regret in RL with linear function approximation, even under model misspecification, using a novel certified estimator.