Machine Learning
Tri-Level Navigator: LLM-Empowered Tri-Level Learning for Time Series OOD Generalization
·1858 words·9 mins·
Machine Learning
Few-Shot Learning
🏢 Tongji University
LLM-powered Tri-level learning framework enhances time series OOD generalization.
TreeVI: Reparameterizable Tree-structured Variational Inference for Instance-level Correlation Capturing
·1694 words·8 mins·
Machine Learning
Variational Inference
🏢 School of Computer Science and Engineering, Sun Yat-Sen University
TreeVI: Scalable tree-structured variational inference captures instance-level correlations for improved model accuracy.
Treeffuser: probabilistic prediction via conditional diffusions with gradient-boosted trees
·2082 words·10 mins·
Machine Learning
Deep Learning
🏢 Department of Computer Science, Columbia University
Treeffuser: Accurate probabilistic predictions from tabular data using conditional diffusion models and gradient-boosted trees!
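As a rough illustration of the core idea (not the authors' implementation), the sketch below fits gradient-boosted trees to predict the noise injected into a target at a randomly sampled diffusion time, conditioned on tabular features. The data, the simple VP-style noise schedule, and all hyperparameters here are invented for the example.

```python
# Minimal sketch: score/noise estimation with gradient-boosted trees on tabular data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
x = rng.normal(size=(2000, 5))              # tabular covariates
y = x[:, 0] + 0.5 * rng.normal(size=2000)   # noisy scalar target

t = rng.uniform(1e-3, 1.0, size=2000)       # diffusion times
eps = rng.normal(size=2000)                 # injected Gaussian noise
y_t = np.sqrt(1 - t) * y + np.sqrt(t) * eps  # simple (illustrative) corruption

# Trees see (features, noisy target, time) and learn to predict the injected noise.
model = GradientBoostingRegressor(max_depth=3, n_estimators=200)
model.fit(np.column_stack([x, y_t, t]), eps)
```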
Transition Constrained Bayesian Optimization via Markov Decision Processes
·2420 words·12 mins·
Machine Learning
Reinforcement Learning
🏢 Imperial College London
This paper presents a novel BayesOpt framework that incorporates Markov Decision Processes to optimize black-box functions with transition constraints, overcoming limitations of traditional methods.
Transformers Learn to Achieve Second-Order Convergence Rates for In-Context Linear Regression
·3338 words·16 mins·
Machine Learning
Few-Shot Learning
🏢 University of Southern California
Transformers surprisingly learn second-order optimization methods for in-context linear regression, achieving exponentially faster convergence than gradient descent!
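To see the contrast being claimed, here is a minimal standalone illustration (a toy setup, not the paper's construction): on a least-squares objective, a single Newton step lands on the exact minimizer, while plain gradient descent only contracts the error geometrically.

```python
# Toy comparison: one Newton step vs. gradient descent on least squares.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true

def loss(w):
    return 0.5 * np.mean((X @ w - y) ** 2)

# Newton: w <- w - H^{-1} grad; exact for a quadratic objective.
H = X.T @ X / len(X)
g = X.T @ (X @ np.zeros(5) - y) / len(X)
w_newton = -np.linalg.solve(H, g)

# Gradient descent: many small steps, geometric convergence.
w_gd, lr = np.zeros(5), 0.1
for _ in range(20):
    w_gd -= lr * X.T @ (X @ w_gd - y) / len(X)

print(loss(w_newton), loss(w_gd))  # Newton ~0 after one step; GD still far off
```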
Transformers as Game Players: Provable In-context Game-playing Capabilities of Pre-trained Models
·502 words·3 mins·
AI Generated
Machine Learning
Reinforcement Learning
🏢 University of Virginia
Pre-trained transformers can provably learn to play games near-optimally using in-context learning, offering theoretical guarantees for both decentralized and centralized settings.
Transformers are Minimax Optimal Nonparametric In-Context Learners
·1461 words·7 mins·
AI Generated
Machine Learning
Meta Learning
🏢 University of Tokyo
Transformers excel at in-context learning by leveraging minimax-optimal nonparametric learning, achieving near-optimal risk with sufficient pretraining data diversity.
Transferring disentangled representations: bridging the gap between synthetic and real images
·3866 words·19 mins·
Machine Learning
Representation Learning
🏢 Università Degli Studi Di Genova
This paper bridges the gap between disentanglement on synthetic and real images by proposing a novel transfer learning approach. The method leverages weakly supervised learning on synthetic data to train…
Transferable Boltzmann Generators
·4942 words·24 mins·
AI Generated
Machine Learning
Deep Learning
🏢 Freie Universität Berlin
Transferable Boltzmann Generators enable efficient, zero-shot sampling of unseen molecular systems’ equilibrium distributions, boosting molecular simulations.
Transfer Learning for Latent Variable Network Models
·1891 words·9 mins·
Machine Learning
Transfer Learning
🏢 University of Texas at Austin
This paper presents efficient algorithms for transfer learning in latent variable network models, achieving vanishing error under specific conditions, and attaining minimax optimal rates for stochasti…
Transductive Active Learning: Theory and Applications
·3403 words·16 mins·
Machine Learning
Active Learning
🏢 ETH Zurich
This paper introduces transductive active learning, proving its efficiency in minimizing uncertainty and achieving state-of-the-art results in neural network fine-tuning and safe Bayesian optimization…
Trajectory Data Suffices for Statistically Efficient Learning in Offline RL with Linear q^π-Realizability and Concentrability
·479 words·3 mins·
AI Generated
Machine Learning
Reinforcement Learning
🏢 University of Alberta
Offline RL with trajectory data achieves statistically efficient learning under linear q^π-realizability and concentrability, solving a problem previously deemed impossible.
Training Binary Neural Networks via Gaussian Variational Inference and Low-Rank Semidefinite Programming
·1655 words·8 mins·
AI Generated
Machine Learning
Deep Learning
🏢 University of Chicago
VISPA, a novel BNN training framework using Gaussian variational inference and low-rank SDP, achieves state-of-the-art accuracy on various benchmarks.
Towards Understanding Extrapolation: a Causal Lens
·2076 words·10 mins·
Machine Learning
Transfer Learning
🏢 Carnegie Mellon University
This work unveils a causal lens on extrapolation, offering theoretical guarantees for accurate predictions on out-of-support data, even with limited target samples.
Towards the Transferability of Rewards Recovered via Regularized Inverse Reinforcement Learning
·2032 words·10 mins·
AI Generated
Machine Learning
Reinforcement Learning
🏢 SYCAMORE, EPFL
This paper proposes a novel solution to the transferability problem in inverse reinforcement learning (IRL) using principal angles to measure the similarity between transition laws. It provides suffi…
Towards Stable Representations for Protein Interface Prediction
·2364 words·12 mins·
AI Generated
Machine Learning
Representation Learning
🏢 Hong Kong University of Science and Technology
ATProt: Adversarial training makes protein interface prediction robust to protein structural flexibility!
Towards Exact Gradient-based Training on Analog In-memory Computing
·1654 words·8 mins·
Machine Learning
Deep Learning
🏢 Rensselaer Polytechnic Institute
Analog in-memory computing (AIMC) training suffers from asymptotic errors due to asymmetric updates. This paper rigorously proves this limitation, proposes a novel discrete-time model to characterize …
Towards Efficient and Optimal Covariance-Adaptive Algorithms for Combinatorial Semi-Bandits
·1492 words·8 mins·
Machine Learning
Reinforcement Learning
🏢 Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK
Novel covariance-adaptive algorithms achieve optimal gap-free regret bounds for combinatorial semi-bandits, improving efficiency with sampling-based approaches.
Towards Dynamic Message Passing on Graphs
·2834 words·14 mins·
Machine Learning
Deep Learning
🏢 Institute of Computing Technology, CAS
N2: A novel dynamic message-passing GNN tackles message-passing bottlenecks and high computational costs by introducing learnable pseudo-nodes and dynamic pathways in a common state space, achieving s…
Towards Diverse Device Heterogeneous Federated Learning via Task Arithmetic Knowledge Integration
·2932 words·14 mins·
Machine Learning
Federated Learning
🏢 UC San Diego
TAKFL, a novel federated learning framework, tackles device heterogeneity by independently distilling knowledge from diverse devices and integrating it adaptively, achieving state-of-the-art performan…