
Machine Learning

Improving Neural Network Surface Processing with Principal Curvatures
·2900 words·14 mins
AI Generated Machine Learning Deep Learning 🏢 Inria
Boosting neural network surface processing: Using principal curvatures as input significantly improves segmentation and classification accuracy while reducing computational overhead.
Improving Linear System Solvers for Hyperparameter Optimisation in Iterative Gaussian Processes
·3448 words·17 mins
Machine Learning Gaussian Processes 🏢 University of Cambridge
Accelerate Gaussian process hyperparameter optimization by up to 72x using novel linear system solver techniques.
Improving Generalization and Convergence by Enhancing Implicit Regularization
·2134 words·11 mins
Machine Learning Deep Learning 🏢 Peking University
The IRE framework expedites the discovery of flat minima in deep learning, enhancing generalization and convergence. By decoupling the dynamics of flat and sharp directions, IRE accelerates sharpness reduction along the flat directions.
Improving Equivariant Model Training via Constraint Relaxation
·1689 words·8 mins
Machine Learning Deep Learning 🏢 University of Pennsylvania
Boost equivariant model training by strategically relaxing constraints during training, enhancing optimization and generalization!
Improving Deep Reinforcement Learning by Reducing the Chain Effect of Value and Policy Churn
·3413 words·17 mins
Machine Learning Reinforcement Learning 🏢 Université De Montréal
Deep RL agents often suffer from instability due to the ‘chain effect’ of value and policy churn; this paper introduces CHAIN, a novel method to reduce this churn, thereby improving DRL performance and stability.
Improving Deep Learning Optimization through Constrained Parameter Regularization
·3522 words·17 mins
Machine Learning Deep Learning 🏢 University of Freiburg
Constrained Parameter Regularization (CPR) outperforms traditional weight decay by dynamically adapting regularization strengths for individual parameters, leading to better deep learning model performance.
Improved Sample Complexity for Multiclass PAC Learning
·258 words·2 mins
Machine Learning Optimization 🏢 Purdue University
This paper significantly improves our understanding of multiclass PAC learning by reducing the sample complexity gap and proposing two novel approaches to fully resolve the optimal sample complexity.
Improved Sample Complexity Bounds for Diffusion Model Training
·360 words·2 mins
Machine Learning Deep Learning 🏢 University of Texas at Austin
Training high-quality diffusion models efficiently is now possible, thanks to novel sample complexity bounds improving exponentially on previous work.
Improved Regret of Linear Ensemble Sampling
·1286 words·7 mins
AI Generated Machine Learning Reinforcement Learning 🏢 Seoul National University
Linear ensemble sampling achieves a state-of-the-art regret bound of Õ(d³/²√T) with a logarithmic ensemble size, closing the theory-practice gap in linear bandit algorithms.
Improved off-policy training of diffusion samplers
·2211 words·11 mins
Machine Learning Deep Learning 🏢 University of Toronto
Researchers enhanced diffusion samplers by developing a novel exploration strategy and a unified library, improving sample quality and addressing reproducibility challenges.
Improved learning rates in multi-unit uniform price auctions
·442 words·3 mins
AI Generated Machine Learning Reinforcement Learning 🏢 University of Oxford
New modeling of the bid space in multi-unit uniform price auctions achieves regret of Õ(K⁴/³T²/³) under bandit feedback, improving over prior work and closing the gap with discriminatory pricing.
Improved Bayes Regret Bounds for Multi-Task Hierarchical Bayesian Bandit Algorithms
·1596 words·8 mins
Machine Learning Reinforcement Learning 🏢 Hong Kong University of Science and Technology
This paper significantly improves Bayes regret bounds for hierarchical Bayesian bandit algorithms, achieving logarithmic regret in finite action settings and enhanced bounds in multi-task linear and combinatorial semi-bandit settings.
Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations
·2869 words·14 mins
AI Generated Machine Learning Semi-Supervised Learning 🏢 Carnegie Mellon University
Unified framework for imprecise label learning handles noisy, partial, and semi-supervised data, improving model training efficiency and accuracy.
Implicitly Guided Design with PropEn: Match your Data to Follow the Gradient
·2326 words·11 mins
Machine Learning Deep Learning 🏢 Prescient/MLDD, Genentech Research and Early Development
PropEn: a novel framework for implicitly guided design optimization that ‘matches’ each sample to an improved counterpart, approximating the property gradient without training a discriminator.
Identifying Selections for Unsupervised Subtask Discovery
·3702 words·18 mins
AI Generated Machine Learning Reinforcement Learning 🏢 Carnegie Mellon University
This paper introduces seq-NMF, a novel method for unsupervised subtask discovery in reinforcement learning that leverages selection variables to enhance generalization and data efficiency.
Identifying Latent State-Transition Processes for Individualized Reinforcement Learning
·2375 words·12 mins
Machine Learning Reinforcement Learning 🏢 Carnegie Mellon University
This study introduces a novel framework for individualized reinforcement learning, guaranteeing the identifiability of latent factors influencing state transitions and providing a practical method for learning them.
Identify Then Recommend: Towards Unsupervised Group Recommendation
·1520 words·8 mins
Machine Learning Self-Supervised Learning 🏢 Ant Group
The unsupervised group recommendation model ITR achieves superior user and group recommendation accuracy by dynamically identifying user groups and employing self-supervised learning, eliminating the need for predefined user groups.
Identifiable Object-Centric Representation Learning via Probabilistic Slot Attention
·2355 words·12 mins
Machine Learning Representation Learning 🏢 Imperial College London
Probabilistic Slot Attention achieves identifiable object-centric representations without supervision, advancing systematic generalization in machine learning.
HyperPrism: An Adaptive Non-linear Aggregation Framework for Distributed Machine Learning over Non-IID Data and Time-varying Communication Links
·1515 words·8 mins
Machine Learning Federated Learning 🏢 Shanghai University of Electric Power
HyperPrism, a novel framework, tackles challenges in distributed machine learning by using adaptive non-linear aggregation to handle non-IID data and dynamic communication links, significantly improving performance.
HyperLogic: Enhancing Diversity and Accuracy in Rule Learning with HyperNets
·2378 words·12 mins
Machine Learning Deep Learning 🏢 Chinese University of Hong Kong (Shenzhen)
HyperLogic uses hypernetworks to generate diverse, accurate, and concise rule sets from neural networks, enhancing both interpretability and accuracy in rule learning.