
Machine Learning

Interpretable Mesomorphic Networks for Tabular Data
·2985 words·15 mins
Machine Learning Interpretability 🏢 University of Freiburg
Interpretable Mesomorphic Neural Networks (IMNs) achieve accuracy comparable to black-box models while offering free-lunch explainability for tabular data through instance-specific linear models gener…
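The mechanism behind such instance-specific linear models can be pictured as a small hypernetwork that maps each input to its own linear coefficients, which then both predict and explain that input. A minimal sketch under that assumption (class name and layer sizes are illustrative, not the paper's code):

```python
import torch
import torch.nn as nn

class InstanceLinear(nn.Module):
    """Toy mesomorphic-style model: a hypernetwork emits a per-instance
    weight vector and bias; the prediction is that linear model applied
    to the same input, so the weights double as an explanation."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.hyper = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, n_features + 1),  # per-instance w and b
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        params = self.hyper(x)                 # (batch, n_features + 1)
        w, b = params[:, :-1], params[:, -1]
        return (w * x).sum(dim=1) + b          # <w(x), x> + b(x)

model = InstanceLinear(n_features=10)
x = torch.randn(8, 10)
logits = model(x)                      # predictions
explanations = model.hyper(x)[:, :-1]  # feature weights, one row per instance
```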
Interpretable Generalized Additive Models for Datasets with Missing Values
·2769 words·13 mins
Machine Learning Interpretability 🏢 Duke University
M-GAM: Interpretable additive models handling missing data with superior accuracy & sparsity!
Interpretable Concept Bottlenecks to Align Reinforcement Learning Agents
·2372 words·12 mins
Machine Learning Reinforcement Learning 🏢 Computer Science Department, TU Darmstadt
Successive Concept Bottleneck Agents (SCoBots) improve reinforcement learning by integrating interpretable layers, enabling concept-level inspection and human-in-the-loop revisions to fix misalignment…
Interactive Deep Clustering via Value Mining
·1729 words·9 mins
Machine Learning Unsupervised Learning 🏢 Sichuan University
Interactive Deep Clustering (IDC) significantly boosts deep clustering performance by strategically incorporating minimal user interaction to resolve ambiguous sample classifications.
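A plausible reading of "ambiguous sample classifications" is samples whose soft cluster assignments have high entropy, making them the most informative to show a user; a toy selection rule along those lines (the paper's actual criterion may differ):

```python
import numpy as np

def select_ambiguous(soft_assignments: np.ndarray, budget: int) -> np.ndarray:
    """Return indices of the `budget` samples whose cluster-assignment
    distributions have the highest entropy, i.e. are most ambiguous."""
    p = np.clip(soft_assignments, 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum(axis=1)
    return np.argsort(-entropy)[:budget]

# Example: 5 samples softly assigned to 3 clusters.
probs = np.array([[0.98, 0.01, 0.01],
                  [0.34, 0.33, 0.33],   # very ambiguous
                  [0.80, 0.15, 0.05],
                  [0.50, 0.49, 0.01],   # torn between two clusters
                  [0.90, 0.05, 0.05]])
print(select_ambiguous(probs, budget=2))  # -> [1 3]: query the user on these
```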
Interaction-Force Transport Gradient Flows
·1588 words·8 mins
Machine Learning Unsupervised Learning 🏢 Humboldt University of Berlin
New gradient flow geometry improves MMD-based sampling by teleporting particle mass, guaranteeing global exponential convergence, and yielding superior empirical results.
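For context, the baseline being improved is the plain MMD gradient flow, which transports particles along the gradient of the squared MMD toward a target sample; the paper's interaction-force geometry additionally reweights ("teleports") particle mass, which this toy sketch omits:

```python
import torch

def mmd2(x: torch.Tensor, y: torch.Tensor, bw: float = 1.0) -> torch.Tensor:
    """Squared MMD between particle sets x and y under an RBF kernel."""
    k = lambda a, b: torch.exp(-torch.cdist(a, b).pow(2) / (2 * bw**2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

y = torch.randn(256, 2) + 3.0               # samples from the target
x = torch.randn(64, 2, requires_grad=True)  # particles to transport
for _ in range(500):
    grad, = torch.autograd.grad(mmd2(x, y), x)
    with torch.no_grad():
        x -= 5.0 * grad   # pure transport step: positions move, mass stays fixed
```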
Integrating Suboptimal Human Knowledge with Hierarchical Reinforcement Learning for Large-Scale Multiagent Systems
·2222 words·11 mins
AI Generated Machine Learning Reinforcement Learning 🏢 University of Wollongong
Hierarchical Human Knowledge-guided MARL (hhk-MARL) framework accelerates large-scale multi-agent training by integrating suboptimal human knowledge, significantly improving performance and scalabilit…
Integrating GNN and Neural ODEs for Estimating Non-Reciprocal Two-Body Interactions in Mixed-Species Collective Motion
·1573 words·8 mins
Machine Learning Deep Learning 🏢 University of Tokyo
Deep learning framework integrating GNNs and neural ODEs precisely estimates non-reciprocal two-body interactions in mixed-species collective motion, accurately replicating both individual and collect…
Instructor-inspired Machine Learning for Robust Molecular Property Prediction
·2041 words·10 mins
Machine Learning Semi-Supervised Learning 🏢 Stanford University
InstructMol, a novel semi-supervised learning algorithm, leverages unlabeled data and an instructor model to significantly improve the accuracy and robustness of molecular property prediction, even wi…
Initializing Services in Interactive ML Systems for Diverse Users
·1498 words·8 mins
Machine Learning Federated Learning 🏢 University of Washington
This paper introduces a randomized algorithm that adaptively initializes multi-service ML systems for diverse users from minimal data, achieving near-optimal loss with provable guarantees.
Infusing Self-Consistency into Density Functional Theory Hamiltonian Prediction via Deep Equilibrium Models
·1907 words·9 mins
Machine Learning Deep Learning 🏢 Microsoft Research
Deep Equilibrium Models (DEQs) infused into DFT Hamiltonian prediction achieve self-consistency, accelerating large-scale materials simulations.
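"Self-consistency" here means the predicted Hamiltonian should be a fixed point of an update map, which is precisely what a deep equilibrium layer solves in its forward pass; a generic damped fixed-point solver illustrates the idea (the map below is a toy contraction, not a DFT update):

```python
import torch

def fixed_point(f, x0: torch.Tensor, tol: float = 1e-6, max_iter: int = 200):
    """Solve x = f(x) by damped fixed-point iteration: the core forward
    operation of a deep equilibrium model."""
    x = x0
    for _ in range(max_iter):
        x_new = 0.5 * x + 0.5 * f(x)   # damping stabilizes convergence
        if torch.norm(x_new - x) < tol * (1 + torch.norm(x)):
            return x_new
        x = x_new
    return x

# Toy contractive "refinement" map standing in for a Hamiltonian update.
A = 0.2 * torch.randn(4, 4)
b = torch.randn(4, 4)
H = fixed_point(lambda h: torch.tanh(h @ A) + b, torch.zeros(4, 4))
print(torch.norm(H - (torch.tanh(H @ A) + b)))  # ~0: H is self-consistent
```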
Inflationary Flows: Calibrated Bayesian Inference with Diffusion-Based Models
·3134 words·15 mins
Machine Learning Deep Learning 🏢 Duke University
Calibrated Bayesian inference achieved via novel diffusion models uniquely mapping high-dimensional data to lower-dimensional Gaussian distributions.
Infinite Limits of Multi-head Transformer Dynamics
·4731 words·23 mins
AI Generated Machine Learning Deep Learning 🏢 Harvard University
Researchers reveal how the training dynamics of transformer models behave at infinite width, depth, and head count, providing key insights for scaling up these models.
Inferring stochastic low-rank recurrent neural networks from neural data
·3178 words·15 mins
Machine Learning Deep Learning 🏢 University of Tübingen, Germany
Researchers developed a method using variational sequential Monte Carlo to fit stochastic low-rank recurrent neural networks to neural data, enabling efficient analysis and generation of realistic neu…
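For reference, the generative model being fitted is a stochastic RNN whose recurrent connectivity is constrained to rank r ≪ N; a minimal Euler–Maruyama simulation of such a network (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, r, T, dt, sigma = 100, 3, 500, 0.1, 0.05
U = rng.normal(size=(N, r)) / np.sqrt(N)  # connectivity J = U @ V.T has rank r
V = rng.normal(size=(N, r)) / np.sqrt(N)

x = np.zeros((T, N))
for t in range(T - 1):
    drift = -x[t] + U @ (V.T @ np.tanh(x[t]))  # rank-r recurrence
    x[t + 1] = x[t] + dt * drift + np.sqrt(dt) * sigma * rng.normal(size=N)

latents = x @ U  # (T, r): the low-dimensional trajectory used for analysis
```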
Inference of Neural Dynamics Using Switching Recurrent Neural Networks
·2472 words·12 mins
Machine Learning Deep Learning 🏢 Yale University
Switching recurrent neural networks (SRNNs) reveal behaviorally relevant switches in neural dynamics!
Inductive biases of multi-task learning and finetuning: multiple regimes of feature reuse
·3248 words·16 mins
AI Generated Machine Learning Transfer Learning 🏢 Columbia University
Multi-task learning and finetuning show surprising feature reuse biases, including a novel ’nested feature selection’ regime where finetuning prioritizes a sparse subset of pretrained features, signif…
Incremental Learning of Retrievable Skills For Efficient Continual Task Adaptation
·2821 words·14 mins
Machine Learning Reinforcement Learning 🏢 Carnegie Mellon University
IsCiL: a novel adapter-based continual imitation learning framework that efficiently adapts to new tasks by incrementally learning and retrieving reusable skills.
In-Trajectory Inverse Reinforcement Learning: Learn Incrementally From An Ongoing Trajectory
·1427 words·7 mins
Machine Learning Reinforcement Learning 🏢 Pennsylvania State University
MERIT-IRL: First in-trajectory IRL framework learns reward & policy incrementally from ongoing trajectories, guaranteeing sub-linear regret.
Improving Temporal Link Prediction via Temporal Walk Matrix Projection
·2541 words·12 mins
AI Generated Machine Learning Deep Learning 🏢 CCSE Lab, Beihang University
TPNet boosts temporal link prediction accuracy and efficiency by unifying relative encodings via temporal walk matrices and using random feature propagation.
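The random-feature trick can be sketched as a Johnson–Lindenstrauss-style estimator: keep a random vector per node, fold neighbors' vectors in as time-stamped edges arrive, and read decayed walk-matrix entries off inner products (variable names are illustrative, not TPNet's code):

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, dim, decay = 6, 1024, 0.9
G = rng.normal(size=(n_nodes, dim)) / np.sqrt(dim)  # fixed random basis
R = G.copy()  # R tracks W @ G for a walk matrix W initialized to I

# Time-ordered edge stream: each edge (u, v) extends temporal walks.
for u, v in [(0, 1), (1, 2), (2, 3), (0, 2)]:
    R[u] = R[u] + decay * R[v]  # implicitly W[u, :] += decay * W[v, :]
    R[v] = R[v] + decay * R[u]  # symmetric update (uses the fresh R[u])

# Since G @ G.T ~ I for large dim, R[u] @ G[v] estimates W[u, v] without
# ever storing the dense walk matrix W.
w_03 = R[0] @ G[3]  # ~ decayed count of temporal walks 0 -> 2 -> 3
```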
Improving self-training under distribution shifts via anchored confidence with theoretical guarantees
·2507 words·12 mins
Machine Learning Semi-Supervised Learning 🏢 Northwestern University
Anchored Confidence (AnCon) significantly improves self-training under distribution shifts by using a temporal ensemble to smooth noisy pseudo-labels, achieving 8-16% performance gains without computa…
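The temporal-ensemble idea can be sketched as an exponential moving average of the model's class probabilities across training rounds, with the smoothed estimate replacing the raw, noisy pseudo-label; a minimal sketch assuming EMA smoothing (the authors' exact weighting may differ):

```python
import numpy as np

def update_anchor(anchor: np.ndarray, probs: np.ndarray, momentum: float = 0.9):
    """EMA of per-sample class probabilities across self-training rounds."""
    return momentum * anchor + (1.0 - momentum) * probs

rng = np.random.default_rng(0)
n_samples, n_classes = 4, 3
anchor = np.full((n_samples, n_classes), 1.0 / n_classes)  # uniform start
for _ in range(10):  # stand-in for the model's per-epoch softmax outputs
    epoch_probs = rng.dirichlet(np.ones(n_classes), size=n_samples)
    anchor = update_anchor(anchor, epoch_probs)

pseudo_labels = anchor.argmax(axis=1)  # smoothed, less noisy training targets
```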
Improving Neural ODE Training with Temporal Adaptive Batch Normalization
·3052 words·15 mins
AI Generated Machine Learning Deep Learning 🏢 Hong Kong University of Science and Technology
Temporal Adaptive Batch Normalization (TA-BN) boosts Neural ODE training by resolving traditional Batch Normalization's limitations with a continuous-time counterpart, enabling deeper networks …
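One way to picture a "continuous-time counterpart" of batch normalization is to give each sub-interval of the ODE's time horizon its own normalization statistics; a speculative sketch along those lines (the binning scheme is my illustration, not necessarily TA-BN's construction):

```python
import torch
import torch.nn as nn

class TimeBinnedBN(nn.Module):
    """Illustrative time-aware BatchNorm for Neural ODEs: the horizon
    [0, 1] is split into bins, each with its own running statistics,
    so normalization adapts to the integration time t."""
    def __init__(self, num_features: int, n_bins: int = 10):
        super().__init__()
        self.n_bins = n_bins
        self.bns = nn.ModuleList(
            nn.BatchNorm1d(num_features) for _ in range(n_bins)
        )

    def forward(self, x: torch.Tensor, t: float) -> torch.Tensor:
        i = min(int(t * self.n_bins), self.n_bins - 1)
        return self.bns[i](x)

bn = TimeBinnedBN(num_features=16)
h = torch.randn(8, 16)           # hidden state at some ODE time step
print(bn(h, t=0.37).shape)       # torch.Size([8, 16])
```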