
Machine Learning

HEPrune: Fast Private Training of Deep Neural Networks With Encrypted Data Pruning
·2059 words·10 mins
Machine Learning Deep Learning 🏢 University of Central Florida
HEPrune accelerates private deep learning training by 16x through encrypted data pruning, achieving this speedup with minimal accuracy loss.
HC-GAE: The Hierarchical Cluster-based Graph Auto-Encoder for Graph Representation Learning
·1626 words·8 mins
Machine Learning Representation Learning 🏢 Zhejiang Key Laboratory of Intelligent Education Technology and Application, Zhejiang Normal University
HC-GAE: A novel hierarchical graph autoencoder combats over-smoothing by using hard node assignment to create isolated subgraphs, improving graph representation learning for classification.
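As a rough illustration of what hard node assignment can look like (a minimal numpy sketch, not HC-GAE's implementation), the snippet below one-hot-assigns nodes to clusters, drops cross-cluster edges so each cluster becomes an isolated subgraph, and pools the result:

```python
# Minimal sketch (not the paper's implementation): hard node assignment that splits a
# graph into isolated subgraphs before pooling, as described in the HC-GAE summary.
import numpy as np

def hard_assign_pool(A, S_soft):
    """A: (N, N) adjacency; S_soft: (N, K) soft assignment scores."""
    N, K = S_soft.shape
    S = np.zeros((N, K))
    S[np.arange(N), S_soft.argmax(axis=1)] = 1.0   # hard one-hot assignment
    same_cluster = S @ S.T                          # 1 iff two nodes share a cluster
    A_isolated = A * same_cluster                   # drop cross-cluster edges -> isolated subgraphs
    A_pooled = S.T @ A_isolated @ S                 # coarsened adjacency over the K clusters
    return S, A_isolated, A_pooled

rng = np.random.default_rng(0)
A = (rng.random((6, 6)) < 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T                      # symmetric adjacency, no self-loops
S, A_iso, A_pool = hard_assign_pool(A, rng.random((6, 2)))
print(A_iso, A_pool, sep="\n")
```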
Handling Learnwares from Heterogeneous Feature Spaces with Explicit Label Exploitation
·2061 words·10 mins
Machine Learning Transfer Learning 🏢 National Key Laboratory for Novel Software Technology, Nanjing University, China
This paper enhances learnware dock systems by using model outputs to improve heterogeneous learnware management, enabling effective task handling even without perfectly matched models.
Hamiltonian Score Matching and Generative Flows
·1465 words·7 mins
Machine Learning Generative Modeling 🏢 MIT
Hamiltonian Generative Flows (HGFs) revolutionize generative modeling by leveraging Hamiltonian dynamics, offering enhanced score matching and generative capabilities.
Hamiltonian Monte Carlo on ReLU Neural Networks is Inefficient
·1771 words·9 mins
Machine Learning Deep Learning 🏢 University of Delaware
Hamiltonian Monte Carlo struggles with ReLU neural networks: high rejection rates hinder Bayesian deep learning.
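A toy sketch of the mechanism behind this claim (a single HMC step on a ReLU-style piecewise-linear potential, not the paper's Bayesian neural network experiments): the Metropolis test rejects a proposal whenever the leapfrog integrator's energy error is large, and non-smooth gradients are what inflate that error.

```python
# Illustrative HMC step with a Metropolis test on a non-smooth, ReLU-like potential.
import numpy as np

def U(q):        # piecewise-linear "ReLU-like" term plus a quadratic term
    return np.sum(np.maximum(q, 0.0)) + 0.5 * q @ q

def grad_U(q):
    return (q > 0).astype(float) + q

def hmc_step(q, rng, step=0.3, n_steps=20):
    p = rng.normal(size=q.shape)
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * step * grad_U(q_new)             # leapfrog: half kick
    for _ in range(n_steps - 1):
        q_new += step * p_new                       # drift
        p_new -= step * grad_U(q_new)               # full kick
    q_new += step * p_new
    p_new -= 0.5 * step * grad_U(q_new)             # final half kick
    dH = (U(q_new) + 0.5 * p_new @ p_new) - (U(q) + 0.5 * p @ p)
    accept = np.log(rng.random()) < -dH             # Metropolis test; large energy error -> rejection
    return (q_new if accept else q), accept

rng = np.random.default_rng(0)
q, accepts = rng.normal(size=50), []
for _ in range(200):
    q, a = hmc_step(q, rng)
    accepts.append(a)
print("acceptance rate:", np.mean(accepts))
```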
Hamiltonian Monte Carlo Inference of Marginalized Linear Mixed-Effects Models
·2352 words·12 mins
Machine Learning Deep Learning 🏢 University of Massachusetts Amherst
Accelerate Bayesian inference in linear mixed-effects models by efficiently marginalizing random effects using fast linear algebra, enabling faster and more accurate posterior estimations.
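For context, the density being targeted is the marginal likelihood obtained after integrating out the random effects analytically; the sketch below (standard LMM notation, not the paper's fast linear-algebra routines) evaluates it with a Cholesky factorization:

```python
# Sketch: marginal log-likelihood of a linear mixed-effects model
# y ~ N(X beta, Z G Z^T + sigma^2 I) after marginalizing the random effects.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def marginal_loglik(y, X, Z, beta, G, sigma2):
    n = y.shape[0]
    V = Z @ G @ Z.T + sigma2 * np.eye(n)          # marginal covariance
    r = y - X @ beta
    c, low = cho_factor(V)
    logdet = 2.0 * np.sum(np.log(np.diag(c)))
    quad = r @ cho_solve((c, low), r)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + quad)

rng = np.random.default_rng(0)
n, p, q = 40, 3, 5
X, Z = rng.normal(size=(n, p)), rng.normal(size=(n, q))
beta, G, sigma2 = rng.normal(size=p), 0.5 * np.eye(q), 1.0
y = X @ beta + Z @ rng.multivariate_normal(np.zeros(q), G) + rng.normal(scale=np.sqrt(sigma2), size=n)
print(marginal_loglik(y, X, Z, beta, G, sigma2))
```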
Guiding Neural Collapse: Optimising Towards the Nearest Simplex Equiangular Tight Frame
·3208 words·16 mins
Machine Learning Deep Learning 🏢 Australian National University
Researchers devised a novel method to accelerate neural network training by guiding the optimization process toward a Simplex Equiangular Tight Frame, exploiting the Neural Collapse phenomenon to enha…
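The target structure itself is easy to write down; the sketch below (construction only, not the paper's optimisation procedure) builds a K-class simplex equiangular tight frame whose columns have unit norm and pairwise cosine -1/(K-1):

```python
# Sketch: a K-class simplex ETF, M = sqrt(K/(K-1)) * U (I - (1/K) 1 1^T),
# where U in R^{d x K} has orthonormal columns.
import numpy as np

def simplex_etf(d, K, seed=0):
    assert d >= K, "need feature dimension d >= number of classes K"
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.normal(size=(d, K)))   # orthonormal columns
    return np.sqrt(K / (K - 1)) * U @ (np.eye(K) - np.ones((K, K)) / K)

M = simplex_etf(d=16, K=5)
norms = np.linalg.norm(M, axis=0)
cos = (M.T @ M) / np.outer(norms, norms)
print(np.round(cos, 3))    # 1 on the diagonal, -1/(K-1) = -0.25 elsewhere
```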
Guided Trajectory Generation with Diffusion Models for Offline Model-based Optimization
·3001 words·15 mins
Machine Learning Optimization 🏢 Korea Advanced Institute of Science and Technology (KAIST)
GTG, a novel conditional generative modeling approach, leverages diffusion models to generate high-scoring design trajectories for offline model-based optimization, outperforming existing methods on b…
GUIDE: Real-Time Human-Shaped Agents
·2015 words·10 mins
Machine Learning Reinforcement Learning 🏢 Duke University
GUIDE: Real-time human-shaped AI agents achieve up to 30% higher success rates using continuous human feedback, boosted by a parallel training model that mimics human input for continued improvement.
GTA: Generative Trajectory Augmentation with Guidance for Offline Reinforcement Learning
·3982 words·19 mins
Machine Learning Reinforcement Learning 🏢 KAIST
Generative Trajectory Augmentation (GTA) significantly boosts offline reinforcement learning by generating high-reward trajectories using a conditional diffusion model, enhancing algorithm performance…
Group and Shuffle: Efficient Structured Orthogonal Parametrization
·2149 words·11 mins
AI Generated Machine Learning Deep Learning 🏢 HSE University
Group-and-Shuffle (GS) matrices enable efficient structured orthogonal parametrization, improving parameter and computational efficiency in orthogonal fine-tuning for deep learning.
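As a rough illustration of the "group and shuffle" idea suggested by the title (the paper's exact GS parametrization may differ), the sketch below composes block-diagonal orthogonal factors with a fixed shuffle permutation, giving a dense orthogonal matrix from few parameters:

```python
# Rough sketch: block-diagonal orthogonal factors interleaved with a "shuffle" permutation.
import numpy as np

def block_diag_orthogonal(n, block, rng):
    B = np.zeros((n, n))
    for i in range(0, n, block):
        Q, _ = np.linalg.qr(rng.normal(size=(block, block)))   # small orthogonal block
        B[i:i + block, i:i + block] = Q
    return B

def group_and_shuffle(n=8, block=2, seed=0):
    rng = np.random.default_rng(seed)
    B1 = block_diag_orthogonal(n, block, rng)
    B2 = block_diag_orthogonal(n, block, rng)
    P = np.eye(n)[rng.permutation(n)]              # the fixed "shuffle" permutation
    return B2 @ P @ B1

W = group_and_shuffle()
print(np.allclose(W @ W.T, np.eye(8)))             # product of orthogonal factors stays orthogonal
```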
Grounded Answers for Multi-agent Decision-making Problem through Generative World Model
·2428 words·12 mins
Machine Learning Reinforcement Learning 🏢 National Key Laboratory of Human-Machine Hybrid Augmented Intelligence
Generative world models enhance multi-agent decision-making by simulating trial-and-error learning, improving answer accuracy and explainability.
Great Minds Think Alike: The Universal Convergence Trend of Input Salience
·4780 words·23 mins
AI Generated Machine Learning Deep Learning 🏢 Purdue University
Deep neural networks surprisingly exhibit universal convergence in input salience, aligning more closely as model capacity increases, revealing valuable insights into model behavior and improving deep…
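The kind of comparison involved can be sketched in a few lines (the models, data, and similarity metric below are placeholders; the paper's finding concerns trained networks): compute input-gradient saliency for two independently initialized networks and measure their cosine alignment.

```python
# Sketch: input-gradient saliency from two separately seeded networks, compared via cosine similarity.
import torch
import torch.nn as nn

def saliency(model, x):
    x = x.clone().requires_grad_(True)
    model(x).sum().backward()                       # gradient of the output w.r.t. the input
    return x.grad.detach().flatten()

def make_mlp(width, seed):
    torch.manual_seed(seed)
    return nn.Sequential(nn.Linear(10, width), nn.ReLU(), nn.Linear(width, 1))

x = torch.randn(1, 10)
for width in (8, 64, 512):
    s1, s2 = saliency(make_mlp(width, 0), x), saliency(make_mlp(width, 1), x)
    cos = torch.nn.functional.cosine_similarity(s1, s2, dim=0)
    print(f"width={width:4d}  saliency cosine similarity={cos.item():+.3f}")
```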
GraphMETRO: Mitigating Complex Graph Distribution Shifts via Mixture of Aligned Experts
·2217 words·11 mins
Machine Learning Deep Learning 🏢 Stanford University
GraphMETRO tackles complex graph distribution shifts by using a Mixture-of-Experts model to decompose shifts into interpretable components, achieving state-of-the-art results.
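Schematically (the gating and expert modules below are placeholders, not GraphMETRO's architecture), the mixture works by scoring the shift components with a gate and mixing the experts' representations with those weights:

```python
# Schematic mixture-of-experts aggregation over K shift components.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mixture_of_experts(h_graph, experts, gate_W):
    scores = softmax(gate_W @ h_graph)                          # one weight per shift component / expert
    expert_outputs = np.stack([E @ h_graph for E in experts])   # each expert re-embeds the graph
    return scores @ expert_outputs, scores                      # weighted combination of representations

rng = np.random.default_rng(0)
d, K = 16, 4
h_graph = rng.normal(size=d)                                    # pooled graph representation
experts = [rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(K)]
gate_W = rng.normal(size=(K, d)) / np.sqrt(d)
h_mix, weights = mixture_of_experts(h_graph, experts, gate_W)
print("gate weights:", np.round(weights, 3))
```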
GraphCroc: Cross-Correlation Autoencoder for Graph Structural Reconstruction
·2820 words·14 mins
Machine Learning Representation Learning 🏢 Northeastern University
GraphCroc, a novel graph autoencoder, leverages cross-correlation to accurately reconstruct complex graph structures, outperforming self-correlation-based methods.
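The decoder contrast is simple to state: a self-correlation decoder sigma(H H^T) is forced to be symmetric, whereas a cross-correlation decoder sigma(H1 H2^T) built from two embedding branches is not. A small sketch with random placeholder embeddings:

```python
# Sketch: self-correlation vs. cross-correlation adjacency reconstruction.
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
N, d = 5, 8
H = rng.normal(size=(N, d))                                   # single embedding (self-correlation)
H1, H2 = rng.normal(size=(N, d)), rng.normal(size=(N, d))     # two branches (cross-correlation)

A_self = sigmoid(H @ H.T)
A_cross = sigmoid(H1 @ H2.T)
print("self-correlation symmetric: ", np.allclose(A_self, A_self.T))
print("cross-correlation symmetric:", np.allclose(A_cross, A_cross.T))
```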
Graph Neural Networks Need Cluster-Normalize-Activate Modules
·1944 words·10 mins
Machine Learning Deep Learning 🏢 TU Darmstadt
Boost GNN performance and overcome oversmoothing with Cluster-Normalize-Activate (CNA) modules: a simple yet highly effective plug-and-play solution!
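A simplified sketch of a Cluster-Normalize-Activate step (the clustering method and per-cluster treatment below are stand-ins, not the paper's exact module): cluster the node features, normalize each cluster separately, then apply an activation.

```python
# Simplified CNA-style step: cluster node features, per-cluster normalization, activation.
import numpy as np
from sklearn.cluster import KMeans

def cna(H, n_clusters=3, eps=1e-5, seed=0):
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(H)
    H_out = np.empty_like(H)
    for c in range(n_clusters):
        idx = labels == c
        mu, sd = H[idx].mean(axis=0), H[idx].std(axis=0)
        H_out[idx] = (H[idx] - mu) / (sd + eps)     # per-cluster normalization
    return np.maximum(H_out, 0.0), labels           # activation (ReLU here)

rng = np.random.default_rng(0)
H = rng.normal(size=(20, 4))                        # node features after a GNN layer
H_cna, labels = cna(H)
print(labels)
```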
Graph Neural Networks Do Not Always Oversmooth
·1471 words·7 mins
Machine Learning Semi-Supervised Learning 🏢 RWTH Aachen University
Deep graph neural networks often suffer from oversmoothing; this paper reveals a non-oversmoothing phase controllable by weight variance, enabling deep, expressive models.
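A toy experiment in the spirit of this result (a simplified GCN with random weights, not the paper's exact analysis): propagate node features through many layers at two weight scales and check whether they collapse to a common value.

```python
# Toy sketch: feature spread after deep propagation, at small vs. large weight variance.
import numpy as np

def feature_spread(A_hat, H0, depth, weight_std, seed=0):
    rng = np.random.default_rng(seed)
    H = H0.copy()
    for _ in range(depth):
        W = rng.normal(scale=weight_std / np.sqrt(H.shape[1]), size=(H.shape[1], H.shape[1]))
        H = np.tanh(A_hat @ H @ W)
    return np.linalg.norm(H - H.mean(axis=0), ord="fro")    # 0 means all node features identical

rng = np.random.default_rng(0)
N, d = 30, 16
A = (rng.random((N, N)) < 0.2).astype(float)
A = np.maximum(A, A.T) + np.eye(N)
A_hat = A / A.sum(axis=1, keepdims=True)                    # row-normalized propagation matrix
H0 = rng.normal(size=(N, d))
for std in (0.5, 3.0):
    print(f"weight std {std}: spread after 64 layers = {feature_spread(A_hat, H0, 64, std):.4f}")
```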
Graph Neural Flows for Unveiling Systemic Interactions Among Irregularly Sampled Time Series
·1999 words·10 mins
Machine Learning Deep Learning 🏢 University of Manchester
GNeuralFlow unveils systemic interactions in irregularly sampled time series by learning a directed acyclic graph representing conditional dependencies, achieving superior performance in classificatio…
Graph Edit Distance with General Costs Using Neural Set Divergence
·3177 words·15 mins
Machine Learning Deep Learning 🏢 EPFL
GRAPHEDX, a novel neural network, accurately estimates graph edit distance with varying operation costs, outperforming existing methods.
Graph Diffusion Policy Optimization
·2821 words·14 mins
AI Generated Machine Learning Reinforcement Learning 🏢 Zhejiang University
GDPO: A novel method optimizes graph diffusion models for any objective using reinforcement learning, achieving state-of-the-art performance in diverse graph generation tasks.