Deep Learning
Inference of Neural Dynamics Using Switching Recurrent Neural Networks
·2472 words·12 mins·
Machine Learning
Deep Learning
🏢 Yale University
SRNNs reveal behaviorally relevant switches in neural dynamics!
Improving Temporal Link Prediction via Temporal Walk Matrix Projection
·2541 words·12 mins·
AI Generated
Machine Learning
Deep Learning
🏢 CCSE Lab, Beihang University
TPNet boosts temporal link prediction accuracy and efficiency by unifying relative encodings via temporal walk matrices and using random feature propagation.
Improving Neural ODE Training with Temporal Adaptive Batch Normalization
·3052 words·15 mins·
AI Generated
Machine Learning
Deep Learning
🏢 Hong Kong University of Science and Technology
Temporal Adaptive Batch Normalization (TA-BN) boosts Neural ODE training by providing a continuous-time counterpart to traditional Batch Normalization, resolving its limitations and enabling deeper networks …
Improving Neural Network Surface Processing with Principal Curvatures
·2900 words·14 mins·
AI Generated
Machine Learning
Deep Learning
🏢 Inria
Boosting neural network surface processing: Using principal curvatures as input significantly improves segmentation and classification accuracy while reducing computational overhead.
Improving Generalization and Convergence by Enhancing Implicit Regularization
·2134 words·11 mins·
Machine Learning
Deep Learning
🏢 Peking University
IRE framework expedites the discovery of flat minima in deep learning, enhancing generalization and convergence. By decoupling the dynamics of flat and sharp directions, IRE boosts sharpness reduction…
Improving Equivariant Model Training via Constraint Relaxation
·1689 words·8 mins·
Machine Learning
Deep Learning
🏢 University of Pennsylvania
Boost equivariant model training by strategically relaxing constraints during training, enhancing optimization and generalization!
Improving Deep Learning Optimization through Constrained Parameter Regularization
·3522 words·17 mins·
Machine Learning
Deep Learning
🏢 University of Freiburg
Constrained Parameter Regularization (CPR) outperforms traditional weight decay by dynamically adapting regularization strengths for individual parameters, leading to better deep learning model perfor…
Improved Sample Complexity Bounds for Diffusion Model Training
·360 words·2 mins·
Machine Learning
Deep Learning
🏢 University of Texas at Austin
Training high-quality diffusion models efficiently is now possible, thanks to novel sample complexity bounds that improve exponentially on previous work.
Improved off-policy training of diffusion samplers
·2211 words·11 mins·
Machine Learning
Deep Learning
🏢 University of Toronto
Researchers enhanced diffusion samplers by developing a novel exploration strategy and a unified library, improving sample quality and addressing reproducibility challenges.
Implicitly Guided Design with PropEn: Match your Data to Follow the Gradient
·2326 words·11 mins·
Machine Learning
Deep Learning
🏢 Prescient/MLDD, Genentech Research and Early Development
PropEn: a novel framework for implicitly guided design optimization that matches samples to approximate the gradient, boosting efficiency without a discriminator.
HyperLogic: Enhancing Diversity and Accuracy in Rule Learning with HyperNets
·2378 words·12 mins·
Machine Learning
Deep Learning
🏢 School of Data Science, the Chinese University of Hong Kong (Shenzhen)
HyperLogic uses hypernetworks to generate diverse, accurate, and concise rule sets from neural networks, enhancing both interpretability and accuracy in rule learning.
Hyper-opinion Evidential Deep Learning for Out-of-Distribution Detection
·2165 words·11 mins·
Machine Learning
Deep Learning
🏢 Tongji University
Hyper-opinion Evidential Deep Learning (HEDL) enhances out-of-distribution detection by integrating sharp and vague evidence for superior uncertainty estimation and classification accuracy.
How Sparse Can We Prune A Deep Network: A Fundamental Limit Perspective
·2596 words·13 mins·
Machine Learning
Deep Learning
🏢 Huazhong University of Science and Technology
Deep network pruning’s fundamental limits are characterized, revealing how weight magnitude and network sharpness determine the maximum achievable sparsity.
How many classifiers do we need?
·1821 words·9 mins·
AI Generated
Machine Learning
Deep Learning
🏢 UC Berkeley
Boost ensemble accuracy by predicting performance with fewer classifiers using a novel polarization law and refined error bounds.
HORSE: Hierarchical Representation for Large-Scale Neural Subset Selection
·1821 words·9 mins·
Machine Learning
Deep Learning
🏢 Chinese University of Hong Kong
HORSE: A novel attention-based neural network significantly improves large-scale neural subset selection by up to 20%, addressing limitations in existing methods.
Higher-Rank Irreducible Cartesian Tensors for Equivariant Message Passing
·3519 words·17 mins·
AI Generated
Machine Learning
Deep Learning
🏢 NEC Laboratories Europe
Higher-rank irreducible Cartesian tensors boost accuracy and efficiency in equivariant message-passing neural networks for atomistic simulations.
Hierarchical Hybrid Sliced Wasserstein: A Scalable Metric for Heterogeneous Joint Distributions
·2222 words·11 mins·
Machine Learning
Deep Learning
🏢 University of Texas at Austin
Hierarchical Hybrid Sliced Wasserstein (H2SW) solves the challenge of comparing complex, heterogeneous joint distributions by introducing novel slicing operators, leading to a scalable and statistical…
HHD-GP: Incorporating Helmholtz-Hodge Decomposition into Gaussian Processes for Learning Dynamical Systems
·1903 words·9 mins·
Machine Learning
Deep Learning
🏢 University of Hong Kong
HHD-GP leverages Helmholtz-Hodge decomposition within Gaussian Processes to learn physically meaningful components of dynamical systems, enhancing prediction accuracy and interpretability.
HEPrune: Fast Private Training of Deep Neural Networks With Encrypted Data Pruning
·2059 words·10 mins·
Machine Learning
Deep Learning
🏢 University of Central Florida
HEPrune accelerates private deep learning training by 16× through encrypted data pruning, achieving this speedup with minimal accuracy loss.
Hamiltonian Monte Carlo on ReLU Neural Networks is Inefficient
·1771 words·9 mins·
Machine Learning
Deep Learning
🏢 University of Delaware
Hamiltonian Monte Carlo struggles on ReLU neural networks: high rejection rates make it inefficient for Bayesian deep learning.