Deep Learning
What Matters in Graph Class Incremental Learning? An Information Preservation Perspective
·3421 words·17 mins·
Machine Learning
Deep Learning
🏢 College of Intelligence and Computing, Tianjin University
The GSIP framework mitigates catastrophic forgetting in graph class incremental learning by preserving crucial graph information, achieving a 10% improvement in forgetting metrics.
What is my quantum computer good for? Quantum capability learning with physics-aware neural networks
·1734 words·9 mins·
Machine Learning
Deep Learning
🏢 Sandia National Laboratories
Quantum-physics-aware neural networks achieve up to 50% improved accuracy in predicting quantum computer capabilities, scaling to 100+ qubits.
What If the Input is Expanded in OOD Detection?
·3779 words·18 mins·
Machine Learning
Deep Learning
🏢 Wuhan University
Boost OOD detection accuracy by averaging model confidence scores from original and corrupted inputs!
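The teaser describes the core trick concretely enough to sketch: score a test input not once but over several corrupted views, then average the confidence. Below is a minimal PyTorch sketch of that averaging idea, assuming a standard classifier and using maximum softmax probability as the confidence score; the corruption choice and scoring rule are illustrative assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def expanded_msp_score(model, x, corruptions):
    """Average maximum-softmax-probability (MSP) confidence over the
    original input and several corrupted views of it. Higher scores
    suggest in-distribution; lower scores suggest OOD.

    `corruptions` is a list of callables (e.g. noise, blur). This is an
    illustrative sketch of score averaging, not the paper's exact method.
    """
    model.eval()
    views = [x] + [c(x) for c in corruptions]
    with torch.no_grad():
        msp = [F.softmax(model(v), dim=1).max(dim=1).values for v in views]
    return torch.stack(msp, dim=0).mean(dim=0)  # one score per sample

# Example corruption: additive Gaussian noise (a hypothetical choice).
gaussian = lambda x: x + 0.1 * torch.randn_like(x)
```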
WeiPer: OOD Detection using Weight Perturbations of Class Projections
·5838 words·28 mins·
AI Generated
Machine Learning
Deep Learning
🏢 Free University of Berlin
WeiPer enhances OOD detection by cleverly perturbing class projections, creating a richer representation that improves various existing methods and achieves state-of-the-art results.
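As a rough illustration of the "perturbed class projections" idea, the sketch below applies random perturbations to the rows of a final linear layer's weight matrix and stacks the resulting logits into a wider representation on which an OOD score can be computed. The perturbation scale, normalization, and downstream scoring rule are all assumptions; the paper's construction differs in detail.

```python
import torch

def perturbed_logits(fc, features, n_perturb=10, scale=0.1):
    """Project penultimate features through randomly perturbed copies of
    the final classification layer's weight matrix. The concatenated
    logits form an enlarged representation for OOD scoring.
    Illustrative sketch only.
    """
    W, b = fc.weight, fc.bias                      # (C, D), (C,)
    outs = []
    for _ in range(n_perturb):
        noise = torch.randn_like(W)
        # Scale noise per class projection, relative to that row's norm.
        noise = noise / noise.norm(dim=1, keepdim=True) * W.norm(dim=1, keepdim=True)
        W_pert = W + scale * noise
        outs.append(features @ W_pert.T + b)
    return torch.cat(outs, dim=1)                  # (N, n_perturb * C)
```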
Weight for Robustness: A Comprehensive Approach towards Optimal Fault-Tolerant Asynchronous ML
·1754 words·9 mins·
Machine Learning
Deep Learning
🏢 Technion
Optimal fault-tolerant asynchronous machine learning is achieved via a novel weighted robust aggregation framework, ensuring efficient training despite Byzantine failures and heterogeneous resources.
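The summary names weighted robust aggregation as the key ingredient. As a generic stand-in for that idea (not the paper's aggregator), the sketch below combines worker gradients with a weighted, coordinate-wise trimmed mean, so extreme and possibly Byzantine values are discarded before the weighted average is taken.

```python
import numpy as np

def weighted_trimmed_mean(grads, weights, trim_frac=0.2):
    """Coordinate-wise robust aggregation of worker gradients, where each
    worker carries a weight (e.g. reflecting its compute speed or data
    share). Per coordinate, the most extreme values are trimmed before a
    weighted mean, limiting the influence of Byzantine workers. A generic
    sketch of weighted robust aggregation, not the paper's exact rule.
    """
    G = np.stack(grads)                 # (n_workers, dim)
    w = np.asarray(weights, dtype=float)
    k = int(trim_frac * len(grads))     # how many to trim at each end
    order = np.argsort(G, axis=0)       # per-coordinate worker ranking
    keep = order[k:len(grads) - k]      # drop k smallest and k largest
    kept_vals = np.take_along_axis(G, keep, axis=0)
    kept_w = w[keep]                    # weights follow their workers
    return (kept_vals * kept_w).sum(axis=0) / kept_w.sum(axis=0)
```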
Weight Diffusion for Future: Learn to Generalize in Non-Stationary Environments
·2419 words·12 mins·
Machine Learning
Deep Learning
🏢 Tencent AI Lab
Weight Diffusion (W-Diff) masters evolving domain generalization by using conditional diffusion models to learn classifier weight evolution patterns, enabling superior generalization to unseen future domains.
Weight decay induces low-rank attention layers
·1731 words·9 mins·
Machine Learning
Deep Learning
🏢 ETH Zurich
Weight decay in deep learning surprisingly induces low-rank attention layers, potentially harming performance but offering optimization strategies for large language models.
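The claimed mechanism has a well-known linear-algebra core: an L2 penalty on two factors acts like a nuclear-norm penalty on their product, so weight decay on W_Q and W_K pushes the product W_K^T W_Q toward low rank. A quick diagnostic one can run on any trained attention layer follows; the tensor names are stand-ins for trained projections.

```python
import torch

def effective_rank(M, tol=1e-3):
    """Numerical rank: count singular values above tol * largest."""
    s = torch.linalg.svdvals(M)  # returned in descending order
    return int((s > tol * s[0]).sum())

# Stand-ins for trained query/key projections; in practice, load them
# from a checkpoint and compare across weight-decay settings.
d_model = 512
W_q = torch.randn(d_model, d_model)
W_k = torch.randn(d_model, d_model)
print(effective_rank(W_k.T @ W_q))
```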
WaveAttack: Asymmetric Frequency Obfuscation-based Backdoor Attacks Against Deep Neural Networks
·2153 words·11 mins·
Machine Learning
Deep Learning
🏢 East China Normal University
WaveAttack: A new backdoor attack method leveraging asymmetric frequency obfuscation for high stealthiness and effectiveness in Deep Neural Networks.
Wasserstein Gradient Boosting: A Framework for Distribution-Valued Supervised Learning
·3031 words·15 mins·
AI Generated
Machine Learning
Deep Learning
🏢 University of Edinburgh
Wasserstein Gradient Boosting (WGBoost) extends gradient boosting to handle probability distributions as outputs, enabling more robust and informative predictions in various applications.
Variational Flow Matching for Graph Generation
·1968 words·10 mins·
AI Generated
Machine Learning
Deep Learning
🏢 UvA-Bosch Delta Lab
CatFlow: a novel flow matching method for graph generation, offering superior computational efficiency and performance.
Utilizing Image Transforms and Diffusion Models for Generative Modeling of Short and Long Time Series
·3365 words·16 mins·
Machine Learning
Deep Learning
🏢 Ben-Gurion University
ImagenTime transforms time series into images, leveraging advanced diffusion models for superior generative modeling of both short and long sequences.
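A minimal version of the series-to-image idea is to fold a 1-D sequence into a 2-D array of stacked windows, which an image diffusion model can then consume; the fold is trivially invertible, so generated images map back to sequences. The transform below is illustrative only; the paper works with several such invertible maps, and the window size here is an assumption.

```python
import numpy as np

def series_to_image(x, window):
    """Fold a 1-D time series into a 2-D array by stacking consecutive
    windows as rows, yielding an 'image' for a diffusion model."""
    n = (len(x) // window) * window      # drop the ragged tail
    return x[:n].reshape(-1, window)     # (n_windows, window)

def image_to_series(img):
    """Invert the fold: rows back to one sequence."""
    return img.reshape(-1)

x = np.sin(np.linspace(0, 20 * np.pi, 1024))
img = series_to_image(x, window=32)      # a 32x32 image
assert np.allclose(image_to_series(img), x[:1024])
```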
Unveiling The Matthew Effect Across Channels: Assessing Layer Width Sufficiency via Weight Norm Variance
·2478 words·12 mins·
Machine Learning
Deep Learning
🏢 Dept. of CSE & School of AI & MoE Key Lab of AI, Shanghai Jiao Tong University
Neural network efficiency is improved by analyzing weight norm variance across channels to identify optimal layer widths, resulting in reduced parameters and boosted performance.
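The diagnostic named in the summary is easy to sketch: compute each output channel's weight norm and examine the spread across channels. The snippet below does this for a convolution layer; how to interpret a given variance value is an assumption here, not the paper's calibrated criterion.

```python
import torch
import torch.nn as nn

def channel_norm_variance(conv):
    """Per-output-channel L2 norms of a conv layer's weights and their
    variance. The intuition: if norms concentrate on a few channels
    (a 'Matthew effect'), the layer may be wider than needed; uniform
    norms suggest the width is being used. Thresholds are illustrative.
    """
    W = conv.weight                          # (out_ch, in_ch, kH, kW)
    norms = W.flatten(start_dim=1).norm(dim=1)
    return norms, norms.var().item()

layer = nn.Conv2d(64, 128, kernel_size=3)
norms, var = channel_norm_variance(layer)
print(f"{len(norms)} channels, weight-norm variance = {var:.4f}")
```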
Universal Physics Transformers: A Framework For Efficiently Scaling Neural Operators
·2318 words·11 mins·
Machine Learning
Deep Learning
🏢 ELLIS Unit Linz
Universal Physics Transformers (UPTs) offer a unified, scalable framework for efficiently training neural operators across diverse spatio-temporal physics problems, overcoming limitations of existing approaches.
Universal Neural Functionals
·1439 words·7 mins·
Machine Learning
Deep Learning
🏢 Stanford University
Universal Neural Functionals (UNFs) automatically construct permutation-equivariant models for any weight space, improving learned optimizer performance and generalization.
UniTS: A Unified Multi-Task Time Series Model
·4241 words·20 mins·
Machine Learning
Deep Learning
🏢 Harvard University
UniTS: one model to rule them all! This unified multi-task time series model excels in forecasting, classification, anomaly detection, and imputation, outperforming specialized models across 38 diverse datasets.
United We Stand, Divided We Fall: Fingerprinting Deep Neural Networks via Adversarial Trajectories
·2453 words·12 mins·
Machine Learning
Deep Learning
🏢 Huazhong University of Science and Technology
ADV-TRA uses adversarial trajectories to robustly fingerprint deep neural networks, outperforming state-of-the-art methods against various removal attacks.
UniIF: Unified Molecule Inverse Folding
·2175 words·11 mins·
AI Generated
Machine Learning
Deep Learning
🏢 Zhejiang University
UniIF: A unified model revolutionizes molecule inverse folding, achieving state-of-the-art results across protein, RNA, and material design by employing a novel geometric block attention network.
Unifying Homophily and Heterophily for Spectral Graph Neural Networks via Triple Filter Ensembles
·2292 words·11 mins·
loading
·
loading
Machine Learning
Deep Learning
🏢 School of Computer Science and Cyber Engineering, Guangzhou University, China
TFE-GNN: A novel spectral GNN using triple filter ensembles for superior homophily/heterophily handling and improved generalization on real-world graphs.
Unifying Generation and Prediction on Graphs with Latent Graph Diffusion
·2196 words·11 mins·
AI Generated
Machine Learning
Deep Learning
🏢 Massachusetts Institute of Technology
Latent Graph Diffusion (LGD) unifies graph learning, solving all task levels and types with a single framework and state-of-the-art results.
Understanding the Expressivity and Trainability of Fourier Neural Operator: A Mean-Field Perspective
·2537 words·12 mins·
Machine Learning
Deep Learning
🏢 University of Tokyo
A mean-field theory explains Fourier Neural Operator (FNO) behavior, linking expressivity to trainability by identifying ordered and chaotic phases that correspond to vanishing or exploding gradients, respectively.