Deep Learning
Con4m: Context-aware Consistency Learning Framework for Segmented Time Series Classification
·2306 words·11 mins·
Machine Learning
Deep Learning
Zhejiang University
Con4m, a novel consistency learning framework, leverages contextual information to effectively classify segmented time series with inconsistent boundary labels and varying class durations, signif…
Combining Statistical Depth and Fermat Distance for Uncertainty Quantification
·2374 words·12 mins·
Machine Learning
Deep Learning
Institut de Recherche en Informatique de Toulouse
Boosting neural network prediction reliability, this research ingeniously combines statistical depth and Fermat distance for superior uncertainty quantification, eliminating the need for distributiona…
Collaborative Refining for Learning from Inaccurate Labels
·1752 words·9 mins·
Machine Learning
Deep Learning
Ant Group
Collaborative Refining for Learning from Inaccurate Labels (CRL) refines data using annotator agreement, improving model accuracy with noisy labels.
CODA: A Correlation-Oriented Disentanglement and Augmentation Modeling Scheme for Better Resisting Subpopulation Shifts
·1907 words·9 mins·
Machine Learning
Deep Learning
City University of Hong Kong
CODA: A novel modeling scheme tackles subpopulation shifts in machine learning by disentangling spurious correlations, augmenting data strategically, and using reweighted consistency loss for improved…
Changing the Training Data Distribution to Reduce Simplicity Bias Improves In-distribution Generalization
·3113 words·15 mins·
Machine Learning
Deep Learning
UC Los Angeles
Boosting in-distribution generalization is achieved by strategically altering the training data distribution to reduce simplicity bias and promote uniform feature learning.
Cardinality-Aware Set Prediction and Top-$k$ Classification
·1676 words·8 mins·
Machine Learning
Deep Learning
Google Research
This paper proposes cardinality-aware top-$k$ classification, improving accuracy and efficiency by dynamically adjusting prediction set sizes.
Bridging OOD Detection and Generalization: A Graph-Theoretic View
·2436 words·12 mins·
Machine Learning
Deep Learning
University of Illinois Urbana-Champaign
A novel graph-theoretic framework bridges OOD detection and generalization, offering theoretical error bounds and competitive empirical performance.
Bridging Geometric States via Geometric Diffusion Bridge
·1526 words·8 mins·
Machine Learning
Deep Learning
Peking University
Geometric Diffusion Bridge (GDB) accurately predicts geometric state evolution in complex systems by leveraging a probabilistic approach and equivariant diffusion processes, surpassing existing deep l…
Breaking the curse of dimensionality in structured density estimation
·1465 words·7 mins·
Machine Learning
Deep Learning
Berlin Institute for the Foundations of Learning and Data
Researchers break the curse of dimensionality in structured density estimation using graph resilience, a novel graphical parameter that effectively reduces the sample complexity.
Boosting Graph Pooling with Persistent Homology
·2499 words·12 mins·
Machine Learning
Deep Learning
Chinese University of Hong Kong, Shenzhen
Boosting graph neural networks: Topology-Invariant Pooling (TIP) leverages persistent homology to enhance graph pooling, achieving consistent performance gains across diverse datasets.
Boosted Conformal Prediction Intervals
·4433 words·21 mins·
AI Generated
Machine Learning
Deep Learning
Stanford University
Boosting conformal prediction intervals improves accuracy and precision by tailoring them to specific desired properties via machine learning.
Block Sparse Bayesian Learning: A Diversified Scheme
·3512 words·17 mins·
Machine Learning
Deep Learning
Beihang University
Diversified Block Sparse Bayesian Learning (DivSBL) improves block sparse signal recovery by adapting to unknown block structures, enhancing accuracy and robustness over existing methods.
Beyond Slow Signs in High-fidelity Model Extraction
·2693 words·13 mins·
AI Generated
Machine Learning
Deep Learning
University of Cambridge
Researchers drastically sped up high-fidelity deep learning model extraction, improving efficiency by up to 14.8x and challenging previous assumptions on the extraction bottleneck.
Better by default: Strong pre-tuned MLPs and boosted trees on tabular data
·6742 words·32 mins·
Machine Learning
Deep Learning
Inria Paris
Strong pre-tuned MLPs and meta-tuned default parameters for GBDTs and MLPs improve tabular data classification and regression.
BAN: Detecting Backdoors Activated by Neuron Noise
·2897 words·14 mins·
Machine Learning
Deep Learning
Radboud University
BAN: a novel backdoor defense that uses adversarial neuron noise for efficient detection and mitigation.
B-ary Tree Push-Pull Method is Provably Efficient for Distributed Learning on Heterogeneous Data
·1511 words·8 mins·
Machine Learning
Deep Learning
Chinese University of Hong Kong, Shenzhen
B-ary Tree Push-Pull (BTPP) achieves linear speedup for distributed learning on heterogeneous data, significantly outperforming state-of-the-art methods with minimal communication.
Attractor Memory for Long-Term Time Series Forecasting: A Chaos Perspective
·3387 words·16 mins·
AI Generated
Machine Learning
Deep Learning
Hong Kong University of Science and Technology
Attraos, a novel long-term time series forecasting model grounded in chaos theory, significantly outperforms existing methods by utilizing attractor dynamics for efficient and accurate prediction.
AROMA: Preserving Spatial Structure for Latent PDE Modeling with Local Neural Fields
·3676 words·18 mins·
Machine Learning
Deep Learning
Sorbonne Université
AROMA: Attentive Reduced Order Model with Attention enhances PDE modeling with local neural fields, offering efficient processing of diverse geometries and superior performance in simulating 1D and 2D…
Are Uncertainty Quantification Capabilities of Evidential Deep Learning a Mirage?
·2360 words·12 mins·
Machine Learning
Deep Learning
MIT
Evidential deep learning’s uncertainty quantification is unreliable; this paper reveals its limitations and proposes incorporating model uncertainty to improve performance.
Are Self-Attentions Effective for Time Series Forecasting?
·3575 words·17 mins·
Machine Learning
Deep Learning
Seoul National University
Cross-Attention-only Time Series Transformer (CATS) outperforms existing models by removing self-attention, improving long-term forecasting accuracy, and reducing computational cost.