Machine Learning

DataStealing: Steal Data from Diffusion Models in Federated Learning with Multiple Trojans
·3940 words·19 mins
AI Generated Machine Learning Federated Learning 🏢 Zhejiang University
Attackers can steal massive amounts of private data from diffusion models trained in federated learning using multiple Trojans and AdaSCP, an advanced attack that circumvents existing defenses.
Data-Efficient Operator Learning via Unsupervised Pretraining and In-Context Learning
·4184 words·20 mins
AI Generated Machine Learning Self-Supervised Learning 🏢 Simon Fraser University
Data-efficient neural operator learning is achieved via unsupervised pretraining and in-context learning, significantly reducing simulation costs and improving generalization.
Data Free Backdoor Attacks
·2377 words·12 mins
AI Generated Machine Learning Deep Learning 🏢 Pennsylvania State University
Data-Free Backdoor Attacks (DFBA) injects undetectable backdoors into pre-trained classifiers without retraining or architectural changes, bypassing existing defenses.
Data Augmentation with Diffusion for Open-Set Semi-Supervised Learning
·3101 words·15 mins
AI Generated Machine Learning Semi-Supervised Learning 🏢 Kim Jaechul Graduate School of AI, KAIST
Boosting semi-supervised learning, a new data augmentation method using diffusion models significantly improves model accuracy, especially with mismatched data distributions.
Data Acquisition via Experimental Design for Data Markets
·2343 words·11 mins
Machine Learning Federated Learning 🏢 MIT
Federated data acquisition via experimental design (DAVED) achieves lower prediction error without labeled validation data, optimizing cost-effectively for test-set predictions in decentralized markets.
DASH: Warm-Starting Neural Network Training in Stationary Settings without Loss of Plasticity
·4111 words·20 mins
Machine Learning Deep Learning 🏢 Graduate School of AI, KAIST
DASH combats neural network training’s plasticity loss during warm-starting by selectively forgetting memorized noise while preserving features, improving accuracy and efficiency.
D2R2: Diffusion-based Representation with Random Distance Matching for Tabular Few-shot Learning
·1776 words·9 mins
Machine Learning Few-Shot Learning 🏢 Hong Kong University of Science and Technology
D2R2, a novel diffusion-based model for tabular few-shot learning, achieves state-of-the-art results by leveraging semantic knowledge and distance matching.
Cross-Device Collaborative Test-Time Adaptation
·2757 words·13 mins
Machine Learning Deep Learning 🏢 South China University of Technology
CoLA: Collaborative Lifelong Adaptation boosts test-time adaptation efficiency by sharing domain knowledge across multiple devices, achieving significant accuracy gains with minimal computational overhead.
CRONOS: Enhancing Deep Learning with Scalable GPU Accelerated Convex Neural Networks
·2029 words·10 mins
Machine Learning Deep Learning 🏢 Stanford University
CRONOS: Scaling convex neural network training to ImageNet!
Credal Deep Ensembles for Uncertainty Quantification
·3555 words·17 mins
Machine Learning Deep Learning 🏢 KU Leuven
Credal Deep Ensembles (CreDEs) improve uncertainty quantification in deep learning by predicting probability intervals, enhancing accuracy and calibration, particularly for out-of-distribution data.
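To make the interval idea concrete, here is a toy sketch of a credal prediction, assuming NumPy. This is not CreDEs' actual construction (the paper trains models to predict intervals directly); it is a naive envelope over hypothetical ensemble members' softmax outputs, where wide per-class intervals signal high epistemic uncertainty.

```python
import numpy as np

# Softmax outputs from 5 hypothetical ensemble members over 3 classes.
member_probs = np.array([
    [0.70, 0.20, 0.10],
    [0.60, 0.25, 0.15],
    [0.75, 0.15, 0.10],
    [0.55, 0.30, 0.15],
    [0.65, 0.20, 0.15],
])

# Per-class probability intervals: the envelope of the members' predictions.
lower = member_probs.min(axis=0)   # lower probability per class
upper = member_probs.max(axis=0)   # upper probability per class

# Wide intervals suggest epistemic uncertainty, e.g. for OOD flagging.
print(list(zip(lower, upper)))     # [(0.55, 0.75), (0.15, 0.3), (0.1, 0.15)]
```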
Counter-Current Learning: A Biologically Plausible Dual Network Approach for Deep Learning
·1908 words·9 mins
Machine Learning Deep Learning 🏢 Cornell University
Biologically inspired Counter-Current Learning (CCL) uses dual networks for deep learning, offering comparable performance to other biologically plausible algorithms while enhancing biological realism.
Convolutions and More as Einsum: A Tensor Network Perspective with Advances for Second-Order Methods
·8742 words·42 mins
Machine Learning Deep Learning 🏢 Vector Institute
This paper accelerates second-order optimization in CNNs by 4.5x, using a novel tensor network representation that simplifies convolutions and reduces memory overhead.
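As a minimal sketch of the titular idea (assuming PyTorch; this is not the paper's tensor-network machinery), a standard 2D convolution can be written as a single einsum over unfolded input patches:

```python
import torch
import torch.nn.functional as F

x = torch.randn(8, 3, 32, 32)           # (batch, in_channels, H, W)
w = torch.randn(16, 3, 3, 3)            # (out_channels, in_channels, kH, kW)

# Reference convolution (stride 1, no padding).
y_ref = F.conv2d(x, w)                  # (8, 16, 30, 30)

# Same operation as an einsum: unfold extracts all 3x3 patches, giving
# (batch, in_channels*kH*kW, num_patches); the einsum contracts the patch
# dimension against the flattened kernel.
patches = F.unfold(x, kernel_size=3)    # (8, 27, 900)
y = torch.einsum("bpl,op->bol", patches, w.reshape(16, -1))
y = y.reshape(8, 16, 30, 30)

print(torch.allclose(y, y_ref, atol=1e-5))  # True
```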
Convergence Analysis of Split Federated Learning on Heterogeneous Data
·2397 words·12 mins
Machine Learning Federated Learning 🏢 Guangdong University of Technology
Split Federated Learning (SFL) convergence is analyzed for heterogeneous data, achieving O(1/T) and O(1/√T) convergence rates for strongly convex and general convex objectives, respectively. The study also extends…
ControlSynth Neural ODEs: Modeling Dynamical Systems with Guaranteed Convergence
·2928 words·14 mins
Machine Learning Deep Learning 🏢 Southeast University
ControlSynth Neural ODEs (CSODEs) guarantee convergence in complex dynamical systems via tractable linear inequalities, improving neural ODE modeling.
Controlling Continuous Relaxation for Combinatorial Optimization
·2228 words·11 mins
Machine Learning Unsupervised Learning 🏢 Fujitsu Limited
Continuous Relaxation Annealing (CRA) significantly boosts unsupervised learning-based solvers for combinatorial optimization by dynamically shifting from continuous to discrete solutions, eliminating…
Controlled maximal variability along with reliable performance in recurrent neural networks
·2025 words·10 mins
Machine Learning Reinforcement Learning 🏢 Universitat Pompeu Fabra
NeuroMOP, a novel neural principle, maximizes neural variability while ensuring reliable performance in recurrent neural networks, offering new insights into brain function and artificial intelligence.
Contrastive dimension reduction: when and how?
·1931 words·10 mins
Machine Learning Dimensionality Reduction 🏢 University of North Carolina at Chapel Hill
This research introduces a hypothesis test and a contrastive dimension estimator to identify unique foreground information in contrastive datasets, advancing the field of dimension reduction.
CONTRAST: Continual Multi-source Adaptation to Dynamic Distributions
·2633 words·13 mins
Machine Learning Domain Adaptation 🏢 University of Michigan
CONTRAST efficiently adapts multiple source models to dynamic data distributions by optimally weighting models and selectively updating only the most relevant ones, achieving robust performance without…
Continuous Temporal Domain Generalization
·2639 words·13 mins
AI Generated Machine Learning Domain Generalization 🏢 University of Tokyo
Koodos: a novel Koopman operator-driven framework that tackles Continuous Temporal Domain Generalization (CTDG) by modeling continuous data dynamics and learning model evolution across irregular time…
Continuous Partitioning for Graph-Based Semi-Supervised Learning
·2240 words·11 mins
Machine Learning Semi-Supervised Learning 🏢 UC San Diego
CutSSL, a novel framework for graph-based semi-supervised learning, surpasses state-of-the-art accuracy by solving a continuous nonconvex quadratic program that provably yields integer solutions, exce…