Federated Learning
Eager Updates For Overlapped Communication and Computation in DiLoCo
·3815 words·18 mins·
AI Generated
🤗 Daily Papers
Machine Learning
Federated Learning
🏢 Google DeepMind
Eager updates drastically speed up training of massive language models by cleverly overlapping communication and computation in DiLoCo, achieving near-optimal performance even at low bandwidth.
Streaming DiLoCo with overlapping communication: Towards a Distributed Free Lunch
·5509 words·26 mins·
AI Generated
🤗 Daily Papers
Machine Learning
Federated Learning
🏢 Google DeepMind
Streaming DiLoCo achieves a two-orders-of-magnitude bandwidth reduction in billion-parameter LLM training by synchronizing parameter subsets sequentially, overlapping communication with computation.
Just a Simple Transformation is Enough for Data Protection in Vertical Federated Learning
·2945 words·14 mins·
AI Generated
🤗 Daily Papers
Machine Learning
Federated Learning
🏢 MIPT
Simple tweak, big privacy win: MLP-based architectures boost data protection in vertical federated learning.
A New Federated Learning Framework Against Gradient Inversion Attacks
·2925 words·14 mins·
AI Generated
🤗 Daily Papers
Machine Learning
Federated Learning
🏢 School of Computing and Data Science, University of Hong Kong
HyperFL: A new federated learning framework that breaks the direct connection between shared parameters and private data, effectively defending against gradient inversion attacks while maintaining favorable performance.