2024-09-20

Keren Li, PhD, Department of Mathematics, University of Alabama at Birmingham

"Stability and Convergence in Distributed Learning with Representatives"

In the evolving landscape of distributed learning, addressing the challenges posed by heterogeneous data distributions is paramount for achieving stability and convergence. Traditional Federated Learning methods often struggle with instability and slow convergence when confronted with such heterogeneity, leading to inefficiencies and potential risks in high-stakes applications. This talk introduces Representative Learning as a robust alternative designed to mitigate these challenges. By generating pseudo data points, or "representatives," that encapsulate the critical features of each local data node, this framework enables efficient, privacy-conscious analysis and more reliable convergence across diverse environments. Representative Learning reduces communication overhead, enhances scalability, and improves the stability of the learning process, offering a path toward more resilient and interpretable distributed learning systems. The discussion will explore the theoretical foundations of this approach, its practical applications, and the advantages it provides over traditional methods, particularly in managing the complexities of heterogeneous data in distributed settings.
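To make the idea concrete, here is a minimal sketch of the general pattern the abstract describes: each node summarizes its local data with a small set of pseudo points ("representatives") and shares only those summaries with a central learner, rather than raw data or per-round gradients. The choice of per-class k-means centroids as representatives, and the scikit-learn models used, are illustrative assumptions for this sketch, not the speaker's actual construction.

```python
# Sketch only: representatives as per-class k-means centroids (an assumption,
# not necessarily the method presented in the talk).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def local_representatives(X, y, k=5, seed=0):
    """Return (pseudo_points, pseudo_labels): up to k centroids per class of a node's data."""
    reps, labels = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        n_clusters = min(k, len(Xc))
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(Xc)
        reps.append(km.cluster_centers_)
        labels.append(np.full(n_clusters, c))
    return np.vstack(reps), np.concatenate(labels)

# Simulate heterogeneous nodes: each node sees a shifted slice of the feature space.
rng = np.random.default_rng(0)
nodes = []
for node_id in range(3):
    n = 200
    X = rng.normal(loc=node_id, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(size=n) > 2 * node_id).astype(int)
    nodes.append((X, y))

# Each node transmits only its representatives; raw data never leaves the node.
pooled_X, pooled_y = zip(*(local_representatives(X, y) for X, y in nodes))
server_X, server_y = np.vstack(pooled_X), np.concatenate(pooled_y)

# The central learner fits a single global model on the pooled representatives.
global_model = LogisticRegression().fit(server_X, server_y)
print("points shared:", len(server_X), "vs raw points held locally:",
      sum(len(X) for X, _ in nodes))
```

In this toy setup each node condenses 200 raw observations into roughly ten pseudo points, which is where the communication and privacy benefits described in the abstract would come from; the talk addresses when such summaries are sufficient for stable, convergent global learning.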
