Optimization in Data Science: Enhancing Autoencoders and Accelerating Federated Learning

Series
SIAM Student Seminar
Time
Monday, January 22, 2024 - 2:00pm for 1 hour (actually 50 minutes)
Location
Skiles 005
Speaker
Xue Feng – UC Davis – xffeng@ucdavis.edu
Organizer
Biraj Dahal

In this presentation, I will discuss my research in data science, specifically in two areas: improving autoencoder interpolations and accelerating federated learning algorithms. My work combines advanced mathematical concepts with practical machine learning applications, contributing to both the theoretical and applied sides of data science.

The first part of my talk focuses on image sequence interpolation using autoencoders, which are essential tools in generative modeling, with an emphasis on the setting where only limited training data are available. By introducing a novel regularization term based on dynamic optimal transport into the loss function of the autoencoder, my method generates more robust and semantically coherent interpolation results. Additionally, the trained autoencoder can be used to generate barycenters. However, computational efficiency remains a bottleneck of our method, and we are working on improving it.

The second part of my presentation focuses on accelerating federated learning (FL) through the application of Anderson Acceleration. By reweighting the local points and their gradients, our method matches the convergence performance of state-of-the-art second-order methods such as GIANT while requiring only first-order information, making it a more practical and efficient choice for large-scale and complex training problems. Furthermore, our method is theoretically guaranteed to converge to the global minimizer at a linear rate.
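To make the regularization idea from the first part concrete, here is a purely illustrative sketch, not the speaker's actual method: a toy linear "autoencoder" whose loss adds a discrete kinetic-energy penalty along the decoded latent interpolation path, loosely in the spirit of the Benamou-Brenier dynamic formulation of optimal transport. All names, dimensions, and weights below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "autoencoder": encoder E and decoder D are plain matrices
# standing in for trained networks (illustrative only).
d, k = 8, 2                           # data dim, latent dim
E = rng.standard_normal((k, d)) * 0.1
D = rng.standard_normal((d, k)) * 0.1

def interpolation_energy(x0, x1, steps=10):
    """Discrete kinetic-energy proxy along the decoded latent path:
    scaled sum of squared displacements between consecutive decoded
    frames, a crude stand-in for the dynamic-OT action integral."""
    z0, z1 = E @ x0, E @ x1
    ts = np.linspace(0.0, 1.0, steps)
    frames = np.stack([D @ ((1 - t) * z0 + t * z1) for t in ts])
    diffs = np.diff(frames, axis=0)            # frame-to-frame velocity
    return (steps - 1) * np.sum(diffs ** 2)    # ~ integral of ||v||^2 dt

def loss(batch, lam=0.5):
    """Reconstruction error plus the path-regularity penalty
    (lam is a hypothetical trade-off weight)."""
    recon = np.mean((batch - batch @ E.T @ D.T) ** 2)
    return recon + lam * interpolation_energy(batch[0], batch[1])

batch = rng.standard_normal((4, d))
print(loss(batch))
```

In a real model the penalty would be backpropagated through the encoder and decoder during training; here the point is only how such a term attaches to the usual reconstruction loss.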
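For the second part, the following is a generic, textbook-style Anderson Acceleration loop, a sketch under stated assumptions rather than the proposed FL algorithm. It shows the core mechanism the abstract alludes to: reweighting recent iterates via a small least-squares problem on their residuals, using first-order information only, here applied to gradient descent on a toy quadratic.

```python
import numpy as np

def anderson_accelerate(g, x0, m=5, iters=30):
    """Minimal Anderson Acceleration for a fixed-point map g.
    Keeps the last m residuals f_i = g(x_i) - x_i and combines the
    images g(x_i) with least-squares weights summing to one."""
    X = [np.asarray(x0, dtype=float)]
    G = [g(X[0])]
    for _ in range(iters):
        mk = min(m, len(X))
        # Residual matrix: columns are f_i for the last mk iterates.
        F = np.stack([G[-mk + i] - X[-mk + i] for i in range(mk)], axis=1)
        # Solve min ||F a||  s.t.  sum(a) = 1  (tiny ridge for stability).
        M = F.T @ F + 1e-12 * np.eye(mk)
        a = np.linalg.solve(M, np.ones(mk))
        a /= a.sum()
        x_new = np.stack(G[-mk:], axis=1) @ a   # reweighted combination
        X.append(x_new)
        G.append(g(x_new))
    return X[-1]

# Accelerate plain gradient descent on 0.5 x^T A x - b^T x.
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
step = 1.0 / 100.0
g = lambda x: x - step * (A @ x - b)            # first-order update only
x = anderson_accelerate(g, np.zeros(3))
print(x)                                        # approaches A^{-1} b
```

On this ill-conditioned quadratic, the unaccelerated update contracts at rate 0.99 in the slowest direction, while the accelerated iterates reach the minimizer to high accuracy in the same 30 steps; the federated setting adds the complication of combining such reweighted points and gradients across clients.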