Project co-supervised with Stylianos Venieris from Samsung AI

Federated Learning

Federated Learning (FL) is a powerful technique for training a model on a server with data from several clients in a privacy-preserving manner. In each round, the server sends the current model to every client; each client trains the model locally on its own data and sends the updated model back. The server aggregates the updated models and repeats the process for several rounds. FL incurs significant communication costs, particularly when transmitting the updated local models from the clients back to the server.
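
As a concrete illustration, the sketch below implements one round of Federated Averaging (FedAvg), the aggregation scheme introduced in the first paper listed below. It is a minimal sketch in plain PyTorch: it averages client weights uniformly rather than weighting by dataset size, and names such as local_update and client_loaders are illustrative rather than taken from the project.

```python
# Minimal sketch of one FedAvg round in plain PyTorch (assumed setup:
# every client runs the same model; weights are averaged uniformly).
import copy

import torch
import torch.nn as nn


def local_update(model: nn.Module, loader, epochs: int = 1, lr: float = 0.01) -> dict:
    """Train a copy of the global model on one client's data; return its weights."""
    local_model = copy.deepcopy(model)
    optimizer = torch.optim.SGD(local_model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    local_model.train()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(local_model(x), y).backward()
            optimizer.step()
    return local_model.state_dict()


def federated_round(global_model: nn.Module, client_loaders) -> None:
    """One FL round: broadcast the model, train locally, average the weights."""
    client_states = [local_update(global_model, loader) for loader in client_loaders]
    avg_state = {
        key: torch.stack([s[key].float() for s in client_states]).mean(dim=0)
        for key in client_states[0]
    }
    global_model.load_state_dict(avg_state)
```

The communication cost mentioned above is visible here: every round moves one full copy of the model per client in each direction.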

Communication-Efficient Learning of Deep Networks from Decentralized Data

Federated Optimization in Heterogeneous Networks

Project Aim

To date, federated learning has typically assumed a homogeneous dataflow on every device, meaning that the neural network topology running on each device is the same. Various methods have been proposed to address systems heterogeneity (i.e. differences in the systems characteristics of the devices in the network) and statistical heterogeneity (i.e. non-identical distributions of data across the network).

FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout

This project takes a different approach to the heterogeneity problem by allowing clients to transmit data to each other, instead of being limited to communication with the server only; a toy sketch of such device-to-device exchange follows.
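
Purely for illustration, the sketch below shows a simple gossip-style step in which each client averages its weights with those of its neighbours rather than with a central server. The communication topology neighbours is an assumption made for this example; the project itself is free to explore other device-to-device schemes.

```python
# Hedged sketch of device-to-device aggregation: a single gossip step in
# which each client averages its weights with its neighbours' weights.
# All clients are assumed to hold state_dicts of the same architecture.
import torch


def gossip_step(states: list[dict], neighbours: dict[int, list[int]]) -> list[dict]:
    """Return new per-client weights, each the mean over {client} + neighbours."""
    new_states = []
    for i, state in enumerate(states):
        group = [states[j] for j in neighbours[i]] + [state]
        new_states.append({
            key: torch.stack([s[key].float() for s in group]).mean(dim=0)
            for key in state
        })
    return new_states
```

For example, with neighbours = {0: [1], 1: [0, 2], 2: [1]}, clients 0 and 2 only ever exchange weights through client 1, so no central server is involved.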

Once clients can exchange data directly, the computation across them forms one large dataflow graph, and the challenge becomes how to partition it efficiently across numerous devices with different capabilities.
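
To make the partitioning challenge concrete, here is a toy greedy heuristic that splits a linear chain of layers across devices in proportion to their compute capability. Real dataflow graphs are not chains and real cost models are not additive, so this is only a sketch; layer_costs and device_speeds are hypothetical inputs.

```python
# Toy sketch of heterogeneous partitioning: assign consecutive layers of a
# chain to devices so each device's share of the total cost is roughly
# proportional to its speed. Inputs are hypothetical, not from the project.
def partition_chain(layer_costs: list[float], device_speeds: list[float]) -> list[list[int]]:
    """Return, for each device, the list of layer indices assigned to it."""
    total_cost = sum(layer_costs)
    total_speed = sum(device_speeds)
    assignment = [[] for _ in device_speeds]
    device, used = 0, 0.0
    for idx, cost in enumerate(layer_costs):
        # Each device's budget is its proportional share of the total cost.
        budget = total_cost * device_speeds[device] / total_speed
        if used + cost > budget and assignment[device] and device < len(device_speeds) - 1:
            device, used = device + 1, 0.0
        assignment[device].append(idx)
        used += cost
    return assignment


# partition_chain([1, 2, 4, 2, 1], [1.0, 3.0]) returns [[0], [1, 2, 3, 4]]:
# the slow device gets the cheap first layer, the fast device gets the rest.
```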

Skill requirements

The candidate should be experienced in Object-Oriented Programming in Python. Ideally, the candidate should also have experience with, or at least a willingness to learn, Machine Learning frameworks in Python such as PyTorch and PyTorch Lightning.