Federated Learning for Resource Constrained Devices


Bibliographic Details
Main Author: Jain, Kriti
Other Authors: Kagal, Lalana
Format: Thesis
Published: Massachusetts Institute of Technology, 2022
Online Access: https://hdl.handle.net/1721.1/144688
Description
Summary: As resource-constrained edge devices become increasingly powerful, they are able to provide a larger quantity of higher-quality data. However, because these devices are decentralized, it is difficult to gain insights from multiple devices at the same time. Federated learning allows us to learn from multiple devices in a decentralized manner without requiring data to be shared. Each client trains its own model and communicates relevant model information to a central server. The server aggregates this information according to some specified algorithm and sends the clients a global model; the clients then update their own private models with this global model, without ever sharing their local data or accessing any other client’s local data. On edge devices, however, federated learning becomes increasingly difficult because of computation, battery, and storage constraints. This thesis has two main contributions. The first is a modular, single-machine simulator for federated learning on edge devices. The second is a real-world, scalable federated learning system for Android devices that is able to automatically allocate resources by leveraging PyTorch Lightning. To the best of my knowledge, this is the first work that uses PyTorch Lightning specifically for training, and not just inference, on edge devices.
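The train-locally, aggregate-centrally loop described in the summary can be sketched as a minimal federated-averaging (FedAvg-style) round. This is an illustrative sketch only, not code from the thesis; the function and variable names (`fed_avg`, `client_weights`) are hypothetical, and the thesis's actual aggregation algorithm may differ.

```python
# Illustrative sketch of one federated-averaging round, as described in the
# abstract: each client trains on private data and sends only model weights;
# the server averages them and broadcasts a global model back. Names are
# hypothetical, not taken from the thesis.

def fed_avg(client_weights):
    """Equal-weight average of each parameter across clients."""
    n = len(client_weights)
    keys = client_weights[0].keys()
    return {k: sum(w[k] for w in client_weights) / n for k in keys}

# Two clients report locally trained parameters; raw data never leaves them.
client_weights = [{"w": 1.0, "b": 0.0}, {"w": 3.0, "b": 2.0}]

# Server aggregates and would then send this global model to every client.
global_model = fed_avg(client_weights)
# global_model == {"w": 2.0, "b": 1.0}
```

In practice each client would update its private model from `global_model` and begin the next round; only model parameters, never local data, cross the network.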