- Flower: https://flower.ai/
- PySyft: https://github.com/OpenMined/PySyft
- FedML: https://github.com/FedML-AI/FedML
- FATE: https://github.com/FederatedAI/FATE
- OpenFL: https://github.com/securefederatedai/openfl
- FedLab: https://github.com/SMILELab-FL/FedLab
- Federated learning is a machine learning technique that trains a model collaboratively across multiple devices without centralizing sensitive data.
- Data is not available on a centralized server
- Data available on any single server is not enough to train a good model
- Regulations
- User preference
- Data volume
- Privacy
- Centralized machine learning: move the data to the computation
- Federated (machine) Learning: move the computation to the data
- Step 0: Initialize global model
- Step 1: Send model to a number of connected organizations/devices (client nodes)
- Step 2: Train model locally on the data of each organization/device (client node)
- Step 3: Return model updates back to the server
- Step 4: Aggregate model updates into a new global model
- Federated Averaging (McMahan et al., 2016), often abbreviated as FedAvg
- Step 5: Repeat steps 1 to 4 until the model converges
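Steps 1–5 above reduce to one aggregation rule per round: the new global model is the example-count-weighted average of the client updates. A minimal, framework-free sketch (the `fedavg` helper and toy one-layer "models" are illustrative, not Flower's API):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate per-layer client weights by example-count-weighted average (FedAvg)."""
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(num_layers)
    ]

# Two clients, each holding a single "layer" of parameters (toy example)
clients = [[np.array([1.0, 3.0])], [np.array([3.0, 5.0])]]
sizes = [10, 30]  # client 2 has 3x the data, so its update counts 3x

new_global = fedavg(clients, sizes)
print(new_global[0])  # 0.25 * client1 + 0.75 * client2 -> [2.5 4.5]
```

The weighting by dataset size is what distinguishes FedAvg from a plain mean: clients with more local data pull the global model proportionally harder.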
- We simulate having multiple datasets from multiple organizations (also called the “cross-silo” setting in federated learning) by splitting the original CIFAR-10 dataset into multiple partitions.
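Flower provides partitioning utilities for this (the flwr-datasets package), but the idea can be shown without any framework: shuffle the example indices and split them evenly across the simulated organizations. A minimal IID sketch (function name and seed are illustrative):

```python
import numpy as np

def iid_partition(num_examples, num_partitions, seed=42):
    """Shuffle example indices and split them into equal IID partitions,
    one per simulated organization (cross-silo client)."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(num_examples)
    return np.array_split(indices, num_partitions)

# CIFAR-10 has 50,000 training examples; simulate 10 organizations
partitions = iid_partition(50_000, 10)
print(len(partitions), len(partitions[0]))  # -> 10 5000
```

Real cross-silo data is rarely IID; non-IID splits (e.g. skewed label distributions per partition) are the harder, more realistic benchmark.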
- Federated Learning with PyTorch and Flower (Quickstart Example):
- Install Flower:
- pip install flwr
- pip install "flwr[simulation]"
- import flwr
- print(flwr.__version__) # 1.22.0
- flwr new flower-tutorial --framework pytorch --username flwrlabs
- cd flower-tutorial
- pip install -e .
- flwr run .
- flwr run . --run-config "num-server-rounds=5 local-epochs=3" (overrides config values for this run)
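The keys passed via --run-config correspond to defaults declared in the generated project's pyproject.toml. A sketch of what that section looks like (section name from Flower's project template; the default values here are illustrative):

```toml
# Default run config in the generated pyproject.toml;
# --run-config overrides these values at launch time.
[tool.flwr.app.config]
num-server-rounds = 3
local-epochs = 1
```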
- Newer workflow:
- pip install -U "flwr[simulation]"
- flwr new @flwrlabs/quickstart-pytorch
- cd quickstart-pytorch
- pip install -e .
- flwr run .
- Incentive-Based Federated Learning