All of the work on Falcon belongs to project Griffin, a computation-parallel deep learning architecture.
The SSP demo is part of Falcon; you can find it, together with all demo files and the related datasets, in the directory SSP_Demo.
We hope it helps researchers start their first distributed DL training under the SSP (stale synchronous parallel) scheme with PyTorch.
The implementation follows these papers:
- Q. Ho, J. Cipar, H. Cui, J. K. Kim, S. Lee, P. B. Gibbons, G. A. Gibson, G. R. Ganger, and E. P. Xing, "More effective distributed ML via a stale synchronous parallel parameter server," in Proc. NIPS, Lake Tahoe, Nevada, USA, 2013.
- W. Zhang, S. Gupta, X. Lian, and J. Liu, "Staleness-aware async-SGD for distributed deep learning," in Proc. IJCAI, New York, USA, 2016.
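
To illustrate the core idea from these papers, here is a minimal, self-contained sketch of the bounded-staleness clock rule: each worker advances a local clock once per iteration, and the fastest worker may run at most `staleness` clocks ahead of the slowest one before it blocks. This is an illustration only, not Falcon's actual implementation; the names SSPClock, tick, and the toy thread-based worker loop are invented for this example.

```python
import threading
import torch

class SSPClock:
    """Illustrative SSP clock (not part of Falcon): tracks per-worker clocks
    and blocks any worker that runs more than `staleness` clocks ahead of
    the slowest worker."""
    def __init__(self, num_workers, staleness):
        self.clocks = [0] * num_workers   # per-worker iteration counters
        self.staleness = staleness        # maximum allowed clock gap
        self.cond = threading.Condition()

    def tick(self, rank):
        """Advance worker `rank`'s clock; wait while it is too far ahead."""
        with self.cond:
            self.clocks[rank] += 1
            self.cond.notify_all()
            # The slowest worker never blocks here, so progress is guaranteed.
            while self.clocks[rank] > min(self.clocks) + self.staleness:
                self.cond.wait()

def worker(rank, clock, param, lock, steps=50):
    for _ in range(steps):
        with lock:
            local = param.clone()      # read a possibly stale parameter copy
        grad = 2.0 * local             # toy objective: minimize ||param||^2
        with lock:
            param -= 0.01 * grad       # push the update to the shared state
        clock.tick(rank)               # SSP synchronization point

if __name__ == "__main__":
    num_workers, staleness = 4, 3
    clock = SSPClock(num_workers, staleness)
    param = torch.randn(10)            # shared "parameter server" state
    lock = threading.Lock()
    threads = [threading.Thread(target=worker, args=(r, clock, param, lock))
               for r in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("final parameter norm:", param.norm().item())
```

With staleness set to 0 this degenerates to fully synchronous (BSP) training; a larger bound trades gradient freshness for less waiting, which is the trade-off both papers study.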
Two classical datasets are supported: MNIST and CIFAR-10.
- MNIST: this demo already includes MNIST in the directory data; you can also download it from http://yann.lecun.com/exdb/mnist/
- CIFAR-10: you can download it from https://www.cs.toronto.edu/~kriz/cifar.html
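
Both datasets can also be fetched programmatically. Below is a minimal sketch using torchvision (an assumption: the demo's own loaders may differ); root="data" is chosen here to match the demo's data directory.

```python
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

# download=True fetches the archives only if they are not already under `root`
mnist = datasets.MNIST(root="data", train=True, download=True, transform=to_tensor)
cifar10 = datasets.CIFAR10(root="data", train=True, download=True, transform=to_tensor)

print(len(mnist), len(cifar10))  # 60000 and 50000 training samples
```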