On Distributed Deep Network for Processing Large-Scale Sets of Complex Data
Conference paper
Chen, D. (2016). On Distributed Deep Network for Processing Large-Scale Sets of Complex Data. 8th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC), Hangzhou, China, 27-28 Aug 2016.
| Authors | Chen, D |
|---|---|
| Type | Conference paper |
| Abstract | Recent work in unsupervised feature learning and deep learning has shown that the ability to train large models can dramatically improve performance. In this paper, we consider the problem of training a deep network with hundreds of parameters using distributed CPU cores. We have developed the Bagging-Down SGD algorithm to address the problems of distribution. Bagging-Down SGD introduces a parameter server on top of several model replicas, and separates parameter updating from training computation to accelerate the whole system. We have successfully used our system to train a distributed deep network and achieve state-of-the-art performance on MNIST, a handwritten digit dataset. We show that these techniques dramatically accelerate the training of this kind of distributed deep network. |
| Year | 2016 |
| Accepted author manuscript | File access level: Open |
| Publication date | 27 Aug 2016 |
| Deposited | 12 Aug 2016 |
| Accepted | 27 Jul 2016 |
https://openresearch.lsbu.ac.uk/item/872q3
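The abstract describes a parameter server sitting above several model replicas, with parameter updating separated from training computation. As an illustration only, here is a minimal Python sketch of such a scheme, combining asynchronous gradient pushes with bagging-style bootstrap resampling per replica; the class names, the toy one-parameter linear model, and all hyperparameters are assumptions for this sketch, not the authors' implementation:

```python
import random
import threading

class ParameterServer:
    """Holds the global weights; applies pushed gradients with SGD.

    Updating lives here, decoupled from the replicas' training computation.
    """
    def __init__(self, dim, lr=0.05):
        self.w = [0.0] * dim
        self.lr = lr
        self._lock = threading.Lock()

    def push_gradient(self, grad):
        with self._lock:
            for i, g in enumerate(grad):
                self.w[i] -= self.lr * g

    def pull_weights(self):
        with self._lock:
            return list(self.w)

def replica(server, data, steps, seed):
    """One model replica: trains on a bootstrap resample of the data
    (the bagging component) and pushes gradients asynchronously."""
    rng = random.Random(seed)
    sample = [rng.choice(data) for _ in data]  # bootstrap resample
    for _ in range(steps):
        w = server.pull_weights()          # possibly stale weights
        x, y = rng.choice(sample)
        err = w[0] * x - y                 # squared-error gradient for y = w*x
        server.push_gradient([err * x])

# Toy data for the target function y = 2x.
data = [(x, 2.0 * x) for x in (0.5, 1.0, 1.5, 2.0)]
server = ParameterServer(dim=1)
threads = [threading.Thread(target=replica, args=(server, data, 200, s))
           for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(server.pull_weights()[0])  # converges toward 2.0
```

Because replicas pull and push without global synchronization, some gradients are computed against slightly stale weights; the sketch still converges on this toy problem, which is the usual trade-off such asynchronous designs accept for throughput.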