On Distributed Deep Network for Processing Large-Scale Sets of Complex Data
Chen, D. (2016). On Distributed Deep Network for Processing Large-Scale Sets of Complex Data. 8th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC), Hangzhou, China, 27–28 Aug 2016. London South Bank University.
Recent work in unsupervised feature learning and deep learning has shown that the ability to train large models can dramatically improve performance. In this paper, we consider the problem of training a deep network with hundreds of parameters using distributed CPU cores. We have developed the Bagging-Down SGD algorithm to solve this distribution problem. Bagging-Down SGD introduces a parameter server on top of several model replicas, and separates parameter updating from training computation to accelerate the whole system. We have successfully used our system to train a distributed deep network and achieve state-of-the-art performance on MNIST, a handwritten digit dataset. We show that these techniques dramatically accelerate the training of this kind of distributed deep network.
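The abstract describes a parameter-server design in which model replicas compute gradients on their own data shards while a separate server applies the updates. The following is a minimal single-process sketch of that pattern on a toy least-squares problem; the class and function names (`ParameterServer`, `replica_gradient`) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

class ParameterServer:
    """Holds the global parameters; replicas push gradients and pull weights."""
    def __init__(self, dim, lr=0.1):
        self.w = np.zeros(dim)
        self.lr = lr

    def pull(self):
        # A replica fetches the current global parameters.
        return self.w.copy()

    def push(self, grad):
        # Updating is decoupled from the replicas' training computation.
        self.w -= self.lr * grad

def replica_gradient(w, X, y):
    """Least-squares gradient computed locally on one replica's data shard."""
    return 2 * X.T @ (X @ w - y) / len(y)

# Toy data: recover w_true from two data shards (two model replicas).
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
shards = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    shards.append((X, X @ w_true))

ps = ParameterServer(dim=2)
for step in range(200):
    for X, y in shards:                            # each replica trains on its shard
        w_local = ps.pull()                        # pull current parameters
        ps.push(replica_gradient(w_local, X, y))   # push local gradient to the server

print(np.round(ps.w, 2))
```

In the real distributed setting the pull/push calls would be network requests issued concurrently by the replicas, so the server may apply gradients computed against slightly stale parameters; this toy loop is sequential and ignores that staleness.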
Publisher: London South Bank University
File: Accepted author manuscript (CC BY 4.0)
Published: 27 Aug 2016
Publication process dates:
Deposited: 12 Aug 2016
Accepted: 27 Jul 2016