Predictive Ensemble Modelling: An Experimental Comparison of Boosting Implementation Methods
Adegoke, V, Chen, D, Barikzai, S and Banissi, E (2017). Predictive Ensemble Modelling: An Experimental Comparison of Boosting Implementation Methods. 2017 European Modelling Symposium (EMS), Manchester, 20-21 Nov 2017. London South Bank University.
|Authors||Adegoke, V, Chen, D, Barikzai, S and Banissi, E|
This paper presents an empirical comparison of boosting implemented by reweighting and by resampling. The goal of this paper is to determine which of the two methods performs better. In the study, we used four algorithms, namely Decision Stump, Neural Network, Random Forest and Support Vector Machine, as base classifiers, and AdaBoost as the technique for developing the various ensemble models. We applied the 10-fold cross-validation method to measure and evaluate the performance metrics of the models. The results show that under both methods the average percentages of correctly and incorrectly classified instances are essentially the same, and the average RMSE values differ only insignificantly. The results further show that the two methods are independent of the datasets and the base classifier used. Additionally, we found that greater complexity of the chosen ensemble technique and boosting method does not necessarily lead to better performance.
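The two implementation methods compared in the paper can be illustrated with a minimal AdaBoost sketch. This is not the authors' code (their experiments cover four base classifiers and 10-fold cross-validation); it is an assumed NumPy-only illustration using only Decision Stump base learners, where `method="reweight"` fits each stump on the weighted training set and `method="resample"` fits it on a bootstrap sample drawn according to the current weights:

```python
import numpy as np

def stump_fit(X, y, w):
    """Pick the axis-aligned decision stump minimising weighted error."""
    best = (np.inf, 0, 0.0, 1)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - t) >= 0, 1, -1)
                err = np.sum(w * (pred != y))
                if err < best[0]:
                    best = (err, j, t, pol)
    return best[1:]

def stump_predict(X, j, t, pol):
    return np.where(pol * (X[:, j] - t) >= 0, 1, -1)

def adaboost(X, y, rounds=10, method="reweight", rng=None):
    """AdaBoost with stumps; `method` selects reweighting or resampling."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(y)
    w = np.full(n, 1.0 / n)            # instance weights, initially uniform
    model = []
    for _ in range(rounds):
        if method == "resample":
            # Resampling: draw a bootstrap sample according to the current
            # weights, then fit the stump on it with uniform weights.
            idx = rng.choice(n, size=n, p=w)
            j, t, pol = stump_fit(X[idx], y[idx], np.full(n, 1.0 / n))
        else:
            # Reweighting: fit the stump on the full set, weighted by w.
            j, t, pol = stump_fit(X, y, w)
        pred = stump_predict(X, j, t, pol)
        err = np.clip(np.sum(w * (pred != y)), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # classifier weight
        w = w * np.exp(-alpha * y * pred)       # up-weight mistakes
        w /= w.sum()
        model.append((alpha, j, t, pol))
    return model

def predict(model, X):
    """Weighted vote of all stumps in the ensemble."""
    agg = sum(a * stump_predict(X, j, t, p) for a, j, t, p in model)
    return np.sign(agg)
```

In both variants the weight update and the final weighted vote are identical; the methods differ only in how the instance weights reach the base learner, which is why, as the paper reports, their accuracies tend to coincide.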
|Keywords||AdaBoost; ensemble based system; machine learning; resampling; reweighting|
|Publisher||London South Bank University|
|File||Accepted author manuscript|
|License||CC BY 4.0|
|Publication dates||20 Nov 2017|
|Publication process dates|
|Deposited||29 Nov 2017|
|Accepted||20 Oct 2017|