Ensemble Learning

The idea of ensemble learning is to employ multiple learners and combine their predictions. There is no definitive taxonomy. Jain, Duin and Mao (2000) list eighteen classifier combination schemes; Witten and Frank (2000) detail four methods of combining multiple models: bagging, boosting, stacking and error-correcting output codes; whilst Alpaydin (2004) covers seven methods of combining multiple learners: voting, error-correcting output codes, bagging, boosting, mixtures of experts, stacked generalization and cascading. We focus on the four methods listed below, then review the literature in general; a minimal code sketch of the voting idea follows the list.

Bagging
Boosting (including AdaBoost)
Stacked Generalization
Random Subspace Method
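
To make the combination idea concrete, here is a minimal sketch in Python of unweighted majority voting, the simplest combination scheme mentioned above. It is an illustration rather than an excerpt from any of the cited texts; the three threshold "learners" are hypothetical stand-ins for trained models such as decision trees.

```python
from collections import Counter

# Three toy "learners": each predicts a class label (0 or 1) from a
# single numeric feature. In practice these would be trained models.
learners = [
    lambda x: 1 if x > 0.3 else 0,
    lambda x: 1 if x > 0.5 else 0,
    lambda x: 1 if x > 0.7 else 0,
]

def ensemble_predict(x, learners):
    """Combine the learners' predictions by unweighted majority vote."""
    votes = Counter(learner(x) for learner in learners)
    return votes.most_common(1)[0][0]

for x in (0.2, 0.4, 0.6, 0.8):
    print(x, ensemble_predict(x, learners))
```

Bagging, boosting and the random subspace method differ chiefly in how the individual learners are produced; all three then combine them by a (possibly weighted) vote or average, whereas stacked generalization replaces the vote with a trained meta-learner.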