Overview
Bagging = the same model, trained on different data, then averaging the results
Typical examples
- BaggingClassifier
- Random Forest ⭐⭐⭐⭐⭐
The big idea behind bagging is similar to a voting classifier, but instead of combining different kinds of classifiers, we use the same classifier trained on different subsets of our training data.
Now the question is: how does one classifier get a different subset of our training data?
This is normally achieved via random sampling, i.e. we randomly draw a specific number of data points from the training set as a sample, and then use that sample to train one copy of our classifier.
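The random sampling described above can be sketched with just the standard library (the variable names and data here are illustrative). In bagging the draw is typically done *with replacement* (a bootstrap sample), so some points appear multiple times while others are left out:

```python
# Illustrative sketch of bootstrap sampling (sampling with replacement).
import random

random.seed(0)  # fixed seed so the sketch is reproducible

training_data = list(range(10))  # stand-in for 10 training instances

# Draw a sample the same size as the original set, with replacement:
# duplicates are allowed, and some points may not be picked at all.
bootstrap_sample = random.choices(training_data, k=len(training_data))

print(bootstrap_sample)
```

Each classifier in the ensemble would be trained on its own independently drawn `bootstrap_sample`, and their predictions averaged (or majority-voted) at the end.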

