Ensemble classifiers
Thomas G. Dietterich defines ensemble methods as follows:
"Ensemble methods are learning algorithms that construct a set of classifiers and then classify new data points by taking a (weighted) vote of their prediction."
You can get more information from http://web.engr.oregonstate.edu/~tgd/publications/mcs-ensembles.pdf.
Ensemble methods create a set of weak classifiers and combine them into a strong classifier. A weak classifier is one that performs only slightly better than random guessing. Rattle offers two types of ensemble models: Random Forest and Boosting.
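Under the hood, Rattle builds these models by calling R packages: randomForest for the Random Forest model and, traditionally, ada for Boosting (recent versions of Rattle can also use xgboost). The following is a minimal sketch, not taken from this book, that fits both kinds of ensemble on R's built-in iris data; the variable names and parameter values are illustrative assumptions.

# A minimal sketch of building the two ensemble models that Rattle offers,
# outside the GUI, on R's built-in iris data (illustrative only).
library(randomForest)   # Random Forest
library(ada)            # adaptive boosting

# Turn iris into a binary classification problem: is the flower virginica?
iris$virginica <- factor(iris$Species == "virginica")
ds <- iris[, c("Sepal.Length", "Sepal.Width",
               "Petal.Length", "Petal.Width", "virginica")]

# Random Forest: many trees grown on bootstrap samples, combined by a majority vote.
rf.model <- randomForest(virginica ~ ., data = ds, ntree = 500)

# Boosting: trees built sequentially, each concentrating on the errors of the
# previous ones, combined by a weighted vote.
boost.model <- ada(virginica ~ ., data = ds, iter = 50)

# Classify a few observations with each ensemble.
predict(rf.model, newdata = head(ds))
predict(boost.model, newdata = head(ds))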
Boosting
Boosting is an ensemble method, so it creates a set of different classifiers. Imagine that we have m classifiers; we can denote the prediction that each classifier makes for a new observation x as follows.
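In standard notation (the symbol C_j is introduced here for illustration and may differ from the book's original formula), the m classifiers evaluated at x are:

C_1(x), C_2(x), \ldots, C_m(x)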
When we need to evaluate a new observation, we can calculate the average of all m trees' predictions using the following formula.
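A common way to write this average, using the same illustrative notation, is:

C(x) = \frac{1}{m} \sum_{j=1}^{m} C_j(x)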
We can improve this evaluation by adding a weight to each tree, as shown in the following formula.
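With \alpha_j denoting the weight assigned to the j-th tree (again, notation introduced here for illustration), the weighted combination is:

C(x) = \sum_{j=1}^{m} \alpha_j \, C_j(x)

In AdaBoost-style boosting, each tree's weight is derived from its error rate on the training data, so the more accurate trees carry more influence in the final vote.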