If you have ever competed in a Kaggle competition, you are probably familiar with combining different predictive models for improved accuracy, which will creep your score up the leaderboard. While the technique is widely used, there are only a few resources that I am aware of where a clear description is available (one that I know of is here, and there is also a caret package extension for it). Therefore, I will try to work out a simple example here to illustrate how different models can be combined. The example I have chosen is the House Prices competition from Kaggle. This is a regression problem: given lots of features about houses, one is expected to predict their prices on a test set. I will use three different regression methods to create predictions (XGBoost, neural networks, and support vector regression) and stack them up to produce a final prediction. I assume that the reader is familiar with R, the xgboost and caret packages, as well as support vector regression and neural networks.

The main idea of constructing a predictive model by combining different models can be schematically illustrated as below:

Let me describe the key points in the figure:

- Initial training data (X) has *m* observations and *n* features (so it is *m x n*).
- There are M different models that are trained on X (by some method of training, like cross-validation) beforehand.
- Each model provides predictions for the outcome (y), which are then cast into a second level training data set (Xl2), which is now *m x M*. Namely, the M predictions become features for this second level data.
- A second level model (or models) can then be trained on this data to produce the final outcomes, which will be used for predictions.
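The reshaping step in the list above (M prediction vectors becoming the columns of Xl2) can be sketched in a few lines. The article's code is in R; this is a made-up Python toy with two stand-in "models", purely to show the *m x M* shape:

```python
# Toy illustration (not from the article): m = 4 observations, M = 2
# stand-in "models" -- a mean predictor and a +0.5-shifted copy.
y = [10.0, 12.0, 11.0, 13.0]

def model_a(ys):
    # Stand-in model 1: predicts the overall mean for every observation.
    mu = sum(ys) / len(ys)
    return [mu] * len(ys)

def model_b(ys):
    # Stand-in model 2: predicts each value shifted by +0.5.
    return [v + 0.5 for v in ys]

preds = [model_a(y), model_b(y)]  # M vectors, each of length m

# Cast the M prediction vectors into the m x M level-2 matrix Xl2:
# row i holds the M model predictions for observation i.
Xl2 = [[p[i] for p in preds] for i in range(len(y))]
# Xl2 has 4 rows (m) and 2 columns (M).
```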

There are several ways that the second level data (Xl2) can be built. Here, I will discuss stacking, which works great for small or medium size data sets. Stacking uses an idea similar to k-fold cross-validation to create out-of-sample predictions.

The key word here is out-of-sample: if we were to use predictions from the M models that are fit to all the training data, then the second level model would be biased towards the best of the M models, which would be of no use.

As an illustration of this point, let's say that model 1 has lower training accuracy than model 2 on the training data. There may, however, be data points where model 1 performs better, while for some reason it performs terribly on others (see figure below). Model 2, in contrast, may have a better overall performance on all the data points, but worse performance on the very set of points where model 1 is better. The idea is to combine these two models where each performs best. This is why creating out-of-sample predictions has a higher chance of capturing the distinct regions where each model performs best.

First, let me describe what I mean by stacking. The idea is to divide the training set into several pieces, as you would do in k-fold cross-validation. For each fold, the rest of the folds are used to obtain predictions from all the models 1…M. The best way to explain this is with the figure below:

Here, we divide our training data into N folds, and hold the Nth fold out for validation (i.e. the holdout fold). Suppose we have M models (we will later use M=3). As the figure shows, the prediction for each fold (Fj) is obtained from a fit using the rest of the folds, and the predictions are collected in an out-of-sample prediction matrix (Xoos). Namely, the level 2 training data Xl2 is Xoos. This is repeated for each of the models. The out-of-sample prediction matrix (Xoos) will then be used in a second level training (by some method of choice) to obtain the final predictions for all the data points. There are several points to note:
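The fold loop just described can be sketched as follows. This is not the article's R/caret code; it is a minimal Python stand-in with two toy regressors (a constant mean model and a through-the-origin slope model) and an assumed fold count, showing how each row of Xoos is filled only by models that never saw that row during fitting:

```python
import random

def mean_model(train_y):
    # "Fit" = remember the training mean; predict it for every point.
    mu = sum(train_y) / len(train_y)
    return lambda xs: [mu] * len(xs)

def slope_model(train_x, train_y):
    # "Fit" = least-squares slope through the origin: y ~ b * x.
    b = sum(xi * yi for xi, yi in zip(train_x, train_y)) / sum(xi * xi for xi in train_x)
    return lambda xs: [b * xi for xi in xs]

def out_of_fold_matrix(x, y, n_folds=5, seed=0):
    # Split indices into n_folds roughly equal folds.
    idx = list(range(len(x)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::n_folds] for i in range(n_folds)]

    xoos = [None] * len(x)  # will become the m x M matrix (here M = 2)
    for j, fold in enumerate(folds):
        # Train both models on every fold except fold j...
        train = [i for k, f in enumerate(folds) if k != j for i in f]
        tx, ty = [x[i] for i in train], [y[i] for i in train]
        m1 = mean_model(ty)
        m2 = slope_model(tx, ty)
        # ...and fill fold j's rows with their out-of-sample predictions.
        fx = [x[i] for i in fold]
        for i, p1, p2 in zip(fold, m1(fx), m2(fx)):
            xoos[i] = [p1, p2]
    return xoos

x = [float(i) for i in range(1, 21)]
y = [2.0 * v + 1.0 for v in x]
Xoos = out_of_fold_matrix(x, y)
```

Every observation ends up with a row in Xoos, but no model ever predicted an observation it was trained on, which is exactly the property the second level training needs.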

- We have not simply stacked the predictions on all the training data from the M models column by column to create second level training data, due to the problem mentioned above (the second level training would simply choose the best of the M models).
- By using out-of-sample predictions, we still have plenty of data to train the second level model. We just need to train on Xoos and predict on the holdout (Nth) fold. This is in contrast to model ensembles.

Now, each model (1…M) can be trained on the (N-1) folds and a prediction on the holdout (Nth) fold can be made. There is nothing new here. What we do instead is obtain predictions on the holdout data using the second level model, which is trained on Xoos. We want the predictions from the second level training to be better than each of the M predictions from the original models. If not, we will have to restructure the way we combine models.

Let me illustrate what I just wrote with a concrete example. For the House Prices data, I divided the training data into 10 folds. The first 9 are used for building Xoos, and the 10th is the holdout data for validation. I trained three level 1 models: XGBoost, a neural network, and support vector regression. For level 2, I used a linear elastic net model (i.e. LASSO + ridge regression). Below are the root-mean-squared errors (RMSE) of each of the models evaluated on the holdout fold:
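To make the level 2 step concrete, here is a hedged stand-in: instead of an elastic net (which is what I used via caret), this Python toy grid-searches a single convex weight between two prediction columns. The data are made up so that each "model" is accurate on a different half of the points, mimicking the situation discussed above:

```python
def rmse(pred, truth):
    return (sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth)) ** 0.5

def fit_blend(xoos, y, steps=100):
    # Pick w in [0, 1] minimising the RMSE of w*col1 + (1-w)*col2 on Xoos.
    best_w, best_err = 0.0, float("inf")
    for k in range(steps + 1):
        w = k / steps
        blended = [w * a + (1 - w) * b for a, b in xoos]
        err = rmse(blended, y)
        if err < best_err:
            best_w, best_err = w, err
    return best_w

# Made-up holdout-style data: "model A" (first column) is exact on the
# first half of the points, "model B" (second column) on the second half.
y = [float(i) for i in range(10)]
xoos = [[t, t + 1.0] if i < 5 else [t - 1.0, t] for i, t in enumerate(y)]

w = fit_blend(xoos, y)
blended = [w * a + (1 - w) * b for a, b in xoos]
# On this toy data the blend beats either column alone.
```

The blend ends up splitting the difference between the two complementary models, which is the same effect the elastic net achieves with learned coefficients.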

XGBoost: 0.10380062

Neural Network: 0.10147352

Support Vector Regression: 0.10726746

Stacked: 0.10005465

As is clear from these numbers, the stacked model has a slightly lower RMSE than the rest. This may look like too small a change, but when Kaggle leaderboards are involved, such small differences matter a lot!

Graphically, one can see that the circled data point is a prediction which is worse in XGBoost (which is the best model when trained on all the training data), but the neural network and support vector regression do better for that specific point. In the stacked model, that data point is placed close to where it is for the neural network and support vector regression. Of course, you can also see some cases where using just XGBoost is better than stacking (like some of the lower-lying points). However, the overall predictive accuracy of the stacked model is better.

One final complication that will further boost your score: if you have spare computational time, you can create repeated stacks. This will further reduce the variance of your predictions (something reminiscent of bagging).

For example, let's create a 10-fold stacking not just once, but 10 times (say, by caret's createMultiFolds function). This will give us multiple level 2 predictions, which can then be averaged. For example, below are the RMSE values on the holdout data (rmse1: XGBoost, rmse2: Neural Network, rmse3: Support Vector Regression) for 20 different random 10-fold splits. Averaging the final predictions from the level 2 predictions on these Xoos's (i.e. Xoos from stack 1, Xoos from stack 2, …, Xoos from stack 10) would further improve your score.
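The averaging step can be sketched as below. This is a hypothetical Python toy, not the article's repeated-caret-folds code: each "stack" is simulated as the truth plus independent noise, and averaging the stacks' predictions point by point reduces the error, bagging-style:

```python
import random

def rmse(pred, truth):
    return (sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(truth)) ** 0.5

def one_stack_prediction(truth, seed):
    # Stand-in for one stack's level-2 holdout prediction: truth + noise.
    rng = random.Random(seed)
    return [t + rng.gauss(0, 0.5) for t in truth]

truth = [float(i) for i in range(50)]
stacks = [one_stack_prediction(truth, seed) for seed in range(10)]

# Average the 10 stacks' predictions point by point (bagging-like).
averaged = [sum(col) / len(col) for col in zip(*stacks)]

single_rmse = rmse(stacks[0], truth)
avg_rmse = rmse(averaged, truth)  # lower variance -> lower error
```

Since each stack's noise is independent, the averaged prediction's noise shrinks roughly with the square root of the number of stacks, which is why repeated stacking helps when you can afford the compute.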

Once we verify that stacking results in better predictions than each of the models, we re-run the whole machinery once again, without keeping the Nth fold as holdout data. We create Xoos from all the folds, and the second level training uses Xoos to predict the test set which Kaggle provides us with. Hopefully, this will creep your score up the leaderboard!

Final word: you can find the scripts in my GitHub repo.

Note: If you have a classification problem, you can still use the same procedure to stack class probabilities.
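As a hedged sketch of that classification variant: with K classes and M models, each model contributes its K out-of-fold class probabilities, so the level 2 data becomes *m x (M·K)*. The numbers below are made up purely to show the shape:

```python
def stack_class_probs(per_model_probs):
    # per_model_probs[m][i] = model m's K class probabilities for
    # observation i; returns the m x (M*K) level-2 feature rows.
    n_obs = len(per_model_probs[0])
    rows = []
    for i in range(n_obs):
        row = []
        for probs in per_model_probs:
            row.extend(probs[i])  # append this model's K probabilities
        rows.append(row)
    return rows

# Made-up numbers: M = 2 models, K = 3 classes, m = 2 observations.
model1 = [[0.7, 0.2, 0.1], [0.1, 0.6, 0.3]]
model2 = [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]]
Xl2 = stack_class_probs([model1, model2])
```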

Source: Stacking models for improved predictions: A case study for housing prices