I'm reading the following Kaggle post to learn how to incorporate model stacking into ML models: http://blog.kaggle.com/2016/12/27/a-kagglers-guide-to-model-stacking-in-practice/. The structure of constructing the 5 folds and creating out-of-sample predictions on the training data makes sense for building the meta-model (the model on top of the base models). However, I'm not sure how hyperparameter tuning fits in, especially for the base models.
The concept of getting out-of-sample predictions makes sense to me: for each of the 5 folds, we train on the other 4 folds and then predict on the held-out fifth. But how do we hyperparameter-tune the base models on this same dataset without adding bias? It seems to me that this isn't possible.
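For concreteness, here is roughly the out-of-fold procedure I'm describing, sketched in scikit-learn (the toy dataset and `LogisticRegression` are just placeholders for whatever data and base model are actually used):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

# Toy stand-in for the training set (placeholder for real data).
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Out-of-fold predictions: for each of the 5 folds, fit a base model
# on the other 4 folds and predict on the held-out fold.
oof = np.zeros(len(y))
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in kf.split(X):
    model = LogisticRegression(max_iter=1000)  # placeholder base model
    model.fit(X[train_idx], y[train_idx])
    oof[test_idx] = model.predict_proba(X[test_idx])[:, 1]

# Every training row now has a prediction from a model that never saw it,
# ready to be used as a feature for the meta-model.
```

My question is where the hyperparameter search for the base model would go in this loop without leaking information.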
Note that I'm assuming there is no additional data available to use. I'd appreciate any help!