I was reading the material related to XGBoost. It seems that this method does not require any variable scaling, since it is tree-based and can capture complex non-linear patterns and interactions. It can handle both numerical and categorical variables, and redundant variables also do not seem to affect it much.
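To illustrate the scaling point, here is a minimal sketch using scikit-learn's `GradientBoostingClassifier` as a stand-in (the split logic depends only on feature orderings, which also holds for XGBoost's trees); the synthetic data and variable names are my own:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.preprocessing import StandardScaler

# Synthetic data (hypothetical, just for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] * X[:, 2] > 0).astype(int)

# Train the same model on raw and on standardized features.
X_scaled = StandardScaler().fit_transform(X)
raw = GradientBoostingClassifier(random_state=0).fit(X, y)
scaled = GradientBoostingClassifier(random_state=0).fit(X_scaled, y)

# Tree splits depend only on the ordering of feature values, so a
# monotonic rescaling leaves the learned predictions unchanged.
same = np.array_equal(raw.predict(X), scaled.predict(X_scaled))
print(same)  # → True
```

This is why, unlike with linear or distance-based models, standardizing inputs buys you nothing for a tree ensemble.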
Usually, in predictive modeling, you perform some selection among all the features you have, and you may also create new features from that set. Selecting a subset of features implies you think there is some redundancy in your feature set; creating new features means applying functional transformations to your current features. Both of these points seem to be covered by XGBoost. Does that mean that to use XGBoost, you only need to choose the tuning parameters wisely? What is the value of doing feature engineering with XGBoost?