Machine learning contests like Kaggle usually lay out the task in a human-understandable way, e.g. by telling you the meaning of each input feature. But what if a contest doesn't want to expose the meaning of its input data? One approach I can think of is to apply a (random) rotation matrix to the feature vectors, so that no resulting feature has an obvious meaning.
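For concreteness, here is a sketch of what I mean (using numpy; all names and sizes are just for illustration). A random orthogonal matrix can be drawn via a QR decomposition of a Gaussian matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n_examples, n_features = 100, 5
X = rng.normal(size=(n_examples, n_features))  # stand-in for the contest data

# Draw a random orthogonal matrix via QR decomposition of a Gaussian
# matrix; fixing the signs by R's diagonal makes it Haar-distributed.
Q, R = np.linalg.qr(rng.normal(size=(n_features, n_features)))
Q *= np.sign(np.diag(R))

# Publish X_rotated instead of X: each column is now a mix of all
# original features, so no single column has an obvious meaning.
X_rotated = X @ Q

# Orthogonality preserves geometry: Q @ Q.T == I, and each example's
# norm (and all pairwise distances) are unchanged.
assert np.allclose(Q @ Q.T, np.eye(n_features))
assert np.allclose(np.linalg.norm(X, axis=1), np.linalg.norm(X_rotated, axis=1))
```

(A matrix drawn this way may have determinant -1, i.e. include a reflection, but for obfuscation purposes that distinction shouldn't matter.)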
A rotation of the input space shouldn't change a model's ability to separate the positives from the negatives (using binary classification as an example): the same hyperplane, after the same rotation is applied to it, still separates the examples. What a rotation can change is the distribution of each individual feature (i.e. a single feature's values across all examples), if a contestant cares about those. However, PCA is invariant under rotation of the input (the principal-component scores are unchanged, up to sign), so if a contestant decides to work on the PCA-ed version of the input, the rotation changes nothing there.
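The PCA claim can be checked numerically (again a sketch with made-up data; I compute PCA scores directly via the SVD rather than a library):

```python
import numpy as np

rng = np.random.default_rng(0)
# Correlated synthetic features, so the principal components are nontrivial
X = rng.normal(size=(200, 4)) @ rng.normal(size=(4, 4))

# Random rotation (orthogonal matrix via QR)
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))
X_rotated = X @ Q

def pca_scores(A):
    """Principal-component scores of the centered data: U * S from the SVD."""
    U, S, Vt = np.linalg.svd(A - A.mean(axis=0), full_matrices=False)
    return U * S

s1 = pca_scores(X)
s2 = pca_scores(X_rotated)

# The scores agree up to a sign flip per component (the usual SVD sign
# ambiguity), assuming distinct singular values.
assert np.allclose(np.abs(s1), np.abs(s2))
```

Intuitively, rotating the data rotates the principal axes by the same orthogonal matrix, so the coordinates of each example in the principal basis are unchanged.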
How much do contestants rely on statistical analysis of the raw (i.e. non-PCA-ed) input features? Is there anything else I should be aware of that a rotation can change for a contestant during such a contest?