I assume you trained your model on
(f1, f2, f3, f4, f5, f6) and that your test data sometimes has
(f1, f2, f3) and sometimes has, for example,
(f1, f2, f3, f4, f5, f6), right? Because if your test data always has only
(f1, f2, f3), then isn't it better to just train a model on those available features?
So if my assumption is correct, what I would do is manipulate the training set a bit: keep some training examples with their real
(f1, f2, f3, f4, f5, f6), and for the others replace the real values of
(f4, f5, f6) with, e.g., the mean of each respective feature. That way every training example still has
(f1, f2, f3, f4, f5, f6), but some of them have imputed
(f4, f5, f6). Finally, at test time, apply the same manipulation to the test examples that have the smaller number of features.
I think this way your model learns how to predict based on
(f1, f2, f3) when the other features are not available, but at the same time takes advantage of all the features when they are all available.
It's probably not the best approach, but it's worth a try.
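Here is a minimal sketch of the idea with NumPy (toy random data and made-up feature positions; in practice you would fit your own model on `X_aug`):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: 100 rows, 6 features (f1..f6) -- hypothetical data.
X_train = rng.normal(size=(100, 6))
y_train = rng.integers(0, 2, size=100)

# Means of f4, f5, f6, computed on the training set only.
means = X_train[:, 3:].mean(axis=0)

# Augmented copy of the training set: same rows, but f4-f6 replaced
# by their means, simulating examples where those features are missing.
X_masked = X_train.copy()
X_masked[:, 3:] = means

X_aug = np.vstack([X_train, X_masked])        # shape (200, 6)
y_aug = np.concatenate([y_train, y_train])    # labels duplicated to match

# At test time, a row that only has (f1, f2, f3) gets the same imputation:
x_partial = rng.normal(size=(1, 3))
x_full = np.hstack([x_partial, means[None, :]])   # shape (1, 6)
```

Note that the means are computed on the training set only, so the exact same values are reused at test time.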