I have a dataset that is too big to load into memory all at once, so I want to use a generator to load batches of data to train on.
In this scenario, how do I go about encoding and scaling the features using LabelEncoder and StandardScaler from scikit-learn?
Some more context:
I have 10 million+ samples with 23 features and 1 label column in a database.
My previous setup (back when the dataset was ~3 million samples) was to load everything into pandas via SQL, perform some additional feature extraction, apply LabelEncoder to some features, do a train/test split, fit a StandardScaler on the training features, and then fit my Keras model.
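Roughly, the old workflow looked like this (just a sketch; the connection string, table name, and column names are placeholders, not my real schema):

```python
import pandas as pd
from sqlalchemy import create_engine
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, StandardScaler

# placeholder connection string and table/column names
engine = create_engine("sqlite:///data.db")
df = pd.read_sql("SELECT * FROM samples", con=engine)

# ... some additional feature extraction on df ...

# label-encode the categorical feature columns
for col in ["cat_a", "cat_b"]:
    df[col] = LabelEncoder().fit_transform(df[col])

X = df.drop(columns=["label"]).values
y = df["label"].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# fit the scaler on the training features only, then transform both splits
scaler = StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)

# ...then fit the Keras model on X_train / y_train
```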
However, this workflow is no longer possible on my machine because of the amount of data (I get MemoryErrors).
I'm looking into using keras.utils.Sequence to load batches of data instead of keeping everything in memory at once; that way I would only need the complete list of indexes and one full batch in memory at a time.
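This is the kind of Sequence I have in mind, a minimal sketch where `fetch_batch` is a hypothetical helper that queries only the given row ids from the database:

```python
import numpy as np
from keras.utils import Sequence

class DBSequence(Sequence):
    """Serves one batch at a time; only the id list stays in memory."""

    def __init__(self, ids, batch_size, fetch_batch):
        self.ids = ids                  # complete list of sample indexes
        self.batch_size = batch_size
        self.fetch_batch = fetch_batch  # hypothetical: list of ids -> (X, y)

    def __len__(self):
        # number of batches per epoch
        return int(np.ceil(len(self.ids) / self.batch_size))

    def __getitem__(self, idx):
        batch_ids = self.ids[idx * self.batch_size:(idx + 1) * self.batch_size]
        return self.fetch_batch(batch_ids)  # queries only these rows
```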
But how would I go about label encoding, and more importantly, feature scaling in this scenario? And given the context, is this a reasonable approach?
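For the scaling part, the one idea I've found so far is StandardScaler.partial_fit, which as far as I understand accumulates the mean and variance incrementally. So I could do a first pass over the data in chunks to fit the scaler, then call scaler.transform on each batch inside `__getitem__`. A sketch of what I mean (`feature_cols`, the query, and the connection are placeholders again):

```python
import pandas as pd
from sqlalchemy import create_engine
from sklearn.preprocessing import StandardScaler

engine = create_engine("sqlite:///data.db")  # placeholder, as above
feature_cols = [f"f{i}" for i in range(23)]  # placeholder column names

# first pass: accumulate mean/variance chunk by chunk, never all at once
# (in practice I'd restrict this query to the training ids only, to mirror
# the old fit-on-train-only behaviour)
scaler = StandardScaler()
for chunk in pd.read_sql("SELECT * FROM samples", con=engine, chunksize=100_000):
    scaler.partial_fit(chunk[feature_cols])

# later, inside DBSequence.__getitem__:
#     X = scaler.transform(X)
```

Is that partial_fit idea on the right track? And for LabelEncoder, would fitting it once per column on the distinct values (e.g. via a SELECT DISTINCT query) ahead of time, then applying it per batch, be the right way to go?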