I'm trying to use autoencoders in Keras to learn a linear transformation similar to independent component analysis (ICA). I'm using this to denoise electroencephalographic (EEG) data: a time series of 64 channels x 100,000 time points. While the autoencoder does a good job of reconstructing the input from a smaller number of neurons in the hidden layers, there's no structure to the hidden-layer weights, i.e., it doesn't isolate structure in the data, it just mixes everything together in the compressed layers. For example, ICA is able to isolate components like heartbeats, eye blinks, and brain activity, but the time series in the hidden layers are just linear mixtures of all of these different signals.
Does anyone know how to force the time series constructed in the hidden layers to be temporally independent? I'm thinking I'll have to create some kind of custom regularizer, but I have no idea how to go about this.
Here's my Keras code:
import numpy as np
import scipy.io  # was "import scipy as scipy"; scipy.io must be imported explicitly for loadmat
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Dropout
from keras.layers import BatchNormalization
import matplotlib.pyplot as plt
from keras import regularizers
from keras.constraints import max_norm, non_neg, unit_norm
from keras import backend as K

raw = scipy.io.loadmat('raw.mat')
raw = raw['raw']
#acts = scipy.io.loadmat('acts.mat')
#acts = acts['acts']

model = Sequential()
model.add(Dense(32, input_shape=(64,)))
model.add(Activation('linear'))
model.add(Dense(16))
model.add(Activation('linear'))
model.add(Dense(64))
model.add(Activation('linear'))
model.summary()

model.compile(optimizer='Adamax', loss='mean_absolute_error')

xtrain = raw[:, :].transpose()
#xref = acts[:, :].transpose()
model.fit(xtrain, xtrain, verbose=1, batch_size=200, epochs=20)
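To make the regularizer idea concrete, here is a rough NumPy sketch (my own guess at an approach, not yet wired into Keras) of the kind of penalty I have in mind: sum the squared off-diagonal entries of the covariance matrix of the hidden activations, so the loss pushes the hidden time series toward being mutually uncorrelated. Note this only enforces decorrelation (second-order); true ICA-style independence would need higher-order statistics like kurtosis or negentropy.

```python
import numpy as np

def decorrelation_penalty(h):
    """Sum of squared off-diagonal covariance entries of hidden activations.

    h: array of shape (n_timepoints, n_hidden), one column per hidden unit.
    Returns 0 when the hidden time series are pairwise uncorrelated.
    """
    h = h - h.mean(axis=0)              # center each hidden unit over time
    cov = (h.T @ h) / h.shape[0]        # covariance across time points
    off_diag = cov - np.diag(np.diag(cov))
    return float(np.sum(off_diag ** 2))
```

If something like this is the right direction, I assume it could be rewritten with `keras.backend` ops and attached to the bottleneck layer via `activity_regularizer`, but I'd appreciate guidance on whether that's the proper mechanism.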