I used the MSE loss function with SGD optimization:

```
from keras.models import Model
from keras.layers import Input, Conv3D, MaxPool3D, UpSampling3D
from keras import backend as K

xtrain = data.reshape(21168, 21, 21, 21, 1)
inp = Input(shape=(21, 21, 21,1))
x = Conv3D(filters=512, kernel_size=(3, 3, 3), activation='relu',padding='same')(inp)
x = MaxPool3D(pool_size=(3, 3, 3),padding='same')(x)
x = Conv3D(filters=512, kernel_size=(3, 3, 3), activation='relu',padding='same')(x)
x = Conv3D(filters=256, kernel_size=(3, 3, 3), activation='relu',padding='same')(x)
encoded = Conv3D(filters=128, kernel_size=(3, 3, 3), activation='relu',padding='same')(x)
print("shape of encoded", K.int_shape(encoded))
x = Conv3D(filters=512, kernel_size=(3, 3, 3), activation='relu',padding='same')(encoded)
x = Conv3D(filters=256, kernel_size=(3, 3, 3), activation='relu',padding='same')(x)
x = Conv3D(filters=512, kernel_size=(3, 3, 3), activation='relu',padding='same')(x)
x = Conv3D(filters=512, kernel_size=(3, 3, 3), activation='relu',padding='same')(x)
x = UpSampling3D((3, 3, 3))(x)
decoded = Conv3D(filters=1, kernel_size=(3, 3, 3), activation='relu', padding='same')(x)
print ("shape of decoded", K.int_shape(decoded))
autoencoder = Model(inp, decoded)
autoencoder.compile(optimizer='sgd', loss='mse')
autoencoder.fit(xtrain, xtrain,
                epochs=30,
                batch_size=32,
                shuffle=True,
                validation_split=0.2)
```

The loss gets stuck after the first epoch and never improves:

```
Epoch 1/30
16934/16934 [==============================] - 446s - loss: 3455266373231484971515904.0000 - val_loss: 1893.9425
Epoch 2/30
16934/16934 [==============================] - 444s - loss: 1896.7580 - val_loss: 1893.9425
Epoch 3/30
16934/16934 [==============================] - 444s - loss: 1896.7580 - val_loss: 1893.9425
Epoch 4/30
16934/16934 [==============================] - 444s - loss: 1896.7580 - val_loss: 1893.9425
Epoch 5/30
16934/16934 [==============================] - 444s - loss: 1896.7580 - val_loss: 1893.9425
Epoch 6/30
16934/16934 [==============================] - 444s - loss: 1896.7580 - val_loss: 1893.9425
Epoch 7/30
16934/16934 [==============================] - 444s - loss: 1896.7580 - val_loss: 1893.9425
Epoch 8/30
16934/16934 [==============================] - 444s - loss: 1896.7580 - val_loss: 1893.9425
Epoch 9/30
16934/16934 [==============================] - 444s - loss: 1896.7580 - val_loss: 1893.9425
Epoch 10/30
16934/16934 [==============================] - 444s - loss: 1896.7580 - val_loss: 1893.9425
Epoch 11/30
16934/16934 [==============================] - 445s - loss: 1896.7580 - val_loss: 1893.9425
Epoch 12/30
16934/16934 [==============================] - 445s - loss: 1896.7580 - val_loss: 1893.9425
Epoch 13/30
16934/16934 [==============================] - 445s - loss: 1896.7580 - val_loss: 1893.9425
Epoch 14/30
16934/16934 [==============================] - 445s - loss: 1896.7580 - val_loss: 1893.9425
Epoch 15/30
16934/16934 [==============================] - 445s - loss: 1896.7580 - val_loss: 1893.9425
```

I'm facing the same problem but haven't gotten rid of it — how did you get rid of it? – gdmanandamohon – 2018-10-27T14:49:09.107

I suspect it's happening because the data contains unsupported values like 'nan' or 'inf'. – user63026 – 2018-11-22T02:05:53.270
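As a sketch of that suggestion: the input array can be screened for non-finite values with NumPy before training. The `data` array below is a hypothetical stand-in for the question's data (the real array has shape `(21168, 21, 21, 21)`); `np.nan_to_num` is one simple way to replace any bad entries, though masking or dropping the affected samples may be more appropriate depending on why they appear.

```python
import numpy as np

# Hypothetical stand-in for the `data` array from the question.
data = np.random.rand(10, 21, 21, 21).astype(np.float32)
data[0, 0, 0, 0] = np.nan  # deliberately corrupt one voxel

# Count non-finite entries (NaN or +/-Inf) before training.
bad = ~np.isfinite(data)
print("non-finite values:", bad.sum())  # → 1 here

# One simple remedy: replace them, e.g. with zeros.
clean = np.nan_to_num(data, nan=0.0, posinf=0.0, neginf=0.0)
assert np.isfinite(clean).all()
```

A single NaN anywhere in a batch is enough to make the MSE loss NaN for that batch, so even a handful of bad voxels can derail training.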