I’m trying to figure out how to write an optimal convolutional neural network with respect to maximizing or minimizing the number of filters in a Conv2D layer. This is my thinking, and I’m not sure it’s correct.
If we have a dataset of 32x32 images, we could start with a Conv2D layer with a 3x3 filter and a 1x1 stride. With no padding, the filter fits into a 32x32 image 30 times along each axis, so it can occupy newImageX * newImageY positions in total, where

newImageX = imageX - filterX + 1 = 32 - 3 + 1 = 30
newImageY = imageY - filterY + 1 = 32 - 3 + 1 = 30
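As a quick sanity check, here is that arithmetic in plain Python (the variable names just mirror the formula above):

```python
# Output size of a "valid" (no padding) convolution with a 3x3 filter and 1x1 stride
image_x, image_y = 32, 32
filter_x, filter_y = 3, 3

new_image_x = image_x - filter_x + 1   # 30
new_image_y = image_y - filter_y + 1   # 30

print(new_image_x, new_image_y)        # 30 30
print(new_image_x * new_image_y)       # 900 positions the filter slides over
```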
Am I right in thinking that, because there are only newImageX * newImageY distinct 3x3 patch positions in the 32x32 image, the maximum number of filters should be newImageX * newImageY, and any more would be redundant?
So, is the following the maximum possible number of filters, given a 3x3 filter, a 1x1 stride and 32x32 images?

Conv2D(30*30, kernel_size=(3,3), strides=(1,1), input_shape=(32,32,1))
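For reference, here is a minimal sketch of that layer in tf.keras (I’m assuming TensorFlow’s Keras API here; 30*30 is just spelled out as the filters argument):

```python
import tensorflow as tf

# 900 filters, 3x3 kernel, 1x1 stride, no padding, single-channel 32x32 input
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 1)),
    tf.keras.layers.Conv2D(filters=30 * 30, kernel_size=(3, 3), strides=(1, 1)),
])
model.summary()  # Conv2D output shape: (None, 30, 30, 900)
```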
Is there any reason to go above 30*30 filters, and are there any reasons to go below this number, assuming input_shape remains the same?
If you knew you were looking for one specific filter, would you have to use the maximum number of filters to ensure that the one you were looking for was present, or could you include it another way?