How to normalise image input to a backpropagation algorithm?


I am implementing a simple backpropagation neural network for classifying images. One set of images contains cars; the other contains buildings (houses). So far I have applied a Sobel edge detector after converting the images to grayscale (black and white). I need a way to remove the offset (in other words, normalise the input) of where the car or the house is in the image.
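For reference, the preprocessing step described above can be sketched roughly as follows (a minimal NumPy-only version; the helper names `conv2d` and `sobel_edges` are my own, not from the original post):

```python
import numpy as np

# Standard 3x3 Sobel kernels for horizontal and vertical gradients
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2d(img, kernel):
    """Naive 'valid' 2-D cross-correlation, adequate for 3x3 kernels."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sobel_edges(gray):
    """Edge-magnitude map of a 2-D grayscale array."""
    gx = conv2d(gray, SOBEL_X)  # horizontal gradient
    gy = conv2d(gray, SOBEL_Y)  # vertical gradient
    return np.hypot(gx, gy)
```

The resulting edge-magnitude map (flattened) would be the candidate network input; the question is how to make it insensitive to where the object sits in the frame.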

Will taking the 2-D discrete cosine transform (DCT) remove the offset? (The input to the neural network would then be the DCT coefficients.) To be clear, by offset I mean a pair of values (horizontal and vertical displacements in pixels) giving the position of the car or the building in the 2D image relative to the origin.
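One relevant fact here is the Fourier shift theorem: a (circular) translation of the image multiplies the DFT by a phase factor, so the DFT *magnitude* is translation-invariant, whereas raw DCT coefficients in general are not. A quick numerical check of the DFT case, assuming NumPy (the variable names are illustrative, not from the post):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((16, 16))
# Circularly shift the image 3 pixels down and 5 pixels right
shifted = np.roll(img, shift=(3, 5), axis=(0, 1))

# Magnitude spectra of the original and shifted images
mag = np.abs(np.fft.fft2(img))
mag_shifted = np.abs(np.fft.fft2(shifted))

print(np.allclose(mag, mag_shifted))  # True: magnitude ignores circular shifts
```

Note this invariance is exact only for circular shifts; for an object moving inside a fixed frame with new background entering, it holds only approximately.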


Posted 2020-09-01T18:59:48.817

Reputation: 11

For image classification, in general, I'd recommend going for Convolutional Neural Networks. Is there a particular reason/constraint why you are not using them, but trying to hand-craft features, instead? – Daniel B. – 2020-09-02T20:01:38.307

No answers