My network operates on 32x32 translationally normalized but noisy images. Its task is to determine whether an image has a simple symmetry (horizontal or vertical). It needs to be reasonably robust to rotation (up to 20 degrees).
I approached this task with a simple perceptron-like net with 2 hidden layers. It performs reasonably well (on the limited amount of data that I have), but I can't shake the feeling that this design is about the worst possible one for what I need.
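For concreteness, the net is shaped roughly like this PyTorch sketch (the hidden sizes and the single "has symmetry" output are illustrative placeholders, not my exact numbers):

```python
import torch.nn as nn

# Rough sketch of the current architecture: a plain MLP over the
# flattened 32x32 pixels. The hidden sizes (256, 64) and the single
# output logit are placeholders, not my actual values.
model = nn.Sequential(
    nn.Flatten(),             # 32x32 image -> 1024-dim vector
    nn.Linear(32 * 32, 256),  # hidden layer 1
    nn.ReLU(),
    nn.Linear(256, 64),       # hidden layer 2
    nn.ReLU(),
    nn.Linear(64, 1),         # logit for "has symmetry"
)
```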
- It is hard to judge the net's generalization capacity (it overfits very easily)
- It performs a lot worse than a simple deterministic program that I wrote for the same task
Symmetry is such a simple concept, but my neural network (being a plain feed-forward type) can't represent it efficiently. What I have in mind is a kind of RNN that decides which axis to fold the image along and then judges how well the folded halves of the image match. Are there papers on something like that?
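To make the fold-and-compare idea concrete, here is a deterministic sketch of the kind of score I mean (NumPy; the noise threshold of 0.1 is an arbitrary placeholder):

```python
import numpy as np

def fold_score(img: np.ndarray, axis: int) -> float:
    """Mean absolute difference between the image and its mirror
    across the given axis (axis=0 folds top onto bottom, axis=1
    folds left onto right). 0.0 means perfect symmetry."""
    img = img.astype(np.float64)  # avoid uint8 wraparound on subtraction
    flipped = np.flip(img, axis=axis)
    return float(np.mean(np.abs(img - flipped)))

def is_symmetric(img: np.ndarray, threshold: float = 0.1) -> bool:
    # Call the image symmetric if either fold matches well enough.
    # The threshold is a placeholder for tolerance to noise.
    return min(fold_score(img, 0), fold_score(img, 1)) < threshold
```

A learned version of this would additionally need to search over rotated fold axes (to handle the up-to-20-degree rotations) rather than only the two canonical ones.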