## Concrete example of how transposed convolutions are able to *add* features to an image


Say we have a simple grayscale image. If we use a 3x3 filter that is just the identity matrix (or, better, the identity matrix with -1 in place of the 0 entries), it is fairly easy to see how applying this filter with stride 1 and padding 1 would produce an image of the same size that represents the presence of north-west diagonals in the input image.
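To make this concrete, here is a small sketch of the filter I mean, written as a plain stride-1, padding-1 cross-correlation in NumPy (the kernel and the 5x5 test image are my own toy choices):

```python
import numpy as np

# The filter I describe: the 3x3 identity, with -1 in place of the 0 entries.
kernel = np.where(np.eye(3, dtype=bool), 1.0, -1.0)

def conv2d_same(image, k):
    """Stride-1 cross-correlation after zero-padding by 1, so the
    output has the same size as the input."""
    padded = np.pad(image, 1)
    h, w = image.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * k)
    return out

# A toy 5x5 image whose only feature is a north-west diagonal of ones.
img = np.eye(5)
response = conv2d_same(img, kernel)
# The response is largest along the diagonal, where the filter lines up
# with the feature, so the output map "detects" north-west diagonals.
```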

As I read more about the generator networks in the GAN paradigm, I am learning that 'transposed convolutions' are used to turn Gaussian noise into meaningful images like a human face. However, when I look for sources on transposed convolutions, most articles address the upscaling use of these convolutions rather than their 'generative' properties. Also, it is not clear to me that upscaling is even necessary in these applications, since we could start with noise that has the same resolution as our desired output.

I am asking for an example, article, or paper that can give me more understanding of the feature-generation aspect of transposed convolutions. I have found an interesting article that relates the word 'transpose' to the transpose of a matrix. I have a good background in linear algebra, and I understand how the transpose would swap the dimensions of the input/output. This has an obvious relation to upscaling/downscaling, but that effect would occur if we replaced the m x n matrix with *any* n x m matrix, not specifically the transpose. Essentially, I'm not sure how the actual transpose can go from detecting a given feature associated with a convolutional filter to producing that same feature.
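Here is a sketch of the matrix form I have in mind, using a 1-D 'valid' convolution for simplicity (the kernel and sizes are arbitrary choices of mine):

```python
import numpy as np

# A 1-D kernel and a 'valid' convolution written as a matrix.
k = np.array([1.0, -2.0, 1.0])
n = 6                       # input length
m = n - len(k) + 1          # output length = 4

# Build M so that M @ x equals the sliding-window correlation of x with k:
# each row of M holds a shifted copy of the kernel.
M = np.zeros((m, n))
for i in range(m):
    M[i, i:i + len(k)] = k

x = np.arange(n, dtype=float)
assert np.allclose(M @ x, np.correlate(x, k, mode="valid"))

# M maps R^6 -> R^4 (downscaling); M.T maps R^4 -> R^6 (upscaling).
# As I note above, *any* 6x4 matrix would swap the dimensions this way --
# the transpose is just one specific choice among them.
```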

EDIT: I've done some thinking, and it is clear to me now how the transpose matrix will produce an 'input image' that has the features specified by a given feature map. That is, if $$M$$ is the matrix given by the convolution operation, and $$F$$ is a feature map, then

$$M^T F$$ will produce an image with the corresponding features. It's obviously not a perfect inverse operation, but it works. However, I still don't see how to interpret this transposed matrix as a convolution in its own right.
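To illustrate what I mean by $$M^T F$$ producing the feature, here is a 1-D sketch (my own toy kernel and sizes again). It also checks the identity I am asking about: multiplying by $$M^T$$ comes out the same as a full convolution of $$F$$ with the kernel, i.e. a stride-1 cross-correlation with the *flipped* kernel after zero-padding, which is why the transpose is itself a convolution:

```python
import numpy as np

# Same setup as before: a 1-D 'valid' convolution written as a matrix M.
k = np.array([1.0, -2.0, 1.0])
n, m = 6, 4                 # input length / feature-map length

M = np.zeros((m, n))
for i in range(m):
    M[i, i:i + len(k)] = k

# A feature map with a single activation at position 1.
F = np.array([0.0, 1.0, 0.0, 0.0])

# M.T @ F "paints" a copy of the kernel into the image at that position...
image = M.T @ F

# ...and equals the full convolution of F with k, i.e. a cross-correlation
# with the flipped kernel after zero-padding F. So M.T is a convolution too.
assert np.allclose(image, np.convolve(F, k, mode="full"))
```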