## Autoencoder produces repeated artifacts after convergence


As an experiment, I have tried using an autoencoder to encode height data from the Alps; however, the decoded image is very pixelated after training for several hours, as shown in the image below. The repeating pattern is larger than the final kernel size, so I would expect it to be possible to remove these repeating patterns from the image to some extent.

The image is (1, 512, 512) and is downsampled to (16, 32, 32). This is done with PyTorch. Here is the relevant part of the code, showing the exact layers.

    self.encoder = nn.Sequential(
        # Input is (N, 1, 512, 512)
        nn.Conv2d(1, 16, 3, padding=1),   # Shape (N, 16, 512, 512)
        nn.Tanh(),
        nn.MaxPool2d(2, stride=2),        # Shape (N, 16, 256, 256)
        nn.Conv2d(16, 32, 3, padding=1),  # Shape (N, 32, 256, 256)
        nn.Tanh(),
        nn.MaxPool2d(2, stride=2),        # Shape (N, 32, 128, 128)
        nn.Conv2d(32, 32, 3, padding=1),  # Shape (N, 32, 128, 128)
        nn.Tanh(),
        nn.MaxPool2d(2, stride=2),        # Shape (N, 32, 64, 64)
        nn.Conv2d(32, 16, 3, padding=1),  # Shape (N, 16, 64, 64)
        nn.Tanh(),
        nn.MaxPool2d(2, stride=2)         # Shape (N, 16, 32, 32)
    )
    self.decoder = nn.Sequential(
        # Transpose convolution operators
        nn.ConvTranspose2d(16, 32, 4, stride=2, padding=1),  # Shape (N, 32, 64, 64)
        nn.Tanh(),
        nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1),  # Shape (N, 32, 128, 128)
        nn.Tanh(),
        nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),  # Shape (N, 16, 256, 256)
        nn.Tanh(),
        nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),   # Shape (N, 1, 512, 512)
        nn.ReLU()
    )
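For reference, the commented shapes can be checked against the spatial-size formulas from the PyTorch docs. This is a quick pure-Python sketch (the helper names `conv_out`, `pool_out`, and `tconv_out` are my own, with defaults matching the layers above):

```python
# Spatial-size formulas (dilation = 1), per the PyTorch docs:
# Conv2d:           out = (in + 2*pad - kernel) // stride + 1
# MaxPool2d:        out = (in - kernel) // stride + 1
# ConvTranspose2d:  out = (in - 1) * stride - 2*pad + kernel
def conv_out(n, kernel=3, stride=1, pad=1):
    return (n + 2 * pad - kernel) // stride + 1

def pool_out(n, kernel=2, stride=2):
    return (n - kernel) // stride + 1

def tconv_out(n, kernel=4, stride=2, pad=1):
    return (n - 1) * stride - 2 * pad + kernel

n = 512
for _ in range(4):      # encoder: 3x3 "same" conv, then 2x2 pool halves the size
    n = pool_out(conv_out(n))
print(n)                # bottleneck spatial size: 32
for _ in range(4):      # decoder: each kernel-4/stride-2/pad-1 transposed conv doubles it
    n = tconv_out(n)
print(n)                # back to 512
```

So the architecture is shape-consistent: each transposed conv exactly doubles the spatial size, and the bottleneck is (16, 32, 32) as stated.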


Relevant image: left side original, right side result from autoencoder

So could these pixellated effects in the above image be resolved?

An autoencoder will always lose some features of the original image. Perhaps try increasing the size the original image is downsampled to before reconstruction – Recessive – 2020-01-09T00:37:12.587

I have tried it now; still the same result. I also tried not downsampling at all, i.e. encoding the image to (16, 128, 128) and then transforming it back. – Yadeses – 2020-01-09T12:52:07.587


Perhaps you are getting checkerboard artifacts, explained here. The suggested fix is to choose the kernel and stride sizes so that the kernel size is divisible by the stride. Besides that, you could apply Gaussian smoothing to the output to minimize the artifacts.
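The uneven-overlap mechanism behind checkerboard artifacts can be sketched in pure Python (the helper `overlap_counts` is my own illustration, not from the linked explanation): in a 1-D transposed convolution, each input pixel paints a kernel-wide window, and when the kernel size is not divisible by the stride, output pixels receive unequal numbers of contributions, which shows up as a periodic pattern.

```python
# Count how many kernel taps contribute to each output pixel of a 1-D
# transposed convolution. Uneven counts appear as periodic stripes;
# the 2-D analogue is the checkerboard pattern.
def overlap_counts(n_in, kernel, stride):
    n_out = (n_in - 1) * stride + kernel
    counts = [0] * n_out
    for i in range(n_in):               # each input pixel paints a kernel-wide window
        for k in range(kernel):
            counts[i * stride + k] += 1
    return counts

# kernel 3, stride 2: 3 not divisible by 2 -> alternating 1,2,1,2 coverage
print(overlap_counts(5, 3, 2))  # [1, 1, 2, 1, 2, 1, 2, 1, 2, 1, 1]
# kernel 4, stride 2: divisible -> uniform coverage away from the borders
print(overlap_counts(5, 4, 2))  # [1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 1, 1]
```

Note that the decoder in the question already uses kernel 4 with stride 2, which satisfies the divisibility condition, so if the pattern persists, other causes (e.g. the capacity of the bottleneck) may also be in play.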

For example, smoothing your image in OpenCV (here with a simple box blur via `cv2.blur`; `cv2.GaussianBlur` works similarly) results in

    import cv2
    import matplotlib.pyplot as plt

    img = cv2.imread('s.png')      # I took a screenshot to see how it would look
    blur = cv2.blur(img, (8, 8))   # box filter; cv2.GaussianBlur(img, (9, 9), 0) is comparable
    plt.imshow(blur)
    plt.show()


The artifact is gone using a kernel of size (8, 8). I hope this helps.