Before GANs, what were the commonly used techniques for image-to-image translation?


As per a post, image-to-image translation is a type of computer vision problem.

I guess I understand the concept of image-to-image translation.


I am aware that GANs (generative adversarial networks) are good at this kind of problem.

I just wondered: what were the commonly used techniques for this kind of problem before GANs?

Could someone please give a hint? Thanks in advance.


Posted 2020-04-06T07:58:05.687

Reputation: 175



Image-to-image translation is the task of transferring an image's characteristics from one domain and representing them in another. GANs provide an end-to-end method for this task. Prior to GANs, these tasks were tackled individually, mainly with classic image-processing techniques: denoising an image, finding edges in photos, or compositing results retrieved from the web into a single image.
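As a toy illustration of such a hand-crafted "translation" (my own sketch, not from any of the cited papers), here is a Sobel edge detector in plain NumPy: it maps a grayscale image to an edge map with a fixed filter, no learning involved.

```python
import numpy as np

def sobel_edges(img):
    """Map a grayscale image to an edge-magnitude map using fixed
    Sobel filters -- a classic, non-learned image-to-image mapping."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h, w))
    padded = np.pad(img.astype(float), 1, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3]
            gx = np.sum(kx * patch)   # horizontal gradient response
            gy = np.sum(ky * patch)   # vertical gradient response
            out[i, j] = np.hypot(gx, gy)
    return out
```

Running this on an image with a vertical step produces strong responses along the step and zeros in flat regions; the point is that the "translation function" here is fixed by hand rather than learned from data.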

As mentioned by P. Isola et al. in the CycleGAN paper, Hertzmann et al., in their paper Image Analogies, employed a non-parametric texture model learned from a single input-output training image pair.
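The core idea can be sketched in a few lines: given a training pair A → A' and a new image B, synthesize B' by finding, for each pixel of B, the best-matching patch in A and copying the corresponding pixel of A'. This is a heavily simplified toy version (brute-force search; the real method uses multi-scale search and coherence terms):

```python
import numpy as np

def image_analogy(A, Ap, B, patch=3):
    """Toy sketch of the Image Analogies idea (A : A' :: B : B').
    For each pixel of B, brute-force search for the most similar
    patch in A and copy the corresponding pixel of A' into B'."""
    r = patch // 2
    Apad = np.pad(A.astype(float), r, mode="edge")
    Bpad = np.pad(B.astype(float), r, mode="edge")
    Bp = np.zeros_like(B, dtype=float)
    hA, wA = A.shape
    for i in range(B.shape[0]):
        for j in range(B.shape[1]):
            q = Bpad[i:i + patch, j:j + patch]       # query patch from B
            best, best_err = (0, 0), np.inf
            for y in range(hA):
                for x in range(wA):
                    p = Apad[y:y + patch, x:x + patch]
                    err = np.sum((p - q) ** 2)       # SSD patch distance
                    if err < best_err:
                        best_err, best = err, (y, x)
            Bp[i, j] = Ap[best]                      # copy 'filtered' pixel
    return Bp
```

For example, if A' is an inverted copy of A and B equals A, the output reproduces the inversion, since every query patch matches its own location exactly.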

In Image Quilting for Texture Synthesis and Transfer, the authors reused existing image patches and stitched them together to synthesize new imagery.
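A minimal sketch of that stitching idea (my simplification, assuming square patches and a plain SSD overlap criterion): lay patches left to right, each chosen so its left edge best matches the overlap region already placed. The real method additionally cuts a minimum-error seam through the overlap.

```python
import numpy as np

def quilt_row(src, patch=4, overlap=1, n_patches=4, seed=0):
    """Toy image quilting: synthesize one row of patches taken from
    `src`, each chosen to minimize SSD error against the overlap
    region left behind by the previous patch."""
    rng = np.random.default_rng(seed)
    h, w = src.shape
    # Enumerate all candidate patches of size (patch, patch) in src.
    cands = [src[y:y + patch, x:x + patch]
             for y in range(h - patch + 1)
             for x in range(w - patch + 1)]
    step = patch - overlap
    out = np.zeros((patch, patch + step * (n_patches - 1)))
    out[:, :patch] = cands[rng.integers(len(cands))]  # random seed patch
    for k in range(1, n_patches):
        x0 = k * step
        target = out[:, x0:x0 + overlap]              # existing overlap
        errs = [np.sum((c[:, :overlap] - target) ** 2) for c in cands]
        out[:, x0:x0 + patch] = cands[int(np.argmin(errs))]
    return out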

In Data-driven Hallucination of Different Times of Day from a Single Outdoor Photo, the authors compare the input image against a dataset of time-lapse videos of similar scenes, find the frame matching the time of the input image as well as the frame for the target time, and then use a locally affine transformation to transfer that change onto the input image.
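To make the affine-transfer step concrete, here is a hedged sketch (a global affine color fit, whereas the paper fits local ones): given matched pixel colors from the source-time and target-time frames, fit tgt ≈ src · M + b by least squares, then apply that map to the input photo's colors.

```python
import numpy as np

def fit_affine_color_transform(src, tgt):
    """Least-squares fit of an affine color map tgt ~= src @ M + b.
    src, tgt: (N, 3) arrays of matched RGB samples. Global fit for
    brevity; the paper's method fits such maps locally."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])   # append bias column
    coeffs, *_ = np.linalg.lstsq(X, tgt, rcond=None)
    return coeffs[:3], coeffs[3]            # M (3x3), b (3,)

def apply_transform(pixels, M, b):
    """Apply the fitted affine map to an (N, 3) array of colors."""
    return pixels @ M + b
```

On data that really is related by an affine map, the fit recovers it exactly; on real frames it gives the best affine approximation of the color change between the two times of day.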

More recent works have focused on learning the translation function from a dataset of paired input-output examples using CNNs, e.g. Fully Convolutional Networks for Semantic Segmentation.
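The structural idea behind FCN-style segmentation can be sketched as follows (a toy, single-layer illustration of mine; real FCNs stack many learned layers plus upsampling): convolve the image with K filters to get per-pixel class scores, then take an argmax to obtain a label map the same size as the input.

```python
import numpy as np

def conv2d(img, kernels, pad=1):
    """'Same' cross-correlation (ML-style convolution) of an (H, W)
    image with a stack of (K, 3, 3) kernels, returning (K, H, W)."""
    h, w = img.shape
    p = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros((kernels.shape[0], h, w))
    for k, ker in enumerate(kernels):
        for i in range(h):
            for j in range(w):
                out[k, i, j] = np.sum(ker * p[i:i + 3, j:j + 3])
    return out

def tiny_fcn(img, kernels):
    """Toy fully convolutional 'network': per-pixel class scores from
    K filters, argmaxed into a dense label map the size of the input."""
    scores = conv2d(img, kernels)
    return np.argmax(scores, axis=0)
```

With two hand-set filters that respond to bright and dark regions, this already produces a dense per-pixel labeling; an actual FCN learns the filters end-to-end from paired images and label maps.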

Aniket Velhankar

Posted 2020-04-06T07:58:05.687

Reputation: 76