Image-to-image translation is the task of taking an image's characteristics from one domain and representing them in another. GANs provide an end-to-end method for this task. Before GANs, such tasks were tackled individually, mostly with classic image-processing techniques such as image denoising, edge detection, or compositing images retrieved from web search results.
As noted by Isola et al. in the CycleGAN paper, Hertzmann et al., in Image Analogies, employed a non-parametric texture model learned from a single input-output training image pair.
In Image Quilting for Texture Synthesis and Transfer, the authors stitched together existing image patches to synthesize new textures.
In Data-driven Hallucination of Different Times of Day from a Single Outdoor Photo, the authors match the input image against a dataset of time-lapse videos of similar scenes, retrieve the frame corresponding to the time of the input image and the frame at the desired target time, and then apply a locally affine transformation to convert the input scene into the target scene.
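To make the affine-transfer idea concrete, the following is a minimal sketch (not the authors' actual pipeline, which fits the transform locally): fit a single 3x4 affine color map by least squares between corresponding pixels of the retrieved source-time and target-time frames, then apply that map to the input photo. Function names and the global (rather than local) fit are illustrative assumptions.

```python
import numpy as np

def fit_affine_color_map(src, dst):
    """Least-squares fit of an affine color map dst ~= A @ c + b.

    src, dst: float arrays of shape (N, 3) holding corresponding
    RGB pixels from the source-time and target-time frames.
    Returns a 3x4 matrix M such that dst ~= M @ [c; 1].
    """
    ones = np.ones((src.shape[0], 1))
    X = np.hstack([src, ones])                  # (N, 4) homogeneous colors
    M, *_ = np.linalg.lstsq(X, dst, rcond=None) # (4, 3) solution
    return M.T                                  # (3, 4)

def apply_affine_color_map(img, M):
    """Apply the 3x4 affine color map to an (H, W, 3) image."""
    h, w, _ = img.shape
    flat = img.reshape(-1, 3)
    ones = np.ones((flat.shape[0], 1))
    out = np.hstack([flat, ones]) @ M.T         # (N, 3) mapped colors
    return out.reshape(h, w, 3)
```

In the paper's setting the transform is estimated per local region rather than globally, which lets the relighting vary across the scene; the global fit above only captures an overall color shift.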
More recent works use datasets of paired input-output examples to learn the translation function with CNNs, for instance fully convolutional networks for semantic segmentation.