Many algorithms are currently available for image inpainting. In my application, I have some special restrictions on the training dataset:
- Let's consider the training dataset of human facial images.
- Although all human faces share the same general structure, they may differ subtly depending on racial characteristics.
- Consider that in the training dataset, we have ten facial images from each race.
Now, for my learning algorithm, can we come up with a two-step method? In the first step, we would learn the general facial structure accurately using all of the training data. In the second step, we would learn the subtle features of each race by training only on the ten images associated with that race. This might restore a distorted image more accurately.
Suppose we have a distorted facial image of a person from race 'A,' where the nasal area of the image is lost. With the first step, we can learn the nasal structure accurately from all of the training data, and in the second step we can fine-tune that result using only the ten images associated with race 'A.' Since we have only ten images from race 'A,' if we used only that small subset to train the whole model, we would probably fail to capture the general structure of the face in the first place.
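For what it's worth, this two-step idea closely resembles the standard pretrain-then-fine-tune pattern from transfer learning. Below is a minimal, purely illustrative sketch of the idea on a toy one-parameter "model" (not an actual inpainting network): step 1 fits the parameter on all samples, step 2 starts from that shared result and fine-tunes a copy on each small per-race subset. All names and numbers here (`train`, the race offsets, ten samples per group) are hypothetical choices for the illustration.

```python
import random

def train(samples, w_init=0.0, lr=0.1, epochs=200):
    """Gradient descent on mean squared error: w converges toward
    the mean of the samples (a stand-in for 'learning a feature')."""
    w = w_init
    for _ in range(epochs):
        grad = sum(2 * (w - s) for s in samples) / len(samples)
        w -= lr * grad
    return w

random.seed(0)
# Simulated 'nasal feature' values: a shared component (5.0) plus a
# race-specific offset, with ten samples per race as in the question.
races = {
    "A": [5.0 + 1.0 + random.gauss(0, 0.1) for _ in range(10)],
    "B": [5.0 - 1.0 + random.gauss(0, 0.1) for _ in range(10)],
}

# Step 1: learn the general structure from ALL the data.
all_samples = [s for group in races.values() for s in group]
w_general = train(all_samples)

# Step 2: fine-tune a copy per race, initialized from the shared result.
w_per_race = {r: train(g, w_init=w_general, epochs=50)
              for r, g in races.items()}

print(w_general)        # near the pooled mean (~5.0)
print(w_per_race["A"])  # shifted toward race A's mean (~6.0)
```

In a real inpainting setup, step 1 would pretrain the network weights on all faces and step 2 would continue training (typically with a smaller learning rate and/or fewer epochs, as sketched here) on each race's ten images, so the small subset only has to adjust the subtle details rather than learn the whole face from scratch.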
P.S. I am not from a Computer Science/ML background, so my problem description is probably a little vague. It would be great if someone could provide edit/tag suggestions.