JavaScript client-side GAN implementation


Recently I came across this website, which is about a year old: https://affinelayer.com/pixsrv/

On desktop, we can draw in the browser, the trained model is downloaded, and the corresponding generated image appears on the right side.

Wondering how it works, I did some investigating and now have the following two questions:

  1. Why do we only download one file per “showcase”? From what I have read, a GAN should need one model for converting the image and one model for verifying it.

  2. Why do the models have a pict extension? Which framework did the author use to create the pict files? In the linked repositories I found TensorFlow, PyTorch, etc., none of which produces pict files...

Thanks in advance. I am very curious why such JavaScript client-side demos are so scarce on the internet... or maybe I just haven’t found the right keywords to search for them.

Sunny Pun

Posted 2018-03-01T15:21:52.770

Reputation: 121

Answers


  1. While training a GAN, two models are used: a generator and a discriminator. This training process usually takes hours (or days) to complete. It is an offline process and does not happen in the browser. Once training is finished, the discriminator is discarded and only the generator is needed to turn a sketch into an image, which is why each “showcase” downloads a single file (see the training sketch below this list).

  2. The pict file is the pre-trained model that has been imported into deeplearn.js for inference. The example you have linked to above accepts a sketch drawn on the canvas and uses the (previously trained) model with deeplearn.js to generate the result.
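
To make the two-model point concrete, here is a minimal, hypothetical training step written with TensorFlow.js (the successor of deeplearn.js). This is not the pix2pix training code, which runs offline in Python/TensorFlow; the tiny layer sizes and the names NOISE_DIM, DATA_DIM and trainStep are made up purely for illustration.

```javascript
// Minimal sketch only, NOT the pix2pix training code (which runs offline in
// Python/TensorFlow). It just shows that two models exist during training.
import * as tf from '@tensorflow/tfjs';

const NOISE_DIM = 16;  // made-up latent size
const DATA_DIM = 64;   // stand-in for a flattened image, kept tiny for brevity

// Generator: random noise -> fake sample.
const generator = tf.sequential({
  layers: [
    tf.layers.dense({inputShape: [NOISE_DIM], units: 64, activation: 'relu'}),
    tf.layers.dense({units: DATA_DIM, activation: 'tanh'}),
  ],
});

// Discriminator: sample -> probability that it is real.
const discriminator = tf.sequential({
  layers: [
    tf.layers.dense({inputShape: [DATA_DIM], units: 32, activation: 'relu'}),
    tf.layers.dense({units: 1, activation: 'sigmoid'}),
  ],
});
discriminator.compile({optimizer: tf.train.adam(2e-4), loss: 'binaryCrossentropy'});

// Combined model: trains the generator against a frozen discriminator.
discriminator.trainable = false;
const noiseInput = tf.input({shape: [NOISE_DIM]});
const combined = tf.model({
  inputs: noiseInput,
  outputs: discriminator.apply(generator.apply(noiseInput)),
});
combined.compile({optimizer: tf.train.adam(2e-4), loss: 'binaryCrossentropy'});

// One training step (memory management with tf.tidy omitted for brevity).
async function trainStep(realBatch) {
  const n = realBatch.shape[0];

  // 1) Discriminator step: real samples are labelled 1, generated samples 0.
  const fake = generator.predict(tf.randomNormal([n, NOISE_DIM]));
  const dLossReal = await discriminator.trainOnBatch(realBatch, tf.ones([n, 1]));
  const dLossFake = await discriminator.trainOnBatch(fake, tf.zeros([n, 1]));

  // 2) Generator step: push the frozen discriminator towards "real" (1) for fakes.
  const gLoss = await combined.trainOnBatch(tf.randomNormal([n, NOISE_DIM]), tf.ones([n, 1]));

  return {dLoss: (dLossReal + dLossFake) / 2, gLoss};
}

// After training, the discriminator is thrown away; only the generator's
// weights are exported and shipped to the browser.
```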

The full process used by the TensorFlow port of pix2pix is described here: https://github.com/affinelayer/pix2pix-tensorflow#getting-started
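
For the browser side, here is a rough sketch of the "load a pre-trained generator and run a canvas drawing through it" flow. It is not the demo's actual code: the demo stores its weights in a custom pict binary and loads them with deeplearn.js, whereas this sketch assumes a generator that has been converted for TensorFlow.js, and the model URL and canvas element ids are hypothetical. The 256x256 input scaled to [-1, 1] follows the usual pix2pix convention.

```javascript
// Rough sketch of the inference flow, assuming a generator converted for
// TensorFlow.js. The real demo uses deeplearn.js with a custom pict weights
// file, but the idea is the same: load the weights once, then run each drawing
// through the generator only. The URL and element ids below are hypothetical.
import * as tf from '@tensorflow/tfjs';

async function run() {
  const generator = await tf.loadGraphModel('https://example.com/edges2cats/model.json');

  const input = document.getElementById('sketch-canvas');   // where the user draws
  const output = document.getElementById('output-canvas');  // where the result appears

  // Canvas pixels -> [1, 256, 256, 3] float tensor scaled to [-1, 1].
  const x = tf.tidy(() => {
    const pixels = tf.browser.fromPixels(input).toFloat();
    const resized = tf.image.resizeBilinear(pixels, [256, 256]);
    return resized.div(127.5).sub(1).expandDims(0);
  });

  // One forward pass through the generator; no discriminator is involved here.
  const y = generator.predict(x);

  // Map the [-1, 1] output back to [0, 1] and draw it on the output canvas.
  const img = tf.tidy(() => y.squeeze().add(1).div(2).clipByValue(0, 1));
  await tf.browser.toPixels(img, output);

  tf.dispose([x, y, img]);
}

run();
```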

Tom Clive

Posted 2018-03-01T15:21:52.770

Reputation: 31