Reversing A Keras Dense GAN


I have a Keras GAN whose generator is a stack of Dense layers, each with more neurons than the last, and every layer uses a LeakyReLU(alpha=0.1) activation. I am trying to map a generated image back to its noise vector by working backwards through the layers: for each layer I solve a system of linear equations for the "output to be", i.e. the pre-activation value, which is the weighted sum of the previous layer's outputs plus a bias, before it goes through the activation function. I undo the LeakyReLU by multiplying every negative output-to-be by 10 (that is, 1/alpha), so that I only have to solve a set of linear equations. It gives wrong results, though. Here is my code:

import numpy as np
from numpy.linalg import solve

def inversemodel(model, predict):
    current = predict[:]
    layerlist = model.layers[:]
    layerlist.reverse()  # walk the generator from output back to input
    for layer in layerlist:
        weights, biases = layer.get_weights()  # kernel (n_in, n_out), bias (n_out,)
        lastlayerlen = weights.shape[0]
        # transpose the kernel and keep only the first n_in equations
        # so that the system is square
        a = weights.T[:lastlayerlen]
        b = np.negative(biases)[:lastlayerlen]
        # undo the LeakyReLU on the target outputs:
        # negatives were scaled by alpha=0.1 on the way forward
        tobe = np.array(current[:lastlayerlen])
        tobe[tobe < 0] *= 10  # 1 / alpha
        # minus biases from both sides, then solve for the previous layer
        b = np.subtract(b, tobe)
        current = solve(a, b).tolist()
    return current
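For comparison, here is a minimal sketch of the round trip I am aiming for, in plain NumPy with synthetic weights instead of a real Keras model (the function name, the `(W, b)` list format, and the use of `np.linalg.lstsq` are my own choices for illustration): since each layer has more outputs than inputs, the per-layer system `x @ W = z - b` is overdetermined, so a least-squares solve is used rather than truncating it to a square system.

```python
import numpy as np

def invert_dense_leakyrelu(weights_biases, output, alpha=0.1):
    """Invert a stack of Dense + LeakyReLU layers, last layer first.

    weights_biases: list of (W, b) pairs, W of shape (n_in, n_out).
    output: the final activation vector of the generator.
    """
    current = np.asarray(output, dtype=float)
    for W, b in reversed(weights_biases):
        # Undo LeakyReLU: negative outputs were scaled by alpha going forward.
        z = np.where(current < 0, current / alpha, current)
        # Solve x @ W = z - b, i.e. W.T @ x = z - b, in the least-squares
        # sense; with n_out > n_in the system is overdetermined.
        current, *_ = np.linalg.lstsq(W.T, z - b, rcond=None)
    return current

# Round-trip demo on random widening layers (4 -> 8 -> 16).
rng = np.random.default_rng(0)
dims = [4, 8, 16]
wb = [(rng.standard_normal((dims[i], dims[i + 1])),
       rng.standard_normal(dims[i + 1])) for i in range(len(dims) - 1)]
x = rng.standard_normal(dims[0])
out = x
for W, b in wb:
    z = out @ W + b
    out = np.where(z < 0, 0.1 * z, z)          # forward LeakyReLU(0.1)
recovered = invert_dense_leakyrelu(wb, out)    # should match x closely
```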

If this question is better suited to Stack Overflow, please move it rather than delete it.

Aphrodite

Posted 2020-01-11T14:20:00.430


No answers