I am confused about the Q values of a duelling deep Q network (DQN). As far as I know, duelling DQNs have 2 outputs
Value: how good it is to be in a particular state $s$
Advantage: how much better it is to choose a particular action $a$ compared to the other actions in that state
We can combine these two outputs into Q values (the expected return for choosing action $a$ in state $s$) by adding them together.
However, in a DQN, we get Q values from the single output layer of the network.
Now, suppose I take the same DQN model, keep the very same weights in the input and hidden layers, and replace only the output layer that produces Q values with separate advantage and value outputs. If I then add those two outputs together during training, will that give me the same Q values for a particular state, assuming all other parameters of both algorithms are identical apart from the output layers?