Tied embeddings in a sequence-to-sequence task


Is it sensible to use a tied embedding between the encoder and decoder in a sequence-to-sequence task where the question and the answer are in the same language?

This would considerably lower the number of trainable parameters, since only one embedding table is used instead of two, and to my limited knowledge it should not hurt the quality of the model.
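For what it's worth, here is a minimal PyTorch sketch of what such tying could look like (the model names and sizes here are illustrative, not from any particular paper): a single `nn.Embedding` is passed through both the encoder and decoder inputs, so the table's `vocab_size * emb_dim` parameters are counted only once.

```python
import torch
import torch.nn as nn


class TiedSeq2Seq(nn.Module):
    """Toy GRU encoder-decoder sharing one input embedding table."""

    def __init__(self, vocab_size: int, emb_dim: int, hidden_dim: int):
        super().__init__()
        # One embedding table, used for both source and target tokens.
        # Untied variants would allocate a second vocab_size x emb_dim table.
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
        # Encoder consumes the shared embedding of the source tokens.
        _, hidden = self.encoder(self.embedding(src))
        # Decoder reuses the exact same embedding weights for target tokens.
        dec_out, _ = self.decoder(self.embedding(tgt), hidden)
        return self.out(dec_out)


model = TiedSeq2Seq(vocab_size=1000, emb_dim=64, hidden_dim=128)
src = torch.randint(0, 1000, (2, 5))   # batch of 2 source sequences
tgt = torch.randint(0, 1000, (2, 7))   # batch of 2 target sequences
logits = model(src, tgt)               # shape: (2, 7, 1000)
```

Since both calls go through `self.embedding`, gradients from the encoder and decoder both update the same table; the saving relative to separate embeddings is exactly one `vocab_size * emb_dim` block of parameters.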


Posted 2020-04-30T19:26:51.820

