Seq2Seq Model training: Encoder vs. Decoder

0

Can someone point me to an article that explains how model training is done in Seq2Seq? I know "teacher forcing" is used, but what I have found so far hasn't been detailed enough. What I am most confused about is where the training actually happens. Does the back-propagation go back through the encoder?

Any insight here would be really appreciated. Thanks!

kee

Posted 2019-06-14T16:45:21.113

Reputation: 163

Answers

-1

You can check this Medium article: https://link.medium.com/iolQKIWawX. You can also find more detailed posts on Medium itself; the shared link includes references to those.
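
To add a short summary of the mechanics, since the question is specifically about back-propagation: yes, the encoder and decoder are trained jointly as a single network. The loss is computed on the decoder's predictions, and the gradients flow back through the decoder into the encoder, so both are updated by the same optimizer step. Below is a minimal sketch of one training step with teacher forcing, assuming PyTorch with GRU layers; the vocabulary sizes, shapes, and names are illustrative and not taken from the linked article.

    import torch
    import torch.nn as nn

    class Seq2Seq(nn.Module):
        def __init__(self, src_vocab, tgt_vocab, hidden=256):
            super().__init__()
            self.src_emb = nn.Embedding(src_vocab, hidden)
            self.tgt_emb = nn.Embedding(tgt_vocab, hidden)
            self.encoder = nn.GRU(hidden, hidden, batch_first=True)
            self.decoder = nn.GRU(hidden, hidden, batch_first=True)
            self.out = nn.Linear(hidden, tgt_vocab)

        def forward(self, src, tgt_in):
            # Encode the source; the final hidden state seeds the decoder.
            _, h = self.encoder(self.src_emb(src))
            # Teacher forcing: feed the ground-truth target tokens
            # (shifted right) as decoder inputs instead of the decoder's
            # own previous predictions.
            dec_out, _ = self.decoder(self.tgt_emb(tgt_in), h)
            return self.out(dec_out)

    model = Seq2Seq(src_vocab=1000, tgt_vocab=1000)
    opt = torch.optim.Adam(model.parameters())
    loss_fn = nn.CrossEntropyLoss()

    src = torch.randint(0, 1000, (32, 10))  # dummy source batch
    tgt = torch.randint(0, 1000, (32, 11))  # dummy target batch (starts with BOS)

    opt.zero_grad()
    logits = model(src, tgt[:, :-1])        # decoder inputs: all but last token
    loss = loss_fn(logits.reshape(-1, 1000), tgt[:, 1:].reshape(-1))
    loss.backward()  # one loss; gradients flow through decoder AND encoder
    opt.step()       # updates encoder and decoder parameters together

Note there is no separate "encoder training": the single loss on the decoder outputs trains everything end to end.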

vipin bansal

Posted 2019-06-14T16:45:21.113

Reputation: 1,322