Encoder-Decoder Sequence-to-Sequence Model for Translations in Both Directions


Is it possible to use a pre-trained sequence-to-sequence encoder-decoder model, which translates an input text in a source language into an output in a target language, to do the inverse translation? That is, can it take an input in the target language and output a sequence in the source language?

Are there architectures that can do translation (or sequence-to-sequence generation) in both directions?
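For context, one known way to get a single model to translate in both directions is to train it on pairs from both directions, with a token prepended to the input that tells the decoder which language to produce (the idea behind multilingual NMT). A minimal sketch of the data preparation, with illustrative token names:

```python
# Sketch: prepare training pairs for ONE seq2seq model that handles both
# directions. A direction tag ("<2tgt>" / "<2src>", hypothetical names) is
# prepended to the input so the model knows which output language to emit.

def tag_example(src_tokens, tgt_tokens, direction):
    """Prepend a direction token so a single model can learn both mappings."""
    tag = "<2tgt>" if direction == "src->tgt" else "<2src>"
    return [tag] + src_tokens, tgt_tokens

# Forward pair: English -> German
fwd = tag_example(["a", "cat"], ["eine", "Katze"], "src->tgt")
# Inverse pair: German -> English, trained with the same parameters
inv = tag_example(["eine", "Katze"], ["a", "cat"], "tgt->src")

print(fwd)  # (['<2tgt>', 'a', 'cat'], ['eine', 'Katze'])
print(inv)  # (['<2src>', 'eine', 'Katze'], ['a', 'cat'])
```

Note this requires training (or fine-tuning) on data in both directions; a model pre-trained on only one direction cannot simply be run in reverse, since its encoder and decoder vocabularies are tied to specific languages.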


Posted 2018-08-01T13:30:11.267
