I was reading the paper "Attention Is All You Need" (https://arxiv.org/pdf/1706.03762.pdf) and came across this site, http://jalammar.github.io/illustrated-transformer/, which provides a great breakdown of the Transformer architecture.
Unfortunately, I was unable to find any explanation of why it works when the input and output lengths are not equal (e.g. input: “je suis étudiant”, expected output: “i am a student”).
My main confusion is this. From what I understand, the output of the encoder (say a $3 \times 10$ matrix in this case: 3 source words, each with a 10-dimensional representation) is passed to the decoder via a Multi-Head Attention layer, which takes in 3 inputs:
- A Query (from the decoder), of dimension $L_0 \times k_1$, where $L_0$ is the number of words in the (masked) output sentence
- A Key (from the encoder), of dimension $3 \times k_1$
- A Value (from the encoder), of dimension $3 \times k_1$
Given that Multi-Head Attention takes in 3 matrices which (at least as I have understood its architecture) must have the same number of rows, how do we deal with the problem of varying output lengths?
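To make the shapes concrete, here is a minimal NumPy sketch of single-head scaled dot-product attention with the dimensions above. The variable names and the feature size $k_1 = 10$ are placeholders of my own, and I have left out the learned projection matrices $W^Q$, $W^K$, $W^V$ and the multi-head split:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

L_src, L_tgt, k1 = 3, 4, 10       # 3 source words, 4 target words, feature size 10 (made up)

Q = np.random.randn(L_tgt, k1)    # Query: from the decoder, (L_0 x k_1)
K = np.random.randn(L_src, k1)    # Key:   from the encoder, (3 x k_1)
V = np.random.randn(L_src, k1)    # Value: from the encoder, (3 x k_1)

scores  = Q @ K.T / np.sqrt(k1)   # (L_tgt x L_src): one row of attention logits per target word
weights = softmax(scores, axis=-1)
out     = weights @ V             # (L_tgt x k_1): same number of rows as the Query

print(out.shape)                  # (4, 10)
```

Written out like this, the row count of the Query and the row count of the Key/Value pair do not even appear to be coupled, which only deepens my confusion about where the "same number of rows" requirement would come from.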