Why do position embeddings work?


In the papers "Convolutional Sequence to Sequence Learning" and "Attention Is All You Need", position embeddings are simply added to the input word embeddings to give the model a sense of the order of the input sequence. These position embeddings are generated from a sinusoidal signal that depends on the absolute position of the word in the sequence and on the dimension. The position embeddings have the same dimension as the word embeddings, and the two are simply summed.
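For concreteness, here is a minimal NumPy sketch of the sinusoidal scheme described in "Attention Is All You Need" (PE[pos, 2i] = sin(pos / 10000^(2i/d_model)), PE[pos, 2i+1] = cos(pos / 10000^(2i/d_model))); the function name and the placeholder `word_embeddings` array are my own for illustration, not from either paper:

```python
import numpy as np

def sinusoidal_position_embeddings(seq_len: int, d_model: int) -> np.ndarray:
    """Return a (seq_len, d_model) matrix of fixed sinusoidal position embeddings."""
    positions = np.arange(seq_len)[:, None]              # shape (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]              # even dimension indices
    angles = positions / np.power(10000.0, dims / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                           # sine on even dimensions
    pe[:, 1::2] = np.cos(angles)                           # cosine on odd dimensions
    return pe

# The position embeddings are simply summed with the word embeddings.
# `word_embeddings` here is a random placeholder standing in for real embeddings.
seq_len, d_model = 10, 16
word_embeddings = np.random.randn(seq_len, d_model)
model_input = word_embeddings + sinusoidal_position_embeddings(seq_len, d_model)
```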

I can understand that this helps the model get a sense of the ordering of the input, but I'm quite disturbed by the fact that adding these two vectors might also erase some of the information contained in the word embeddings. Do you have an explanation of why this might work (or not)? Is there some literature about it?

Robin

Posted 2018-11-08T16:05:00.290

Reputation: 1 267


Same question. Why can a randomly initialized matrix be trained to contain the position info? https://github.com/google-research/bert/blob/master/modeling.py#L491-L520

– 不是phd的phd – 2019-01-03T06:12:07.033

No answers