I think your last question is worth discussing, but forgive my carelessness in skipping the details of the model and just leaving a quick answer here :P
Repeating a sentence in your corpus would definitely change the learning result and strengthen the relationship between the words in that sentence, because one of the models behind word2vec is
skip-gram, which assumes the center word can be used to predict its surrounding words. Duplicating a sentence duplicates its (center, context) training pairs, so those co-occurrences get more weight during training.
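To make that concrete, here is a minimal sketch (plain Python, with a hypothetical helper name `skipgram_pairs`) of the training pairs skip-gram derives from a corpus; it shows that repeating a sentence simply multiplies the count of each of its pairs:

```python
from collections import Counter

def skipgram_pairs(corpus, window=2):
    """Count the (center, context) pairs skip-gram would train on."""
    pairs = []
    for sentence in corpus:
        for i, center in enumerate(sentence):
            # every word within `window` positions of the center is a context word
            lo, hi = max(0, i - window), min(len(sentence), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    pairs.append((center, sentence[j]))
    return Counter(pairs)

base = [["the", "cat", "sat"]]
repeated = base * 3  # same sentence three times

# Repeating the sentence triples the count of each of its pairs,
# so those co-occurrences dominate the gradient updates.
print(skipgram_pairs(base)[("cat", "sat")])      # 1
print(skipgram_pairs(repeated)[("cat", "sat")])  # 3
```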
But I have to ask a follow-up question: what is our purpose in using word2vec?
- To find semantically and syntactically similar words, which is useful for search and information retrieval.
- A skip-gram model is also useful for modeling sequence data such as click streams, which can be used in recommendation systems.