
I have a similarity/distance matrix:

```
  | a | b | c
a | 0 | 1 | 2
b | 1 | 0 | 3
c | 2 | 3 | 0
```

I want to build an encoder/model that learns an n-dimensional representation of each point in the dataset, such that the Euclidean distance between two representations reproduces the distance given in the matrix, e.g. distance(a, b) = 1, etc.

Any ideas?
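One classical technique that does exactly this is (metric) multidimensional scaling, MDS: double-centre the squared distance matrix to get a Gram matrix, then read coordinates off its eigendecomposition. A minimal numpy sketch (my own illustration, not from the post; `classical_mds` is a hypothetical helper name):

```python
import numpy as np

def classical_mds(D, n_dims):
    """Embed points so Euclidean distances approximate D.

    Exact when D is a Euclidean distance matrix; otherwise a
    least-squares approximation in n_dims dimensions.
    """
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centred Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    idx = np.argsort(eigvals)[::-1][:n_dims]   # largest eigenvalues first
    scale = np.sqrt(np.clip(eigvals[idx], 0, None))
    return eigvecs[:, idx] * scale             # (n, n_dims) coordinates

# the matrix from the question
D = np.array([[0., 1., 2.],
              [1., 0., 3.],
              [2., 3., 0.]])

X = classical_mds(D, n_dims=2)
recovered = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
# recovered matches D here, since this D happens to be Euclidean
```

For this particular matrix the three points are even colinear (d(a,b) + d(a,c) = d(b,c)), so one dimension already suffices; for larger, non-Euclidean matrices, MDS gives the best low-dimensional approximation in a least-squares sense.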

Why can't you just use the rows as the representations (e.g. a = [0, 1, 2], b = [1, 0, 3], etc.)? – Vincenzo Lavorini – 2018-02-18T15:24:06.923

Because the Euclidean distance between [0, 1, 2] and [1, 0, 3] does not equal 1. – kPow989 – 2018-02-18T15:54:01.067

But if you impose a distance between two of those vectors, the others will come out scaled accordingly. If you want the distance between any two of those vectors to equal one, then you lose information about the real distances between them, so there is no point in starting from that matrix. – Vincenzo Lavorini – 2018-02-18T17:12:32.443
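The point raised in the second comment is easy to verify numerically: the rows of the matrix do not work as embeddings, since the distance between the rows for a and b is √3, not the required 1 (numpy used purely for illustration):

```python
import numpy as np

# rows of the distance matrix, used (incorrectly) as coordinates
a = np.array([0., 1., 2.])
b = np.array([1., 0., 3.])

d_ab = np.linalg.norm(a - b)  # sqrt(1 + 1 + 1) = sqrt(3) ~ 1.732, but the matrix says 1
```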