I'm trying to build a toy recommendation engine to wrap my mind around Singular Value Decomposition (SVD). I've read enough content to understand the motivations and intuition behind the actual decomposition of the matrix A (a user *x* movie matrix).

I need to know more about what goes on after that.

```
from numpy.linalg import svd
import numpy as np

A = np.array([
    [0, 0, 0, 4, 5],
    [0, 4, 3, 0, 0],
    ...
])

# numpy returns the right factor already transposed (Vt = V.T)
U, S, Vt = svd(A, full_matrices=False)

k = 2  # dimension reduction; k must be < min(A.shape) to actually reduce anything
A_k = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]
```
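For reference, here is a fully runnable version of the snippet with a small hypothetical rating matrix standing in for the elided data (the extra rows are made up for illustration):

```python
import numpy as np
from numpy.linalg import svd

# Hypothetical 4-user x 5-movie rating matrix; 0 means "not rated".
A = np.array([
    [0, 0, 0, 4, 5],
    [0, 4, 3, 0, 0],
    [5, 5, 0, 0, 0],
    [0, 0, 4, 4, 0],
], dtype=float)

# numpy returns the right factor already transposed (rows of Vt are the
# right singular vectors).
U, S, Vt = svd(A, full_matrices=False)

k = 2  # keep the top-k singular values; k must be < rank(A) for real reduction
A_k = U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

# A_k is the best rank-k approximation of A in the least-squares sense;
# its entries in the originally-zero cells are the "filled in" scores.
print(np.round(A_k, 2))
```

With `full_matrices=False` the factors have shapes `(4, 5)`, `(5,)`, and `(5, 5)` here, so `U @ np.diag(S) @ Vt` reconstructs `A` exactly, while truncating to `k` columns/rows gives the rank-`k` approximation.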

Three questions:

1. Do the values of the matrix `A_k` represent the predicted/approximate ratings?
2. What role does cosine similarity play in the recommendation, and at which step?
3. I'm using Mean Absolute Error (MAE) to calculate my error, but which values am I comparing? Something like `MAE(A, A_k)`, or something else?
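On the MAE question, one common convention (an assumption here, since it depends on your evaluation setup) is to compare `A` and `A_k` only on the entries that were actually rated, or better, on a held-out set of known ratings. A minimal sketch with a made-up example:

```python
import numpy as np

def mae_observed(A, A_k):
    """MAE between A and its approximation, counted only where a rating exists.

    Assumes 0 encodes "unrated", so those cells are excluded -- comparing the
    full matrices would penalize the model for filling in unknown entries.
    """
    mask = A != 0
    return np.abs(A[mask] - A_k[mask]).mean()

# Tiny hypothetical example: two users, three movies.
A = np.array([[0., 5., 3.],
              [4., 0., 1.]])
A_k = np.array([[1.0, 4.5, 3.2],
                [3.8, 2.0, 1.1]])
print(mae_observed(A, A_k))  # -> 0.25
```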

Well, what I understand is that A_k is an approximation of A after excluding all of the "noisy" or unimportant underlying features (the smallest singular values). What I don't understand is how one goes about making rating predictions once the matrix factorization is done. – William Gottschalk – 2016-12-01T03:24:59.887
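To make the "what happens after the decomposition" question concrete, one common pattern (a sketch, not the only recipe) is to embed each movie as a column of `np.diag(S[:k]) @ Vt[:k, :]` and compare those latent vectors with cosine similarity; predictions can then be read directly off `A_k`, or formed as a similarity-weighted average over a user's rated items. The rating matrix below is hypothetical:

```python
import numpy as np
from numpy.linalg import svd

A = np.array([
    [0, 0, 0, 4, 5],
    [0, 4, 3, 0, 0],
    [5, 5, 0, 0, 0],
    [0, 0, 4, 4, 0],
], dtype=float)

U, S, Vt = svd(A, full_matrices=False)
k = 2

# Each movie becomes a k-dimensional latent vector (one row per movie).
item_factors = (np.diag(S[:k]) @ Vt[:k, :]).T   # shape (n_movies, k)

def cosine_sim(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Similarity of the last two movies (both liked by user 0) in latent space:
sim = cosine_sim(item_factors[3], item_factors[4])
print(round(sim, 3))
```

A "recommend for user i" step would then rank the unrated movies by their similarity to the movies user i rated highly (or simply by the values in row i of `A_k`).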

That's correct in the eigensense, but incorrect in the sense of what makes a feature important. SVD chooses the features that are the most variant, which is not necessarily the most important, though it typically is. In the pathological case, SVD can choose the worst features if your outcome has low variance. – franciscojavierarceo – 2016-12-01T04:39:29.577

OK, I follow that. But my question is: what happens *after* the SVD decomposes a matrix? How do I obtain recommendations from those pieces? – William Gottschalk – 2016-12-01T04:55:36.587

This answer doesn't answer the question. – SmallChess – 2016-12-01T08:43:44.280

@StudentT I've updated my response. – franciscojavierarceo – 2016-12-01T11:08:11.137