How does exp(-z) work in a sigmoid function in neural networks when z is a matrix?


function g = sigmoid(z)

%SIGMOID Compute sigmoid function
%g = SIGMOID(z) computes the sigmoid of z, where z may be a scalar, vector, or matrix.

g = 1.0 ./ (1.0 + exp(-z));

end

I'm going through the Andrew Ng Coursera course. My question is: how is exp(-z) computed directly when z is a matrix?

akib

Posted 2019-01-14T18:47:40.267

Reputation: 11

Word of advice: You really have to think in terms of vectors and matrices instead of scalars (ok, a scalar is just a fancy word for "single number"). This will be particularly important when calculating the loss too! – Juan Antonio Gomez Moriano – 2019-01-14T21:35:58.050

Answers


In many languages and libraries, operations that apply to a scalar can be applied to vectors, matrices and tensors. They're just applied element-wise, and the result is another vector, matrix, etc with each value transformed by that function.
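
For example, here is a minimal sketch in Octave/MATLAB (the language used in the course exercises); the matrix values are made up purely for illustration:

z = [0 1; -2 3];             % a 2x2 example matrix (hypothetical values)

% exp(-z) exponentiates each entry individually:
% [exp(0) exp(-1); exp(2) exp(-3)]
g = 1.0 ./ (1.0 + exp(-z));  % ./ also divides element-wise

disp(g)                      % each entry of g is the sigmoid of the matching entry of z,
                             % e.g. g(1,1) = 1 / (1 + exp(0)) = 0.5

This element-wise behavior is also why the implementation uses ./ rather than /: the plain / operator in Octave/MATLAB means matrix (right) division, while ./ divides entry by entry, matching what exp(-z) does.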

Sean Owen

Posted 2019-01-14T18:47:40.267

Reputation: 5 987