I think this is because your targets `y` are continuous instead of binary. Therefore, either ignore the `accuracy` report, or binarize your targets if applicable.
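If your continuous targets happen to live in [0, 1], binarizing can be as simple as thresholding; the 0.5 cutoff below is an assumption, not something from your setup:

```python
import numpy as np

# Hypothetical continuous targets in [0, 1]
y = np.array([0.1, 0.7, 0.4, 0.95])

# Binarize with an assumed threshold of 0.5
y_binarized = (y >= 0.5).astype(int)
print(y_binarized)  # [0 1 0 1]
```

Only do this if a binary interpretation of your targets actually makes sense for your problem.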

I assumed you are using `Keras`. When you use `metrics=['accuracy']`, this is what happens under the hood:

```
if metric in ('accuracy', 'acc'):
    metric_fn = metrics_module.binary_accuracy
```

where

```
def binary_accuracy(y_true, y_pred):
    return K.mean(K.equal(y_true, K.round(y_pred)), axis=-1)
```
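To see why this breaks for continuous targets, here is a minimal NumPy re-implementation of the same round-and-compare logic (a sketch for illustration, not the actual Keras code; the target values below are made up):

```python
import numpy as np

def binary_accuracy_np(y_true, y_pred):
    # Round predictions to 0/1 and compare element-wise with the targets,
    # mirroring K.equal(y_true, K.round(y_pred))
    return np.mean(np.equal(y_true, np.round(y_pred)))

y_pred = np.array([0.1, 0.8, 0.6, 0.3])

# Binary targets: rounded predictions can match them exactly
print(binary_accuracy_np(np.array([0.0, 1.0, 1.0, 0.0]), y_pred))  # 1.0

# Continuous targets: a rounded prediction (0 or 1) almost never equals
# a value like 0.35, so the reported accuracy collapses
print(binary_accuracy_np(np.array([0.12, 0.81, 0.35, 0.30]), y_pred))  # 0.0
```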

In the case of continuous targets, only those `y_true` that are exactly `0` or exactly `1` can equal the rounded model prediction `K.round(y_pred)`. Therefore, `accuracy` cannot be used for continuous targets.

Here is code that demonstrates this issue:

```
from keras import Sequential
from keras.layers import LSTM, Dense
import numpy as np
# Parameters
N = 1000
halfN = int(N/2)
seq_len = 10
dimension = 2
lstm_units = 3
# Data
np.random.seed(123) # to generate the same numbers
X_zero = np.random.normal(0, 1, size=(halfN, seq_len, dimension))
y_binary_zero = np.zeros((halfN, 1)) # output is only 0
y_continuous_zero = np.random.randint(0, 50, (halfN, 1)) / 100 # output is in [0, 0.5)
X_one = np.random.normal(1, 1, size=(halfN, seq_len, dimension))
y_binary_one = np.ones((halfN, 1)) # output is only 1
y_continuous_one = 0.5 + np.random.randint(0, 50, (halfN, 1)) / 100 # output is in [0.5, 1.0)
p = np.random.permutation(N) # to shuffle zero and one classes
X = np.concatenate((X_zero, X_one))[p]
y_binary = np.concatenate((y_binary_zero, y_binary_one))[p]
y_continuous = np.concatenate((y_continuous_zero, y_continuous_one))[p]
# Build model
model = Sequential()
model.add(LSTM(lstm_units, input_shape=(None, dimension)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model.summary())
# Fit model
# fit using binary outputs
print('-----------------------Binary---------------------')
model.fit(X, y_binary, batch_size=32, epochs=10)
# fit using continuous outputs
print('-----------------------Continuous---------------------')
model.fit(X, y_continuous, batch_size=32, epochs=10)
```

which outputs

```
...
-----------------------Binary---------------------
...
1000/1000 [==============================] - 0s 122us/step - loss: 0.3989 - acc: 0.9500
-----------------------Continuous---------------------
...
1000/1000 [==============================] - 0s 135us/step - loss: 0.5759 - acc: 0.0050
```


This feels very likely to be the case. This person had similar results from using MSE as loss but accuracy as a metric: https://datascience.stackexchange.com/questions/48346/multi-output-regression-problem-with-tensorflow/

– Simon Larsson – 2019-04-11T14:22:11.883

Thank you Esmailian. I must read your code carefully. But just wanted to add, YES. I use an LSTM network for stock price prediction, and my inputs and labels are both arrays of float numbers like `[100.0000 101.2900 99.8956 ....]`

– user145959 – 2019-04-11T17:52:02.257