How to feed LSTM with different input array sizes?



If I want to build an LSTM network and feed it inputs of different array sizes, how can I do that?

For example, I want to receive voice or text messages in a different language and translate them. So the first input might be "hello" but the second is "how are you doing". How can I design an LSTM that can handle different input array sizes?

I am using Keras implementation of LSTM.


Posted 2019-04-07T08:04:54.550




The easiest way is to use Padding and Masking.

There are three general ways to handle variable-length sequences:

  1. Padding and masking (which can be used for (3)),
  2. Batch size = 1, and
  3. Batch size > 1, with equi-length samples in each batch.

Padding and masking

In this approach, we pad the shorter sequences with a special value to be masked (skipped) later. For example, suppose each timestamp has dimension 2, and -10 is the special value, then

X = [

  [[1,    1.1],
   [0.9, 0.95]],  # sequence 1 (2 timestamps)

  [[2,    2.2],
   [1.9, 1.95],
   [1.8, 1.85]],  # sequence 2 (3 timestamps)

]

will be converted to

X2 = [

  [[1,    1.1],
   [0.9, 0.95],
   [-10, -10]],  # padded sequence 1 (3 timestamps)

  [[2,    2.2],
   [1.9, 1.95],
   [1.8, 1.85]],  # sequence 2 (3 timestamps)

]

This way, all sequences have the same length. Then, we use a Masking layer that skips those special timestamps as if they don't exist. A complete example is given at the end.
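The conversion from X to X2 above can be sketched in plain NumPy (using -10 as the special value, as in the example; the same padding loop reappears in the complete example at the end):

```python
import numpy as np

special_value = -10.0
dimension = 2

# ragged input: sequences of length 2 and 3, each timestamp of dimension 2
X = [
    np.array([[1.0, 1.1], [0.9, 0.95]]),               # sequence 1
    np.array([[2.0, 2.2], [1.9, 1.95], [1.8, 1.85]]),  # sequence 2
]

max_seq_len = max(x.shape[0] for x in X)

# start from a block filled with the special value, then copy each
# sequence in; the remaining slots at the end stay as padding
X2 = np.full((len(X), max_seq_len, dimension), special_value)
for s, x in enumerate(X):
    X2[s, :x.shape[0], :] = x

print(X2.shape)  # (2, 3, 2)
```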

For cases (2) and (3), you need to set the sequence-length dimension of the LSTM input to None, e.g.

model.add(LSTM(units, input_shape=(None, dimension)))

this way, the LSTM accepts batches of different lengths, although all samples inside each batch must have the same length. Then, you need to feed a custom batch generator to model.fit_generator (instead of model.fit).

I have provided a complete example for the simple case (2) (batch size = 1) at the end. Based on this example and the link, you should be able to build a generator for case (3) (batch size > 1). Specifically, we either (a) return batch_size sequences with the same length, or (b) select sequences with almost the same length, pad the shorter ones as in case (1), and use a Masking layer before the LSTM layer to ignore the padded timestamps, e.g.

model.add(Masking(mask_value=special_value, input_shape=(None, dimension)))

where first dimension of input_shape in Masking is again None to allow batches with different lengths.
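For case (3a), one simple strategy is to group sequences of equal length into batches. The helper below (make_equal_length_batches is a hypothetical name, not from the answer or Keras) is a sketch of that idea:

```python
import numpy as np
from collections import defaultdict

def make_equal_length_batches(X, y, batch_size):
    """Sketch for case (3a): group samples by sequence length, then cut
    each group into batches, so every batch is a regular 3-D tensor."""
    by_len = defaultdict(list)
    for i, x in enumerate(X):
        by_len[len(x)].append(i)

    batches = []
    for indices in by_len.values():
        for start in range(0, len(indices), batch_size):
            idx = indices[start:start + batch_size]
            Xb = np.stack([X[i] for i in idx])  # (batch, seq_len, dim)
            yb = np.stack([y[i] for i in idx])
            batches.append((Xb, yb))
    return batches

# toy ragged data: five sequences of dimension 2, lengths 3 and 5
X = [np.random.normal(size=(L, 2)) for L in [3, 3, 5, 3, 5]]
y = [np.array([0.]), np.array([1.]), np.array([0.]),
     np.array([1.]), np.array([1.])]
batches = make_equal_length_batches(X, y, batch_size=2)
for Xb, yb in batches:
    print(Xb.shape, yb.shape)
```

Each batch can then be fed to an LSTM built with input_shape=(None, dimension), since batches may differ in length as long as samples within a batch agree.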

Here is the code for cases (1) and (2):

from keras import Sequential
from keras.utils import Sequence
from keras.layers import LSTM, Dense, Masking
import numpy as np

class MyBatchGenerator(Sequence):
    'Generates data for Keras'
    def __init__(self, X, y, batch_size=1, shuffle=True):
        self.X = X
        self.y = y
        self.batch_size = batch_size
        self.shuffle = shuffle

    def __len__(self):
        'Denotes the number of batches per epoch'
        return int(np.floor(len(self.y)/self.batch_size))

    def __getitem__(self, index):
        return self.__data_generation(index)

    def on_epoch_end(self):
        'Shuffles indexes after each epoch'
        self.indexes = np.arange(len(self.y))
        if self.shuffle == True:
            np.random.shuffle(self.indexes)

    def __data_generation(self, index):
        Xb = np.empty((self.batch_size, *self.X[index].shape))
        yb = np.empty((self.batch_size, *self.y[index].shape))
        # naively use the same sample over and over again
        for s in range(0, self.batch_size):
            Xb[s] = self.X[index]
            yb[s] = self.y[index]
        return Xb, yb

# Parameters
N = 1000
halfN = int(N/2)
dimension = 2
lstm_units = 3

# Data
np.random.seed(123)  # to generate the same numbers
# create sequence lengths between 1 and 9 (randint's upper bound is exclusive)
seq_lens = np.random.randint(1, 10, halfN)
X_zero = np.array([np.random.normal(0, 1, size=(seq_len, dimension)) for seq_len in seq_lens], dtype=object)  # dtype=object: sequences are ragged
y_zero = np.zeros((halfN, 1))
X_one = np.array([np.random.normal(1, 1, size=(seq_len, dimension)) for seq_len in seq_lens], dtype=object)
y_one = np.ones((halfN, 1))
p = np.random.permutation(N)  # to shuffle zero and one classes
X = np.concatenate((X_zero, X_one))[p]
y = np.concatenate((y_zero, y_one))[p]

# Batch = 1
model = Sequential()
model.add(LSTM(lstm_units, input_shape=(None, dimension)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
model.fit_generator(MyBatchGenerator(X, y, batch_size=1), epochs=2)

# Padding and Masking
special_value = -10.0
max_seq_len = max(seq_lens)
Xpad = np.full((N, max_seq_len, dimension), fill_value=special_value)
for s, x in enumerate(X):
    seq_len = x.shape[0]
    Xpad[s, 0:seq_len, :] = x
model2 = Sequential()
model2.add(Masking(mask_value=special_value, input_shape=(max_seq_len, dimension)))
model2.add(LSTM(lstm_units))
model2.add(Dense(1, activation='sigmoid'))
model2.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
print(model2.summary())
model2.fit(Xpad, y, epochs=50, batch_size=32)

Extra notes

  1. Note that if we pad without masking, the padded values will be treated as actual values and thus become noise in the data. For example, a padded temperature sequence [20, 21, 22, -10, -10] looks the same as a sensor report with two noisy (wrong) measurements at the end. The model may learn to ignore this noise completely or at least partially, but it is more reasonable to clean the data first, i.e. use a mask.
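To see why unmasked pads act as noise, compare simple statistics with and without the pad values (illustrative numbers only, using the temperature example above):

```python
import numpy as np

special_value = -10.0
temps = np.array([20.0, 21.0, 22.0])                 # real measurements
padded = np.array([20.0, 21.0, 22.0, -10.0, -10.0])  # padded, no mask

print(temps.mean())   # 21.0
print(padded.mean())  # 8.6 -- the pads drag the statistic far off

# masking out the special value recovers the true statistic
mask = padded != special_value
print(padded[mask].mean())  # 21.0
```

A Masking layer does the analogous thing inside the network: masked timestamps simply do not contribute to the LSTM's state updates.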



Thank you very much Esmailian for your complete example. Just one question: what is the difference between using padding+masking and only using padding (like the other answer suggested)? Will we see a considerable effect on the final result? – user145959 – 2019-04-07T21:01:31.950

@user145959 my pleasure! I added a note at the end. – Esmailian – 2019-04-07T23:13:00.063

Wow a great answer! It's called bucketing, right? – Aditya – 2019-04-08T03:39:06.500

@Aditya Thanks Aditya! I think bucketing is the partitioning of a large sequence into smaller chunks, but sequences in each batch are not necessarily chunks of the same (larger) sequence; they can be independent data points. – Esmailian – 2019-04-08T11:23:56.897

@Esmailian really good answer, specially case #1. – Night Walker – 2019-07-30T18:51:49.413

It looks like for #1 Padding and Masking, in your code, you pad to the right, by adding -10 (the padding character) to the end. Keras's sequence.pad_sequences function pads to the left or the beginning by default. I'm wondering if it matters whether we pad to the left or the right...would you know? – flow2k – 2019-08-19T00:14:24.917


@flow2k It does not matter, pads are completely ignored. Take a look at this question.

– Esmailian – 2019-08-19T16:33:59.673

Thanks @Esmailian - just what I was looking for. On another note, I was investigating how to make this work if there is an Embedding layer before the LSTM layer. It seems we can't use the Masking layer before that, since Embedding must be the first layer. But it turns out Embedding supports using the integer 0 as a special value, via the mask_zero argument.

– flow2k – 2019-08-21T06:57:21.233


You can use LSTM layers with inputs of different sizes, but you need to preprocess them before they are fed to the LSTM.

Padding the sequences:

You need to pad the sequences of varying length to a fixed length. For this preprocessing, determine the maximum sequence length in your dataset.

The sequences are usually padded with the value 0. You can do this in Keras with:

y = keras.preprocessing.sequence.pad_sequences( x , maxlen=10 )
  • If the sequence is shorter than the max length, zeros will be added (at the beginning by default, since padding='pre') until it has a length equal to the max length.

  • If the sequence is longer than the max length, it will be truncated to the max length (dropping values from the beginning by default, since truncating='pre').
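The two rules above can be illustrated with a minimal pure-Python equivalent of pad_sequences — a sketch of its default behaviour (padding='pre', truncating='pre'), not the Keras implementation:

```python
def pad_sequences_sketch(sequences, maxlen, value=0):
    """Hypothetical mimic of keras.preprocessing.sequence.pad_sequences
    with the defaults padding='pre' and truncating='pre'."""
    out = []
    for seq in sequences:
        seq = list(seq)
        if len(seq) > maxlen:
            seq = seq[-maxlen:]  # truncate: drop values from the beginning
        out.append([value] * (maxlen - len(seq)) + seq)  # pad at the beginning
    return out

x = [[1, 2], [1, 2, 3, 4, 5, 6]]
print(pad_sequences_sketch(x, maxlen=4))
# [[0, 0, 1, 2], [3, 4, 5, 6]]
```

Pass padding='post' to the real Keras function if you prefer zeros appended at the end instead.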

Shubham Panchal


Padding everything to a fixed length is a waste of space. – Aditya – 2019-04-08T03:39:42.433

I agree with @Aditya, and it incurs computation cost, too. But is it not the case that simplistic padding is still widely used? Keras even has a function just for this. Perhaps this is because other, more efficient and challenging solutions do not provide significant model performance gain? If anyone has experience or has done comparisons, please weigh in. – flow2k – 2019-08-19T00:22:54.023

Actually, padding is the most efficient way, because Keras can then allocate fixed-length tensors and run everything on the GPU without memory misalignment. Keeping sequences of different lengths would be less efficient.

The best way is to use padding + masking as explained by Esmailian. – Steve3nto – 2020-07-28T16:06:56.853