Classification of time series data


I have 5 different classes into which I want to classify some data points. I'm using an RNN, specifically an echo state network (reservoir computing).

Normally, a straightforward method consists of computing the outputs y and deciding the class using argmax[k](y), where k indexes the classes. Now I have found another method that instead computes argmax[k](sum(y)), and that is what I don't understand.
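To make the two decision rules concrete, here is a minimal sketch (assuming a hypothetical readout output `y` of shape T×K, i.e. T time steps and K = 5 classes; the random values stand in for real ESN readouts):

```python
import numpy as np

# Hypothetical readout outputs for ONE input sequence:
# y has shape (T, K) -- T time steps, K = 5 classes.
rng = np.random.default_rng(0)
T, K = 10, 5
y = rng.random((T, K))

# Rule 1: classify each individual time step separately.
per_step_labels = np.argmax(y, axis=1)          # shape (T,), one label per step

# Rule 2: classify the whole sequence at once by first summing
# the outputs over time, then taking argmax over the K classes.
sequence_label = int(np.argmax(y.sum(axis=0)))  # single label in 0..K-1

# Summing and averaging over time give the same argmax,
# since dividing by T does not change which class is largest.
assert sequence_label == int(np.argmax(y.mean(axis=0)))
```

The key point is that the sum in rule 2 runs over the time axis only, so each sequence still gets its own K class scores.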

I've attached an image of the mathematical approach. The notation itself is understandable, but it does not seem logical to me: the sum of y over the whole timescale of the data points appears to give the same result for every time sequence, which would mean that all data points get classified into the same class (the one corresponding to the maximum of sum(y)).

Can someone explain what I don't understand?

Readouts for Classification


Posted 2018-02-17T11:26:39.247

Reputation: 61



The method you have here is for time series, so the input and output are sets of values over a time period.

The sum sum(y) here is shorthand for the average in the last equation you mention; it is not a conventional sum over everything.

Also note that the argmax over k of the sums is taken in a separate dimension for each k, where each k represents an individual class.

So, in my understanding, the sum is needed only because the input is a series of data points, and the sum is computed per class over the whole series.

The target y(n) is 1 in only one dimension when a match happens, so the argmax over k finds the correct class as the one with the highest value.
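This also resolves the questioner's worry: the sum is taken per class over the time steps of one sequence, not globally over all sequences, so different sequences can end up in different classes. A small sketch with two hypothetical sequences (hand-picked readout values, assuming K = 5 classes):

```python
import numpy as np

# Hypothetical readouts for two DIFFERENT input sequences.
# Each sequence is summed over its own time steps only, so the
# resulting per-class scores (and labels) differ between sequences.
y_seq_a = np.array([[0.9, 0.1, 0.0, 0.0, 0.0],
                    [0.8, 0.2, 0.0, 0.0, 0.0]])  # readout favours class 0
y_seq_b = np.array([[0.1, 0.7, 0.1, 0.1, 0.0],
                    [0.2, 0.6, 0.1, 0.1, 0.0]])  # readout favours class 1

label_a = int(np.argmax(y_seq_a.sum(axis=0)))  # -> 0
label_b = int(np.argmax(y_seq_b.sum(axis=0)))  # -> 1
```

Sequence A's class sums are [1.7, 0.3, 0, 0, 0] and sequence B's are [0.3, 1.3, 0.2, 0.2, 0], so the two sequences are classified differently, as intended.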


Posted 2018-02-17T11:26:39.247

Reputation: 539