In cutting-edge deployments of deep networks across different architectures (such as CNNs, QRNNs, etc.), what is the historical trend in the computational limits of trainability? By this I mean the number of nodes and the number of weights.

Hi. You need to define "limits" more precisely. – nbro – 2019-12-27T12:36:22.600