How do neural network topologies affect GPU/TPU acceleration?


I have been considering different neural network topologies for some applications. However, I am not sure how the choice of topology would affect the efficiency of hardware acceleration on a GPU, TPU, or some other chip.

If, instead of fully connected layers, I have layers whose neurons are connected in some other way (some pairs of neurons connected, others not), how will this affect hardware acceleration?
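For concreteness, here is a minimal NumPy sketch (the sizes and sparsity level are illustrative, not from the post) of how "arbitrary connectivity" is often handled in practice: the missing connections are encoded as a binary mask over a dense weight matrix, so the accelerator still performs the full dense multiply and the irregular structure saves no work.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 8, 4

# Dense weight matrix plus a binary mask encoding which pairs are connected.
W = rng.standard_normal((n_out, n_in))
mask = rng.random((n_out, n_in)) < 0.3  # keep roughly 30% of connections

x = rng.standard_normal(n_in)

# Irregular connectivity is commonly run as a *dense* matmul with masked
# weights: the GPU still does the full n_out x n_in multiply, so no FLOPs
# are saved relative to a fully connected layer.
y_masked = (W * mask) @ x

# The mathematically equivalent "truly sparse" computation touches only the
# existing connections, but its irregular memory access maps poorly onto
# GPU hardware unless the sparsity is very high or structured.
y_sparse = np.array([W[i, mask[i]] @ x[mask[i]] for i in range(n_out)])

assert np.allclose(y_masked, y_sparse)
```

The two results agree; the difference is purely in how the hardware executes them.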

Convolutional networks are an example of this. However, their connectivity still follows a clear, regular pattern, which is perhaps what the acceleration exploits; if so, would the acceleration work less well when no such pattern exists?
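As a sketch of the pattern being exploited (this reformulation, often called "im2col", is one common way libraries do it, not the only one): because each output connects to a small neighbourhood of inputs with shared weights, a convolution can be rewritten as one dense matrix multiply over strided views of the input, which is exactly the operation GPUs/TPUs are optimised for.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D convolution: each output neuron connects only to k neighbouring
# inputs, and the same k weights are shared across all positions.
x = rng.standard_normal(16)
k = 3
w = rng.standard_normal(k)

# The regular connectivity pattern lets us gather all input windows into
# one matrix and apply a single dense matmul ("im2col"-style).
patches = np.lib.stride_tricks.sliding_window_view(x, k)  # shape (14, 3)
y_matmul = patches @ w

# Reference: direct convolution as an explicit loop over positions.
y_loop = np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)])

assert np.allclose(y_matmul, y_loop)
```

With arbitrary, patternless connectivity there is no analogous rewrite into one big dense multiply, which is the crux of the concern raised above.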

Should this be a concern? If so, is there a rule of thumb for how the connectivity pattern affects the efficiency of hardware acceleration?


Posted 2019-09-23T14:05:46.593


The answer is going to depend on details, e.g. how sparse the connections are, whether they can be arranged meaningfully into layers, etc. Also whether you have training data available in large batches. Do you have any specific use case or scenario that would narrow the scope down a little? Or are you looking for a broad but shallow answer? – Neil Slater – 2019-09-23T19:00:54.807

I am interested in a broad but shallow answer. Or even better, a pointer to somewhere I could read more. – user2316602 – 2019-10-03T18:02:46.577

No answers