I have been thinking about different neural network topologies for some applications, but I am not sure how the choice of topology would affect the efficiency of hardware acceleration on a GPU, TPU, or some other accelerator.
If, instead of fully connected layers, I have layers with an irregular connectivity pattern (some pairs of neurons connected, others not), how would this affect hardware acceleration?
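To make the setup concrete, here is a minimal sketch (my own illustration, using NumPy and a made-up function name) of what I mean by such a layer: a dense weight matrix with a fixed binary mask, where a connection exists only where the mask is 1.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 8, 4
W = rng.standard_normal((n_in, n_out))
# Keep roughly 30% of the possible connections; the rest are absent.
mask = (rng.random((n_in, n_out)) < 0.3).astype(W.dtype)

def partially_connected_forward(x, W, mask):
    # Implemented as a masked dense matmul: the hardware still does the
    # full dense multiply, even though most logical connections are gone.
    return x @ (W * mask)

x = rng.standard_normal((2, n_in))
y = partially_connected_forward(x, W, mask)
print(y.shape)  # (2, 4)
```

Note that this masked implementation does not actually skip the absent connections, which is part of what I am asking about: whether there is any way for the hardware to benefit from such irregular sparsity.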
Convolutional networks are an example of this: their layers are sparsely connected, but the connectivity still follows a clear, regular pattern, which is presumably what the acceleration exploits. Does that mean that without such a pattern the acceleration would not work as well?
Should this be a concern? If so, is there a rule of thumb for how the connectivity pattern affects the efficiency of hardware acceleration?