Traditionally, when working with tabular data, one can be sure (or at least have a good idea) that a model works because the included features explain the target variable, say "Price of a ticket", well. More features can then be engineered to explain the target variable even better.
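To make the tabular case concrete, here is a minimal sketch (with made-up synthetic data, not from any real dataset) of checking how well a set of features explains a "ticket price" target via the R² of a linear fit — a high R² before any deep model is involved is the kind of assurance I mean:

```python
import numpy as np

# Hypothetical example: ticket price driven by distance and booking lead time.
rng = np.random.default_rng(0)
n = 200
distance = rng.uniform(100, 1000, n)   # km (assumed feature)
days_ahead = rng.uniform(1, 60, n)     # booking lead time (assumed feature)
price = 50 + 0.1 * distance - 0.5 * days_ahead + rng.normal(0, 5, n)

# Fit a linear model and measure how much of the target the features explain.
X = np.column_stack([np.ones(n), distance, days_ahead])
coef, *_ = np.linalg.lstsq(X, price, rcond=None)
residuals = price - X @ coef
r2 = 1 - residuals.var() / price.var()
print(f"R^2 = {r2:.3f}")  # close to 1 means the features explain price well
```

If R² were low here, one would engineer additional features before blaming the model — which is exactly the feedback loop that seems missing when designing deep architectures.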
I have heard people say that there is no need to hand-engineer features when working with CNNs, RNNs, or deep neural networks in general, given all the advancements in AI and computation. So my question is: how would one know, before training, why a particular architecture worked (or would work) when it did, or why it didn't when the performance is unacceptable or very poor? Also, since not all of us have the time to try out every possible architecture, how can one know, or at least be reasonably confident, that something will work for the problem at hand? In other words, what should one follow when designing an architecture for a problem, to ensure that the architecture will work?