Why has this merge proven beneficial?
It makes more sense if you think of the shared Value/Policy network as a shared component (the Residual Network layers) with a small Value head and Policy head on top, rather than as a violation of Separation of Concerns.
The underlying premise is that the shared part of the network (the ResNet) provides a high-level generalization of the input (the game states leading up to the move) that is a good input representation for both the shallow Value and Policy networks.
When that is the case, we can greatly reduce the computational load by training a single shared ResNet and using its output as the input to two much simpler head networks, rather than training two full ResNets for the Value and Policy separately. In their case, training the two together also improves regularisation and thus creates a more robust, general representation.
Specifically, the AlphaGo Zero paper by Silver et al., Mastering the Game of Go without Human Knowledge, states that:
> Combining policy and value together into a single network slightly reduced the move prediction accuracy, but reduced the value error and boosted playing performance in AlphaGo by around another 600 Elo. This is partly due to improved computational efficiency, but more importantly the dual objective regularises the network to a common representation that supports multiple use cases.
Can this technique be applied in general or only in special cases?
Like common components in software libraries, it only makes sense when the problems you are trying to solve benefit from a shared representation.
You can use it if you are training classifiers for similar tasks, or training a model for a new task with little data when you already have a classifier trained on a larger, similar dataset.
Outside of Go, it is often used in image recognition.
Deep pre-trained networks such as the ones from the ImageNet ILSVRC competitions are commonly used as a starting point. They are classifiers that have been trained (for weeks!) on over a million images.
Then, say you want to create a network to recognize your favourite brand of bicycles: you start with the general image-recognition pipeline trained on ImageNet, chop off the last layers that do the actual classification ("it's a Border Collie") and add a small new classifier to pick out only the bicycles you care about.
Since the pre-trained classifier already provides high-level image concepts that are good building blocks for image recognition (it was trained to distinguish 1,000 categories), this saves you a lot of training and makes for a very robust classifier.
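Here is a hedged sketch of that workflow using torchvision (version 0.13+ weights API); the choice of resnet18 and the five-class bicycle head are illustrative assumptions, not a prescribed recipe.

```python
import torch
import torch.nn as nn
from torchvision import models

# Pre-trained backbone: its layers already encode general visual concepts
# learned from ImageNet.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the shared representation so only the new head will be trained.
for p in backbone.parameters():
    p.requires_grad = False

# "Chop off" the original classifier (the layer that says "it's a Border
# Collie") and attach a new one, here for a hypothetical 5 bicycle classes.
backbone.fc = nn.Linear(backbone.fc.in_features, 5)

# Only the new head's parameters are optimised; train as usual from here
# on the (small) bicycle dataset.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```

Because only the small new head is trained, this needs far less data and compute than training the whole pipeline from scratch.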
Of course, there are many cases where the problems have no useful shared representation and thus gain nothing from a combined network. Nevertheless, it is a useful tool in the right situations.
Look up Transfer Learning or Multi-Task Learning to learn more about this.