## How do I sample conditionally from deep belief networks?


Deep belief networks (DBNs) are generative models. Usually, you sample from one by thermalising the deepest layer (which, together with the layer below it, forms a restricted Boltzmann machine), and then propagating that sample forward through the directed layers to the visible layer, yielding a sample from the learned distribution.
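To make the procedure concrete, here is a minimal NumPy sketch of that sampling scheme for a toy binary DBN. The layer sizes and weight matrices (`W_top`, `W2`, `W1`) are random stand-ins, not trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_bernoulli(p):
    """Draw binary units with activation probabilities p."""
    return (rng.random(p.shape) < p).astype(float)

# Toy DBN: a top-level RBM over (h2, h3), then directed layers h2 -> h1 -> v.
# All weights are random stand-ins for learned parameters (assumption).
W_top = rng.normal(0, 0.1, (8, 8))  # RBM weights between h2 and h3
W2 = rng.normal(0, 0.1, (8, 8))     # directed weights h2 -> h1
W1 = rng.normal(0, 0.1, (8, 6))     # directed weights h1 -> v

def sample_dbn(n_gibbs=200):
    # 1. Thermalise the top RBM with block Gibbs sampling.
    h2 = sample_bernoulli(np.full(8, 0.5))
    for _ in range(n_gibbs):
        h3 = sample_bernoulli(sigmoid(h2 @ W_top))
        h2 = sample_bernoulli(sigmoid(h3 @ W_top.T))
    # 2. Propagate the top-level sample down the directed layers.
    h1 = sample_bernoulli(sigmoid(h2 @ W2))
    v = sample_bernoulli(sigmoid(h1 @ W1))
    return v

v = sample_dbn()
```

Note that the Markov chain runs only in the top RBM; the downward pass is a single ancestral draw, which is exactly why the starting state of the chain gives no direct handle on the visible layer.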

This sampling is less flexible than in a single-layer DBN, i.e. a restricted Boltzmann machine. There, we can start the sampling chain at any state we want and get samples "around" that state. In particular, we can clamp some visible nodes $$\{v_i\}$$ and draw samples from the conditional probability $$p(v_j \mid \{v_i\})$$.
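For comparison, conditional sampling in a single RBM can be sketched as block Gibbs sampling where the clamped visible units are re-imposed after every update. Again, the weights below are random stand-ins, not a trained model:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_bernoulli(p):
    """Draw binary units with activation probabilities p."""
    return (rng.random(p.shape) < p).astype(float)

# Toy RBM; weights and biases are random stand-ins (assumption).
n_vis, n_hid = 6, 8
W = rng.normal(0, 0.1, (n_vis, n_hid))
b_v = np.zeros(n_vis)
b_h = np.zeros(n_hid)

def conditional_sample(clamped_idx, clamped_vals, n_gibbs=500):
    """Gibbs chain that approximates p(v_j | {v_i}) by holding
    the visible units in clamped_idx fixed at clamped_vals."""
    v = sample_bernoulli(np.full(n_vis, 0.5))
    v[clamped_idx] = clamped_vals
    for _ in range(n_gibbs):
        h = sample_bernoulli(sigmoid(v @ W + b_h))
        v = sample_bernoulli(sigmoid(h @ W.T + b_v))
        v[clamped_idx] = clamped_vals  # re-impose the clamp each step
    return v

v = conditional_sample([0, 1], np.array([1.0, 0.0]))
```

The clamped units never move, so the chain mixes only over the free visible units and the hiddens, and its stationary distribution over the free units is the desired conditional.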

Is there a way to do something similar in DBNs? If we remove the directionality of the non-RBM layers and reinterpret them as undirected, can we treat the network as a deep Boltzmann machine and again start the sampling chain at, e.g., a training example?