If you follow the linked literature (down the rabbit hole a few levels), you end up at a 2005 paper by Kervrann and Boulanger - at least, that's as deep as I got.
In that linked webpage, they define patch-based image noise reduction methods as follows: "The main idea is to associate with each pixel the weighted sum of data points within an adaptive neighborhood."
So a patch is a local region of a single image, similar in shape to a convolutional kernel, but it is not convolved across the image.
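To make the "weighted sum of data points within a neighborhood" idea concrete, here is a minimal sketch in numpy. The function name, the similarity-based weighting, and the parameter `h` are my own illustrative choices, not the actual weights from the paper (which are based on patch-to-patch similarity statistics); `half=3` gives a $7 x 7$ neighborhood like the ones they report.

```python
import numpy as np

def patch_weighted_estimate(image, row, col, half=3, h=0.5):
    """Estimate one pixel as a weighted sum of the pixels in its
    neighborhood. Weights fall off with intensity difference from the
    center pixel (a simple stand-in for the similarity weights used in
    patch-based methods). half=3 gives a 7x7 neighborhood.
    """
    r0, r1 = max(0, row - half), min(image.shape[0], row + half + 1)
    c0, c1 = max(0, col - half), min(image.shape[1], col + half + 1)
    patch = image[r0:r1, c0:c1].astype(float)
    center = float(image[row, col])
    weights = np.exp(-((patch - center) ** 2) / (h ** 2))
    return float((weights * patch).sum() / weights.sum())

# Tiny example: a flat region with a single noisy spike.
img = np.full((9, 9), 0.5)
img[4, 4] = 0.9  # noise spike
print(patch_weighted_estimate(img, 4, 4))  # pulled back close to 0.5
```

In a flat region every neighbor gets a similar weight, so the estimate averages the noise away; near an edge, dissimilar pixels get small weights and contribute little.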
They talk about adaptive patches, meaning that you select a pixel (perhaps at random), then adapt the patch size so that it includes enough surrounding information to reproduce a homogeneous patch in the output.
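The adaptive-size idea can be sketched as growing the patch until it stops looking homogeneous. This is only an illustration under a crude variance criterion (`var_tol` is my own assumption); the actual stopping rule in the Kervrann-Boulanger paper is a more involved statistical test.

```python
import numpy as np

def adaptive_patch(image, row, col, max_half=6, var_tol=0.01):
    """Grow a square patch around (row, col) until its variance exceeds
    a tolerance, i.e. until the patch stops looking homogeneous.
    Returns the half-width of the largest homogeneous patch found.
    """
    best = 1
    for half in range(1, max_half + 1):
        r0, r1 = max(0, row - half), min(image.shape[0], row + half + 1)
        c0, c1 = max(0, col - half), min(image.shape[1], col + half + 1)
        patch = image[r0:r1, c0:c1].astype(float)
        if patch.var() > var_tol:
            break
        best = half
    return best

# A dark left half and bright right half: a patch centered deep inside
# the flat region can grow large, one next to the edge stays small.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
print(adaptive_patch(img, 8, 2))  # far from the edge -> grows large
print(adaptive_patch(img, 8, 7))  # next to the edge -> stays small
```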
It seems that training uses clean images to which noise is added (additive Gaussian white noise). This helps the robustness of the final models by reducing variance, but it must also introduce a bias toward re-creating areas where the noise is somehow uniform. The first link above, if you scroll down, shows many examples of typical images to be de-noised; the noise is not always so uniform.
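Generating such a training pair is simple; here is a sketch of how a (noisy, clean) pair might be made with additive Gaussian white noise. The function name, `sigma` value, and clipping to [0, 1] are my own assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_awgn(clean, sigma=0.05):
    """Make the noisy half of a (noisy, clean) training pair by adding
    zero-mean Gaussian white noise with standard deviation sigma.
    Values are clipped back to the valid [0, 1] intensity range.
    """
    noisy = clean + rng.normal(0.0, sigma, size=clean.shape)
    return np.clip(noisy, 0.0, 1.0)

clean = np.linspace(0.0, 1.0, 64).reshape(8, 8)  # a simple gradient "image"
noisy = add_awgn(clean)
```

Note that this noise model is uniform across the image by construction, which is exactly the mismatch mentioned above: real noise in the linked examples is often not so uniform.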
Here is a picture taken from that 2005 paper, where they show patch regions (marked in yellow). Page 5 gives a nice short description of the general idea. Patch sizes in their work were typically $7 \times 7$ or $9 \times 9$ pixels.