I am currently reading "Boosting the Performance of RBF Networks with Dynamic Decay Adjustment" by Michael R. Berthold and Jay Diamond (available online) to understand Dynamic Decay Adjustment (DDA), a constructive training algorithm for RBF networks. While reading, I stumbled over the word "prototype" a couple of times:
Unfortunately PRCE networks do not adjust the standard deviation of their prototypes individually, using only one global value for this parameter.
This paper introduces the Dynamic Decay Adjustment (DDA) algorithm which utilizes the constructive nature of the PRCE algorithm together with independent adaptation of each prototype's decay factor.
PNNs are not suitable for large databases because they commit one new prototype for each training pattern they encounter, effectively becoming a referential memory scheme.
I've tried to find a definition in the only resource they referenced in this context (D.L. Reilly, L.N. Cooper, C. Elbaum: "A Neural Model for Category Learning"), but sadly I don't have access to that one.
I found an explanation on https://chrisjmccormick.wordpress.com/2013/08/15/radial-basis-function-network-rbfn-tutorial/:
An RBFN performs classification by measuring the input’s similarity to examples from the training set. Each RBFN neuron stores a “prototype”, which is just one of the examples from the training set. When we want to classify a new input, each neuron computes the Euclidean distance between the input and its prototype. Roughly speaking, if the input more closely resembles the class A prototypes than the class B prototypes, it is classified as class A.
So a prototype is just the set of parameters (center and radius, assuming Gaussian RBFs are used) of an RBF neuron?
Rephrasing the first quoted sentence: does it mean that RBF networks usually learn only the center, while the radius is fixed (or shared globally)?
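To make my current understanding concrete, here is a toy sketch of how I picture it (all names and values are my own illustration, not from the paper): each "prototype" is a stored center, and classification picks the class whose prototype's Gaussian activation is highest, with a single global width sigma as in the quoted PRCE criticism.

```python
import numpy as np

def rbf_activation(x, center, sigma):
    """Gaussian RBF: exp(-||x - c||^2 / (2 * sigma^2))."""
    return np.exp(-np.sum((x - center) ** 2) / (2 * sigma ** 2))

# One prototype per class; centers are made-up example values.
prototypes = [
    {"center": np.array([0.0, 0.0]), "label": "A"},
    {"center": np.array([3.0, 3.0]), "label": "B"},
]
sigma = 1.0  # one global width shared by all prototypes

def classify(x):
    # Pick the class whose prototype activates most strongly,
    # i.e. whose center is closest in Euclidean distance.
    best = max(prototypes, key=lambda p: rbf_activation(x, p["center"], sigma))
    return best["label"]
```

As I understand it, DDA would instead adapt a separate sigma per prototype, which is exactly the "independent adaptation of each prototype's decay factor" from the second quote. Is that right?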
My question is whether I understood this correctly. Please also provide a reference (other than a random blog article) that makes this clearer.