Both types of network try to reconstruct their input after passing it through some kind of compression/decompression mechanism. For outlier detection, the reconstruction error between input and output is measured; outliers are expected to have a higher reconstruction error than normal points.
The main difference seems to be how the input is compressed:
Plain autoencoders squeeze the input through a hidden layer that has fewer neurons than the input/output layers. This bottleneck forces the network to learn a compressed representation of the data.
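To make the reconstruction-error idea concrete, here is a minimal NumPy sketch. Instead of training a full autoencoder, it uses the optimal *linear* bottleneck (projection onto the top-k principal directions via SVD), which is enough to show how a point off the learned structure gets a large reconstruction error. All names and the toy data are illustrative, not from the papers above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 inliers lying in a 2-D subspace of R^3, plus one outlier
# far off that subspace (purely illustrative).
inliers = rng.normal(size=(200, 2)) @ np.array([[1.0, 0.5, 0.0],
                                                [0.0, 0.5, 1.0]])
outlier = np.array([[5.0, -5.0, 5.0]])
X = np.vstack([inliers, outlier])

# "Compression": project onto the top-k principal directions (the best
# linear bottleneck); "decompression": project back into input space.
k = 2
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
codes = Xc @ Vt[:k].T          # compressed representation (k numbers/sample)
recon = codes @ Vt[:k]         # reconstruction in the original space

# Outlier score: per-sample reconstruction error.
errors = np.linalg.norm(Xc - recon, axis=1)
print("most suspicious index:", int(np.argmax(errors)))  # the appended outlier
```

A nonlinear autoencoder generalizes this by replacing the linear projections with learned encoder/decoder networks, but the scoring step is the same: rank points by reconstruction error.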
Replicator neural networks squeeze the data through a hidden layer that uses a staircase-like activation function. This activation quantizes the hidden-layer outputs onto a grid, so the network compresses the data by assigning it to a fixed number of clusters (determined by the number of neurons and the number of steps).
From Replicator Neural Networks for Outlier Modeling in Segmental Speech Recognition:
RNNs were originally introduced in the field of data compression.
Hawkins et al. proposed it for outlier modeling. In both papers a
5-layer structure is recommended, with a linear output layer and a
special staircase-like activation function in the middle layer (see
Fig. 2). The role of this activation function is to quantize the
vector of middle hidden layer outputs into grid points and so arrange
the data points into a number of clusters.