Convolutional neural networks (CNNs) rely on mathematical convolution (e.g. 2D or 3D convolutions), an operation commonly used in signal processing. Images are one kind of signal, and convolution applies equally well to sound, vibrations, and so on. So, in principle, CNNs can be applied to any signal, and probably beyond.
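To make this concrete, here is a minimal sketch (using only NumPy; the signals and kernels are made up for illustration) showing that the very same operation smooths a 1D "audio" signal and detects edges in a 2D "image" — exactly the operation a CNN layer learns kernels for:

```python
import numpy as np

# A 1D "signal" (e.g. a short audio snippet) and a smoothing kernel.
signal = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
kernel_1d = np.array([0.25, 0.5, 0.25])
smoothed = np.convolve(signal, kernel_1d, mode="same")

# The same idea in 2D: a tiny "image" and a horizontal-gradient kernel.
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)
kernel_2d = np.array([[-1.0, 1.0]])  # responds to left-to-right intensity changes

def conv2d_valid(img, k):
    """Naive 'valid' 2D cross-correlation, as computed inside CNN layers."""
    kh, kw = k.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

edges = conv2d_valid(image, kernel_2d)
# `edges` is nonzero only at the vertical boundary in the image.
```

The only thing that changes between domains is the dimensionality of the kernel; a CNN simply learns the kernel values instead of hand-picking them.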
In practice, there is already work on NLP (as mentioned by Matthew Graves), where some people process text with CNNs rather than recurrent networks. Other work applies CNNs to sound processing (no reference here, but I have unpublished work in progress).
The following was written in answer to the original title question, which has since changed.
Research on adversarial examples (and related work) shows that even deep networks can easily be fooled, leading them to see a dog (or whatever object) in what looks like random noise to a human (the article has clear examples).
Another issue is the generalization power of a neural network. Convolutional nets have amazed the world with their ability to generalize far better than other techniques. But if a network is only fed images of cats, it will recognize only cats (and will probably see cats everywhere, as the adversarial-example results suggest). In other words, even CNNs have a hard time generalizing far beyond what they were trained on.
Where the recognition limit lies is hard to define precisely. I would simply say that the diversity of the training data pushes that limit outward (further detail probably belongs in a more appropriate venue for discussion).