If this is just a one-time case, you can simply re-train the neural network. If you frequently have to add new classes, then this is a bad idea. What you want to do in such cases is called content-based image retrieval (CBIR), or simply image retrieval or visual search. I will explain both cases in my answer below.
If this just happens once - you forgot the 11th class, or your customer changed his/her mind - but it won't happen again, then you can simply add an 11th output node to the last layer. Initialize the weights to this node randomly, but reuse the weights you already have for the other outputs. Then, just train the network as usual. It might be helpful to fix some weights, i.e. exclude them from training.
An extreme case would be to train only the new weights and leave all others fixed. But I am not sure whether this would work well - it might be worth a try.
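The steps above can be sketched in PyTorch (a sketch, assuming the network's last layer is an `nn.Linear`; the `expand_classifier` and `freeze_old_rows` helpers are hypothetical names for illustration):

```python
import torch
import torch.nn as nn

def expand_classifier(old_fc: nn.Linear, num_new: int = 1) -> nn.Linear:
    """Return a new final layer with extra output nodes.

    Weights for the existing classes are copied over; the rows for
    the new class keep their random initialization.
    """
    new_fc = nn.Linear(old_fc.in_features, old_fc.out_features + num_new)
    with torch.no_grad():
        new_fc.weight[:old_fc.out_features] = old_fc.weight
        new_fc.bias[:old_fc.out_features] = old_fc.bias
    return new_fc

# Example: grow a 10-class head to 11 classes.
old_head = nn.Linear(256, 10)
new_head = expand_classifier(old_head)

# For the "extreme case": zero the gradients of the copied rows,
# so only the new class's weights are trained.
def freeze_old_rows(grad, n_old=10):
    grad = grad.clone()
    grad[:n_old] = 0
    return grad

new_head.weight.register_hook(freeze_old_rows)
```

The gradient hook is one way to fix weights partially; alternatively you can set `requires_grad = False` on whole layers if you want to freeze everything except the new head.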
Content-based image retrieval
Consider the following example: you are working for a CD store, which wants its customers to be able to take a picture of an album cover and have the application show them the scanned CD in its online store.
In that case, you would have to re-train the network for every new CD in the store. With perhaps 5 new CDs arriving each day, re-training the network that way is not practical.
The solution is to train a network that maps each image into a feature space, where the image is represented by a descriptor, e.g. a 256-dimensional vector. You can "classify" an image by computing its descriptor and comparing it to your database of descriptors (i.e. the descriptors of all CDs you have in your store). The closest descriptor in the database wins.
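The retrieval step itself is just a nearest-neighbor search in descriptor space. A minimal NumPy sketch (the descriptors here are random placeholders; in practice they come from the trained network):

```python
import numpy as np

def build_database(descriptors):
    """Stack and L2-normalize descriptors so that cosine similarity
    reduces to a plain dot product."""
    db = np.asarray(descriptors, dtype=np.float32)
    return db / np.linalg.norm(db, axis=1, keepdims=True)

def retrieve(query, db):
    """Return the index of the closest database descriptor."""
    q = query / np.linalg.norm(query)
    return int(np.argmax(db @ q))

# Toy example: 256-dimensional descriptors for 3 "CD covers".
rng = np.random.default_rng(0)
db = build_database(rng.normal(size=(3, 256)))

# A query that is item 1's descriptor plus a little noise.
query = db[1] + 0.01 * rng.normal(size=256)
print(retrieve(query, db))  # → 1
```

Adding a new CD then means computing one descriptor and appending it to the database - no re-training needed. For large databases you would swap the brute-force `argmax` for an approximate nearest-neighbor index.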
How do you train a neural network to learn such a descriptor vector? That is an active field of research. You can find recent work by searching for keywords like "image retrieval" or "metric learning".
Right now, people usually take a pre-trained network, e.g. VGG-16, cut off the fully connected layers, and use the output of the final convolutional layer as the descriptor vector. You can further train this network, e.g. with a siamese architecture and a triplet loss.