There is at least some research on the "foolability" of neural networks, which gives insight into how risky neural nets can be even when they "seem" 99.99% accurate.
A very good article on this is in Nature: https://www.nature.com/articles/d41586-019-03013-5
In a nutshell:
It shows several examples of fooling neural networks/AIs, for example one where a few pieces of tape placed on a "Stop" sign turn it, for the neural net, into a "limited to 40" sign... (whereas a human would still see a "Stop" sign!).
There are also two striking examples of turning one animal into another just by adding colored dots that are invisible to humans (in the example a panda is turned into a gibbon, while a human can hardly see any difference and still sees a panda).
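To give a feel for how simple such perturbations can be, here is a minimal sketch (mine, not from the article) of the classic "fast gradient sign" trick behind the panda/gibbon kind of example, assuming a pretrained PyTorch image classifier; the model choice, epsilon, and class index are placeholder assumptions:

```python
# Minimal sketch of a fast-gradient-sign adversarial perturbation.
# Not the article's code; the model, epsilon, and label are assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()

def fgsm_attack(image: torch.Tensor, true_label: int, epsilon: float = 0.007) -> torch.Tensor:
    """Return an adversarial copy of `image` (shape [1, 3, H, W], values in [0, 1]).
    For brevity this skips the usual ImageNet normalization transform."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # Nudge every pixel by +/- epsilon in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Usage: the two images look identical to a human, but the top-1 class can flip.
# adv = fgsm_attack(img, true_label=panda_class_index)
# print(model(img).argmax().item(), model(adv).argmax().item())
```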
They then discuss several avenues of research, including, for example, ways to try to defend against such attacks.
The whole page is a good read for any AI researcher and shows a lot of troubling problems (especially for automated systems such as cars, and soon perhaps weapons).
An excerpt relevant to the question:
Hendrycks and his colleagues have suggested quantifying a DNN’s robustness against making errors by testing how it performs against a large range of adversarial examples. However, training a network to withstand one kind of attack could weaken it against others, they say. And researchers led by Pushmeet Kohli at Google DeepMind in London are trying to inoculate DNNs against making mistakes. Many adversarial attacks work by making tiny tweaks to the component parts of an input — such as subtly altering the colour of pixels in an image — until this tips a DNN over into a misclassification. Kohli’s team has suggested that a robust DNN should not change its output as a result of small changes in its input, and that this property might be mathematically incorporated into the network, constraining how it learns.
For the moment, however, no one has a fix on the overall problem of brittle AIs. The root of the issue, says Bengio, is that DNNs don’t have a good model of how to pick out what matters. When an AI sees a doctored image of a lion as a library, a person still sees a lion because they have a mental model of the animal that rests on a set of high-level features — ears, a tail, a mane and so on — that lets them abstract away from low-level arbitrary or incidental details. “We know from prior experience which features are the salient ones,” says Bengio. “And that comes from a deep understanding of the structure of the world.”
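To make the first paragraph of that excerpt more concrete, here is a rough sketch (mine, not Hendrycks's actual benchmark) of what "testing how it performs against a large range of adversarial examples" can look like in code; `attack` stands for any perturbation routine, such as a batched version of the FGSM sketch above:

```python
# Rough sketch of a robustness benchmark: accuracy on clean inputs (eps = 0)
# and under increasingly strong adversarial perturbations.
# Not the article's method; just an illustration of the idea.
import torch

def robust_accuracy(model, loader, attack, epsilons=(0.0, 0.01, 0.03, 0.1)):
    """Map each perturbation budget to the share of examples still classified correctly."""
    results = {}
    for eps in epsilons:
        correct, total = 0, 0
        for images, labels in loader:
            adv = images if eps == 0.0 else attack(model, images, labels, eps)
            with torch.no_grad():
                preds = model(adv).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
        results[eps] = correct / total
    return results

# Usage (hypothetical): a model whose accuracy collapses as eps grows is
# "brittle" in exactly the sense the article describes.
# print(robust_accuracy(model, test_loader, attack=my_batched_fgsm))
```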
Another excerpt, near the end:
"Researchers in the field say they are making progress in fixing deep learning’s flaws, but acknowledge that they’re still groping for new techniques to make the process less brittle. There is not much theory behind deep learning, says Song. “If something doesn’t work, it’s difficult to figure out why,” she says. “The whole field is still very empirical. You just have to try things.”"