I'm working on adversarial machine learning and have read several papers on the topic, including:
- Poisoning Attacks on SVMs: https://arxiv.org/pdf/1206.6389.pdf
- Adversarial Label Flips on Support Vector Machines
However, I haven't been able to find any literature on data poisoning attacks against SVMs trained with manifold regularization (e.g., Laplacian SVMs). Does anyone know of work in this area?
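For context, this is the kind of attack I mean. The sketch below is my own minimal illustration, not code from either paper: it flips a random subset of training labels before fitting a scikit-learn SVM and compares test accuracy against a cleanly trained model (the papers select which labels to flip adversarially, which is far more damaging than the random choice used here).

```python
# Minimal label-flip poisoning sketch (random flips stand in for the
# adversarial flip selection studied in the papers above).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Baseline: SVM trained on clean labels.
clean = SVC(kernel="linear").fit(X_tr, y_tr)

# Poisoned: flip 20% of the training labels, then retrain.
y_pois = y_tr.copy()
idx = rng.choice(len(y_pois), size=int(0.2 * len(y_pois)), replace=False)
y_pois[idx] = 1 - y_pois[idx]
poisoned = SVC(kernel="linear").fit(X_tr, y_pois)

print("clean acc:   ", clean.score(X_te, y_te))
print("poisoned acc:", poisoned.score(X_te, y_te))
```

What I can't find is an analysis of this setting when the training objective also includes a graph-Laplacian (manifold) regularization term over unlabeled data.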