Augmenting Histological Images with Adversarial Attacks
Abstract:
Neural networks have been shown to be vulnerable to adversarial attacks: images with carefully crafted adversarial noise that is imperceptible to the human eye. In medical imaging, this vulnerability poses a major threat to predictions made by deep neural network solutions. In this paper, we propose a pipeline for augmenting a small histological image dataset using state-of-the-art data generation methods and demonstrate an increase in the accuracy of a neural classifier trained on the augmented dataset when it is faced with adversarial images. When trained on the non-augmented dataset, the neural network achieves an accuracy of 55.24% on a test set with added adversarial noise; when trained on the augmented dataset, it achieves an accuracy of 97.40% on the same test set.
Keywords:
Adversarial Attacks, Deep Learning, Image Classification, Histology, Tissue Recognition
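For concreteness, the sketch below shows one common way the kind of imperceptible adversarial noise described in the abstract can be generated. The abstract does not name a specific attack, so the fast gradient sign method (FGSM) is used here purely as a representative example; the PyTorch framing, the `fgsm_attack` name, and the `epsilon` perturbation budget are all illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module,
                images: torch.Tensor,
                labels: torch.Tensor,
                epsilon: float = 8 / 255) -> torch.Tensor:
    """Return adversarially perturbed copies of `images` using FGSM.

    `epsilon` bounds the per-pixel perturbation; small values keep the
    noise imperceptible to the human eye. All names and defaults here
    are illustrative assumptions, not taken from the paper.
    """
    images = images.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(images), labels)
    loss.backward()
    # Step each pixel in the direction that most increases the loss,
    # then clamp back to the valid [0, 1] pixel range.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```

A robustness evaluation like the one reported in the abstract would then compare classifier accuracy on a clean test set against accuracy on the same images after such a perturbation is applied.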