Generating Attachable Adversarial Patches to Cause Object Misidentification in Neural Networks
Abstract
An adversarial example is an input that causes a network to misclassify through a small perturbation, one that is typically harmless to human perception but fatal to neural networks. At present there is no defense that resists all such perturbation attacks, which raises further doubts about the robustness of network architectures. This work proposes three sub-models for attacking neural networks. The attack scope model effectively narrows the attack region and guides the adversarial algorithm to perform a precise, localized perturbation. The adversarial attack models generate different adversarial patches through adversarial algorithms. The resulting patches are compact and can be manufactured physically; such a patch can be attached directly to the original image to disturb the target model efficiently and accurately. A small generated patch achieves an attack success rate of 70.1%. Moreover, the proposed method can be applied to different neural networks.
Keywords - Deep Learning, Neural Network, Adversarial Attack, Adversarial Patch.
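To make the abstract's pipeline concrete, the sketch below illustrates the general idea of restricting a perturbation to a small attachable region and optimizing it to mislead a classifier. It is a minimal illustration under assumed details (a stand-in torchvision ResNet-18 as the target model, a fixed square mask as the attack scope, a targeted cross-entropy loss, and placeholder data); it is not the paper's exact attack scope or adversarial sub-models.

```python
# Minimal adversarial-patch sketch (assumptions: stand-in model, fixed square
# mask as the "attack scope", targeted cross-entropy loss, placeholder image).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()          # stand-in target classifier
for p in model.parameters():
    p.requires_grad_(False)

image = torch.rand(1, 3, 224, 224)             # placeholder original image
target_class = torch.tensor([42])              # class the attacker wants predicted

# Attack scope: restrict the perturbation to a small square region.
mask = torch.zeros_like(image)
mask[:, :, 80:130, 80:130] = 1.0

patch = torch.rand_like(image, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.05)

for step in range(200):
    # Attach the patch only inside the masked region of the original image.
    patched = image * (1 - mask) + patch.clamp(0, 1) * mask
    logits = model(patched)
    loss = F.cross_entropy(logits, target_class)  # push prediction toward target
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

patched = image * (1 - mask) + patch.clamp(0, 1) * mask
print("prediction on patched image:", model(patched).argmax(1).item())
```

In this sketch the mask plays the role of the attack scope (limiting where the perturbation may appear), while the gradient-based loop plays the role of the adversarial attack model that produces the patch; a physically attachable patch would additionally require printing and robustness considerations that the abstract only alludes to.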