Headless Horseman: Adversarial Attacks on Transfer Learning Models
- Resource Type
- Conference
- Authors
- Abdelkader, Ahmed; Curry, Michael J.; Fowl, Liam; Goldstein, Tom; Schwarzschild, Avi; Shu, Manli; Studer, Christoph; Zhu, Chen
- Source
- ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3087-3091, May 2020
- Subject
- Signal Processing and Analysis; Training; Transfer learning; Signal processing; Feature extraction; Security; Task analysis; Speech processing; adversarial attack; synthetic labels; implicit regularization
- ISSN
- 2379-190X
- Abstract
- Transfer learning facilitates the training of task-specific classifiers using pre-trained models as feature extractors. We present a family of transferable adversarial attacks against such classifiers that are generated without access to the classification head; we call these headless attacks. We first demonstrate successful transfer attacks against a victim network using only its feature extractor, which motivates a label-blind adversarial attack that requires no information about the victim's class-label space. Our attack lowers the accuracy of a ResNet18 trained on CIFAR10 by over 40%.
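The core idea of a headless, label-blind attack can be sketched in a few lines: perturb the input so that its representation under the shared feature extractor moves away from the clean representation, using no classification head and no labels. The sketch below is an illustrative assumption of how such an attack might look (a PGD-style loop maximizing squared feature distance), not the authors' exact method; the toy `feature_extractor`, the epsilon budget, and the step schedule are all placeholders.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in feature extractor; a real attack would use a pre-trained trunk
# (e.g. a ResNet18 with its classification head removed).
feature_extractor = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
for p in feature_extractor.parameters():
    p.requires_grad_(False)

def headless_attack(x, eps=8 / 255, step=2 / 255, iters=10):
    """PGD-style ascent on the squared distance between clean and
    adversarial features -- a label-blind objective."""
    with torch.no_grad():
        clean_feat = feature_extractor(x)
        # Random start inside the L-inf ball, projected to valid pixels.
        delta = torch.empty_like(x).uniform_(-eps, eps)
        delta = (x + delta).clamp(0, 1) - x
    delta.requires_grad_(True)
    for _ in range(iters):
        adv_feat = feature_extractor(x + delta)
        loss = ((adv_feat - clean_feat) ** 2).sum()  # no labels, no head
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()          # signed-gradient step
            delta.clamp_(-eps, eps)                    # stay in the eps ball
            delta.copy_((x + delta).clamp(0, 1) - x)   # keep a valid image
        delta.grad = None
    return (x + delta).detach()

x = torch.rand(1, 3, 32, 32)  # one CIFAR10-sized input
x_adv = headless_attack(x)
```

Because the objective depends only on the feature extractor, the same perturbation can then be transferred to any downstream classifier built on those features, which is what makes the attack "headless".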