This paper proposes a black-box extraction attack on pre-trained image classifiers that rebuilds a functionally equivalent model with high similarity. Common model extraction attacks feed a large number of training samples to the target classifier, which is time-consuming and redundant, and the results depend heavily on both the selected training samples and the target model: with inappropriate sample selection, the extracted model may capture only part of the crucial features. To eliminate these uncertainties, we propose the VAE-kdtree attack model, which removes the high dependency between the selected training samples and the target model. It not only saves redundant computation but also extracts critical decision boundaries more accurately in image classification. The VAE-kdtree model achieves around 90% similarity on MNIST and around 80% similarity on MNIST-Fashion against both a target convolutional network model and a target support vector machine model. Its performance could be further improved by adopting a higher-dimensional kd-tree space.
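The pipeline the abstract describes, querying a black-box classifier with VAE-generated samples and using a kd-tree over those samples to locate decision-boundary points, can be sketched roughly as follows. Everything here is an illustrative stand-in, not the paper's implementation: `vae_decode` replaces a trained VAE decoder with an identity map on Gaussian latent codes, `target_model` replaces the black-box classifier with a toy sign rule, and the boundary is approximated by midpoints of nearest-neighbour pairs that receive different labels.

```python
import random
from math import dist

def vae_decode(z):
    # Stand-in for a trained VAE decoder (assumption: identity on 2-D latents).
    return z

def target_model(x):
    # Stand-in black-box classifier: label by the sign of the first coordinate.
    return int(x[0] > 0)

def build_kdtree(points, depth=0):
    """Recursively build a kd-tree over 2-D points, cycling the split axis."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "axis": axis,
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def nearest(node, target, best=None):
    """Nearest-neighbour search with axis pruning, skipping the query point."""
    if node is None:
        return best
    point, axis = node["point"], node["axis"]
    if point != target and (best is None or dist(point, target) < dist(best, target)):
        best = point
    diff = target[axis] - point[axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, target, best)
    # Only descend the far side if the splitting plane is closer than the best hit.
    if best is None or abs(diff) < dist(best, target):
        best = nearest(far, target, best)
    return best

# Generate synthetic queries, label them via the (black-box) target model,
# and index them in the kd-tree.
rng = random.Random(0)
samples = [vae_decode((rng.gauss(0, 1), rng.gauss(0, 1))) for _ in range(300)]
labels = {p: target_model(p) for p in samples}
tree = build_kdtree(samples)

# Approximate decision-boundary points as midpoints of nearest-neighbour
# pairs that the target model labels differently.
boundary = []
for p in samples:
    q = nearest(tree, p)
    if labels[p] != labels[q]:
        boundary.append(((p[0] + q[0]) / 2, (p[1] + q[1]) / 2))

print(len(boundary), sum(abs(b[0]) for b in boundary) / len(boundary))
```

For this toy target the recovered midpoints cluster near the true boundary (first coordinate close to zero), which is the intuition behind using the kd-tree to extract critical boundaries cheaply instead of querying with a large redundant sample set.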
Tianqi Wen, Haibo Hu, and Huadi Zheng
"An extraction attack on image recognition model using VAE-kdtree model", Proc. SPIE 11766, International Workshop on Advanced Imaging Technology (IWAIT) 2021, 117660N (13 March 2021); https://doi.org/10.1117/12.2590844