Improving the robustness of binarized neural networks using the EFAT method
DOI: https://doi.org/10.54939/1859-1043.j.mst.CSCE5.2021.14-23

Keywords: BNNs; Binarized Neural Networks; Adversarial Attack; Adversarial Defence; Fast Adversarial Training.

Abstract
In recent years, with the explosion of research in artificial intelligence, deep learning models based on convolutional neural networks (CNNs) have become one of the most promising architectures for practical applications thanks to their reasonably good achievable accuracy. However, CNNs typically have a large number of parameters and a heavy computational workload, leading to high energy consumption for both training and inference. The binarized neural network (BNN) model has recently been proposed to overcome this drawback. BNNs use binary representations for inputs and weights, which inherently reduces memory requirements and simplifies computation while still maintaining acceptable accuracy. BNNs are therefore well suited for realizing Edge-AI applications on resource- and energy-constrained platforms such as embedded and mobile devices. Because both CNNs and BNNs are built from layers of linear transformations, they can be fooled by adversarial examples. This topic has been actively studied recently, but most existing work targets CNNs. In this work, we examine the impact of adversarial attacks on BNNs and propose a solution that improves the robustness of BNNs against such attacks. Specifically, we train the network with an Enhanced Fast Adversarial Training (EFAT) method, which makes the BNN more robust against major adversarial attack models while requiring only a very short training time. In experiments on the MNIST dataset, EFAT raised the accuracy of our trained BNN under Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks from 31.34% and 0.18% to 96.96% and 85.08%, respectively.
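To make the two ingredients of this approach concrete, the sketch below illustrates, in PyTorch, the sign-based binarization with a straight-through estimator that is commonly used to train BNNs [2], followed by a single fast-adversarial-training step in the spirit of EFAT, based on the FGSM-with-random-initialization recipe of Wong et al. [9]. This is a minimal illustration under assumed settings (a classifier called model, inputs normalized to [0, 1], illustrative eps and step size), not the exact implementation used in the paper.

    import torch
    import torch.nn.functional as F

    class BinarizeSTE(torch.autograd.Function):
        # Sign binarization with a straight-through estimator (STE), the
        # standard trick for training BNNs [2]: the forward pass quantizes
        # to {-1, +1} (folding x == 0 into +1); the backward pass lets the
        # gradient through unchanged wherever |x| <= 1.
        @staticmethod
        def forward(ctx, x):
            ctx.save_for_backward(x)
            return torch.where(x >= 0, torch.ones_like(x), -torch.ones_like(x))

        @staticmethod
        def backward(ctx, grad_output):
            (x,) = ctx.saved_tensors
            return grad_output * (x.abs() <= 1).to(grad_output.dtype)

    def fast_adv_train_step(model, x, y, optimizer, eps=0.3, alpha=0.375):
        # One single-step adversarial training iteration in the spirit of
        # EFAT. Start from a random perturbation inside the eps-ball: the
        # key ingredient that keeps single-step (FGSM) adversarial training
        # from collapsing [9].
        delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
        F.cross_entropy(model(x + delta), y).backward()
        # One FGSM step on the perturbation, projected back onto the eps-ball.
        delta = (delta + alpha * delta.grad.sign()).clamp(-eps, eps).detach()
        # Ordinary optimizer step on the resulting adversarial example.
        optimizer.zero_grad()
        loss = F.cross_entropy(model((x + delta).clamp(0.0, 1.0)), y)
        loss.backward()
        optimizer.step()
        return loss.item()

In a BNN, BinarizeSTE.apply would wrap the weights and activations of each convolutional or fully connected layer. The values eps = 0.3 and alpha = 1.25 * eps are commonly used for MNIST in the fast-adversarial-training literature and are assumptions here, not the paper's reported hyperparameters.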
References
[1]. Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. "XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks." ECCV (2016).
[2]. Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. "Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1." arXiv preprint arXiv:1602.02830 (2016).
[3]. Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. "Enabling AI at the edge with XNOR-networks." Communications of the ACM, Vol. 63 (2020), pp. 83-90.
[4]. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. "Intriguing properties of neural networks." arXiv preprint arXiv:1312.6199 (2013).
[5]. Alexey Kurakin, Ian Goodfellow, and Samy Bengio. "Adversarial examples in the physical world." arXiv preprint arXiv:1607.02533 (2016).
[6]. Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. "Explaining and harnessing adversarial examples." arXiv preprint arXiv:1412.6572 (2014).
[7]. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. "Towards deep learning models resistant to adversarial attacks." arXiv preprint arXiv:1706.06083 (2017).
[8]. Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John Dickerson, Christoph Studer, Larry S. Davis, Gavin Taylor, and Tom Goldstein. "Adversarial training for free!" arXiv preprint arXiv:1904.12843 (2019).
[9]. Eric Wong, Leslie Rice, and J. Zico Kolter. "Fast is better than free: Revisiting adversarial training." arXiv preprint arXiv:2001.03994 (2020).
[10]. Angus Galloway, Graham W. Taylor, and Medhat Moussa. "Attacking binarized neural networks." arXiv preprint arXiv:1711.00449 (2017).
[11]. Manoj Rohit Vemparala, Alexander Frickenstein, Nael Fasfous, Lukas Frickenstein, Qi Zhao, Sabine Kuhn, Daniel Ehrhardt et al. "BreakingBED: Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks." arXiv preprint arXiv:2103.08031 (2021).
[12]. Lukas Geiger. "Binarized Neural Networks on microcontrollers." tinyML Talks (2021).