Multimodality fire and smoke detection system
DOI: https://doi.org/10.54939/1859-1043.j.mst.97.2024.138-147

Keywords: Convolutional neural network; Deep learning; Fire warning; Sensor; Fire detection; Multimodality.

Abstract
Early smoke and fire detection is critical to preventing serious harm to people and property. A common solution is to use physical sensors, such as gas, smoke, and temperature sensors, that respond to the by-products of fire. However, physical sensors alone detect fires more slowly than approaches that combine multiple cues, especially when computer vision is added. In this paper, we propose a multimodal fire and smoke detection solution that combines physical sensors (Sensor) and image sensors (Camera). In particular, our proposed method applies artificial intelligence (AI) and the Internet of Things (IoT) to detect smoke and fire in indoor environments. A knowledge distillation (KD) algorithm transfers knowledge from a full YOLO teacher model to a reduced YOLO student model whose detection accuracy is at most 10% lower than that of the full version. Because the distilled model is simpler, its response time is up to 8.22 ms and 51.56 ms faster than the full model when running on a GPU and a CPU, respectively.
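The knowledge transfer described above can be illustrated with Hinton-style soft-target distillation, in which the student is trained to match the teacher's temperature-softened class distribution. The sketch below is a minimal, generic example over classification logits, not the paper's actual YOLO training pipeline; the function names and the temperature value are our own illustrative choices.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax: a higher T produces a softer
    # (more uniform) probability distribution over classes.
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(teacher_logits, student_logits, T=4.0):
    # Soft-target distillation loss: KL divergence between the
    # temperature-softened teacher and student distributions,
    # scaled by T^2 to keep gradient magnitudes comparable.
    p = softmax(teacher_logits, T)  # teacher "soft labels"
    q = softmax(student_logits, T)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return T * T * kl
```

When the student reproduces the teacher's logits the loss is zero, and it grows as the two distributions diverge; in practice this term is combined with the ordinary detection loss on ground-truth labels.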