Application of deep learning in computer vision for the target detection-tracking system on shoulder-training flight vehicles
DOI: https://doi.org/10.54939/1859-1043.j.mst.94.2024.159-165
Keywords: Target detection and tracking system; Digital camera system; Deep learning model; YOLO algorithm.
Abstract
The paper presents an application of a deep learning model in computer vision for the target detection and tracking system on shoulder-training flight vehicles. The purpose of the research is to use a digital camera system to simulate the detection and tracking functions of the seeker of a shoulder-launched flight vehicle, as a basis for building a shoulder-training flight vehicle. To detect and track moving targets outdoors, the 3D target detection capability must meet accuracy requirements; traditional image processing algorithms, however, are not effective for this task. To solve this problem, the article focuses on applying the YOLO deep learning model to the digital camera tracking system. Using the YOLO version 8 model, the authors carried out data collection, image processing, and model training, and evaluated the target detection and tracking capability of the digital camera system. At the same time, a newly designed 72 mm shoulder-training flight vehicle with a digital camera tracking system was tested outdoors. As a result, the digital camera system detects and tracks targets similarly to the 72 mm shoulder-launched flight vehicle within a specifically limited range.
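As an illustration of the kind of pipeline the abstract describes, the sketch below shows how a YOLOv8 model can be run on a live camera stream to detect and track a target and to measure its offset from the frame centre, in the spirit of a seeker-style tracking loop. It is a minimal sketch assuming the open-source Ultralytics YOLOv8 Python package; the weights file, camera index, and offset calculation are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of a YOLOv8 detect-and-track loop on a digital camera stream.
# The Ultralytics package, pretrained weights and camera index are assumptions.
from ultralytics import YOLO


def run_tracking(camera_index: int = 0, weights: str = "yolov8n.pt") -> None:
    model = YOLO(weights)  # load a (pre)trained YOLOv8 detection model

    # model.track() runs detection plus a built-in multi-object tracker
    # frame by frame on the camera stream; stream=True yields results lazily.
    for result in model.track(source=camera_index, stream=True):
        if result.boxes is None or len(result.boxes) == 0:
            continue  # no target detected in this frame

        h, w = result.orig_shape  # frame height and width in pixels

        # Take the highest-confidence detection as the tracked target.
        best = result.boxes.conf.argmax()
        x1, y1, x2, y2 = result.boxes.xyxy[best].tolist()
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0

        # Offset of the target from the optical axis (frame centre), normalised
        # to [-1, 1]; a seeker-style control loop would drive this toward zero.
        ex = (cx - w / 2.0) / (w / 2.0)
        ey = (cy - h / 2.0) / (h / 2.0)
        track_id = int(result.boxes.id[best]) if result.boxes.id is not None else -1
        print(f"target id={track_id} offset=({ex:+.2f}, {ey:+.2f})")


if __name__ == "__main__":
    run_tracking()
```

For the model-training step mentioned in the abstract, the same package exposes a standard training entry point, e.g. model.train(data="targets.yaml", epochs=100, imgsz=640) on collected and labelled outdoor imagery; the dataset name and hyperparameters here are placeholders, not values reported by the authors.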