DETECTION OF PERSONAL PROTECTIVE EQUIPMENT COMPLIANCE AT FUEL OIL TERMINALS USING CONVOLUTIONAL NEURAL NETWORKS (CNN)

  • Seni Patrisillia, Bina Nusantara University
  • Gede Putra Kusuma, Bina Nusantara University
Keywords: Faster R-CNN, Fuel Terminal Monitoring, Object Detection, PPE, SSD

Abstract

Currently, personal protective equipment (PPE) non-compliance is recorded manually through voluntary incident reports, making the resulting data subjective and prone to omissions. This situation motivates the application of object detection technology to improve both the consistency of monitoring and the accuracy of findings related to PPE compliance. This study compares three object detection architectures (Faster R-CNN, SSD, and YOLOv9) to determine the most suitable model for deployment in fuel oil terminal environments, particularly in red-zone areas with a high risk of explosive atmospheres. The research method involved collecting 5,000 images from incident reports and public data platforms, followed by annotation, pre-processing, and augmentation to improve model robustness. Faster R-CNN and SSD were each trained with ResNet50 and ResNet101 backbones, while YOLOv9 was tested in its YOLOv9-C and YOLOv9-E variants. Model performance was analyzed quantitatively using mean average precision (mAP50) and loss-versus-epoch curves, and qualitatively through prediction visualization under real-world conditions such as small objects, visual obstructions, and low lighting. The results showed that SSD delivered the most stable performance, with a balanced mAP50 across training, validation, and test data and consistent visual predictions across varied field conditions. In conclusion, the SSD architecture is the most suitable choice for implementing PPE compliance monitoring at fuel terminals, owing to its acceptable accuracy and robustness to varying environmental conditions.
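The mAP50 metric used above scores each class by average precision, counting a prediction as a true positive when its intersection-over-union (IoU) with an unmatched ground-truth box reaches 0.5, then averages the per-class APs. A minimal sketch of that computation is given below; the function names, box format ([x1, y1, x2, y2]), and sample boxes are illustrative assumptions, not taken from the paper, and real evaluations (e.g. COCO-style tooling) handle per-image matching and difficult/ignored boxes more carefully.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0


def average_precision_50(predictions, ground_truths):
    """AP at IoU >= 0.5 for a single class.

    predictions: list of (confidence, box); ground_truths: list of boxes.
    mAP50 is then the mean of this value over all classes (helmet, vest, ...).
    """
    preds = sorted(predictions, key=lambda p: -p[0])  # high confidence first
    matched = [False] * len(ground_truths)
    tp = fp = 0
    precisions, recalls = [], []
    for _, box in preds:
        # Greedily match against the best-overlapping unmatched ground truth.
        best, best_j = 0.0, -1
        for j, gt in enumerate(ground_truths):
            if not matched[j] and iou(box, gt) > best:
                best, best_j = iou(box, gt), j
        if best >= 0.5:
            matched[best_j] = True
            tp += 1
        else:
            fp += 1
        precisions.append(tp / (tp + fp))
        recalls.append(tp / len(ground_truths))
    # Integrate the precision envelope over recall (all-point interpolation).
    ap, prev_r = 0.0, 0.0
    for i, r in enumerate(recalls):
        ap += max(precisions[i:]) * (r - prev_r)
        prev_r = r
    return ap
```

For example, a single confident prediction that exactly overlaps the only ground-truth box yields an AP of 1.0, while a prediction with no overlap yields 0.0.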

Published
2025-12-16
How to Cite
Patrisillia, S., & Kusuma, G. P. (2025). DETEKSI KEPATUHAN ALAT PELINDUNG DIRI DI TERMINAL BAHAN BAKAR MINYAK MENGGUNAKAN CONVOLUTIONAL NEURAL NETWORKS (CNN). TEKNIMEDIA: Teknologi Informasi Dan Multimedia, 6(2), 311-322. https://doi.org/10.46764/teknimedia.v6i2.372