An enhanced deep learning-based feature extraction framework for moving object detection
Document Type
Article
Publication Title
Discover Applied Sciences
Abstract
Detecting changes is a crucial step in computer vision-based monitoring systems. However, the primary objective of these systems is to accurately identify moving objects, ensuring their applicability in diverse real-world scenarios. Researchers worldwide have developed various methods for change detection, yet most current methods still need improvement on challenging datasets. This article introduces an innovative Moving Object Detection Algorithm (MODA) for detecting moving objects on the benchmark CD-Net 2014, WallFlower, Star, STERE, DUTS, NLPR, NJU2K, and SIP datasets. The designed approach utilizes an encoder-decoder model, where the encoder incorporates a modified ResNet-50 model with a transfer learning strategy that retains subtle details effectively. The designed Multi-Scale Feature Pooling Framework (MSFP) preserves multi-scale and multi-dimensional features across different scales. The developed decoder architecture consists of stacked transposed convolutional layers tasked with translating the features back into the image. To evaluate the efficacy of the designed scheme, analyses were carried out comparing it with forty-two existing methods. The results obtained from the developed algorithm are validated through both subjective and objective assessments, and the developed model outperforms the forty-two existing techniques in terms of the considered measures. On the slow-moving object dataset, the model achieved an average F-measure of 98.59% and an average misclassification error of 0.83. On the CD-Net 2014 dataset, it achieved an average precision of 0.8886, an average recall of 0.8583, an average F-measure of 0.8500, and an average percentage of wrong classification of 0.8200. Further, the P-test and the average Intersection over Union are also reported. Similarity metrics are computed for the Star dataset, while the WallFlower dataset is evaluated using the average F-measure. The developed approach also provides better accuracy on unseen video sequences.
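The abstract describes an encoder-decoder layout with a transfer-learned ResNet-50 encoder, a multi-scale feature pooling stage, and a transposed-convolution decoder. The following is a minimal sketch of that layout, not the authors' exact architecture: it assumes a PyTorch implementation, uses torchvision's pretrained ResNet-50 as the encoder, and substitutes a simple multi-scale adaptive-pooling block for the paper's MSFP; all layer sizes and pooling scales are illustrative assumptions.

    # Illustrative sketch only; layer widths, pooling scales, and names are
    # assumptions, not the published MODA design.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision import models

    class MSFP(nn.Module):
        """Stand-in for the Multi-Scale Feature Pooling block: pool encoder
        features at several scales, project each, upsample, and concatenate."""
        def __init__(self, in_ch, out_ch, scales=(1, 2, 4)):
            super().__init__()
            self.scales = scales
            self.proj = nn.ModuleList(nn.Conv2d(in_ch, out_ch, 1) for _ in scales)

        def forward(self, x):
            h, w = x.shape[-2:]
            feats = []
            for s, proj in zip(self.scales, self.proj):
                pooled = F.adaptive_avg_pool2d(x, (max(h // s, 1), max(w // s, 1)))
                feats.append(F.interpolate(proj(pooled), size=(h, w),
                                           mode="bilinear", align_corners=False))
            return torch.cat(feats, dim=1)

    class MovingObjectNet(nn.Module):
        def __init__(self, num_classes=1):
            super().__init__()
            backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
            # Encoder: pretrained ResNet-50 trunk (transfer learning), output 2048 ch at 1/32 scale.
            self.encoder = nn.Sequential(*list(backbone.children())[:-2])
            self.msfp = MSFP(2048, 256, scales=(1, 2, 4))
            # Decoder: stacked transposed convolutions restoring the input resolution.
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(256 * 3, 256, 2, stride=2), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(256, 128, 2, stride=2), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(128, 64, 2, stride=2), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(32, num_classes, 2, stride=2),
            )

        def forward(self, x):
            return torch.sigmoid(self.decoder(self.msfp(self.encoder(x))))

    # Example: a 3x224x224 frame yields a 1x224x224 foreground probability map.
    mask = MovingObjectNet()(torch.randn(1, 3, 224, 224))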
DOI
10.1007/s42452-025-07433-z
Publication Date
7-1-2025
Recommended Citation
Panigrahi, Upasana; Sahoo, Prabodh Kumar; Panda, Manoj Kumar; and Samantaray, Aswini Kumar, "An enhanced deep learning-based feature extraction framework for moving object detection" (2025). Open Access archive. 13039.
https://impressions.manipal.edu/open-access-archive/13039