Integrating Multi-Sensors and AI to Develop Improved Surveillance Systems

Document Type

Article

Publication Title

Journal of Robotics and Control (JRC)

Abstract

This paper explores advancements in surveillance systems, focusing on the integration of multi-sensor and AI technologies in urban and environmental monitoring. It highlights the fusion of data sources such as video feeds, LiDAR, and wireless networks for enhanced real-time surveillance in complex environments. Artificial intelligence (AI) plays a critical role in anomaly detection, object identification, and behavior analysis, improving response times in high-traffic and security-sensitive areas. However, these technologies raise privacy concerns, emphasizing the need for responsible data management and ethical frameworks. There is also the probability of false positives, which can trigger unnecessary action and disrupt normal life. These technologies carry high financial costs and must therefore be used judiciously. In the current study, human surveillance is carried out in indoor environments using two AI algorithms: YOLOv5 and R-CNN. The results of these algorithms can be fused with LiDAR data for better decision making. R-CNN produced better results than YOLOv5, and fusion with the sensor data led to accurate detection of humans in indoor environments. The future of surveillance should focus on balancing safety and personal rights while adapting policies to ensure privacy and accountability in an increasingly tech-driven world.
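The abstract describes fusing detector output with LiDAR data but gives no implementation details. The sketch below is a minimal, illustrative example of one way such a fusion could be wired up in Python: a pretrained YOLOv5 model (loaded through the public Ultralytics torch.hub entry point) produces person detections, and each bounding box is cross-checked against a LiDAR depth map assumed to be already registered to the camera frame. The `fuse_with_lidar` helper and the `max_range_m` threshold are hypothetical choices for illustration, not details taken from the paper.

```python
import numpy as np
import torch

# Pretrained YOLOv5 model from the public Ultralytics hub (illustrative choice,
# not necessarily the configuration used in the paper).
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

def detect_people(image):
    """Run YOLOv5 on an image and keep only 'person' detections."""
    results = model(image)
    df = results.pandas().xyxy[0]   # columns: xmin, ymin, xmax, ymax, confidence, class, name
    return df[df['name'] == 'person']

def fuse_with_lidar(detections, depth_map, max_range_m=10.0):
    """Confirm camera detections against a camera-registered LiDAR depth map.

    depth_map: 2-D array of per-pixel range values in metres, assumed to be
    projected into the camera frame beforehand. A detection is kept only if
    the median range inside its bounding box is finite and within the
    (assumed) indoor sensing range.
    """
    confirmed = []
    for _, det in detections.iterrows():
        x1, y1, x2, y2 = (int(det[k]) for k in ('xmin', 'ymin', 'xmax', 'ymax'))
        patch = depth_map[y1:y2, x1:x2]
        depth = np.nanmedian(patch) if patch.size else np.nan
        if np.isfinite(depth) and 0.0 < depth <= max_range_m:
            confirmed.append({'box': (x1, y1, x2, y2),
                              'confidence': float(det['confidence']),
                              'range_m': float(depth)})
    return confirmed
```

The same cross-check could be applied to R-CNN outputs; the key design point is that the LiDAR range acts as an independent plausibility test on each camera detection, which is one simple way a multi-sensor fusion step can reduce false positives.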

First Page

980

Last Page

994

DOI

10.18196/jrc.v6i2.25596

Publication Date

1-1-2025
