YLOMF: You Look Once for Multi-Feature—A Multi-Feature Multi-Task Network for Intelligent Real-Time Perception in Autonomous Driving

Document Type

Article

Publication Title

IEEE Access

Abstract

Road traffic accidents claim 1.35 million lives annually, making them the leading cause of death among individuals aged 5–29 years, according to the World Health Organization (WHO). Low- and middle-income countries, despite having only 60% of the world's vehicles, account for 93% of fatalities due to speeding, impaired driving, inadequate safety measures, and poor infrastructure. In the era of Industry 4.0, intelligent driving assistance systems offer a transformative means of mitigating human errors, particularly those caused by fatigue or drowsiness. This study presents You Look Once for Multi-Feature (YLOMF), a novel single-stage, vision-based hierarchical model for autonomous driving. It features modular feature-component heads, enabling customizable deployment and real-time efficiency on edge devices. Trained on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset and evaluated on the Berkeley DeepDrive dataset (BDD100K), the Indian Driving Dataset (IDD), and the Audi Autonomous Driving Dataset (A2D2), YLOMF exhibits robust cross-domain generalization. It achieves 85.1% IoU for lane segmentation, 81.56% accuracy in 2D object detection, and 31.5% (easy), 26.38% (medium), and 23.9% (hard) accuracy in 3D instance detection, surpassing state-of-the-art benchmarks. Depth estimation performance was validated using absolute relative error, squared relative error, RMSE, and log RMSE. By delivering a computationally efficient and highly accurate perception framework, YLOMF enhances scene understanding and object recognition in real time. Its integration into autonomous systems offers significant potential for reducing road accidents and improving overall traffic safety in safety-critical environments.
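Note: the abstract cites IoU for lane segmentation and four depth-error metrics without stating their formulas. The sketch below shows the conventional definitions of these metrics as commonly used in depth-estimation and segmentation benchmarks; the function names and the pred/gt arrays are illustrative assumptions, not code from the paper.

import numpy as np

def depth_error_metrics(pred, gt):
    # Standard monocular depth-estimation error metrics.
    # `pred` and `gt` are same-shaped arrays of predicted and
    # ground-truth depths with strictly positive values.
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    abs_rel = np.mean(np.abs(gt - pred) / gt)                       # absolute relative error
    sq_rel = np.mean(((gt - pred) ** 2) / gt)                       # squared relative error
    rmse = np.sqrt(np.mean((gt - pred) ** 2))                       # root-mean-square error
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))   # RMSE in log space
    return abs_rel, sq_rel, rmse, rmse_log

def lane_iou(pred_mask, gt_mask):
    # Intersection over union for binary lane-segmentation masks.
    pred_mask = np.asarray(pred_mask, dtype=bool)
    gt_mask = np.asarray(gt_mask, dtype=bool)
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return intersection / union if union > 0 else 0.0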

First Page

110867

Last Page

110881

DOI

10.1109/ACCESS.2025.3583341

Publication Date

1-1-2025
