Elevating Amodal Segmentation Using ASH-Net Architecture for Accurate Object Boundary Estimation

Document Type

Article

Publication Title

IEEE Access

Abstract

Amodal segmentation is a critical task in computer vision: it requires accurately estimating object boundaries that extend beyond occlusion. This paper introduces ASH-Net, a novel architecture for amodal segmentation named for its Amodal Segmentation Head. ASH-Net comprises a ResNet-50 backbone, a Feature Pyramid Network middle layer, and the Amodal Segmentation Head. Evaluation spans three diverse datasets, COCOA-cls, KINS, and D2SA, providing a comprehensive analysis of ASH-Net's capabilities. The results demonstrate that ASH-Net accurately estimates object boundaries beyond occlusion across all three datasets, achieving an Average Precision of 62.15% on COCOA-cls, 72.58% on KINS, and 91.4% on D2SA. In extensive evaluation using average precision and average recall metrics, ASH-Net outperforms state-of-the-art models. These findings highlight its ability to overcome occlusion and delineate object boundaries accurately. The research also identifies optimal training parameters, such as coefficient dimensions and aspect ratios, that significantly enhance segmentation performance while maintaining computational efficiency. The proposed ASH-Net architecture paves the way for improved object recognition, enhanced scene understanding, and practical applications across domains.
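The average precision and average recall metrics cited in the abstract are built on mask intersection-over-union (IoU), which measures how well a predicted segment overlaps the full (amodal) ground-truth extent of an object, including its occluded portion. The following minimal sketch is illustrative only (it is not the paper's evaluation code); masks are represented as sets of pixel coordinates for clarity.

```python
def mask_iou(pred, gt):
    """Intersection-over-union of two binary masks, each given as a set of
    (row, col) pixel coordinates. Returns 0.0 when both masks are empty."""
    inter = len(pred & gt)
    union = len(pred | gt)
    return inter / union if union else 0.0

# Toy example: the amodal ground truth covers a full 4x4 object,
# but the prediction recovers only the visible (left) half.
gt_amodal = {(r, c) for r in range(4) for c in range(4)}  # 16 pixels
pred = {(r, c) for r in range(4) for c in range(2)}       # 8 pixels
print(round(mask_iou(pred, gt_amodal), 2))  # -> 0.5
```

A prediction counts as a true positive at a given IoU threshold (e.g. 0.5) only if its IoU with the amodal ground truth exceeds that threshold, so methods that segment only the visible region are penalized under occlusion.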

First Page

83377

Last Page

83389

DOI

10.1109/ACCESS.2023.3301724

Publication Date

1-1-2023