LeSegGAN: A Hybrid Attention-Based GAN for Accurate Lesion Segmentation in Dermatological Images

Document Type

Article

Publication Title

IEEE Access

Abstract

Accurate segmentation of skin lesions from dermatological images is essential for the early detection of melanoma and other skin cancers. Conventional methods based on convolutional neural networks (CNNs) and transformer architectures often struggle to capture both local and global contextual features, delineate irregular lesion boundaries, and remain robust against artifacts such as hair, shadows, and illumination variations. To overcome these challenges, we introduce LeSegGAN, a hybrid attention-enhanced generative adversarial network (GAN) framework for robust skin lesion segmentation. The generator combines convolutional and inception modules with residual connections and channel attention to extract multi-scale features, while a vision transformer (ViT)-based discriminator improves segmentation accuracy through adversarial learning. A composite loss function integrating weighted binary cross-entropy, Dice, and focal losses further addresses class imbalance and enhances performance. LeSegGAN is evaluated on four benchmark datasets, namely Waterloo skin cancer, MED-NODE, SD-260, and ISIC-2016. The proposed LeSegGAN consistently outperformed five state-of-the-art deep learning models (UNet, UNet++, SegNet, FCN, and DTP-Net), achieving accuracies of 0.9943, 0.9759, 0.9873, and 0.9724, with corresponding IoU scores of 0.9451, 0.9664, 0.8709, and 0.7717. These results highlight LeSegGAN's strong generalization ability and robustness, demonstrating its potential for integration into computer-aided diagnostic systems for automated skin cancer detection.
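The composite loss described in the abstract combines weighted binary cross-entropy, Dice, and focal terms. The paper's exact weighting scheme and hyperparameters are not given here, so the following is only a minimal sketch of such a combined loss over flattened per-pixel probabilities; the weights (`w_pos`, `gamma`, `alpha`, and the per-term lambdas) are illustrative assumptions, not the authors' values.

```python
import math

def composite_loss(probs, targets, w_pos=2.0, gamma=2.0, alpha=0.25,
                   lam_bce=1.0, lam_dice=1.0, lam_focal=1.0, eps=1e-7):
    """Sketch of a weighted BCE + Dice + focal composite loss.

    probs   : predicted foreground probabilities per pixel, in (0, 1)
    targets : ground-truth labels per pixel, 0 or 1
    All hyperparameters are assumed defaults for illustration only.
    """
    n = len(probs)
    bce = focal = inter = p_sum = t_sum = 0.0
    for p, t in zip(probs, targets):
        p = min(max(p, eps), 1.0 - eps)  # clamp for numerical stability
        # Weighted BCE: up-weight positive (lesion) pixels to counter imbalance.
        bce += -(w_pos * t * math.log(p) + (1 - t) * math.log(1 - p))
        # Focal term: down-weight easy, well-classified pixels.
        pt = p if t == 1 else 1 - p
        a = alpha if t == 1 else 1 - alpha
        focal += -a * (1 - pt) ** gamma * math.log(pt)
        # Soft-Dice accumulators (overlap and region sums).
        inter += p * t
        p_sum += p
        t_sum += t
    dice = 1.0 - (2.0 * inter + eps) / (p_sum + t_sum + eps)
    return lam_bce * bce / n + lam_dice * dice + lam_focal * focal / n
```

A confident, mostly correct prediction should score lower than a confidently wrong one, which is how such a loss drives segmentation training.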

First Page

177019

Last Page

177035

DOI

10.1109/ACCESS.2025.3621107

Publication Date

1-1-2025

