WeakSegNet: Combining Unsupervised, Few-Shot, and Weakly Supervised Methods for the Semantic Segmentation of Low-Magnification Effusion Cytology Images

Document Type

Article

Publication Title

IEEE Access

Abstract

Effusion cytology analysis can be time-consuming for cytopathologists, but the burden can be reduced through automatic malignancy detection. The main challenge in automating this process is pixel-wise labeling. We propose WeakSegNet, a new model that addresses the challenge of semantic segmentation in low-magnification images while using only four images with pixel-wise labels. WeakSegNet combines unsupervised, few-shot, and weakly supervised learning methods. In the first stage, an unsupervised model, DeepClusterSeg, learns homogeneous structures across images. The few-shot stage uses only four images with pixel-wise labels to map these homogeneous structures to the required classes. The final stage uses image-level labels to predict precise classes through weakly supervised learning. We conducted our experiments on a dataset from KMC Hospital, MAHE, consisting of 345 images, and evaluated the results with 5-fold cross-validation. Our proposed model achieved promising results, with an F-score of 0.85 and an IoU of 0.81 for the malignant class, surpassing the standard k-means algorithm with weakly supervised learning (an F-score of 0.65 and an IoU of 0.61). The semantic segmentation of low-magnification images using our approach eliminated 47% of the sub-regions that would otherwise need to be scanned at high magnification. This approach reduces the workload of cytopathologists while maintaining high accuracy in effusion cytology malignancy detection.
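
The abstract describes a three-stage pipeline (unsupervised clustering, few-shot cluster-to-class mapping, weakly supervised refinement). The sketch below is only an illustration of that flow, not the paper's implementation: the DeepClusterSeg model is not reproduced, k-means over per-pixel features stands in for the unsupervised stage, and all function names, shapes, and class ids are assumptions.

```python
# Hypothetical sketch of a WeakSegNet-style three-stage pipeline.
# k-means is a stand-in for the paper's DeepClusterSeg; all names are illustrative.
import numpy as np
from sklearn.cluster import KMeans


def unsupervised_clusters(features, n_clusters=8, seed=0):
    """Stage 1 (stand-in): group per-pixel features into homogeneous structures."""
    h, w, d = features.shape
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    labels = km.fit_predict(features.reshape(-1, d))
    return labels.reshape(h, w)


def map_clusters_with_few_shots(cluster_maps, pixel_masks, n_clusters, n_classes):
    """Stage 2: use a handful of pixel-labelled images (four in the paper) to map
    each cluster id to the semantic class it most often overlaps."""
    votes = np.zeros((n_clusters, n_classes), dtype=np.int64)
    for clusters, mask in zip(cluster_maps, pixel_masks):
        for c in range(n_clusters):
            sel = clusters == c
            if sel.any():
                votes[c] += np.bincount(mask[sel], minlength=n_classes)
    return votes.argmax(axis=1)  # cluster id -> class id


def refine_with_image_label(class_map, image_level_classes, background_class=0):
    """Stage 3 (stand-in for the weakly supervised stage): suppress classes that
    the image-level label says are absent from this slide."""
    refined = class_map.copy()
    allowed = list(set(image_level_classes) | {background_class})
    refined[~np.isin(refined, allowed)] = background_class
    return refined


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(64, 64, 16)).astype(np.float32)  # toy per-pixel features
    clusters = unsupervised_clusters(feats, n_clusters=4)
    toy_mask = (clusters % 2).astype(np.int64)                # toy pixel-wise labels
    cluster_to_class = map_clusters_with_few_shots([clusters], [toy_mask], 4, 2)
    class_map = cluster_to_class[clusters]
    print(refine_with_image_label(class_map, image_level_classes=[1]).shape)
```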

First Page

144467

Last Page

144478

DOI

10.1109/ACCESS.2025.3598953

Publication Date

1-1-2025
