"Data augmentation for Gram-stain images based on Vector Quantized Vari" by Shwetha V, Keerthana Prasad et al.
 

Data augmentation for Gram-stain images based on Vector Quantized Variational AutoEncoder

Document Type

Article

Publication Title

Neurocomputing

Abstract

The availability of large-scale datasets plays a significant role in deep-learning-based segmentation and classification tasks. However, domains such as healthcare inherently suffer from the unavailability and inaccessibility of data, which makes it challenging to deploy CNN-based models for computer-aided diagnosis. This challenge extends to the crucial task of Gram-stain image analysis for detecting bacterial infections, where the lack of datasets containing Gram-stained direct and culture smear images is particularly acute. In this regard, we investigate a novel application of the Variational AutoEncoder: a Vector Quantized Variational AutoEncoder (VQ-VAE) model is trained to generate Gram-stain images. The proposed approach incorporates a novel loss function in which a quality loss (L_qu), obtained by combining the SSIM loss (L_SSIM) with the L1 and L2 losses, is added to the standard VQ-VAE loss (L_vq) for Gram-stained direct and culture smear images. This modification produces images that closely resemble the original input, yielding notable SSIM scores of 0.92 for Gram-stained culture images and 0.88 for Gram-stained direct smear images. The study compares the proposed method with state-of-the-art machine-learning-based and CNN-based transformations, and demonstrates classification with and without image augmentation, showing that the area under the curve with augmentation is higher by an average of 20%.
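To make the composite objective concrete, below is a minimal PyTorch sketch of such a quality loss combined with the VQ-VAE loss. The weighting coefficients (alpha, beta, gamma) and the ssim helper from the pytorch-msssim package are illustrative assumptions; the abstract does not specify the exact weighting or implementation.

```python
# Minimal sketch of the composite loss described in the abstract, assuming a
# PyTorch VQ-VAE whose forward pass returns the reconstruction and the usual
# VQ loss (codebook + commitment terms). The weights alpha/beta/gamma and the
# use of pytorch-msssim are assumptions for illustration, not the paper's setup.
import torch
import torch.nn.functional as F
from pytorch_msssim import ssim  # pip install pytorch-msssim


def quality_loss(x, x_recon, vq_loss, alpha=1.0, beta=1.0, gamma=1.0):
    """Combine SSIM, L1, and L2 reconstruction losses with the VQ-VAE loss.

    x, x_recon: image batches in [0, 1], shape (N, C, H, W).
    vq_loss:    codebook/commitment loss returned by the VQ-VAE.
    """
    loss_ssim = 1.0 - ssim(x_recon, x, data_range=1.0)  # SSIM in loss form
    loss_l1 = F.l1_loss(x_recon, x)                     # mean absolute error
    loss_l2 = F.mse_loss(x_recon, x)                    # mean squared error
    l_qu = alpha * loss_ssim + beta * loss_l1 + gamma * loss_l2
    return l_qu + vq_loss  # total training objective: L_qu + L_vq
```

In training, this scalar would simply replace the plain reconstruction loss in the VQ-VAE optimization step; the SSIM term encourages structural fidelity while the L1/L2 terms penalize pixel-wise error.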

DOI

10.1016/j.neucom.2024.128123

Publication Date

10-1-2024

