Enhanced JAYA optimization based medical image fusion in adaptive non subsampled shearlet transform domain

Document Type

Article

Publication Title

Engineering Science and Technology, an International Journal

Abstract

Multi-modal image fusion has gained popularity in the medical field because it allows doctors to view diverse medical imaging modalities in a single image. The fused image aids diagnosis and enables more effective treatment planning. Medical image fusion aims to merge the texture features from multiple images into a single image. The proposed method applies an Adaptive window-based Non-Subsampled Shearlet Transform (ANSST) to the source images to separate the low-frequency and high-frequency directional sub-bands. An enhanced JAYA (EJAYA) optimization framework is then used to obtain adaptive weights for combining the high-frequency sub-bands in multi-modal medical image fusion. The low-frequency sub-bands are fused using a max rule based on their average energy. The overall process preserves the energy of the low-frequency bands while improving the texture details in the combined image. Finally, the inverse ANSST is applied to the merged low-frequency and high-frequency components to obtain the fused image. Extensive experiments are conducted on datasets obtained from the Brain Atlas website comprising more than 100 images. The significance of the proposed approach is validated through qualitative and quantitative assessments, and it exhibits good performance in subjective analysis compared with recent well-known image fusion techniques.
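To make the fusion rules summarized in the abstract more concrete, the sketch below illustrates the two rules on precomputed sub-bands: an energy-based max rule for the low-frequency bands and a weighted combination of the high-frequency bands, with the weight found by a plain JAYA search. This is only a minimal sketch under stated assumptions: the ANSST decomposition is not a standard library routine and is not reproduced, the enhanced (EJAYA) modifications are not detailed in the abstract and so only the standard JAYA update is shown, and the spatial-frequency fitness and all helper names are assumptions rather than the authors' implementation.

```python
# Hypothetical sketch of the fusion rules described in the abstract.
# Low- and high-frequency sub-bands are assumed to be given as NumPy arrays;
# the ANSST decomposition and its inverse are not shown here.
import numpy as np

def fuse_low_frequency(lf_a, lf_b):
    """Max rule based on average energy: keep the low-frequency band
    with the larger average energy (an assumed reading of the abstract)."""
    energy_a = np.mean(lf_a ** 2)
    energy_b = np.mean(lf_b ** 2)
    return lf_a if energy_a >= energy_b else lf_b

def fuse_high_frequency(hf_a, hf_b, w):
    """Weighted combination of corresponding high-frequency sub-bands;
    the weight w is assumed to come from the optimization step."""
    return w * hf_a + (1.0 - w) * hf_b

def spatial_frequency(img):
    """Example fitness (assumed, not from the paper): spatial frequency,
    a common sharpness measure for fused images."""
    rf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

def jaya_weight_search(hf_a, hf_b, fitness, pop_size=20, iters=50, rng=None):
    """Standard JAYA search for a scalar fusion weight in [0, 1];
    the paper's enhanced (EJAYA) variant is not reproduced here."""
    rng = np.random.default_rng() if rng is None else rng
    pop = rng.uniform(0.0, 1.0, size=pop_size)
    scores = np.array([fitness(fuse_high_frequency(hf_a, hf_b, w)) for w in pop])
    for _ in range(iters):
        best, worst = pop[scores.argmax()], pop[scores.argmin()]
        r1, r2 = rng.random(pop_size), rng.random(pop_size)
        # JAYA update: move toward the best solution and away from the worst.
        cand = np.clip(pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop)), 0.0, 1.0)
        cand_scores = np.array([fitness(fuse_high_frequency(hf_a, hf_b, w)) for w in cand])
        improved = cand_scores > scores
        pop[improved], scores[improved] = cand[improved], cand_scores[improved]
    return pop[scores.argmax()]
```

In the paper's pipeline this weight search would be applied to the high-frequency directional sub-bands of each source image, with the fused result obtained by applying the inverse ANSST to the combined low- and high-frequency components.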

DOI

10.1016/j.jestch.2022.101245

Publication Date

11-1-2022
