Comparative Study of Hybrid Deep Learning Models for Kannada Sign Language Recognition

Document Type

Article

Publication Title

International Journal of Computational Intelligence Systems

Abstract

Sign language recognition (SLR) systems continue to face significant challenges in accurately interpreting dynamic gestures, particularly for underrepresented languages such as Kannada Sign Language (KSL). This study presents a novel hybrid deep learning architecture that combines convolutional neural networks (CNNs), hand keypoints (HKPs), long short-term memory (LSTM) networks, and Transformers to achieve robust spatial-temporal-contextual learning for KSL recognition. Developed on a newly curated dataset of 1080 medical-domain KSL gestures, the model addresses critical gaps in dataset diversity and model generalizability. The proposed framework achieves 97.6% training accuracy, 96.75% validation accuracy, and 81% testing accuracy on unseen data, outperforming conventional CNN-LSTM (46%) and HKP-LSTM (71%) baselines. By hierarchically integrating CNN-extracted spatial features, HKP-derived structural priors, LSTM-processed temporal dynamics, and Transformer-modeled long-range dependencies, this work establishes a new benchmark for KSL recognition while providing a scalable solution for real-world healthcare and assistive-technology applications.
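The hierarchical fusion described in the abstract, in which per-frame spatial features and hand keypoints feed an LSTM whose outputs are then contextualized by a Transformer-style attention layer, can be sketched in minimal numpy. This is an illustrative sketch only: the layer sizes, fusion by concatenation, single-head attention, mean pooling, and random weights are all assumptions, not the authors' published implementation.

```python
# Minimal sketch (assumed design, not the paper's code): CNN features + hand
# keypoints are concatenated per frame, run through an LSTM over time, then
# through one self-attention layer, pooled, and classified.
import numpy as np

rng = np.random.default_rng(0)
T, D_CNN, D_HKP, H, N_CLASSES = 16, 64, 42, 32, 10  # assumed dimensions

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm(seq, h_dim):
    """Minimal LSTM over a (T, D) sequence; returns (T, h_dim) hidden states."""
    d = seq.shape[1]
    W = rng.standard_normal((4, h_dim, d + h_dim)) * 0.1
    b = np.zeros((4, h_dim))
    h = np.zeros(h_dim); c = np.zeros(h_dim); outs = []
    for x in seq:
        z = np.concatenate([x, h])
        i = sigmoid(W[0] @ z + b[0])   # input gate
        f = sigmoid(W[1] @ z + b[1])   # forget gate
        o = sigmoid(W[2] @ z + b[2])   # output gate
        g = np.tanh(W[3] @ z + b[3])   # candidate cell state
        c = f * c + i * g
        h = o * np.tanh(c)
        outs.append(h)
    return np.stack(outs)

def self_attention(x):
    """Single-head scaled dot-product self-attention over a (T, H) sequence."""
    d = x.shape[1]
    Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)
    attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v

# Stand-ins for one 16-frame gesture clip: CNN spatial features per frame and
# hand-keypoint coordinates (e.g. 21 keypoints x (x, y) = 42 values).
cnn_feats = rng.standard_normal((T, D_CNN))
hkp_feats = rng.standard_normal((T, D_HKP))

fused = np.concatenate([cnn_feats, hkp_feats], axis=1)  # (T, 106) per-frame fusion
temporal = lstm(fused, H)                               # (T, 32) temporal dynamics
contextual = self_attention(temporal)                   # (T, 32) long-range context

W_out = rng.standard_normal((H, N_CLASSES)) * 0.1
logits = contextual.mean(axis=0) @ W_out                # pool over time, classify
probs = np.exp(logits - logits.max()); probs /= probs.sum()
print(probs.shape)
```

With untrained random weights the class probabilities are meaningless; the sketch only shows how the four stages compose and how tensor shapes flow through the pipeline.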

DOI

10.1007/s44196-025-00922-4

Publication Date

12-1-2025

