A review on NLP zero-shot and few-shot learning: methods and applications
Document Type
Article
Publication Title
Discover Applied Sciences
Abstract
This comprehensive review surveys zero-shot and few-shot learning techniques in natural language processing (NLP), tracing their evolution from traditional methods to cutting-edge approaches: transfer learning with pre-trained language models, semantic embeddings, attribute-based approaches, and generative models for data augmentation in zero-shot learning; and meta-learning, model-agnostic meta-learning (MAML), relation networks, and prototypical networks in few-shot learning. Real-world applications underscore the adaptability and efficacy of these techniques across various NLP tasks in both industry and academia. Acknowledging the challenges inherent in zero-shot and few-shot learning, this review identifies limitations and suggests avenues for improvement. It emphasizes theoretical foundations alongside practical considerations such as accuracy and generalization across diverse NLP tasks. By consolidating key insights, this review provides researchers and practitioners with valuable guidance on the current state and future potential of zero-shot and few-shot learning techniques in addressing real-world NLP challenges. Looking ahead, this review aims to stimulate further research, fostering a deeper understanding of the complexities and applicability of zero-shot and few-shot learning techniques in NLP. By offering a roadmap for future exploration, it seeks to contribute to the ongoing advancement and practical implementation of NLP technologies across various domains.
DOI
10.1007/s42452-025-07225-5
Publication Date
9-1-2025
Recommended Citation
Ramesh, G.; Sahil, Mahammad; Palan, Shashank A.; and Bhandary, Darshan, "A review on NLP zero-shot and few-shot learning: methods and applications" (2025). Open Access archive. 12735.
https://impressions.manipal.edu/open-access-archive/12735