Transformers and Attention: Decoding and Understanding of Aspect-Based Opinions in User-Generated Contents

Document Type

Article

Publication Title

IEEE Access

Abstract

Aspect-based opinion mining has become a significant information extraction technique in natural language processing, driven by the growing volume of online user-generated content. The approach aims to determine the opinion polarity of specific aspects within a given context. Existing models primarily target explicit aspects and often neglect the polarity of implicitly mentioned aspects. Consequently, they achieve low classification accuracy and struggle to identify multiple aspects within a single context. This paper proposes an aspect-based attention model (AAM) to address these limitations. We integrate a pre-trained BERT model with an attention mechanism to perform aspect detection. The AAM model is trained and evaluated on the benchmark SemEval-2014 Task 4 dataset. Experimental results demonstrate that the proposed AAM model outperforms existing methods. Additionally, the robustness and generalizability of the proposed model are assessed on raw textual datasets.
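To make the architecture described in the abstract concrete, the following is a minimal illustrative sketch of a BERT encoder combined with an attention-based pooling layer for aspect polarity classification. It is not the authors' implementation; the class name, the sentence-pair input scheme, and the three-way polarity output are assumptions made for illustration only.

```python
# Illustrative sketch only: a pre-trained BERT encoder with an attention layer
# that pools token representations before polarity classification.
# Names and the sentence/aspect pairing scheme are hypothetical, not from the paper.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class AspectAttentionClassifier(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_polarities=3):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        # One attention score per token representation.
        self.attn = nn.Linear(hidden, 1)
        self.classifier = nn.Linear(hidden, num_polarities)

    def forward(self, input_ids, attention_mask, token_type_ids=None):
        # Token-level representations from the pre-trained encoder.
        token_states = self.bert(input_ids=input_ids,
                                 attention_mask=attention_mask,
                                 token_type_ids=token_type_ids).last_hidden_state
        # Mask padding tokens so they receive zero attention weight.
        scores = self.attn(token_states).squeeze(-1)
        scores = scores.masked_fill(attention_mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1)
        # Attention-weighted pooling over tokens, then polarity prediction.
        pooled = torch.bmm(weights.unsqueeze(1), token_states).squeeze(1)
        return self.classifier(pooled)

# Usage: encode the review sentence and the aspect term as a sentence pair,
# a common setup for BERT-based aspect sentiment models (an assumption here).
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer("The battery life is great but the screen is dim.",
                  "battery life", return_tensors="pt")
model = AspectAttentionClassifier()
logits = model(batch["input_ids"], batch["attention_mask"],
               batch.get("token_type_ids"))  # shape: [1, 3]
```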

First Page

169606

Last Page

169613

DOI

10.1109/ACCESS.2024.3498440

Publication Date

1-1-2024
