Abstract
Sentiment classification and prediction are critical research areas in natural language processing (NLP), primarily focused on extracting sentiment tendencies and opinions from text. Previous studies have predominantly relied on sentiment dictionaries, machine learning, or deep learning methods, and these approaches face two main challenges. First, sentiment dictionaries and machine learning methods heavily depend on manually constructed sentiment lexicons and features, resulting in relatively low accuracy. Second, deep learning models often suffer from a lack of interpretability and are frequently regarded as "black box" systems, making it difficult to understand their decision-making processes. To address these issues, we propose an optimization method for text sentiment classification that combines a large language model (LLM) with a deep learning BERT model. Specifically, building on the concept of ensemble learning, we perform dual sentiment determination by fine-tuning the BERT model in conjunction with the prompt-based GPT-3.5-turbo LLM. This approach utilizes the LLM to provide a robust basis for sentiment judgment. Validation is conducted on two classical datasets: one from the entertainment domain and another from the marketing domain. The results demonstrate that the proposed method achieves an average accuracy exceeding 90%, outperforming the benchmark model by at least 2 percentage points. This improvement highlights the effectiveness of the proposed method in enhancing multi-domain sentiment classification performance.
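The dual-determination ensemble described above can be sketched as follows. This is a minimal, hypothetical illustration: the function names, the confidence threshold, and the agreement-based voting rule are assumptions, since the abstract does not specify the exact combination scheme. The stand-in predictors would in practice be a fine-tuned BERT classifier (e.g. via Hugging Face transformers) and a prompted call to GPT-3.5-turbo.

```python
def bert_predict(text):
    # Stand-in for a fine-tuned BERT sentiment classifier.
    # A real implementation would run the text through the model
    # and return the predicted label with its softmax confidence.
    return ("positive", 0.93)

def llm_predict(text):
    # Stand-in for a prompt-based GPT-3.5-turbo sentiment judgment.
    # A real implementation would send the text in a prompt and
    # parse the returned label from the model's response.
    return "positive"

def ensemble_sentiment(text, threshold=0.8):
    """Dual sentiment determination (illustrative): accept the label when
    both models agree; on disagreement, keep BERT's label only if its
    confidence exceeds the threshold, otherwise defer to the LLM."""
    bert_label, bert_conf = bert_predict(text)
    llm_label = llm_predict(text)
    if bert_label == llm_label:
        return bert_label
    return bert_label if bert_conf >= threshold else llm_label
```

A design note on this sketch: letting the LLM break ties when the fine-tuned model is uncertain is one simple way to use the LLM as the "robust basis for sentiment judgment" the abstract mentions; other combination rules (weighted voting, LLM-generated rationales fed back to BERT) are equally plausible readings.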