Abstract
In this study, we present a new multimodal subtitle translation system that integrates a Generalized Regression Neural Network (GRNN)-based syntax error correction mechanism. To produce accurate and fluent subtitles, the system aggregates text, video, and audio inputs. The GRNN-based module detects and corrects syntactic errors in the translated subtitles, improving their overall quality. Experimental results show notable gains in both translation accuracy and fluency. The proposed method achieves a translation accuracy of 92.5%, outperforming the baseline by 10.2%. The GRNN-based correction module attains a syntax error correction accuracy of 95.1% and reduces the syntax error rate by 70.5%. The system also achieves a fluency score of 4.8/5.0, compared with 4.2/5.0 for the baseline, indicating more fluent translated subtitles. A BLEU score of 0.85 reflects a high degree of similarity between the translated and reference subtitles. On all measures, including BLEU score, translation accuracy, syntax error correction accuracy, and fluency score, the DE-GRNN-based technique outperforms the GRNN-based method: an 8.2% increase in BLEU score, indicating improved subtitle quality; a 2.9% increase in translation accuracy; a 3.6% increase in syntax error correction accuracy; and a 2.1% increase in fluency score, reflecting more natural and readable output. These findings demonstrate that the proposed method generates accurate, fluent, and syntactically error-free subtitles.
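For readers unfamiliar with the underlying learner, the sketch below shows the standard GRNN regression rule (Specht, 1991), a Gaussian-kernel-weighted average of stored training targets, which is the core of the correction module named in the abstract. It is a minimal illustration only: the feature vectors and "grammaticality" targets here are hypothetical, and this is not the paper's implementation or data.

```python
import numpy as np

class GRNN:
    """Generalized Regression Neural Network (Specht, 1991).

    Prediction is a kernel-weighted average of training targets;
    the only free parameter is the smoothing bandwidth sigma.
    """

    def __init__(self, sigma=0.5):
        self.sigma = sigma

    def fit(self, X, y):
        # GRNN is a lazy learner: training just memorizes the data.
        self.X = np.asarray(X, dtype=float)
        self.y = np.asarray(y, dtype=float)
        return self

    def predict(self, X):
        X = np.atleast_2d(np.asarray(X, dtype=float))
        preds = []
        for x in X:
            # Pattern layer: squared distance to every stored pattern.
            d2 = np.sum((self.X - x) ** 2, axis=1)
            # Gaussian kernel weights.
            w = np.exp(-d2 / (2.0 * self.sigma ** 2))
            # Summation layer: normalized weighted average of targets.
            preds.append(np.dot(w, self.y) / (np.sum(w) + 1e-12))
        return np.array(preds)

# Toy usage: scoring candidate subtitle edits from feature vectors
# (hypothetical features and targets, for illustration only).
if __name__ == "__main__":
    X_train = [[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]]
    y_train = [1.0, 0.0, 0.5]           # hypothetical "grammaticality" scores
    model = GRNN(sigma=0.3).fit(X_train, y_train)
    # The query is closest to the first pattern, so its target dominates.
    print(model.predict([[0.2, 0.8]]))  # ~0.84
```

A DE-GRNN variant, as compared in the results, would typically tune the bandwidth sigma with differential evolution rather than fixing it by hand; that optimization loop is omitted here.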
