Abstract
Despite the widespread adoption of the Internet and the deepening of globalization, English machine translation still suffers from poor translation accuracy, long translation times, and low ROUGE (Recall-Oriented Understudy for Gisting Evaluation) and METEOR (Metric for Evaluation of Translation with Explicit ORdering) scores. To improve the effectiveness of English machine translation, this article introduces the Transformer model and explores its performance in depth. First, the Transformer architecture was adjusted by integrating absolute and relative positions into the position encoding. Next, the encoder and decoder were designed using a multi-head attention mechanism and a feedforward neural network (FNN); dependency weights were computed through the attention mechanism, and the model was pre-trained. Finally, the article evaluates the actual performance of the adjusted Transformer model on English machine translation. The results show that in the 5th test, the adjusted model achieved an average BLEU (Bilingual Evaluation Understudy) score of 0.6110, a translation time of 24.90 minutes, and average ROUGE and METEOR values of 0.6809 and 0.6098, respectively. These results indicate that applying the adjusted Transformer model to English machine translation is feasible: it improves translation accuracy, shortens translation time, and can be applied to machine translation tasks to raise translation quality and efficiency, which is of great significance for promoting cross-cultural communication.
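The abstract's key architectural change is combining absolute and relative positions in the position encoding. The sketch below illustrates one plausible way to do this, and is an assumption rather than the authors' exact method: absolute sinusoidal encodings are added to the token embeddings, while a relative-position bias is added to the attention logits. The relative bias table is zero-initialized here as a placeholder; in a trained model it would be a learned parameter.

```python
import numpy as np

def sinusoidal_pe(seq_len, d_model):
    """Absolute sinusoidal position encoding (Vaswani et al., 2017)."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    # Even dimensions use sine, odd dimensions use cosine.
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

def relative_bias(seq_len, max_dist=8):
    """Relative-position bias matrix with distances clipped to [-max_dist, max_dist].

    The lookup table is zero-initialized here (hypothetical placeholder);
    a real model would learn these values during training.
    """
    idx = np.arange(seq_len)
    rel = np.clip(idx[None, :] - idx[:, None], -max_dist, max_dist)
    table = np.zeros(2 * max_dist + 1)
    return table[rel + max_dist]  # shape: (seq_len, seq_len)

def attention_with_positions(x):
    """Single-head scaled dot-product attention combining both encodings.

    For illustration, queries and keys are both the position-augmented
    input (Q = K = V = h); a full Transformer head would project each.
    """
    seq_len, d = x.shape
    h = x + sinusoidal_pe(seq_len, d)       # absolute positions: added to inputs
    scores = h @ h.T / np.sqrt(d)           # scaled dot-product logits
    scores += relative_bias(seq_len)        # relative positions: added to logits
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)      # row-wise softmax
    return w @ h
```

This design lets the model keep the global ordering signal of absolute encodings while the relative bias makes attention sensitive to token distance independently of sentence position.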
