Abstract
The rapid evolution of Neural Machine Translation (NMT) has fundamentally altered the landscape of cross-linguistic communication. However, the gap between "fluent" machine output and "accurate" human translation remains a critical area of linguistic inquiry. This paper investigates the semantic precision of NMT systems relative to professional human translation. Through a qualitative and quantitative analysis of polysemy, idiomaticity, and contextual cohesion, the study identifies the limitations of transformer-based models. The findings suggest that while NMT excels in speed and surface-level lexical fluency, it consistently fails in "deep semantics"—the ability to decode intent and cultural subtext.