| Name | Description | Size | Format |
|---|---|---|---|
| | | 1.18 MB | Adobe PDF |
Authors
Advisor(s)
Abstract(s)
This study investigates which types of errors the machine translation engines Google Translate and DeepL most frequently produce in the context of informal dialogues. To that end, the translations of 186 dialogue exercises from ELSA Speak – an app that helps its users improve their English pronunciation – were analyzed. All errors found were catalogued under the Accuracy and Fluency branches of the MQM (Multidimensional Quality Metrics) error typology and classified by severity: minor, major, or critical.
In the Accuracy category – the adequacy of the target text with respect to the source text – most of the errors identified were caused by overly literal translations of the source texts, in both the Google Translate and the DeepL output. Some omissions and additions of information were also recorded, as well as text that was not properly translated or that should not have been translated at all. In the Fluency category – the fluency of the target text – the most prominent errors concerned textual coherence, alongside some errors of grammatical agreement, incorrect verb tenses and moods, diacritics, and punctuation. After analyzing the errors found, the study concludes that DeepL produces better results than Google Translate.
The overall structure of conversation will also be briefly presented, together with the phases that compose it. At the level of conversational microstructure, turn-taking and its importance in a dialogue will be analyzed.
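The annotation scheme described above – errors filed under Accuracy or Fluency subtypes and weighted by severity – can be sketched as a small tally. This is a minimal illustration, not the thesis's actual tooling: the record fields, subtype labels, and severity weights (1/5/10) are assumptions chosen for the example.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class MTError:
    engine: str      # e.g. "Google Translate" or "DeepL"
    category: str    # "Accuracy" or "Fluency"
    subtype: str     # e.g. "literal translation", "omission", "coherence"
    severity: str    # "minor", "major", or "critical"

# Assumed severity weights; MQM-style scoring commonly penalizes
# critical errors far more heavily than minor ones.
WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def penalty_per_engine(errors):
    """Sum severity-weighted penalties per engine; lower is better."""
    totals = Counter()
    for e in errors:
        totals[e.engine] += WEIGHTS[e.severity]
    return dict(totals)

# Hypothetical sample annotations, echoing the error types named in the abstract.
sample = [
    MTError("Google Translate", "Accuracy", "literal translation", "major"),
    MTError("Google Translate", "Fluency", "coherence", "minor"),
    MTError("DeepL", "Accuracy", "omission", "minor"),
]
print(penalty_per_engine(sample))  # {'Google Translate': 6, 'DeepL': 1}
```

Comparing the weighted totals per engine is one simple way to turn the per-error annotations into the kind of overall ranking the abstract reports.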
Description
Keywords
Machine translation - Quality control - Error - Translation - Master's theses - 2021
