Introduction: The medical care of patients with rare diseases is a cross-border concern across the EU. This is also reflected in the usage statistics of the SE-ATLAS, where most access occurs via browser languages set to German, English, French, or Polish. The SE-ATLAS website provides information on healthcare services and patient organisations for rare diseases in Germany. As SE-ATLAS currently offers its content almost exclusively in German, non-German-speaking users may encounter language barriers. Against this background, this paper explores whether common machine translation systems can translate medical texts into other languages at a reasonable level of quality.
Methods: For this purpose, the translation systems DeepL, ChatGPT, and Google Translate were analysed. Translation quality was assessed using the standardised metrics BLEU, METEOR, and COMET. In contrast to subjective human assessments, these automated metrics allow for objective and reproducible evaluation. The analysis focused on machine-generated translations of German-language texts from the OPUS corpus into English, French, and Polish, each compared against existing reference translations.
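To make the scoring concrete, the following is a minimal, self-contained sketch of sentence-level BLEU in the style of Papineni et al. (2002): modified n-gram precision up to 4-grams combined with a brevity penalty. This is an illustration only, not the evaluation pipeline used in the paper; production evaluations typically rely on established implementations such as sacreBLEU, and the add-one smoothing here is a simplifying assumption to avoid zero scores on short sentences.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Return the list of n-grams (as tuples) of a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU with add-one smoothing and a brevity penalty.

    candidate, reference: whitespace-tokenised strings.
    """
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # clipped (modified) n-gram matches: each candidate n-gram counts
        # at most as often as it appears in the reference
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        # add-one smoothing so one empty n-gram order does not zero the score
        precisions.append((overlap + 1) / (total + 1))
    # brevity penalty punishes candidates shorter than the reference
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    # geometric mean of the n-gram precisions, scaled by the penalty
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

With this smoothing, an exact match scores 1.0 and an unrelated short candidate scores close to 0, which mirrors why BLEU tends to be stricter than METEOR or COMET: it rewards only exact surface n-gram overlap with the reference.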
Results: BLEU scores were generally lower than those of the other metrics, whereas METEOR and COMET indicated moderate to high translation quality. Translations into English were consistently rated higher than those into French and Polish.
Conclusion: As the three analysed translation systems showed hardly any statistically significant differences in translation quality and all delivered acceptable results, further criteria should be taken into account when choosing an appropriate system. These include factors such as data protection, cost-efficiency, and ease of integration.
DOI: http://dx.doi.org/10.3233/SHTI251380