Quality of Machine Translations in Medical Texts: An Analysis Based on Standardised Evaluation Metrics.

Stud Health Technol Inform

Goethe University Frankfurt, University Medicine, Institute of Medical Informatics (IMI), Frankfurt am Main, Germany.

Published: September 2025


Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Introduction: The medical care of patients with rare diseases is a cross-border concern across the EU. This is also reflected in the usage statistics of the SE-ATLAS, where most access occurs via browser languages set to German, English, French, or Polish. The SE-ATLAS website provides information on healthcare services and patient organisations for rare diseases in Germany. As SE-ATLAS currently offers its content almost exclusively in German, non-German-speaking users may encounter language barriers. Against this background, this paper explores whether common machine translation systems can translate medical texts into other languages at a reasonable level of quality.

Methods: For this purpose, the translation systems DeepL, ChatGPT, and Google Translate were analysed. Translation quality was assessed using the standardised metrics BLEU, METEOR, and COMET. In contrast to subjective human assessments, these automated metrics allow for objective and reproducible evaluation. The analysis focused on machine-generated translations of German-language texts from the OPUS corpus into English, French, and Polish, each compared against existing reference translations.
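The metrics named above are all automated, reference-based scores. As a minimal sketch of how such a metric works, the following pure-Python function computes a simplified single-reference BLEU score (geometric mean of modified n-gram precisions, scaled by a brevity penalty, with add-one smoothing). This is an illustration only; the study would have used standard tooling rather than this hand-rolled version, and the smoothing choice here is an assumption.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count all n-grams of length n in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Simplified sentence-level BLEU against a single reference.

    Geometric mean of modified n-gram precisions (n = 1..max_n),
    multiplied by a brevity penalty for candidates shorter than
    the reference. Add-one smoothing avoids log(0) on short inputs.
    """
    cand = candidate.split()
    ref = reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = ngrams(cand, n)
        ref_ngrams = ngrams(ref, n)
        # clipped overlap: each candidate n-gram counts at most as
        # often as it appears in the reference
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        log_precisions.append(math.log((overlap + 1) / (total + 1)))
    geo_mean = math.exp(sum(log_precisions) / max_n)
    # brevity penalty: penalise candidates shorter than the reference
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * geo_mean

# identical sentences score 1.0; partial matches score lower
print(round(bleu("the cat sits on the mat", "the cat sits on the mat"), 3))  # → 1.0
```

METEOR additionally matches stems and synonyms, and COMET is a learned, embedding-based metric, so neither reduces to a short formula like this; the contrast helps explain why BLEU scores in the Results ran lower than the other two metrics.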

Results: BLEU scores were generally lower than those of the other metrics, whereas METEOR and COMET indicated moderate to high translation quality. Translations into English were consistently rated higher than those into French and Polish.

Conclusion: As the three analysed translation systems showed hardly any statistically significant differences in translation quality and all delivered acceptable results, further criteria should be taken into account when choosing an appropriate system. These include factors such as data protection, cost-efficiency, and ease of integration.

Source
DOI: http://dx.doi.org/10.3233/SHTI251380 (DOI Listing)

Publication Analysis

Top Keywords

translation systems (12)
translation quality (12)
medical texts (8)
rare diseases (8)
english french (8)
french polish (8)
analysed translation (8)
meteor comet (8)
translation (6)
quality (4)