Aim: To compare the item difficulty and discriminative index of multiple-choice questions (MCQs) generated by ChatGPT with those created by dental educators, based on the performance of dental students in a real exam setting.
Materials And Methods: A total of 40 MCQs (20 generated by ChatGPT 4.0 and 20 by dental educators) were developed based on the Oral Diagnosis and Radiology course content. An independent, blinded panel of three educators assessed all MCQs for accuracy, relevance and clarity. Fifth-year dental students participated in an onsite and online exam featuring these questions. Item difficulty and discriminative indices were calculated using classical test theory and point-biserial correlation. Statistical analysis was conducted with the Shapiro-Wilk test, paired sample t-test and independent t-test, with significance set at p < 0.05.
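The paper does not include code; the following is a minimal sketch of how item difficulty and point-biserial discrimination might be computed under classical test theory, assuming dichotomously scored (0/1) item responses. The array shapes and the random data are hypothetical and purely illustrative.

```python
# Hypothetical sketch: item difficulty and point-biserial discrimination
# under classical test theory, assuming dichotomous (0/1) item scores.
import numpy as np
from scipy.stats import pointbiserialr

# responses: rows = students, columns = items (1 = correct, 0 = incorrect).
# Random data for illustration only; the study used real exam responses.
rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(80, 40))  # 80 students, 40 MCQs

total_scores = responses.sum(axis=1)

for item in range(responses.shape[1]):
    item_scores = responses[:, item]
    # Difficulty index: proportion of students answering the item correctly.
    difficulty = item_scores.mean()
    # Discrimination: point-biserial correlation between the item score and
    # the total score excluding that item (to avoid part-whole inflation).
    rest_scores = total_scores - item_scores
    discrimination, _ = pointbiserialr(item_scores, rest_scores)
    print(f"Item {item + 1:2d}: difficulty={difficulty:.2f}, "
          f"discrimination={discrimination:.2f}")
```

In this convention, a difficulty index near 0.4-0.6 indicates a moderately difficult item, and a point-biserial discrimination of roughly 0.3 or higher is commonly treated as acceptable; whether the study used the rest-score or total-score variant is not stated in the abstract.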
Results: Educators created 20 valid MCQs in 2.5 h, with minor revisions needed for three questions. ChatGPT generated 36 MCQs in 30 min; 20 were accepted, while 16 (44%) were excluded due to poor distractors, repetition, bias, or factual errors. Eighty fifth-year dental students completed the exam. The mean difficulty index was 0.41 ± 0.19 for educator-generated questions and 0.42 ± 0.15 for ChatGPT-generated questions, with no statistically significant difference (p = 0.773). Similarly, the mean discriminative index was 0.30 ± 0.16 for educator-generated questions and 0.32 ± 0.16 for ChatGPT-generated questions, also showing no significant difference (p = 0.578). Notably, 60% (n = 12) of ChatGPT-generated and 50% (n = 10) of educator-generated questions met the criteria for 'good quality', demonstrating balanced difficulty and strong discriminative performance.
Conclusion: ChatGPT-generated MCQs performed comparably to educator-created questions in terms of difficulty and discriminative power, highlighting their potential to support assessment design. However, a substantial portion of the initial ChatGPT-generated MCQs was excluded by the independent panel due to issues with clarity, accuracy, or distractor quality. To avoid overreliance, particularly among faculty who may lack experience in question development or awareness of AI limitations, expert review is essential before use. Future studies should investigate AI's ability to generate complex question formats and its long-term impact on learning.
DOI: http://dx.doi.org/10.1111/eje.70034