Article Abstract

Objective: To compare interview and match outcomes of medical students with pass/fail USMLE Step 1 score reporting vs those with numeric scores applying during the same period.

Design: Retrospective analysis of a cross-sectional survey-based study.

Setting: United States 2023 residency match.

Participants: Medical student applicants in the 2023 residency match cycle who responded to the Texas Seeking Transparency in Application to Residency (STAR) survey.

Results: Among 6756 applicants for the 2023 match, 496 (7.3%) took USMLE Step 1 with pass/fail reporting. Pass/fail reporting was associated with lower USMLE Step 2 CK scores (245.9 vs 250.7), fewer honored clerkships (2.4 vs 3.1), and lower Alpha Omega Alpha membership (12.5% vs 25.2%) (all p < 0.001). Applicants with numeric USMLE Step 1 scores received more interview offers after adjusting for academic performance (beta coefficient 1.04 [95% CI 0.28-1.79]; p = 0.007). Numeric USMLE Step 1 scoring was associated with more interview offers in nonsurgical specialties (beta coefficient 1.64 [95% CI 0.74-2.53]; p < 0.001), but not in general surgery (beta coefficient 3.01 [95% CI -0.82 to 6.84]; p = 0.123) or surgical subspecialties (beta coefficient 1.92 [95% CI -0.78 to 4.62]; p = 0.163). Numeric USMLE Step 1 scoring was not associated with match outcome.
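The adjusted estimates above can be sanity-checked by hand: under a normal (Wald) approximation, a reported 95% CI implies a standard error of (upper − lower)/(2 × 1.96), and the two-sided p-value follows from the z-statistic beta/SE. A minimal sketch, assuming Wald-type intervals (the abstract does not state how its CIs were constructed):

```python
import math

def p_from_ci(beta, lo, hi):
    """Recover the two-sided p-value implied by a Wald-type 95% CI.

    Assumes the interval was built as beta +/- 1.96 * SE (normal
    approximation); an illustration, not the study's actual method.
    """
    se = (hi - lo) / (2 * 1.96)              # implied standard error
    z = beta / se                            # Wald z-statistic
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability

# Overall interview-offer estimate: beta 1.04, 95% CI 0.28 to 1.79
print(round(p_from_ci(1.04, 0.28, 1.79), 3))   # matches the reported p = 0.007

# General surgery: beta 3.01, 95% CI -0.82 to 6.84 (interval crosses zero)
print(round(p_from_ci(3.01, -0.82, 6.84), 3))  # matches the reported p = 0.123
```

Note how the wide general-surgery interval crossing zero corresponds directly to its non-significant p-value, despite the larger point estimate.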

Conclusions: Applicants with numeric USMLE Step 1 scoring had stronger academic profiles than those with pass/fail scoring; however, adjusted analyses found only weak associations with interview or match outcomes. Further research is warranted to assess longitudinal outcomes.

Source: http://dx.doi.org/10.1016/j.jsurg.2024.06.019

Publication Analysis

Top Keywords

usmle step (28)
numeric usmle (16)
beta coefficient (16)
pass/fail reporting (12)
interview match (12)
match outcomes (12)
step scoring (12)
step pass/fail (8)
medical students (8)
2023 residency (8)

Similar Publications

The objective was to compare the accuracy of two large language models, GPT-4o and o3-Mini, against medical student performance on otolaryngology-focused, USMLE-style multiple-choice questions. With permission from AMBOSS, we extracted 146 Step 2 CK questions tagged "Otolaryngology" and stratified them by AMBOSS difficulty (levels 1-5). Each item was presented verbatim to GPT-4o and o3-Mini through their official APIs; outputs were scored correct/incorrect.

Purpose: This study examined the impact of exam sequence and timing on the performance of osteopathic medical students on the COMLEX-USA Level 1 and Level 2 and USMLE Step 1 and Step 2 examinations.

Methods: Two cohorts were analyzed: 364 osteopathic medical students who completed both COMLEX-USA Level 1 and USMLE Step 1 between 2020 and 2022 (prior to the implementation of pass/fail grading), and 734 osteopathic medical students who completed both COMLEX-USA Level 2 and USMLE Step 2 between 2021 and 2025. Student performance was evaluated based on the sequence of examinations and intervals between them.

Introduction: Securing a residency position in the United States remains a significant challenge for International Medical Graduates (IMGs), particularly those from African countries. Although African IMGs contribute to approximately 25% of the U.S.

Entrance to neurological surgery residency is highly competitive due to the large number of applicants vying for a limited number of spots. The process has become even more competitive in recent years, with a significant increase in applicants but a consistent number of available residency positions. Program director (PD) surveys offer valuable insights into the selection process and expectations for neurosurgical residency, guiding prospective candidates to navigate the challenging training path.

The performance of ChatGPT on medical image-based assessments and implications for medical education.

BMC Med Educ

August 2025

Department of Neurosurgery, West China Hospital, Sichuan University, No. 37 Guo Xue Xiang Alley, Wu Hou District, Chengdu, Sichuan Province, 610037, China.

Background: Generative artificial intelligence (AI) tools like ChatGPT (OpenAI) have garnered significant attention for their potential in fields such as medical education; however, the performance of large language and vision models on medical test items involving images remains underexplored, limiting their broader educational utility. This study aims to evaluate the performance of GPT-4 and GPT-4 Omni (GPT-4o), accessed via the ChatGPT platform, on image-based United States Medical Licensing Examination (USMLE) sample items, to explore their implications for medical education.

Methods: We identified all image-based questions from the USMLE Step 1 and Step 2 Clinical Knowledge sample item sets.
