Publications by authors named "Kyle B Boone"

Objective: To evaluate the positions, policies, and practices regarding test security among psychologists and neuropsychologists who engage in clinical and forensic assessment practice. Method: The Inter-Organizational Practice Committee (IOPC) undertook a survey of licensed practitioners who regularly conduct neuropsychological and psychological testing. An online survey captured respondent data between October and December 2023.


Objective: The purpose of this American Academy of Clinical Neuropsychology (AACN) paper is to provide the neuropsychological community with the fundamentals of a competent forensic review of records. Method: Narrative review addressing fundamental factors related to review of records. Examples highlighted information necessary for a forensic determination of traumatic brain injury (TBI) and the data from records that can be used to address questions regarding validity of presentation.


Mild traumatic brain injury (mTBI) is the most common claimed personal injury condition for which neuropsychologists are retained as forensic experts in litigation. Therefore, it is critical that experts have accurate information when testifying as to neurocognitive outcome from concussion. Systematic reviews and six meta-analyses from 1997 to 2011 regarding objective neurocognitive outcome from mTBI provide no evidence that concussed individuals fail to return to baseline by weeks to months post-injury.


Objective: To critically examine the assumption that protective orders are adequately protective of sensitive psychological/neuropsychological test information. Attorneys at times claim that to adequately cross-examine neuropsychological experts, they require direct access to protected test information, rather than having test data analyzed by retained neuropsychological experts. As a compromise, judges sometimes order that protected test information be released to attorneys under a protective order.


Some attorneys claim that to adequately cross-examine neuropsychological experts, they require direct access to protected test information, rather than having test data analyzed by retained neuropsychological experts. The objective of this paper is to critically examine whether direct access to protected test materials by attorneys is indeed necessary, appropriate, and useful to the trier-of-fact. Examples are provided of the types of nonscientific misinformation that occur when attorneys, who lack adequate training in testing, attempt to independently interpret neurocognitive/psychological test data.


Objective: The purpose of the present study was to compare performance on a wide range of PVTs in a neuropsychology clinic sample of African Americans and White Americans to determine if there are differences in mean scores or cut-off failure rates between the two groups, and to identify factors that may account for false positive PVT results in African American patients.

Method: African American and White American non-compensation-seeking neuropsychology clinic patients were compared on a wide range of standalone and embedded PVTs: Dot Counting Test, b Test, Warrington Recognition Memory Test, Rey 15-item plus recognition, Rey Word Recognition Test, Digit Span (ACSS, RDS, 3-digit time, 4-digit time), WAIS-III Picture Completion (Most discrepant index), WAIS-III Digit Symbol/Coding (recognition equation), Rey Auditory Verbal Learning Test, Rey Complex figure, WMS-III Logical Memory, Comalli Stroop Test, Trails A, and Wisconsin Card Sorting Test.

Results: When groups were equated for age and education, African Americans obtained mean performances significantly worse than White Americans on only four of 25 PVT scores across the 14 different measures (Stroop Word Reading and Color Naming, Trails A, Digit Span 3-digit time); however, FSIQ was also significantly higher in White American patients.


This review provides a summary of historical details and current practice activities related to Forensic Neuropsychology (FN). Under the auspices of the American Board of Clinical Neuropsychology (ABCN), the Forensic Neuropsychology Special Interest Group (FNSIG) views FN as a subspecialty that has developed over more than 20 years through numerous publications, extensive continuing education, focused research, and the growth of forensic practice within neuropsychology. In this article, the FNSIG core work group documents and integrates the information that underlies efforts to consolidate practice knowledge and facilitate attainment of forensic practice competencies by clinical neuropsychologists.

Article Synopsis
  • The text emphasizes the importance of test security for neuropsychological and psychological tests, highlighting the need for clear guidelines to ensure their integrity across various settings like clinical and forensic environments.
  • A group of neuropsychologists collaborated to create detailed recommendations aimed at maintaining test security, explaining the serious consequences of failing to do so for both the field and society.
  • The document provides specific procedures for safeguarding sensitive test information, urging clinical neuropsychologists to take actions to prevent unauthorized exposure to test data.

Objective: To cross-validate RAVLT performance validity cut-offs and the RAVLT/RO discriminant function in a large neuropsychological sample.

Method: RAVLT scores and the RAVLT/RO discriminant function were compared in credible (n = 100) and noncredible (n = 353) neuropsychology referrals.

Results: Noncredible patients scored lower than credible patients on RAVLT scores and the RAVLT/RO discriminant function.


Citation and download data pertaining to the 2009 AACN consensus statement on validity assessment indicated that the topic maintained high interest in subsequent years, during which key terminology evolved and relevant empirical research proliferated. With a general goal of providing current guidance to the clinical neuropsychology community regarding this important topic, the specific update goals were to: identify current key definitions of terms relevant to validity assessment; learn what experts believe should be reaffirmed from the original consensus paper, as well as new consensus points; and incorporate the latest recommendations regarding the use of validity testing, as well as current application of the term 'malingering.' In the spring of 2019, four of the original 2009 work group chairs and additional experts for each work group were impaneled.


Here we report an investigation of the accuracy of the b Test, a measure designed to identify malingering of cognitive symptoms, in detecting malingerers of mild cognitive impairment. Three groups of participants, patients with Mild Neurocognitive Disorder (n = 21), healthy elders (controls, n = 21), and healthy elders instructed to simulate mild cognitive disorder (malingerers, n = 21), were administered two background neuropsychological tests (MMSE, FAB) as well as the b Test. Malingerers performed significantly worse than patients and controls on all error scores, and performed more poorly than controls, but comparably to patients, on the time score.


Objective: Evaluate the effectiveness of Rey 15-item plus recognition data in a large neuropsychological sample.

Method: Rey 15-item plus recognition scores were compared in credible (n = 138) and noncredible (n = 353) neuropsychology referrals.

Results: Noncredible patients scored significantly worse than credible patients on all Rey 15-item plus recognition scores.


Objective: To cross-validate the Dot Counting Test in a large neuropsychological sample.

Method: Dot Counting Test scores were compared in credible (n = 142) and non-credible (n = 335) neuropsychology referrals.

Results: Non-credible patients scored significantly higher than credible patients on all Dot Counting Test scores.


We reply to Nichols' (2017) critique of our commentary on the MMPI-2/MMPI-2-RF Symptom Validity Scale (FBS/FBS-r) as a measure of symptom exaggeration versus a measure of litigation response syndrome (LRS). Nichols claims that we misrepresented the thrust of the original paper he co-authored with Gass; namely, that they did not represent the FBS/FBS-r as measures of LRS but rather intended to convey that the FBS/FBS-r were indeterminate as to whether the scales measured LRS or symptom exaggeration. Our original commentary offered statistical support from the published literature that (1) FBS/FBS-r scores were associated with performance validity test (PVT) failure, establishing the scales as measures of symptom exaggeration, and (2) persons in litigation who passed PVTs did not produce clinically significant elevations on the scales, contradicting the claim that FBS/FBS-r are measures of LRS.


Objectives: (1) To examine whether there is empirical evidence for Nichols and Gass's contention that the MMPI-2/MMPI-2-RF FBS/FBS-r Symptom Validity Scale is a measure of Litigation Response Syndrome (LRS), representing a credible set of responses and reactions of claimants to the experience of being in litigation, rather than a measure of non-credible symptom report, as the scale is typically used; and (2) to address their stated concerns about the validity of FBS/FBS-r meta-analytic results and the risk of false-positive elevations in persons with bona fide medical conditions.

Method: Review of published literature on the FBS/FBS-r, focusing in particular on associations between scores on this symptom validity test and scores on performance validity tests (PVTs), and FBS/FBS-r score elevations in patients with genuine neurologic, psychiatric and medical problems.

Results: (1) several investigations show significant associations between FBS/FBS-r scores and PVTs measuring non-credible performance; (2) litigants who pass PVTs do not produce significant elevations on FBS/FBS-r; (3) non-litigating medical patients (bariatric surgery candidates, persons with sleep disorders, and patients with severe traumatic brain injury) who have multiple physical, emotional and cognitive symptoms do not produce significant elevations on FBS/FBS-r.


Objective: The current study evaluated the sensitivity of the Modified Somatic Perception Questionnaire (MSPQ) to noncredible PVT performance in the context of external incentive; examined MSPQ false-positive rates in non-compensation-seeking neuropsychology patients; and investigated the effects of ethnicity/culture, gender, and somatoform diagnosis on MSPQ scores, as well as relationships with PVT and MMPI-2-RF data.

Method: MSPQ scores were compared in credible (n = 110) and noncredible (n = 153) neuropsychology referrals.

Results: Noncredible patients scored higher than credible patients.


The current study provides specificity data on a large sample (n = 115) of young to middle-aged, male, monolingual Spanish speakers of lower educational level and low acculturation to mainstream US culture for four neurocognitive performance validity tests (PVTs): the Dot Counting, the b Test, Rey Word Recognition, and Rey 15-Item Plus Recognition. Individuals with 0 to 6 years of education performed more poorly than did participants with 7 to 10 years of education on several Rey 15-Item scores (combination equation, recall intrusion errors, and recognition false positives), Rey Word Recognition total correct, and E-score and omission errors on the b Test, but no effect of educational level was observed for Dot Counting Test scores. Cutoff scores are provided that maintain approximately 90% specificity for the education subgroups separately.


Neuropsychologists use performance validity tests (PVTs; Larrabee, 2012) to ensure that results of testing are reflective of the test taker's true neurocognitive ability, and their use is recommended in all compensation-seeking settings. However, whether the type of compensation context (e.g.


Members of the National Academy of Neuropsychology were surveyed in 2005 to assess then-current practices regarding Boston Naming Test (BNT) administration, interpretation, and reporting procedures. Nearly half of 445 respondents followed discontinuation rules that differed from the instructions published with the test, and nearly 10% did not administer items in reverse order to establish the required basal of 8 consecutive items. Of further concern, between 40% and 55% of respondents indicated that they did not interpret BNT scores in light of linguistic and ethnic background, and over 25% reported that they did not consider educational level.


Practice guidelines recommend the use of multiple performance validity tests (PVTs) to detect noncredible performance during neuropsychological evaluations, and PVTs embedded in standard cognitive tests achieve this goal most efficiently. The present study examined the utility of the Comalli version of the Stroop Test as a measure of response bias in a large sample of "real world" noncredible patients (n = 129) as compared with credible neuropsychology clinic patients (n = 233). The credible group performed significantly better than the noncredible group on all trials, but particularly on word-reading (Stroop A) and color-naming (Stroop B); cut-scores for Stroop A and Stroop B trials were associated with moderate sensitivity (49-53%) as compared to the low sensitivity found for the color interference trial (29%).
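The cut-score logic running through these abstracts (flag as many noncredible patients as possible while holding false positives among credible patients to roughly 10%, i.e., ~90% specificity) can be sketched in a few lines. The score distributions below are invented for illustration and are not data from any study listed here; the sketch assumes higher scores indicate worse, noncredible-typical performance.

```python
import random

def cutoff_at_specificity(credible, noncredible, target_spec=0.90):
    """Return the (cut, specificity, sensitivity) triple with the highest
    sensitivity among cut-scores that keep specificity at or above target.
    Assumes higher scores indicate noncredible performance."""
    best = None
    for cut in sorted(set(credible) | set(noncredible)):
        # Credible patients scoring below the cut are correctly passed.
        spec = sum(s < cut for s in credible) / len(credible)
        # Noncredible patients at or above the cut are correctly flagged.
        sens = sum(s >= cut for s in noncredible) / len(noncredible)
        if spec >= target_spec and (best is None or sens > best[2]):
            best = (cut, spec, sens)
    return best

# Hypothetical completion-time scores (seconds), not data from the studies above.
rng = random.Random(0)
credible = [rng.gauss(40, 8) for _ in range(200)]
noncredible = [rng.gauss(60, 15) for _ in range(200)]
cut, spec, sens = cutoff_at_specificity(credible, noncredible)
```

Scanning candidate cuts and keeping the most sensitive one that still clears the specificity floor mirrors how the cut-offs above are reported, i.e., sensitivity "at approximately 90% specificity."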


A Rey-Osterrieth Complex Figure Test (ROCFT) equation incorporating copy and recognition was found to be useful in detecting negative response bias in neuropsychological assessments (ROCFT Effort Equation; Lu, P. H., Boone, K.


The Rey Word Recognition Test, a brief, simple-to-administer, free-standing neurocognitive performance validity test, was examined in a large known-groups sample (122 credible patients and 134 non-credible patients). Total words correctly recognized was the most sensitive score, identifying 54% of non-credible participants at a cut-off of ≤6 while maintaining specificity of approximately 90%. However, specificity rates were somewhat lower in credible individuals with <12 years of education or borderline intelligence, or who were bilingual (spoke English as a second language, or learned English concurrently with another language), indicating that cut-offs may require minor adjustment in these groups.


The b Test (Boone, Lu, & Herzberg, 2002a) is a measure of cognitive performance validity originally validated on 91 non-credible participants and 7 credible clinical comparison groups (total n = 161). The purpose of the current study was to provide cross-validation data for the b Test on a known groups sample of non-credible participants (n = 212) and credible heterogeneous neuropsychological clinic patients (n = 103). The new data showed that while the original E-score cut-off of ≥ 155 achieved excellent specificity (99%), it was associated with relatively poor sensitivity (41%).


In the present study a large sample of credible patients (n = 172) scored significantly higher than a large sample of noncredible participants (n = 195) on several WAIS-III Picture Completion variables: Age Adjusted Scaled Score, raw score, a "Rarely Missed" index (the nine items least often missed by credible participants), a "Rarely Correct" index (nine items correct <26% of the time in noncredible participants and with at least a 25 percentage-point lower endorsement rate as compared to credible participants), and a "Most Discrepant" index (the six items that were the most discrepant in correct endorsement between groups-at least a 40 percentage point difference). Comparison of the various scores showed that the "Most Discrepant" index outperformed all the others in identifying response bias (nearly 65% sensitivity at 92.8% specificity as compared to at most 59% sensitivity for the other scores).
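The "Most Discrepant" index construction described above (retain the items whose correct-endorsement rates differ most between the credible and noncredible groups) amounts to a simple per-item filter. The proportions below are invented for illustration and are not the actual WAIS-III Picture Completion item statistics from the study.

```python
# Hypothetical per-item proportions correct in each group (item id -> proportion).
credible_pc = {1: 0.95, 2: 0.90, 3: 0.88, 4: 0.60, 5: 0.85, 6: 0.92}
noncredible_pc = {1: 0.50, 2: 0.85, 3: 0.40, 4: 0.55, 5: 0.30, 6: 0.45}

def most_discrepant(cred, noncred, min_gap=0.40):
    """Return items whose correct-endorsement rate is at least `min_gap`
    (in proportion terms, e.g. 0.40 = 40 percentage points) lower in the
    noncredible group than in the credible group."""
    return sorted(item for item in cred if cred[item] - noncred[item] >= min_gap)

discrepant_items = most_discrepant(credible_pc, noncredible_pc)
```

With these invented proportions, items 1, 3, 5, and 6 clear the 40-percentage-point gap; summing performance over only such items concentrates the signal that separates the groups, which is consistent with the reported sensitivity advantage of the "Most Discrepant" index over raw or scaled scores.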
