Analysing Reliability and Validity of Reading Comprehension Assessments in Adults
Declan Gilmore. 2016
Abstract
Reading comprehension was assessed amongst native English-speaking students in higher education using four comprehension tests, in order to examine the tests' reliability and validity. In the first testing stage, two multiple-choice variants of the Adult Reading Comprehension (ARC) test and a non-multiple-choice test, the Adult Reading Test (ART), were used for comparison. A re-test was also carried out to determine the reliability of the ARC over time. Owing to the structural differences between them, it was hypothesised that the test variations would produce significantly different and inconsistent results when assessing readers' overall reading ability and their inferential and literal understanding of text. Analysis of the first testing stage showed that participants scored significantly higher on the ART than on the two ARC variants, and that consistent correlations between the overall results of the ART and the ARC variants were not identified. Some inconsistencies were found in the inferential component results of each test, but none in the literal results. The re-test analysis indicated no significant difference in the difficulty of the two ARC tests used, but also no significant correlations between results. Overall, these findings suggest that multiple-choice assessments often measure reading comprehension inaccurately and inconsistently, raising questions about their recommendation for widespread use.