Validity and reliability in writing assessment rubrics



Evidence of content validity

According to the Standards for Educational and Psychological Testing, a fundamental concern in judging assessments is evidence of validity.


Assessments should clearly represent the content domain they purport to measure. For example, if the intention is to learn more about a student's ability to read content area textbooks, then it is critical that the text passages used for assessment be structured similarly to those textbooks. Based on their study of eight widely used and cited IRIs, Applegate, Quinn, and Applegate concluded that there were great variations in the way IRI text passages were structured, including passages with factual content.

They observed that biographies and content area text, in some cases, matched up better with the classic definition of a story. In a similar manner, Kinney and Harry noted little resemblance between the type of text passages included in many IRIs and the text type typically read by students in middle and high school.

Thus, it makes sense that if the goal of assessment is to gain insights on a student's reading of textbooks that are expository, then the text used for the assessment should also be expository. Relative to the IRIs examined for this analysis, text passages varied by genre and length as well as by whether the text included illustrations, photos, maps, graphs, and diagrams.

A discussion of the ways in which the various IRIs approach these issues follows.

Passage genre

With regard to the text types included in the IRIs under review here, and consistent with the perspective that reading comprehension varies by text type, five of the eight IRIs provide separate sections, or forms, for narrative and expository passages at all levels, making it easy to evaluate reading comprehension and recall for narrative text apart from expository material (Applegate et al.).

However, caution is advised. Despite the separation of genres in some of the current IRIs, and consistent with Applegate et al.'s findings, genre distinctions are not always maintained. For example, one passage is placed in the Expository Form LE section; however, the first comprehension question asks, "What is this story about?"

In fact, the authors note that most of the passages were drawn from textbooks. A few of the IRIs appear to take a more holistic approach in their representation of the content domain; in these IRIs, there is no clear separation of narrative and expository text passages.

Passage length

While the passages generally become longer at the upper levels to align with the more demanding texts read by older students, passage lengths at the same levels vary across inventories; in some cases, within the same inventory, authors offer passages of different lengths as options at the same levels (see Table 1).


For example, finding that beginning readers sometimes struggled with the pre-primer passage in earlier editions, Johns now includes in the ninth edition of the BRI a second, shorter passage option of 25 words for each form at the pre-primer level.

In a similar manner, he offers passages of two different lengths as options at some levels.

Pictures and graphic supplements

Noting the benefits and drawbacks of including illustrations and other graphic supplements with the passages, IRI authors vary in their opinions on this matter.

To eliminate the possibility of readers' relying on picture clues rather than their understanding of the text, Silvaroli and Wheelock and Burns and Roe exclude illustrations entirely, while Bader and Cooter et al. take different approaches. Applegate et al. provide examiners with options for comparing beginning readers' performance on passages with and without illustrations.

Moreover, Leslie and Caldwell provide a number of assessment choices at levels 5 through high school, allowing for in-depth and varied evaluations of students' abilities to use different types of graphic supplements typically found in science and social studies textbooks, such as diagrams, maps, photos, and pie graphs.

Evidence of construct validity

According to the Standards for Educational and Psychological Testing, a valid test also captures all the important aspects of the construct it is intended to measure, in this case reading comprehension.

Across IRIs examined, comprehension question frameworks varied in terms of which aspects of narrative or expository text comprehension they centered on, as well as what dimensions, or levels, of comprehension they measured. In addition, across the IRIs reviewed, assorted measures were used to identify extraneous factors potentially affecting comprehension scores.

A discussion of the various ways in which each IRI handles these issues follows. All of the IRIs attempt to assess these areas either through their question schemes alone or in combination with a retelling and rubric assessment; however, in some cases, the authors use different terms for the dimensions of comprehension they measure.

For measuring narrative text comprehension and recall, six of the eight IRIs focus their question schemes and retelling rubrics on story elements. It should be noted that the question schemes of Burns and Roe, Johns, and Woods and Moe are structured differently (see Table 1).

Thus, if their question schemes are used to evaluate narrative comprehension independently without a retelling and the associated rubric with story elements criteria, then a student's grasp of narrative text structure will not be evaluated. In the assessment of expository text comprehension and recall, there is greater variety across IRIs.


Four IRIs use question schemes or rubrics based on the levels of importance of information in the text. Woods and Moe and Cooter et al. take a different approach. Johns includes a variety of rubric options specific to narrative and expository text passages but also more holistic rubrics that he suggests can be used with retellings of any text type.

In addition, in the QRI-4, Leslie and Caldwell provide a think-aloud assessment option useful for capturing information about the strategies readers use while they are in the process of constructing meaning based on the text.

To facilitate the use of this assessment option, some of the expository text passages at the sixth, upper middle school, and high school levels are formatted in two different ways that allow for conducting assessments with or without student think-alouds.

The authors also provide a coding system for categorizing the think-aloud types based on whether they indicate an understanding or lack of understanding of the text. It should be noted that Bader and Silvaroli and Wheelock use similar criteria for assessing comprehension and recall of narrative versus expository text.

For example, in using the BRLI (Bader) for the assessment of both narrative and expository passages, readers are asked to retell the "story." In addition, there is a place on the evaluation sheet for checking off whether a student's retelling is organized; however, criteria for making this judgment are lacking.

Without a theoretical framework and clearly defined criteria to guide the examiner, it is difficult to determine whether the assessment effectively captures the essential qualities of reading comprehension and recall. For example, in the Reader Response Format section of the IRI, the same scoring guide used to evaluate a student's recall of characters, problems, and outcomes or solutions for the narrative "It's My Ball" is also applied to expository passages.

Validity Concerns in Rubric Development

Concerns about the valid interpretation of assessment results should begin before the selection or development of a task or an assessment instrument.


A well-designed scoring rubric cannot correct for a poorly designed assessment instrument.

Library Instruction Assessment


A consistent approach to assessment is essential to improve library instruction. The Association of College and Research Libraries identified assessment and evaluation as important elements of information literacy best practices (ALA). A rubric for one writing assignment may not be appropriate for a different written assignment.

Be sure to review and revise your rubrics for different assignments and for different semesters so that each remains a valid instrument and continues to address reliability and validity concerns.
