
Rubrics


As mentioned earlier, a rubric is a fairly simple measurement tool used to rate student performance against a set of criteria. Each criterion is usually rated on a basic scale, such as a three-point scale: Above Average, Average, Below Average. We use rubrics to simplify the scoring of student performances for two basic reasons. First, a rubric is usually very easy to complete, which makes it more likely that a faculty member will use it within an already busy course. Second, a rubric provides information about students at a fairly global level, which is an appropriate level of analysis for assessing broad goals.

There are two main types of rubrics: holistic and analytic. A holistic rubric gives a single overall rating of student ability within a goal area; written communication ability, for example, can be scored holistically using one rating to represent the student's overall achievement. An analytic rubric divides a performance into subcategories, each with its own rating; a writing sample could be rated separately for technical accuracy, creativity, organization, and so on. Analytic rubrics provide a more fine-grained look at student performance, but they take more time to complete. Holistic rubrics are quick, but they do not allow for further analysis of subcomponents within an achievement area.
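The distinction is easy to see in a small sketch. The Python snippet below is purely illustrative; the subcategory names, the scores, and the use of a mean as a summary statistic are assumptions made for the example, not anything prescribed by this handbook.

    from statistics import mean

    # Holistic: one overall rating for written communication (0-3 scale).
    holistic_score = 2  # "Average"

    # Analytic: a separate rating for each subcategory (hypothetical values).
    analytic_scores = {
        "Technical Accuracy": 3,
        "Creativity": 2,
        "Organization": 1,
    }

    # Analytic ratings can still be summarized into a single number for
    # program-level reporting...
    overall = mean(analytic_scores.values())  # 2.0

    # ...but they also preserve detail that a single holistic score cannot:
    # here, Organization stands out as the area needing attention.
    weakest = min(analytic_scores, key=analytic_scores.get)
    print(f"Overall: {overall:.1f}; weakest area: {weakest}")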

A basic rule of thumb for developing rubrics is to begin simply (using more holistic ratings) and allow the rubric to evolve into a more detailed, analytic rubric over time. Once a holistic rubric is in use, we can decide where we need finer-grained information to evaluate and strengthen our programs. Some basic steps for designing a rubric are given below.

Step 1—Consult the professional literature to identify existing rubrics. Many rubrics are already in use in a variety of subject areas and some of these have been refined using professional standards and empirical research. It makes a lot of sense to use these, at least as models, in designing our own rubrics.

Step 2—Adapt an existing rubric to match our program. It would be a mistake to adopt an externally created rubric without comparing it against our specific program goals. The Modern Language Association may have a model statement on written communication, but this does not necessarily equate to our own program goals in this area.

Step 3—Determine rating scale and descriptors. The number of rating points within a scale is not a critical factor, particularly since these levels can be modified as the rubric evolves. It is a good idea to start with the end result in mind: What levels of information do we need to evaluate a program? It may be that a simple two-point scale is sufficient, as in “Does the student meet the competency in this area—Yes, No?” The descriptors should be clear and easy to understand. Generic descriptors like “Exceeds Expectations,” “Meets Expectations,” and “Below Expectations” are clear and easy to differentiate.
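As a minimal sketch (the scale names and the label function below are invented for illustration, not taken from the handbook), writing the descriptors out explicitly keeps them attached to the numeric ratings as the rubric evolves:

    COMPETENCY_SCALE = {0: "No", 1: "Yes"}  # "Does the student meet the competency?"
    GENERIC_SCALE = {
        0: "Below Expectations",
        1: "Meets Expectations",
        2: "Exceeds Expectations",
    }

    def label(score: int, scale: dict[int, str]) -> str:
        """Translate a numeric rating into its descriptor."""
        return scale[score]

    print(label(1, COMPETENCY_SCALE))  # Yes
    print(label(2, GENERIC_SCALE))     # Exceeds Expectations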

Step 4—Pilot test the rubric. Before any “real” data are collected, the rubric should be pilot tested in real-life situations. Pilot testing will help us to see if the rubric is formatted in a convenient way and whether there is confusion over how to use it. This feedback can be used to revise the rubric, which should now be ready for use.

Step 5—Refine the rubric, as needed. Even though pilot testing will correct preliminary problems with a rubric, the rubric should continue to evolve over time to suit the needs of the program. A holistic rubric may become more analytic as new levels of analysis are added. Generic descriptors may become more specifically tailored to a goal area if the program believes this would make the rubric more useful and meaningful.

Examples of rubrics currently used at Anoka Ramsey are given in the tables below.

Table 7—Written Communication Rubric Example

Dimension             0: No Achievement   1: Poor   2: Average   3: Excellent
--------------------  ------------------  --------  -----------  ------------
Audience Awareness
Organization
Thesis/Focus
Support
Mechanics/Usage

(The rating cells are left blank; the rater marks one level for each dimension.)

Table 8—Science Rubric Example

Dimension                                        0: No Achievement   1: Poor   2: Average   3: Excellent
-----------------------------------------------  ------------------  --------  -----------  ------------
Formulating Hypotheses
Data Collection
Data Interpretation
Oral Communication of Experimental Findings
Written Communication of Experimental Findings

Additional rubric examples are available in an appendix to the Assessment of Student Learning Handbook and through the assessment links listed below:

North Carolina State University Assessment Resources
Association of American Colleges and Universities

There are a few other ways to refine a rubric over time to make it more useful. The descriptors used can be fleshed out with additional information to pinpoint the abilities being evaluated. For example, what is included under “Mechanics/Usage” from the example in Table 7? In addition, defining characteristics of student work at each level can be identified, which can strengthen the reliability of measurement by having a common definition of what each level means. For example, what characterizes the Mechanics/Usage dimension of a paper at the four different rating levels? This type of rubric elaboration can be especially helpful for raters from other fields who want to use a rubric outside of its home goal area (e.g., using a writing rubric in a Nursing class to provide students with helpful feedback about their writing skills).
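To make this kind of elaboration concrete, the level definitions can be recorded alongside the scale itself. The sketch below is hypothetical: the wording of each definition is invented for illustration and is not the program's actual Mechanics/Usage rubric.

    # Hypothetical level definitions for the "Mechanics/Usage" dimension of
    # Table 7; real definitions would be written by the program faculty.
    MECHANICS_USAGE = {
        0: "No Achievement: errors in grammar, spelling, and punctuation "
           "make the paper unreadable.",
        1: "Poor: frequent errors distract the reader from the content.",
        2: "Average: occasional errors that rarely impede understanding.",
        3: "Excellent: essentially error-free usage throughout.",
    }

    def describe(rating: int) -> str:
        """Return the shared definition behind a numeric rating."""
        return MECHANICS_USAGE[rating]

    print(describe(2))

Having a shared definition at each level means that two raters reading the same paper are more likely to assign the same rating, which is the reliability gain described above.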
