Abstracts of Scholarship on Peer Review

Foundational Texts

Annotations provided by the University Writing Program's Phillip Troutman, unless otherwise noted.

  • Chisholm, Richard M. “Introducing Students to Peer Review of Writing (PDF).” The WAC Journal 3.1 (1991): 4-19.
    Troutman's annotation: This article articulates a rationale for teaching students peer review and provides a method for introducing them to key concepts and realistic practices; modes of developing trust and accepting criticism; the pragmatics of giving and receiving peer review; and, briefly, its relevance beyond the academy. Chisholm describes his exercise, in which students read his own draft essay on the importance of peer review (also provided in this article) and then respond to him orally in class about that draft. Both his description and the draft essay itself, the one students read, are useful.
  • Elbow, Peter, and Pat Belanoff. Sharing and Responding. 3rd ed. Boston: McGraw-Hill, 2000.
    Troutman's annotation: This remains the most important general work on peer response pedagogy for writing courses. It emphasizes the authenticity of the reader's response as a reader, regardless of expertise, and provides models peer readers can follow to give the writer specific kinds of response: listening; say-back; centers of gravity; what is almost said; reply; voice; reader's mind; metaphoric description; believing; doubting; descriptive outline; and criterion-based feedback. It emphasizes the writer's responsibility both in asking readers for specific kinds of feedback and in listening to readers' responses. The writer ultimately must decide what to do with the response.

Case Studies

  • Kelley, Lauren. “Effectiveness of Guided Peer Review of Student Essays in a Large Undergraduate Biology Course (PDF).” International Journal of Teaching and Learning in Higher Education 27.1 (2015): 56-68.

    From author's abstract {and article}: This paper compares the types of student commentary received under a control rubric versus a guided rubric in an introductory biology course, in order to determine whether guided questions increase the amount of “feedforward” response: questions and suggestions that consider the next draft, which are reported to be more beneficial than feedback. Results indicate that guided rubrics significantly increase “feedforward” observations and reduce less useful categories of feedback, such as problem detection and meanness.

    Guidelines in the form of questions and checklists help students provide commentary that addresses a wider variety of issues and problematic sections of the text. Differences between rubrics, however, had limited influence on student attitudes after peer review. [Overall, approximately half (48%) of students commented that they thought peer review was useful. Reported reasons why peer review was ineffective remained consistent between rubrics, with the most cited being lack of time/effort from the reviewer, an inadequate peer reviewer, and vague/confusing review.] {Another potential reason is the low level of explanation present in both rubrics, [i.e., the lack of] directive comments, or statements commenting on specific changes exclusive to the paper.} {The lack of harsh commentary in this study may be due to the fact that both authors and reviewers were identified on the rubric.}

  • Reynolds, Julie, and Vicki Russell. “Can You Hear Us Now?: A Comparison of Peer Review Quality When Students Give Audio versus Written Feedback (PDF).” The WAC Journal 19 (2008): 29-44.

    From authors' abstract {and article}: This study followed first-year composition students for one semester and assessed the quality of their peer reviews when they gave audio versus written feedback to their classmates. In general, the study found that the quality of audio reviews was higher than that of written reviews. Even so, most students preferred giving and receiving written feedback, primarily because they felt they could present their ideas best in writing. [The most common reason . . . was that processing audio feedback was more time consuming.] [Students listening to audio feedback have to interpret the reader’s comments and decide how to respond; both of these activities require active learning and thus have much greater potential to enhance students’ development as writers.]
  • Volz, Tracy, and Ann Saterbak. “Students’ Strengths and Weaknesses in Evaluating Technical Arguments as Revealed through Implementing Calibrated Peer Review™ in a Bioengineering Laboratory.” Across the Disciplines 6 (2009): 1-21.
    From authors' abstract: Analysis shows that trained peers' holistic ratings of posters are linearly correlated with instructor ratings (r = 0.6). In peer review, students also demonstrate expert-level skill, comparable to the instructor's, on low-level cognitive tasks such as knowledge of material. However, students routinely overrate their peers' posters, and their own, relative to the instructor on high-level cognitive tasks such as evaluation. Student self-evaluations also correlate poorly with instructor evaluations on a holistic scale (r = 0.17).
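
    A note on interpreting these figures: assuming the reported r values are standard Pearson product-moment coefficients (the usual reading of a linear correlation, though the abstract does not name the statistic), the measure over n posters with peer ratings x_i and instructor ratings y_i is

        r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}}

    On that assumption, r = 0.6 marks a moderately strong positive linear relationship between peer and instructor ratings, while r = 0.17 marks only a weak one.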