This week I’ll discuss my first couple of readings of the article assigned for the second critical review of research.

Ertmer, P. A., Richardson, J. C., Belland, B., Camin, D., Connolly, P., Coulthard, G., … Mong, C. (2007). Using peer feedback to enhance the quality of student online postings: An exploratory study. Journal of Computer-Mediated Communication, 12(2), 412–433.

In this mixed-methods, exploratory study, the researchers investigate the use of peer feedback in online discussion forums. The purpose of the study was to examine student perceptions of peer feedback and its relationship to the quality of discussion in an online forum. There are three specific research questions, each addressing student perceptions in a different way: peer feedback compared to instructor feedback, the quality of discussion, and the value of providing feedback.

Several measures are used in the study, providing opportunities for triangulation and enriching the context of the data. Specifically, the researchers adopted a case study approach, collecting data through interviews, pre/post surveys, and analysis of discussion postings. Peer feedback and discussion postings were rated using a rubric based on Bloom’s Taxonomy. The sample size was small (n=15) but appropriate for the case study format. Results showed that students valued instructor feedback more than peer feedback, but still found peer feedback valuable. Students also valued the process of providing feedback to their peers.

There are numerous strong elements in this study. The review of literature develops the authors’ conceptual framework in a clear, relevant, and concise manner, and in doing so establishes the scope of the study. By presenting the review of literature in this way, the authors clarify a specific focus for their study, defined by a gap in the current literature: little research exists on peer feedback in online environments, and few studies examine the quality of discourse in those discussions. This research explores both, using student-reported perceptions as well as researcher ratings of discussion postings, with instructors and students applying a common rubric based on Bloom’s Taxonomy. Reasonable measures were taken to ensure inter-rater reliability among the instructors; however, the student ratings were less refined.

Data are well-reported, again concise yet highly contextual and meaningful. Qualitative examples are limited to those that best demonstrate the aspect of the results being discussed. A transcribed example of the student/student/teacher interaction is provided for clarity, as the peer-review data collection is somewhat unconventional. Data reporting could be improved by providing a table that lists the qualitative results more clearly. Results of the surveys and the discussion posting ratings (both from students and the researchers) are not fully reported. Additionally, the instruments are not included; it would be useful to see the survey instruments and the rubrics used in the study.

I found this article to be uniquely relevant to me at this time. I teach a course that opens this week, with peer feedback and discussion boards both elements of that course, and I have significantly changed the way both elements are structured this year. The authors conclude the article with a list of recommendations, which I found useful for distilling practical, research-based advice to implement. While my integration is very different from the one in the study, this article provides further evidence that these practices are useful, making me more confident in continuing to try these techniques, guided by recommendations grounded in research. Additionally, I can better justify the use of these techniques to students.