Thinking about the settings in a tech-based group member evaluation tool

This is the 13th post in a series titled ‘All Things Group Work’. The home page is here.

In the last post, I discussed how technology can be leveraged to implement group member evaluation (GME) pedagogy more efficiently. In this post, I highlight some of the key pedagogical considerations that accompany the settings options that are likely to be common to most GME tools.

Time is everything

The common thread throughout most of the discussion below is the application of time. If all of the settings are enabled in the group member evaluation, a student will have to spend quite a lot of time on this task, which may reduce their propensity to do it. It is vital that you have an accurate understanding of how long a student would actually spend reviewing their peers. To do this, undertake a review yourself based on the settings and feedback format you choose, add a little extra time because you are more trained and expert than the typical student at such a task, and then multiply the time taken by the number of students in a typical group. There are no hard and fast rules, and it will depend on context, but a student having to spend more than 60 minutes on such a task may lose motivation, reducing the validity of their feedback and its value to the peer.
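The estimate above can be sketched as a small calculation. This is just an illustration of the arithmetic; the function name, the 25% inexperience buffer, and the example numbers are my own placeholders, so substitute the timing from your own trial run.

```python
def estimated_student_time(instructor_minutes, peers, inexperience_factor=1.25):
    """Estimate the total minutes a student will spend reviewing peers.

    instructor_minutes: how long one review took you in a trial run
    peers: number of group members each student must review
    inexperience_factor: buffer because students are less practised than you
    """
    per_review = instructor_minutes * inexperience_factor
    return per_review * peers

# Example: your trial review took 10 minutes, groups have 5 members
# (so 4 peers to review), with a 25% buffer for student inexperience.
total = estimated_student_time(10, 4)
print(total)  # 50.0 minutes - under the 60-minute motivation threshold
```

If the result lands well over 60 minutes, that is the signal to trim settings, criteria, or the number of reviews before releasing the task.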

Which type of feedback criteria should I choose?

The choice of rubric, scale or comments will depend on your specific task, but each approach has its own considerations, as each attracts its own issues of validity.

A rubric may be more appropriate if you want students to consider specific criteria when giving feedback. Validity issues will stem from inconsistent reading and application of the criteria. The quality of the rubric is therefore all important, and there is good advice on how to achieve such quality here. Perhaps equally important as writing a good rubric will be your explanation of it to students before they begin the task and your modelling of how the rubric is applied to several examples. This gives students a shared standard from which they can more accurately apply the rubric to peers.

The scale approach is more informal, but still needs to be standardised so students understand what applying the numbers means. An accessible example within the task’s instructions that shows the numbers being applied will guide students in their own decision making.

Similar modelling will be needed for the comments option. Providing students with parameters and examples of constructive feedback is likely to enable them to provide quality feedback that is useful to peers. I have already discussed how to teach constructive feedback here.

Should students write comments as well as apply the rubric/scale?

The answer to this is YES, but consider how much time this would add for a student responding to each member of their group. It may be that you only ask for an overall comment on strengths and an overall comment on weaknesses, or for a comment only when negative feedback is given for a specific criterion. It will be important to provide a model and examples of how to do this well; this will also help place time boundaries around the comments.

When to let students receive feedback

Whether students receive feedback immediately or only once all members of the group have submitted their responses will depend entirely on your approach to feedback, or perhaps on the particular cohort you are dealing with. One positive of allowing students to see their feedback immediately is that they get a live sense of the type of feedback that they themselves could give to others. The possible negative is that they mimic poor feedback. To prevent this from happening, model how feedback should be given before students begin. Allowing students to review feedback as it comes in may also help them distribute their time in completing this aspect of the task.

How many reviews should each student provide?

This will largely be determined by time. The typical group has four or five students in it, so, thinking practically, a student reviewing four others is going to spend considerable time on the task. A rule of thumb here is to consider how long it would take a student to give feedback, depending on the feedback format (rubric, scale, comments). If you budget 30 minutes for this feedback, for example, then work backwards from there in deciding how many reviews a student should complete. This might also affect the number of criteria you include in a rubric, should that be the format chosen.
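Working backwards from a time budget can be sketched as a couple of simple divisions. The function names and the example numbers here are hypothetical placeholders, not part of any particular GME tool.

```python
def max_reviews(budget_minutes, minutes_per_review):
    """How many reviews fit in the overall time budget, rounding down."""
    return budget_minutes // minutes_per_review

def max_criteria(budget_minutes, reviews, minutes_per_criterion):
    """How many rubric criteria fit, given a fixed number of reviews."""
    per_review_budget = budget_minutes / reviews
    return int(per_review_budget // minutes_per_criterion)

# Example: a 30-minute budget and roughly 8 minutes per review
print(max_reviews(30, 8))      # 3 reviews

# Example: 30 minutes, 4 required reviews, ~2 minutes per rubric criterion
print(max_criteria(30, 4, 2))  # 3 criteria
```

The same logic works in reverse: if the group size fixes the number of reviews, the adjustable lever becomes the number of criteria or the depth of comments you ask for.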

Should reviews be anonymous?

Enabling this option is likely to allow students to be a little more honest and open with their feedback, without fear of reprisal or of upsetting friends. It may in fact give members of a group their only chance of openly identifying issues that they would not otherwise be able to raise with someone face to face, particularly if that person is quite a dominating character.

What’s next?

Now that you have designed your task, the next step is to explain it to your students. They need to understand the purpose of the GME if they are to engage willingly and diligently with it. That is the focus of the next post.

I’m Paul Moss. I’m a learning designer at the University of Adelaide. Follow me on Twitter @edmerger or on LinkedIn
