This is the 12th post in a series titled ‘All Things Group Work’. The home page is here.
We have already discussed the importance of building the fundamental skills students need to participate successfully in group member evaluation (GME) here. These skills include writing a strong rubric, teaching students how to interpret it, and teaching them how to deliver constructive feedback. It is worth refreshing your knowledge of these before designing GME.
This post focuses on how technology can be leveraged to improve GME efficiency.
In the quest to make GME more efficient, the University of Adelaide invested in a GME tool from an innovative company called Feedback Fruits. The rest of this post describes how the tool enhances GME at the institution.
The tool is intuitive to use, and students report excellent experiences with it. It offers numerous options for configuring GME, syncs with the group set-up in most LMSs, and allows for anonymous feedback. More information about the tool can be found here.
Demanding student attention
Perhaps one of the more useful functions is its ability to moderate the final grade each group member receives depending on the ratings provided by their peers. This ups the ante in terms of the GME being taken seriously, and incentivises members to follow established charters and team processes.
To assist with moderation, the tool calculates a factor for each student: a proxy for how that student's contribution was valued by the rest of the group. Importantly, the factor is not applied to the grade automatically; the coordinator must manually approve the calculated score. The tool detects outliers, which reduces the time burden of this step. If the tool's purpose is well understood by groups, few groups tend to require intervention. Where intervention is required, this approval step is what facilitates conversations with groups in which one or more members have received a significantly reduced grade, and analysis of the comments will go some way to providing the context needed to make a final decision.
The two options for moderating grades are described below, taken directly from the Feedback Fruits website:
Group contribution factor
This factor takes the average of the ratings a student receives across the criteria and compares it to the average rating received by the student's team members. One of three things will then happen to the student's project grade: it stays the same (nothing is deducted or added), it is multiplied by the contribution factor the student receives, or it is reduced to zero (0).
By default, these values are set to give students:
- A zero (0) if their contribution factor is between 0 and 0.49
- A project grade multiplied by the received contribution factor if the factor is between 0.5 and 1
- An unaltered project grade if the contribution factor is between 1.01 and 2.
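To make these defaults concrete, here is a minimal Python sketch of contribution-factor moderation. The function name and signature are my own illustration, not Feedback Fruits' actual code or API:

```python
def moderate_by_contribution(project_grade: float, factor: float) -> float:
    """Apply the default contribution-factor thresholds.

    A factor between 0 and 0.49 zeroes the grade, a factor between
    0.5 and 1 scales the grade, and a factor between 1.01 and 2
    leaves the grade unchanged.
    """
    if factor < 0.5:
        return 0.0                     # 0-0.49: project grade becomes zero
    if factor <= 1.0:
        return project_grade * factor  # 0.5-1: grade is scaled down
    return project_grade               # 1.01-2: grade is unaltered
```

For example, under these defaults a student with a project grade of 80 and a contribution factor of 0.75 would receive 60, while a teammate with a factor of 1.2 would keep the full 80.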
Group skill factor
Here, the factors are not calculated relative to the other team members' scores; instead, the ratings received are compared to the maximum points possible for the criteria, producing a factor between 0 and 1. An important difference is that a student's factor does not change depending on the scores the other students receive. Again, the teacher can configure the values that determine what happens to a student's project grade for a given average score.
The defaults for this group skill option are set to:
- Give students a zero (0) if they score between 0 and 0.55
- Multiply the project grade by the average score if they score between 0.56 and 0.8
- Leave the project grade unaltered if they score between 0.81 and 1.
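Again as a sketch (my own illustration rather than the tool's code), the skill factor and its default thresholds could be computed like this:

```python
def skill_factor(ratings: list[float], max_points: float) -> float:
    """Average rating received divided by the maximum possible points,
    giving a factor between 0 and 1 that does not depend on how
    teammates scored."""
    return sum(ratings) / (len(ratings) * max_points)

def moderate_by_skill(project_grade: float, factor: float) -> float:
    """Apply the default skill-factor thresholds."""
    if factor <= 0.55:
        return 0.0                     # 0-0.55: project grade becomes zero
    if factor <= 0.8:
        return project_grade * factor  # 0.56-0.8: grade is scaled down
    return project_grade               # 0.81-1: grade is unaltered
```

For instance, a student rated 4, 4, 3 and 5 on criteria scored out of 5 has a factor of 0.8, so a project grade of 80 would become 64; a factor of 0.81 or above would leave the grade untouched.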
The next post explores some of the pedagogical considerations with the settings you choose in the GME tool.
I’m Paul Moss. I’m a learning designer at the University of Adelaide. Follow me on Twitter @edmerger or on LinkedIn