Possible rubric criteria for group member evaluations: Teamwork skills

This is the 6th post in a series titled ‘All Things Group Work’. The home page is here.

A lot of group member evaluations will seek to develop some or all of the graduate attributes of teamwork, communication, career readiness, and self-awareness/emotional intelligence, as depicted in this illustration by Rebecca Smith from the University of Adelaide. In each post in this series, I present a range of variations on one of these central themes, rationalise why it may be of use, and suggest design strategies to maximise its effectiveness.

Teamwork skills

  1. ‘The group member collaborated well with others in the team’ – teamwork is often equated with collaboration, but many assessments are not designed to be ‘traditionally’ collaborative: they are more about the curation of individual parts. To make this criterion applicable, the collaboration may therefore need to focus on organising the initial distribution of tasks, setting up team processes, and collectively reviewing and iterating the final curation. If the task does involve collaboration within the distinct parts, then collaboration will include providing constructive feedback to others as the project progresses. Asking students to stipulate, in a comment against this criterion, what the collaboration entailed will give you an understanding of how they collaborated, and this should be modelled for clarity.
  2. ‘The group member gave consistent constructive feedback on the project’s progress’ – this can be both positive and negative, but it’s important to realise that providing constructive feedback is a skill that requires training. Providing positive feedback to the team is powerful in helping generate team spirit, and can help motivate the team to complete tasks to a strong standard. Negative feedback is a lot more difficult, as it requires delicately articulating the lack of progress and its reasons, which may involve identifying individuals. Both require modelling and plenty of active training and practice to do well, and this can be achieved in dedicated tutorial time using role plays or written, scenario-based examples.
  3. ‘The group member gave consistent constructive feedback on their peers’ progress’ – again, this can be positive or negative. Positive feedback is important because students sometimes don’t realise what their strengths are, and identification by peers not only provides that information but also helps identify possible future leadership opportunities related to the strength. Negative feedback requires sensitivity and an awareness of how intention can be misunderstood through language. Both require modelling and some active training and practice to do well, and this can be achieved in dedicated tutorial time. A productive method is to set up some role plays, either physically or in written form: for example, student 1 does A, B and C, so what is your response? If your rubric also requires students to make comments, a good approach to the modelling is to play devil’s advocate and imagine the types of constructive feedback that are likely to arise in the task you set. Your modelling will demonstrate the tone and the length of the comment required. What is necessary, however, for the evaluations to be valid is for students to understand objectively what each peer is required to do in the project. This reduces the need for subjective interpretation of others’ performance and reduces the likelihood of conflict. Getting groups to create a shared document outlining each person’s responsibilities will be useful here.
  4. ‘The group member completed their share of the work’ – without a shadow of a doubt, this is the dominant gripe of students involved in group work. Bearing in mind that it is mostly a characteristic of groups dominated by low-achieving and/or unmotivated students, a potential way to mitigate it is to design the task so that the parts that can be broken up are clearly identified. Group members indicate which part they have done, the parts are marked individually, and the scores are then combined to arrive at the group score. Individual scores are then moderated if the peer feedback suggests an imbalance in productivity (a minimal sketch of this kind of moderation appears after this list). It should also be noted that when students are left to their own devices to distribute the tasks, they often distribute the workload unevenly without realising it, and it is only later, when the inequity becomes clear, that the individuals burdened with a greater share feel aggrieved. This has implications for the validity of the evaluations and should be taken into account when moderating the final group scores.
  5. ‘The group member completed work in a timely manner’ – this is a lesser version of the previous criterion and is more about the lateness of submissions as opposed to the work not being done at all. For this to be an effective criterion, however, it needs to be assessable several times, which means the task should be designed to require milestone activity before the final product is due. This is where standardising the frequency becomes necessary: there is little validity in one group creating five opportunities for evaluating timeliness while other groups only evaluate it on whether the due date was met. The rubric criterion should stipulate the frequency, as this will encourage students to establish the milestone dates, and this will become apparent if you discuss the rubric before the task begins. It will also be necessary for the group to create a shared document so the milestone submissions can be recorded. This also provides a recalcitrant member with the opportunity to explain their neglect, and the group with the opportunity to consider that explanation when making their final evaluations, something that encourages effective communication.
  6. ‘The group member completed tasks to an agreed level of quality’ – this type of criterion is fraught with complexities: for a student to effectively evaluate the quality of their peers’ work, they need to understand what each assigned component entailed, and then understand what quality looks like in each. For validity to be maintained across groups, the tasks will need to be clearly defined by you before students begin, so the evaluation of each component can be consistently measured against a baseline. If you don’t do this and peers assign tasks themselves, they may underestimate how much work is involved in each component, and then unfairly and negatively evaluate the variation in quality if components aren’t done well. Bear in mind that modelling each component, so that quality can be understood, is also time-consuming and may raise academic integrity concerns. Because of these factors, it may be wiser to leave this criterion out.
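
The marking and moderation approach described in criterion 4 can be made concrete with a short sketch. The following Python is a minimal, hypothetical illustration, assuming component marks are averaged into a group score and each member’s score is then scaled by their average peer rating relative to the team average (loosely in the spirit of a WebPA-style weighting factor). The function names, rating scale, and cap are illustrative assumptions, not a prescribed scheme.

```python
# Hypothetical sketch: combine individually marked components into a group score,
# then moderate each member's score using peer ratings. Names, scales, and the
# cap are illustrative assumptions only.

def group_score(component_marks: dict[str, float]) -> float:
    """Combine individually marked components into a single group score (simple mean)."""
    return sum(component_marks.values()) / len(component_marks)

def moderated_scores(group: float,
                     peer_ratings: dict[str, list[float]],
                     cap: float = 1.05) -> dict[str, float]:
    """Scale the group score for each member by their average peer rating relative
    to the team average, capped so a strong rating can't inflate a score too far."""
    averages = {member: sum(r) / len(r) for member, r in peer_ratings.items()}
    team_avg = sum(averages.values()) / len(averages)
    return {member: round(group * min(avg / team_avg, cap), 1)
            for member, avg in averages.items()}

# Example: three components marked out of 100, then peer ratings on a 1-5 scale.
components = {"literature review": 72, "data analysis": 85, "presentation": 78}
group = group_score(components)  # ~78.3

ratings = {
    "Asha":  [5, 5, 4],   # rated highly by peers
    "Ben":   [4, 4, 5],
    "Caleb": [2, 3, 2],   # peers flag an imbalance in contribution
}
print(moderated_scores(group, ratings))
# {'Asha': 82.3, 'Ben': 82.3, 'Caleb': 48.4}
```

Exactly how peer ratings map to marks is a design decision for the assessor; the point is simply that the moderation step can be made transparent and applied consistently across groups.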

The next post presents options for Communication skills.

I’m Paul Moss. I’m a learning designer at the University of Adelaide. Follow me on Twitter @edmerger or on LinkedIn.
