by James Mannion

Last Friday the University of Cambridge, Oracy Cambridge and AQA hosted a conference on Assessing Collaboration at Hughes Hall, Cambridge. Following the recent publication of results from the PISA ‘collaborative problem solving’ test (in which the UK performed reasonably well, to not very much fanfare), this was a timely opportunity to reflect on the thorny issue of how to assess collaboration. Here are my potted recollections of the day, bolstered by the insights of Ayesha Ahmed, the conference organiser and host.
The grip of groupthink
I kicked off the day with a short talk on The Importance of Collaboration. One thing that is worth repeating here is the importance of using ground rules to avoid groupthink. The term groupthink was popularised by Irving Janis (1972) to describe the fascinating phenomenon whereby a group of people make bad decisions because of weird group dynamics. Janis’s research focused on “policy decisions and fiascos” such as the Bay of Pigs, Pearl Harbor and the Vietnam War. Janis identified a number of practical steps that can be taken to prevent groupthink. These include:
- Leaders should assign each member the role of “critical evaluator”. This allows each member to freely air objections and doubts.
- Leaders should not express an opinion when assigning a task to a group.
- Leaders should absent themselves from many of the group meetings to avoid excessively influencing the outcome.
- The organization should set up several independent groups, working on the same problem.
- All effective alternatives should be examined.
- Each member should discuss the group’s ideas with trusted people outside of the group.
- The group should invite outside experts into meetings. Group members should be allowed to discuss with and question the outside experts.
- At least one group member should be assigned the role of Devil’s advocate. This should be a different person for each meeting. (Janis, 1972)
There are strong parallels here with the use of ground rules for group talk, a methodology developed by the Thinking Together research group here at Cambridge (see here for some excellent resources and links to publications).
99 problems and perfection ain’t one
I then pondered some of the practical problems with assessing collaboration – problems I know well, having wrestled with them for a number of years as a teacher and evaluator of Learning to Learn. Perhaps the most obvious issue is logistics: if you’re a class teacher assessing a group discussion among four students, which takes, say, 5–10 minutes, what are all your other students doing during that time? Recording group discussions in a busy classroom environment is not easy either: someone has to film or record each discussion, and someone has to find the time to watch it all back. Another issue is the subjective nature of judgement, and the associated problems of reliability, validity and moderation. These relate to the limits of attentional capacity: when observing and making notes on a group discussion, it is impossible to pay attention to everything that is going on, because group interactions have so many simultaneous aspects. None of these problems is easy to overcome, and when it comes to assessment, perfection is perhaps something to strive for rather than something we can ever really expect to achieve.
I concluded by setting out the case for how this conference might just save the world. The argument runs as follows:
- Humanity is faced with a number of existential threats (global warming, artificial intelligence, nuclear war, clash of civilisations, bioterrorism, environmental meltdown, economic meltdown, running out of stuff, topsoil erosion, etc.)…
- Our ability to overcome many of these threats depends on our ability to:
- Communicate with one another
- Consider others
- Collaborate in determining and executing solutions
- Humans are pretty amazing, and there are loads of examples of us being really good at collaborative problem solving. However, a glance at your average news bulletin would suggest that there is also some room for improvement in this area.
- We need to explicitly teach people
- How to speak and listen effectively
- How to get along with one another, and resolve conflict where it arises
- How to collaborate effectively – internalising and culturally embedding the kinds of rules for productive collaboration outlined by Janis and the Thinking Together team
- How to interthink and interact in productive ways
- Research suggests:
- that this can be done in schools, to a very significant degree; and
- that this does not happen in schools to the extent that it should.
- The word oracy has been around for 50 years. However:
- Many teachers haven’t heard of it
- Even those who have – and who value oracy – often don’t make time for it, for a range of reasons.
- That which is assessed is that which gets done. For example, league tables incentivise schools to “game the system”.
- We need to come up with reliable ways to assess collaboration. The survival of our species – and others – depends on it!
When I first wrote this argument, I intended it as a kind of joke – “no pressure, but the survival of the species depends on what we come up with today”. But as I read it back now, it doesn’t strike me as particularly funny – only pressing.
The internet of things is watching you…
Several of the talks focused on ways to use technology to overcome some of the problems outlined above. For example, Dr Mutlu Cukurova from the UCL IoE Knowledge Lab shared some findings from his fascinating research, which focuses on assessing collaboration using the Internet of Things. Essentially, Mutlu’s research seeks to automate some aspects of assessing collaboration, using cameras embedded in objects to capture nonverbal behaviours and provide real-time visual metrics of how well students are collaborating. You can read a recent article on the topic by Mutlu and colleagues here (no paywall!).
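The published method is the NISPI framework (Cukurova et al., 2018, in the references below). Purely as an illustration of the general idea – not the framework itself – here is a minimal sketch of how one real-time indicator might be computed from a stream of camera-detected nonverbal events: how evenly activity is spread across a group. The event tags and the entropy-based metric are my assumptions.

```python
from collections import Counter
from math import log

def participation_balance(events: list[str]) -> float:
    """Normalised entropy of per-student event counts: 1.0 means nonverbal
    activity (gestures, gaze shifts, object manipulation) is spread evenly
    across the group; values near 0.0 mean one student dominates."""
    counts = Counter(events)
    total = sum(counts.values())
    if total == 0 or len(counts) < 2:
        return 0.0
    entropy = -sum((c / total) * log(c / total) for c in counts.values())
    return entropy / log(len(counts))  # divide by max entropy to get [0, 1]

# Hypothetical stream of detected events, tagged by student ID
detected = ["s1", "s2", "s1", "s3", "s2", "s1", "s4", "s2"]
print(f"Participation balance: {participation_balance(detected):.2f}")
```

A real system would of course need to fuse many such indicators, and validate them against what actually counts as productive collaboration.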
One task, multiple uses…
Fazilat Siddiq from the Nordic Institute for Studies in Innovation, Research and Education in Oslo spoke about the development of a novel task for collaborative problem solving in a digital environment. Students read a poem and then completed an open, creative task in which they drew their interpretations of the poem on screen, collaborating digitally and communicating via chat boxes. Fazilat collected scores as well as think-aloud protocol data to understand more about the collaborative problem solving processes involved in this task. A selection of Fazilat’s recent publications can be found here.
Assessing individual participation in collaborative tasks
Ayesha Ahmed from the University of Cambridge and Ruth Johnson from AQA described their current study investigating the features of good participation in collaborative tasks: what sort of talk happens during episodes of progress and success in the problem-solving, and what sort of talk happens when the group is stuck? Ruth and Ayesha are developing resources to help teachers and learners to assess these skills in a formative way in the classroom – watch this space!
Global Perspectives
Ashley Small from Cambridge Assessment International Education shared the findings from a small-scale study of teacher perceptions of the IGCSE Global Perspectives. This international qualification includes a teacher assessment of a collaborative project in which students are awarded both individual and team marks. Ashley explored how three of the teachers who assess this qualification make judgements about the quality of collaboration, using hypothetical scenarios to investigate their approaches to difficult assessment decisions. A clear message to emerge from this session was the importance of sharing clear guidance on how to assess collaboration.
Group thinking and mathematical thinking: Japan vs UK
Taro Fujita from the University of Exeter and colleagues have developed a test to assess group thinking skills using non-verbal reasoning questions – graphical puzzles that require logical inference to solve. Eleven-year-olds in both the UK and Japan had a go at these tests individually and in groups. Taro showed us some extracts of group talk from the UK and Japan, which gave us a fascinating insight into the different approaches to the tasks. Interestingly, when tested individually, the UK and Japanese students performed similarly on the task. However, in the group setting the Japanese students significantly outperformed their UK counterparts. This was a small-scale study, and further research is needed to determine why this was the case.
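The actual analysis is reported in Wegerif et al. (2017), referenced below. Purely as a toy sketch of the kind of comparison involved – with entirely made-up numbers – one common way to quantify this pattern is a “group advantage” score: how much each group’s collective score exceeds the mean of its members’ individual scores.

```python
def group_advantage(individual_scores: list[float], group_score: float) -> float:
    """How much a group's score exceeds its members' mean individual score."""
    return group_score - sum(individual_scores) / len(individual_scores)

# Entirely hypothetical data: (members' individual scores, group score)
uk_groups = [([12, 14, 11], 13), ([10, 9, 13], 11)]
jp_groups = [([13, 12, 12], 16), ([11, 14, 10], 15)]

for label, groups in (("UK", uk_groups), ("Japan", jp_groups)):
    advantages = [group_advantage(ind, grp) for ind, grp in groups]
    print(f"{label}: mean group advantage = {sum(advantages) / len(advantages):.2f}")
```

On numbers like these, individual performance is similar across the two countries while the group advantage differs markedly – the shape of the finding Taro described.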
Summary and final discussion
Stuart Shaw from Cambridge Assessment International Education rounded off the day with an impressive summary of the day’s talks and left us with some questions to guide our final discussion session. At the start of the day, I had assumed that collaborative problem solving sits almost entirely within the realm of spoken communication. However, during the conference and in this final discussion, a consensus emerged that there are many unspoken features of collaboration, such as nonverbal cues – and indeed that collaborative problem solving can be done entirely in the absence of speech, as with Fazilat Siddiq’s work involving collaboration on artwork via the internet, using text chat as the basis for communication. There was a consensus that assessing collaboration:
- Is never easy
- Can be done well in a range of ways, and across a range of contexts
- Is worth pursuing, for the reasons outlined above.
This reminded me of a phrase I read in a recent piece by Lauren Ballaera from the Brilliant Club: “don’t let the perfect become the enemy of the good”. When it comes to assessing collaboration, that seems to be a useful adage to bear in mind.
References
Cukurova, M., Luckin, R., Millán, E., & Mavrikis, M. (2018). The NISPI framework: Analysing collaborative problem-solving from students’ physical interactions. Computers & Education, 116, 93–109.
Janis, I. L. (1972). Victims of Groupthink: a Psychological Study of Foreign-Policy Decisions and Fiascoes. Boston: Houghton Mifflin.
Siddiq, F., & Scherer, R. (2017). Revealing the processes of students’ interaction with a novel collaborative problem solving task: An in-depth analysis of think-aloud protocols. Computers in Human Behavior, 76. https://doi.org/10.1016/j.chb.2017.08.007
Wegerif, R., Fujita, T., Doney, J., Linares, J. P., Richards, A., & Van Rhyn, C. (2017). Developing and trialing a measure of group thinking. Learning and Instruction, 48, 40-50.