Can programme level assessment help with wicked AI?

A group of students round a table having a conversation with a generative AI character

I’ve just been reading a paper by Jiahui Luo about trust between students and staff in the context of GenAI and assessments in higher education. We’re discussing this in the Learning and Teaching Team in the Institute for Academic Development at our team meeting tomorrow. This set me thinking about the wicked problem of trust between teachers and students in contemporary higher education. This isn’t a new wicked problem but it’s been amplified by the reduction in public funding for higher education and the rapid growth in the possibilities presented by AI in recent years.

The trust between students and teachers in higher education has always been shaped by complex power relations. For example, teachers have the power to grade students in ways that influence many of their life opportunities. Students can support or harm teachers’ careers through the feedback they give and the complaints they raise. All of this interacts with the erosion of trust that comes when staff and students are marginalised because of race, gender, disability and more. Then add into the mix a context where public funding per student has declined significantly in many countries. This has led to larger classes and less time for considered and compassionate dialogue about assessment and feedback between staff and students. The use of blunt metrics to guide decision making in higher education has also eroded trust between teachers and institutions, making it harder for teachers to find the emotional energy for relationship building with students.

Then along come the complexities of GenAI, which accelerate the need for deep and ethical rethinking of the nature of authorship, agency and many roles in society. It’s sad but not surprising that this has led to widespread discourses focused on preventing cheating, rather than on creative and ethical engagement between human and more-than-human contributors. All of these drivers seem to be contributing to the emergence of greater distrust between staff and students.

So what can we do about this as teachers, course leads, programme leads…? I think one useful move is to think about assessment at the programme level rather than at the course level*. If we think about how to meet the broad aims of the programme across ALL of the courses – rather than focusing on what’s covered within the assessments for each individual course – that frees up some possibilities:

  1. We can reduce the total number of assessments, which gives us more space to have good trusting dialogues with students about how we can support them to use GenAI well and ethically in ways that will help prepare them for their future lives.
  2. We can build progression of learning across a programme to give students better opportunities to learn good academic practice with AI. This requires a gradual build-up of the level of challenge in the tasks we set, well aligned with where students are starting from. It also requires multiple opportunities to try out key elements of the practices of the subject areas students are studying, alongside repeated conversations to unpack any misconceptions and concerns.
  3. We can open up space to design forms of assessment that connect with students’ lived experiences, engage with messy real-world challenges, and build in good use of GenAI.

All of the things in this list can help to build the trust that is so essential to good higher education.

*At Edinburgh we usually say ‘programme’ to refer to 4/5 years of full-time undergraduate study or 1 year of full-time postgraduate study, or part-time equivalents. We say ‘course’ to refer to parts of programmes that typically involve 200 hours of student learning time.

Image created by Vel McCune using DALL-E
