Better Preparing Students for Assessment: Marking Criteria, Mock Assessments and Peer-Feedback
This case study evaluates a set of three complementary interventions designed to help students prepare better for assessments and improve their performance. In particular, on a third-year undergraduate economics module, it considers i) the provision of more explicit marking criteria, ii) the implementation of a mock assessment, and iii) the provision of opportunities for peer-feedback.
After the interventions, a student survey indicated that 84% of students were positive about the approach, with many comments highlighting the benefits of being able to discuss answers with peers. The remainder of this case study details the interventions and their evaluation with reference to the pedagogical literature.
Background and Problem
When I started teaching economics, I was amazed that many students had little idea of what a high-marking ‘good’ assessment answer should look like. I had always assumed that students knew what constituted a good answer, and that some students failed to deliver such answers simply because they lacked the necessary skills and/or effort. However, it soon became clear to me that this assumption was incorrect. Even when students possess the necessary skills and apply the necessary effort, many under-achieve simply because they are unclear about the features of a good assessment answer.
It is not the purpose of this case study to discuss the many potential explanations for this phenomenon. However, as described by O’Donovan et al (2008), this pattern may reflect a legacy of the ‘laissez-faire’ approach, under which staff in earlier forms of higher education could safely assume that students knew what to do, an assumption that no longer holds in modern higher education.
Until we acknowledge this problem and make more deliberate efforts to help students learn the features of good assessment answers, some students will continue to under-perform. In an effort to better tackle this problem within my own teaching, I recently adopted the following related and complementary innovations in a third-year microeconomics module: i) the provision of more explicit marking criteria, ii) the implementation of a mock assessment, and iii) the provision of opportunities for peer-feedback. I now explain each in more detail.
First, I made a deliberate effort to provide clearer information on i) what I expected of students for the assessment, ii) how students could best prepare for the assessment, and iii) how the assessment would be marked. This follows Habeshaw et al’s (1993, p.11) suggestion that, “One of the most effective ways of getting students to write assignments in the way you want them to, and to improve the quality of their assignments is simply to tell them what your assessment criteria are.” This is a simple and obvious step, but one that some educators neglect or carry out ineffectively.
However, the provision of such information may not be enough to help students understand the features of a high-marking assessment answer. As detailed by O’Donovan et al (2004), in order for students to fully digest this information and let it influence their preparation, students may also need to complete further activities such as marking exercises and peer-feedback sessions.
Therefore, I also conducted a mock assessment. For this particular module, this took the form of a mock exam. However, for many forms of assessment, any opportunity for practice or drafting will help students to commit to thinking about their assessment preparation and the marking criteria in more detail.
In addition, after completing the mock assessment, students were asked to mark each other’s answers using my provided marking criteria and then offer feedback to their peers, before I finally showed them the official answers. This serves to further commit students to think more deeply about the marking criteria. It also benefits the students by providing an opportunity to gain feedback and learn from their peers. This follows Brown et al’s (1995, p.121) advice: “Use self- or peer-marking as a rehearsal opportunity for summative assessment. Let the students see what is expected of them by giving them a practice assessment task. Let students mark themselves against the tutor-provided marking guide. This can enhance very effectively their performance in the final assessment.”
In addition to the references above, the three interventions are well supported within the wider pedagogical literature. For instance, in a well-cited paper involving a two-year study, Rust et al (2003) document a related intervention where participant students were asked to complete some marking exercises with the provided marking criteria, and discuss their opinions with their peers and tutors. Participation in such a scheme was found to significantly raise student performance and learning. The three interventions also fit neatly into Race’s ‘Ripples on a Pond’ model of learning (2007, 2010) which suggests that learning is underpinned by seven factors:
- a) wanting to learn (intrinsic motivation),
- b) needing to learn (extrinsic motivation),
- c) learning by doing (experience and experimentation),
- d) learning through feedback,
- e) making sense of things (conceptualisation),
- f) learning by teaching others, and
- g) learning by assessing oneself or others.
The three interventions influence student learning through at least factors c)-g).
Evaluation
The approach had a beneficial effect on my end-of-module feedback. In particular, students gave an average score of 4.64/5.00 in response to the statement that the “criteria used in marking were made clear in advance”. However, to provide a more direct evaluation, I conducted an anonymous written survey. In aggregate, 84% of the students agreed that the interventions were beneficial.
Positive comments included the following. In regard to the detailed marking criteria, one student wrote “Very helpful as it helps to understand how marks are allocated, what determines whether you get full marks or not”. For the mock assessment, another wrote “Gives indication at how to improve / where you are at”. However, most positive comments referred to the peer-feedback, with statements such as “Explaining answer to others always improves understanding of topic”, “Peer discussion allows you to share insights”, “Good to see another person’s angle at the question”.
Nevertheless, not all comments were positive. The negative comments focussed on two problems. First, in regard to the limits of peer-feedback, one student wrote “Blind leading the blind”. To minimise this problem, efforts should be made to encourage students to discuss the answers in larger and more varied groups. Second, some comments referred to the limitations of the interventions for students who had yet to start revising and were therefore unable to fully participate: “Haven’t had enough time to study the material. Would rather you go through a perfect answer for longer.” Students should therefore be given due notice and opportunity to revise before implementing this approach.
References
Brown, S., Race, P. and Rust, C. (1995) “Using and Experiencing Assessment” in “Assessment for Learning in Higher Education” P. Knight (Ed), Kogan Page, London. ISBN 9780749415327
Habeshaw, S., Gibbs, G. and Habeshaw, T. (1993) “53 Interesting Ways to Assess Your Students” Third Edition, Cromwell Press, UK. ISBN 9780947885120
O’Donovan, B., Price, M. and Rust, C. (2004) “Know What I Mean? Enhancing Student Understanding of Assessment Standards and Criteria” Teaching in Higher Education, 9, 325-335. https://doi.org/10.1080/1356251042000216642
O’Donovan, B., Price, M. and Rust, C. (2008) “Developing Student Understanding of Assessment Standards: A Nested Hierarchy of Approaches” Teaching in Higher Education, 13, 205-217. https://doi.org/10.1080/13562510801923344
Race, P. (2007) “The Lecturer’s Toolkit: A Practical Guide to Assessment, Learning and Teaching” Third Edition, Routledge, Abingdon. ISBN 9780415403825
Race, P. (2010) “Making Learning Happen – A Guide for Post-Compulsory Education” Second Edition, Sage, London. ISBN 9781849201148
Rust, C., Price, M. and O’Donovan, B. (2003) “Improving Students' Learning by Developing their Understanding of Assessment Criteria and Processes” Assessment & Evaluation in Higher Education, 28, 147-164. https://doi.org/10.1080/02602930301671