The Economics Network

Improving economics teaching and learning for over 25 years

Using a Transparent Analytic Marking Rubric in Distance-Learning Economics

Introduction

There is an ongoing debate in the academic literature regarding the advantages and drawbacks of marking rubrics (Panadero and Jonsson, 2020). For large, multi‑tutor modules in distance-learning settings, however, there is a persistent challenge: ensuring that marking is consistent, fair, and developmental for students while remaining practical for tutors at scale. This case study examines the introduction of a transparent analytic marking rubric on a first‑year, introductory economics module delivered at a distance-learning university with a geographically dispersed tutor workforce and diverse student body.

By ‘analytic’ I refer to a rubric that decomposes an assessment into explicit criteria (e.g., economic reasoning, academic integrity), with performance-level descriptors for each criterion (Brookhart, 2018). By ‘transparent’ I mean that the rubric criteria were shared with students in advance, embedded within assessment guidance notes and tutorials, and used directly in feedback. This contrasts with a ‘holistic’ rubric (previously used on the module and not shared with students), which combined all the criteria into a single, overall judgement about the quality of a piece of work.

The motivation was twofold. Pedagogically, the extant literature underlines how transparency scaffolds students’ understanding of quality, strengthens self‑regulation, and leads to improved student performance (see e.g., Jonsson, 2014), particularly for students new to economics or returning to study. For tutors, research has shown (Panadero and Jonsson, 2013) that analytic rubrics promise more consistent application of standards across a large marker pool by making the basis for judgments clearly visible.

To assess the impact of the transparent analytic rubric, I present a before‑and‑after comparison of consecutive cohorts, drawing on assignment results and tutor survey findings. By focusing on both student outcomes and tutor practice, this case study offers evidence-based insight into how transparent analytic rubrics can strengthen consistency and fairness at scale, while clarifying expectations and improving the developmental value of assessment for learners new to economics.

Context and Design

The Level 1 introductory economics module enrolled 429 students in 2024, taught by 26 tutors. Students studied across qualifications (e.g., BA Economics, PPE, Mathematics & Economics, Business Management). The module takes a pluralist approach, introducing economic history, macroeconomics, microeconomics, and international economics.

The rubric was co‑designed with a team of experienced academics/tutors to reflect module learning outcomes. It was embedded in the assessment student notes, incorporated into marking guidelines for tutors, and referenced in pre‑assessment tutorials and feedback forms for students.

The innovation was implemented across the core assessment components (three tutor-marked assignments (TMAs)) in which students demonstrate fundamental skills in introductory economics—applying micro/macro concepts and theories to real-world events in a qualitative essay. Before the analytic rubric, students received only broad guidance aligned to a holistic rubric.

The final design set five equally weighted criteria with performance descriptors by grade:

  1. analysis and application of economic concepts/theories to real events;
  2. understanding of theories and concepts;
  3. presentation and integration of relevant diagrams;
  4. structure and exposition; and
  5. evidence and referencing from article extracts and module materials.

To support consistent application, tutors received guidelines and workshop training before and during the rollout.

Methods and Data

I evaluated the impact of introducing the analytic marking rubric through two complementary strands. First, I analysed module‑wide patterns in average student assignment marks and cohort marking standard deviations before and after the rubric was adopted. I used Welch’s t-tests to assess whether there were significant differences pre- and post-implementation that might indicate the transparent analytic rubric improved student outcomes.

Second, I analysed the findings from a tutor feedback survey at the end of the rubric’s first presentation to capture perceptions of usefulness and to uncover the key benefits and challenges experienced in practice.

Findings

Using the mean scores, standard deviations, and script counts from student assignments, I examined whether student outcomes changed significantly following the introduction of the analytic rubric; these results are presented in Table 1. Specifically, I compared the 2024J cohort—the first to use the transparent analytic rubric—with the immediately preceding cohort, 2023J. I also extended the comparison by evaluating the 2024J outcomes against those from the five earlier September-start presentations.

Table 1: Welch t-test results for impact of analytic rubric on student TMA scores

| Comparison         | TMA   | Mean diff (A–B) | Welch t | df      | p (2‑tailed) | Sig. at 5%        |
|--------------------|-------|-----------------|---------|---------|--------------|-------------------|
| 2024J vs 2023J     | TMA01 | −0.750          | −0.810  | 789     | 0.418        | No                |
| 2024J vs 2023J     | TMA02 | 0.750           | 0.568   | 600     | 0.571        | No                |
| 2024J vs 2023J     | TMA03 | −0.390          | n/a     | n/a     | n/a          | Insufficient data |
| 2024J vs All prior | TMA01 | −0.710          | −0.935  | 613.281 | 0.350        | No                |
| 2024J vs All prior | TMA02 | 2.17            | 2.299   | 1442    | 0.022        | Yes               |
| 2024J vs All prior | TMA03 | 14.576          | n/a     | n/a     | n/a          | Insufficient data |

Calculated using graphpad.com/quickcalcs/ttest2/

Using the Welch two‑sample t‑tests (note 1) on the summary TMA statistics, I found no significant differences between 2024J and 2023J for TMA01 (76.02 vs 76.77; p = 0.418) or TMA02 (74.07 vs 73.32; p = 0.571). Comparing 2024J with the five previous presentations, TMA01 again showed no significant difference (76.02 vs 76.73; p = 0.350); TMA02, however, was 2.17 points higher in 2024J, a significant difference (74.07 vs 71.90; p = 0.022). TMA03 could not be tested because standard deviation data were not yet available for 2024J.
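The summary-statistics calculation above can be reproduced without the online calculator. The sketch below computes Welch’s t statistic and the Welch–Satterthwaite degrees of freedom from cohort means, standard deviations, and script counts; note that the standard deviations and sample sizes used here are invented for illustration, since Table 1 reports only the derived statistics.

```python
import math

def welch_t_from_summary(m1, s1, n1, m2, s2, n2):
    """Welch's two-sample t statistic and Welch-Satterthwaite degrees of
    freedom, computed from summary statistics (mean, SD, sample size)."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2      # squared standard errors
    t = (m1 - m2) / math.sqrt(v1 + v2)       # Welch's t statistic
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Hypothetical illustration: 2024J TMA02 mean (74.07) vs all-prior mean
# (71.90); the SDs (12.0, 13.0) and counts (300, 1200) are NOT from Table 1.
t, df = welch_t_from_summary(74.07, 12.0, 300, 71.90, 13.0, 1200)
```

The two-tailed p-value is then obtained from the t distribution with `df` degrees of freedom (e.g. `scipy.stats.t.sf(abs(t), df) * 2`); a summary-statistics calculator such as the GraphPad page cited above performs the same lookup.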

The results of the tutor feedback survey on the practical use of the analytic marking rubric are summarised in Table 2 below.

Table 2: Main findings of tutor rubric feedback survey (N = 16 tutors)

| Theme                               | Headline finding                                           | Evidence (count, % of tutors)                      |
|-------------------------------------|------------------------------------------------------------|----------------------------------------------------|
| Adoption                            | Tutors using the rubric all/most of the time               | 16/16 (100.0%)                                     |
| Ease of use                         | Report rubric is extremely/somewhat easy to use            | 14/16 (87.5%)                                      |
| Overall usefulness                  | Rate rubric extremely/somewhat useful overall              | 14/16 (87.5%)                                      |
| Feedback                            | Helps provide feedback (extremely/somewhat useful)         | 11/16 (68.8%)                                      |
| Consistency                         | Increases marking consistency (extremely/somewhat useful)  | 9/16 (56.2%)                                       |
| Time-saving                         | Decreases marking time (extremely/somewhat useful)         | 4/16 (25.0%)                                       |
| Student benefit (tutor’s perception)| Tutors say students find rubric useful                     | Yes: 4/16 (25.0%) | Not sure: 11/16 (68.8%)        |
| Further guidance                    | Want more guidance/training on rubric use                  | Yes: 1/16 (6.2%) | No: 14/16 (87.5%)               |
| Scaling to other modules            | Support for similar rubrics in other economics modules     | Yes: 7/16 (43.8%) | Maybe: 7/16 (43.8%) | No: 1/16 (6.2%) |

Of the 16 tutors who responded (a 62% response rate), most found the rubric both easy to use (87.5% extremely/somewhat easy) and useful overall (87.5% extremely/somewhat useful).

Where it appears to help most is in making feedback easier (68.8% extremely/somewhat useful) and supporting consistency across markers (56.2% extremely/somewhat useful). Comments included: “It helps increase the consistency of my marking” and “[It’s] [u]seful for student feedback and standardisation.”

Another tutor said that there was a positive impact on the developmental feedback they provided to students, “Generally I think the rubric works well, helping me take a more structured approach to analysing the essay and helping me provide detailed guidance about overall what the student should focus on to improve their grade.” However, some tutors were unsure about the impact of the rubric on student outcomes: “I do have a concern that following the guidance is causing [me] to mark more harshly, as there have been examples where I have followed the rubric but it has produced a total mark lower than expected, and my average grade for the group is lower than normal probably because of the essay marks.”

In terms of practical use, it seemed that the primary challenge is efficiency. Only 25% of tutors reported time savings, perhaps an unsurprising result for first-time implementation, while several highlighted extra manual steps and limited integration with existing teaching software systems. As one tutor put it, “the use of the rubrics has increased marking time,” and another noted that platforms like Blackboard already have embedded rubrics, which could streamline the workflow.

Tutors themselves were uncertain about student‑perceived benefit (68.8% ‘not sure’; 25.0% ‘yes’). Appetite to see the rubric introduced on other economics modules was broadly positive (43.8% ‘yes’; 43.8% ‘maybe’; 6.2% ‘no’), with some tutors mentioning that, although they did not tutor on other modules, consistency across modules would benefit students and tutors alike. Only a small minority said they needed further training (6.2%).

Conclusion and recommendations

Preliminary analysis of the impact of the transparent analytic marking rubric indicates that it led to a modest uplift in student scores. This is notable given tutors’ perception that they were marking more stringently, suggesting that improvements in student performance may have offset any stricter and more nuanced application of assessment criteria. Gains were most visible for TMA02, the assessment point historically associated with the greatest student drop-out on the course, indicating that clearer expectations at this stage could have positive implications for retention. This study therefore contributes to a growing body of empirical evidence indicating that transparent rubrics have a positive impact on student outcomes (e.g. Greenberg, 2015; Lipnevich, McCallen, Miles, & Smith, 2014).

These early findings should be treated as provisional and further analysis is warranted to assess the full effect across presentations and cohorts. Nonetheless, the tutor survey points to the perceived benefits for marker consistency and for providing developmental, criterion‑referenced feedback, both of which are central to supporting student learning and progression.

To request a copy of the economics analytic marking rubric, please email Emilie Rutledge at emilie.rutledge at open.ac.uk.

References

Brookhart S. M. (2018) Appropriate Criteria: Key to Effective Rubrics. Frontiers in Education. 3:22. https://doi.org/10.3389/feduc.2018.00022

Greenberg, K. P. (2015). Rubric use in formative assessment: A detailed behavioral rubric helps students improve their scientific writing skills. Teaching of Psychology, 42 (3), 211–217. https://doi.org/10.1177/0098628315587618

Jonsson, A. (2014). Rubrics as a way of providing transparency in assessment. Assessment & Evaluation in Higher Education, 39(7), 840–852. https://doi.org/10.1080/02602938.2013.875117

Lipnevich, A. A., McCallen, L. N., Miles, K. P., & Smith, J. K. (2014). Mind the gap! Students' use of exemplars and detailed rubrics as formative assessment. Instructional Science, 42(4), 539–559. https://doi.org/10.1007/s11251-013-9299-9

Panadero, E., & Jonsson, A. (2013). The use of scoring rubrics for formative assessment purposes revisited: A review. Educational Research Review, 9, 129–144. https://doi.org/10.1016/j.edurev.2013.01.002

Panadero, E., & Jonsson, A. (2020). A critical review of the arguments against the use of rubrics. Educational Research Review, 30, 100329. https://doi.org/10.1016/j.edurev.2020.100329

Notes

^ Welch’s t-test is robust when the two samples have unequal sizes and unequal standard deviations.
