
Whatever your professional discipline, you are likely to encounter a basic planning scheme something like the diagram below:

[Diagram: a cyclical planning scheme in which Evaluation feeds back into future Planning]

The details vary, but the principles are constant. For our purposes the most important stage is how Evaluation feeds into future Planning: how prior experience changes our next efforts.

There are of course clear parallels with the experiential learning cycle. But here, the emphasis is not so much on your own learning (important though that may be) as on the practical impact when you repeat the course or module. And that, of course, is the basis of Action Research.

How I learned to stop worrying and love evaluation

"Evaluation" and "assessment" are two terms which overlap and are sometimes used interchangeably. For present purposes, "assessment" is confined to student learning and "evaluation" to course design and teaching. Obviously each informs the other. Note that US usage often refers to "evaluation" where the UK would use "assessment".

Evaluation is the key to the improvement of teaching: if you do not review what you have done and achieved, how can you know how to improve it? Evaluation is to teaching as assessment is to learning.

I used to dislike it almost as much as students (still) hate being assessed! That is rather too sweeping. What I really disliked, and still dislike, is spuriously absolutist evaluation which is about judgement rather than information for development (summative rather than formative). I used to "take it personally", particularly when student evaluations implicitly (and sometimes explicitly) compared me adversely with my colleagues. One effect of this personalised reaction was to inhibit me from comparing feedback with my colleagues. (Even when I got good evaluations for my "party pieces" I treasured them but was not prepared to share them.) It was team teaching which overcame that problem: we naturally shared the evaluation material, and discovered how useful it was to exchange ideas beyond the usual superficial "Oh, it went OK, I think."

It is potentially painful and even humiliating, and in an era of often oppressive "quality assurance" can be a source of considerable stress. The evidence of student evaluations is often taken out of context and used quite inappropriately by distrustful management, to the extent that many teachers (myself included on occasion, if I am honest) fudge it. Nevertheless, it does pay off, so steel yourself and let's get on with it.

Indeed, studies of the exercise of similar kinds of judgement in similarly complex fields, where social and technical issues interact (such as economics), provide plenty of evidence that the more dogmatic and assured judgements are, the more likely they are to be wrong (Tetlock, 2006).*

Evaluation is effective only if it is undertaken with due regard for context and task. Evaluation of teaching for management purposes, intended to contribute to the QA audit trail, tends to rely on standardised and usually quantitative questionnaire- and form-based methods, which are often worse than useless for teaching development purposes ("worse than useless" because when students are involved, questionnaire fatigue often sets in early, and you can easily use up your goodwill on required forms well before you reach the useful, customised stuff).

(On the relationship between quality assurance, practice and values, see this paper, and on the impact of the QA culture, see this one (pdf).)

Evaluation is more of a frame of mind than a set-piece activity. Indeed, the end-of-course or -module evaluation may be one of the least useful or informative methods, just as on-going formative assessment of student work is often more valuable than summative grade-awarding procedures.

Evaluating evaluation

Following on from this, it is clear that not all evaluation is equal. The standard test for Quality Assurance purposes is to ask whether some practice is "fit for purpose" or not, and that of course depends on what the purpose is. I'm not going to list all the possible purposes, just to say that some of them are contradictory. The most elaborate (and expensive) form of evaluation in place in the UK is inspection by the Office for Standards in Education (Ofsted); but it is clear that while it may contribute to raising standards in some respects, it may also stifle innovation and creativity and institutionalise some poor practice. See blogs such as Scenes from the Battleground, Pragmatic Education, and this discussion from the LSE blog, among many other sources. Note that Ofsted's remit does not extend to universities (apart from Education Departments), so most of the discussion is school-focused.

And that debate poses the question—is it worth it? Forget all the arguments about digging plants up to see whether they are growing, or the resources, time and effort which are tied up in evaluation when they could actually be deployed in developing teaching... The key question in evaluating evaluation is, in the vernacular—how much bang for your buck?

When I was being interviewed many years ago for the headship of a semi-autonomous department within a college, part of the process was a session with the rest of the team. They asked me how I saw the job. I said it was to hold the line and keep the rest of the college at bay to let them get on with their work. I had a great time and the unit was successful for about fifteen years with very low staff turnover. I wouldn't have got away with that answer now.

Evaluation procedures are frequently hopelessly inefficient, and yet it often appears impossible to abolish them or even to step back. The reasons have to do with the hierarchical nature of educational institutions, the separation of the burden of generating data from the management process which demands it, the obsession with accountability which weighs on managers, and, all too frequently, the unclosed loop of actually using the material.**

The challenge is to find methods which make the least demands on the time of the students or the teacher—in terms of planning, implementation, processing and interpretation, and at the same time deliver the most useful information. You need to get more out of the system than you put in, and that is where many systems fail.

The key is probably to be found in the precision of the questions asked—questions that students themselves would want to respond to—and their engagement with the process. If they can see that they are contributing to the improvement of their learning experience, they will respond. They may well be asking for things you don't want to offer, such as more explicit guidance on how to undertake the assessments to get the best grades, or which items on the reading list they can safely neglect, but even those requests tell you something about their motivation and learning strategies. You will not simply accede to their demands, but you get an idea about their underlying concerns, and you may be able to discuss them with the class explicitly.

I do remember asking one student about her motivation for the course. She replied, "I want a Mazda MX-5." She was a nurse, and becoming a nurse tutor would be the next step to her dream car...

This is setting up a conversation about learning. That is what all the devices and instruments and methods are for. I'm fortunate; nowadays I work with relatively small groups of articulate mature students who have fairly clear ideas about their motivation for doing the course. So I don't generally have to use explicit evaluation techniques (except for demonstration purposes when it is the topic of the teaching); the evaluation is implicit in the conversation, and that is what I aspire to.


* See for example Gardner, D. (2010) Future Babble. London: Virgin Books; Silver, N. (2012) The Signal and the Noise. London: Penguin; Tetlock, P. (2006) Expert Political Judgement. Princeton, NJ: Princeton University Press.

** See Coffield, 2008

This is an archived copy of Atherton, J. S. (2013) Learning and Teaching [On-line: UK]. Original material by James Atherton; last updated overall 10 February 2013.

This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 Unported License.

"This site is independent and self-funded, although the contribution of the Higher Education Academy to its development via the award of a National Teaching Fellowship, in 2004 has been greatly appreciated."