
Alternative Forms of Formative and Summative Assessment

1. Introduction

The nature of assessment is central to everything that students ‘do’ – it governs how they study and learn.

1.1 Summary of the chapter

The primary aim of this chapter is to provide economics lecturers and tutors with practical suggestions on ways of improving the process of assessment. In particular, the emphasis is on assessment strategies that promote a wide range of transferable skills – in light of the increasing pressure on departments to develop skills that are more widely relevant to the workplace.

It is designed to be easily accessible – alternative approaches to assessment are illustrated primarily by examples of current practice. For those readers who do not have the time to read the chapter in full, the key ideas are summarised in Table 2. Throughout, considerable attention is given to the resource implications of introducing innovative strategies of assessment and, in particular, to the implications of innovations for staff time. Suggestions are made for how lecturers can introduce innovations incrementally, given the resources available.

A number of general principles emerge that should help lecturers consider how to approach and design assessment; these are summarised below.

The first, and probably most important, step in designing assessment is to identify and evaluate learning objectives: that is, the key skills and knowledge that educators expect students to acquire from particular modules. Section 1.2 provides a short exercise that should help with this.

Identifying the learning objectives is not as straightforward as it may appear. As is well known, higher education serves an increasingly heterogeneous mix of students, and the teaching and learning goals of degree programmes must reflect this heterogeneity.

A key issue for lecturers in economics relates to the relative importance ascribed to ‘knowledge’ of economics (which is traditionally perceived as the ‘core’ learning objective) as opposed to transferable skills, such as the ability to communicate, negotiate, make effective use of information technology, etc. The latter, whilst more generic in nature, are seen as being increasingly relevant to economics graduates. Throughout this chapter, a mix of assessment modes that promote the widest range of desirable goals will be highlighted.

This said, it must be recognised that there are trade-offs regarding learning objectives, with particular assessment modes promoting certain learning goals but doing less well in developing other goals. Box 1 highlights the relative strengths and weaknesses of particular modes of assessment.[1]

The presence of such trade-offs necessitates that lecturers and, more importantly, departments and institutions are clear about their priorities (and that these are communicated to current and prospective students).

Throughout, emphasis is placed upon the importance of departments and institutions in taking a lead role in evaluating and designing assessment practice. This text is designed to be useful to individual lecturers and tutors and contains many ‘tips’ that should help improve practice.

However, the limited scope and range of individual modules and the heterogeneity of their goals suggest that optimal assessment practice must be co-ordinated at a departmental level, and this is a key message of the discussion.[2] To put it another way, if departments are serious about widening the range of skills students acquire from their degree programmes, the range and type of modules provided must be designed with that in mind.

The structure of this chapter is as follows:

  • Section 1.2 contains an exercise that readers should find useful. It is designed to make educators think carefully about what they are trying to achieve in their teaching, and whether the existing system of assessment best promotes those objectives. This section also presents a list of desirable teaching goals and considers to what extent these goals are supported by a traditional approach to assessment in economics. There is a brief discussion of the importance of increasing pressure on departments to promote a wider range of transferable skills.
  • In section 1.3, there is a review of the purpose of assessment, considering the two key forms – summative and formative.
  • Section 2 contains the core material. It takes a number of alternative assessment strategies and discusses each in turn, revealing their strengths and weaknesses and relevance in an economics context. The intention is that the style and content are practical, and, in most cases, ideas are illustrated with examples. The section builds on the excellent generic review of alternative assessment strategies by Brown et al. (1994).
  • In section 3, there is discussion of some important issues related to assessment – marking, feedback, plagiarism and large group assessment.
  • Section 4 contains a generic bibliography for further reading.

Top Tips 1: Thinking about how best to assess students

  • Identify the learning objectives.
    • What are students expected to gain from the module?
    • What are students expecting to gain from the module?
  • Evaluate which learning objectives matter more than others and tailor assessment procedures to meet these goals.
  • Consider implementing innovations initially on a small scale and develop over subsequent years in the light of experience gained and mistakes made.
  • Diversify assessment procedures.
    • This gives much greater opportunity for students to demonstrate their particular skills.
  • Best practice in assessment is co-ordinated at a departmental level.
    • Departments should have an assessment strategy.

1.2 Evaluating learning objectives

‘…assessors will wish to judge the appropriateness and adequacy of provision (of assessment strategies) against the stated course aims and objectives’. (HEFCE Circular 3/93)

An exercise in identifying learning objectives and assessing ‘your’ assessment procedures

Take a sheet of paper and write down what students would be expected to obtain from the module. What are the anticipated learning outcomes, and what is the relative importance of each? Having done that, consider the extent to which existing modes of assessment promote the desired learning outcomes. Think carefully about what types of skill (and knowledge) are being tested in each assessment, and the areas of student activity that are promoted by its nature and design. On the sheet, place a tick next to those learning outcomes you believe are adequately addressed, and a cross next to those that are not. Finally, take a look at the list of learning outcomes in Table 1, and consider whether the methods of assessment used are consistent with these.

Table 1 Learning objectives

For each skill area below, note whether your assessment methods promote this skill (Yes / No / A little / Other).

  • Knowledge of economic principles.
  • Analytical skills – problem solving under pressure; breadth and depth of understanding of complex problems.
  • Written communication skills – writing well-presented and structured reports and essays.
  • Interpersonal skills – ability to work with others, demonstrating management and leadership skills.
  • IT skills – skill in using basic computer packages (word processing, spreadsheets, PowerPoint); using the web to research information.
  • Independence – autonomy, self-reliance, self-motivation.
  • Flexibility and resourcefulness – ability to respond to unusual and unpredictable circumstances.
  • Strategic thinking skills – ability to determine own strategy and direction; self-knowledge and self-monitoring of effectiveness.
  • Research skills – finding out, using libraries, finding sources of information.
  • Organisation skills – managing time and deadlines; organising material.

Hopefully, this has been a useful exercise.[3] It gives instructors an opportunity to reflect on learning methods in terms of goals and effectiveness. The purpose is to reveal any discrepancies between what should be achieved in learning delivery and the kinds of activity and learning process actually promoted by the assessment procedures employed. No doubt, individual lists of goals look very different from Table 1, and this should raise some interesting questions. The same table emphasises various transferable skills, and this point is taken up later in section 1.

Evaluating a traditional approach to assessment

Brown and Glasner (1999) have found that 90 per cent of a typical British degree depends upon unseen time-constrained written examinations, and tutor-marked essays and reports. The typical approach to assessment in economics may well look a little like this.

Students are expected to prepare answers to a series of ‘shortish’ conceptual questions that are subsequently discussed in tutorials in an informal way under the leadership of the tutor, with the implicit expectation that the tutor provides model answers.[4] Midway through the module, students submit an essay from a list of broad questions. The majority of the final mark comes from an unseen examination, usually taken at the end of the module. Students are normally asked to answer three or four questions of a fairly broad nature but closely related to the material of the lecture course and the principal textbook. Typically, answers are in essay form, each of three to four pages in length.

How successful is this approach in promoting learning outcomes?

The traditional approach promotes a number of learning outcomes. The unseen examination requires students to respond to pressure and time constraints. They develop strategic capacity in respect of the topics studied and the questions answered, and selectivity in the material presented.

In other respects, the traditional approach fares less well. Box 1 lists the broad learning goals that are not adequately promoted by this means of assessment – readers may well disagree.

The list is long and reflects a growing and widespread criticism of assessment methods in HE. A broad criticism of assessment practice is that it is too narrow in its goals. Other authors argue that current assessment practice, rather than promoting learning, is in fact injurious to it (see, for example, Boud, 1992; Atkins et al., 1993; Erwin and Knight, 1995).

A core objective of HE is the development of analytical or ‘thinking’ skills. It is expected that graduates should be able to deal with complex problems in a logical manner, and be able to communicate and present solutions in a variety of ways. However, there is increasing and disturbing evidence that students do not engage in the deep learning process that promotes these kinds of skill, engaging in surface learning and regurgitation of memorised material in a disorderly way (Entwistle, 1981; Gibbs, 1992; Boud, 1992).

Whilst students are encouraged to be self-reliant and self-motivating, assessments rely primarily on appreciation of core material available in key textbooks, providing inadequate incentives for initiative in identifying alternative sources of material and for developing research-type skills. The more entrepreneurial skills, whilst very difficult to promote, are not addressed. These require students to have much greater control over goals than is usually the case, and a greater flexibility in the method of assessment.

The final area of concern relates to students’ motivation (although this is not a learning outcome). It is clear that the motivation of many students is very narrowly defined in terms of exam performance, whilst there is often little evidence of an appreciation or interest in what they are learning or why they are learning it. Educators must consider the relationship between assessment, the issue of motivation and interest in learning. Any approach that emphasises a wider range of skills and actively engages students in different kinds of activity is likely to generate greater motivation.

Box 1 The consistency of the traditional approach to assessment and identified learning goals

Strengths of a traditional approach to assessment
  • Strategic thinking.
  • Responding to pressure and time constraints.
  • Encouraging a broad knowledge base.
Weaknesses of a traditional approach to assessment
  • Thinking skills – identifying and solving complex problems.
  • Presentation and oral skills – presenting complex problems and solutions orally in a comprehensible way, confidence building, use of PowerPoint, responding to unknown questions orally.
  • Interpersonal skills – communicating with colleagues, negotiating, developing leadership skills and managing interpersonal problems.
  • Research skills – finding unknown sources of information, research on the web and using libraries.
  • Entrepreneurial skills – identifying personal goals and the means to achieve them.
  • IT – basic skills, such as familiarity with core software and use of the internet.
  • Self-motivation and assessment – understanding personal motivation and objectives, and assessing progress achieved.

Transferable skills and government policy

Underlying the debate on the effectiveness of assessment strategies is a more fundamental debate on the purpose of HE. Brown et al. (1994) make the following observation:

There is increasing acceptance that it [assessment] is at least in part to do with preparation for later life and work beyond academia. This recognition has brought with it a gathering momentum for a shift in emphasis from the acquisition of knowledge to the acquisition of skills, from product to process, from grading to competence.

This shift in emphasis towards transferable skills is strongly endorsed by the government and HEFCE. It is motivated in part by surveys of employer dissatisfaction with graduates’ skills, particularly regarding negotiation, decision making and leadership.

Pressure on instructors to diversify learning aims and assessment procedures also comes from students themselves in a competitive labour market. The increasing number of students entering HE implies a greater heterogeneity of backgrounds, student objectives and modes of participation (official and unofficial). To demonstrate students’ abilities, and develop their interests, educators are now obliged to offer richer and increasingly diverse modules.

These issues are relevant to the teaching and learning of economics. It is clear that the large majority of students taking economics modules do not go on to become practising economists – future careers are in finance-related professions and, in particular, accountancy. Arguably, these students have less demand for knowledge of economics and its methodology than for work-related skills.

The issues here raise radical questions about the respective roles of secondary and tertiary education and the structure of educational institutions, departments and degree programmes. What is clear, however, is that instructors must diversify their assessment procedures; this chapter is designed to aid them with ideas and tips on ways of doing so.

Readers who are only interested in practical ideas may wish to skip section 1.3 and go directly to the main content (section 2).

1.3 The purpose of assessment – a generic review

One of the primary purposes of assessment is to be summative. In its summative role, the purpose of assessment is to judge the quality and characteristics of the student and summarise these in a clear and widely acceptable format. Traditionally, the principal mechanism for summative assessment is the end-of-module examination. Summative assessment is assumed to help employers by providing ‘costless’ information on the productive potential of job applicants. It is also a mechanism for selecting students for post-compulsory education, and may be a factor in the reputation and financial security of institutions in higher education. Students care most about the results of summative assessment, as these impact on their employability and prospective earnings. Box 2 summarises the role and purpose of summative assessment.[5]

Box 2 Purpose of summative assessment

  • To pass or fail a student.
  • To grade or rank a student.
  • To allow progress to further study.
  • To assure suitability for work.
  • To predict success in future study and work.
  • To signal employability and selection for employment.

Assessment also has a formative function (Box 3). In this role, assessment is intimately linked with students’ learning processes, helping to guide them in their studies, motivating them, providing feedback on areas of learning requiring further work, and generally promoting the desired learning outcome. Whilst most assessment is both summative and formative, it is argued that the summative function increasingly predominates in a way that adversely affects student learning.

Box 3 Purpose of formative assessment

  • To provide feedback to students.
  • To motivate students.
  • To diagnose students’ strengths and weaknesses.
  • To help students to develop self-awareness.

Assessment also contributes to evaluating the strengths and weaknesses of modules and improving the quality of learning delivery (Box 4).

Box 4 Purpose of assessment with respect to quality assurance

  • To provide feedback to lecturers on student learning.
  • To evaluate a module’s strengths and weaknesses.
  • To improve teaching.
  • To ensure the module is creditworthy.
  • To monitor standards over time.

2. Main content

2.1 Summary

Table 2 provides a summary list of the alternative modes of assessment discussed in the main content. Key strengths and weaknesses are detailed briefly.

Table 2 Alternative assessment techniques and their relative merits

Method of assessment – meaning and skill areas developed

  • Group assessment – This develops interpersonal skills and may also develop oral skills and research skills (if combined, for example, with a project).
  • Self-assessment – This obliges students more actively and formally to evaluate themselves and may develop self-awareness and a better understanding of learning outcomes.
  • Peer assessment – By overseeing and evaluating other students’ work, the process of peer assessment develops heightened awareness of what is expected of students in their learning.
  • Unseen examination – This is the ‘traditional’ approach. It tests the individual knowledge base, but questions are often relatively predictable and, in assessment, it is difficult to distinguish between surface learning and deep learning.
  • Testing skills instead of knowledge – It can be useful to test students on questions relating to material with which they have no familiarity, often by creating hypothetical scenarios. This can test true student ability and avoids problems of rote- and surface-learning.
  • Coursework essays – A relatively traditional approach that allows students to explore a topic in greater depth but can be open to plagiarism. It can also be fairly time consuming and may detract from other areas of the module.
  • Oral examination – With an oral exam, it is possible to ascertain students’ knowledge and skills. It obliges a much deeper and more extensive learning experience, and develops oral and presentational skills.
  • Projects – These may develop a wide range of expertise, including research, IT and organisational skills. Marking can be difficult, so consider combining a project with an oral presentation.
  • Presentations – These test and develop important oral communication and IT skills, but can prove dull and unpopular with students who do not want to listen to their peers but want instead to be taught by the tutor.
  • Multiple choice – Useful for self-assessment and easy to mark. Difficulties lie in designing questions and testing depth of analytical understanding.
  • Portfolio – This contains great potential for developing and demonstrating transferable skills as an ongoing process throughout the degree programme.
  • Computer-aided – Computers are usually used with multiple-choice questions. Creating questions is time consuming, but marking is very fast and accurate. The challenge is to test the depth of learning.
  • Literature reviews – These are popular at later levels of degree programmes, allowing students to explore a particular topic in considerable depth. They can also develop a wide range of useful study and research skills.

2.2 Group assessment

Employers are increasingly looking for the ability to work in and direct a team as a key graduate skill. Flexible work patterns and increased dependence on central IT systems are driving this agenda. Nevertheless, assessment in HE continues to relate to activities that students undertake individually. This is perhaps unsurprising, since academics tend not to have much recent direct experience of working in industry and commerce.

It is relatively easy to visualise the benefits of group projects. Apart from the obvious one (enhancement of interpersonal skills), these activities can be readily combined with other key learning objectives. Groups may, for example, prepare projects and present results orally, in the process developing research and oral skills.

The implications of group work for staff time are difficult to assess. There are potential savings in marking, as a project of, say, three individuals may be less time-consuming than three individual projects. However, much depends on how students disseminate their work, and this approach to assessment needs a high(er) level of supervision.

The major challenge in implementing assessment by group work is how to supervise individual contributions and award grades that fairly represent individual effort. In group work, individuals may have some incentive to free-ride and better students in poorly motivated groups may be discouraged. Top Tips 2 suggests some examples of group work in economics and gives hints on how to resolve the problems.

Top Tips 2: Assessing by group work

  • Group work is not a substitute for deep learning.
  • Insist on a group plan for project work, detailing individual responsibilities.
  • Consider random selection of group make-up.
  • Award group marks plus an individual component.
  • It is often appropriate for the group to present their project and results orally.
  • Group work may not be feasible for courses with large numbers of students or core courses.

For example, consider random selection of individuals to groups. In this way, students must develop working relationships with colleagues who are unfamiliar to them and whom they may not actually like, in the process developing essential interpersonal skills. Typically, the group task is a type of project on which students work over the duration of the module and which involves some research. As with all kinds of assessment, one has to be careful to define the tasks and expected outcomes in a way that promotes deep learning.

Groups are no less likely to engage in superficial learning than individuals and there is always a danger that instructors focus too heavily on the dynamics of the group and the development of interpersonal skills at the expense of deep learning.[6]

Consider requiring students to submit a short report that discusses their initial meetings, attendance, how the project will proceed and who is to take responsibility for what.

Students are likely to meet anyway, but the written report will motivate the group to organise and discuss amongst themselves the best way forward, and it gives some basis for measuring individual contributions.

As with all forms of assessment, the criteria for assessment should be explicit. Often, it is appropriate for groups to disseminate their work in the form of a group presentation to which all individuals contribute, although it may be better to request a supplementary short report, and thereafter communicate to students what is expected in a presentation.

Apart from enhancing various skills related to presentation, oral delivery can make it easier to evaluate and grade individual contributions and the depth of learning involved. Careful consideration should be given to the allocation of marks, allowing sufficient flexibility to reward individual efforts adequately. One way is to allocate individuals both a group and an individual mark, although the proportion of the final mark should weight the group performance more heavily so as to encourage a collective effort.

As with all innovations, consideration must be given to whether group work complements other core learning objectives or whether it draws scarce student and staff resources from other key teaching and learning processes. On the positive side, group work may be combined with other valuable activities, such as a project or a piece of research, and may culminate in a presentation. This may be a valuable learning exercise in its own right. On the other hand, there is a danger that group work induces students to specialise too heavily on one area or topic at the expense of other aspects of the module. Here are two ways of limiting the extent of this problem:

  • Where the assessment procedure also involves a final unseen examination, consider incorporating a question in the exam that is related to the project work.
  • Consider reducing the number of tutorials to allow students additional time to prepare group work.

The resource implications may or may not involve additional staff obligations. Group work is probably not appropriate for large modules delivered by only one tutor. On the other hand, there are likely to be many scale economies resulting from large-group assessment by a team of tutors.

For example, there are innovative ways of allocating individual marks that take account of the group’s inside knowledge of the relative contributions of each individual. This can work a lot better than may be expected. The group is awarded a group mark that they must divide amongst themselves. For example, a group of four students with 240 marks may choose to share these equally – 60 marks each. Alternatively, they may allocate more marks to the strongest contributions.

Perhaps surprisingly, there is evidence to suggest that students are willing to allocate marks in a way that reflects their relative engagement in the project (even if it is to their detriment). It is useful for tutors to insist on a written report explaining the rationale for the marks that have been allocated.
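To make the arithmetic concrete, the sketch below shows the two mechanisms described above: combining a group mark with an individual component using a heavier group weighting, and dividing a peer-agreed pool of marks among group members. It is a minimal illustration of the bookkeeping only (in Python, with hypothetical marks, names and weightings), not a prescribed scheme.

```python
"""Illustrative sketch: two ways of turning group work into individual marks.
All marks, shares and weightings below are hypothetical examples."""


def combined_mark(group_mark, individual_mark, group_weight=0.7):
    """Weight the group performance more heavily (e.g. 70:30) to encourage
    collective effort."""
    return group_weight * group_mark + (1 - group_weight) * individual_mark


def divide_pool(pool, shares):
    """Divide a pool of group marks according to shares the group has agreed
    amongst themselves (shares need not be equal)."""
    total = sum(shares.values())
    return {name: pool * share / total for name, share in shares.items()}


if __name__ == "__main__":
    # A group mark of 65 and an individual mark of 58, weighted 70:30.
    print(round(combined_mark(65, 58), 1))  # 62.9

    # A pool of 240 marks for four students: equal shares give 60 each;
    # unequal shares reward the strongest contributions.
    print(divide_pool(240, {"A": 1, "B": 1, "C": 1, "D": 1}))
    print(divide_pool(240, {"A": 1.2, "B": 1.0, "C": 1.0, "D": 0.8}))
```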

Some tips for allocating group-marked projects are contained in Top Tips 3.

Top Tips 3: Allocating individual marks for group work

  • Allocate to individuals both a group and an individual mark. The final mark should weight the group performance more heavily to encourage collective effort (weightings commonly used are 60:40 and 70:30).
  • Award the group a group mark that they must divide amongst themselves as they see fit – they should have to explain their decision.
  • Award an equal mark to all members of the group, then add further individual tasks for each member of the group. This raises the problem of finding sufficient tasks and allocating them ‘equivalently’.
  • The work submitted should be presented by all members of the group in a ‘live’ situation, so that any ‘free-riding’ may be discovered.
  • Include a question in the examination based on group work.

2.3 Self- and peer assessment

The basic idea behind self- and peer assessment is to provide mechanisms that help students to evaluate themselves and their work more critically. An ability to assess one’s own strengths and weaknesses is an essential life-skill that facilitates personal development whether in study or in the workplace.

Note that, in the following suggestions, students are not involved in final marking. There is always a danger that, where the assessment does not contribute to their final mark, students may not take it as seriously as desired.

The rationale for peer assessment has been summarised by Boud (1986): ‘Students have an opportunity to observe their peers throughout the learning process and often have a more detailed knowledge of the work of others than do their teachers.’ Brown and Dove (1991) also argue that well-designed peer and self-assessment can produce the advantages listed in Box 5.

Box 5: Rationale for peer assessment

  • It encourages student ownership of their personal learning.
  • It motivates and encourages active participation in learning.
  • It makes assessment a shared activity, by challenging the proposition that the lecturer is the best person to assess the student’s inputs and outputs.
  • It promotes a genuine interaction of ideas.
  • It stimulates more directed and effective learning, whilst encouraging a more autonomous approach.
  • It develops transferable personal skills.

More recent research has provided some support for these arguments (Searby and Ewers, 1997; MacAlpine, 1999), and these ideas are summarised in Top Tips 4.

Top Tips 4: Self- and peer assessment

  • Good self- and peer assessment forms do not ask questions that allow the student to hide from honest appraisal. Avoid questions that elicit ‘yes/no’ answers, such as ‘Is this a good piece of work?’, or questions that are threatening, such as ‘How many hours did you put into this piece of work?’
  • Good forms ask questions that force self-evaluation.
  • Do not give students the same questionnaire time after time.

A common approach is to provide students with a self-assessment form. This contains a series of questions and issues that encourage students to evaluate critically the quality of their work, and it should correspond closely with the criteria that the learning facilitator will use when assessing the work. For example, self-assessment forms may ask the student whether the work has a clear structure; whether it has reviewed the existing literature adequately; whether references are properly recorded, and so on.

Box 6 details some general questions that might be put on an assessment form. Other questions should be specific to the particular task at hand. Asking themselves these questions and submitting substantive written answers requires students to supply a more honest appraisal that should feed back into modifying and improving their work. The completed form is not of great value in itself – it is the process that it induces that is important.

Box 6: Some general questions and prompts to use in a self-assessment form

  • Describe briefly the structure of this work
  • What is the principal argument of this essay?
  • What do you think is a fair score or grade for the work you submitted?
  • What was the thing you think you did best in this assignment?
  • What was the thing that you think you did least well in this assignment?
  • What did you find the hardest part of this assignment?
  • What was the most important thing you learned in doing this assignment?
  • What references did you use most in doing this work?

In peer assessment, coursework is usually exchanged between students who use similar forms to comment on the work of their colleagues. Lecturers may then ask for a supplementary submission that reports on how students have acted upon the comments of their peers. Note that student peer assessment should be anonymous, with assessors randomly chosen so that friendship cannot influence the process.

There is a danger that self- and peer assessment degenerates into a superficial process, since much depends upon whether students understand the purpose and their willingness to participate. It is easy to imagine students completing self-assessment forms simply through obligation, having completed the coursework and with no intention of revising the work in light of any weaknesses they uncover. However, equally, self-assessment can be a support to students – well-thought-out forms help to clarify what is expected of students and can form a natural basis for educators’ final comments and feedback.

As regards resource costs, self-assessment is unlikely to save time. It takes time to initiate the process and design a good-quality assessment form. It also takes time to educate the students to complete it well, and to give feedback after they have completed and submitted their assessment form. Of course, it is also time-consuming for students who may already be overloaded with assessment. The benefits of the process to the educator should take the form of better student performance in the final examination.[7]

2.4 Assessing alternative types of activity in economics

Oral examination

Many student activities that are traditionally examined through written reports or essays may alternatively be examined orally in the form of a viva. Potentially, this approach can give a much clearer idea of the depth of students’ understanding. There is no scope for plagiarism, and little scope for regurgitation of material, at least in carefully managed interviews. There are also benefits in terms of development of interpersonal skills and interview technique.

The time costs should not be severe – there is no marking, although the assessor must see each student individually and this is a logistical problem, especially for large groups. One has to think carefully about the questions asked (with different questions to prevent student collusion). Oral examination can be a risky approach, since validation by external review may be complex and there is likely to be some student resistance. Certainly, an assessor will have to write reports on each student’s performance, detailing the questions asked and the basis for assessment.

The approach is definitely worth thinking about, however, and should perhaps be tried out on a small scale as a complement to a more traditional assessment such as an exam – say, by allocating 30 per cent of the final mark to the oral examination and reducing the requirements of the exam accordingly.

As another example, suppose students are told to attend a 15-minute viva in which they will be examined on the economics of the East Asian financial crisis of 1997. The lecturer has prepared a bank of questions to ask students that relate not only to students’ knowledge, but also to the process they undertook in preparing for the examination. Some questions can be narrow, to test their basic knowledge, whilst others can be broader and more searching, viz.:

  • ‘What countries were involved and which were most affected?’
  • ‘Do you have data to illustrate your answer?’
  • ‘What model would you use to show how exchange rate depreciation, for example, affects the macroeconomy? Talk us through the dynamics.’
  • ‘Could such a crisis happen again?’

If well prepared, the oral examination allows the instructor to investigate students’ knowledge, skills and commitment in a way that is often impossible in written unseen examinations. It also requires them to read and research much more extensively. On the negative side, there is the ‘stress factor’ of undergoing live interrogation.

With regard to resource costs, this approach need not involve much additional staff time, since it saves on the laborious marking of written scripts. It is, of course, time-consuming to initiate, although future years should benefit as both experience and the question bank are built up.

Top Tips 5: Oral examinations

  • As economics makes heavy use of graphical analysis, in many cases it is a good idea to give students access to a whiteboard or PC during the oral examination.
  • Oral examinations can be an intimidating experience for everyone. One might divide it into two parts, and, in the first part, test students on questions that they have already seen and prepared for – this makes the process more reassuring, less arbitrary and easier to verify. In the second part, students respond to unseen questions.

Testing skills instead of knowledge

One of the problems with unseen exams is that questions are so closely related to the material covered in the course and in the textbook that students tend to memorise and regurgitate without any deep understanding. An alternative approach involves testing students with questions relating to issues or material that is not familiar, but which does require the kind of approach to problem solving that is developed in the module. In this way, the assessor is testing the learning process developed in the course rather than the knowledge provided.

As an example, students are asked to answer ten questions in 40 minutes. Each question is worth 3 marks. The assessor reserves the right to award negative scores for logically wrong answers and bonus points for excellent answers.

Box 7 An example of skills testing

Imagine you are an economics adviser starting an assignment in an unfamiliar society. Please summarise your approach to the following issues in up to three points (the points do not have to be in any order of significance).

Here are four of the ten exemplar questions.

  • What reasons would you stress as to why changes in the money wage rate in a remote rural village with a large subsistence sector might not be a good indicator of changes in the real standard of living?
  • What variables would you use to assess the degree of segmentation in the urban labour market?
  • Why might small borrowers have problems obtaining credit for physical capital investment?
  • What determines how much of a household’s savings are held in the form of money?

Presentations

Presentations are a well-established method of assessment. They help to develop skills required in the workplace, as well as student confidence, oral skills and the use of relevant software. However, the use of presentations in HE is often felt to be inadequate: students may regurgitate material without properly engaging the audience, and may invest a lot of time in their own presentation at the expense of other work. Moreover, students seldom prepare for the topics covered by their colleagues’ presentations. Top Tips 6 contains a couple of useful tips to improve the efficacy of presentations.

Top Tips 6: Presentations

  • Make the presentation part of the module mark – this creates a real incentive.
  • As with all methods of assessment, ensure the student knows the broad criteria on which they will be assessed.

Projects

With projects, students are often free to choose the topic, title and methodology to be studied. Projects are useful in developing independence, organisational skills, resourcefulness and a sense of ownership over work, and may induce a deeper level of learning.

On the other hand, they may be unpopular and, where the project is an option, take-up may be low. Students may believe that it involves a greater amount of work than a standard module and/or that there will be insufficient supervision. Some tips for improving the take-up rate and usefulness of project work are listed in Top Tips 7.

Top Tips 7: Projects

  • Make the project compulsory.
  • Allow project work to be done as group work.
  • Require students to submit work to supervisors in stages – this should include a plan of work, a brief literature review with references, and subsequently a first draft.
  • Projects may be examined by oral examination as well as written report (the benefits of this were discussed in the section ‘Oral examination’ above).

Literature/article review

Consider asking students to prepare a literature review on a given topic. This develops a number of research-type skills, encouraging students to source material, use search engines, assimilate large amounts of material and select the most important.

While students are often expected to review literature as a matter of course (for example, as part of an essay submission), they normally underperform, being overly reliant on key ideas presented to them in core textbooks. This approach encourages them to review the literature properly, makes plagiarism more difficult and can be quite popular, since the process of searching and understanding a wider literature makes students feel more involved.

Article reviews involve students presenting in written or oral format a critique of one or more articles. However, this approach can be somewhat demanding for undergraduates.

As an example, we could pose the question of whether the UK should join the common European currency. Students should submit a comprehensive literature review on this controversial topic. They are expected to source a range of material and arguments relating to this debate, and prepare a report referring to the original sources.

Literature reviews are probably easier to read than other types of written work, which therefore eases the burden of marking.

Objective tests and multiple-choice questions

Box 8 Advantages of multiple-choice questioning

  • Quick and easy to mark.
  • Useful for large groups.
  • High inter-tutor reliability.
  • Remedial opportunities – can find out the correct answer.
  • Very useful for self-assessment and learning.
  • Can test a wide range of knowledge quickly.
  • Web-based texts may include a bank of such questions.
  • Marking may be done by computer.
Table 3 Types of multiple-choice question
  • Yes/no.
  • ‘Multiple’ choice (e.g. select one from four possible answers).
  • True/false.
  • Short answers (students offer short answers – one word or sentence).
  • Completion – students complete a sentence, for example.
  • Matching – students match items to each other.
  • Best answer – students must choose the best answer available.
  • Sorting – putting economic ‘events’ into chronological order, for example.

Box 9 Disadvantages of multiple-choice questioning

  • It is difficult to design questions.
  • It is difficult to test the depth of learning in economics.
  • Multiple-choice questioning does not develop or test presentational skills.
  • Text-based answers can be difficult to mark by computer.

Given the disadvantages listed in Box 9, these types of question should not be used as the sole means of assessing student performance in a given module.

Computer-aided assessment

Computer-aided assessment is discussed in another chapter in this handbook. But for now, consider that computer software such as Question Mark can be used to format multiple-choice questions and mark and analyse the results. It may be time-consuming to set, but marking is very fast and accurate. However, as with other objective tests, it is difficult to test the depth of learning.
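As a purely illustrative sketch of the kind of marking and analysis such software automates – this is generic Python, not the Question Mark product or its API, and the answer key and responses are invented – multiple-choice scripts can be scored and summarised as follows:

```python
"""Generic sketch of computer-aided marking of multiple-choice answers.
The answer key and student responses are hypothetical illustrations."""

ANSWER_KEY = {1: "B", 2: "D", 3: "A", 4: "C", 5: "B"}  # hypothetical key


def mark_script(responses):
    """Return the number of correct answers in one student's responses."""
    return sum(1 for q, correct in ANSWER_KEY.items()
               if responses.get(q) == correct)


def item_analysis(all_responses):
    """Proportion of students answering each question correctly - useful
    feedback to the lecturer on which topics caused difficulty."""
    n = len(all_responses)
    return {q: sum(r.get(q) == correct for r in all_responses) / n
            for q, correct in ANSWER_KEY.items()}


if __name__ == "__main__":
    scripts = [
        {1: "B", 2: "D", 3: "C", 4: "C", 5: "B"},
        {1: "A", 2: "D", 3: "A", 4: "C", 5: "D"},
    ]
    print([mark_script(s) for s in scripts])  # [4, 3]
    print(item_analysis(scripts))
```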

Portfolios in economics

A portfolio is a collection of work commonly used in the assessment of vocational training (such as industrial placement).

Box 10 Advantages of including portfolio assessment

  • Portfolios can be useful for students with work experience to claim credit for tasks done at the workplace and to tailor work tasks in a way that promotes learning and development. They can also be useful as a basis for interviews and promotion.
  • A portfolio is usually a collection of work developed over time and may help the student think about what is being achieved in an ongoing way.
  • Students have a degree of control over what goes into the portfolio.
  • As evidence of a student’s achievements, a portfolio can foster confidence and a sense of achievement.
  • The process can foster dialogue with tutors and assessors.

Portfolios may not be an ideal way of testing knowledge and the analytical, conceptual and problem-solving skills required in economics. They also do not fit readily within a modular structure. However, there is clear potential for portfolios in the development of transferable skills – a requirement of degree programmes can be the demonstration of various activities and skills, such as leadership, co-ordination and research. This could easily be embodied within a portfolio submitted at the final level.

As an ongoing means of assessment, portfolios would be a useful device for indicating the importance of transferable skills, and for requiring students, rather than lecturers, to find ways of developing these skills. Traditionally, students have used extra-curricular activities to demonstrate their range of personal and interpersonal skills, which are summarised and publicised in their curricula vitae.

Box 11: Issues concerning portfolio assessment

  • Portfolios do not lend themselves easily to being graded – the student is usually required to demonstrate that they have completed a range of tasks and activities to a minimum standard.
  • Portfolios can involve tedious paperwork – students must give a large amount of paper evidence of their achievements.

For much more detail on portfolios and a case study, see Baume (2001) and Coates (2000).

Other aspects of assessment

There are many other methods of assessment besides those discussed above – poster sessions, open-book examinations, seen examinations, profiles, single-essay examinations and various combinations of the above. For more detail, consult section 4.

2.5 Judging assessment criteria

One of the major principles of good assessment practice is that the criteria are clearly communicated to the students. This allows the educator to shape the learning process better and induce desirable learning outcomes. From the point of view of the student, explicit communication of criteria is desirable as it allows them to focus on what they should be doing.

Typically, economics students are provided with a rather stylised description of the characteristics and qualities that constitute the respective grade levels (often at the beginning of their studies in the student handbook). A first-class mark is awarded for outstanding performance containing original thought; a 2.2 grade is characterised by sound understanding and presentation of key concepts but with a number of lapses in argument, and so on. The criteria are rather broad and abstract in nature, and reflect, in part, a preference for developing intellectual and analytical skills.

In this section, the ways in which alternative assessment criteria may be used to promote the development of transferable skills in ‘traditional’ student activities are discussed. An example is provided of how a piece of written coursework may be used to develop core IT skills. Students are required to submit an essay related to a broad issue or question in economics and are told beforehand that 50 per cent of the mark will be allocated on the basis of the use of IT skills.

Top Tips 8: Setting assessment criteria

  • Be absolutely explicit about assessment criteria.
  • Use assessment criteria to direct student learning into specific tasks. For example, it may be stated at the outset that:
    • 40% of the mark will go on written presentation;
    • 20% of the mark will be allocated to tutorial contribution;
    • in allocating marks, consideration will be given to the quality of the literature review and range and quality of referencing;
    • to obtain a pass grade you must demonstrate use of econometric software.

The following example is adapted from Brown et al. (1994, p. 17).

Students are required to add an appendix to their submission explaining the uses made of IT. Where it is not obvious (for example, search engines or statistical packages), students must provide comprehensive evidence of their use.

The following sheet (Table 4) can be completed to form a basis for allocating marks in respect of IT use. The assessor ticks the boxes as appropriate.

Table 4 Indicator of IT use

For each aspect below, the assessor records whether use was none, limited or extensive; simple or sophisticated; and appropriate or inappropriate.

  • Word processing
  • Layout and formatting
  • Graphics
  • Spreadsheets
  • Use of other software, e.g. Equation Editor
  • Use of statistical and econometric packages
  • Use of software for searching literature
  • Other

3. Other issues in assessment

3.1 Marking

For a brief discussion of issues related to marking, see Yorke et al. (2000) and Brown (2001).

Issues relating to marking assessments

  • Different modules and departments clearly allocate marks over different ranges – some mark between 40 and 70 per cent, others across the full range. How is it possible to aggregate and compare marks in these circumstances? There are many ways of dealing with this problem, such as normalisation of scores (a short sketch follows this list).
  • Double-blind marking doubles the assessment load, and studies have shown that it is no more reliable than single marking combined with moderation of borderline cases and samples of each grade.
  • The aim of moderation is to check the consistency of the marker(s), not to remark.
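As an illustration of the normalisation mentioned above – one simple approach among many, with arbitrary target values rather than any institution's actual rule – marks from modules that use different ranges can be rescaled onto a common scale before aggregation:

```python
"""Sketch of one simple normalisation: rescale each module's marks to a
common mean and standard deviation before aggregating. The target values
(mean 60, s.d. 10) are arbitrary illustrative choices."""
from statistics import mean, stdev


def normalise(marks, target_mean=60.0, target_sd=10.0):
    """Linearly rescale a cohort's marks to the target mean and spread."""
    m, s = mean(marks), stdev(marks)
    if s == 0:  # all marks identical: nothing to rescale
        return [target_mean for _ in marks]
    return [target_mean + target_sd * (x - m) / s for x in marks]


if __name__ == "__main__":
    # One module marks narrowly (40-70 per cent), another uses the full range.
    narrow = [42, 55, 58, 61, 68]
    wide = [20, 45, 62, 78, 95]
    print([round(x, 1) for x in normalise(narrow)])
    print([round(x, 1) for x in normalise(wide)])
```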

3.2 Feedback

The purpose of feedback is to give information to students regarding their strengths and weaknesses in the areas covered by the formative assessments. It is also an opportunity to justify the mark/grade awarded.

Issues relating to feedback to students

  • Feedback is exceptionally useful where a piece of work is still ongoing and students submit drafts/consult with assessors at stages in the process.
  • To be effective, feedback needs to recognise positive aspects of the work and not only its shortcomings.

3.3 Plagiarism

This is a huge and increasingly worrying issue for assessors, given in particular the ever-increasing use of the internet as a learning resource. Even under strict invigilation conditions, it is not impossible for students to give and receive ‘answers’ to questions in computer-based examinations.

Issues relating to plagiarism

  • Poor assessment practice invites plagiarism.
  • Choose activities and assessment methods that limit possibilities for plagiarism (some of these have been discussed already).

3.4 Assessment and large groups

Rust (2001) contains an extensive review of issues related to assessment of large groups. The concern is that the ever-decreasing staff–student ratios are likely to reduce the quality of assessment and the amount of formative assessment, thereby reducing the amount and quality of feedback that a student can obtain about his/her strengths and areas that need further work.

Issues relating to large group assessment

  • Use computer-aided marking where appropriate.
  • Change exam regulations and rubric – for example, shorter exams, lower word limits, moderation versus second marking.
  • Consider making a desired but non-assessed piece of work a course requirement for sitting the exam.
  • Do not accept late submissions.

4. Where next?

For an introduction to issues of assessment in higher education, see:

Rowntree, D. (1997) Assessing Students, Kogan Page, London.

Andresen, L., et al. (1993) Strategies for Assessing Students, SEDA (Staff and Educational Development Association), Birmingham.

There are useful chapters in:

Brown, G. and Atkins, M. (1988) Effective Teaching in Higher Education, Routledge, London.

Ramsden, P. (1992) Learning to Teach in Higher Education, Routledge, London.

For a guide to assessment methods and practical tips, see:

Brown, G. and Pendlebury, M. (1992) Assessing Active Learning, CVCP Training Materials.

Habeshaw, S., Gibbs, G. and Habeshaw, T. (1993) 53 Interesting Ways to Assess Your Students, Technical and Educational Services, Bristol.

Issues related to assessment and increasing student numbers are discussed in:

Gibbs, G. (1992) Assessing More Students, Oxford Centre for Staff Development, Oxford.

See also various articles in the journal Assessment and Evaluation in Higher Education and the series of briefing papers on assessment: Learning and Teaching Support Network, Generic Centre, Assessment Series (2001).

Notes

[1] For example, if well managed, group projects promote various interpersonal skills that are relevant to the workplace. However, they are not necessarily a good way of developing a wide knowledge base in a large number of students in a short period of time. Generally, the promotion of ‘transferable’ skills is time-consuming and may draw resources away from more traditional teaching objectives.

[2] The LTSN has published a briefing paper targeted at heads of department, focusing on ‘assessment strategies, why they are important and how to develop them’ (Mutch and Brown, 2001).

[3] In evaluating assessment, focus is placed on its ‘validity’ – the relationship between assessment and the desired learning outcomes. Assessment should also be evaluated on its reliability (consistency of marks, etc.) and practicality (time, cost and legitimacy).

[4] The concept of summative assessment is discussed in section 1.3.

[5] Adapted from Figure 1 on p. 5 of Mutch and Brown (2001).

[6] Readers should consult other material for ideas on project tasks (see projects, literature reviews).

[7] A case study of peer-group assessment in macro dynamics is discussed in Davies et al. (2000, ch. 4).

[8] The author is grateful to the Educational Development Team of the University of Hull for information and advice.

References

Atkins, M., Beattie, J. and Dockrell, B. (1993) Assessment Issues in Higher Education, Employment Department, Sheffield.

Baume, D. (2001) A Briefing on Assessment of Portfolios, Assessment Series, LTSN Generic Centre, York.

Boud, D. (1986) Implementing Student Self-assessment, Higher Education Research and Development Society of Australia, Sydney.

Boud, D. (1992) ‘The use of self-assessment schedules in negotiated learning’, Studies in Higher Education, vol. 17, no. 2, pp. 185–200. https://doi.org/10.1080/03075079212331382657

Brown, G. (2001) Assessment: A Guide for Lecturers, Assessment Series, LTSN Generic Centre.

Brown, S. and Dove, P. (eds) (1991) Self and Peer Assessment, Standing Conference on Educational Development, Birmingham.

Brown, S. and Glasner, A. (eds) (1999) Assessment Matters in Higher Education: Choosing and Using Diverse Approaches, Open University Press, Buckingham.

Brown, S., Rust, C. and Gibbs, G. (1994) Strategies for Diversifying Assessment, Oxford Centre for Staff Development, Oxford.

Coates, G. (2000) ‘Innovative approaches to learning and teaching in economics and business higher education’, in P. Davies, S. Hodkinson and P. Reynolds (eds), Innovative Approaches to Learning and Teaching in Economics and Business Higher Education, Staffordshire University Press, Stoke on Trent.

Davies, P., Hodkinson, S. and Reynolds, P. (eds) (2000) Innovative Approaches to Learning and Teaching in Economics and Business Higher Education, Staffordshire University Press, Stoke on Trent.

Entwistle, N. (1981) Styles of Learning and Teaching, Wiley, New York.

Erwin, D. and Knight, P. (1995) ‘A transatlantic view of assessment and quality in higher education’, Quality in Higher Education, vol. 1, no. 2, pp. 179–88. https://doi.org/10.1080/1353832950010208

Gibbs, G. (1992) Assessing More Students, Oxford Centre for Staff Development, Oxford.

MacAlpine, J. (1999) ‘Improving and encouraging peer assessment of student presentations’, Assessment and Evaluation in Higher Education, vol. 22, no. 4, pp. 371–83. https://doi.org/10.1080/0260293990240102

Mutch, A. and Brown, G. (2001) Assessment: A Guide for Heads of Department, Assessment Series, LTSN Generic Centre, York.

Rust, C. (2001) A Briefing on Assessment of Large Groups, Assessment Series, LTSN Generic Centre, York.

Searby, M. and Ewers, T. (1997) ‘An evaluation of the use of peer assessment in higher education: a case study in the School of Music, Kingston University’, Assessment and Evaluation in Higher Education, vol. 22, no. 4, pp. 371–83. https://doi.org/10.1080/0260293970220402

Yorke, M., Bridges, P., Woolf, H., et al. (2000) ‘Mark distributions and marking practices in UK higher education: some challenging issues’, Active Learning in Higher Education, vol. 1, no. 1, pp. 7–27. https://doi.org/10.1177/1469787400001001002
