The Economics Network


5. Integration into a module

Using experiments poses challenges for lecturers, for students and for modules as a whole. Lecturers have a limited amount of lecture time; students have limited time too, both inside and outside the classroom; and assessing and motivating students properly can also be difficult. Here we address a number of questions that need to be considered when implementing experiments.

  1. Which particular experiments to use?
  2. Which type of experiments to use (homework, hand run, computerised)?
  3. How many experiments to use?
  4. How to count experiments toward the final mark?
  5. How to base exam questions on experiments?

Two brief case studies on modules using experiments and their student evaluations

Case 1: Intermediate Microeconomics (100 students)

Intermediate Microeconomics in Exeter, part 2 (100 students; lectures, surgeries and experimental sessions). We ran simple 2×2 games and auction games within the lectures. To earn 10 of the 100 marks for the module, students had to complete 6 of the 8 computerised assignments (Wiley Plus), 3 homework experiments and 6 experimental sessions. Beyond that, participation was voluntary, to allow for different learning styles, and the incentive was for participating, not for getting it ‘right’. (In future, however, we intend to accompany each experimental session with a short questionnaire containing simple comprehension questions.) We did not expect every student to attend every session, and each experimental session was run twice so that more students could take part. Participation in the experimental sessions increased over the year. The lectures frequently referred back to the experiments, discussing the results and comparing them with the theoretical analysis.

The module was surprisingly successful in the students’ evaluations, with an average above 4 out of 5 on the goodness index and the highest score for the question on how useful the experiments were for the module. The response rate (40%) was higher than for many other comparable modules. Exam results were similar to those in previous years, though we did not carry out a systematic evaluation.

Case 2: Third Year Option (30–40 students)

Another type of module is one in which each lecture is designed around experiments: each week an experiment is followed by a lecture based upon it. This has worked successfully in both a Corporate Strategy course for executives (10–15 students) and a third-year course (30–40 students). The third-year course was meant for economics students who had taken microeconomics and covered a diverse range of topics. There were experiments on markets and market structure (Bertrand Competition, Bertrand Complements, Vertical Markets, and Double Auction with Taxes); on multi-player simultaneous choice games (Bank Runs and Network Externalities); on two-player sequential games (Hold-Up Problem, Team Draft, Ultimatum Game, and Signalling); and on individual choice (Price Discrimination, Lemons Game, Monty Hall, and Search).

For the third-year module we have detailed student evaluations for 14 classroom experiments. Overall, students rated how much they learned from the experiments at 3.8 on a 1–5 scale, and how much fun they were at 4.05; fun outscored learning in 12 of the 14 experiments. When there were technical difficulties in running an experiment, ratings in both categories suffered significantly. In addition, homework experiments (all individual choice) were less popular. The most popular experiment by average rank of learning and fun was the Bertrand Competition experiment (run on FEELE), which was first in fun and second in learning, followed by Team Draft (FEELE), Ultimatum Game (Veconlab), Signalling (Veconlab), and Bank Runs (FEELE). Another noteworthy experiment was a tax incidence experiment using Econport’s Marketlink double-auction software: it averaged 4.41 out of 5 for fun, even though students rated the learning only average.

Which particular experiments to use?

Here are some recommendations:

Microeconomics is the module for which the most experiments have been developed, so it is fairly easy to fill.

Macro: Denise Hazlett’s website has several experiments. In addition, for a large class, Currency Attack (available on the FEELE site) works well.

Money and Banking: There is the bank-run experiment described here as well as a computerised Kiyotaki-Wright experiment based upon an experiment by Denise Hazlett.

Finance: The Holt bubble experiment is recommended. The double auction on Econport is able to impress many, in particular the version for an asset market. During the opening of a new finance centre at Exeter we demonstrated this software and many of those in industry were hooked. There are also some experiments that can be used to introduce behavioural finance. For one, the Monty Hall experiment shows how poorly people do as individuals, but things look quite different when the game is placed in a market setting (one can refer to a Journal of Finance article on this). An experiment that proved popular with the students is the Being Warren Buffett experiment. This was developed at Wharton and we have a computerised version of it on FEELE.
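To give a sense of why the Monty Hall game trips people up as individuals, here is a minimal simulation sketch (our own illustration, not part of the FEELE or Econport software) showing that switching doors wins about two thirds of the time:

```python
# Illustrative sketch: simulating the Monty Hall problem.
import random

def play(switch, rng):
    """Play one round; return True if the contestant wins the prize."""
    prize = rng.randrange(3)    # prize behind one of three doors
    choice = rng.randrange(3)   # contestant's initial pick
    # Host opens a door that is neither the contestant's choice nor the prize.
    opened = next(d for d in range(3) if d != choice and d != prize)
    if switch:
        # Switch to the one remaining closed door.
        choice = next(d for d in range(3) if d != choice and d != opened)
    return choice == prize

rng = random.Random(0)          # fixed seed for reproducibility
trials = 10_000
wins = sum(play(switch=True, rng=rng) for _ in range(trials))
print(f"win rate when switching: {wins / trials:.2f}")  # close to 2/3
```

Running the simulation in class (or as homework) lets students see the 2/3 figure emerge from data before confronting the probability argument.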

Game Theory and Decision Theory: The Rubinstein website is an ideal source for homework experiments on both topics. Veconlab offers some excellent experiments for game theory, and also for Bayesian learning. Team Draft and the Hold-Up experiment on the FEELE site are good introductions to backward induction. Quick and simple hand run experiments also work well, e.g. many of the questions used by Kahneman and Tversky, and simple one-shot 2×2 games. For repeated games, one can use a repeated prisoners’ dilemma or play a repeated Cournot duopoly using Veconlab.
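To illustrate how little machinery a one-shot 2×2 game needs, here is a sketch of a prisoners’ dilemma of the kind mentioned above (the payoff numbers are our own choice, not from any particular experiment):

```python
# A one-shot 2x2 prisoners' dilemma; payoffs are (row, column).
# C = cooperate, D = defect. Payoff numbers are illustrative.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(opponent_action):
    """Row player's best response to a fixed column action."""
    return max(["C", "D"], key=lambda a: payoffs[(a, opponent_action)][0])

# Defect is a dominant strategy: it is the best response to either action,
# yet mutual defection (1, 1) is worse for both than mutual cooperation (3, 3).
print(best_response("C"), best_response("D"))  # D D
```

The same dictionary-of-payoffs pattern extends to any 2×2 game students play by hand, which makes it easy to turn a classroom session into a short data exercise.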

Industrial economics: Again, this is a module for which plenty of experiments exist, for instance on Veconlab or FEELE.

Introductory economics: The size of the lecture is crucial. The guessing game, a simple insurance game (see the classroom experiments site on Wikiversity), and a hand run public good game can be done with little effort. If at all possible, one should run a double auction or pit market experiment to discuss market equilibrium. The student activity on decreasing marginal returns using tennis balls or plastic flower pots is highly recommended and can be done with a sample of students even in big lecture halls. A colleague of ours recently ran (with some help from other staff) the international trade game (see Sutcliffe’s handbook chapter (2002) on Simulations, Games and Role-play) in a group of 100 students.
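The guessing game needs nothing more than pen and paper, but a short sketch (the parameters are our own) shows the logic students are meant to discover: iterating best responses to 2/3 of the average drives guesses towards the Nash equilibrium of zero.

```python
# The guessing game ("beauty contest"): each player picks a number in
# [0, 100]; the winner is whoever is closest to 2/3 of the average guess.

def winning_target(guesses, factor=2 / 3):
    """Target number: factor times the average guess."""
    return factor * sum(guesses) / len(guesses)

# Level-k reasoning: a level-0 player guesses 50 (the midpoint of the range);
# each higher level best-responds to the level below it.
guess = 50.0
for level in range(1, 6):
    guess *= 2 / 3
    print(f"level-{level} guess: {guess:.1f}")
# Guesses shrink towards 0, the unique Nash equilibrium.
```

In a lecture, plotting the distribution of actual guesses against these level-k benchmarks usually prompts a lively discussion of how many steps of reasoning real players perform.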

Which type of experiments to use (homework, hand run, computerised)?

Type guidelines

  • Large lectures (>100 students, no tutorials): Use short hand run or homework experiments. It is possible to be more sophisticated with wireless technology.
  • Medium lectures (40–100 students): Make use of computerised experiments or (more labour-intensive, but also more fun) longer hand run experiments in tutorials.
  • Small lectures (<40 students): You can use computerised experiments in place of lectures if you have access to a computer room.

How many experiments to use?

There is no minimum or maximum. We have had classes that ran an experiment a week, and particular lectures (such as one on game theory) that ran several short experiments in a single lecture. In microeconomics we ran weekly experimental sessions on a voluntary basis; we had a regular following, but also people who never came. It is important that students do not feel overloaded and that they experience a variety of teaching approaches. We think that one experiment per topic, sometimes very short, sometimes longer, is ideal.

How to count experiments toward the final mark?

We found that the most successful strategy for employing experiments has been to give marks for participation, not success, in an experiment. Participation was optional and a potential replacement for handing in homework. We have also successfully required lab reports in which students explain their strategy in the experiment, analyse the experimental results and answer a few simple questions (short answer/multiple choice) on problems relating to the experiment. Implementing a computerised version of such a lab report is quite simple using Veconlab’s surveys.

Dickie (2006) confirmed Emerson and Taylor’s findings that experiments improved TUCE scores; however, he found that this benefit disappears if credit is based on performance. We suspect this may be due to an at least perceived randomness in performance, although we have noticed that the same students do well across several experiments throughout the term. In any case, perceived randomness can not only hurt evaluations but could also raise the alarm of a teaching committee. Giving prizes for performance, by contrast, seems to draw no criticism: there does not seem to be an objection to a lottery for a prize, only to a lottery for a grade.

We feel it is useful to have exam questions based upon the experiments: more the carrot than the stick. This leads us to the next point.

How to base exam questions on experiments?

There are studies showing that experiments help to improve test scores both on the TUCE (general knowledge) test and in standard exams. Still, students are unaware of this, and there is always room to tie things together more closely. Moreover, common sense tells us that for a quantitative exam, a tutorial based upon mock questions similar to the exam is liable to boost scores more than an experiment with only a tangential connection. The first year we ran experiments, we found that a handful of students felt the experiments came at the expense of valuable tutorial sessions and were being run for the benefit of the lecturers. Clearly, tying the exam more closely to the experiments should help.

In many cases, experiments can help students learn a particular exam question. For instance, the network externality experiment on the FEELE site is specifically based on a chapter in Hal Varian’s Intermediate Microeconomics book. More generally, the signalling experiment on Veconlab is extremely helpful in teaching signalling to undergraduates. We believe this may have the most value added, in that without experiments we found it difficult to teach signalling. Likewise, the price discrimination experiment is based upon a style of test question, rather than the other way around.

For other cases, the experiments may help general understanding, rather than learning a particular algebraic manipulation. With Cournot duopoly, an experiment may help students grasp simple comparative statics, while algebraic manipulations are subject to sign errors.
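As an illustration of the Cournot comparative statics (the notation, demand curve and cost figures here are our own, chosen for the example), raising marginal cost lowers each firm’s equilibrium output and raises the market price:

```python
# Symmetric Cournot duopoly with inverse demand P = a - b(q1 + q2)
# and constant marginal cost c. Each firm's equilibrium quantity is
# (a - c) / (3b); parameters below are illustrative.

def cournot_equilibrium(a, b, c):
    """Per-firm quantity and market price in a symmetric Cournot duopoly."""
    q = (a - c) / (3 * b)      # intersection of the two best-response lines
    price = a - b * 2 * q      # price at total industry output 2q
    return q, price

q_low, p_low = cournot_equilibrium(a=120, b=1, c=30)    # low marginal cost
q_high, p_high = cournot_equilibrium(a=120, b=1, c=60)  # high marginal cost
print(q_low, p_low)    # 30.0 60.0
print(q_high, p_high)  # 20.0 80.0
```

Students can verify the sign of the comparative static numerically before (or instead of) grinding through the algebra, which is where the sign errors tend to creep in.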

Naturally, any exam question can be used as a homework question, but one can also set homework questions based upon analysing the data from an experiment. The FEELE site has an option to create a link to the results in both numeric and graphic form for the students (via the ‘View Results (Subject)’ button), which makes the task fairly easy. Since the data from the experiment are available, they can also be used to develop exam questions: for instance, to what extent do the experimental data fit the predictions of the theory?
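A homework or exam question along these lines might ask students to compute how far the session’s prices sit from the theoretical prediction. A minimal sketch (the price data and the equilibrium value below are invented for illustration, not real session results):

```python
# Comparing hypothetical classroom-experiment prices with a theoretical
# equilibrium prediction; all numbers are made up for illustration.
observed_prices = [62, 58, 61, 59, 60, 63, 57]  # hypothetical session data
equilibrium_price = 60                          # hypothetical theory prediction

mean_price = sum(observed_prices) / len(observed_prices)
mean_abs_dev = (
    sum(abs(p - equilibrium_price) for p in observed_prices)
    / len(observed_prices)
)

print(f"mean observed price: {mean_price:.2f}")
print(f"mean absolute deviation from theory: {mean_abs_dev:.2f}")
```

Even at this level of simplicity, the exercise forces students to state the theoretical prediction explicitly before confronting it with the data, which is exactly the habit the exam question is trying to build.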

General hints

  • Usually do experiments before covering the material in the course.
  • Let students participate in preparation, execution and evaluation (especially in an experimental class).
  • Relate some exam questions to experiments.
  • Do not be too obsessed with preserving a research environment.
  • Use two students per computer to induce discussion and reflection.