
Strategic Voting and Coalitions: Condorcet's Paradox and Ben-Gurion's Tri-lemma

James Stodder
International Review of Economics Education, volume 4, issue 2 (2005), pp. 58-72
DOI: 10.1016/S1477-3880(15)30131-6


Abstract

The Condorcet paradox is a classic example of the power of agenda setting – how it can determine the political outcome. A classroom voting game shows how alliances between voting blocs can determine an agenda. Agreements on how to vote after the agenda is set will be broken, however, if the partners are strictly self-interested. That is, no alliance is sub-game perfect. Informal classroom experiments suggest that alliances are more likely when successive opportunities for betrayal fall to both sides, rather than to one side only. These points are illustrated with a three-cornered dilemma posed by Ben-Gurion, one that 'sharpens' the Condorcet paradox by making the third alternative always impossible.

JEL Classification: A22

Introduction

God comes to the Soviet people and says: 'I will give each of you a choice of three blessings in life, but you can only have two out of the three. You can be an honest person, you can be a smart person, or you can be a member of the Communist Party. If you are smart and honest, then you cannot be a communist. If you are a smart communist, then you cannot be honest. And if you are an honest communist, then obviously, you must not be very smart.'
Soviet era joke

Along the lines of this joke, consider a company board of directors who must decide policy. They can commit themselves to being an Honest company, a Profitable company or a company that relies on Government contracts. One can imagine a Left–Centre–Right split among the directors:

(1)

Left (L): G > H > P
Centre (C): P > G > H
Right (R): H > P > G

where '>' means 'preferred to'. The Left prefers government work for an honest company – one that cannot be profitable. The Centre prefers a profitable company with government work – one that cannot be honest. The Right prefers an honest and profitable company – with no government work.

According to Ben-Gurion, a different 'tri-lemma' confronted the early state of Israel:

In November 1947 ... David Ben-Gurion, then the leader of the Zionist movement in Palestine ... did not shrink from clearly laying out the choice before the Jewish people ... Who were they? A nation of Jews living in all the land of Israel, but not democratic? A democratic nation in all the land of Israel, but not Jewish? Or a Jewish and democratic nation, but not in all the land of Israel? Instead of definitively choosing among these three options, Israel's two major political parties – Labor and Likud – spent the years 1967 to 1987 avoiding a choice ... not on paper, but in day-to-day reality.
(Friedman, 1989, pp. 253–4)

This classic problem of democracy was first formalised by the Marquis de Condorcet at the time of the French revolution (Gardner, 1995). Condorcet's 'paradox', often presented as an introduction to Arrow's 'impossibility' theorem, shows a potential instability or indeterminacy in democratic processes. To pose Ben-Gurion's tri-lemma as a Condorcet cycle, define options D, J and G: a Democratic Israel, with equal rights for all its citizens; a Jewish Israel, its state having an explicitly Jewish character; and a Greater Israel, extended to its ancient boundaries. Say that all participants in this game agree that all three goals are desirable. Assume also that there can be no majority without an alliance of at least two groups, Left, Right or Centre. Their rankings over D, J and G are:

(2)

L: D > J > G
C: G > D > J
R: J > G > D

This is a 'cyclical' majority because two out of three groups will vote D > J and J > G, but also G > D. Ben-Gurion's example pushes the original Condorcet problem towards crisis: the third option in any combination is always impossible, logically excluded by the first two. The original Condorcet problem did not have this difficulty – it was merely a voting cycle over three or more alternatives.
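The cycle is easy to verify mechanically. The minimal sketch below (illustrative Python; the bloc labels and rankings are simply those of (2)) tallies each pair-wise majority:

```python
# Minimal sketch: verify the Condorcet cycle in the rankings of (2).
from itertools import combinations

rankings = {
    "Left":   ["D", "J", "G"],   # D > J > G
    "Centre": ["G", "D", "J"],   # G > D > J
    "Right":  ["J", "G", "D"],   # J > G > D
}

def pairwise_winner(a, b):
    """Return the option a majority of blocs prefer in a head-to-head vote."""
    votes_a = sum(1 for r in rankings.values() if r.index(a) < r.index(b))
    return a if votes_a > len(rankings) / 2 else b

for a, b in combinations("DJG", 2):
    print(f"{a} vs {b}: majority prefers {pairwise_winner(a, b)}")
# D beats J and J beats G, but G beats D -- a cycle.
```

D defeats J and J defeats G, yet G defeats D, so no alternative survives every pair-wise contest.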

This paper describes a Condorcet/Ben-Gurion 'tri-lemma' voting game for the classroom. This is a more strategy-focused approach to Condorcet than teaching games which just illustrate voting cycles (Sulock, 1990), or which have the instructor (rather than students) set the outcome-determining agenda (Holt and Anderson, 1999). This classroom game makes several further points:

None of these pedagogical aims relies on Ben-Gurion's 'sharpening' of the Condorcet paradox – in other words, they do not require that the third-ranked alternative in any ordering be logically excluded by the first two. Ben-Gurion's version may make the strategic game more exciting, however, and its game theory lesson more memorable.

The basic Condorcet cycle could exist with many different orderings. It is only required that preferences over three alternatives form a cyclical, or non-transitive ordering. Instead of (2), for example, one could have:

(2a)

L: J > D > G
C: D > G > J
R: G > J > D

If each bloc commands an equal number of votes, then we could say that the three alternatives are 'tied': that is, if each side gave 3 points to its first, 2 to its second and 1 to its third choice, each of the three alternatives would get a score of 6. But in Ben-Gurion's sharpening of the Condorcet paradox, it is impossible to have all three 'good things' at once, so even a tie is impossible.
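A similar minimal sketch (illustrative Python, scoring the rankings in (2) with 3, 2 and 1 points) confirms the three-way tie:

```python
# Minimal sketch: the 3/2/1 scoring described in the text ties all three
# options at 6 points each, using the rankings of (2).
rankings = {
    "Left":   ["D", "J", "G"],
    "Centre": ["G", "D", "J"],
    "Right":  ["J", "G", "D"],
}
points = [3, 2, 1]   # first, second, third choice

scores = {option: 0 for option in "DJG"}
for ranking in rankings.values():
    for place, option in enumerate(ranking):
        scores[option] += points[place]

print(scores)   # {'D': 6, 'J': 6, 'G': 6}
```

The same 6–6–6 tie arises under (2a), since each option again appears once in each position.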

It often seems that politicians wish to avoid an explicit ranking of alternatives. This may be an acquired instinct to avoid Condorcet-type instabilities. Historic alternatives such as J, D or G, furthermore, are rarely posed explicitly in any vote; they are only implied. Thus it is quite possible for political processes to cycle indefinitely 'instead of definitively choosing' – as Friedman (1989) claims was the case in Israel for many years (and may still be). Events have a way of cutting off debate, however, and forcing a final resolution, one way or the other. That is the implicit warning of Ben-Gurion's tri-lemma.

For a third example, consider current debates around security from terrorism, civil rights and immigration controls. Let's say there are approximately equal voting blocs in favour of these three policies: the protection of Civil rights; an Immigration process that is open and fair by world standards; and Security from terrorist attacks, designated C, I and S, respectively. Then we can form the Condorcet cycle:

(3)

L: C > I > S
C: S > C > I
R: I > S > C

As before, the Ben-Gurion twist is to declare that the two most-favoured items must logically exclude the third.

The first section of the paper describes the structure of the game, and how it can be carried out as a classroom experiment. The second section analyses the results in terms of sub-game perfection and sequential rationality, and some limitations of these equilibrium concepts.

A classroom Condorcet game

Although any three-way Condorcet cycle can be used, I will illustrate with the less controversial 'business reputation' example in (1). This preference ordering shows that if Left (L) is setting the agenda, it ensures its first-best outcome by setting the order of the two pair-wise contests: first H (honest) versus P (profitable), and then G (government contracts) versus H. Then self-interested voting will give the appearance of majority support for L's own preferences: G > H > P. Similarly, each of the other two blocs can achieve its own best result by agenda setting to ensure that its own favourite option is included in the last pair-wise contest.
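The agenda-setter's power can be checked with a short backward-induction sketch (illustrative Python, not part of the classroom handout; the payoffs follow Table 1 in the appendix, each bloc's own 3/2/1 ranking multiplied by the 3/2/0 weights of the final ordering). For each choice of 'finalist', blocs vote their interests in the final round, and in the first round each bloc votes for whichever winner leads to its preferred continuation:

```python
# Minimal sketch: for each agenda (choice of 'finalist'), find the outcome of
# purely self-interested voting by backward induction over the two rounds.
prefs = {
    "Left":   "GHP",   # G > H > P
    "Centre": "PGH",   # P > G > H
    "Right":  "HPG",   # H > P > G
}

def payoff(bloc, ordering):
    """Table 1 value: own weight (3/2/1) times actual order weight (3/2/0)."""
    own = {opt: w for opt, w in zip(prefs[bloc], (3, 2, 1))}
    actual = {ordering[0]: 3, ordering[1]: 2, ordering[2]: 0}
    return sum(own[o] * actual[o] for o in "GHP")

def final_round(winner1, finalist, eliminated):
    """Sincere voting in the last round: majority picks its preferred ordering."""
    a = winner1 + finalist + eliminated    # ordering if the round-1 winner also wins round 2
    b = finalist + winner1 + eliminated    # ordering if the finalist wins round 2
    return a if sum(payoff(g, a) > payoff(g, b) for g in prefs) >= 2 else b

for finalist in "GHP":
    x, y = [o for o in "GHP" if o != finalist]
    via_x = final_round(x, finalist, y)    # final ordering if x wins round 1
    via_y = final_round(y, finalist, x)    # final ordering if y wins round 1
    # Round 1: each bloc votes for the round-1 winner whose continuation it prefers.
    outcome = via_x if sum(payoff(g, via_x) > payoff(g, via_y) for g in prefs) >= 2 else via_y
    print(f"Finalist {finalist}: outcome {' > '.join(outcome)}")
# Finalist G gives G > H > P (Left's first best), P gives P > G > H (Centre's),
# and H gives H > P > G (Right's).
```

Each agenda thus delivers the agenda-setter's first-best ordering under purely self-interested voting – the true-preference (bold-arrow) paths of Figure 1.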

Outcomes that are first-best for one of the groups L, C or R are shown in pathways numbered (1), (3), (5), (7), (9) and (11) in the game tree of Figure 1. We will analyse this tree formally in the next section, but note here that it summarises the different outcomes and strategic choices for each of the three groups. The winning programme from each of the three rounds of voting is shown by the letter G, H or P. The final outcomes are shown in the political ordering of alternatives (e.g. G > H > P for outcome (1)), and the value ordering of the payoffs (Left, Centre, Right). The bold arrows in Figure 1 represent pathways where every group is voting its true preferences. For example, in the two uppermost arrows, H wins over P because the Left and Right both prefer H, and then G wins over H because the Left and Centre both prefer G.

This agenda-manipulating strategy assumes that every bloc votes its true preferences after 'the agenda' is set. This agenda, which is simply an ordering of the pair-wise contests, is set by choosing which alternative is to be in the final round. If there is ever to be such an agenda, however, some compromise is necessary. Given the strategic situation, agenda setting may well lead to a deadlock. The Left may decide, however, that getting its most-preferred option (G) into second place (see outcome (12)) is better than letting it fall to third place, and so may vote for the Right's agenda. In doing so, however, the Left should demand a promise in return. For example, if the Right (the stronger partner by way of its agenda) is going to get its own first-best item in first place, then it can at least be asked to promise to give up second place for its second-best item. This would mean the Right giving up its first-best (9) in favour of (12). The voting game will show, however, that there is a time consistency problem with all such promises.

Figure 1. Business reputation game

Tell the class that its society must finally decide upon just two of the three options. The voting is to be in two successive pair-wise contests. Voting on the agenda will determine the order of these two contests. Each bloc will submit a secret agenda ballot showing the option it wants 'top-seeded', to borrow a term from tennis – the option it wants guaranteed a place in the final round. A copy of the handout and rules given to the students is provided in the Appendix.

A class following this exercise should be divided up into three blocs of Left, Centre and Right, with roughly equal numbers of students in each. In my experience, this game works best with 5 to 15 students in each bloc. The number of people in each bloc need not be exactly equal, however, since each group casts a single vote. Members of each bloc are the 'leadership' and, by assumption, each group commands roughly equal votes.

The basic set-up and rules are detailed in the appendix. Each bloc will meet together to map out its strategy and pick a negotiator, before meeting to talk with the other blocs. Secret negotiations between each pair should begin before the actual voting begins. Let each pair of blocs meet in sequence, while the third group refines strategy. I find it is best to let each pair meet twice to negotiate – this way each can make representations on what the other side has offered, or really intends to do. Each bloc can promise anything it wants to secure an alliance, but is also free to break any promise. Getting a majority vote to agree on an agenda requires at least one side to agree to vote against its own first-best preferences.

There are three policy options (H, G and P), and six different ways these can be ordered (3! = 3 x 2 x 1 = 6). This ordering will be determined by the voting outcome. To weight the options for each side, I make the following assumptions: each side's most-preferred option is worth 3, its next-best is worth 2 and its worst is worth 1. After the final vote, each side's worth for the option winning first place is multiplied by 3, since it will be most important, while its worth for the option in second place is multiplied by 2. The option in third place is multiplied by 0 – since it will then, by Ben-Gurion's logic, be effectively impossible. To save student time and errors, the valuations of outcomes to the three blocs are given in Table 1 in the appendix. Note that, because of the symmetric nature of the valuations, the sum of all the groups' scores is always the same – 30 points in Table 1. The highest payoff for each group is emboldened and underlined, in both the table and the game tree.
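For instructors who wish to check or regenerate Table 1, a minimal sketch (illustrative Python, applying the weights just described) reproduces the valuations and the constant 30-point sum:

```python
# Minimal sketch: reproduce the Table 1 valuations and the constant 30-point sum.
from itertools import permutations

prefs = {"Left": "GHP", "Centre": "PGH", "Right": "HPG"}

def value(bloc, ordering):
    own = {opt: w for opt, w in zip(prefs[bloc], (3, 2, 1))}   # own ranking: 3/2/1
    actual = {ordering[0]: 3, ordering[1]: 2, ordering[2]: 0}  # actual order: 3/2/0
    return sum(own[o] * actual[o] for o in "GHP")

for ordering in permutations("GHP"):
    row = {bloc: value(bloc, ordering) for bloc in prefs}
    print(" > ".join(ordering), row, "sum =", sum(row.values()))
# Every ordering sums to 30 across the three blocs, as noted above.
```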

A statement of the rules (see the appendix) and Table 1 is all the information the blocs have before the game begins. (I do not show the game tree until after the game is over.) They can now proceed to the first step of agenda setting, which will involve negotiations and promises between the various blocs. Voting on both the agenda and the subsequent pair-wise voting contests should be secret – who did what will soon be clear enough. Promises can be broken, so the game is not over once an agenda is set. There are three votes for each bloc (see the appendix), since each must vote on an agenda and then vote over two pair-wise contests. I find that this game usually takes at least 30 minutes to actually play, with an initial explanation and motivation of about the same length. Students do not need formal compensation in such a game – the fun and challenge are more than sufficient.

The cardinal preference weights given in Table 1 do not affect either the Condorcet paradox or, as we will see, the sequential equilibria of Figure 1, as long as each bloc keeps the same ordinal ranking over the three alternatives. These sequential equilibria assume only that in every subsequent sub-game, each bloc will vote for the alternative giving it the highest payoff. The issue of compromise and trust, however, is sensitive to the relative weights of these cardinal values. If each bloc's most-preferred outcome were worth 100 times as much as its next-preferred outcome, then alliances would seem to be that much harder to strike, and less credible if struck. Experimental evidence here is consistent with intuition: large differences in rewards between groups hurt their ability to form mutually improving coalition agreements (Davis and Holt, 1993, pp. 242–50).

I have run this classroom game without assigning explicit points, and the compromise outcomes were similar – suggesting that each bloc implicitly assigns similar weights to its first-, second- and third-best alternatives. If the game is set up with Ben-Gurion's implication that all three options are desirable, explicit points may not be necessary. The points do seem to make it easier to debrief the game, however.

Analysis

After the game has been played and the political priorities set, the class can be shown the 'decision tree' in Figure 1. This tree is given in a summary form, rather than the full extensive form showing all possible moves. The latter would require 3 x 3 x 2 x 2 = 36 final branches, rather than the 12 branches shown here.

Paths that form non-cooperative equilibria in a sub-game (a subsection of the game tree) are drawn with heavy arrows at the second and third level. Such outcomes are 'credible' within a particular sub-game, since they require only that each bloc should vote its true preferences at every subsequent point. This idea of credibility leads to the search for an equilibrium that is 'sub-game perfect' for the entire game: that is, one where the strategies chosen form a Nash equilibrium for every component sub-game (see Gardner, 1995, ch. 6). However, it is not hard to see that there is no complete path that can be sustained by promises that are 'self-enforcing': in other words, strictly in the interest of all promise-makers to carry out.

Agreeing on any agenda gives at least one partner an opportunity for betrayal. An agenda guaranteeing G (government contracts) a place in the final contest favours L (Left), since once this is done L's preferred rank ordering, G > H > P, is the outcome of self-interested voting at (1). Therefore, in getting such a favourable agenda, L will probably have to concede something – that is, a promise not to vote strictly in its own best interest – thereby forging a compromise with either R (the Right) to arrive at outcome (2) or with C (the Centre) to arrive at (4).

In an agreement with R to aim at (2), L will have agreed to vote against its own preferred G in the final round. In aiming at (4) in an alliance with C, L will agree to vote against its own preferred H in the first round, and C, in return, will have agreed to vote against its own best choice P in the second round.

In ostensibly agreeing with R to aim at outcome (2), L has an opportunity to betray its partner simply by voting for its own preferences in both rounds, first for H and then for G, thus arriving at (1). In agreeing with C to aim at (4), the opportunity for betrayal is more symmetric. L has an opportunity to betray C in the first round of voting, when it votes for its own preferred choice of H instead of its promised choice of P; C, in turn, can betray L in the second round, voting for its own preferred P instead of the promised G and so reaching its first-best outcome at (3).

Saying that an agreement on any agenda has opportunities for betrayal is equivalent to saying that there is no pathway that is sub-game perfect – made up strictly of heavy arrows. Every heavy arrow path can only be reached via a light arrow path – along which at least one side must vote against its most preferred choice (in agenda setting or a pair-wise contest). Since the heavy arrows mean that every bloc is voting for its own first-best choice, going down such a path implies that any earlier promise to vote against that choice must have been broken.

These successive Nash equilibria on heavy-arrow paths are called sequential equilibria (Gardner, 1995), a weaker criterion than sub-game perfection.(note 1) There are three paths that have sequential equilibria at both the second and third levels (1, 5 and 9), and three that have players voting their true preferences at the third level only (3, 7 and 11). These paths have bold arrows in Figure 1, and always follow an 'up' rather than a 'down' direction. Sequential equilibria are 'low-trust' outcomes, since they require only that each bloc votes in its own interest.

If a bloc decides it cannot get its own agenda passed, it might vote for another side's agenda – even assuming that a sequential equilibrium will probably prevail. This is because it would be marginally better off under one side's agenda than the other's. Thus the Right (R) would be marginally better off with the Left's best outcome, G > H > P, which yields it 9 points at (1) or (11), than with the best outcome for the Centre, P > G > H, yielding it 8 at (3) or (5). Therefore, the Right may be more comfortable allying with the Left than the Centre. By allying itself with the Left, furthermore, the Right may forestall an alliance between Left and Centre at (4) or (6), one that would be the Right's worst possible outcome, leaving it with a score of 7.

But there is no obvious reason for the Right to abandon its bargaining power in this way: in terms of the votes for a majority, each bloc is equally 'pivotal', and strategically equivalent.(note 2) I usually see two blocs forming a compromise which promises each a second- or third-best ordering of the three alternatives that is almost as good as its first-best ordering – giving it 12 or 11, as opposed to 13 points.

Here it is probably important that these first-, second- and third-best orderings for each bloc be close in value. This makes a bloc's alliance with either of the other two roughly equally likely. For example, the Right's first-best outcome at (7) or (9) can be achieved by an alliance with (and subsequent betrayal of) either Left or Centre. But this first-best outcome for Right is only slightly better than its second and third best – 13, 12 and 11 points respectively. Its second-best outcome at (8) or (10) implies Right allying with the Centre, while its third-best outcome at (2) or (12) means allying with the Left. Since either outcome is almost as good as the first best, either alliance is plausible.

However, I will argue that a Left–Right alliance at (12) is more likely than one at (2), a Centre–Right alliance at (8) is more likely than one at (10), and a Left–Centre alliance at (4) is more likely than one at (6) – even though the paired payoffs are identical. That is because in outcomes like (12), (8) or (4) both members of the alliance have the power to break their promise, while in (2), (10) or (6) that power is given to one side only. By aiming at the L–R alliance of (2), Right never has an opportunity to betray, only to be betrayed; the Centre is similarly disadvantaged in the C–R alliance aimed at (10), and the Left in the L–C alliance at (6). This would seem to make these forms of each alliance much less attractive for one side.

Just consider the two 'mutual-betrayal opportunity' alliances for R: that of L–R at (12), and the C–R alliance at (8). There are offsetting advantages for each. The C–R alliance at (8) is more closely aligned with Right's preferences, giving it a score of 12 instead of 11. However, while both (12) and (8) give Right an opportunity for mutual betrayal, the L–R alliance at (12) gives Right the opportunity to betray earlier, in round 1 instead of round 2. If you are planning to cheat your partner, and fear that he or she may be thinking along the same lines, then it is better to be the one who can shoot first.

Let us consider the C–R alliance terminating at (8), based on the Centre's best agenda – one that guarantees P will be in the final round. The Right could have agreed with the Centre to vote for P in this agenda setting, on the Centre's promise that it would not vote its true preferences in the first round and so lead to branch (5). Branch (8) is almost as good from the Right's point of view as its first-best ordering, giving it 12 instead of 13 points. But it can have all 13 if it betrays the Centre in the second round, leading to branch (7). Thus each side has an opportunity to betray the other at different points in the game. In informal classroom games, I have seen 'equal betrayal opportunity' alliances, such as that aiming at (8), more likely to be attempted than those where only one side has the opportunity to betray, such as that aimed at (10), where the Right can betray the Centre, but not vice versa.

Along the path to (8), the Centre can show its good faith to Right in the first-round vote by voting against its own interests, for H instead of G. In the second round, then, it will be up to the Right to resist betraying Centre. By voting for its own best option, H instead of P, Right can guarantee its own first-best outcome of H > P > G, on branch (7). At a cost of 1 point, however, winning 12 instead of 13, it can reward the Centre for its good faith and vote for P, bringing their alliance to an honourable conclusion at branch (8).

A second Centre–Right alliance path to an equivalent outcome terminates at (10). The agenda favouring Right gives it a chance to betray Centre in the second round, but the Centre never has any chance to betray the Right. Therefore (10) may be less likely for an alliance target than (8), since the latter requires a comparable degree of trust from both parties. This is the pattern I have observed in about a dozen informal classroom experiments.(note 3)

Discussion and extensions

Although it is rare, alliances like (8) along 'equal-betrayal' paths have actually been honoured in some of my classes. This was in a setting where a 'good' reputation might be valued: a small class of executive MBA students who met together on Saturdays for 2 years.

Whether or not an alliance is concluded honourably, however, it still seems more likely to be attempted along an 'equal-betrayal opportunity' path, rather than along one where the opportunity is one-sided. It would be interesting to test this hypothesis in controlled experiments. Having run this experiment 'pedagogically' about a dozen times, I have seen only one instance when the initial alliance attempted was not along an 'equal-betrayal' path: that is, one example of a Centre–Right alliance aiming at (10) rather than the more common (8). Any attempted alliance is easily explained and reconstructed during post-game discussion, using the decision tree.

Reputation is important. Even a politician is careful about promises to other politicians, since losing the power to make deals leaves one without power of any kind. To call someone a 'compromiser' is usually derogatory these days, but the Condorcet paradox shows why some compromise is needed to escape gridlock, and why its likely form is a reciprocal promise not to do harm to the other side. Indeed, 'mutual promise' is an older meaning of the English word 'compromise',(note 4) and is still the primary sense of the Spanish compromiso or Portuguese compromisso – usually translated as 'commitment'.

Alliances require more than immediate self-interest; they need binding commitments, moral or otherwise. In non-cooperative game theory, keeping such a commitment is rational only if future games make good reputation valuable. If alliance commitments in this game collapse – as they usually do, in my experience – then this is a positive science result consistent with non-cooperative game theory. But there is also a normative implication for cooperative game theory. A 'constitutional' commitment to principles beyond self-interest may be required to escape the chaos implied by the Condorcet paradox, or its generalisation in Arrow's impossibility theorem (see Moulin, 1995). The paradox suggested by this classroom game is that mutual betrayal, or rather the opportunity for mutual betrayal, may actually be a crucial ingredient for building mutual trust. The irony is, I think, in the spirit of Condorcet, Ben-Gurion and their insights – born of real-world experience – into the paradoxes of power.

Appendix: Set-up and voting rules for 'business goals' game

1 Preferences of the three groups

The class has been divided up into three groups making up the management of a large corporation: Left (L), Right (R) and Centre (C). These three groups have approximately equal representation on the board of directors, and no final decision will be possible unless at least two of the three agree.

The task at hand is to settle on the two most important goals for the corporation. This is not just a philosophical exercise, but has a direct influence on policy, as will be seen.

All three groups, L, R and C, agree that the most important goals for the corporation are:

Honesty (H),
Profitability (P)
Government contracts (G)

However, even though all agree that these three goals are important, they also understand that at most two out of the three are compatible. Each of the three groups has a different view of the importance of these three goals, and so is willing to 'sacrifice' a different one. The preference orderings of Left, Centre, and Right are:

Left (L): G > H > P
Centre (C): P > G > H
Right (R): H > P > G

where '>' means 'preferred to'. The Left prefers to do government work for an honest company – one that cannot be profitable. The Centre wants a profitable company with government work – which means it cannot be honest. The Right wishes for an honest and profitable company – but this means no government work.

Each group has its own preference rankings: it assigns its own first-best alternative 3 points, its own second-best 2 points, and its own third-best 1 point. At the same time, each group recognises that its benefits depend on the order ranking decided for the company's goals: it will get 3 points from whatever is in first place, 2 points from whatever is in second place, and 0 points from whatever falls into third place (and is thus eliminated from further consideration).

Each group's total value from any ordering of the alternatives is found by multiplying its own preference ranking by the actual order weighting, option by option, and summing. To make these computations explicit, take a look now at Table 1.

Table 1 Valuations of possible outcomes for Left, Centre and Right

G = Government contracts; H = Honesty; P = Profitability

Actual outcome     Valuation to Left (preferences: G > H > P)
G > H > P          3 x 3 + 2 x 2 + 0 x 1 = 13
H > G > P          3 x 2 + 2 x 3 + 0 x 1 = 12
G > P > H          3 x 3 + 2 x 1 + 0 x 2 = 11
P > G > H          3 x 1 + 2 x 3 + 0 x 2 = 9
H > P > G          3 x 2 + 2 x 1 + 0 x 3 = 8
P > H > G          3 x 1 + 2 x 2 + 0 x 3 = 7

Actual outcome     Valuation to Centre (preferences: P > G > H)
P > G > H          3 x 3 + 2 x 2 + 0 x 1 = 13
G > P > H          3 x 2 + 2 x 3 + 0 x 1 = 12
P > H > G          3 x 3 + 2 x 1 + 0 x 2 = 11
H > P > G          3 x 1 + 2 x 3 + 0 x 2 = 9
G > H > P          3 x 2 + 2 x 1 + 0 x 3 = 8
H > G > P          3 x 1 + 2 x 2 + 0 x 3 = 7

Actual outcome     Valuation to Right (preferences: H > P > G)
H > P > G          3 x 3 + 2 x 2 + 0 x 1 = 13
P > H > G          3 x 2 + 2 x 3 + 0 x 1 = 12
H > G > P          3 x 3 + 2 x 1 + 0 x 2 = 11
G > H > P          3 x 1 + 2 x 3 + 0 x 2 = 9
P > G > H          3 x 2 + 2 x 1 + 0 x 3 = 8
G > P > H          3 x 1 + 2 x 2 + 0 x 3 = 7

2 Voting rules

2.1 Agenda setting: choosing the 'Finalist'

Before the actual voting can begin, all three groups must agree upon an 'agenda': that is, the order of the voting on the three alternatives. This is equivalent to deciding which one of the three will be guaranteed a place in the final pair-wise contest. We will call this option the Finalist.

Each group will begin by caucusing together quietly, either in its own section of the room or, if possible, in another nearby room. They should try to decide their overall strategy for the voting – which alliances to form, what promises to make, and whether or not, or for how long, they mean to keep those promises.

The first order of business is to determine the Finalist. The groups will meet together in private pairs, in sequence. In this way, no group can be sure of what the other two have agreed upon, although representations will, of course, be made. After caucusing, each group will probably want to choose some member to be its chief negotiator with the other two groups. The ordering of these pair-wise meetings is: L + C, C + R, R + L followed by a second round of L + C, C + R, R + L.

After the negotiations have taken place, each group will write its choice for Finalist (determined by whatever voting method the group finds appropriate) on a slip of paper, and pass it to the Instructor. The Instructor will then read out the choices, and determine if there has been any agreement on a Finalist.

If there has not been an agreement by at least two of the three groups, then the agenda setting must be repeated (but with just one more meeting between each pair) until there is an agreement on the Finalist. For our example, let us assume that G (government contracts) has been selected as the Finalist.

2.2 Voting rounds 1 and 2

The actual voting is simpler than the agenda setting on the Finalist. Each group will make a final private consultation with its members. This can usually be in the same room with the other groups, by gathering closely together and speaking with lowered voices.

Since G has been selected as the Finalist in our example, the first two-way contest must be H (honesty) versus P (profitability). Each group will then write down its choice for the first round of voting, and pass the piece of paper to the Instructor, who reads off the votes. Although they will usually have made a promise on how to vote, each group is free to vote whichever way it chooses.

Whichever alternative gets the most votes wins the first-round contest: let us say H. The second round of voting would then be between that winner and the Finalist: H versus G. Again, each group writes down its choice on a piece of paper and passes it to the Instructor, who reads the votes.

After the winner of the second round has been chosen, let us assume G, then the rank ordering of the three alternatives is determined: in our example, G > H > P. This ordering determines the score of each group, as can be read from Table 1.

The Instructor will then open discussion of the strategic nature of this voting game.

References

Davis, D. D. and Holt, C. A. (1993) Experimental Economics, Princeton, NJ : Princeton University Press.

Friedman, T. (1989) From Beirut to Jerusalem, New York : Doubleday.

Gardner, R. (1995) Games in Business and Economics, New York : Wiley.

Holt, C. A. and Anderson, L. R. (1999) 'Agendas and strategic voting', Southern Economic Journal, vol. 65, no. 3, pp. 622–9.

Moulin, H. (1995) Cooperative Microeconomics: A Game-theoretic Introduction, Princeton, NJ : Princeton University Press.

Oxford English Dictionary (1971) Oxford : Oxford University Press.

Shubik, M. (1984) Game Theory in the Social Sciences: Concepts and Solutions, Cambridge, MA: MIT Press.

Sulock, J. M. (1990) 'The free rider and voting paradox "games"', Journal of Economic Education, vol. 21, no. 1, pp. 65–9.

Notes

[1] As Gardner says in his accessible treatment (1995, p. 243), 'sub-game perfection is the inspiration for the concept of sequential equilibrium. A sequential equilibrium satisfies sub-game perfection on sub-games [only].' A sequential equilibrium path represents each bloc's best moves forward from some intermediate point, as assessed from the final outcome after performing 'backward induction' on both the payoffs and the probabilities over these payoffs.

Unless this is an advanced course, I would not recommend developing the probability side of sequential equilibria or the related concept of perfect equilibria, both of which involve Bayesian induction. The important point here is that sequential equilibria allow one to examine credibility only over a game's component subgames. In the present game, I ignore any precise probability, subjective or otherwise. All that is needed for a sequential equilibrium here is that each bloc assesses it as highly likely that the other two blocs will vote according to their true preferences.

[2] Formally, each bloc has the same Shapley value (Gardner, 1995; Shubik, 1984). The Shapley value for any bloc is the expected marginal value it brings to all possible future alliances – a measure of how much it may be able to extract in bargaining for its share of the spoils. Roughly speaking, this marginal value is the gain to each possible coalition if this bloc is part of it, rather than voting independently, multiplied by the probability that a particular coalition will actually be formed. The Shapley value uses a 'neutral' or zero prior-knowledge measure of these probabilities, randomly permuting all possible sortings of blocs into coalitions, and seeing how often a particular alliance comes up.
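As a purely illustrative sketch (Python; the simple majority game below is an assumption standing in for the voting game's coalitional structure, with any coalition of two or more blocs counted as 'winning'), the permutation calculation just described gives each bloc the same Shapley value of one-third:

```python
# Illustrative sketch: Shapley values in a symmetric three-bloc majority game,
# where any coalition of at least two blocs is worth 1 and smaller ones worth 0.
from itertools import permutations

blocs = ["Left", "Centre", "Right"]

def worth(coalition):
    return 1 if len(coalition) >= 2 else 0

orders = list(permutations(blocs))
shapley = {b: 0.0 for b in blocs}
for order in orders:
    coalition = []
    for bloc in order:
        marginal = worth(coalition + [bloc]) - worth(coalition)
        shapley[bloc] += marginal / len(orders)
        coalition.append(bloc)

print(shapley)   # each bloc gets 1/3 -- equally 'pivotal', as stated in the note
```

The equal values follow from the symmetry of the game; the point of the sketch is simply to show the permutation logic behind the Shapley value.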

[3] I know of no theoretical result to buttress this conjecture. As for empirical verification, I have not run a large number of game-experiments, and none has been under controlled conditions. I can say only that this is the tendency observed in the informal games I have run over the past several years.

[4] The Oxford English Dictionary gives 'mutual promise' as one of the earliest meanings of the word. It cites a 1448 publication, The Craft of Lovers, which advised its readers that 'Ye should be trusty and trew [true] of compromis [compromise]' (1971, p. 746).

Contact details

James Stodder
Lally School of Management and Technology
Rensselaer Polytechnic Institute at Hartford
275 Windsor Street
Hartford, CT 06120-2991
USA

Tel: (860) 548 7860
Email: stodder@rh.edu
