
How to predict the frequency of voting events in actual elections

Florenz Plassmann* Department of Economics, State University of New York at Binghamton

Binghamton, NY 13902-6000 [email protected]

T. Nicolaus Tideman Department of Economics, Virginia Polytechnic Institute and State University

Blacksburg, VA 24061 [email protected]

This version: March 19, 2012

Extended abstract:

Two commonly used criteria for evaluating voting rules are how infrequently the rules provide

opportunities for strategic voting and how infrequently they encounter voting paradoxes. The

lack of ranking data from enough actual elections to determine these frequencies with reasonable

accuracy makes it attractive to investigate ranking data simulated with Monte Carlo methods.

But such simulations permit inferences about actual frequencies only if they are conducted

through statistical models that generate ranking data with the same statistical properties as

ranking data from actual elections. We offer statistical evidence that ranking data simulated with

a spatial model of vote-casting are extremely similar to ranking data from actual elections.

Every voting rule is vulnerable to strategizing and to multiple voting paradoxes. Strategizing

refers to a voter misrepresenting his true preferences for the candidates to increase the

probability that a preferred candidate will win the election. Strategizing is undesirable if voting

is intended to provide information about voters’ preferences, for example, to establish a political

* Corresponding author. Phone: (607) 777-4304, Fax: (607) 777-2572.


mandate. Voting paradoxes are situations in which voting rules yield counterintuitive results—

for example, a voting rule might fail to declare as winner a candidate who receives an absolute

majority of the votes (the absolute majority paradox). Voting theorists know the paradoxes to

which different voting rules are vulnerable—for example, they know that the Borda rule and the

anti-plurality rule, two commonly discussed voting rules, are both vulnerable to the absolute

majority paradox. However, voting theorists do not know the frequencies with which this

paradox occurs for these rules in actual elections. Analyses of our simulated ranking data

suggest that, if we use the Borda rule, we can expect the absolute majority paradox to occur no

more than once in 2,500 elections with 1,000 or more voters, and no more than once in 100

elections with exactly 11 voters. In contrast, if we use the anti-plurality rule, then we can expect

to observe the absolute majority paradox about once every 34 elections with 1,000 or more

voters, and as frequently as once every 9 elections when there are exactly 11 voters. The

variations in these frequencies indicate that mere knowledge that a voting rule is vulnerable to a

specific paradox is not sufficient to assess the rule’s attractiveness relative to that of other voting

rules. Ranking data simulated with our model can provide new information that will help to

identify the more attractive voting rules.

Keywords: spatial model of voting, ordinal ranking data, urn models, Kullback-Leibler

divergence, Kolmogorov-Smirnov test


1. INTRODUCTION

Which voting rule should we use if we need to choose among three or more candidates? Two

commonly used criteria for assessing a voting rule are the rule’s resistance to strategizing and its

ability to limit the frequency of voting paradoxes. Gibbard (1973) and Satterthwaite (1975)

showed that no minimally acceptable voting rule is immune to strategizing, but voting rules

differ in the magnitudes of the incentives they offer for strategizing. Similarly, while most

voting paradoxes occur only for some voting rules, no voting rule is immune to all voting

paradoxes (see, for example, Nurmi, 1999, and Tideman, 2006). Voting theorists agree that the

most widely used voting rule, plurality voting, provides strong incentives for strategizing and

that it is susceptible to many severe paradoxes. Thus it is worthwhile to look for a better rule.

But voting theorists do not agree about which voting rule is best, to a large extent because they

have been unable to make reliable assessments of the frequencies with which voting events like

voting paradoxes and opportunities for strategizing arise under different voting rules. In this

paper we identify a procedure for simulating data that have the same statistical properties as

ranking data from actual elections. Analyses of data simulated with this procedure will provide

new information about the occurrence of voting events and thus permit progress towards identifying more attractive voting rules.

Evaluating voting rules according to how frequently they encounter different paradoxes is

not a new idea. However, previous research has not been as informative as one might have liked.

Most voting rules with attractive properties require voters to rank the candidates—as opposed to

casting a vote only for the voter’s most preferred candidate—and there are not nearly enough

data from actual elections in which voters are asked to rank the candidates to estimate these

frequencies with any acceptable degree of accuracy. Gehrlein (2006) lists a large number of

empirical analyses of Condorcet’s paradox that use data from actual elections and surveys. The

largest data set with complete rankings contained information on 87 elections, and most analyses

use data from between 1 and 24 elections. Such analyses can illustrate that voting events like


strategic behavior and voting paradoxes occur, but they do not permit estimation of the

frequencies of occurrence with acceptable degrees of accuracy and precision.

To circumvent the problem posed by the scarcity of election ranking data, analysts have

undertaken Monte Carlo simulations of elections. But there is considerable evidence that the

results of such analyses depend greatly on the statistical model of vote-casting used to simulate

the ranking profiles.1 (A ranking profile describes how many ballots voters have cast for each of

the m! strict rankings—rankings without ties—of the m options.) There is also a sizeable

literature that derives such frequencies analytically, assuming specific models of vote-casting

(for a summary, see Gehrlein and Lepelley, 2011). However, analysts have generally chosen

these models of vote-casting because of their analytical properties, and not because they believed

that these models reflect the distribution of ranking profiles that one would expect to observe in

actual elections. Tideman and Plassmann (2012) compare 12 models of vote-casting to assess

how well these models describe observed ranking profiles. They find that a spatial model fits

observed election data much better than the others—by some measures the fit is an order of

magnitude better than that of the second best model—and that ranking profiles simulated with

the spatial model are far more similar to ranking profiles from actual elections than profiles

simulated with any of the other 11 models. But are ranking profiles simulated with the spatial

model sufficiently similar to profiles from actual elections so that analyses on the simulated

profiles can substitute for analyses on observed profiles? In this paper we establish that this is

indeed the case.

We compare ranking profiles that we simulate under the spatial model with profiles from

three data sets: one that we compiled from actual elections and two others that we compiled from

surveys that contain rankings of political candidates. In each data set we consider all possible

comparisons of three candidates within each election, thus constructing three new series of three-candidate elections. This construction of three-candidate elections is standard practice in empirical analyses of elections (see section 5.1).

1 See, for example, Merrill (1984), Chamberlin and Featherston (1986), Nurmi (1992, 1999), and Tideman and Plassmann (2012).

We assess the similarity of observed and

simulated ranking profiles with a Kolmogorov-Smirnov (KS) test. For comparisons with ranking profiles of three-candidate elections constructed in this way from actual elections, the profiles simulated with the spatial model pass the KS test virtually every time! The test results for the

two data sets compiled from surveys are not quite as good, although the simulated profiles still

pass the KS tests most of the time by some measures. The results beat our expectations and

suggest that ranking profiles simulated with the spatial model can provide reliable information

about the frequencies of opportunities for strategic behavior and voting paradoxes in actual

elections. Until we have access to more ranking profiles from many additional actual elections,

we consider such simulations to be the most promising approach for analyzing the attractiveness

of voting rules.
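The restriction of complete rankings to three-candidate sub-elections can be sketched as follows. This is a minimal illustration; the function name and the four-candidate ballot counts are hypothetical, not data from the paper:

```python
from itertools import combinations
from collections import Counter

def three_candidate_profiles(ballots, candidates):
    """Restrict each complete strict ranking to every triple of candidates
    and tally the resulting three-candidate rankings."""
    profiles = {}
    for triple in combinations(candidates, 3):
        counts = Counter()
        for ranking, n in ballots.items():
            restricted = tuple(c for c in ranking if c in triple)
            counts[restricted] += n
        profiles[triple] = dict(counts)
    return profiles

# A hypothetical four-candidate election: ranking -> number of ballots.
ballots = {("A", "B", "C", "D"): 5, ("D", "C", "B", "A"): 3, ("B", "A", "D", "C"): 4}
print(three_candidate_profiles(ballots, "ABCD")[("A", "B", "C")])
# {('A', 'B', 'C'): 5, ('C', 'B', 'A'): 3, ('B', 'A', 'C'): 4}
```

Each of the four candidate triples yields one three-candidate ranking profile, so an election with m candidates contributes C(m, 3) profiles to the constructed series.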

The remainder of this paper is organized as follows: in section 2 we discuss what type of

framework is appropriate for the analysis of ranking profiles. In section 3 we formalize a

statistical model of vote-casting and in section 4 we describe our technique for comparing

observed and simulated data. In section 5 we summarize the spatial model. In section 6 we

describe and compare our three data sets. Section 7 is the heart of the paper, where we provide

evidence that the spatial model can simulate ranking profiles that are very similar to observed

ranking profiles. Section 8 concludes.

2. A FRAMEWORK FOR ANALYZING THE FREQUENCY OF VOTING EVENTS

Voting theorists identify the occurrence of voting events like paradoxes and opportunities for

strategic voting through an election’s ranking profile. As an example, consider an election with

three candidates, labeled A, B, and C, and the six strict rankings ABC, ACB, CAB, CBA, BCA,

and BAC. Assume that ten voters submit ballots with the ranking ABC, nine voters submit

ballots with the ranking BCA, and two voters submit ballots with the ranking CBA. No voter


submits a ballot with the rankings ACB, BAC, or CAB. If the ballots are evaluated with the

plurality rule, then candidate A wins the election because he receives the most first ranks (10).

But if the candidates are compared with each other, then candidate B beats candidate A (11:10 votes) as well as candidate C (19:2 votes). A candidate that beats all other candidates in pairwise

comparison using majority rule is called the Condorcet winner, and many voting theorists

consider a Condorcet winner, if one exists, a natural winner (see, for example, Felsenthal, 2012).

If there is a Condorcet winner and a voting rule chooses a different candidate as winner, then

voting theorists say that the Condorcet winner paradox has occurred. In the example, the

plurality rule winner A also loses against candidate B (10:11 votes) as well as against candidate C (10:11 votes). If a voting rule elects a candidate as winner who loses against all other candidates

in pairwise comparison, then voting theorists speak of the occurrence of Borda’s paradox.
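The example can be verified mechanically. The sketch below (the helper names are ours) tallies first ranks and pairwise comparisons for the 21-ballot profile above:

```python
# Ranking profile from the example: ranking string -> number of ballots.
profile = {"ABC": 10, "BCA": 9, "CBA": 2}
candidates = "ABC"

# Plurality: count first-place ranks.
first_ranks = {c: sum(n for r, n in profile.items() if r[0] == c) for c in candidates}
plurality_winner = max(first_ranks, key=first_ranks.get)

def beats(x, y):
    """Number of ballots that rank candidate x above candidate y."""
    return sum(n for r, n in profile.items() if r.index(x) < r.index(y))

# A Condorcet winner beats every other candidate in pairwise comparison.
condorcet = [c for c in candidates
             if all(beats(c, d) > beats(d, c) for d in candidates if d != c)]

print(first_ranks, plurality_winner)     # {'A': 10, 'B': 9, 'C': 2} A
print(beats("B", "A"), beats("B", "C"))  # 11 19
print(condorcet)                         # ['B']  (plurality elects A: Condorcet winner paradox)
# Borda's paradox: the plurality winner A loses every pairwise comparison.
print(all(beats(d, "A") > beats("A", d) for d in "BC"))  # True
```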

One might argue that an analysis of the occurrence of voting events like the Condorcet

winner paradox and Borda’s paradox requires a model of voter behavior that describes each

voter’s preferences over all candidates as well as his decisions about how much information

about the candidates to acquire, whether or not to go to the polls, and whether to vote according

to one’s true preferences or according to a strategic calculation. Such a model would illustrate

the voters’ decisions to cast ballots with particular rankings and thus explain why one might

observe a specific ranking profile. However, our goal is much more modest—we are interested

in calculations that can be made on the basis of probabilities with which different ranking

profiles are observed, without asking why the probabilities are what they are. A model of voter

behavior that would permit the derivation of probabilities from more basic facts would clearly be

useful. But we believe it is appropriate to address the simpler question of the empirical

distribution of election profiles before tackling the much more ambitious project of a model of

voter behavior.

Thus all that is needed for our purpose is a statistical model of vote-casting that describes

the observed distribution of ranking profiles. Such a model reflects the decisions that voters


make when casting the ballots. It does not distinguish the circumstances of one election from the

circumstances of another and modulate its predictions accordingly. But one can use the

distribution of ranking profiles implied by such a model to assess the frequencies with which one

should expect to observe voting events of interest. While we will not learn the reason for

observing Borda’s paradox, we can derive the probability of Borda’s paradox, as a function of

the number of voters and the voting rule that is used.

We are not concerned about the possibility that the observed ranking profiles to which we

compare our simulated profiles reflect voters’ strategic considerations rather than their genuine

preferences. If strategic voting occurs in actual elections, then it is appropriate to assess the

frequencies with which voting paradoxes occur in actual elections on the basis of ranking

profiles that take account of such strategizing. One might argue that an analysis of the frequency

of opportunities for strategic voting itself requires ranking profiles that reflect voters’ genuine

preferences. However, to answer the question of how frequently voters are able to manipulate

the outcome of an election by misrepresenting their preferences, it is again appropriate to

consider the ranking profiles that voters actually face, rather than the hypothetical profiles that

voters would face if everybody else reported their genuine rankings. It is true that such an

analysis does not provide information about the likelihood that an election outcome reflects the

voters’ genuine preferences. But even an analysis of strategic voting that is based on genuine

ranking profiles cannot provide this information, because identifying opportunities for strategic

voting when a voter knows the rankings of all other voters does not imply that an actual voter

with limited access to this information will actually misrepresent his preferences in the same

situation. To the extent that voting theorists are generally interested in identifying a voting rule’s

resistance to strategizing, it is appropriate to assess this resistance with the ranking profiles that

voters face.2

2 As an analogy, assume that we wanted to determine how frequently one should expect to observe a Straight Flush in a hand of Poker, and we do not know the number of suits and the number of cards per suit in a deck of cards. We can estimate the frequency of a Straight Flush empirically by observing all hands in different games of Poker. If dealers cheat in some of the games that we observe, then our estimates incorporate the frequency of cheating and thus indicate how often one should expect to observe a Straight Flush in actual games, rather than in a hypothetical environment in which cheating does not occur. We assert that (1) the dealers' motivations for cheating are irrelevant for our estimates so we do not need a model of dealer behavior to estimate the overall frequency of a Straight Flush, and (2) it is appropriate to incorporate the frequency of cheating if cheating occurs in actual games. Incorporating the dealers' motivations for cheating to determine how the frequency of a Straight Flush varies across dealers would be a much more ambitious project than we are undertaking.

3. A STATISTICAL MODEL OF VOTE-CASTING

We describe a framework of elections in which different elections have the same number of candidates but might have different numbers of voters. Consider an election i with m candidates and n_i voters. There are m! strict rankings (rankings without ties) of the m candidates. Assume that every voter submits a ballot with one of these strict rankings, and that n_ij, j = 1, …, m!, voters submit ranking j, with Σ_j n_ij = n_i. Let R_i = (n_i1, …, n_im!) be the election's ranking profile. Assume that the properties of ranking profile R_i are described by the random vector R_i with multivariate density function f_i(R_i, Θ_i), where Θ_i denotes the parameter set of f_i. Because the components of Θ_i might vary across elections—for example, elections might differ in the number of voters—the random vector R_i has a subscript i.

To formalize f_i(R_i, Θ_i), let N_ij be a random variable that describes the distribution of n_ij and let N_i = (N_i1, …, N_im!) be a random vector with joint density function

    f_N(N_i; θ_N),                                (1)

where θ_N is a vector of parameters. Because the m! components of N_i are integers, f_N(N_i; θ_N) is a discrete m!-variate distribution. We call a specification of f_N a "model of N."

Consider the vote share n_ij/n_i of the ballots with ranking j in election i. Let p_ij be the corresponding vote-share probability,3 with Σ_j p_ij = 1, and let p_i = (p_i1, …, p_im!) be the vector of vote-share probabilities. Let P_j be a random variable that describes the distribution of p_ij, and let P = (P_1, …, P_m!) be a random vector with joint density function

    f_P(p; θ_P).                                  (2)

The parameter vector θ_P does not vary across elections, so the random vector P does not have a subscript. Because the m! components of p are real numbers that sum to 1, f_P(p; θ_P) is a continuous m!-variate distribution with support on the unit m!-simplex. We call a specification of f_P a "model of P." The model of N describes the distribution of N_i and the dependence among the ballots in election i, given a specific realization of P, while the model of P specifies the probabilities of the vote shares and describes how they vary across elections, if they do.

If f_P(p; θ_P) is not degenerate, then the vote-share probabilities can vary across elections. Assume that θ_N = (p_i1, …, p_im!, φ), where φ describes any parameters in addition to the vote-share probabilities of the ballots. We define a model of vote-casting as a mixture of N and P with

    f_i(R_i, Θ_i) = ∫ f_N(N_i; p_i1, …, p_im!, φ) f_P(p; θ_P) dp_i1 ⋯ dp_im!,    (3)

where Θ_i = (φ, θ_P). We describe several models of N as well as the spatial model of P in section 5.

3 The probability p_ij differs from n_ij/n_i because n_ij/n_i is the realized rate of success from n_i trials of a process with a previously determined probability of success of p_ij.
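As a concrete illustration of this two-stage mixture, the sketch below first draws vote-share probabilities from a stand-in model of P (a symmetric Dirichlet, chosen purely for illustration) and then draws ballot counts from a multinomial model of N. Function names and parameter values are hypothetical:

```python
import random

def simulate_profile(n_voters, alpha, rng):
    """Two-stage draw: vote-share probabilities p from a Dirichlet
    (a stand-in model of P), then ballot counts from a multinomial
    (a model of N with independent ballots)."""
    # Dirichlet draw via normalized Gamma variates.
    g = [rng.gammavariate(a, 1.0) for a in alpha]
    p = [x / sum(g) for x in g]
    # Multinomial draw: each voter picks ranking j with probability p_j.
    counts = [0] * len(alpha)
    for _ in range(n_voters):
        u, acc = rng.random(), 0.0
        for j, pj in enumerate(p):
            acc += pj
            if u <= acc:
                counts[j] += 1
                break
        else:
            counts[-1] += 1  # guard against floating-point shortfall
    return counts

rng = random.Random(1)
profile = simulate_profile(1000, [1.0] * 6, rng)  # m = 3, so m! = 6 rankings
print(len(profile), sum(profile))  # 6 1000
```

Repeating the outer (Dirichlet) draw for each election lets the vote-share probabilities vary across elections, as a non-degenerate model of P requires.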

4. ASSESSING THE ACCURACY OF A STATISTICAL MODEL OF VOTE-CASTING

Consider a series of e independent elections that might have different numbers of voters but that have the same number of candidates. We observe the collection of ranking profiles R_1, …, R_e that we view as e independent draws from the density functions f_i(R_i, Θ_i) with parameter sets Θ_i, i = 1, …, e. We need to index the density functions if the observed elections have different numbers of voters. The task is to use the collection of observed ranking profiles to identify a set of density functions g_i(R_i, Γ_i) whose draws are indistinguishable from R_1, …, R_e.

Let the model of vote-casting g_i(R_i, Γ_i) be a guess of f_i(R_i, Θ_i). To assess the similarity of g_i(R_i, Γ_i) and f_i(R_i, Θ_i), obtain e draws from g_i(R_i, Γ_i), where each simulated profile R̃_i has the same number of voters as the corresponding observed R_i, to yield the collection of simulated ranking profiles R̃_1, …, R̃_e. To be able to compare the properties of ranking profiles from elections with different numbers of voters, divide the m! components of R̃_i and R_i by their sum to obtain the collections of normalized ranking profiles R̃_1/n_1, …, R̃_e/n_e and R_1/n_1, …, R_e/n_e. Each normalized ranking profile R̃_i/n_i or R_i/n_i describes a point in the unit m!-simplex.

To determine the probability that the two collections of normalized ranking profiles are draws from the same density function, consider a proper subset S of the unit m!-simplex that does not have full Lebesgue measure relative to the unit m!-simplex. An extreme example of S is a subset with the single element s_1 = (1/m!, …, 1/m!), that is, the point of measure zero at the center of the unit m!-simplex. We call such a subset S the "reference subset." For each element i in R̃_1/n_1, …, R̃_e/n_e and R_1/n_1, …, R_e/n_e, identify the element in S whose position minimizes the Euclidean distance to R̃_i/n_i and R_i/n_i, respectively.4 Denote these minimum distances as r_i and o_i. Let r and o be two random variables with density functions f_r(r) and f_o(o) that describe the distributions of r_i and o_i, respectively. View each distance r_i and o_i as an independent draw from the respective density function. Ordering the two collections of r_i and o_i according to their magnitudes yields the empirical cumulative distribution functions (ecdfs) of r and o, F_r(r) and F_o(o). Use the Kolmogorov-Smirnov (KS) test to assess the likelihood that the ecdfs F_r(r) and F_o(o) are representations of the cdf F(d) of a single random variable d. Because the distribution of the KS test statistic under the null hypothesis that both data sets were drawn from f(d) is unknown, it is necessary to bootstrap this distribution; we follow the procedure suggested in Abadie (2002), with 999 bootstrap repetitions.5

A single KS test will be misleading if the collection of simulated profiles R̃_1, …, R̃_e does not represent a set of typical draws from g_i(R_i, Γ_i). We therefore obtain 1,000 collections of

4 The Euclidean distance is the square root of the sum of squared differences between the m! elements of R̃_i/n_i and R_i/n_i, respectively, and the corresponding elements of S.
5 We found that the p-value that we obtained through bootstrapping was generally within 1 percent of the approximation suggested in Stephens (1970).


simulated profiles R̃_1, …, R̃_e and we use the KS test to compare each collection of simulated profiles with the collection of observed profiles R_1, …, R_e. We then evaluate the ecdf of the resulting 1,000 p-values. The larger the degree of similarity between f_i(R_i, Θ_i) and g_i(R_i, Γ_i), the closer is the distribution of p-values to a uniform distribution and the closer is its ecdf to a straight line.6
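The distance-and-bootstrap procedure can be sketched as follows. This is a minimal standard-library implementation, not the authors' code; the reference subset is taken to be the single center point of the simplex:

```python
import bisect
import random

def distance_to_center(profile):
    """Euclidean distance from a normalized ranking profile to the
    center (1/m!, ..., 1/m!) of the unit simplex."""
    n, k = sum(profile), len(profile)
    return sum((c / n - 1 / k) ** 2 for c in profile) ** 0.5

def ks_statistic(x, y):
    """Two-sample KS statistic: the largest gap between the two ecdfs."""
    xs, ys = sorted(x), sorted(y)
    ecdf = lambda s, t: bisect.bisect_right(s, t) / len(s)
    return max(abs(ecdf(xs, t) - ecdf(ys, t)) for t in xs + ys)

def bootstrap_p_value(x, y, reps=999, rng=random.Random(0)):
    """Bootstrap the null distribution of the KS statistic by resampling
    from the pooled sample (in the spirit of Abadie, 2002)."""
    observed = ks_statistic(x, y)
    pooled = list(x) + list(y)
    hits = sum(ks_statistic([rng.choice(pooled) for _ in x],
                            [rng.choice(pooled) for _ in y]) >= observed
               for _ in range(reps))
    return (hits + 1) / (reps + 1)

print(distance_to_center([10, 10, 10, 10, 10, 10]))  # 0.0
print(ks_statistic([1, 2, 3], [1, 2, 3]))            # 0.0
```

Applying `distance_to_center` to each observed and each simulated profile yields the two samples of distances that `bootstrap_p_value` then compares.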

5. THE SPATIAL MODEL OF VOTE-CASTING

5.1 Models of N

If the voters submit their ballots independently, then f_N(N_i; θ_N) is the pdf of a multinomial distribution with θ_N = (p_i1, …, p_im!, n_i), E(N_ij) = n_i p_ij, V(N_ij) = n_i p_ij(1 − p_ij), and Cov(N_ij, N_ik) = −n_i p_ij p_ik.

An intuitive way of modeling dependent ballots is by conceiving of voting in terms of an urn model, with different possibilities for the replacement rule. While the multinomial distribution describes drawing with replacement, drawing without replacement leads to the multivariate hypergeometric distribution with parameter vector θ_N = (p_i1, …, p_im!, n_i, k), where k is the number of ballots in the urn.7 The first two moments are E(N_ij) = n_i p_ij, V(N_ij) = n_i p_ij(1 − p_ij)(1 − Φ), and Cov(N_ij, N_ik) = −n_i p_ij p_ik(1 − Φ), with Φ = (n_i − 1)/(k − 1). As k approaches infinity, the distribution converges to the multinomial distribution.

Drawing with replacement plus the addition of another ballot of the type drawn leads to the multinomial-Dirichlet distribution, which can be derived by letting the m! − 1 independent vote-share probabilities of the multinomial distribution follow a Dirichlet distribution with

6 If one examines two sets of (representative) draws from the same distribution, then in 1,000 repetitions the p-value of the KS test statistic exceeds 0.5 about 500 times and 0.95 about 50 times. With an infinite number of repetitions, the cdf of the p-values is a straight line from [0, 0] to [1, 1].
7 The standard parameterization of the hypergeometric distribution is θ_N = (k_1, …, k_m!) with k = Σ_j k_j, so that p_ij = k_j / Σ_j k_j. We reparameterize the k_j as p_ij k, with Σ_j p_ij = 1, which yields θ_N = (p_i1, …, p_im!, n_i, k).


parameter vector (p_i1 h, …, p_im! h).8 The first two moments of the resulting compound distribution are E(N_ij) = n_i p_ij, V(N_ij) = n_i p_ij(1 − p_ij)(1 + Ψ), and Cov(N_ij, N_ik) = −n_i p_ij p_ik(1 + Ψ), with Ψ = (n_i − 1)/(1 + h). As h approaches infinity, the variance of the distribution imposed on the original vote-share probabilities approaches zero and the compound distribution converges to the multinomial distribution. Further variations in the sampling and replacement procedures lead to distributions that can accommodate additional forms of dependence among the ballots (see, for example, Berg, 1985, and Johnson et al., 1997, pp. 200-231).9 However, in section 6.4 we show that the multinomial and the multinomial-Dirichlet distributions provide remarkably good descriptions of the dependence among the rankings in our three data sets.
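These urn models are straightforward to simulate. The sketch below, assuming the standard multinomial-Dirichlet variance with Ψ = (n − 1)/(1 + h), contrasts the theoretical variances and draws one overdispersed profile (the function names are ours):

```python
import random

def multinomial_draw(n, p, rng):
    """Tally n independent ballots cast with vote-share probabilities p."""
    counts = [0] * len(p)
    for _ in range(n):
        u, acc = rng.random(), 0.0
        for j, pj in enumerate(p):
            acc += pj
            if u <= acc:
                counts[j] += 1
                break
        else:
            counts[-1] += 1  # guard against floating-point shortfall
    return counts

def variances(n, p_j, h):
    """Variance of one component under the multinomial model and under
    the multinomial-Dirichlet model with Psi = (n - 1) / (1 + h)."""
    v_mn = n * p_j * (1 - p_j)
    psi = (n - 1) / (1 + h)
    return v_mn, v_mn * (1 + psi)

print(variances(100, 0.5, 1.0))  # (25.0, 1262.5)

# One multinomial-Dirichlet profile: Dirichlet shares via Gamma draws, then ballots.
rng = random.Random(3)
h, m_fact = 1.0, 6
g = [rng.gammavariate(h / m_fact, 1.0) for _ in range(m_fact)]
shares = [x / sum(g) for x in g]
profile = multinomial_draw(1000, shares, rng)
print(sum(profile))  # 1000
```

A small h concentrates the ballots on a few rankings, which is the extra dependence the compound distribution adds; a large h recovers the plain multinomial.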

5.2 A spatial model of P

Voting theorists have suggested several models of P. Tideman and Plassmann (2012) assess twelve such models, using two sets of observed ranking profiles that represent elections with m = 3 candidates, with one set being compiled from actual elections and the other set from surveys. They conclude that a spatial model of voting describes the distributions of these observed ranking profiles much better than any of the other eleven models. In this paper we therefore focus exclusively on the spatial model. Good and Tideman (1976) describe the philosophy as well as the technical details of the spatial model; we therefore summarize only its essential elements.

8 The standard parameterization of the Dirichlet distribution is (α_1, …, α_m!) with p_ij = α_j / Σ_j α_j. We reparameterize the α_j as p_ij h, with h = Σ_j α_j, which yields the parameter vector (p_i1 h, …, p_im! h). Note that this Dirichlet distribution does not describe a model of P because the parameter vector contains p_i—compounding the multinomial distribution with the Dirichlet distribution leads to a model of N that is described by f_N(N_i; θ_N) with θ_N = (p_i1, …, p_im!, n_i, h).
9 Note that we model the distribution of the m! rankings by specifying the vote-share probabilities of the rankings among the n_i ballots, rather than by specifying the probabilities with which individual voters submit these rankings. The vote-share probabilities equal these individual probabilities if indistinguishable voters submit their ballots independently, but a given vector of vote-share probabilities is consistent with a range of assumptions about the combination of probabilities and interdependence assigned to voters when there is dependence among the ballots. An alternative approach to model dependence among the ballots is to specify the probabilities with which individual voters submit the rankings, which then imply the vote-share probabilities (see, for example, Gelman et al., 2002). We do not follow this approach because we find it easier to calibrate our model to data from actual elections in terms of observable shares of rankings rather than in terms of unobservable individual probabilities.

The spatial model assumes that candidates are defined through their “attributes,” which

form a multi-dimensional “attribute space.” Every candidate possesses a specifiable quantity of

each attribute and thus has a unique location in attribute space. Every voter has an ideal point in

attribute space that describes the quantities of each attribute that the voter’s ideal candidate

would possess, as well as an indifference map that describes sets of locations of candidates that

the voter would find equally attractive. If the attribute space has at least m – 1 dimensions and a

marginal change in any candidate’s position does not alter the dimensionality of the space that

they span, then the positions of the m candidates in attribute space span an (m – 1)-dimensional

candidate space that is a subspace of attribute space. Voters’ indifference maps are defined in

candidate space through their definitions in attribute space.

We illustrate the spatial model for an election with three candidates, A, B, and C.

Assume that every voter submits a truthful ranking that reflects his ideal point, his indifference

surfaces, and the positions of the candidates. To determine, for each of the six strict rankings,

the fraction of voters who submit a vote for this ranking, consider the triangle in the two-

dimensional candidate plane that is formed by the locations of the three candidates (see figure 1).

The dashed lines in the figure are the perpendicular bisectors of the three sides of this triangle,

which intersect at the triangle’s circumcenter T and which divide the candidate plane into six

sectors. Assume that every voter’s utility loss from the choice of a particular candidate is the

same increasing function of the distance in candidate space between the candidate’s location and

the voter’s relative ideal point, so that the indifference surfaces in candidate space are concentric

spheres centered on the voter’s ideal point. Then in each sector, the order of the distances to the

locations of the three candidates from points in that sector represents the ranking of the

candidates by all voters with ideal points in that sector.
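For concreteness, the sector logic can be sketched as follows: a voter's ranking is simply the candidates sorted by distance from the voter's ideal point. The candidate coordinates below are illustrative, not taken from the paper.

```python
import numpy as np

# Hypothetical candidate locations in the two-dimensional candidate plane.
CANDIDATES = {"A": np.array([0.0, 1.0]),
              "B": np.array([-1.0, -0.5]),
              "C": np.array([1.0, -0.5])}

def ranking(ideal_point):
    """Return the strict ranking implied by an ideal point: candidates
    sorted by increasing Euclidean distance from the voter's ideal point."""
    p = np.asarray(ideal_point, dtype=float)
    return "".join(sorted(CANDIDATES,
                          key=lambda c: np.linalg.norm(CANDIDATES[c] - p)))

r = ranking([0.1, 0.9])   # → "ACB": closest to A, then C, then B
```

All ideal points in the same sector yield the same ranking, which is why the six sector integrals below determine the six vote-share probabilities.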

Page 14: How to predict the frequency of voting events in actual ...bingweb.binghamton.edu/~fplass/...VoteCasting_SSRN.pdf · 4 submits a ballot with the rankings ACB, BAC, or CAB.If the ballots

12 

 

The model is closed by specifying the distribution of ideal points. We follow Good and

Tideman (1976) in assuming that the distribution of ideal points in attribute space is spherical

multivariate normal, which implies that the distribution of “relative” ideal points in the candidate

plane is circular bivariate normal. Figure 1 shows the mode of this distribution as point O. The

integral of the density function of this distribution over each triangular sector is the expected

value of the fraction of voters who rank the candidates in the order corresponding to the sector’s

rank order. The six integrals over the six triangular sectors determine the vote-share

probabilities of the six rankings. In general, evaluating the spatial model for elections with m

candidates requires numerical integration over m! non-central wedges of an (m – 1)-variate

normal distribution. Currently we have such an algorithm only for the bivariate standard-normal

distribution (see DiDonato and Hageman, 1980), and we therefore need to restrict our analysis,

for the time being, to elections with three candidates.

Note that even though sectors that are opposite each other have the same angle at T, they

do not have the same integral of the density function and therefore do not imply the same vote-

share probability, unless O is on the perpendicular bisector of the line connecting the two

candidates. If O is exactly at the triangle’s circumcenter T, then the spatial model describes a

distribution of ideal points with pABC = pCBA, pBAC = pCAB, and pBCA = pACB. If, in addition, the

triangle is equilateral, then the spatial model yields pABC = pCBA = pBAC = pCAB = pBCA = pACB =

1/6.
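A minimal Monte Carlo check of this symmetry result: instead of the numerical integration used in the paper, we approximate the six sector integrals by sampling ideal points from the circular bivariate normal and counting rankings. With the mode O at the circumcenter of an equilateral triangle, all six vote-share probabilities should come out near 1/6 (triangle coordinates are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)

# An equilateral triangle of candidates whose circumcenter T is at the origin.
angles = np.pi / 2 + np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
cands = np.column_stack([np.cos(angles), np.sin(angles)])
labels = ["A", "B", "C"]

def vote_share_probabilities(mode, n=200_000):
    """Monte Carlo estimate of the six vote-share probabilities: sample
    ideal points from a circular bivariate normal centered at `mode` and
    record each point's ranking (candidates in order of increasing distance)."""
    pts = rng.normal(size=(n, 2)) + np.asarray(mode, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - cands[None, :, :], axis=2)  # n x 3
    orders = np.argsort(d, axis=1)
    rankings = ["".join(labels[j] for j in row) for row in orders]
    return {r: rankings.count(r) / n for r in set(rankings)}

# With O at the circumcenter T of an equilateral triangle,
# every probability should be close to 1/6.
shares = vote_share_probabilities(mode=[0.0, 0.0])
```

Moving `mode` off the circumcenter breaks the symmetry and makes opposite sectors carry different probabilities, as described above.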

Because the vote-share probabilities are unaffected by (a) rotations of the configuration around O and (b) changes that move the locations of all candidates proportionately along rays emanating from the triangle's circumcenter, the spatial model has not six but rather four degrees of freedom. Figure

2 shows one way of using these four degrees of freedom to parameterize the spatial model:

(1) Place the intersection of the perpendicular bisectors T at the origin of a Cartesian

coordinate system. The fact that the vote shares are independent of rotations around the


mode of the distribution of voters’ ideal points, O, permits us to rotate the coordinate

system so that O is located on its horizontal axis.

(2) The first degree of freedom specifies the distance between O and T.

(3) The remaining three degrees of freedom specify the angles formed by the line OT and the three perpendicular bisectors.

Thus any feasible set of values of the four degrees of freedom corresponds to a particular vector

of vote-share probabilities. Conversely, any set of six observed vote shares (derived by dividing

the six components of the observed ranking profile by the number of voters) permits us to derive

a set of values of the four parameters by choosing those values whose implied vote-share

probabilities (the integrals over the triangular-shaped slices under the bivariate normal

distribution) match as closely as possible the six observed vote shares.

To simulate elections with the spatial model, we assume that the three angles between

pairs of perpendicular bisectors follow a Dirichlet distribution, that the distance between O and T

follows a Weibull distribution, and that the division of the sector containing O into two parts by

OT follows a uniform distribution.10 Below we describe how we calibrate the parameters of the

first two distributions to our election data. We then use independent draws from the Dirichlet,

the uniform, and the Weibull distributions to construct six vote share probabilities that we use as

input into the model of N to draw the ranking profile.
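The simulation pipeline can be sketched as follows. The distribution parameters below are placeholders rather than our calibrated values, we substitute a Monte Carlo approximation for the numerical integration over the sectors, and the convention of placing O inside the first sector is ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_ranking_profile(n_voters=1000, dirichlet_alpha=(4.0, 4.0, 4.0),
                         weibull_shape=1.5, weibull_scale=0.5, n_mc=100_000):
    """Draw one simulated election: sample the four degrees of freedom of the
    spatial model, approximate the six vote-share probabilities by Monte Carlo,
    and draw the ranking profile from a multinomial."""
    # Three angle shares (Dirichlet); the three distinct angles between the
    # bisector directions through T sum to pi, and the pattern repeats after pi.
    theta = rng.dirichlet(dirichlet_alpha) * np.pi
    bounds = np.concatenate([[0.0], np.cumsum(theta)[:2],
                             [np.pi], np.pi + np.cumsum(theta)[:2]])
    # Distance between O and T (Weibull) and a uniform split of the sector
    # containing O, which fixes the direction from T to O.
    delta = weibull_scale * rng.weibull(weibull_shape)
    direction = rng.uniform() * theta[0]
    O = delta * np.array([np.cos(direction), np.sin(direction)])
    # Monte Carlo integrals of the circular bivariate normal centered at O
    # over the six sectors around T give the vote-share probabilities.
    pts = rng.normal(size=(n_mc, 2)) + O
    ang = np.arctan2(pts[:, 1], pts[:, 0]) % (2 * np.pi)
    probs = np.bincount(np.searchsorted(bounds, ang, side="right") - 1,
                        minlength=6) / n_mc
    probs = probs / probs.sum()        # guard against floating-point drift
    return probs, rng.multinomial(n_voters, probs)

probs, profile = draw_ranking_profile()
```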

4.3 A model of voter behavior versus a model of P

The spatial model can be interpreted either as a model of voter behavior or as part of a model of

vote-casting. It is important to distinguish the two interpretations. When viewed as a model of

voter behavior, the task would be to identify the positions of the ideal points in actual elections.

Knowledge of these positions would explain why one observes a particular ranking profile. The

10 The Dirichlet distribution is a natural choice for a distribution of three angle shares. Among various possible distributions for the distance between O and T, the cdf of the Weibull distribution comes remarkably close to the empirical cdf of the distances that we derive from observed ranking profiles.


key question would be whether observed ranking profiles can be interpreted as revealed

preferences that provide information about the positions of the ideal points. This question has

received considerable attention in the literature; Henry and Mourifié (2011) assess and reject the

validity of the spatial model of voting when it is interpreted as a model of voter behavior.

In contrast, when the spatial model serves as a model of P and thus as part of a model of

vote-casting, the focus is on the distribution of ranking profiles rather than the positions of ideal

points. The task is to parameterize the spatial model so that the model of vote-casting yields a

distribution of ranking profiles that corresponds to the distribution of observed ranking profiles.

Knowledge of the positions of ideal points is not required for this task. Thus our work is neither

related to nor affected by the literature on revealed preferences. We adopt the spatial model as a

model of vote-casting solely because, among all contenders for a model of P of which we are

aware, it comes closest to describing the distribution of observed ranking profiles, not because

we want to defend it as a model of voter behavior.

6. THE DATA

6.1 Description of the three data sets

We use one set of ranking data from actual elections and two sets from surveys. The ranking

data from actual elections are from a set of 58 elections that were administered by the Electoral

Reform Society (ERS) in the United Kingdom. The two sets of survey data are “thermometer

scores” from 19 election surveys conducted by the American National Election Studies (ANES)

between 1970 and 2008 and from 31 political surveys (the “Politbarometer” or PB) conducted by

the German Institute for Election Research between 1977 and 2008. The three data sets contain

individual ballots and individual survey responses, respectively, with rankings of between 3 and

29 candidates. For the sake of brevity, we will refer to the surveys as “elections” and to the

survey respondents as “voters.” The fact that we have to supplement election data with survey

data reflects the paucity of ranking data from actual elections. It is possible that the ballots in the


ERS elections reflect voters’ strategic considerations, and it is possible that there are strategic

considerations of survey respondents that differ from those of voters. As mentioned earlier, we

consider it appropriate to use data that might reflect strategic behavior. We can test whether

strategic considerations differ across data sets by comparing the estimates of the spatial model

parameters that we obtain from these data sets.

Because we can currently evaluate the spatial model only for elections with three

candidates and also because even the 108 elections in the three data sets are not enough to permit

reliable assessment of the accuracy of the spatial model, we use the ballots to construct all

possible combinations of three candidates within an election, treating each combination as one

election with three candidates. Henceforth we will refer to each such three-candidate comparison as an election.11 Because voters often do not rank all available candidates, some of these

constructed three-candidate elections have few voters, sometimes only a single voter. We found

that elections with too few voters contain too much random variation, and we therefore restrict

our analysis to constructed three-candidate elections for which we have at least 350 ballots.
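A sketch of this construction step (function and variable names are ours; the 350-ballot threshold is the one used in the paper):

```python
from itertools import combinations
from collections import Counter

def three_candidate_elections(ballots, min_ballots=350):
    """Build one ranking profile for every combination of three candidates.
    A ballot contributes to a combination only if it ranks all three
    candidates, and combinations with fewer than `min_ballots` usable
    ballots are dropped."""
    candidates = sorted({c for ballot in ballots for c in ballot})
    elections = {}
    for trio in combinations(candidates, 3):
        profile = Counter()
        for ballot in ballots:
            restricted = tuple(c for c in ballot if c in trio)
            if len(restricted) == 3:          # ballot ranks all three
                profile[restricted] += 1
        if sum(profile.values()) >= min_ballots:
            elections[trio] = profile
    return elections
```

Each value in the returned dictionary is a ranking profile: the number of votes for each strict ranking of the three candidates in that constructed election.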

From the ERS data we assembled 855 three-candidate elections with between 350 and 1,957

voters, with a mean of 716.4 voters, and from the ANES data we assembled 1,078 three-

candidate elections with between 450 and 2,521 voters each, with a mean of 1,529.1 voters. The

PB data yielded more than 82,000 three-candidate elections, which are too many to analyze with

our numerical methods. We therefore analyze a random subsample of 1,000 elections, whose

range of 351 to 3,676 voters and mean of 890.2 voters is very close to those for all elections in

the PB data. Comparison of different subsamples from the “universe” of 82,754 PB elections

11 Assembling elections in this way is a common practice in the empirical analysis of voting methods. For example, Chamberlin and Featherston (1986) use thermometer scores from five ANES surveys to construct combinations of four candidates, Stensholt (1995) uses thermometer scores for 8 persons from the 1993 omnibus political opinion poll conducted by Opinion Ltd., Bergen, Norway, to construct 8!/(5!3!) = 56 combinations of three candidates, and Regenwetter et al. (2002 and 2003) use thermometer scores from four ANES surveys to construct combinations of three candidates. Our method of constructing three-candidate elections is the same as that in these earlier analyses. It is also customary in the theoretical literature on voting events to restrict the number of candidates, and many theoretical results are available only for elections with three candidates.


(see below) provides some information about the likely effect of sampling variation on our KS

assessment method.

In the appendix we provide detailed information about the original data and the steps that

we undertook to assemble our three final data sets. We acknowledge the possibility that the

properties of our assembled three-candidate elections might differ from those of genuine three-

candidate elections. However, until additional ranking data from actual three-candidate elections

become available, we consider this procedure of assembling three-candidate elections a viable

second-best approach.

6.2 Choosing the reference subset S of the unit 5-simplex to determine the distances

Because we have no reason to prefer one particular subset as reference subset S, we adopt the subset that consists of a single ranking profile and thus has measure zero: the center of the unit 5-simplex, where p1 = p2 = p3 = p4 = p5 = p6 = 1/6. We will refer to this subset as reference subset

A. To assess the effect that the choice of reference subset has on our results, we adopt the subset

of the unit 5-simplex that is formed by the spatial model as a comparison reference subset (see

Tideman and Plassmann, 2012, for a characterization of the subset formed by the spatial model).

We will refer to this subset as reference subset B. Note that it is appropriate to use the subset

formed by the spatial model to evaluate ranking profiles generated by the spatial model itself.

Although we do expect very small distances between the normalized ranking profiles generated

by the spatial model (and the associated model of N) and the closest point in the unit 5-simplex

that is permitted by the spatial model, the magnitude of these distances is irrelevant; what is

relevant is whether the distances to simulated ranking profiles and observed profiles are the

same. Thus if the distances to simulated profiles are smaller than the distances to observed

profiles, then this would constitute evidence against the hypothesis that the observed profiles

were generated by the spatial model.
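For reference subset A, the distance computation reduces to the distance between the normalized ranking profile and the simplex center; a sketch, assuming Euclidean distance (the function name is ours):

```python
import numpy as np

CENTER = np.full(6, 1 / 6)   # reference subset A: center of the unit 5-simplex

def distance_to_center(profile):
    """Euclidean distance between a normalized ranking profile (six vote
    shares summing to one) and the center of the unit 5-simplex."""
    shares = np.asarray(profile, dtype=float)
    shares = shares / shares.sum()
    return float(np.linalg.norm(shares - CENTER))

d = distance_to_center([100, 100, 100, 100, 100, 100])   # → 0.0, on the subset
```

A maximally lopsided profile, with all votes on one ranking, attains the largest possible distance from this reference subset.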


6.3 Comparison of the three data sets and calibration of the spatial model

Figure 3 shows the three ecdfs of the distances, one for each data set, that we obtained by evaluating the distances between the observed normalized ranking profiles and reference subset A (the center of the unit 5-simplex), calculated over all three-candidate elections in each data set and ordered by magnitude. The ecdfs for the ERS and PB data are much more similar to each other than either is to the ecdf for the ANES data. Despite the apparent similarity between the ERS and PB ecdfs, comparison of the three ecdfs yields KS test p-values of 1, indicating that the three underlying cdfs are likely to be different from one another.

To further assess the similarities and differences among the three data sets, we compare the parameter values of the spatial model implied by these data sets. Section A of table 1 reports the estimates of the means and standard deviations of the four parameters of the spatial model—the distance between O and T and the three angles of the perpendicular bisectors with the line OT—that we obtained from each of the three data sets.12 The estimates of the mean angles are very similar for all three data sets, while the mean distances for the ERS and PB data are about 50 percent larger than the mean distance for the ANES data. Because the sector of the winning ranking contains the center of voter ideal points, O, the distance between O and T affects the relative size of the share of the winning ranking and its immediate neighbors (rankings in which the winning candidate is ranked highly): the larger this distance, other things being equal, the stronger is the support for the winning candidate and the lower is the support for the candidate ranked last in the winning ranking. The estimates therefore indicate that the ERS and PB elections contain more “heroes and villains,” that is, the support for the winner as well as the lack of support for the lowest-ranked loser tends to be stronger in the ERS and PB elections than in the ANES elections.

Section B of table 1 shows the parameters of the distributions that we use to simulate elections with the spatial model. We calibrate the two parameters of the Weibull distribution so

12 Fitting the spatial model to one observed election yields a set of four parameter values whose implied six probabilities match as closely as possible the six observed vote shares. The estimates in section A of table 1 are the mean values of the parameters, calculated over all elections in the respective data set.


that the mean and standard deviation of the distribution correspond to the mean and standard deviation of the distance between O and T that we report for this data set in section A. The standard parametrization of the 3-variate Dirichlet distribution is $(\alpha_1, \alpha_2, \alpha_3)$; we reparameterize the Dirichlet distribution by $\alpha_j = c s_j$, $j = 1, \ldots, 3$, where the three $s_j$ are the shares of the three pairs of opposite angles, with $\sum_j s_j = 1$, and $c$ determines the variance of share $i$ as $s_i (1 - s_i)/(1 + c)$. To calibrate $s_j$, we calculate the mean of angle $j$ over all elections (shown at the bottom of section A) and use its share among the three angles as $s_j$.13 We calibrate $c$ as the mean of the three values $c_j = s_j (1 - s_j)/\hat{\sigma}_j^2 - 1$, where $\hat{\sigma}_j^2$ is the observed variance of the share of angle $j$ over all elections in the data set. Comparison of the six parameters among the three data sets supports the earlier result that the ERS and PB elections are more similar to each other than either set is to the ANES elections.
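A sketch of this calibration, assuming the mean-precision reparameterization alpha_j = c * s_j with Var(share_j) = s_j(1 - s_j)/(1 + c); the function name and the synthetic recovery check are ours:

```python
import numpy as np

def calibrate_dirichlet(angle_shares):
    """Calibrate the reparameterized Dirichlet (alpha_j = c * s_j) from
    observed angle shares, one row of three shares per election."""
    shares = np.asarray(angle_shares, dtype=float)
    s = shares.mean(axis=0)                  # mean share of each angle
    var = shares.var(axis=0, ddof=1)         # observed variance of each share
    # Var(share_j) = s_j (1 - s_j) / (1 + c)  =>  c_j = s_j (1 - s_j)/var_j - 1
    c = float(np.mean(s * (1.0 - s) / var - 1.0))
    return c * s                             # the Dirichlet parameters alpha_j

# Recovery check on synthetic data: alpha = (10, 10, 10) corresponds to
# c = 30 and s = (1/3, 1/3, 1/3).
rng = np.random.default_rng(2)
alpha = calibrate_dirichlet(rng.dirichlet([10.0, 10.0, 10.0], size=100_000))
```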

6.4 Replicating the degree of variation in the vote-shares

The remaining issue is whether or not the spatial model leads to the appropriate variation among

the expected vote-shares, that is, whether or not the six vote-share probabilities that we simulate

with the spatial model can properly be viewed as the parameters of a multinomial, a multinomial-

Dirichlet, or a multivariate hypergeometric distribution. We do not find any evidence that the spatial model generates too little variation among the vote-share probabilities, which would justify using the multivariate hypergeometric distribution, so we only assess the magnitude of the Dirichlet parameter.

Our strategy is to find the value of the Dirichlet parameter for which the observed data set j, j ∈ {ANES, ERS, PB}, and data simulated with this value are most similar to each other. We estimate it as the value that minimizes the Kullback-Leibler divergence D between the two

13 To obtain the angle between the line OT and the first perpendicular bisector, we draw a random number from the interval [0,1] and use it to determine the place where OT divides the first angle into two parts.


empirical probability density functions (epdfs) of the distances that we obtain by evaluating the observed ranking profiles and the profiles simulated with this value of the parameter. If we allocate the distances into bins according to their size and let $f_i^{j,k}$ and $g_i^{h,k}$ represent the shares of distances in bin $i$ that we obtain when we use reference subset $k$, $k \in \{A, B\}$, of the unit 5-simplex to evaluate observed data set $j$ and simulated data set $h$, respectively, then $D$ measures the difference between the two epdfs as14

$D = \sum_i f_i^{j,k} \ln\left( f_i^{j,k} / g_i^{h,k} \right)$. (4)

We use the combination of the multinomial-Dirichlet model and the spatial model to simulate

one million elections each for values of the Dirichlet parameter between 100 and 50,000.
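The divergence in equation (4) can be sketched as follows; for simplicity we use equal-width bins spanning the pooled range, whereas the paper uses symmetric bins around the median of the observed distances (see footnote 14):

```python
import numpy as np

def kl_divergence(observed, simulated, n_bins=21):
    """Allocate two collections of distances into common bins and measure
    the Kullback-Leibler divergence between the two binned epdfs."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    lo = min(observed.min(), simulated.min())
    hi = max(observed.max(), simulated.max())
    edges = np.linspace(lo, hi, n_bins + 1)
    f = np.histogram(observed, bins=edges)[0] / observed.size
    g = np.histogram(simulated, bins=edges)[0] / simulated.size
    mask = f > 0                          # 0 * log(0/g) = 0 by convention
    g_safe = np.maximum(g[mask], 1e-12)   # guard against empty simulated bins
    return float(np.sum(f[mask] * np.log(f[mask] / g_safe)))

x = np.random.default_rng(3).normal(size=1000)
y = np.random.default_rng(4).normal(loc=0.5, size=1000)
d_same, d_diff = kl_divergence(x, x), kl_divergence(x, y)
```

The divergence is zero when the two binned epdfs coincide and strictly positive when they differ, which is what makes it usable as a minimization criterion for the Dirichlet parameter.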

Figures 4 – 6 show the relationships between the Kullback-Leibler divergence and the Dirichlet parameter that we obtain when using the two reference subsets to evaluate the observed and simulated ranking profiles for each of the three data sets. For all three data sets, the divergence under reference subset A is minimized as the parameter goes to infinity, suggesting that ranking profiles simulated under the multinomial distribution are most similar to the observed profiles. The divergence for the PB data under reference subset B is also minimized at infinity. In contrast, the divergence for the ERS data under reference subset B is minimized at a parameter value of 330, while that for the ANES data under reference subset B is minimized at 5,800. Because the minima under reference subset B are below the minima under reference subset A for the ERS and ANES data, we adopt 330 (ERS), 5,800 (ANES), and infinity (PB) as the parameter estimates for the respective data sets. Thus we compare the observed ERS and ANES ranking profiles with profiles simulated under the multinomial-Dirichlet model of N and the observed PB profiles with profiles simulated under the multinomial model of N.

14 We derive the epdfs by allocating the distances into 21 symmetric bins around the median of the distances that we obtain by evaluating the observed data. We experimented with 11 and 31 bins and found that the choice of bandwidth does not affect our results.


7. ASSESSING RANKING DATA SIMULATED WITH THE SPATIAL MODEL

We simulate 1,000 sets of ranking profiles, using the spatial model with the parameter estimates

reported in section B of table 1. As described in section 3, for each simulated set of ranking profiles

we determine the p-value of the KS test to assess the likelihood that the simulated and the

observed ranking profiles were generated by the same statistical model. We report the ecdfs of

the 1,000 p-values for the three data sets in figures 7, 8, and 10.
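The two-sample KS comparison can be sketched in pure NumPy; the paper does not say which implementation it uses, and SciPy's `scipy.stats.ks_2samp` is an equivalent off-the-shelf routine.

```python
import numpy as np

def ks_two_sample(x, y):
    """Two-sample Kolmogorov-Smirnov test: D is the largest gap between the
    two ecdfs; the p-value uses the asymptotic Kolmogorov distribution with
    Stephens' finite-sample correction."""
    x = np.sort(np.asarray(x, dtype=float))
    y = np.sort(np.asarray(y, dtype=float))
    grid = np.concatenate([x, y])
    d = float(np.max(np.abs(np.searchsorted(x, grid, side="right") / x.size
                            - np.searchsorted(y, grid, side="right") / y.size)))
    en = np.sqrt(x.size * y.size / (x.size + y.size))
    lam = (en + 0.12 + 0.11 / en) * d
    if lam < 0.4:                # the series below is unstable near zero,
        return d, 1.0            # where the p-value equals 1 to three decimals
    k = np.arange(1, 101)
    p = 2.0 * np.sum((-1.0) ** (k - 1) * np.exp(-2.0 * (k * lam) ** 2))
    return d, float(min(max(p, 0.0), 1.0))

rng = np.random.default_rng(5)
a = rng.normal(size=500)
d_same, p_same = ks_two_sample(a, a)                     # identical samples
d_diff, p_diff = ks_two_sample(a, rng.normal(loc=2.0, size=500))
```

Applied once per simulated set of ranking profiles, the resulting p-values form the ecdfs discussed below.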

7.1 The ERS data

Figure 7 shows the two ecdfs of p-values for the ERS data. Slightly more than 95% of p-values

for each reference subset are below 0.95. Thus we obtain p-values below 0.95 about as many

times as we would expect to observe for two data sets that were generated by the same statistical

process. The two ecdfs constitute strong evidence that the ranking profiles simulated with the

spatial model are very similar to the observed ERS profiles.

7.2 The PB data

Figure 8 shows the two ecdfs of p-values for the PB data. More than 95% of p-values for

reference subset A are below 0.95. However, 99% of p-values for reference subset B exceed

0.95. Thus depending on the reference subset used, our results suggest that the simulated

ranking profiles differ from the observed PB profiles in some ways.

The effectiveness of our analysis is limited by the fact that we compare our simulated

ranking profiles to the profiles of a single collection of observed elections rather than the

universe of possible elections. Unless the ecdf that we obtain by analyzing this collection of

observed elections is identical to the cdf of the distribution from which these elections were

sampled, the distribution of p-values of our KS tests will deviate from a uniform distribution,

even if we have discovered the exact generating mechanism. We assess the extent of the

sampling variation in the observed data by analyzing nine additional sets of 1,000 elections each

that we drew from the 82,754 PB elections. Figure 9 shows the ten ecdfs for reference subset A;


the dashed line corresponds to the ecdf of p-values in figure 8 and the nine other lines represent

the ecdfs of p-values that we determined from the other nine PB samples. The dotted line

represents the mean of the ten ecdfs. The variation among the ten ecdfs is considerable, even

though all ten samples of 1,000 PB elections were drawn from the same universe of 82,754 PB

elections. However, 97.5% of all p-values in the 10 sub-samples are below 0.95, indicating that,

by this measure, there is no significant difference between the simulated and observed data.

7.3 The ANES data

Figure 10 shows the ecdfs of p-values that we obtained for the ANES data. The ecdf of

reference subset B indicates a good fit, with 98.2% of the p-values below 0.95, but all of the p-

values for reference subset A are above 0.95.

To assess how serious it is to fail the KS test, we compare, in figures 11 and 12, the ecdfs

of the distances that we determined with the two reference subsets from the observed ANES data and one

set of simulated data. Evaluating the two pairs of ecdfs with the KS test yields a p-value of

0.998 for reference subset A and a p-value of 0.584 for reference subset B. Visual inspection

confirms that the difference between the ecdfs for reference subset B is somewhat smaller than

the difference for reference subset A, although the differences between the two ecdfs in the two

figures do not seem unreasonably large. Thus even though the simulated rankings pass the KS

test with the appropriate frequency only for one of the two reference subsets, the ecdfs in figures

11 and 12 suggest that the simulated data and the ANES data are not too different.

Overall we conclude that the ranking profiles that we simulate with the spatial model are

very similar to the observed ERS profiles and reasonably similar to the observed PB and the

ANES profiles. The fact that our simulated ranking profiles do not pass the KS test for both

reference subsets for our two data sets derived from surveys suggests that we have not yet

identified the true generating process that describes survey data best. However, the KS test is a

very rigorous test because it requires that we obtain the same distribution of the collections of

distances for simulated and observed ranking profiles. Figures 10 – 12 indicate that


the differences between two ecdfs must be very small for the simulated profiles to pass the KS

test, and the simulated profiles pass the KS test very easily for the ERS ranking profiles. All

collections of distances pass less rigorous tests for all three data sets. For example, for all three data sets and both reference subsets, the mean distance of the simulated profiles is within two standard errors of the estimate of the mean distance of the

observed data. Thus by several measures, the spatial model comes very close to describing the

statistical properties of observed ranking profiles.

8. CONCLUSION

The simulation procedure that we describe in this paper yields ranking profiles that are, by

several measures, very similar to ranking profiles in actual elections. Voting theorists have

examined the frequency of voting cycles (that is, the lack of a Condorcet winner) and the

Condorcet efficiency (the likelihood that a voting rule elects the Condorcet winner, if one

exists) of different voting rules, but they have not done this with models that describe vote-

casting in actual elections. There is almost no research on the frequencies of other voting

paradoxes, the resistance to strategizing, the occurrence of ties, or even on how often voting rules

disagree on the winning candidate. Using ranking profiles simulated with the spatial model to

analyze the frequencies with which any of these events occur can thus teach us many new things

about voting rules.

A limitation of our current analysis is that we can evaluate the spatial model only for

elections with three candidates. Evaluation of the spatial model for elections with m candidates

requires numerical integration over m! non-central cones of an (m – 1)-variate normal distribution, and currently we have such an algorithm only for the bivariate standard-normal

distribution. We are working on an algorithm that permits us to simulate ranking data for

elections with four candidates. Nevertheless, our inquiry into what one should expect to happen

in three-candidate elections is a promising start.


APPENDIX: INFORMATION ABOUT THE THREE DATA SETS

1. THE ERS DATA:

The full ERS data set contains ballots from 84 elections that were administered by the Electoral

Reform Society (ERS) as well as three elections from another source, all tabulated by Nicolaus

Tideman in 1987 and 1988. For 29 of these elections we have only a subsample of the ballots;

because we are not confident that these are random subsamples, we decided it would be prudent

to exclude these elections from the analysis. This leaves us with 58 elections with between 3 and

29 candidates for which we have all ballots. For each of these 58 elections we consider all

combinations of three candidates, which yields a total of 18,169 three-candidate elections with

between 1 and 1,957 voters and a mean of 59.9 voters. Because elections with too few voters

contain too much noise, we restrict our analysis to so-constructed three-candidate elections for

which we have at least 350 ballots. Our final ERS data set consists of 855 three-candidate

elections with between 350 and 1,957 voters and a mean of 726.8 voters.15

2. THE PB DATA:

The PB data contain information from Politbarometer surveys that were administered by the

German Institute for Election Research between 1977 and 2008. These surveys are undertaken

each month, and for the years from 1990 onwards there are separate surveys for the areas of

former East and West Germany. Survey respondents are asked to evaluate political candidates as

well as political parties, on an 11-point “thermometer” scale from -5 to +5. Because our goal is

to make the PB data comparable to our two other data sets, we consider only the thermometer

scores for the political candidates and ignore the evaluations of political parties. While there are

many ways in which one can combine these surveys, we decided to keep them in the format in

which they are available online at the GESIS – Leibniz Institute for the Social Sciences: combine

the monthly surveys to obtain an annual survey and keep the separate surveys for East and West

                                                            15 Our final data set would have consisted of 883 three-candidate elections had we compiled it from all 84 elections. Thus excluding the 29 elections for which we do not have all ballots is not too costly.


Germany, for a total of 49 different “elections.” Our main rationale for this decision was that

assembling all three-candidate combinations within each election yields a set of elections whose

numbers of voters are comparable to those in the other two data sets. Compiling all

combinations of three candidates leads to a total of 83,701 three-candidate elections with

between 155 and 3,934 voters and with a mean of 883.8 voters. When we eliminate those

elections with fewer than 350 voters, we are left with 82,754 three-candidate elections with

between 350 and 3,934 voters and with a mean of 890.5 voters. Because undertaking our

analyses with all 82,754 elections proved to be too time-consuming, we decided to analyze random subsamples of 1,000 elections instead. The subsample with which we undertook all analyses in the paper has a range of 351 to 3,676 voters and a mean of 890.2 voters, so its characteristics are comparable to those of the full sample.
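The assembly of three-candidate elections and the subsequent sub-sampling can be sketched as follows (a minimal illustration in Python; the function names and the ballot data layout are our assumptions, not the authors' code):

```python
import itertools
import random

def three_candidate_elections(ballots, candidates, min_voters=350):
    """Enumerate all combinations of three candidates and keep those
    with at least `min_voters` voters.  `ballots` is a list of dicts
    mapping candidate names to thermometer scores; a voter counts
    toward a combination only if he scored all three candidates.
    (Hypothetical data layout for illustration.)"""
    elections = []
    for trio in itertools.combinations(candidates, 3):
        scored = [b for b in ballots if all(c in b for c in trio)]
        if len(scored) >= min_voters:
            elections.append((trio, scored))
    return elections

def subsample(elections, size=1000, seed=0):
    """Draw a random subsample of `size` elections, as done for the
    82,754 PB elections in the text."""
    rng = random.Random(seed)
    return rng.sample(elections, min(size, len(elections)))
```

With real survey data one would then compare the subsample's range and mean number of voters against the full set, as reported in the text.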

For each response, we rank the three candidates according to their thermometer scores,

thereby eliminating any information about the intensity of the voter’s preferences. If a response

yields a strict ranking of candidates, then we count it as one vote for this ranking. Voters are

allowed to assign equal scores to different candidates, and we adopt the following intuitive rule for accommodating ties: if all three candidates are tied, then we count the response as 1/6 vote for each

ranking, and if two candidates are tied, then we count the response as half a vote for each of the

two possible strict rankings that break the tie. Thus our data set consists of the total number of

votes for each of the six strict rankings in each three-candidate election. We confirmed that our

results are unchanged when we resolve ties by randomly assigning one vote to one of the

possible strict rankings that break the tie.
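The tie rule above can be sketched as follows (our own illustration; the function name and the representation of a response as a score dictionary are assumptions):

```python
from itertools import permutations

def ranking_votes(scores):
    """Convert one respondent's thermometer scores for three candidates
    into fractional votes over the six strict rankings: a strict ranking
    gets 1 vote, a two-way tie gives 1/2 to each of the two rankings that
    break it, and a three-way tie gives 1/6 to every ranking."""
    consistent = []
    for perm in permutations(sorted(scores)):
        # keep a ranking only if it never places a lower-scored
        # candidate ahead of a higher-scored one
        if all(scores[perm[i]] >= scores[perm[i + 1]]
               for i in range(len(perm) - 1)):
            consistent.append(perm)
    share = 1.0 / len(consistent)
    return {perm: share for perm in consistent}
```

For example, scores of (2, 2, 0) for candidates A, B, and C are consistent with both ABC and BAC, so each of those rankings receives half a vote.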

3. The ANES data:

The ANES data contain information from the 19 time-series surveys that were undertaken by the

American National Election Studies between 1970 and 2008. In the surveys conducted before

1970, a candidate whom the survey respondent did not know received a score of 50 on the

participant’s answer sheet, while such a candidate was coded as “unknown” in the surveys from


1970 onwards. To avoid ambiguities between unknown candidates and candidates evaluated at

50, we restrict our analysis to surveys conducted from 1970 onwards. ANES undertook 18 biennial time-series surveys between 1970 and 2004 and another time-series survey in 2008;

ANES did not undertake time series surveys in 2006 and 2010. All possible three-candidate

combinations have more than 350 ballots, and we assembled a total of 1,078 three-candidate

elections with between 450 and 2,521 voters each, with a mean of 1,529.1 voters. We

accommodate ties in the thermometer scores in the same way as we do for the PB data.


REFERENCES

Abadie, Alberto. 2002. Bootstrap tests for distributional treatment effects in instrumental variable models. Journal of the American Statistical Association, 97:457, 284-292.

Berg, Sven. 1984. Paradox of voting under an urn model: The effect of homogeneity. Public Choice, 47:2, 377-387.

Chamberlin, John R. and Featherston, Fran. 1986. Selecting a voting system. The Journal of Politics, 48:2, 347-369.

DiDonato, A. R. and Hageman, R. K. 1980. Computation of the integral of the bivariate normal distribution over arbitrary polygons. Naval Surface Weapons Center, Government Accession Number ADA102466.

Felsenthal, Dan. 2012. Review of paradoxes afflicting procedures for electing a single candidate. In: Dan Felsenthal and Moshé Machover (eds.), Electoral systems: Paradoxes, assumptions, and procedures. Berlin: Springer, 19-91.

Gehrlein, William V. 2006. Condorcet’s paradox. Berlin and Heidelberg: Springer.

Gehrlein, William and Dominique Lepelley. 2011. Voting paradoxes and group coherence: The Condorcet efficiency of voting rules. Berlin: Springer.

Gelman, Andrew, Katz, Jonathan N., Tuerlinckx, Francis. 2002. The mathematics and statistics of voting power. Statistical Science, 17:4, 420-435.

Gibbard, Alan. 1973. Manipulation of voting schemes: a general result. Econometrica, 41:4, 587-601.

Good, I. Jack and Tideman, T. Nicolaus. 1976. From individual to collective ordering through multidimensional attribute space. Proceedings of the Royal Society of London (Series A), 347: 371-385.

Henry, Marc and Mourifié, Ismael. 2011. Euclidean revealed preferences: Testing the spatial voting model. Journal of Applied Econometrics, DOI:10.1002/jae.1276.

Johnson, Norman L., Samuel Kotz, and N. Balakrishnan. 1997. Discrete multivariate distributions. New York: Wiley.

Merrill, Samuel. 1984. A comparison of efficiency of multicandidate electoral systems. American Journal of Political Science, 28: 1, 23-48.

Nurmi, Hannu. 1992. An assessment of voting system simulations. Public Choice, 73, 459-487.

Nurmi, Hannu. 1999. Voting paradoxes and how to deal with them. Berlin and Heidelberg: Springer.

Regenwetter, M., Grofman, B., Marley, A. A. J. 2002. On the model dependence of majority preference relations reconstructed from ballot or survey data. Mathematical Social Sciences, 43, 451-466.


Satterthwaite, Mark A. 1975. Strategy-proofness and Arrow’s conditions: Existence and correspondence theorems for voting procedures and social welfare functions. Journal of Economic Theory, 10, 187-217.

Stensholt, Eivind. 1996. Circle pictograms for vote vectors. SIAM Review, 38:1, 96-119.

Stephens, M. A. 1970. Use of the Kolmogorov-Smirnov, Cramér-von Mises and related statistics without extensive tables. Journal of the Royal Statistical Society Series B, 32:1, 115-122.

Tideman, T. Nicolaus. 2006. Collective Decisions and Voting. Burlington, VT: Ashgate.

Tideman, T. Nicolaus and Plassmann, Florenz. 2012. Modeling the outcomes of vote-casting in actual elections. In: Dan Felsenthal and Moshé Machover (eds.) Electoral systems: Paradoxes, assumptions, and procedures, Springer, 217-251.

Working paper version: http://bingweb.binghamton.edu/~fplass/papers/Voting_Springer.pdf


Table 1. Spatial model parameter estimates from the ERS, ANES, and PB data

A. Parameter estimates

Data set   Mean distance OT     Mean angles of the perpendicular bisectors with the line OT
                (1)                  (2)               (3)               (4)
ERS        0.6083 (0.2642)     0.5476 (0.3438)   1.5466 (0.3263)   2.5574 (0.3372)
PB         0.6111 (0.3212)     0.5310 (0.3501)   1.5633 (0.3659)   2.5876 (0.3390)
ANES       0.4086 (0.2398)     0.5450 (0.3633)   1.5487 (0.4274)   2.5900 (0.3737)

Corresponding angles between pairs of bisectors:
ERS        1.1317    0.9990    1.0109
PB         1.0850    1.0322    1.0243
ANES       1.0965    1.0037    1.0414

B. Calibrated parameters of the two distributions that describe the spatial model of P

           Weibull parameters         Dirichlet parameters
Data set   Scale      Shape      Share 1   Share 2   Share 3   Variance
             (1)        (2)        (3)       (4)       (5)        (6)       (7)
ERS        0.6858     2.4608     0.3602    0.3180    0.3218    73.5008      330
PB         0.6894     1.9888     0.3454    0.3286    0.3261    40.6752
ANES       0.4589     1.7603     0.3490    0.3195    0.3315    24.9207    5,800

Note: Standard deviations are in parentheses. We show standard deviations rather than standard errors to make it transparent how we derive the distributional parameters in part B from the estimates in part A.


Figure 1. Division of the candidate plane into six sectors by drawing the perpendicular bisectors of the three sides of the triangle formed by the candidates' locations, and the associated rank orders of the sectors. (The figure is taken from Good and Tideman, 1976, p. 372.)

[Figure: triangle with vertices A, B, and C; the bisectors divide the plane into sectors labeled ABC, ACB, CAB, CBA, BCA, and BAC; a point T is marked.]

Figure 2. The four parameters that define a spatial model observation: the distance OT and the three angles of the perpendicular bisectors with the line OT. (The figure is taken from Tideman and Plassmann, 2012, p. 249.)

[Figure: the line OT and the angles of the three bisectors; sectors labeled as in Figure 1.]


Figure 3. Comparison of the ecdfs of the Euclidean distance obtained by evaluating the ERS, ANES, and PB data with the spatial model.

[Plot: horizontal axis ln(D_k^ERS) from 4.5 to 10.5; series "Reference: Center of simplex" and "Reference: Spatial model".]

Figure 4. The relationship of … and … .

[Plot: ecdf over 0 to 0.6; series ERS, ANES, and PB.]
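Figures 3 through 6 plot empirical cumulative distribution functions. For reference, this is how an ecdf is computed (a minimal sketch of our own, not the authors' code):

```python
def ecdf(values):
    """Empirical cdf: returns the sorted values x and, for each x[i],
    the fraction of observations less than or equal to x[i]."""
    x = sorted(values)
    n = len(x)
    y = [(i + 1) / n for i in range(n)]
    return x, y
```

Plotting y against x for each data set yields curves such as those in Figure 4.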


[Plot: horizontal axis ln(D_k^PB) from 4.5 to 10.5; series "Reference: Center of simplex" and "Reference: Spatial model".]

[Plot: horizontal axis ln(D_k^ANES) from 4.5 to 10.5; series "Reference: Center of simplex" and "Reference: Spatial model".]

Figure 5. The relationship of … and … .

Figure 6. The relationship of … and … .


Figure 7. Ecdfs of KS test p-values, obtained from 1,000 KS test evaluations that compare the ERS data with simulated rankings. 

Figure 8. Ecdfs of KS test p-values, obtained from 1,000 KS test evaluations that compare the PB data with simulated rankings. 

[Plot for Figure 7: ecdf(p-value) against the p-value of the KS test; series "45 degree line", "Reference: center of simplex", and "Reference: spatial model".]

[Plot for Figure 8: ecdf(p-value) against the p-value of the KS test; series "45 degree line", "Reference: center of simplex", and "Reference: Spatial model".]
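The 45-degree line in these figures is the benchmark: when the simulated rankings follow the same distribution as the actual ones, the KS test p-values should be approximately uniform on [0, 1], so their ecdf tracks the diagonal. A self-contained sketch of the two-sample KS test with its standard asymptotic p-value (our own illustration; the paper presumably used standard statistical software):

```python
import math

def ks_2sample(a, b):
    """Two-sample Kolmogorov-Smirnov test: returns the statistic D
    (the maximum gap between the two ecdfs) and the large-sample
    p-value 2 * sum_{k>=1} (-1)^(k-1) * exp(-2 k^2 lam^2), where
    lam = sqrt(n*m/(n+m)) * D."""
    a, b = sorted(a), sorted(b)
    n, m = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < n and j < m:
        if a[i] <= b[j]:
            i += 1
        else:
            j += 1
        d = max(d, abs(i / n - j / m))
    lam = math.sqrt(n * m / (n + m)) * d
    p = 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * k * k * lam * lam)
                  for k in range(1, 101))
    return d, min(max(p, 0.0), 1.0)
```

Repeating such a test 1,000 times on pairs of samples drawn from the same distribution and plotting the ecdf of the resulting p-values reproduces the diagonal benchmark used in Figures 7, 8, and 10.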


[Plot for Figure 9: ecdf(p-value) against the p-value of the KS test.]

Figure 9. Ecdfs of KS test p-values, obtained by analyzing 10 different random samples of 1,000 elections each drawn from the "universe" of 82,754 PB elections.

Figure 10. Ecdfs of KS test p-values, obtained from 1,000 KS test evaluations that compare the ANES data with simulated rankings.

[Plot for Figure 10: ecdf(p-value) against the p-value of the KS test; series "45 degree line", "Reference: Center of simplex", and "Reference: Spatial model".]


[Plot: ecdf over 0 to 0.7; series "ANES data" and "Simulated data (p value = 0.998)".]

[Plot: ecdf over 0 to 0.05; series "ANES data" and "Simulated data (p value = 0.584)".]

Figure 11. Ecdf of … and o_i obtained with reference subset A and the ANES data.

Figure 12. Ecdf of … and o_i obtained with reference subset B and the ANES data.