CAMPAIGN EFFECTS IN THEORY AND PRACTICE

American Politics Research, 2001 (vol. 29, pp. 419-437)

CHRISTOPHER WLEZIEN, University of Oxford
ROBERT S. ERIKSON, Columbia University

Authors' note: An earlier version of this manuscript was presented at the Annual Meeting of the Southern Political Science Association, Atlanta, 2000. Portions of the research also were presented at the 1999 Conference on the Design of Election Studies, Houston. We thank Bruce Carroll and Jeff May for assistance with data collection and Pat Lynch, Tim Nokken, and especially Tom Holbrook for comments and suggestions. The research has been supported by a grant from the Institute for Social and Economic Research at Columbia University and forms part of a project supported by a grant from the National Science Foundation (SBR-9731308).
Abstract
While scholars debate the influence of election campaigns on electoral decision-making,
they agree that campaigns do have effects. That is, there is broad agreement that campaign
events can cause voters’ preferences to change. This is straightforward. Empirically identifying
the effects of the campaign is much less so. We simply do not have regular readings of voter
preferences over the election cycle, and the readings we do have are imperfect. Clearly, then, an
important question is: Can we actually detect the effects of election campaigns? This is a
fundamental empirical question. It forms the subject of this essay.
In the essay, we outline the primary theoretical perspectives on campaign events and their
effects. We then turn to the practice of empirically identifying these effects, focusing
particularly on survey error and its consequences for empirical analysis. Using selected poll data
from the 2000 presidential election cycle, we illustrate how the various forms of survey error
complicate the study of campaign effects. We also offer certain solutions, though these take us
only part of the way. Indeed, given the available data, it appears that all we can hope to offer are
fairly general conclusions about the effects of election campaigns.
Scholars debate the influence of election campaigns on voting behavior and election outcomes.
Table 1: An Analysis of the Effects of Conventions and Debates on the
Variance of Presidential Election Polls, 2000
------------------------------------------------------------------
Variable                        Election Year    After Labor Day
------------------------------------------------------------------
Convention Season                    2.08              ---
  (24 degrees of freedom)           (0.01)
Debate Season                        0.30              0.48
  (15 degrees of freedom)           (0.99)            (0.94)
R-squared                            0.29              0.14
Adjusted R-squared                   0.08             -0.15
Mean Squared Error                   8.19              6.54
Number of cases                       173                59
------------------------------------------------------------------
Note: The numbers corresponding to the variables are F-statistics.
The numbers in parentheses are p values.
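The logic of the Table 1 comparison can be illustrated with a simple variance-ratio version of the test. The sketch below is only an illustration under assumptions of ours, not the authors' exact specification, and the daily poll readings in it are invented rather than drawn from the 2000 data:

```python
import statistics

def variance_ratio(in_period, out_period):
    """Ratio of poll variance during an event window (e.g., the
    conventions) to poll variance outside it.  An illustrative
    stand-in for the F tests summarized in Table 1; the authors'
    exact specification may differ.  The ratio would be referred
    to an F distribution with (n_in - 1, n_out - 1) degrees of
    freedom to obtain a p value."""
    var_in = statistics.variance(in_period)    # sample variance, n-1 denominator
    var_out = statistics.variance(out_period)
    df = (len(in_period) - 1, len(out_period) - 1)
    return var_in / var_out, df

# Hypothetical daily Gore shares (two-candidate %), not actual 2000 readings
convention_days = [46.0, 49.5, 52.0, 47.5, 51.0, 53.5, 48.0, 50.5]
quiet_days      = [49.0, 49.5, 50.0, 49.2, 49.8, 50.3, 49.6, 50.1]

F, (df1, df2) = variance_ratio(convention_days, quiet_days)
# A ratio well above the F(df1, df2) critical value would indicate
# genuinely greater day-to-day variability during the event window.
```

The intuition matches the table: if conventions move preferences, daily poll readings during convention season should be more variable than readings from quieter stretches of the campaign.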
Figure 1: Trial-Heat Presidential Polls Aggregated by Date, 2000
[Figure: Gore's percentage of two-candidate preferences (y-axis, 40 to 60) plotted against days before the election (x-axis, -300 to 0).]
Figure 2: Results of Gallup Polls and All Other Trial-Heat Polls, Labor Day to Election Day, 2000
[Figure: Gore's percentage of two-candidate preferences (y-axis, 45 to 55) plotted against days before the election (x-axis, -60 to 0), with separate series for Gallup polls and all other polls.]
Figure 3: Results of Voter.com and Washington Post Tracking Polls, Final 20 Days of the Campaign, 2000
[Figure: Gore's percentage of two-candidate preferences (y-axis, 45 to 50) plotted against days before the election (x-axis, -20 to 0), with separate series for the Voter.com and Washington Post tracking polls.]
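Notes 14 and 20 caution that multi-day tracking polls like those in Figure 3 report overlapping samples on consecutive days, so consecutive readings literally share respondents. A minimal sketch of the workaround the notes describe, keeping only every nth reading, where n is the reporting window (the poll numbers here are hypothetical, not actual Voter.com results):

```python
def independent_readings(daily_reports, window=3):
    """Keep only every nth report from a multi-day tracking poll,
    where n is the length of the rolling window, so that no two
    retained readings share respondents.  `daily_reports` is assumed
    to be a list of (days_before_election, percent) pairs in date
    order; this data layout is our assumption, not the authors'."""
    return daily_reports[::window]

# Hypothetical 3-day tracking-poll reports over the final days
reports = [(-9, 48.0), (-8, 48.5), (-7, 49.0), (-6, 48.2),
           (-5, 47.8), (-4, 48.9), (-3, 49.5), (-2, 49.1), (-1, 48.7)]

usable = independent_readings(reports, window=3)
# Retains the readings for days -9, -6, and -3 only.
```

As note 21 observes, even this does not yield perfectly independent readings when the retained reports still overlap in their polling periods; the subsampling only removes shared respondents.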
Christopher Wlezien is University Lecturer and a Fellow of Nuffield College at the University of Oxford. His research and teaching interests encompass a range of fields in American and comparative politics, and his articles have appeared in various journals and edited volumes. He recently finished editing a special issue of Electoral Studies (forthcoming) on "The Future of Election Studies" and has begun writing a book with Robert Erikson on The Timeline of Political Campaigns.

Robert S. Erikson is Professor of Political Science at Columbia University. He is coauthor of Statehouse Democracy (Cambridge University Press) and American Public Opinion (Allyn and Bacon). His research on American elections has been published in a wide range of scholarly journals, including the American Political Science Review, American Journal of Political Science, Electoral Studies, Journal of Politics, Legislative Studies Quarterly, and Public Opinion Quarterly. He is the former editor of the American Journal of Political Science.
Notes

1. The data were drawn from pollingreport.com. For the exact procedures used, see Wlezien (2001).

2. Specifically, the period of the conventions encompasses all days between July 31 and September 3, inclusive; the period of the debates includes all days between October 3 and October 20.

3. Analysis of polls in 1996 offers similar results. Using the same procedure, conventions and debates account for up to 26.0 percent of the poll variance over the full year; debates account for less than 14 percent of variance during the fall.

4. Of course, Vt is an aggregation of preferences across individuals, and equation 1 summarizes dynamics across individuals i:

   v_it = a_i0 + B_i v_it-1 + e_it,

   where the lower-case v_it signifies the preference of individual i at time t. This is of consequence for the study of campaign effects, which we consider very generally below. For specifics, see Erikson and Wlezien (1998).

5. Of course, the rate of decay may vary across different shocks and individuals. In the aggregate this may produce fractional integration, where effects decay but much more slowly than a stationary series (see Box-Steffensmeier and Smith, 1998; DeBoef, 2000; Lebo, Walker, and Clarke, 2000). Our series of polls then would represent the sum of an integrated series and a fractionally integrated series.

6. Holbrook's (1996) characterization is similar, though the effect of events is dependent on the error correction component, i.e., whether and the extent to which reported preferences differ from an underlying equilibrium. Put differently, events induce equilibration.

7. Of course, analysis at one level may conceal campaign effects at lower levels, that is, if these effects cancel out. It may be, for example, that presidential candidates campaign most heavily in different states and that this activity has meaningful effects in those states (see Shaw, 1999b), but that it has little net consequence for national preferences. What happens in the states still is of obvious importance.

8. The consequences of sampling error for individual-level analyses are much more complex and sobering. See Zaller (n.d.).

9. Shaw (1999a) provides similar estimates using a larger set of elections.

10. It is important to note that detecting the aggregate effect of an event is more likely if one has regular readings of preferences, that is, where it is possible to pool results from different polls both before and after the event occurs. This pooling increases statistical power. The degree to which this is true depends on a number of things, however, including the size of the effect and the pooled samples themselves as well as the permanence of the effect and the effects of other events. For example, if the effect of an event decays and other events impact preferences over the period, it may be difficult to detect a fairly large effect even with large pooled pre- and post-event samples.

11. For each poll the expected sampling error variance is p(1 - p)/N, where p is the proportion voting for, say, the Democratic candidate rather than the Republican, and N is the sample size. This gives us the estimated error variance for every poll. The error variance for a series of polls is simply the average error variance. The arithmetic difference between the total variance of the poll results themselves and the error variance is the estimated true variance. The ratio of (estimated) true to total variance is the statistical reliability.

12. We nevertheless do know quite a lot, at least for presidential elections. See, e.g., Timpone (1998).

13. Erikson and Wlezien (1999) show that these general differences in the polling universe did not matter in 1996.

14. We should be clear that these numbers are presented for expository purposes. The tracking polls cannot effectively be used for actual data analysis because the polls reported on consecutive days are not independent: They literally share respondents. Also see Erikson and Wlezien (1999).

15. A less well-known but similar flip-flop occurred right between the Republican and Democratic conventions (not shown in Figure 2). Here, Bush's lead dropped from 16 points on one day to 1 point two days later and back up to 16 points five days after that. This accounts for the spike in Gore's poll share about 90 days before the election in Figure 1.

16. There is one notable exception, namely, Annenberg's rolling cross-section, which accumulated more than 90,000 respondents over the 2000 presidential campaign. Alas, it appears that these data won't be in the public domain for some time.

17. There is a benefit to weighting, at least under some circumstances. If the variables used—whether party identification or something else—are exogenous to the campaign and actually do structure the vote on Election Day, weighting will reduce sampling error.

18. Note, however, that Voter.com changed its design at least once during the 2000 campaign.

19. This characterization applies to analyses conducted at both the aggregate and individual levels.

20. It is tempting to turn to the multi-day tracking polls now conducted by various organizations, particularly during the fall campaign. Even assuming a reasonable likely-voter screen, this is not an appropriate solution unless one has access to the daily readings from which the moving averages are constructed. As noted earlier, the results of multi-day polls reported on consecutive days are not independent—they not only share polling periods, they literally share respondents themselves. Thus, one can only use results for every nth day, e.g., with a three-day tracking poll, using results for every third day.

21. Note that this approach does not provide perfectly independent readings, since the results on consecutive days still will include polls with overlapping reporting periods.

22. For a broader consideration of this and other issues, see Franklin and Wlezien (N.d.).
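The reliability decomposition described in note 11 can be sketched in a few lines. The polls listed below are hypothetical, and the choice of the population variance for the observed readings is our assumption; the note itself does not specify the denominator:

```python
import statistics

def poll_reliability(polls):
    """Estimated statistical reliability of a series of poll readings,
    following the decomposition in note 11: each poll's expected
    sampling-error variance is p(1 - p)/N; the average of these is the
    error variance for the series; subtracting it from the observed
    variance of the readings gives the estimated true variance; and
    reliability is the ratio of true to total variance.
    `polls` is a list of (p, N) pairs, with p the two-party Democratic
    share and N the sample size."""
    shares = [p for p, n in polls]
    total_var = statistics.pvariance(shares)  # observed variance of readings
    error_var = sum(p * (1 - p) / n for p, n in polls) / len(polls)
    true_var = total_var - error_var          # can be near zero, or negative
    return true_var / total_var

# Hypothetical polls: (Gore two-party share, sample size)
polls = [(0.48, 1000), (0.51, 800), (0.49, 1200), (0.53, 900), (0.50, 1100)]

rel = poll_reliability(polls)  # fraction of poll-to-poll variance that is "real"
```

One design point worth noting: with typical sample sizes around 1,000, the expected error variance is large relative to the observed variance of trial-heat readings, so estimated reliability can be quite low even when preferences genuinely move, which is precisely the difficulty the essay emphasizes.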