Febble Donating Member (1000+ posts), Fri Sep-16-05 03:30 AM
Response to Reply #56
57. OK
Edited on Fri Sep-16-05 04:10 AM by Febble
Point a)

1. I agree. The report does not allow us to quantify the effect. However, I frequently read the assertion that the E-M report offers no evidence for its hypothesis. It may not quantify the evidence, but it does give details of the evidence the authors believe supports the hypothesis. We certainly do not have enough information in the report to weigh that evidence, but evidence it is. If you thought your sample was biased, you would predict that your results would be most biased where there was most opportunity for biased sampling, and this appears to be the case. Not all the evidence in favour of fraud has been quantified either, and in many ways the best evidence - the sheer insecurity of the tabulating software - has not been quantified at all. It is still evidence.

2. Yes, this is a point made by USCV, but I think it is fallacious: it assumes that only one factor applied to any one precinct. A multifactorial statistical model is required to determine whether, collectively, the factors cited account for all the shift; clearly no single factor does. More homework required from E-M!
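To make this concrete, here is a minimal sketch of the kind of multifactorial model that would be needed, assuming one had precinct-level data (which E-M never released). The file name and column names are hypothetical, invented purely for illustration.

```python
# Hypothetical sketch: regress within-precinct error (WPE) on several
# candidate factors at once, rather than one at a time. The data file
# and column names are invented; no precinct-level E-M data exist publicly.
import pandas as pd
import statsmodels.formula.api as smf

precincts = pd.read_csv("precinct_data.csv")  # hypothetical file

# Each coefficient estimates a factor's contribution to WPE while
# holding the other factors constant - exactly what one-factor-at-a-time
# tabulations cannot do.
model = smf.ols(
    "wpe ~ interviewing_distance + precinct_size + urban + interviewer_education",
    data=precincts,
).fit()
print(model.summary())
```

If the cited factors jointly accounted for the shift, the model would leave no systematic red shift in its residuals; anything left over would still need explaining.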

3. Good point about the education factor. This puzzled the report authors too, and it may be a genuine stumbling block. However, I don't find it intrinsically implausible that more highly educated interviewers might introduce greater bias. What good random sampling requires is precisely an unswerving adherence to a blind selection protocol, and in my experience, the more highly educated a person is, the less likely they are to adhere unswervingly to anything! But that is a post hoc rationalisation; I agree it is odd. It would be good to know the size of the effect, and whether education was collinear with some other factor. If the highly educated interviewers were students, and the less highly educated interviewers were sensible 50-somethings, that might account for the effect.
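As a sketch of that collinearity check: with interviewer-level data one could ask whether "education" is simply standing in for age (students vs. 50-somethings). Again, the file and column names are hypothetical.

```python
# Hypothetical sketch: is interviewer education collinear with age?
import pandas as pd

interviewers = pd.read_csv("interviewer_data.csv")  # hypothetical file
print(interviewers[["education_years", "age"]].corr())
# A strong negative correlation would mean the apparent "education
# effect" could equally well be an age effect, as speculated above.
```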

Point c)

refers to calculation of the MoE. The "Design Effect" has been much talked about; it is a factor that accounts for the greater sampling error introduced by the fact that voters were not randomly selected from an entire state, but were clustered within selected precincts. It can be calculated, though not by me. Steve Freeman used a "Design Effect" of 30% in his paper, and there was an extensive discussion on Mystery Pollster about the appropriateness of this figure. Its value depends on the details of the survey design, and it appears that values between 50% and 80% were computed for the 2004 election, depending on the number of precincts in each state. When the appropriate Design Effect term is used, no state is outside its MoE (I verified this myself with the screenshot data at the time).
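For concreteness, a small sketch of how a design effect inflates the MoE, assuming the quoted percentages mean the simple-random-sampling MoE is inflated by that amount (so "30%" is a multiplier of 1.3). The respondent counts below are round numbers, not actual 2004 figures.

```python
import math

def moe(p, n, deff_inflation=0.0, z=1.96):
    """95% margin of error for a proportion p from n respondents, with
    the simple-random-sampling MoE inflated by deff_inflation
    (0.30 for a '30%' design effect; 0.50-0.80 for the clustered
    2004 state polls)."""
    srs_moe = z * math.sqrt(p * (1 - p) / n)
    return srs_moe * (1 + deff_inflation)

# Round-number example: a 50/50 split among 2000 clustered respondents.
print(f"SRS MoE:       {moe(0.5, 2000):.3f}")        # about 0.022
print(f"Clustered MoE: {moe(0.5, 2000, 0.80):.3f}")  # about 0.039
```

With the interval widened by 50-80%, state-level discrepancies that look significant under the naive calculation fall back inside the MoE, which is the point made above.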

This does not, of course, mean that the polls were accurate - you also have to consider that so many states were out in the same direction; indeed, the overall red shift was highly significant. However, with the issuing of the E-M report we can move on from this: we now know that the important shift was at precinct level, and in fact the effect size is far greater when WPE is considered. The ranking of states also changes somewhat once "noise" from precinct-selection error is effectively removed; IIRC, Ohio moves up the list. However, I think New Hampshire, also near the top of the list, needs to be borne in mind, as I've said elsewhere in this thread. You have to jump through quite a lot of hoops to make fraud account for the large precinct-level discrepancy in NH - so the NH example, if anything, weighs on the side of polling error as a plausible explanatory factor, at least in principle. And in both Ida Briggs' NH study and ESI's Ohio study, the vote in 2000 was used as a control: in NH, precincts' deviation from their 2000 vote does not appear to have been associated with fraud (recount), and in OH, precincts' deviation from their 2000 vote was not associated with the degree of red shift in the exit polls (ESI).
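The "same direction" point is worth a quantitative aside: if polling errors were independent and unbiased, each state would be roughly equally likely to miss in either direction, so the count of red-shifted states behaves like a run of coin flips and can be checked with a simple sign test. The count below is a placeholder, not the actual 2004 tally.

```python
# Sketch of a sign test on the direction of state-level shifts.
# The count is a placeholder, not the actual 2004 figure.
from scipy.stats import binomtest

red_shifted = 40  # hypothetical: states whose poll overstated Kerry's share
result = binomtest(red_shifted, n=50, p=0.5, alternative="greater")
print(f"p-value: {result.pvalue:.2e}")
# A tiny p-value says the misses share a direction far too consistently
# to be independent random error; it does not, by itself, say why.
```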

However, E-M note that in previous years in which a large red shift was observed, public awareness of the election was high. IF red shift is affected by public awareness of the election (presumably through its effect on voters' attitude to pollsters), then you might expect greater red shift in the states where the campaigns were focussed, i.e. the swing states.
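If state-level red-shift figures were to hand, this swing-state hypothesis could be checked directly, e.g. by comparing mean red shift in swing versus safe states. The file and column names below are hypothetical.

```python
# Hypothetical sketch: compare mean red shift in swing vs. safe states.
import pandas as pd
from scipy.stats import ttest_ind

states = pd.read_csv("state_red_shift.csv")  # hypothetical file
swing = states.loc[states["swing_state"], "red_shift"]
safe = states.loc[~states["swing_state"], "red_shift"]

t, p = ttest_ind(swing, safe, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
# Note: a swing-state excess would be consistent with BOTH hypotheses -
# awareness-driven polling error and fraud targeted at contested states -
# so this comparison motivates the pattern without settling its cause.
```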

I am offering this merely as an alternative explanation. Your interpretation may well be right. But I am really just trying to explain why the exit poll evidence is not a slam dunk. And in some instances - NH, OH, and now San Diego - we do seem to have SOME evidence that "red shift" may be a real polling phenomenon.

BTW - I do appreciate the civilized tone of this discussion!


(edited for typo)
