
DonaldsRump

(7,715 posts)
Sat Feb 22, 2020, 11:31 PM Feb 2020

Exit polling

It seems to work all over the world except in one unique scenario: when a USA Republican pulls off a "surprise" victory that exit polling CLEARLY indicates will NOT happen. Witness W in 2000 and 2004 and Trump in 2016.

Let's get real: exit polling has been the basic tool to detect election fraud all over the world. However, AFAIK, exit polls fail in one particular scenario: Republicans in the USA.

Gee, I wonder why.....?

Does anyone have anything to shed light on this?


underthematrix

(5,811 posts)
1. Read or listen to EVERYBODY LIES. Many, many people were embarrassed
Sat Feb 22, 2020, 11:34 PM
Feb 2020

to tell pollsters they would vote, or had voted, for Trump. Also, people of color who said they were definitely going to vote didn't. It's a great listen.

DonaldsRump

(7,715 posts)
4. Really? AFAIK
Sat Feb 22, 2020, 11:40 PM
Feb 2020

this is the tool to detect election fraud around the world. Can you please tell me exactly why the unique scenario of surprise R victories in the US is the one exception?

Inquiring minds want to know.

DonaldsRump

(7,715 posts)
7. It's pretty clear
Sat Feb 22, 2020, 11:45 PM
Feb 2020

Why exactly are exit polls a good way to detect election fraud around the world, except in the USA when a Republican wins in a "surprise"?

What exactly do you not understand?

Calista241

(5,586 posts)
2. Yeah, people are going to be afraid to admit they voted for Trump.
Sat Feb 22, 2020, 11:39 PM
Feb 2020

I witnessed my mom, a serious Repub and Trump supporter, tell a pollster she was voting for Klobuchar and voted for Clinton in 2016.

I would estimate that we need to have more than a 5-7 point advantage in polls to make up for the difference.


VarryOn

(2,343 posts)
8. A lot of Trump voters won't/don't admit who they are...my dad is one.
Sat Feb 22, 2020, 11:54 PM
Feb 2020

He's been polled twice this year, and he told them he was a Sanders voter. He's proud of it.

I've noticed that some once-very-political people have gone silent about politics the last few years. I translate that to mean they are Trumpers. I do think reticence to admit Trump support is a real thing. Don't trust polls!

Maybe their being silent is a good thing....except for polls.

applegrove

(118,613 posts)
3. In 2004 Kerry was winning earlier in the day of the election. Exit polls
Sat Feb 22, 2020, 11:39 PM
Feb 2020

were released way before the polls closed. It was on the news that Kerry was winning everywhere. Then Democrats, who get home from work harried, trying to cook dinner and to find someone to watch the kids, didn't bother to vote, in my estimation, because they thought Kerry had it in the bag. Exit polls haven't been released earlier in the day since. As for 2000 and 2016, both Gore and Hillary actually won the popular vote; they just didn't win the Electoral College.

yonder

(9,663 posts)
10. Agree
Sun Feb 23, 2020, 12:07 AM
Feb 2020

My understanding is that historically (and normally), exit polls have been reliable indicators of final election results. All of a sudden, it seems, not so much. If I remember right, Ohio in 2004 was kind of squirmy that way.

DonaldsRump

(7,715 posts)
11. I've seen more activity on my OP on a Saturday night
Sun Feb 23, 2020, 12:12 AM
Feb 2020

than ever before. Fascinating!

My basic question remains: why is exit polling a reliable indicator of the actual results (and of election fraud) everywhere except in one unique scenario, namely US Republicans winning "surprise" elections when all other polling indicates they will lose?

I wonder why???

budkin

(6,699 posts)
12. 2004 was the end of using exit polling
Sun Feb 23, 2020, 12:14 AM
Feb 2020

To predict the winner. Why? Because of Ohio and Ken Blackwell. That shit was stolen straight up.

DonaldsRump

(7,715 posts)
13. That is exactly my point
Sun Feb 23, 2020, 12:16 AM
Feb 2020

Ohio in 2004 and FL in 2000 are exactly what I am referring to.

The weird results indicate that something is...."weird".

Let's get real. Exit polls ARE accurate.

Igel

(35,300 posts)
14. There isn't one thing called "exit polling"--if you look at the details.
Sun Feb 23, 2020, 12:49 PM
Feb 2020

In most countries, exit pollers blanket the country. Your sample is large, and if it's not random, it's so large that any bias has to be either small or glaring.

If you cover 80% of the polling places, bias can only hide in the remaining 20%.

Exit polling in the US has been mixed. Sometimes it's been really wrong--but we only notice what we want to, so most of the time we didn't notice. In one case in a smaller election, a key precinct had a pollster near the front of the polling place. If you took the bus or walked, she got you; if you drove, you entered through the back, near the parking lot. In another case, the pollster had a long form and a short form, randomly assigned to people--but the people getting the short form sometimes thought they were being told they weren't smart enough for the long form and refused to answer, and that group wasn't randomly distributed between candidates' supporters.

Exit polling in the US relies more on statistics than elsewhere.

Let's say you poll 1000 people and say, "That's going to represent the voting population of California." You pick polling sites in a way that represents the population. Do you pick them all in upscale portions of San Diego, San Fran, and LA? Only in working-class areas? All rural? No, you pick them in various places. Black, white, Latino, upscale and working-class, urban and rural.

Then when you poll people, you have the pollsters write down demographics: Age (approximate), sex, race/ethnicity, party, all the "intersectional" bits (for the purposes of polling, at least).

Now for problems. What if conservatives don't want to be polled? They walk on out. What if black Democrats look at the old white guy with the clipboard and don't want to interact with him? What if the Vietnamese-speaking pollster can't deal with the surprise 30 voters who don't feel comfortable in English but instead want Korean? You note the people that leave, because they're part of the electorate and you have to backfill in their information ... somehow. You assign them an identity, what's important, and assume that they're part of the mix in a predictable way.
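The backfill step described above can be sketched in a few lines of Python. All the numbers are made up, and the backfill rule here is the simplest possible assumption (refusers vote like the responders in their visible demographic group); real pollsters use more elaborate adjustments:

```python
# Hypothetical exit-poll tallies: answers and refusals, tagged with the
# demographic the interviewer could observe from a distance.
answered = {
    "older_white":   {"A": 55, "B": 45},
    "younger_other": {"A": 40, "B": 60},
}
refused = {"older_white": 30, "younger_other": 10}

def backfill(answered, refused):
    """Credit each group's refusers to candidates in the same proportion
    as that group's responders -- the 'assume they're part of the mix in
    a predictable way' step."""
    totals = {}
    for group, counts in answered.items():
        n = sum(counts.values())
        for candidate, k in counts.items():
            # responders plus this group's share of its refusers
            totals[candidate] = totals.get(candidate, 0.0) + k + refused[group] * (k / n)
    return totals

print(backfill(answered, refused))
```

If the refusal rates really do differ by group, and refusers within a group don't vote like responders, this step silently bakes the error in, which is the post's point.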

How do you know when your mix matches the population of California that will vote on election day? You made assumptions in picking polling places. You made assumptions about those who don't respond. The assumptions are built on past performance and likely guesses about what'll happen this time. When the raw data come in, they might be biased as a result of bad assumptions: this election isn't the previous election. So early on you'd report only the raw data, because you can't do much with them; large portions of the electorate don't vote before 3 p.m., and data that incomplete are of very limited value.

Later, when there are more data, they're fit into a model that you think will match the electorate. You think 15% of the electorate will be black, your pollsters show 20% were black, you weight things according to your model--the data must be biased. Then when polls close and the actual numbers per precinct come in, you find that in fact black precincts underperformed and were no more than 10% of the electorate, so you reweight your data again.
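The reweighting described above can be sketched concretely. The numbers below are invented to match the example in the post (pollsters observe 20% black respondents, the turnout model says 15%, the actual returns later say 10%):

```python
# Raw exit-poll counts by (demographic group, candidate). 1000 respondents,
# of whom 200 (20%) are black -- all figures hypothetical.
raw = {
    ("black", "A"): 170, ("black", "B"): 30,
    ("other", "A"): 360, ("other", "B"): 440,
}

def weighted_shares(raw, target):
    """Reweight each group so its share of the sample matches the
    turnout model in `target`, then return candidate vote shares."""
    group_totals = {}
    for (group, _), n in raw.items():
        group_totals[group] = group_totals.get(group, 0) + n
    total = sum(group_totals.values())
    # weight = (share the model expects) / (share actually observed)
    weights = {g: target[g] / (group_totals[g] / total) for g in group_totals}
    shares = {}
    for (group, candidate), n in raw.items():
        shares[candidate] = shares.get(candidate, 0.0) + n * weights[group]
    s = sum(shares.values())
    return {c: v / s for c, v in shares.items()}

# First weighting: model says the electorate is 15% black.
print(weighted_shares(raw, {"black": 0.15, "other": 0.85}))
# Reweighting after actual returns show black precincts were only 10%.
print(weighted_shares(raw, {"black": 0.10, "other": 0.90}))
```

With these particular made-up numbers the race flips from 51-49 to 49-51 between the two weightings, which is exactly why the choice of turnout model matters so much.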

We insist that the raw, biased, unweighted data are right when it suits us. We insist that the initial weighting is right when it suits us. We insist that the final weighting is right when it suits us. But it's all predictive statistics--even the actual exit poll results are still rooted in predictive statistics and are only as good as the assumptions baked into the results--until the final electoral poll results are in. We know the exit poll results have a lot of built-in assumptions, so pollsters conclude that it's more likely their a priori assumptions are wrong than the electoral poll results were.

Predictive statistics come with built-in methods of estimating the size and likelihood of error. When the exit polling is wrong, people cite the margin of error. If the actual error is greater than the estimated error, then that's taken as prima facie evidence of electoral irregularities, fraud, cheating. What estimated error *cannot* account for is error in the assumptions and how measurements are taken. Estimated error is what's built into the methodology, and assumes that the only errors are random, with a truly random data set and accurate measurement.
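For reference, the "estimated error" people cite is usually the textbook sampling margin of error for a proportion, which only covers random sampling error under the assumption of a simple random sample, exactly the assumption the post says exit polls violate:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% sampling margin of error for a proportion p from a sample of
    size n. Assumes a simple random sample and accurate measurement --
    it says nothing about bias from site selection, nonresponse, or
    weighting assumptions."""
    return z * math.sqrt(p * (1 - p) / n)

# A 1000-person poll in a 50-50 race: about +/- 3.1 points.
print(f"+/- {margin_of_error(0.5, 1000) * 100:.1f} points")
```

So an exit poll missing by more than this figure tells you *something* is off, but not whether it's the election or the poll's own baked-in assumptions.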
