
A Pollster Helps Us Manage Our Election Anxiety

22.10.2024


Transcript

This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email transcripts@nytimes.com with any questions.

And that’s how you ended up with the focus group of conflicted conservative “New York Times” columnists who —

The last truly undecided voters in America. The call is coming from inside the House.

That’s right. It’s just going to be me and Bret Stephens in your next focus group doing extensive therapy together.

Oh, I’d pay to watch that. I’d so pay to watch that.

From “New York Times Opinion,” I’m Ross Douthat.

I’m Michelle Cottle.

I’m Carlos Lozada.

And this is “Matter of Opinion.”

[MUSIC PLAYING]

We are less than a month away from the 2024 election. And through all the chaos, all the drama, all the excitement, the polls have been there to scare us, comfort us, infuriate us, or simply leave us totally baffled about what’s actually going on. Right now, “The New York Times” polling average, the most trustworthy of all polling averages, shows —

The gold standard.

Why the laughter? We set the bar high. It shows Kamala Harris and Donald Trump essentially tied across seven key battleground states, and that’s remained basically unchanged for weeks. So it’s a toss-up, or maybe it’s not because a lot of people on both sides don’t trust the polls. Even the polling averages, even “The New York Times” polling average, because — I know — because they’ve been wrong before. Maybe especially in elections where Trump is on the ballot.

So we wanted some help interpreting the numbers and whatever story they’re telling us about voters. What’s different from 2016 and 2020? What happens if the polls really get it wrong or if they get it right? So to help with all that, we’ve invited Kristen Soltis Anderson to join us. Hi, Kristen.

Hello, Ross.

Thank you for helping us.

Yes. So Kristen, her qualifications for helping us are that we pulled her randomly off a street corner and — no. She is a Republican pollster. She’s a contributing writer to “New York Times Opinion.” And she also moderates “New York Times” focus groups, some of the best focus groups in —

These are what melt my butter.

They’re super focused.

OK. But first, we’re going to start. We brought you in, Kristen, just to tell us who’s going to win. Go.

I have an extremely satisfying answer for you, which is that you should prepare yourself for a wide range of outcomes. Anything is possible.

Wait, wait, wait, wait, wait — aren’t there only two outcomes?

Jill Stein.

What’s the wide range?

A wide range of outcomes is that we could conclusively know on Wednesday morning that Donald Trump has been elected, we could know conclusively on Wednesday morning that Kamala Harris has been elected, or we could not know for weeks.

Yeah, I’m going with option C because that’s just how my life is this year. I’m just saying.

Which of those outcomes do you find most likely, Kristen?

I know. That’s a very fancy way to try to push me into a prediction. I honestly think that at this point, making any kind of prediction, even one that is a not so firm prediction, almost feels irresponsible because knowing what I know about how precise polls aren’t, seeing how close they are, even if you assume that they are absolutely on the money, it would be irresponsible to make a prediction. And that’s to say nothing of the very real possibility of some kind of systematic polling error, which could happen in either direction. So that is why I am deeply reluctant to do anything close to giving a prediction.

OK. That brings me to a very big question I had for you, which is, what do people misunderstand about what polling can and cannot do well?

So the analogy that I have been using is that a poll is to prediction as your bathroom scale would be to measuring ingredients for a baking recipe. Like you just — you wouldn’t really do it. And it doesn’t mean that your bathroom scale is wrong. It doesn’t mean that it is inaccurate. It means that it is not built to be precise enough to tell you that you have exactly the right number of grams of flour.

And so if you use your bathroom scale instead of a kitchen scale, you are going to get some kind of baking monstrosity that comes out of your oven that is not correct.

Wait, so just to be clear on the metaphor, in this metaphor, you are weighing the ingredients themselves on the scale. You’re not putting your body on the scale and trying to figure out — no, that was what I — I just want to understand —

I thought she was going in a different direction, too, like putting your body on the scale today to predict how much you’re going to weigh in November.

How often do any of you bake? Question before I go any further, because this could explain a lot of this.

I grill. I grill.

That is not the same thing at all.

So the kitchen scale gives you precision.

Yes. And so your bathroom scale may be accurate. It just isn’t built to do certain levels of very precise measurement. It just isn’t. And so for a poll even well-conducted, you will have a margin of error of plus or minus 3 percent, 4 percent for some of these statewide polls. And what a lot of people don’t understand about that is that the margin of error applies to each individual number in the poll.

So let’s say that I have the new Michigan poll that came out that showed Harris 46, Trump 46. The margin of error of plus or minus 3 or 4 points applies to both of those numbers.

So it could be 49 to 43.

Look at that math.

At the 95 percent confidence interval, which I won’t go bore everybody too deep with.

That’s not boring. That’s why you’re here. That’s fine.

Well, the idea is just that this is just how statistics work. And so when a poll goes from, hey, last month you had Harris up by 2, now you have Harris up by 3, a lot of times someone like me will get asked, what has caused her surge in the polls? And I’m like, it could honestly just be complete randomness.
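
A minimal sketch of that arithmetic, in Python, using a hypothetical statewide sample of 800 respondents (no sample size is given in the conversation): it computes the roughly 95 percent margin of error for each candidate's number and shows why a one-point month-to-month move can sit entirely inside the noise.

import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    # Approximate 95 percent margin of error, in percentage points,
    # for a single proportion p from a simple random sample of size n.
    return z * math.sqrt(p * (1 - p) / n) * 100

# Hypothetical poll: Harris 46, Trump 46, n = 800 respondents.
n = 800
for candidate, share in [("Harris", 0.46), ("Trump", 0.46)]:
    moe = margin_of_error(share, n)
    low, high = share * 100 - moe, share * 100 + moe
    print(f"{candidate}: {share * 100:.0f} plus or minus {moe:.1f} points (roughly {low:.0f} to {high:.0f})")

# Each candidate's number carries its own error band of about 3.5 points,
# so a reported 46-46 tie is consistent with a modest lead for either side,
# and a move from "Harris +2" to "Harris +3" between polls can be pure noise.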

OK. But wait a minute, Kristen. This is all very persuasive and convincing. But isn’t this why we have the magic of poll averaging? Isn’t this — wasn’t poll averaging the genius invention of RealClearPolitics and Nate Silver, whoever you want to give credit for, wasn’t it supposed to at least minimize this margin of error problem a little bit?

Yes. It is supposed to minimize the risk that you are drawing big conclusions based on one data point that could be an outlier. So polling averages are intended to keep those from driving our understanding of what’s going on in the race too aggressively. So whenever a poll comes out that looks very strange, that’s why you can only say keep calm and throw it in the average. Don’t let any one individual poll determine your mental or emotional state that day. Just accept that sometimes weird polls happen.

Unless it’s a New York Times/Siena poll, in which case —

In Nate Cohn, we trust. I really do. I’m not just saying that because I’m now a part of The Times family. But I really do think that The Times/Siena polling partnership is really exceptional. And I like that they are willing to put out numbers that sometimes bounce around a lot. I think it causes a lot of people real heartburn. But in the real world, this is how polling works.

It’s also important to remember that the vast majority of polling done by someone like me, who is a professional in the field, is not intended for public consumption to help people predict elections. To the extent that it is polling done for political purposes at all, it’s usually to help a campaign or a client figure out how to change that number.

What groups would you need to speak to? What messages would you need to drive in order to make your 3-point deficit turn into a 3-point advantage? And that’s, I think, another thing people really miss when they say, well, political polling is all broken: a lot of it is happening out of sight of the public and for purposes that have nothing to do with predicting how an election will go.

All right. So I want to ask you this —

Can I ask you one tiny thing about the polling averages? May I?

You may. You may, yes.

So when you hear the term polling average, it has this sense of authority because it says it’s not just a single one. We’re taking in all these different polls. And this should give you a better sense of what is actually going on. And when I look at the lists of the polls that are included in the average, not being a professional myself, I have no idea of what some of these things are, who these places are, who these people are. I’ve never heard of some of them. And they’re there in the average. Like, how many of the polls in these big averages do you actually trust in terms of their approaches and their methods?

So I try not to be too biased against a pollster that I’ve never heard of, so long as they are very transparent about their methods. And that is not always a given. And usually, for many of these polling aggregators, they are not counting each poll equally. They know not all polls are created equal. And so —

It’s not a simple average. It’s like it’s a weighted average.

It’s not a simple average. It is usually a weighted average. And this is where things get really complicated. Every pollster has their own unique approach to how they do survey research. So a reminder that this is in some ways as much an art as it is a science.

And then you have these aggregators where each aggregator has different assumptions about which polls they trust and don’t trust. So a 538 is going to give different weight to different polls than Nate Silver’s Silver Bulletin, than The Times polling average, than RealClearPolitics. So even within that, there are these different averages and they look different because they are counting each poll as having slightly more or less weight, depending on what the average maker believes.
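
As a rough illustration of what a weighted average means here, a minimal sketch in Python with invented polls, ratings and a decay rule; it does not reproduce any actual aggregator's weights or formula.

from dataclasses import dataclass

@dataclass
class Poll:
    pollster: str
    margin: float    # Harris lead in points; negative means a Trump lead
    days_old: int    # days since the poll finished fielding
    rating: float    # how much the aggregator trusts this pollster, 0 to 1

def weighted_average(polls: list[Poll], half_life_days: float = 14.0) -> float:
    # Each poll counts in proportion to its pollster rating,
    # discounted by how old it is, so stale polls fade out.
    numerator = denominator = 0.0
    for p in polls:
        weight = p.rating * 0.5 ** (p.days_old / half_life_days)
        numerator += weight * p.margin
        denominator += weight
    return numerator / denominator

# Invented polls of the same state.
polls = [
    Poll("Well-rated university poll", +1.0, days_old=3, rating=0.9),
    Poll("Lesser-known online panel", +4.0, days_old=2, rating=0.4),
    Poll("Partisan-leaning survey", -2.0, days_old=10, rating=0.5),
]
print(f"Weighted average: Harris {weighted_average(polls):+.1f}")

Change the ratings or the decay and the same three polls produce a different average, which is one reason the various averages disagree.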

OK. I know that seemed kind of technical, but that really helped me. Thank you.

OK, wonderful. I live to serve.

And for the poll geeks out there, Carlos, you should know that there are people who actually rate the pollsters.

But who rates —

Who rates the raters?

That’s right. Jinx.

You guys are so dorky. I swear.

We are. All right. So, Kristen, but you’ve been making a very powerful case for a certain degree of basic humility, agnosticism, all of these things around polling. But I’m just curious, how unusual do you think it is to have an election that is this close?

So there are two things that make this election unusual, and it is both the closeness of the election and the unbelievable stability of the polling averages. So as a thought experiment, last week, I went back and I looked at the RealClearPolitics polling averages going back to the 2008 election, because many of the other sites have not been around that long. RealClearPolitics has been around since the dinosaurs roamed the Earth.

I tried to get their 2004 averages, but I couldn’t get it to display properly on my computer, like the internet doesn’t work the same as it did in 2004, I guess. But what I did was I looked at all of the polling averages, national polls from Labor Day on, so the final two months of the election.

And what you find is that in the last two months of most of the elections of the last 20 years, the polls move a fair amount. In the McCain-Obama race, I believe they range anywhere from McCain being up by 3 to Obama being up by 7. In the Obama-Romney race, the movement is a little less aggressive, but still they swing like a 6-point range.

In 2016, the polls also had about a 5 or 6-point range showing Hillary Clinton either up by 1 or up by, I think, 6 or 7. And even last time around, they still had about a 4-point range of movement in the final two months. And so then I looked at this month and I looked at all the polls that have been done since Labor Day, and the movement was something like Kamala Harris plus 1.4 to Kamala Harris plus 2.2, like less than a full percentage point of movement from her best to her worst in the polling averages. So that to me is the feature of the polling in this election that is the most astonishing, both the closeness and the stability.

I have to admit, I follow the college football polls more closely than I follow the presidential election polls. And that’s because the college football polls change from time to time. Someone wins, someone loses, and it’s different, right. It’s Texas or it’s Bama or it’s somebody else on top.

It took one party actually dumping its candidate to change the makeup of this race, according to the polls. So I feel like I’ve internalized the fact that this is a coin flip election, that it’s an incredibly stable polling environment. And so I have a hard time like focusing too much on the polls for precisely the reason you outlined, Kristen, that they’re just — it’s just this extraordinary stability.

So if we know what they don’t do well, what should we be looking to polls for? Tell us.

What I think polls are useful for is telling us the story underneath the surface. So what are the issues that people say matter to them? What are the attributes that they think Donald Trump has over Kamala Harris or that Kamala Harris has over Donald Trump? I do think it’s really interesting that the polls have consistently shown Donald Trump overperforming expectations among Black men. And I think it’s really interesting that the polls are consistently showing Kamala Harris kind of overperforming expectations among senior women.

It’s less the prediction. I’ve described election polling that tries to predict as like going up to the Christmas tree and shaking the presents on Christmas Eve to figure out what’s inside. Do I get Bob Casey or do I get Dave McCormick? Like, what’s in the Pennsylvania Senate box?

On the other hand, I like understanding what’s going on in America through the other questions we ask in a poll. Like, I think it’s really interesting that when you ask voters, do you approve or disapprove of the job Joe Biden is doing as president? And then in the same poll you ask, did you approve or disapprove of the job Donald Trump did as president? What interests me is that there’s a difference between people who approve of Biden and people who remember approving of Trump, and that benefits him. That makes people think more fondly of him. That’s why this race is so competitive. Those are the kinds of things I look for in polls that make me think they have real value.

So conceding that is important and valuable, I want to pull us back to the Christmas tree box shaking because, well, I think that —

Who can resist that?

But I think that many people’s experience of both 2016 and 2020 was just a sense that the polls just did not adequately capture Donald Trump’s support. I remember going into 2020, it seemed reasonable to expect close to a Joe Biden landslide. And obviously, we got a much narrower election with substantial polling misses, including, since we’re praising ourselves here at The Times all the time in this episode, by our own Times/Siena polling.

So I just want, Kristen, to talk about those errors in particular, because I know that pollsters take them very seriously. But what is your quick theory of what went wrong in 2020 or 2016? If you had to explain in the simplest terms, why do you think polling missed Trump supporters in those elections?

Sure. So I know you said quick, but I’ll be a little less than quick —

Ish, ish.

— a little bit of the history, just a little bit of background of polling error over the last few elections. So 2012 was an election where the polls missed by overestimating Republicans. They did so for a number of reasons, in part missing late movement in the polls, missing unlikely voters, and doing that in part by not calling enough people on cell phones.

So polls were very heavily done by landline phone back then. Younger people, voters of color, unlikely voters were more likely to use cell phones. There you go. So pollsters fix that problem. And they get to 2016 and they go, great. We fixed the last problem. And then the 2016 election happens.

And the post-mortem finds that a big reason the polls were wrong is they were systematically undercounting voters without college degrees, which had been a variable that was not really important in elections past, but had become quite important with Donald Trump on the ballot. So then we get to 2020 and pollsters go, OK, we fixed it. We’re weighting on education. We’re making sure we have enough of those non-college educated voters in our poll. And still, as you noted, they were wrong.
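
A minimal sketch, with made-up numbers, of the education weighting being described: if voters without college degrees are assumed to be 60 percent of the electorate but are only 40 percent of the people who answered, each of their responses is counted more heavily.

# Made-up raw sample vs. an assumed electorate, split by education.
sample_counts = {"college": 600, "non_college": 400}       # respondents reached
population_share = {"college": 0.40, "non_college": 0.60}  # assumed electorate

total = sum(sample_counts.values())
weights = {
    group: population_share[group] / (count / total)
    for group, count in sample_counts.items()
}
for group, w in weights.items():
    print(f"{group}: each respondent counts as {w:.2f}")
# college: each respondent counts as 0.67
# non_college: each respondent counts as 1.50

# The catch: weighting only fixes having too few of a group. If the
# non-college voters who do answer differ from the ones who don't,
# the weighted poll can still miss, which is one hypothesized reason
# the 2020 polls were off even after the 2016 fix.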

And so the industry, AAPOR, the big association for pollsters, put together a big commission and did a big study. And the report came away with a handful of potential reasons why the polls were wrong, but no single conclusive answer. It ranged from things like COVID made stuff weird to we’re still missing —

Well, COVID, I’m sure, did make stuff weird.

And COVID did make stuff weird, let’s be clear.

But the COVID theory, right, as I understood it, was that during COVID, liberals were more likely to be home because they were more likely to be following lockdown procedures and social distancing. And so they were more likely to answer the phone or at least that was one — that was literally one theory, right. That summer, 2020, liberals were sitting home wearing three masks and ready and willing to answer the phone. And conservatives were out at Disney World, which this is an exaggeration. But that’s the theory I’ve heard.

So that’s one way to describe a piece of the COVID made things weird umbrella theory. I think it was really hard for pollsters to make assumptions about what turnout would look like. And as cliche as it is to say, it all comes down to turnout. And when you have a once in a lifetime global pandemic, it was hard for people —

We had to get that quote.

We had to.

We have to get that quote at some point in the episode.

Kristen, one question about it. If pollsters are always kind of looking at what just happened and seeing what they got wrong, or maybe what they under- or overestimated, is there a problem that, if you’re always sort of compensating for past errors, you’re always kind of fighting the last war when, in fact, the electorate shifts in its habits and its motives from election to election?

So it’s good to fix the things that were wrong last time because those things will probably stay problems moving forward. But it doesn’t mean you’ve solved the problem fully. And that’s why I remain so worried about this year, because the conclusions coming out of 2020 didn’t give pollsters a ton that was really concrete they could do to fix what went wrong.

I mean, something that was, in my view, pretty strange about the polls in 2020 is you would have demographically similar states that were geographically adjacent where the polls were not off in the same direction by the same magnitude. So I think, for instance, in Georgia, the polls were actually pretty good. But in Florida, the polls were way off. In Arizona, the polls were pretty good. But in Texas, the polls had been saying, hey, maybe Biden will win Texas. Spoiler alert, Biden did not win Texas.

The white whale.

And so that to me was the thing that was more worrisome is that in the past, you’ve had these clear patterns for what went wrong and what needs to be fixed. And I think coming out of 2020, it was a little bit of a, well, that was strange, wasn’t it? Maybe it’ll go back to being better when there’s not a massive global respiratory pandemic.

And for a firm like mine, we’ve tried doing things like this, weighting to past vote history, but you do see things in the data that look — you could be a pollster who looks at your data and says, I don’t know that I buy that senior women are breaking for Harris by this much, or I don’t know that I buy that young men are breaking for Donald Trump by this much. But anything you do to try to correct for that is just introducing your own assumptions into the poll.

And so that’s the other worry pollsters have is you can be very........

© The New York Times

