
Lots of people have questions about the USC/L.A. Times tracking poll; here are some answers

Here’s how the USC/Los Angeles Times Daybreak tracking poll is different from the others.


A lot of readers have noticed that our USC/Los Angeles Times Daybreak tracking poll is different from other polls. Since we started publishing the poll in July, it has had Donald Trump in the lead more often than not. That’s in contrast to overall polling averages.

Here are some of the questions we’ve been asked so far, including why, when something big happens in the campaign, the reaction to it is not immediately reflected in the poll:


How is the Daybreak poll different from other surveys?

The poll asks a different question than other surveys. Most polls ask people which candidate they support and, if they are undecided, whether there is a candidate they lean to. The Daybreak poll asks people to estimate, on a scale of 0 to 100, how likely they are to vote for each of the two major candidates or for some other candidate. Those estimates are then put together to produce a daily forecast.
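
For readers who want the arithmetic, here is a simplified sketch, in Python, of how answers on that 0-to-100 scale might be combined. The respondents below are invented, and the real poll also weights each answer, as explained further down.

```python
# Hypothetical illustration: each respondent rates, 0 to 100, their chance of
# voting for candidate A, candidate B, or someone else. A simple (unweighted)
# forecast just averages those chances across everyone.

respondents = [
    # (chance of A, chance of B, chance of someone else), in percent
    (100, 0, 0),   # certain A supporter
    (60, 30, 10),  # leans A but ambivalent
    (20, 70, 10),  # leans B
]

def simple_forecast(respondents):
    n = len(respondents)
    return [sum(r[i] for r in respondents) / n for i in range(3)]

print(simple_forecast(respondents))  # -> about [60.0, 33.3, 6.7] (A, B, other)
```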

Have you done this before?

Yes, the team of researchers at USC who conduct the poll used the same technique four years ago to forecast the 2012 election.

How’d that turn out?

The poll was one of the most accurate of the year. It predicted that President Obama would be reelected with a margin of victory of 3.32 percentage points. He won by 3.85 points. Most other polls underestimated Obama’s margin by more than that. The 2012 poll was done for the RAND Corp. RAND is doing its own, completely separate poll this year that uses a somewhat different methodology.

Then are you wrong this year?

There’s no way of knowing until the votes are counted. Obviously, the poll’s results have been an outlier compared with other surveys, but if ever there was a year when the outlier might be right, it’s this year.

What else do you ask?

The poll also asks people to estimate how likely they are to vote — again, using the 0-to-100 scale. We also ask respondents about issues relevant to the election.

So someone who is 100% sure of their vote counts more heavily than someone only 60% sure? And someone who says she is 100% certain to vote weighs more heavily than someone only 70% certain?

Exactly. To be technical, each person’s estimated likelihood of voting for a specific candidate is weighted by his or her estimated chance of voting.
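
Here is a minimal sketch of that weighting in code, again with invented respondents. The real poll also applies the demographic weights described further down.

```python
# Each person's candidate probabilities count in proportion to his or her own
# estimated chance of voting (all figures 0-100, hypothetical).

respondents = [
    # (chance of voting, chance of voting for A, chance of voting for B)
    (100, 100, 0),  # certain voter, certain A supporter
    (70, 0, 100),   # 70% likely to vote, certain B supporter
    (40, 50, 50),   # unlikely voter, torn between the two
]

def weighted_forecast(respondents):
    total = sum(turnout for turnout, _, _ in respondents)
    a = sum(turnout * pa for turnout, pa, _ in respondents) / total
    b = sum(turnout * pb for turnout, _, pb in respondents) / total
    return a, b

print(weighted_forecast(respondents))  # (~57.1, ~42.9): sure voters count most
```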

So that means a candidate with very enthusiastic supporters who say they are certain to vote may do better than one with wishy-washy backing?

Yes. Earlier this summer, Trump benefited from this method — he had more supporters than Hillary Clinton who were 100% certain of their vote. Clinton has now caught up on that measure.

Why ask the questions that way?

Lots of people don’t know for sure how they’re going to vote. Forcing them to choose before they are ready can distort what they’re thinking. Asking people to estimate the probability of voting for one or the other captures their ambivalence more accurately. And asking people to estimate their chance of voting allows us to factor in information from everyone in the sample. By contrast, polls that use a likely voter screen can miss a lot of people who don’t meet the likely-voter test but who, in the end, really do vote.

Can people really estimate their preferences that accurately?

As we said, this technique worked very accurately four years ago. It has also been tried in other contexts.

I’ve noticed that something important in the campaign will happen, and the poll doesn’t move until several days later. Why?

Each day, we ask about 450 people to participate in the poll. To give them time to respond and to make sure that we get as many of them as possible, respondents have a week to give us their answers. Each day, we post results that are an average of the previous seven days of responses. Between those two factors — people taking up to seven days to respond and the poll averaging seven days of results — the impact of an event might not be completely reflected in the poll for as long as two weeks. In practice, most people respond within two days, so typically, almost all the impact of an event is factored into the poll within nine days.
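
The effect of the seven-day average is easy to see in a toy example. The daily numbers below are invented; notice that even an overnight shift in opinion shows up in the published figure only one-seventh at a time.

```python
# A seven-day rolling average: each published number is the mean of the most
# recent seven days of daily results (hypothetical values; note day 8's jump).

daily = [44, 45, 44, 46, 45, 44, 45, 51, 50, 51]

def rolling_average(values, window=7):
    return [sum(values[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(values))]

for day, avg in enumerate(rolling_average(daily), start=7):
    print(f"day {day}: {avg:.1f}")
# day 7: 44.7 / day 8: 45.7 / day 9: 46.4 / day 10: 47.4
# The day-8 jump takes a full week of new data to be completely reflected.
```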

Do you contact a different group of people each day?

No. Unlike typical polls, which contact a different sample of people for each survey, the Daybreak poll uses the same panel of approximately 3,200 people, questioning about 450 of them each day in order to get to everyone each week.

Why?

One of the problems polls face is that sometimes partisans on one side are more enthusiastic about responding to questions than those on the other side. Maybe their candidate has had a particularly good week or the opposing candidate has had a bad one. When that happens, polls can suddenly shift simply because of who is willing to respond to a pollster’s call. That problem, called differential response, has been well-documented. By using a panel of the same people, we can ensure that when the poll results change, that shift reflects individuals changing their minds.

If the same people are questioned each week, does that change how they respond?

We haven’t seen evidence of that, either this year or in the past. It’s possible, however, that taking part in the poll could lead respondents to pay more attention to the election.

How do you find the respondents?

Members of the panel were recruited from a large pool of people who take part in a continuing project known as the Understanding America Study, which is conducted by USC’s Center for Economic and Social Research. The members are randomly recruited nationwide. Because the poll is conducted online, we gave a tablet computer and Internet access to those who didn’t already have them.

How do you make sure the people in the panel aren’t skewed to one side or the other?

Our detailed methodology is publicly available. Like all polls, we adjust the sample to make sure that it matches known facts, such as the racial and ethnic makeup of the population, the gender balance and the age distribution. Those are all matched to data from the U.S. census, a process known as weighting.
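
As a simplified illustration of weighting (the groups and percentages below are invented, and real weighting balances many characteristics at once), each respondent can be given a weight equal to their group’s share of the population divided by its share of the sample:

```python
# Toy demographic weighting: underrepresented groups get weights above 1,
# overrepresented groups below 1. Shares below are invented for illustration.

population_share = {"18-34": 0.30, "35-64": 0.50, "65+": 0.20}  # e.g., census
sample_share     = {"18-34": 0.20, "35-64": 0.55, "65+": 0.25}  # e.g., panel

weights = {group: population_share[group] / sample_share[group]
           for group in population_share}
print(weights)  # {'18-34': 1.5, '35-64': 0.909..., '65+': 0.8}
```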

What else gets factored into the weighting?

We ask people if they voted in 2012 and, if so, whom they voted for. We adjust the sample to match that, so 25% are people who say they voted for Mitt Romney in 2012, 27% are people who say they voted for Obama and 48% either did not vote or were too young to vote last time. Using 2012 votes as a weighting factor is designed to get the right partisan balance in the sample and to ensure that we’re also polling people who did not vote last time, a group that can get left out of some other surveys.
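
In round numbers, those targets translate into panel counts like this (a back-of-the-envelope calculation, assuming a panel of about 3,200):

```python
panel_size = 3200
targets = {"said they voted for Romney": 0.25,
           "said they voted for Obama": 0.27,
           "didn't or couldn't vote": 0.48}
for group, share in targets.items():
    print(f"{group}: about {round(panel_size * share)} people")
# about 800 Romney voters, about 864 Obama voters, about 1,536 nonvoters
```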

Can weighting lead to errors?

Sometimes. For example, we know that after an election, some people say they voted for the winner even if they didn’t. That creates a risk when we weight the sample to reflect how people say they voted. Out of our sample of about 3,200 people, 27%, or roughly 860, should be voters who cast ballots for Obama in 2012. If some people who voted for Romney or who didn’t actually take part in the election claim to have voted for Obama, some of those 860 people might not really be Democratic voters. On the other hand, not weighting at all can also skew a sample.

Sometimes, the results for subgroups in the poll, such as African Americans, seem suddenly to change a lot. Why?

All polls are subject to the laws of chance. Sometimes results can change for no reason other than random variation. Random changes are particularly an issue for small sub-samples because the margin of error gets bigger as the size of a sample gets smaller.
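
The textbook margin-of-error formula for a simple random sample shows why. (The Daybreak poll’s actual error estimates are more involved because of the panel design and weighting, but the basic relationship is the same.)

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p with n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A 50/50 split: the full panel vs. a modest subgroup vs. a small one.
for n in (3200, 450, 100):
    print(f"n={n}: +/- {100 * margin_of_error(0.5, n):.1f} points")
# n=3200: +/- 1.7 points; n=450: +/- 4.6 points; n=100: +/- 9.8 points
```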

What if the poll turns out to be wrong?

We’ll learn from the errors. That’s how research works in any scientific field, including polling. The only way to see how new techniques work is to try them.

Detailed results from the USC Dornsife/LAT daily tracking poll »

David.Lauter@latimes.com

For more on Politics and Policy, follow me @DavidLauter



UPDATES:

1 p.m. Oct. 24: This article was updated to note that this year’s RAND Corp. poll and the Daybreak poll are two separate surveys done with different methods.

The article was originally published at 7 a.m. Oct. 7.
