FHQ uses all state-level, trial-heat polls in its averages for each state. We use all the polls available to us since Super Tuesday, when the race for the Democratic nomination officially became a two-person race between two seemingly evenly matched candidates. The argument can be made that Obama was even in the race following his Iowa victory, but he did not fully quash the "flash in the pan" argument until after the split of the contests on February 5.
Also, I use only the polls that avoid the selection bias inherent in internet-based or mail-in polling. As such, the three waves of Zogby Interactive polls are excluded, as are the mail-in Columbus Dispatch polls.
Finally, the data used at this stage in the game is the data attendant to the "likely voter" samples. With a month to go, those samples are more accurate than they would have been only a couple of months ago. Also, in the event that a polling firm posts two different versions of a poll based on whether third-party candidates are included, it is FHQ's policy to take the version with those candidates on the sample ballot.
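For those who would rather see those inclusion rules spelled out explicitly, here is a rough Python sketch. The data structure and field names are illustrative assumptions, not a description of FHQ's actual bookkeeping.

```python
# A minimal sketch of the inclusion rules above: drop internet and mail-in
# polls, and when a pollster releases multiple versions of the same survey,
# prefer the likely-voter numbers and the ballot that lists third-party
# candidates. The fields below are illustrative, not FHQ's actual data.
from dataclasses import dataclass

EXCLUDED_MODES = {"internet", "mail-in"}  # e.g., Zogby Interactive, Columbus Dispatch

@dataclass
class PollVersion:
    mode: str           # "phone", "internet", "mail-in", ...
    sample: str         # "LV" (likely voters) or "RV" (registered voters)
    third_party: bool   # third-party candidates on the sample ballot?
    obama: float
    mccain: float

def pick_version(versions: list[PollVersion]) -> PollVersion | None:
    """Return the preferred version of a single survey, or None if the
    survey is excluded outright (internet-based or mail-in)."""
    usable = [v for v in versions if v.mode not in EXCLUDED_MODES]
    if not usable:
        return None
    # Prefer likely-voter samples, then ballots with third-party candidates.
    return max(usable, key=lambda v: (v.sample == "LV", v.third_party))
```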
Why use those past polls at all?
Indeed, why not just use the most recent poll or polls like everyone else? Well, if I'm just doing what everyone else is doing, why even do it? I can quit now and go look at what Pollster or Real Clear Politics, to name just a couple, have to say on the matter. That's part of the reasoning, but the main reason for the inclusion of past polls is to avoid the volatility of polling. FHQ doesn't want fluctuation for the sake of fluctuation. If one poll is an outlier, fine, but that one poll should not be able to fundamentally shift the average and the projected outcome of any given state. The past polls are included because they represent the feelings of a group of respondents at a particular point in the race. Those feelings may be latent in the current environment but, in FHQ's estimation, should be accounted for in some way, shape or form. If the McCain campaign were able to effectively make Jeremiah Wright an issue again, we could return to some degree to the polling distribution of that period. Will that happen? Maybe, maybe not, but it will be controlled for nonetheless.
How do you determine which states go into which categories on your map?
Early on in this process, it was simply a matter of averaging the polling data we had at our disposal. But as new polling data emerged, the older data served as an anchor on the trends of the race -- at that time, in the midst of the Democratic nomination battle. From May through the close of the nominating phase of the race, FHQ took the average of a state's polls, but discounted all but the three most recent polls. Following Clinton's withdrawal from the race, we took the opportunity to tweak that yet again, discounting all but the single most recent poll in a given state. The goal then was to make the average more responsive to developing trends in the race, but not so responsive that a single poll fundamentally shifted the outlook in a state.
How exactly are you weighting those past polls?
That responsiveness balance is an important element here. Lately, as the polls have trended toward Obama, FHQ's averages have stagnated, moving very little in the face of the Obama-friendly polls that have come out in the wake of the economic situation on Wall Street. So we have once again fine-tuned our formula in the hopes of being responsive to a new direction in the race, but not simply responsive to one potential outlier poll.
As I said in Saturday's electoral college post, our method of averaging serves us well in most states, but the exceptions are potentially consequential to the race for the White House. If you look at the Electoral College Spectrum, for example, that rank ordering of the states seems about right. The underlying averages in states like Florida, Minnesota and North Carolina, though, place them outside of where the current trend would likely put them. At issue is the weighting formula for all the past polls stretching back to Super Tuesday. All but the most recent poll had been discounted at the same rate, and that meant that polls in March were treated the same as polls in September. Under the old configuration, that most recent poll counted as two-thirds of the average, and all the other polls, treated with a blanket discount rate, accounted for the remaining one-third.
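To make that old configuration concrete, here is one way to read it in code: the most recent poll enters at full value, the block of older polls is cut in half by the blanket discount, and the two pieces are renormalized. The arithmetic is a reconstruction of the description above rather than the exact formula, and the margins are purely hypothetical.

```python
# One reading of the old configuration: the most recent poll at full value,
# the mean of all older polls cut in half by the blanket discount, and the
# result renormalized so the most recent poll accounts for two-thirds of the
# average. Margins here are hypothetical and used only for illustration.
def old_average(most_recent_margin: float, past_margins: list[float]) -> float:
    past_block = 0.5 * (sum(past_margins) / len(past_margins))  # blanket 50% discount
    return (most_recent_margin + past_block) / (1.0 + 0.5)      # 2/3 vs. 1/3 split

# A new +6 poll against older polls averaging +2 comes out to (6 + 1) / 1.5 = 4.67,
# and a poll from March counts exactly as much as one from September.
print(old_average(6.0, [1.0, 2.0, 3.0]))  # 4.666...
```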
As I explained above, FHQ's practice has been to discount each past poll at the same rate. However, that is likely causing problems for the averages in some states. There is, then, a need to re-examine those weights specifically. The method we have settled on is what we are calling a graduated weighted average, which discounts polls from February more heavily than more recent polls from August or September.
So, how exactly is FHQ weighting those past polls? The first step was to determine how many days there will have been between Super Tuesday (February 5) and election day (November 4). There are 273 days counting November 4, but that number won't be useful until that actual day. The real point of that determination is to assign a number to each date in between. February 6, then, was day one, and yesterday, October 6, was day number 244. To determine the weight, the day number of the median point at which a poll was in the field is used as the numerator, while the day we are currently on -- today's numbers reflect yesterday's changes, so 244 -- is the denominator. That fraction gives us the weight of any given poll, and the poll's numbers are then multiplied by that weight.
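In code, that weighting looks something like the sketch below. The day numbering follows the description above (February 6 is day one; October 6 is day 244); the function names are illustrative, not FHQ's.

```python
# A minimal sketch of the graduated weight described above: day numbers are
# counted from Super Tuesday, with February 6 as day 1, and each poll's weight
# is its median-field-date day number over the current day number.
from datetime import date

SUPER_TUESDAY = date(2008, 2, 5)

def day_number(d: date) -> int:
    """February 6, 2008 -> 1; October 6, 2008 -> 244; November 4, 2008 -> 273."""
    return (d - SUPER_TUESDAY).days

def graduated_weight(median_field_date: date, current_date: date) -> float:
    """Weight = (day number of the poll's median field date) / (current day number)."""
    return day_number(median_field_date) / day_number(current_date)

# A poll with a median field date of September 21 (day 229), evaluated on
# October 6 (day 244), carries a weight of 229/244, roughly 0.94; a poll from
# March 1 (day 25) carries only 25/244, roughly 0.10.
print(round(graduated_weight(date(2008, 9, 21), date(2008, 10, 6)), 2))  # 0.94
```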
However, there is one more twist to add. On its own, the effect of this change is only at the margins. Why? Well, there are a couple of things happening here. First, the graduated weighting essentially averages out to the blanket weight applied to all past polls before. There are differences, but they are minimal in most cases. The other, related issue is that the relative weight of the most recent poll shrinks after the reweighting of the other polls. The blanket discount rate on past polls basically cut each past poll's value in half. Now that polling frequency has increased, though, there are a lot more polls at greater than 80% value, and that threatens the preeminent position of the most recent poll; the mass of past polls becomes too much of an anchor on it. To confront this problem, and to give the most recent poll a little more oomph, we cut the graduated weights of the past polls in half. Relative to each other, then, the past polls are treated with the same basic weights they had before, but relative to the most recent poll they have been minimized.
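Putting those pieces together, a full state average under this scheme might look like the sketch below. The normalization by the sum of the weights is an assumption implied by "average" rather than spelled out above, and the margins in the example are hypothetical.

```python
# A rough sketch of the full calculation: graduated weights for every poll,
# past polls' weights cut in half, the most recent poll left at full weight,
# and the result normalized by the sum of the weights (the normalization is
# an assumption, not something stated explicitly above).
from datetime import date

SUPER_TUESDAY = date(2008, 2, 5)

def day_number(d: date) -> int:
    return (d - SUPER_TUESDAY).days

def graduated_average(polls: list[tuple[date, float]], current_date: date) -> float:
    """polls: (median field date, margin) pairs, ordered oldest to newest."""
    current = day_number(current_date)
    *past, (recent_date, recent_margin) = polls
    weights = [0.5 * day_number(d) / current for d, _ in past]  # halved past weights
    values = [m for _, m in past]
    weights.append(day_number(recent_date) / current)           # full weight
    values.append(recent_margin)
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Hypothetical margins: three older polls and a new +6 poll on October 4,
# evaluated on October 6. The older polls temper the newest number without
# erasing it.
polls = [(date(2008, 3, 1), 1.0), (date(2008, 6, 15), 2.0),
         (date(2008, 9, 21), 4.0), (date(2008, 10, 4), 6.0)]
print(round(graduated_average(polls, date(2008, 10, 6)), 2))  # ~4.73
```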
Why are the thresholds between categories on the map where they are?
For much of this process, the threshold between a strong state for either candidate and a lean state was arbitrarily set at a 10-point margin. Likewise, the margin separating a lean state from a toss-up state was 5 points. However, as we have approached election day, it has obviously become more difficult for a trailing candidate to make up enough ground to become competitive in a state, let alone overtake the other candidate there. In a nod to that fact, the thresholds were dropped to 9 points and 4 points, respectively, following the first debate. After the final debate, with just under three weeks left in the race, the threshold between the toss-up and lean states will be dropped again, to 3 points. At that point, it probably will not be necessary to discuss the race in terms of three categories; it will be a question of which states are close and which are not. FHQ will, however, evaluate where the potential breaking point is between the lean states and the strong states. It may no longer be necessary to talk about lean states, but that distinction does add an element of clarity to how we perceive all the states in relation to each other.
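In code form, those cutoffs amount to a simple classification on the absolute margin of a state's weighted average. The thresholds come from the description above; the function itself and the choice of inclusive boundaries are illustrative.

```python
# The category cutoffs described above, applied to the absolute margin of a
# state's weighted average. The 9/4 defaults reflect the post-first-debate
# thresholds; whether a state sitting exactly on a cutoff is bumped up or
# down is an assumption here, not something specified above.
def categorize(margin: float, strong_cutoff: float = 9.0, lean_cutoff: float = 4.0) -> str:
    m = abs(margin)
    if m >= strong_cutoff:
        return "Strong"
    if m >= lean_cutoff:
        return "Lean"
    return "Toss Up"

# A +4.7 average is a lean state under the current 9/4 cutoffs, but would have
# been a toss up under the original 10/5 cutoffs.
print(categorize(4.7))                                      # Lean
print(categorize(4.7, strong_cutoff=10.0, lean_cutoff=5.0)) # Toss Up
```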