by Ken Pomeroy on Monday, November 1, 2010
You might be surprised to hear this, but I’m a big fan of the pre-season AP poll. There is no doubt poll participants have their biases in the pre-season. They may tend to over-estimate the importance of the previous postseason, especially when a team needed more than its fair share of luck to advance. But otherwise, whatever biases are present are uniquely individual, and in the collection of 70 or so ballots, those biases are cancelled out, leaving a useful signal. The end result is that it provides a better picture of the state of college hoops before the season begins than any single person or algorithm could produce. It’s informed groupthink at its finest.
While experts naturally try to declare which teams are too high or too low in the polls, I imagine their success is like financial experts trying to pick a winning mutual fund. Some people are going to be right some of the time, but if we tracked such things, I think we’d find that the groupthink found in the polls would be tough to beat by any one person in the long run.
It’s when the games start being played that things fall apart. Then, each voter’s bias becomes the same. You lose, you drop in the poll. You win, and you move up. (I exaggerate slightly. Later in the season at the bottom of the poll, there’s some flexibility. I’m primarily referring to the top ten or so teams here.)
However, in the preseason, voters are free from such restrictions. With every team at 0-0, there is also no conflict in voting a team with a worse record over one with a better record, another thing that mid-season voters try to avoid. With the voters having to use their hoops expertise as opposed to adhering to certain conventions, you end up with an accurate picture of which teams are truly the best.
As things stand, the in-season polls are not very useful in this regard. During the season, the AP poll isn’t a ranking of the best teams at that moment. If you doubt me, here’s an example.

Tournament performance of the AP #1-ranked team since 1990 (cumulative counts of advancing at least that far: Win = won the title, CH = title game, F4 = Final Four, E8 = Elite Eight, S16 = Sweet 16, R2 = second round, R1 = first round):

|              | Win | CH | F4 | E8 | S16 | R2 | R1 |
|--------------|-----|----|----|----|-----|----|----|
| Preseason #1 |   6 | 10 | 12 | 14 |  16 | 20 | 21 |
| Final #1     |   3 |  6 | 10 | 13 |  17 | 21 | 21 |
Six times the preseason #1 has won the national title compared to three for the top-ranked team at the end of the regular season. The preseason #1 has made it to the title game a total of 10 times compared to just six for the final #1. It’s stunning to me that armed with 25-30 games of additional information, the writers’ ability to identify the nation’s best team* gets worse!
Now, you’re saying, “Ken, the ratings aren’t designed to identify the best teams! No pollster is doing that!” My response to you would be “Stop yelling.” And then I’d ask: why aren’t they doing that? And what, exactly, are they trying to do? You probably can’t answer that last question, and I couldn’t either, so I contacted the AP to find out what instructions are given to voters. They did not respond. Fortunately, like magic, the guidelines appeared via Jerry Tipton a few days later and are reprinted here.
- Base your vote on performance, not reputation or pre-season speculation.
- Avoid regional bias, for or against. Your local team does not deserve any special handling when it comes to your ballot.
- Pay attention to head-to-head results.
- Don’t hesitate to make significant changes in your ballot from week to week. There’s no rule against jumping a 16th-ranked team over the eighth-ranked team, if No. 16 is coming off a big victory and No. 8 squeaked by an unranked team.
- Teams on NCAA probation ARE eligible for the AP poll.
These instructions are nearly identical to the ones for the football poll, and don’t have enough clarity for my taste (and are impossible to apply to a pre-season ranking).
For instance, who decided that head-to-head results are important? It’s a completely arbitrary suggestion. In fact, head-to-head results are not a very good indicator of superiority. Put emphasis on that line of thinking and you end up believing that Georgetown is better than Duke, or that Vanderbilt is better than Tennessee. Furthermore, by January you are going to go crazy trying to keep all of the head-to-head results consistent on your ballot.
Research I did for College Basketball Prospectus 2008-09 showed that a team that beats an opponent at home by 10-19 points ends up losing the re-match against the same opponent about half the time. Now, think about this. You should quickly realize that even a dominant home win in isolation provides very little information. Sure, if Missouri beats Iowa State at home by double-digits this January, you are going to feel like they have better than a 50/50 shot of winning the rematch in Ames. But that judgment has much more to do with the information you have about the other games each team has played.
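A quick way to see why a dominant home win in isolation tells you so little is a Monte Carlo sketch. To be clear, this is my own toy model, not Pomeroy’s actual study: the strength spread, home-court edge, and scoring-noise figures below are assumed round numbers, chosen only to illustrate the conditioning argument.

```python
import random

# Assumed model parameters (illustrative, not from the Prospectus study):
STRENGTH_SD = 6.0   # spread of true ability gaps between paired teams (points)
HOME_EDGE = 4.0     # assumed value of home-court advantage (points)
GAME_SD = 11.0      # assumed single-game scoring noise (points)

random.seed(0)
wins = trials = 0
while trials < 100_000:
    gap = random.gauss(0, STRENGTH_SD)                    # true edge of team A over team B
    first = gap + HOME_EDGE + random.gauss(0, GAME_SD)    # game 1: A at home
    if not (10 <= first <= 19):                           # keep only 10-19 point home wins
        continue
    trials += 1
    rematch = gap - HOME_EDGE + random.gauss(0, GAME_SD)  # game 2: A on the road
    if rematch > 0:
        wins += 1

print(f"A wins the road rematch in about {wins / trials:.0%} of simulated pairs")
```

Under these assumptions the road rematch comes out close to a coin flip, echoing the roughly 50/50 finding: conditioning on a double-digit home win nudges the inferred ability gap up only a little, and the rematch flips the home edge the other way.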
I agree that one shouldn’t hesitate to make big changes if something significant happens. But also consider that even the best team in the land is going to lose multiple games this season. If you thought Team X was the best team in the land and it loses to a quality opponent on the road, it’s not always logical to drop it below teams that merely had easier games the previous week. As the record of #1 teams above shows, your earlier gut instinct that Team X was the best in the land is probably still very good.
And what exactly does “base your vote on performance” mean? It sounds great, but it’s apparent that voters put much more stock in achievement than performance. When you read various participants defend their poll, they often talk about which team is better as if that is their criteria, but really they just end up following the previously-described lose-and-you-drop convention as the season progresses.
If you think those conventions are archaic, I would have agreed. But the AP poll’s history indicates that these customs are more modern than ancient. Take January 19, 1960. The previous week, #1 Cincinnati lost by one point at #4 Bradley. Both teams had one loss. Neither #2 nor #3 lost the previous week, and each of those teams also had just one loss. It’s pure speculation, but it’s hard to imagine the Bearcats keeping the top spot under a similar scenario in 2010. You’d expect that back when the Cold War was heating up, voters would have been oblivious to the notion that a one-point win on your home court is not evidence of superiority. Supposedly we know better now, yet somehow in 1960, Cincinnati remained at #1 when the new poll came out and stayed there for the rest of the season.
I should note that Cincinnati had also easily beaten Bradley weeks earlier, so certainly that factored into the situation. Still, I stand by my assertion. They would have dropped in 2010, because the #1 team always drops in modern times.
This case was one of 11 occurrences since the AP poll was introduced in 1949 in which the top team lost and maintained the #1 spot through a week when #2 did not lose. Ten of those cases came before 1984. The only case since was rather extreme: when Illinois suffered its first loss in the final game of the regular season (by one point, on the road), no other top-ten team had fewer than three losses. Isn’t it possible for the best team to lose and still be the best team? 1991 UNLV isn’t happening again. The best team will lose the occasional game, especially if it plays a difficult schedule.
The flip side is also illustrative. There have been 12 cases of a #1 team being dropped without losing a game the previous week (at least ten games into the season). The last was in 1983. And even that was because a loss suffered by #1 Memphis State the previous week occurred too late to be considered by pollsters. Thus, they remained at #1 with one loss before dropping a week later, still with one loss.
The most recent legitimate case is from 1981 when unbeaten Oregon State and Virginia swapped places in February. One would think it’s possible that the #2 team can do something impressive enough to move up, especially if #1 is unimpressive in winning. But apparently that hasn’t happened in 30 years. The rules are in place and if you’re #1, you stay there unless you lose, no matter how poor your performance is, or how impressively teams below #1 perform.
(In fact, it’s unfair to lump all pollsters into one group. But the outside-the-box thinkers are rare. Credit goes to the six voters who kept Kansas at #1 after their late season loss to Oklahoma State last season, while the rest of the voters sided with Syracuse. And in 2009, 11 voters stuck with UNC at the top despite a semi-final loss in the ACC tournament.)
I have a feeling if we could get 50 hoops experts together each week during the season and poll them on who they thought the best team was, we’d get a better power ranking than any computer could produce. If we could free them from the shackles of modern-day voting tendencies, the wisdom of the crowd would provide some very useful information. As it is, we have to settle for this situation only once a year. The rest of the season we get a ranking of, well, nobody really knows what it is, but it’s not a ranking of the best teams.
*Insofar as tournament performance is indicative of which team is best, which we know is not always the case. But over 21 years, the final poll should have a leg up on the preseason poll.