Tuesday, April 7, 2009

The Predictive Power of AP Poll Rankings

In my previous post, I showed that based on the AP Poll rankings for college football and basketball, upsets are just as likely at the beginning of the season as at the end. This means that the polls don't increase in accuracy over the season.
Here I show a few other graphs on topics that college sports fans might find interesting. These topics are:
1) The overall average winning percentage for each rank in the AP Poll
2) How often the higher-ranked team wins for a given difference in ranking
3) Whether the top ten rankings are more accurate than lower rankings
4) Whether the AP Poll rankings have changed in accuracy over the last fifty years or so
If the graphs are too small to see, click on them to view them at full size.
Winning Percentages by Rank
This graph shows how successful ranked teams are against all opponents (including unranked and higher-ranked opponents). The numbers are calculated over the same years of data as in the previous post (roughly 50 seasons for basketball and 70 for football).
The team ranked number one wins 83% of its games in football and 87% of its games in basketball. This winning percentage falls steadily to 63% and 66%, respectively, for teams ranked number 25.
It's interesting how parallel the curves are for the two sports. This graph shows that the rankings are strong predictors of team success in subsequent games. Given how much attention is paid to rankings, it would be surprising if that were not true.
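For readers who want to reproduce this kind of tabulation, the computation is roughly the following (a sketch in Python; the game-record format shown here is hypothetical, since the underlying data set isn't reproduced in this post):

```python
from collections import defaultdict

def win_pct_by_rank(games):
    """Compute each AP rank's overall winning percentage.

    `games` is an iterable of (rank, won) pairs: one entry per game
    played by a ranked team, with won = True if that team won.
    (Hypothetical input format -- stands in for the real data set.)
    """
    wins = defaultdict(int)
    total = defaultdict(int)
    for rank, won in games:
        total[rank] += 1
        wins[rank] += int(won)
    return {r: wins[r] / total[r] for r in sorted(total)}

# Toy example: the number-one team goes 5-1, the number-25 team goes 3-3
games = [(1, True)] * 5 + [(1, False)] + [(25, True)] * 3 + [(25, False)] * 3
print(win_pct_by_rank(games))  # rank 1: 5/6 ~ 0.83, rank 25: 3/6 = 0.50
```

The real calculation is the same idea applied to every game involving a ranked team over the full sample.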

Winning Percentage by Difference in Ranking
This graph (by request) shows how often the higher-ranked team wins as a function of the difference in rank between the opponents. For example, when teams ranked one spot apart play each other (number 1 versus number 2, or number 16 versus number 17), the higher-ranked team wins 49% of the time in basketball and 51% of the time in football. When the difference in rankings is ten (for example, number 10 versus number 20, or number 1 versus number 11), the higher-ranked team wins 61% of the time in basketball and 65% of the time in football. These results seem sensible: a rank difference of one is almost meaningless, but the predictive power of the difference in ranking rises fairly quickly after that.
The graph is noticeably more jagged at larger rank differences. This is because there are far fewer games between teams ranked, say, twenty spots apart than between teams ranked one spot apart, and fewer data points to average over means more random variation.
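Grouping games by rank difference works the same way as grouping by rank (again a sketch with a hypothetical input format):

```python
from collections import defaultdict

def win_pct_by_rank_diff(games):
    """Win rate of the higher-ranked team, grouped by rank difference.

    `games` is an iterable of (winner_rank, loser_rank) pairs for games
    between two AP-ranked teams (hypothetical format). A lower rank
    number means a higher-ranked team.
    """
    higher_wins = defaultdict(int)
    total = defaultdict(int)
    for winner_rank, loser_rank in games:
        diff = abs(winner_rank - loser_rank)
        if diff == 0:
            continue  # two teams can't share a rank; guard anyway
        total[diff] += 1
        if winner_rank < loser_rank:  # the higher-ranked team won
            higher_wins[diff] += 1
    return {d: higher_wins[d] / total[d] for d in sorted(total)}

# Toy example: four games at a rank difference of 10; higher seed wins 3 of 4
print(win_pct_by_rank_diff([(1, 11), (11, 1), (1, 11), (5, 15)]))
```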

Are the Top Ten Rankings More Accurate?
This graph repeats the analysis in the previous graph, but segregates the data into games between teams ranked in the top ten and games between teams ranked 11-25. It shows that the top ten rankings are more accurate than the next fifteen: for a given difference in ranking, the higher-ranked team is more likely to win if both teams are in the top ten than if both are in the next fifteen. We can see this because the line for the top ten is consistently above the line for 11-25, for each sport.
This result isn't too surprising, although there are several reasons why it might be true. It could be that differences in talent are actually larger among the very best teams. Or perhaps it's just that the people who do the rankings have limited time and effort, and pay more attention to ranking the very best teams accurately.
One other point about this graph is that the difference in rank appears to be quite a bad predictor for teams ranked 11-25 when the difference in rank is greater than ten. There are very few actual games underlying those data points, so we probably shouldn't draw strong conclusions from them.
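The "very few actual games" caveat can be made precise with a binomial standard error on each plotted win fraction (this is just the standard formula, not a computation from the post):

```python
import math

def binomial_se(wins, n):
    """Standard error of an observed win fraction under a binomial model."""
    p = wins / n
    return math.sqrt(p * (1 - p) / n)

# With hundreds of games behind a point, the error bar is small;
# with only ten games, it swamps the differences we're looking at.
print(binomial_se(240, 400))  # ~0.024
print(binomial_se(6, 10))     # ~0.155
```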
Aside for the extra curious: in a related regression analysis, I find that for a given rank difference, both teams being ranked one spot lower increases the chance of an upset by 0.6%, with a standard error of 0.2%.
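A toy version of that kind of regression: holding the rank difference fixed, regress the upset indicator (0 or 1) on the teams' position in the poll, and read off the slope. This is a minimal linear-probability-model sketch, not the actual specification used for the numbers above:

```python
import math

def lpm_slope(position, upset):
    """Simple OLS of an upset indicator on poll position.

    `position` is the higher-ranked team's rank in each game (games
    assumed to share the same rank difference); `upset` is 1 if the
    lower-ranked team won, else 0. Returns (slope, standard_error):
    the slope estimates how much moving both teams down the poll
    changes the chance of an upset.
    """
    n = len(position)
    mx = sum(position) / n
    my = sum(upset) / n
    sxx = sum((x - mx) ** 2 for x in position)
    sxy = sum((x - mx) * (y - my) for x, y in zip(position, upset))
    slope = sxy / sxx
    fitted = [my + slope * (x - mx) for x in position]
    rss = sum((y - f) ** 2 for y, f in zip(upset, fitted))
    se = math.sqrt(rss / (n - 2) / sxx)  # classical OLS standard error
    return slope, se
```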

Are Rankings More Accurate Now than in the Past?
These graphs (similar to the graphs in the last post) show that the rankings aren't any more accurate now than they were, for example, fifty years ago. The graphs show the fraction of games that ended in upsets for each season back to 1956 in basketball and 1936 in football. If the AP Poll rankings became more accurate over time, then we would expect to see the graph slope downward over time. The graphs are actually quite flat.
This is surprising. Given a half-century of sports analysis, and significantly better information technology in the form of cable TV and computers, I expected the sports writers doing the predicting to do better in recent years than in earlier years.
Note that this graph only includes games between ranked teams, but that the conclusion doesn't change if we include games against unranked opponents. Also, as in the previous post, I have shown separate graphs in which I only include games in which teams in the top ten play each other. These graphs have more random variation, but the basic conclusion is the same.
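The per-season upset fraction behind these graphs is a straightforward tabulation (sketch only; the season/game record format is hypothetical):

```python
from collections import defaultdict

def upset_rate_by_season(games):
    """Fraction of games between ranked teams ending in an upset, per season.

    `games` is an iterable of (season, winner_rank, loser_rank) tuples
    (hypothetical format). An upset is the higher rank number winning.
    """
    upsets = defaultdict(int)
    total = defaultdict(int)
    for season, winner_rank, loser_rank in games:
        total[season] += 1
        if winner_rank > loser_rank:
            upsets[season] += 1
    return {s: upsets[s] / total[s] for s in sorted(total)}

# Toy example: one upset in two 1956 games, none in the lone 2008 game
print(upset_rate_by_season([(1956, 1, 2), (1956, 10, 3), (2008, 5, 20)]))
```

Plotting these fractions season by season and checking for a downward slope is the whole test of whether the polls have gotten better.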
More details about the data, including my sources, are available in the previous post.

3 comments:

  1. Interesting stuff. Thanks for the plot.

    Some error bars (binomial) might bolster your conclusion in part 3?

    If the trend down to below 50% for large rank difference for teams ranked 11-25 is real, it's hard to think of a model that would predict that...

    ReplyDelete
  2. Nice stuff, Mike. Very much appreciated.

    It would be interesting to apply the same analysis to one of the statistical models, e.g. Sagarin's model. In principle, some of the problems with the polls lie with voters who are biased in various ways or who don't pay attention to teams outside their part of the country. I'd bet that the Sagarin "computer rankings" are better than the human polls. Of course, that's why they play the actual games and why it's worth looking at the historical data.

    Thanks again.

    ReplyDelete
I feel like there's a problem here in using ranking as a measure of the collective ability of analysts to predict the outcome of a game between two teams. I don't follow college, but an example from pro basketball is the Golden State Warriors (8 seed) upset of the Dallas Mavericks (1 seed) in the 2006-2007 NBA playoffs. The Mavs had a much better record than the Warriors, and given a random choice of opponent, you'd have chosen the Mavs to win much more often than the Warriors. Thus in any reasonable system, the Mavs should be ranked much higher even though in this case, the analysts knew that the Mavs had problems with the Warriors and many predicted an upset. In this case, the rankings are accurate, but inappropriate for assessing the analysts' predictive abilities.

    To me, it seems a more direct, though maybe unattainable measure of the change in analysts' predictive abilities as the season progresses would be to look at their actual predictions. Nevertheless, it's fun work. And congrats on the Times article. Also, at some point I need to get the whole story of you getting hit in the eye. (This is Andy Stein by the way)

    ReplyDelete