{"id":354,"date":"2012-12-26T20:41:33","date_gmt":"2012-12-27T02:41:33","guid":{"rendered":"http:\/\/67.227.157.91\/~kenpom\/wp_blog\/preseason-ratings-why-weight\/"},"modified":"2012-12-26T20:41:33","modified_gmt":"2012-12-27T02:41:33","slug":"preseason-ratings-why-weight","status":"publish","type":"post","link":"https:\/\/kenpom.com\/blog\/preseason-ratings-why-weight\/","title":{"rendered":"Preseason ratings: Why weight?"},"content":{"rendered":"<p>Now that nearly every team has played at least 10 games, one might think we have enough data to form an accurate assessment of any team based on what they have done on the court this season. So why do preseason ratings still influence the current ratings? Because you actually <em>don\u2019t<\/em> have enough data to work with. The opinion one had of a team before any games were played still has some predictive value. <\/p>\n<p>To illustrate this, I looked at the teams that had deviated the most from their preseason rating at this time last season. For instance, shown below are the ten teams that exceeded their preseason rating the most heading into the 2011 Christmas break, listed with their preseason rank and their ranking on December 24.<\/p>\n<pre>              Pre  12\/24\nWyoming       273    88\nLa Salle      217    69\nMiddle Tenn.  178    58\nMercer        247   116\nW. Illinois   332   216\nWagner        206    80\nIndiana        50     6\nVirginia       88    24\nWisconsin      10     1\nSt. Louis      62    15\n\n<\/pre>\n<p>(To compare ratings differences, I\u2019m using the Z-score of the Pythagorean winning percentage. If this means nothing to you, basically I\u2019m accounting for the fact that a given difference in Pyth values at the extremes of the ratings is equivalent to a larger difference in the middle of the ratings. 
Put another way, Indiana\u2019s move in the ratings represented the same improvement as Wagner\u2019s even though the Hoosiers moved up fewer spots.)<\/p>\n<p>If the preseason ratings are weighted properly, then there shouldn\u2019t be a pattern to how these teams will trend from December 24 through the rest of the season. Some teams will see their numbers improve and some will see their ranking get worse. I\u2019ve expanded the outlier list to 20 and added two columns \u2013 each team\u2019s final ranking and the difference in that ranking from the December 24 edition. <\/p>\n<pre>             Pre  12\/24 Final Diff\nWyoming      273    88    98   -10\nLa Salle     217    69    64   + 5\nMiddle Tenn  178    58    60   - 2\nMercer       247   116    91   +25\nW Illinois   332   216   186   +30\nWagner       206    80   112   -32\nIndiana       50     6    11   - 5\nVirginia      88    24    33   - 9\nWisconsin     10     1     5   - 4\nSt. Louis     62    15    14   + 1\nGeorgia St.  182    76    71   + 5\nToledo       337   267   208   +59\nCal Poly     192   103   165   -62\nMurray St.   110    43    45   - 2\nLamar        214   121   113   + 8\nOhio         111    48    62   -14\nDenver       159    75    80   - 5\nIllinois St. 181    98    81   +17\nGeorgetown    48    14    13   + 1\nOregon St.   124    61    85   -24\n\n<\/pre>\n<p>I suppose if you had some interest in Toledo you might have had a legitimate beef with the preseason influence on December 24. But the other teams didn\u2019t move all that much, except for Cal Poly, which moved down significantly. If you average the ranking differences (I realize this isn\u2019t the most scientific way to do this analysis), you get -0.9 per team. Pretty much unbiased.<\/p>\n<p>For symmetry, let\u2019s take a look at the teams that were outliers in the other direction. 
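As an aside, the averaging step above amounts to nothing more than the following minimal Python sketch, using the December 24 and final ranks copied from the 20-team overachiever table (the variable names are mine):

```python
# Dec. 24 rank and final rank for the 20 early-season overachievers
# listed above, under the preseason-weighted system.
dec24 = [88, 69, 58, 116, 216, 80, 6, 24, 1, 15,
         76, 267, 103, 43, 121, 48, 75, 98, 14, 61]
final = [98, 64, 60, 91, 186, 112, 11, 33, 5, 14,
         71, 208, 165, 45, 113, 62, 80, 81, 13, 85]

# Positive diff = the team rose in the rankings after Dec. 24;
# negative = it slipped back.
diffs = [d - f for d, f in zip(dec24, final)]
avg_change = sum(diffs) / len(diffs)
print(avg_change)  # -0.9 spots per team: essentially no systematic drift
```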
These programs underperformed their preseason ratings the most through December 24.<\/p>\n<pre>             Pre  12\/24 Final Diff\nUtah         140   316   303   +13\nWm &amp; Mary    160   322   285   +37\nMt St Mary's 210   314   294   +20\nMaryland      47   166   134   +32\nUC Davis     251   330   326   + 4\nNicholls St. 276   338   332   + 6\nGrambling    343   345   345     0\nTowson       309   343   338   + 5\nMonmouth     262   328   277   +51\nUAB           72   175   133   +42\nRider        147   248   199   +49\nN Illinois   315   340   330   +10\nN Arizona    229   301   341   -40\nPortland     130   229   278   -49\nJacksonville 151   239   228   +11\nBinghamton   324   342   343   - 1\nArizona St.   70   158   223   -65\nKennesaw St. 272   323   313   +10\nUC Riverside 224   291   284   + 7\nRhode Island 180   258   202   +56\nVMI          200   268   254   +14\n\n<\/pre>\n<p>There\u2019s more of a trend here. On average, each team\u2019s ranking improved by 10 spots between Christmas and the final ratings. (The average for the top 10 was 21 spots, with every team but Grambling improving. And Grambling\u2019s numbers did improve, but they were so far in last place they couldn\u2019t catch #344.) <\/p>\n<p>For comparison, let\u2019s look at a world where preseason ratings aren\u2019t used. They\u2019re created for fun and discarded once games are played. The next set of tables looks at the same groups of teams, but the 12\/24 column shows what each team\u2019s ranking on 12\/24 would have been with no preseason influence. First, the early-season improvers.<\/p>\n
<pre>             Pre  12\/24 Final Diff\nWyoming      273    43    98   -55\nLa Salle     217    62    64   - 2\nMiddle Tenn  178    44    60   -16\nMercer       247    82    91   - 9\nW Illinois   332   122   186   -64\nWagner       206    58   112   -54\nIndiana       50     6    11   - 5\nVirginia      88    15    33   -18\nWisconsin     10     1     5   - 4\nSt. Louis     62    13    14   - 1\nGeorgia St.  182    73    71   + 2\nToledo       337   205   208   - 3\nCal Poly     192    72   165   -93\nMurray St.   110    30    45   -15\nLamar        214    88   113   -25\nOhio         111    33    62   -29\nDenver       159    50    80   -30\nIllinois St. 181    75    81   - 6\nGeorgetown    48    11    13   - 2\nOregon St.   124    46    85   -39\n\n<\/pre>\n<p>Whereas the average ranking decline under the preseason-weighted system was about one spot, this group drops by an average of 23 spots. Clearly, the lack of preseason influence would cause a bias. The opposite effect is observed with the decliners\u2026<\/p>\n<pre>             Pre  12\/24 Final Diff\nUtah         140   330   303   +27\nWm &amp; Mary    160   340   285   +55\nMt St Mary's 210   331   294   +37\nMaryland      47   236   134  +102\nUC Davis     251   335   326   + 9\nNicholls St. 276   338   332   + 6\nGrambling    343   345   345     0\nTowson       309   343   338   + 5\nMonmouth     262   336   277   +59\nUAB           72   201   133   +68\nRider        147   276   199   +77\nN Illinois   315   341   330   +11\nN Arizona    229   321   341   -20\nPortland     130   252   278   -26\nJacksonville 151   261   228   +33\nBinghamton   324   344   343   + 1\nArizona St.   70   188   223   -35\nKennesaw St. 272   324   313   +11\nUC Riverside 224   313   284   +29\nRhode Island 180   279   202   +77\nVMI          200   292   254   +38\n\n<\/pre>\n<p>The average improvement under the preseason-weighting scheme was 10 spots, but in a no-preseason scheme it\u2019s 27 spots. Without preseason influence at this time of year, you can be nearly certain that a team that has overachieved relative to the initial ratings is overrated. Likewise, a team that has dramatically underachieved would be almost certain to see its rating improve. 
That is to say, the ratings would be biased without preseason influence.<\/p>\n<p>And this is because a dozen games are not enough to get an accurate picture of a lot of teams, especially when most of those games involve large amounts of garbage time. That\u2019s not to say there isn\u2019t a lot of value in the games that have been played. The fact that the preseason ratings are only given 2-3 games\u2019 worth of weight at this point is an indication of that. Teams that have deviated substantially from their preseason ratings are almost surely not going to revert to that preseason prediction. But what\u2019s nearly as certain is that a team\u2019s true level of play is closer to its preseason prediction than its performance to date suggests.<\/p>\n<p>If you\u2019ve made it this far, you\u2019ve earned some bonus visuals. So let\u2019s take a look at how the ratings changed last season in the entire D-I population, comparing change from the beginning of the season to Christmas and change from Christmas to the end of the season (using Z-score). <\/p>\n<p>The plot on the top is without preseason ratings and the plot on the bottom is under the existing system. Notice that without preseason ratings, the change between the beginning of the season and Christmas is correlated with the change between Christmas and the end of the season. With preseason ratings, by contrast, the two changes are almost uncorrelated, as they should be in an unbiased system. <\/p>\n<p><img src=\"http:\/\/kenpom.com\/assets\/nopre.png\" width=\"720\" \/><br \/>\n<img src=\"http:\/\/kenpom.com\/assets\/pre.png\" width=\"720\" \/><\/p>\n<p>Another conclusion that can be drawn from these plots is that the system would be more volatile without the influence of preseason ratings. Changes after December 24 are greater in the plot on the top than the one on the bottom. This raises the question: How long should preseason influence last? 
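To make the weighting concrete before answering: here is a minimal Python sketch of a blend in which the preseason rating counts as a fixed number of games, per the 2-3 figure mentioned above. The function name and the 2.5-game constant are illustrative assumptions, not the actual implementation.

```python
def blended_rating(preseason, observed, games_played, preseason_games=2.5):
    # Hypothetical blend: the preseason rating is treated as a fixed number
    # of extra games, so its weight shrinks as real games accumulate.
    w = preseason_games / (preseason_games + games_played)
    return w * preseason + (1 - w) * observed

# After 10 games with a 2.5-game prior, the preseason rating still
# carries 2.5 / 12.5 = 20% of the blend.
print(round(blended_rating(0.500, 0.900, 10), 4))  # 0.82
```

Under a scheme like this the preseason influence never fully expires; it simply fades. Having it expire in late January corresponds to forcing the prior weight to zero after some date.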
Based on <a href=\"http:\/\/fivethirtyeight.blogs.nytimes.com\/2012\/03\/13\/fivethirtyeight-picks-the-n-c-a-a-bracket\/\">Nate Silver\u2019s findings<\/a>, there\u2019s strong evidence that predictions would improve by carrying one or two games\u2019 worth of preseason expectation through the end of the season instead of having it expire in late January. The plot on the bottom suggests there\u2019s still enough rebound that the Christmastime ratings should include more preseason juice. But it appears the mix is close enough to being right \u2013 certainly much closer than not including preseason ratings at all \u2013 not to lose any sleep over.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Now that nearly every team has played at least 10 games, one might think we have enough data to form an accurate assessment of any team based on what they have done on the court this season. So why do preseason ratings still influence the current ratings? Because you actually don\u2019t have 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/kenpom.com\/blog\/wp-json\/wp\/v2\/posts\/354"}],"collection":[{"href":"https:\/\/kenpom.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/kenpom.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/kenpom.com\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/kenpom.com\/blog\/wp-json\/wp\/v2\/comments?post=354"}],"version-history":[{"count":0,"href":"https:\/\/kenpom.com\/blog\/wp-json\/wp\/v2\/posts\/354\/revisions"}],"wp:attachment":[{"href":"https:\/\/kenpom.com\/blog\/wp-json\/wp\/v2\/media?parent=354"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/kenpom.com\/blog\/wp-json\/wp\/v2\/categories?post=354"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/kenpom.com\/blog\/wp-json\/wp\/v2\/tags?post=354"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}