There have been two common concerns regarding the pre-season ratings that need to be addressed. The first deals with what happens to them now that games are being played. Currently, the pre-season ratings hold the weight of a little less than five games of play. This figure was selected somewhat arbitrarily, but in doing some testing I felt like it provided suitable resistance to the results of the first few games of the season. I saw enough warts in the system to know that teams need some freedom to move around in the first week or two, but I still trust the system enough to value DePaul’s initial rating more than its 33-point win over Chicago State.
The pre-season ratings will be degraded as real data accumulates. Starting next Monday, the influence of the initial rating will be reduced gradually each day until it reaches zero on the morning of January 23. That seems like a long way off, but the date was chosen for a couple of reasons.
If you’ve followed my ratings, you understand that the early January version can still have quite a few outliers. Another problem at that point is that a handful of teams can carry unrealistic values for offense and defense. Including a bit of pre-season rating will mitigate those issues. Another benefit of stretching the influence out that far is that, from day to day, any change in the ratings will be driven much more by the previous day’s games than by removing a small amount of pre-season influence. And it’s not like the ratings will look any different on January 22 than on January 23. By the time mid-January arrives, the influence of the pre-season ratings will be tiny compared to the 15-20 actual games each team will have played.
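To make the mechanics above concrete, here is a minimal sketch of one way the blending could work. The original post doesn’t publish a formula, so the linear day-by-day decay, the function names, and the dates used in the example are all assumptions; the only figures taken from the text are the roughly five-game starting weight and the decay ending on the morning of January 23.

```python
from datetime import date

def preseason_weight(today: date, decay_start: date, zero_date: date,
                     initial_games: float = 5.0) -> float:
    """Weight of the pre-season rating, in game-equivalents, on `today`.

    Assumes a linear decay; the post only says the influence is
    "dropped gradually each day" until it reaches zero.
    """
    if today <= decay_start:
        return initial_games
    if today >= zero_date:
        return 0.0
    total = (zero_date - decay_start).days
    elapsed = (today - decay_start).days
    return initial_games * (1 - elapsed / total)

def blended_rating(preseason: float, observed: float,
                   games_played: int, weight: float) -> float:
    """Weighted average of the pre-season rating and the
    schedule-adjusted rating from games actually played."""
    denom = games_played + weight
    if denom == 0:
        return preseason
    return (weight * preseason + games_played * observed) / denom

# Hypothetical dates for illustration only (the real window runs
# from "next Monday" to January 23; no year is given in the post).
start, end = date(2025, 1, 1), date(2025, 1, 11)
w = preseason_weight(date(2025, 1, 6), start, end)   # halfway: 2.5 games
rating = blended_rating(100.0, 110.0, games_played=5, weight=w)
```

With 15-20 games played and the weight near zero, `blended_rating` is dominated by the observed data, which matches the behavior described above.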
The other issue that has been raised is what to do about “raw data”. This is not such a big concern to me because the adjusted data, even as a blend of pre-season prediction and actual schedule-adjusted game values, tells the truest story of a team’s ability. That doesn’t mean there aren’t outliers in the rankings, of course. (But as my critics will tell you, there are outliers in the rankings in April as well.) Most teams are reasonably close to where they should be, but some teams aren’t. However, the adjusted, pre-season-influenced data is still much better than drawing conclusions purely from the games played so far.
That said, there’s surely some value in being able to peek under the hood and compare raw values to adjusted values to get a sense of the effect of the adjustments. I thought I would jam the values into the four factors page, but there’s too much info there already. I’m still looking for a home for them.