You asked for it for years, America, and now you’re getting it. The algorithm behind the Pomeroy Ratings is getting some tweaks to handle runaway scoring margins.

Back before anybody knew about my work, I would do ratings of all kinds of sports. You haven’t lived until you’ve attempted to do ratings for the Western Hockey League. In those days, I had a method to give variable weight to games in my otherwise elementary least squares system. The weight was based on three ingredients – how close the game was expected to be, how close the game actually was, and when the game was played.
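(If you're curious what that skeleton looks like, here's a minimal sketch of a margin-based least squares system with per-game weights. The data layout, function name, and numpy approach are mine for illustration; this is the flavor of the thing, not the actual code.)

```python
import numpy as np

def weighted_ratings(games, weights, n_teams):
    """Weighted least squares ratings: find team values r such that
    r[team] - r[opp] best predicts each game's margin, with less
    informative games downweighted. Illustrative layout only."""
    X = np.zeros((len(games), n_teams))
    y = np.zeros(len(games))
    for i, (team, opp, margin) in enumerate(games):
        X[i, team] = 1.0   # margin is from `team`'s perspective
        X[i, opp] = -1.0
        y[i] = margin
    w = np.sqrt(np.asarray(weights, dtype=float))
    # Scaling each row by the square root of its weight is the
    # standard trick that turns ordinary least squares into weighted
    # least squares. lstsq tolerates the fact that ratings are only
    # identified up to an additive constant.
    r, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
    return r - r.mean()   # center on zero for readability
```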

The result is that games perceived by the system as big upsets get the most weight, while the influence of expected lopsided wins is minimized. For instance, last season’s non-conference games involving Grambling would be largely ignored. Whether a team beat the Tigers by 30 or 60 would make little difference in its rating.
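To make that behavior concrete, here's a toy version of the three-ingredient weight. Every functional form and constant below is an illustrative guess (the real tuned parameters aren't published here), but it reproduces the behavior just described: big upsets earn the most weight, and an anticipated rout earns almost none whether the final margin is 30 or 60.

```python
import math

def game_weight(expected_margin, actual_margin, days_ago,
                half_life=60.0, scale=12.0):
    """Toy three-ingredient weight. Margins are from the favorite's
    perspective: expected_margin >= 0, actual_margin < 0 for an upset.
    Functional forms and constants are guesses, for illustration only."""
    # Ingredient 1: expected closeness. Anticipated blowouts start
    # with almost no weight.
    closeness = math.exp(-expected_margin / scale)
    # Ingredient 2: actual vs. expected. Only falling short of
    # expectations counts as a surprise; padding the margin doesn't.
    surprise = 1.0 - math.exp(-max(0.0, expected_margin - actual_margin) / scale)
    # Ingredient 3: recency. Recent games count more than old ones.
    recency = 0.5 ** (days_ago / half_life)
    # A big surprise overrides the blowout discount, so upsets of
    # heavy favorites get the most weight of all.
    return recency * max(closeness, 2.0 * surprise)

# game_weight(2, 1, 0)    -> ~0.85  (expected nail-biter: full weight)
# game_weight(30, 45, 0)  -> ~0.08  (expected rout, delivered: ignored)
# game_weight(30, 60, 0)  -> ~0.08  (running up the score changes nothing)
# game_weight(30, -5, 0)  -> ~1.89  (a big upset: maximum emphasis)
```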

So I’ve dusted off that algorithm, spent some time tuning the various parameters, and applied it to the efficiency model that produces adjusted offense and adjusted defense. These aren’t changes designed to make everyone feel good about limiting the influence of buy games against last-place SWAC teams. They’re done because they improve the predictive power of the system. In backtesting over the past 11 seasons, the average error in February and March game predictions decreases by about one percent (from 8.33 points to 8.25).
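(For reference, I read "average error" as the mean absolute difference between the predicted margin and the actual one; if it's a different error measure, the bookkeeping changes but the idea doesn't. A sketch with a made-up data shape, plus the arithmetic on the quoted figures:)

```python
def mean_abs_error(predictions):
    """predictions: list of (predicted_margin, actual_margin) pairs."""
    return sum(abs(p - a) for p, a in predictions) / len(predictions)

# Old weighting: 8.33 points of average error on Feb/March games.
# New weighting: 8.25 points.
# (8.33 - 8.25) / 8.33 = 0.0096, i.e. "about one percent".
```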

(I don’t think you can come up with a prediction method that will have an error of less than eight points. And if you can, don’t tell anyone! Because that would be a really good system. That should also tell you a lot about why it’s difficult to anticipate what will happen in a single contest between two teams. It’s also a good illustration of the large role randomness plays in any single game. So even if you know it all, you can’t possibly know it ALL.)

Meaning, if you liked the system before, you’ll still like it. If you thought it was junk, it’s still junk. It remains incapable of accurately predicting that Cal Poly will beat UCLA. Some teams will have a rating that better reflects their ability, and some will have one that reflects it worse. There will be more of the former than the latter, but regardless of what you think of the system, there will still be outliers.

Even though these changes are not designed specifically to make you feel better, maybe they will. For example, Wisconsin would have been the top team for just three days of the 2012 season, as opposed to the four weeks they were rated first under the old system. By the end of the season, though, the differences between the new and old rankings are minimal.

The most consistent exception is the impact on dominant mid-majors. Their movement tends to be more volatile, since more emphasis falls on postseason play, when they finally get to battle teams of comparable strength.

For instance, in 2008, Davidson moves from 20th in the original system to 7th in the new version. In 2007, Southern Illinois goes from 26th to 13th. But Belmont teams of recent seasons take a hit, dropping from 23rd to 34th in 2012 and from 19th to 27th in 2011. The data we have to work with in a single college basketball season is limited to begin with, and the meaningful data involving a dominant team from a lesser conference is even more limited.

The most noticeable change will be to the values of offense and defense. The ranges are narrower as a result of the decreased impact of outlier performances. Accordingly, I’ve had to raise the pythagorean exponent to 11.5 to re-calibrate the predicted win probabilities using the log5 method.
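(For the curious, the win probability pipeline here is the standard one: each team's adjusted efficiencies feed a pythagorean expectation, and two pythagorean winning percentages feed log5. A quick sketch, with made-up efficiency numbers:)

```python
def pythag(adj_o, adj_d, exponent=11.5):
    """Expected winning percentage against an average team, from
    adjusted offensive/defensive efficiency (points per 100 possessions)."""
    return adj_o ** exponent / (adj_o ** exponent + adj_d ** exponent)

def log5(p_a, p_b):
    """Bill James's log5: probability A beats B on a neutral floor,
    given each team's pythagorean winning percentage."""
    return (p_a - p_a * p_b) / (p_a + p_b - 2 * p_a * p_b)

# e.g. a 118/95 team against a 110/99 team:
# pythag(118, 95) ≈ 0.924, pythag(110, 99) ≈ 0.771
# log5(0.924, 0.771) ≈ 0.783
```

Narrower ranges of adjusted offense and defense push the ratio inside pythag closer to 1, flattening everyone's winning percentage, so a bigger exponent is needed to stretch the probabilities back out. That's the re-calibration.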

After some debate with myself, I decided to apply the ratings to past seasons, so I’ve gone ahead and rewritten history. (Congrats, Pitt, on your new 2003 kenpom title.) Although when you think about it, history is decided on the floor anyway. But for the purposes of doing preseason projections it’s necessary to use the new numbers, and since they figure to be better (if only marginally), it makes sense to post the output from the updated algorithm. However, the predictions listed in the FanMatch archives for seasons prior to 2014, as well as the rankings evolution shown on team schedules for those seasons, continue to reflect the original formula and likely always will.