
    Pomeroy Ratings version 2.0

    by Ken Pomeroy on Sunday, October 20, 2013


    You asked for it for years, America, and now you’re getting it. The algorithm behind the Pomeroy Ratings is getting some tweaks to handle runaway scoring margins.

    Back before anybody knew about my work, I would do ratings of all kinds of sports. You haven’t lived until you’ve attempted to do ratings for the Western Hockey League. In those days, I had a method to give variable weight to games in my otherwise elementary least squares system. The weight was based on three ingredients: how close the game was expected to be, how close the game actually was, and when the game was played.

    The result is that games perceived by the system as big upsets get the most weight, while the influence of expected lopsided wins is minimized. For instance, last season’s non-conference games involving Grambling would be largely ignored. Whether a team beat the Tigers by 30 or 60 would make little difference in its rating.
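    The exact weight functions from that old system aren’t spelled out here, but the general mechanics are easy to sketch. Below is a minimal, hypothetical weighted least squares rating in Python: each game contributes one row to the design matrix, and its weight is the product of three illustrative factors standing in for expected closeness, actual closeness, and recency. The functional forms and constants (the cap, the decay rate, the iteration count) are my own placeholders, not the ones actually used on the site.

```python
import numpy as np

def rate_teams(games, n_teams, cap=20.0, decay=0.98, iterations=25):
    """Toy weighted least squares rating system.

    games: list of (home_idx, away_idx, margin, weeks_ago) tuples, where
    margin is home score minus away score. The three weight ingredients
    below are illustrative stand-ins, not the site's actual formulas.
    """
    ratings = np.zeros(n_teams)
    for _ in range(iterations):  # iterate so "expected margin" uses current ratings
        rows, margins, weights = [], [], []
        for home, away, margin, weeks_ago in games:
            expected = ratings[home] - ratings[away]

            # Ingredient 1: expected closeness -- damp games expected to be blowouts.
            closeness = 1.0 / (1.0 + max(abs(expected) - cap, 0.0) / cap)
            # Ingredient 2: actual result vs. expectation -- boost big surprises.
            surprise = 1.0 + min(abs(margin - expected) / cap, 1.0)
            # Ingredient 3: recency -- older games count less.
            recency = decay ** weeks_ago

            row = np.zeros(n_teams)
            row[home], row[away] = 1.0, -1.0
            rows.append(row)
            margins.append(margin)
            weights.append(closeness * surprise * recency)

        X, y = np.array(rows), np.array(margins, dtype=float)
        w = np.sqrt(np.array(weights))
        # Weighted least squares: minimize sum of w_i * (margin_i - (r_home - r_away))^2
        ratings, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
        ratings -= ratings.mean()  # ratings only matter relative to each other
    return ratings
```

    The upshot matches the Grambling example above: once a game’s expected margin dwarfs the cap, winning by 30 or by 60 barely moves the solution.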

    So I’ve dusted off that algorithm, spent some time tuning the various parameters, and applied it to the efficiency model to improve the predictive power of adjusted offense and adjusted defense. These aren’t changes to make everyone feel good about limiting the influence of buy-games against last-place SWAC teams. They’re done because they improve the predictive power of the system. In backtesting over the past 11 seasons, the average error in February and March game predictions under this system decreases by about one percent (from 8.33 points to 8.25).
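    For reference, the “average error” in that backtest is just the mean absolute miss between predicted and actual margins. Something along these lines, assuming a list of (predicted, actual) margin pairs from whichever model is being tested:

```python
def mean_absolute_error(predictions):
    """predictions: iterable of (predicted_margin, actual_margin) pairs.

    An average error around 8.3 means the typical game finishes about
    8.3 points away from the predicted margin.
    """
    misses = [abs(predicted - actual) for predicted, actual in predictions]
    return sum(misses) / len(misses)

# Hypothetical example: three games, average miss of about 5.8 points.
print(mean_absolute_error([(5.0, 12), (-3.5, -1), (10.0, 2)]))  # ~5.83
```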

    (I don’t think you can come up with a prediction method that will have an error of less than eight points. And if you can, don’t tell anyone! That would be a really good system. It should also tell you a lot about why it’s difficult to anticipate what will happen in a single contest between two teams. It’s also a good illustration of the large role randomness plays in any single game. So even if you know it all, you can’t possibly know it ALL.)

    Meaning, if you liked the system before, you’ll still like it. If you thought it was junk, it’s still junk. It remains incapable of accurately predicting that Cal Poly will beat UCLA. Some teams will have a rating that better reflects their ability, and some will have a rating that reflects it less well. There will be more of the former than the latter, but regardless of what you think of the system, there will still be outliers.

    Even though these changes are not designed specifically to make you feel better, maybe they will. For example, Wisconsin would have been the top team for just three days in the 2012 season as opposed to the four weeks they were rated first under the old system. By the end of the season, though, the differences between the new and old rankings are minimal.

    The most consistent exception is the impact on dominant mid-majors. Their movement tends to be more volatile since more emphasis is placed on postseason play when they finally get to battle teams of comparable strength.

    For instance, in 2008, Davidson moves from 20th in the original system to 7th in the new version. In 2007, Southern Illinois goes from 26th to 13th. But Belmont teams of recent seasons take a hit, dropping from 23rd to 34th in 2012 and 19th to 27th in 2011. The data we have to work with in a single college basketball season is limited to begin with and the meaningful data involving a dominant team from a lesser conference is even more limited.

    The most noticeable change will be to the values of offense and defense. The ranges are narrower as a result of the decreased impact of outlier performances. Accordingly, I’ve had to raise the Pythagorean exponent to 11.5 to re-calibrate the predicted win probabilities using the log5 method.
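    For those curious about the mechanics, the two pieces fit together roughly like this (setting aside home-court and any other game-specific adjustments). The efficiency figures in the example are invented for illustration; only the exponent comes from the paragraph above.

```python
EXPONENT = 11.5  # the re-calibrated Pythagorean exponent

def pythag(adj_o, adj_d, exponent=EXPONENT):
    """Pythagorean expectation: a team's expected winning percentage against
    an average opponent, from adjusted offensive and defensive efficiencies
    (points scored/allowed per 100 possessions)."""
    return adj_o ** exponent / (adj_o ** exponent + adj_d ** exponent)

def log5(p_a, p_b):
    """log5: probability that team A beats team B, given each team's
    Pythagorean expectation against an average opponent."""
    return (p_a - p_a * p_b) / (p_a + p_b - 2 * p_a * p_b)

# Illustrative (made-up) efficiencies: a strong team against a middling one.
p_strong = pythag(118.0, 92.0)    # ~0.946
p_middle = pythag(104.0, 101.0)   # ~0.583
print(round(log5(p_strong, p_middle), 3))  # ~0.926
```

    (Intuitively, narrower efficiency ranges compress the ratio inside pythag, so a larger exponent is needed to keep the predicted win probabilities spread out correctly.)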

    After some debate with myself, I decided to apply the new ratings to past seasons, so I’ve gone ahead and rewritten history. (Congrats, Pitt, on your new 2003 kenpom title.) Although, when you think about it, history is decided on the floor. But for the purposes of doing preseason projections, it’s necessary to use the new numbers, and since they figure to be better (if only marginally), it makes sense to post the output from the updated algorithm. However, predictions listed in the FanMatch archives for seasons prior to 2014, as well as the rankings evolution shown on team schedules for those seasons, continue to reflect the original formula and likely always will.