
    Ratings Explanation

    by Ken Pomeroy on Wednesday, November 29, 2006

    [The ratings methodology was given a tweak before the 2014 season. See this for the details. -kp]
    [Also see this page for an explanation on adjusted efficiency calculations. -kp]

    The first thing you should know about this system is that it is designed to be purely predictive. If you’re looking for a system that rates teams on how “good” their season has been, you’ve come to the wrong place. There are enough systems out there that rank teams based on what is “good” by just about any definition you can think of. So I’d encourage you to google college basketball ratings or even try the opinion polls for something that is more your style.

    The purpose of this system is to show how strong a team would be if it played tonight, independent of injuries or emotional factors. Since nobody can see every team play all (or even most) of their games, this system is designed to give you a snapshot of a team’s current level of play.

    This season, I scrapped the old A-B=C power ratings and went to something that appears a little more complex. It is a little more complex, but it also has the advantage of being based on basketball things. The old system I used wasn’t special for hoops; it could be applied to any sport where a score is kept, be it the NHL, college lacrosse, or grandma’s bridge league. But now we have the technology to do better. Besides, there are plenty of other power ratings of the old style out there, if that’s what you really prefer. I don’t do this to imitate what everyone else does.

    I would describe the philosophy of the system this way: it looks at who a team has beaten and how it has beaten them, and the same for the losses. Yes, it values a 20-point win more than a 5-point win. It likes a team that loses a lot of close games against strong opposition more than one that wins a lot of close games against weak opposition.

    The core of the system is the pythagorean calculation for expected winning percentage. In previous experiments, I found the best exponent for college basketball was between 8 and 9. But for whatever reason, when using adjusted efficiencies, the best exponent is between 11 and 12, probably because previous experiments only included conference games.

    I am using 11.5 as the exponent.

    [Update: Beginning with the 2012 season, I'm using 10.25 for the exponent. More rigorous testing determined this to be the best exponent to produce predictive game probabilities. Previous ratings have been updated to reflect this.]
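    As a sketch, the pythagorean calculation described above works like this. The function name and the example team are my own illustration, using the 10.25 exponent from the update:

```python
def pythagorean(adj_oe, adj_de, exponent=10.25):
    # Expected winning percentage against an average team on a neutral
    # floor, from adjusted offensive and defensive efficiencies
    # (points scored/allowed per 100 possessions).
    return adj_oe ** exponent / (adj_oe ** exponent + adj_de ** exponent)

# Hypothetical team scoring 115 and allowing 95 per 100 possessions:
print(pythagorean(115.0, 95.0))  # roughly 0.876
```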

    How did I determine the best exponent? I applied the log5 formula to every game last season and found the exponent with the best fit for expected winning percentages. (A problem here is that I applied the final ratings retroactively to last season’s results, so it’s a little high for predictive purposes. This will be revisited eventually.) You can get an idea of the chance one team beats another by applying the log5 formula to the two teams’ pythagorean ratings. There is a home-court advantage consideration, also. More on that later.
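    The log5 step is simple enough to show directly. A minimal sketch, with the function name my own:

```python
def log5(p_a, p_b):
    # Chance team A beats team B, where p_a and p_b are the two teams'
    # pythagorean ratings (expected winning pct vs. an average team).
    return p_a * (1 - p_b) / (p_a * (1 - p_b) + p_b * (1 - p_a))

print(log5(0.8, 0.2))  # strong team vs. weak team: about 0.94
```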

    The inputs into the pythagorean equation are the team’s adjusted offensive and defensive efficiencies. Any time you see something “adjusted” on this site, it refers to how a team would perform against average competition at a neutral site. For instance, a team’s offensive efficiency (points scored per 100 possessions) is adjusted for the strength of the opposing defenses played. I compute an adjusted offensive efficiency for each game by multiplying the team’s raw offensive efficiency by the national average efficiency and dividing by the opponent’s adjusted defensive efficiency. The adjusted game efficiencies are then averaged (with more weighting to recent games) to produce the final adjusted offensive efficiency.
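    The per-game adjustment above amounts to a simple scaling, then a weighted average. A sketch under stated assumptions: the linear recency weights and the 104.0 national average are illustrative stand-ins, not the site's actual values:

```python
def adjusted_oe(games, national_avg=104.0):
    # games: list of (raw_oe, opponent_adj_de) in chronological order.
    # Each game's raw efficiency is scaled by how the opponent's defense
    # compares to the national average; games are then averaged with more
    # weight on recent ones (the linear weights here are an assumption).
    total = wsum = 0.0
    for i, (raw_oe, opp_adj_de) in enumerate(games, start=1):
        adj = raw_oe * national_avg / opp_adj_de
        total += i * adj
        wsum += i
    return total / wsum

# Scoring 110 per 100 possessions on a stingy defense (adjusted DE of 95)
# is worth more than 110 against an average one:
print(adjusted_oe([(110.0, 95.0)]))  # about 120.4
```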

    While the pythagorean winning percentage is calibrated to the likelihood of winning, the efficiencies are based purely on scoring per possession with no consideration of winning or losing. This allows us to get both a chance of winning and a predicted final score with the system, and makes the system much more predictive than if we ignored scoring margin. It also has the advantage of giving a rating in offensive and defensive terms, and an SOS in those terms, as well. Want to know which team has faced the toughest defenses? Well, with my system you can.

    Now let’s do this in Q&A form based on e-mail I’ve received.

    How do you cap margin of victory?

    [This is no longer true, exactly. See the link referred to at the beginning of this piece.] This is the most obvious problem with the system - there is no cap on margin of victory. It’s not that I’m particularly comfortable with it, but I’ve looked at quite a few ways to limit the impact of MOV, and I haven’t found one that I like, yet. I’ll find something someday, but until then we have to deal with things like Georgia being ranked 11th and Oklahoma being ranked 17th at this point (12/10/06) in the season. More games will push these teams to their rightful location.

    How do you incorporate home court advantage?

    I add 1.4% to the home team’s OE and visiting team’s DE, and subtract the same amount from the opposite parameters.
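    In code, that adjustment looks something like this (the structure and names are mine; the 1.4% figure is from the text):

```python
HCA = 0.014  # home-court advantage, as stated above

def apply_hca(home_oe, home_de, away_oe, away_de):
    # Boost the home team's offensive efficiency and the visitor's
    # defensive efficiency (points allowed, so higher is worse) by
    # 1.4%, and shrink the opposite parameters by the same fraction.
    return (home_oe * (1 + HCA), home_de * (1 - HCA),
            away_oe * (1 - HCA), away_de * (1 + HCA))
```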

    What do all the columns mean?

    The new ones are Cons (Consistency) and Luck. The easiest one to understand is Luck, which is the difference between a team’s actual winning percentage and its expected winning percentage using the correlated gaussian method. The luck factor has nothing to do with the rating calculation, but a team that is very lucky (positive numbers) will tend to be rated lower by my system than its record would suggest.

    Consistency is basically the standard deviation of a team’s game-by-game scoring margin. Again, it’s not included in the ratings calculation, but it can be an aid in determining which teams are overrated by my system. Highly rated teams that are inconsistent tend to look beatable more often. As of this writing, Georgia is ranked 329th in consistency and Oklahoma is at 334th. They’ve played their best games against poor teams, and their worst against good ones.
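    Consistency as described is just a standard deviation, so a sketch is short (the function name is mine):

```python
from statistics import pstdev

def consistency(margins):
    # margins: per-game scoring differences for one team.
    # Lower values mean the team performs close to its average every
    # night; higher values mean boom-or-bust.
    return pstdev(margins)

print(consistency([0, 20]))  # 10.0
```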

    Ideally, I’d synthesize the consistency and rating into one number, but I haven’t found a way I’m comfortable with. So right now, I’m throwing this system out there with all its warts for everyone to see. The warts tend to decrease as more games are played, but at least I’ve made you aware of them and where they can pop up.

    Strength of Schedule now has three columns. It’s potentially more confusing, but worth it in the end. The way I compute SOS is to average the opponents’ offensive and defensive ratings and apply the pythagorean calculation to them to rank the overall schedules. So those are the three columns you see: Pyth (overall SOS), AdjO (opponents’ average adjusted offensive efficiency), and AdjD (opponents’ average adjusted defensive efficiency). When comparing the offensive performance of players on different teams, it’s quite an advantage to have their average opponents’ defense quantified. There’s also a column for non-conference SOS, which attempts to capture the portion of the schedule under a school’s control. Thus, no postseason or conference games are included in that calculation.
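    Putting the pieces together, the SOS computation described above can be sketched like this. The function name is my own, and I reuse the 10.25 pythagorean exponent from earlier as a default:

```python
def schedule_strength(opponents, exponent=10.25):
    # opponents: list of (adj_oe, adj_de) for each opponent faced.
    # Average the opponents' efficiencies, then run those averages
    # through the pythagorean formula for one overall SOS number.
    avg_oe = sum(o for o, _ in opponents) / len(opponents)
    avg_de = sum(d for _, d in opponents) / len(opponents)
    pyth = avg_oe ** exponent / (avg_oe ** exponent + avg_de ** exponent)
    return pyth, avg_oe, avg_de
```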