[The ratings methodology was given a tweak before the 2014 season. See this for the details. -kp]
[Also see this page for an explanation on adjusted efficiency calculations. -kp]

The first thing you should know about this system is that it is designed to be purely predictive. If you’re looking for a system that rates teams on how “good” their season has been, you’ve come to the wrong place. There are enough systems out there that rank teams based on what is “good” by just about any definition you can think of. So I’d encourage you to google college basketball ratings or even try the opinion polls for something that is more your style.

The purpose of this system is to show how strong a team would be if it played tonight, independent of injuries or emotional factors. Since nobody can see every team play all (or even most) of their games, this system is designed to give you a snapshot of a team’s current level of play.

This season, I scrapped the old A-B=C power ratings and went to something that appears a little more complex. It is a little more complex, but it also has the advantage of being based on basketball things. The old system I used wasn’t special for hoops; it could be applied to any sport where a score is kept, be it the NHL, college lacrosse, or grandma’s bridge league. But now we have the technology to do better. Besides, there are plenty of other power ratings of the old style out there, if that’s what you really prefer. I don’t really do this to imitate what everyone else does.

I would describe the philosophy of the system as this: it looks at who a team has beaten and how they have beaten them, and the same goes for the losses. Yes, it values a 20-point win more than a 5-point win. It likes a team that loses a lot of close games against strong opposition more than one that wins a lot of close games against weak opposition.

The core of the system is the pythagorean calculation for expected winning percentage. In previous experiments, I found the best exponent for college basketball was between 8 and 9. But when using adjusted efficiencies, the best exponent is between 11 and 12, probably because those earlier experiments only included conference games.

I am using 11.5 as the exponent.

[Update: Beginning with the 2012 season, I’m using 10.25 for the exponent. More rigorous testing determined this to be the best exponent to produce predictive game probabilities. Previous ratings have been updated to reflect this.]
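To make that concrete, here’s a minimal sketch of the calculation in the standard pythagorean form, taking the adjusted efficiencies (described below) as inputs. The function name is just for illustration.

```python
def pythag(adj_oe, adj_de, exponent=10.25):
    """Pythagorean expected winning percentage.

    adj_oe, adj_de: adjusted offensive/defensive efficiency
    (points scored/allowed per 100 possessions). The exponent is
    10.25 per the 2012 update (it was 11.5 originally).
    """
    return adj_oe ** exponent / (adj_oe ** exponent + adj_de ** exponent)
```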

How did I determine the best exponent? I applied the log5 formula to every game last season and found the exponent with the best fit for expected winning percentages. (A problem here is that I applied the final ratings retroactively to last season’s results, so the fit is a little optimistic for predictive purposes. This will be revisited eventually.) You can get an idea of the chance one team beats another by applying the log5 formula to the two teams’ pythagorean ratings. There is a home court advantage consideration as well; more on that later.
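Here’s the log5 step as a sketch, treating each team’s pythagorean rating as its “true” winning percentage. This is the standard log5 formulation; home court is applied to the efficiencies separately, as described later.

```python
def log5(pyth_a, pyth_b):
    """Chance team A beats team B on a neutral floor, given each team's
    pythagorean rating (the log5 formula)."""
    return (pyth_a - pyth_a * pyth_b) / (pyth_a + pyth_b - 2 * pyth_a * pyth_b)
```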

The inputs into the pythagorean equation are the team’s adjusted offensive and defensive efficiencies. Any time you see something “adjusted” on this site, it refers to how a team would perform against average competition at a neutral site. For instance, a team’s offensive efficiency (points scored per 100 possessions) is adjusted for the strength of the opposing defenses played. I compute an adjusted offensive efficiency for each game by multiplying the team’s raw offensive efficiency by the national average efficiency and dividing by the opponent’s adjusted defensive efficiency. The adjusted game efficiencies are then averaged (with more weighting to recent games) to produce the final adjusted offensive efficiency.
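In code, the adjustment looks something like the sketch below. The text doesn’t specify the recency weighting, so a simple geometric decay stands in for it here; treat recency_weight as a placeholder, not the actual value.

```python
def game_adj_oe(raw_oe, opp_adj_de, national_avg_eff):
    """Single-game adjusted offensive efficiency: the raw efficiency scaled
    by how the opponent's defense compares to the national average."""
    return raw_oe * national_avg_eff / opp_adj_de

def season_adj_oe(game_values, recency_weight=0.98):
    """Average the per-game adjusted efficiencies with more weight on
    recent games. game_values is ordered oldest to most recent.
    (Geometric decay is an assumption; the actual weighting scheme
    isn't published.)"""
    n = len(game_values)
    weights = [recency_weight ** (n - 1 - i) for i in range(n)]
    return sum(w * v for w, v in zip(weights, game_values)) / sum(weights)
```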

While the pythagorean winning percentage is calibrated to the likelihood of winning, the efficiencies are based purely on scoring per possession, with no consideration of winning or losing. This allows the system to produce both a chance of winning and a predicted final score, and makes it much more predictive than if scoring margin were ignored. It also has the advantage of giving a rating in offensive and defensive terms, and an SOS in those terms as well. Want to know which team has faced the toughest defenses? With my system, you can find out.

Now let’s do this in Q&A form based on e-mail I’ve received.

How do you cap margin of victory?

[This is no longer true, exactly. See the link referred to at the beginning of this piece.] This is the most obvious problem with the system: there is no cap on margin of victory. It’s not that I’m particularly comfortable with that, but I’ve looked at quite a few ways to limit the impact of MOV, and I haven’t found one that I like yet. I’ll find something someday, but until then we have to deal with things like Georgia being ranked 11th and Oklahoma 17th at this point (12/10/06) in the season. More games will push these teams to their rightful locations.

How do you incorporate home court advantage?

I add 1.4% to the home team’s OE and the visiting team’s DE, and subtract the same amount from the opposite parameters (the home team’s DE and the visiting team’s OE), so all four adjustments favor the home team.
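As a sketch (reading “add 1.4%” as a multiplicative bump, which is my assumption; an additive shift in efficiency points would look nearly identical):

```python
HCA = 0.014  # 1.4% home-court adjustment

def apply_hca(home_oe, home_de, away_oe, away_de):
    """Nudge all four efficiencies in the home team's favor: raise the home
    offense and the visitor's points allowed, lower the other two."""
    return (home_oe * (1 + HCA), home_de * (1 - HCA),
            away_oe * (1 - HCA), away_de * (1 + HCA))
```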

What do all the columns mean?

The new ones are Cons (Consistency) and Luck. The easiest one to understand is Luck, which is the deviation in winning percentage between a team’s actual record and its expected record using the correlated gaussian method. The luck factor has nothing to do with the rating calculation, but a team that is very lucky (positive numbers) will tend to be rated lower by my system than its record would suggest.
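The bookkeeping is simple; the work is in the expected record. The sketch below just shows the subtraction, with the correlated gaussian expectation left as an input (I’m not reproducing that method here):

```python
def luck(wins, games, expected_win_pct):
    """Luck = actual winning percentage minus expected winning percentage
    (from the correlated gaussian method, computed elsewhere). Positive
    values mean a team has won more often than its scoring suggests."""
    return wins / games - expected_win_pct
```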

Consistency is basically the standard deviation of a team’s game-by-game scoring margin. Again, it’s not included in the ratings calculation. It can be an aid in determining which teams are overrated by my system, since highly rated teams that are inconsistent tend to look beatable more often. As of this writing, Georgia is ranked 329th in consistency and Oklahoma is at 334th. They’ve played their best games against poor teams, and their worst against good ones.
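A minimal version, using the population standard deviation (whether the population or sample flavor is used isn’t stated):

```python
from statistics import pstdev

def consistency(margins):
    """Consistency = standard deviation of game-by-game scoring margin
    (points scored minus points allowed). Higher means less consistent."""
    return pstdev(margins)
```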

Ideally, I’d synthesize the consistency and rating into one number, but I haven’t found a way I’m comfortable with. So right now, I’m throwing this system out there with all its warts for everyone to see. The warts tend to decrease as more games are played, but at least I’ve made you aware of them and where they can pop up.

Strength of Schedule now has three columns. It’s potentially more confusing, but worth it in the end. The way I compute SOS is to average the opponents’ offensive and defensive ratings and apply the pythagorean calculation to them to rank the overall schedules. So those are the three columns you see: Pyth (overall SOS), AdjO (opponents’ average adjusted offensive efficiency), and AdjD (opponents’ average adjusted defensive efficiency). When comparing the offensive performance of players on different teams, it’s quite an advantage to have their average opponents’ defense quantified. There’s also a column for non-conference SOS, which attempts to capture the portion of the schedule under a school’s control. Thus, no postseason or conference games are included in that calculation.
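Sketching the overall SOS column with the same pythagorean form as above:

```python
def sos(opp_adj_oes, opp_adj_des, exponent=10.25):
    """Overall SOS: average the opponents' adjusted offensive and defensive
    efficiencies, then run the pythagorean calculation on the averages.
    High values mean opponents who both score and defend well."""
    avg_o = sum(opp_adj_oes) / len(opp_adj_oes)
    avg_d = sum(opp_adj_des) / len(opp_adj_des)
    return avg_o ** exponent / (avg_o ** exponent + avg_d ** exponent)
```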