
    Tiers of joy

    by Ken Pomeroy on Thursday, January 14, 2016


    Home-court advantage is important in college basketball - even though it may be at an all-time low - but too often it gets ignored. I suspect we have the RPI to blame for this. The RPI doesn’t include venue in its strength of schedule calculation, and more profoundly, it encourages users to look at a team’s record against, say, the top 50 teams, without considering where those games were played. (Before I go any further, I have to say I don’t mind the RPI in general. It’s not a bad formula considering its origins, and the history behind it is kind of endearing.)
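
    For reference, here’s a toy version of the commonly cited RPI formula: 25% weighted winning percentage, 50% opponents’ winning percentage, and 25% opponents’ opponents’ winning percentage. The venue weights below are the publicly reported ones and are my own assumption for this sketch rather than anything pulled from this site; the point is simply that venue only enters through the first term, while the two strength-of-schedule terms ignore where games were played.

        # Toy RPI calculation using the commonly cited 25/50/25 weights. Venue
        # only enters through the weighted winning percentage (WWP); the two
        # strength-of-schedule terms (OWP, OOWP) are plain winning percentages
        # with no venue adjustment.

        # Reported venue weights for the WWP term (assumed here): a road win
        # counts 1.4, a home win 0.6, a neutral-site win 1.0, and losses are
        # weighted in reverse.
        WIN_WEIGHT  = {"home": 0.6, "neutral": 1.0, "road": 1.4}
        LOSS_WEIGHT = {"home": 1.4, "neutral": 1.0, "road": 0.6}

        def weighted_wp(results):
            """results: list of (location, won) tuples for one team's games."""
            wins   = sum(WIN_WEIGHT[loc] for loc, won in results if won)
            losses = sum(LOSS_WEIGHT[loc] for loc, won in results if not won)
            return wins / (wins + losses)

        def rpi(wwp, owp, oowp):
            """Combine the three components with the standard 25/50/25 weights."""
            return 0.25 * wwp + 0.50 * owp + 0.25 * oowp

        # Example: a 2-1 team whose only loss came at home.
        print(weighted_wp([("road", True), ("neutral", True), ("home", False)]))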

    In the spirit of home-court advantage awareness, I’ve gone ahead and added some new information to the schedule page. In a fair world, when people talk about top 50 wins, they should be accounting for where the game was played. And so I’ve added a notation to each game on a team’s schedule indicating whether it was a Tier A or Tier B game. A game in Tier A represents a top 50 opponent, adjusted for the location of the game, and Tier B is the same concept for a top 100 opponent. This is similar to what already exists on the player pages.

    It turns out those adjustments are important. Beating the 90th-ranked team on the road is about as difficult as beating the 50th-best team on a neutral floor, which is roughly as difficult as beating the 20th-best team on one’s home floor. (The exact relationship can vary by season.) So it’s poor form to ignore this when some college teams never go on the road in non-conference play and others rarely get to play a non-conference game at home.
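
    To make the tier idea concrete, here’s a minimal sketch of how a location-adjusted tier label could be assigned. The venue offsets and function names are illustrative assumptions of mine, loosely calibrated to the 90-road / 50-neutral / 20-home equivalence above; this is not the exact method used to generate the tiers on the schedule pages.

        from typing import Optional

        # Illustrative ranking-spot offsets by venue, from the perspective of
        # the team whose schedule is being graded. These specific numbers are
        # assumptions for the sketch, loosely based on the equivalence above.
        LOCATION_OFFSET = {
            "road": -40,    # a 90th-ranked host plays like the ~50th-best team on a neutral floor
            "neutral": 0,
            "home": +30,    # a 20th-ranked visitor plays like the ~50th-best team on a neutral floor
        }

        def effective_rank(opponent_rank: int, location: str) -> int:
            """Translate an opponent's rank into a rough neutral-floor equivalent."""
            return max(1, opponent_rank + LOCATION_OFFSET[location])

        def tier(opponent_rank: int, location: str) -> Optional[str]:
            """Return 'A' for a top-50-equivalent game, 'B' for top-100, else None."""
            eff = effective_rank(opponent_rank, location)
            if eff <= 50:
                return "A"
            if eff <= 100:
                return "B"
            return None

        # The same 90th-ranked opponent grades out as Tier A on the road,
        # Tier B on a neutral floor, and outside both tiers at home.
        print(tier(90, "road"))      # 'A'
        print(tier(90, "neutral"))   # 'B'
        print(tier(90, "home"))      # None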

    It’s with some reluctance that I do this. Anybody’s ranking system should stand on its own. For instance, as I write this Evansville is ranked 41st in my system. That means the system thinks Evansville is the 41st-best team in the land right now. It’s true the Aces don’t have a top 50 win whether you consider home court or not, and you might propose that Evansville is overrated because of that. But unless you have evidence that the system tends to overrate teams that have played a weak schedule, you don’t have a basis for that claim.

    Furthermore, ranking a team based solely on its performance in various tiers can be problematic. Teams taking on opponents ranked in the top ten on the road are going to have a worse record than teams playing opponents ranked in the 40-50 range. That gets to why computer ratings exist - it’s practically impossible for a human to weigh all of this information and produce a rating on the fly. So humans try to simplify things by looking at a team’s top 50 record and lose information in the process.

    In this way, the RPI is unique as far as I know. The RPI devotee is instructed to ignore a team’s actual rank. There are 40+ ranking systems listed on Ken Massey’s composite page. Sure, they’re all flawed, and some are more flawed than others, but I seriously doubt any of the folks behind those systems would say you can’t use a team’s ranking in their system and must instead look at a host of other things to determine which teams are best.

    But so it is in the RPI, where the user can’t put any trust in a team’s actual ranking but must look at the underlying data: who a team has played and who it has beaten. And thus, things like record vs. top 50 teams are deemed more important than a team’s own RPI rank. But in doing so, one puts trust in an opponent’s RPI ranking, the very thing the user is told to ignore for the team in question.

    Through repetition over three decades, this construct has been ingrained in the at-large selection process so that few people question it. I suspect part of this is because it makes it more difficult for the casual fan to be an expert. Expert: “No, casual fan, I know Team X is ranked 20 spots ahead of Team Y, but it’s just not that simple. Team Y is 4-3 against the top 50 whereas Team X is only 1-2. So team Y is better. Leave it to me to interpret the data.”

    But what if Team X was 7-2 against the top 100 and Team Y was 4-4? Well, some expert is going to evaluate that and tell us. You just can’t look at a team’s ranking, silly. And what if all of Team X’s games against the top 100 were on the road, but only two of Team Y’s were? Somebody will figure it out. Probably by spending hours and hours looking at the data. And this doesn’t even consider Teams A, B, C, and D, which have similar data. The effort involved removes much of the convenience of having an automated ranking.

    If you were asked to come up with a ranking of [some things], imagine reporting back, “Here’s my ranking, except don’t pay attention to the ranking, we have to look at what [these things] did when they interacted with [other things]. I know [this thing] is ranked as the best [thing] but it’s not, because of [this reason].” That would be an awkward moment.

    But the worst aspect of the RPI is that it encourages the user to ignore the location of a game. Top 50 record has become a ubiquitous measure despite the fact that it’s going to screw over teams that have to play most or all of their games against top 50 competition away from home. To take one example, consider the Monmouth/UCLA game. I don’t know where UCLA is going to end up in the RPI, but in the real world a win on UCLA’s floor is the equivalent of beating a top 50 team on a neutral floor, whether UCLA is ranked in the top 50 or not (and according to rpiforecast.com they probably won’t be). So Monmouth will not get credit for a top 50 win, not because they don’t deserve it, but simply because they are unable to schedule a top 50 team at home.

    The oddity is that Monmouth figures to end up in the RPI top 50 itself. So UCLA will have the appearance of playing a quality opponent, suffering an excusable loss if you will, while Monmouth will only get credit for a pretty ordinary win in the RPI’s view. This is a bad thing, and people who are interested in fairness shouldn’t accept it, especially since the fix is easy. I know the basketball committee is made up of experts who know better than to look at things that way, but it adds another layer of complexity to the process that isn’t necessary.

    So that is the motivation behind the tier notation on the team schedules. My rankings are designed to stand on their own, but if you are interested in assessing a team’s quality wins and losses, the tier approach is a fairer way of thinking about it.