CourtIntelligence powered by kenpom.com

    Pre-season ratings 2014

    by Ken Pomeroy on Saturday, October 26, 2013


    Pre-season ratings have been posted for the upcoming season. When I first started doing these before the 2011 season, I thought I was pretty awesome. It was kind of a big deal to get every team’s lineup data, mix in some limited recruiting info, and produce a rating that wasn’t laughably horrible. But then Hanner came along with his lineup-based approach and TeamRankings did something that is probably fairly sophisticated, and my preseason ratings became the simplest algorithm possible without being a complete joke.

    The system is largely the same as in recent seasons. It independently predicts a team’s adjusted offensive and defensive efficiency. As a reminder, it uses information split into two categories:

    - Base level of the program. This takes into account the last five seasons of data for the same unit (offense for predicting offense) and the last season for the opposite unit (defense for predicting offense). It also includes data for how much money the program has spent on men’s basketball for the last three seasons. The bulk of this component is determined by the most recent season’s performance of the unit.

    You can make a decent predictive system just by knowing what is normal for a program. If we were predicting the Big 12 standings in 2025 (assuming the conference exists), it would be reasonably safe to say that Kansas will have a winning record and TCU will have a losing record. We can say that with some confidence even though some of the players on those rosters haven’t picked up a basketball yet.

    - Personnel. This component handles who’s coming back from last season’s team and which impact recruits are being added to the roster. More impact is given to returning players from earlier classes. And minutes played by those with a high-efficiency/high-usage profile are particularly important. Recruits in the RSCI top 100 have some influence here as well, although most of the influence is in the top 50.

    The goal here is really to get each conference’s pecking order correct and to predict end-of-season ratings. To that end, if a player is expected to be available by late January or so, he’s included in the personnel calculations. This applies to Louisville’s Chane Behanan and Florida’s Chris Walker, while Georgetown’s Greg Whittington is not included although he may well see action later in the season.
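To illustrate the shape of the two components described above, here is a rough sketch in Python. Every weight, threshold, and function name is a made-up assumption for demonstration; the model’s actual coefficients aren’t published, and the spending input is omitted for simplicity.

```python
# Illustrative sketch of the two preseason components.
# All weights are invented assumptions, not the real model's values.

def base_level(own_unit_history, opp_unit_last):
    """Blend the last five seasons of the unit's own adjusted efficiency
    (most recent season weighted most heavily) with last season of the
    opposite unit."""
    weights = [0.40, 0.25, 0.15, 0.12, 0.08]  # most recent first (made up)
    seasons = own_unit_history[:5]
    w = weights[:len(seasons)]
    blended = sum(wi * s for wi, s in zip(w, seasons)) / sum(w)
    # small nudge from the opposite unit's most recent season
    return 0.9 * blended + 0.1 * opp_unit_last

def personnel_score(returnees, recruit_ranks):
    """returnees: (minutes_pct, ortg, usage) for each returning player.
    recruit_ranks: RSCI ranks of incoming top-100 recruits."""
    score = 0.0
    for minutes_pct, ortg, usage in returnees:
        # extra credit for minutes from high-efficiency/high-usage players
        impact = 1.0 + max(0.0, (ortg - 100) / 100) * (usage / 20.0)
        score += minutes_pct * impact
    for rank in recruit_ranks:
        # most of the recruit influence sits in the top 50
        score += 0.15 if rank <= 50 else 0.05
    return score
```

The point of the sketch is the structure, not the numbers: one function leans on program history with the most recent season dominating, the other converts returning minutes and recruit pedigree into a single adjustment.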

    You can find additional discussion in last season’s piece.

    Now let’s get to the question a lot of people will be asking.

    Why is [state your favorite team] rated lower than it should be?

    It’s because one or both of the components is missing something. Perhaps recent seasons are not representative of your team’s normal level. The personnel component doesn’t consider transfers or recruits outside the top 100. It does have knowledge of players who played two seasons ago but missed last season, but that is a small influence. So if your team has players the personnel component can’t see (transfers, junior college players, and non-top-100 recruits for mid-majors), then it’s possible your team is underrated. Keep in mind, though, that the first component handles some of this: it effectively sets a “replacement level” for new players on the roster who aren’t accounted for in the personnel component.

    The system doesn’t think as highly of freshmen as AP voters will and it likes good teams that return a lot of players. Hence Oklahoma State, Iowa, UConn, Creighton, and Stanford are ranked higher than the humans and Kentucky, Kansas, and Arizona are ranked lower. (Hey, the Fab Five were ranked #20 in the preseason by the humans, so leave me alone.) Andrew Wiggins and Julius Randle are not your typical first- and second-ranked recruits, so perhaps I could have made some subjective adjustments here, but I chose not to.

    Last season, the system managed to hold its own against others, with a mean absolute error of about 2.14 on predicting conference wins. It had some good calls and some bad ones, some of which were discussed in the linked piece. Refer to your local message board archives for additional details.
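For reference, the mean absolute error figure above is just the average gap between predicted and actual conference wins across teams. A minimal sketch, with invented numbers:

```python
def mean_abs_error(predicted, actual):
    """Average of |predicted - actual| across teams."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

# Invented example: three teams' predicted vs. actual conference wins.
print(mean_abs_error([12, 10, 8], [14, 9, 5]))  # -> 2.0
```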

    It’s worth mentioning that at the end of the season, no conference’s standings will look exactly like what is currently predicted. It’s obviously going to take more than 12 wins to win the Big East or 13 to win the Big Ten. But the top teams in those conferences are similar enough that a reasonable expectation for each team’s win total cannot be very high.