Over the next three weeks, the usual conversation will take place regarding the selection process. We are pretty certain to hear two things from bracketologists:
1. This is the weakest bubble ever. Or at the very least, “this bubble is so weak.” Of course it is! You are dealing with the 45th to 50th best teams in the country. These teams lose a bunch of games. They occasionally play like the 150th best team in the country. This is why they aren’t a shoo-in for a 68-team tournament.
2. The selection committee only cares about RPI-related information. This may or may not be true, but I believe the certainty with which people claim it is overstated.
Not that I’m under the illusion that any other ratings system is used in the selection process. The process is built around looking at data based on RPI rankings, and the information analyzed by committee members is far too extensive to allow any of the other rankings available to be used in a productive way.
However, that doesn’t mean there isn’t some indirect influence. The Easy Bubble Solver, invented by Drew Cannon, has had a pretty good track record over the past three years at sorting out bubble teams. For those unaware, the EBS simply adds a team’s ranking in the RPI to its ranking in my system. The teams with the lowest combined totals are in; the rest are out.
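The method above is simple enough to sketch in a few lines. Here is a minimal illustration using made-up team names and ranks (the real EBS would use actual RPI and predictive-system rankings):

```python
def pick_at_large(rpi_rank, pred_rank, bids):
    """Easy Bubble Solver sketch: sum each team's RPI rank and
    predictive-system rank, then take the `bids` teams with the
    lowest combined totals. Lower is better."""
    scores = {team: rpi_rank[team] + pred_rank[team] for team in rpi_rank}
    return sorted(scores, key=scores.get)[:bids]

# Hypothetical bubble: four teams competing for two at-large spots.
rpi = {"Team A": 42, "Team B": 55, "Team C": 61, "Team D": 48}
pred = {"Team A": 70, "Team B": 40, "Team C": 45, "Team D": 90}

print(pick_at_large(rpi, pred, 2))  # → ['Team B', 'Team C']
```

Note that Team D has the second-best RPI rank of the group but misses the cut because the predictive system views it far less favorably, which is exactly the kind of disagreement the EBS is meant to resolve.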
Last season, it missed on one at-large team. Of the 115 bracketologists tracked at the Bracket Matrix, only one did better. And these are people spending a lot of time figuring this out. In 2011, it missed on two teams; just one of 89 bracketologists did better. In 2010, it missed one team in a season when just 10 of 85 bracketologists picked a perfect field.
Note here that I don’t think my system is a special ingredient in this. You could probably take any reputable predictive system and get similar results. The point is the committee may subjectively include information beyond what the RPI reveals by itself.
Before 2010, the results are less robust, which means one of two things. Either the method got particularly lucky the past three seasons, or the committee is starting to account for how a team has played as opposed to simply what it has accomplished. Given my own experience with a member of the basketball committee, I’d be foolish to think the light bulb has suddenly gone on with that group, but I think it’s reasonable to assume that the committee considers more than just RPI-driven data. (Final scores are even included on the team sheets used in deliberations.)
So if your team is suffering from a particularly sluggish RPI while being judged better by more sophisticated methods, you have a little more hope than you might think.