By now, you’ve noticed the preseason ratings have been posted. Thanks to all that have stopped by the past 24 hours. My server thought it was March on Sunday night. (h/t to Matt Norlander for the tweet that generated the traffic. I usually enjoy flipping the switch and watching twitter spread the word organically over the course of a few hours, but since Norlander spilled the beans approximately five minutes after the site turned over, I got an immediate firehose of traffic.)
I’ve discussed the formula in some detail in previous seasons, and it hasn’t changed much in the five years I’ve been doing this. Here are some semi-random thoughts on the ratings.
People always want to know why a team is ranked in an unexpected spot. Think of the ratings formula as [team baseline + personnel]. The personnel portion is looking at who is returning from last season’s roster, how much the returnees played, what kind of role each returnee had, and what class they are in. Actually, there’s a two-year window for this, so Butler gets some credit for getting Roosevelt Jones back, for instance.
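To make the [team baseline + personnel] idea concrete, here’s a rough sketch in Python. To be clear, the weights, the recency discount, and the function names are all my own illustration of the concepts described above, not the actual formula.

```python
# Hypothetical sketch of [team baseline + personnel]. The 0.6/0.4 blend of
# playing time and role, and the 0.5 discount for two-seasons-ago players,
# are invented for illustration.

def personnel_credit(returnees):
    """Sum credit for returning players over a two-year window.

    Each returnee is (minutes_share, usage_share, seasons_ago), where
    seasons_ago is 1 for last season's roster and 2 for the season before
    (which is how a returning injured player like Roosevelt Jones
    would still earn credit).
    """
    credit = 0.0
    for minutes_share, usage_share, seasons_ago in returnees:
        recency = 1.0 if seasons_ago == 1 else 0.5  # assumed discount
        credit += recency * (0.6 * minutes_share + 0.4 * usage_share)
    return credit

def projected_rating(baseline, returnees, scale=0.1):
    """Projection = program baseline plus a personnel adjustment."""
    return baseline + scale * (personnel_credit(returnees) - 1.0)

# A roster returning lots of minutes and usage projects higher than a
# gutted one, holding the program baseline constant.
strong = [(0.30, 0.28, 1), (0.25, 0.30, 1), (0.20, 0.15, 2)]
weak = [(0.10, 0.08, 1)]
print(projected_rating(0.80, strong), projected_rating(0.80, weak))
```

The point of the two-part structure is that the personnel term only moves a team relative to what its program history says it should be.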
The system does not give any special consideration to new players entering the program. There is some credit given for high-profile recruits, but the poor performances of UCLA and Kentucky in 2012-13, among others, have tended to mute the impact of recruits in the model. Recruiting rankings are useful, but the impact of high-level prospects on their respective teams as freshmen can vary wildly.
There is no allowance for impact transfers or redshirt freshmen. So if your program has a high-profile transfer joining the team, the system may be underrating your team. But this is where the program baseline can pick up some of the slack. The system is looking at the performance of a team over the past five seasons and its men’s basketball budget over the two most recent seasons for which data is available to figure out what should be expected of a team in the absence of any other information. A lot more weight is given to the past two seasons in terms of team performance. So the system is going to be forgiving about personnel losses on teams like Louisville and Syracuse and Creighton that spend a bunch of money on men’s hoops and have had recent success.
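Here’s what that baseline calculation might look like in spirit. The season weights and the budget coefficient are guesses on my part; all the real model guarantees is that the last two seasons dominate and budget contributes something.

```python
# Illustrative program baseline: five seasons of performance, weighted
# heavily toward the two most recent, plus a budget term. The weights
# and the 90/10 split are assumptions, not the real model's values.

def program_baseline(ratings_last5, budget_percentile):
    """ratings_last5: pythagorean ratings, most recent season first.
    budget_percentile: men's hoops budget rank among D-I teams, in [0, 1]."""
    weights = [0.35, 0.30, 0.15, 0.12, 0.08]  # recent seasons dominate
    perf = sum(w * r for w, r in zip(weights, ratings_last5))
    return 0.9 * perf + 0.1 * budget_percentile

# A big-budget program with recent success keeps a high baseline even
# after heavy personnel losses (the Louisville/Syracuse/Creighton point).
print(program_baseline([0.93, 0.95, 0.88, 0.85, 0.80], 0.98))
```

Notice that a team with a mediocre five-year history but a huge budget still gets a nudge upward, which is part of why well-funded programs don’t crater in the projections after an exodus.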
Let’s face it: while people like to talk about how much parity there is in the sport, the reality is that if I wanted to predict the Pac-12 race in 2025, I’d do pretty well forecasting Arizona and UCLA at the top. And, well, I won’t call out the teams at the bottom, but despite not knowing who will be coaching or playing for these teams that far in the future, we could make a reasonably good forecast of either end of the conference standings. That’s true of most leagues. The purpose of the team baseline is to capture this bit of knowledge, which is more program-dependent than roster-dependent.
Conference gravity is also thrown into the mix, so that teams that have had outlier performances relative to their conference tend to get pulled back towards the conference mean. Coaching changes are also considered, and teams with a coaching change get punished, though this effect is stronger for teams with a better baseline.
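Both adjustments are simple to sketch. The pull factor and the penalty size below are invented numbers; the two properties that matter are that outliers regress partway toward the conference mean, and that a coaching change costs a strong program more than a weak one.

```python
# Sketch of conference gravity and the coaching-change penalty described
# above. The gravity factor (0.25) and penalty rate (0.05) are made up
# for illustration.

def adjust(baseline, conference_mean, new_coach=False,
           gravity=0.25, penalty=0.05):
    # Outlier teams get pulled partway back toward their conference mean.
    adjusted = baseline + gravity * (conference_mean - baseline)
    if new_coach:
        # Penalty scales with the baseline, so better programs lose more.
        adjusted -= penalty * baseline
    return adjusted

# A team well above its league gets pulled down toward the pack, and the
# same coaching change hurts a 0.90 program more than a 0.60 program.
print(adjust(0.90, 0.70))
print(adjust(0.90, 0.90, new_coach=True), adjust(0.60, 0.60, new_coach=True))
```

The scaling of the coaching penalty matches the intuition: losing the coach who built a top program is a bigger deal than turnover at a program near its conference floor.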
There’s a slight distinction that needs to be made regarding what is being projected. Technically, the system is forecasting a team’s final pythagorean rating and not its final ranking. For instance, take Oklahoma State’s forecasted rating of .8546, which is the 21st-best projection. Last season that rating would have ranked 29th, and two seasons ago it would have been 33rd.
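For readers unfamiliar with the pythagorean rating: it’s an expected winning percentage against an average team, derived from adjusted offensive and defensive efficiency. The exponent below (11.5) is one commonly used value for college basketball, and the efficiency numbers are illustrative inputs of my own, not Oklahoma State’s actual projections.

```python
# The pythagorean rating being forecast: an expected winning percentage
# built from adjusted offensive and defensive efficiency (points per 100
# possessions). The 11.5 exponent is an assumption, not a quoted value.

def pythagorean(adj_off, adj_def, exponent=11.5):
    """Expected winning pct vs. an average team."""
    return adj_off**exponent / (adj_off**exponent + adj_def**exponent)

# A roughly 16-point efficiency gap produces a rating in the mid-.80s,
# the neighborhood of Oklahoma State's .8546 forecast (inputs illustrative).
print(pythagorean(113.0, 97.0))
```

This is why the ranking/rating distinction matters: the model outputs a number like .8546, and where that number lands in the rankings depends entirely on how everyone else’s numbers shake out that season.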
Which is to say that a highly-ranked team is more likely to be overrated than underrated in terms of its ranking. That’s an obvious statement once you get to the top-ranked team, but I expect it’s underappreciated for teams elsewhere in the top 20. By the way, this is the fifth season I’ve done preseason ratings, and the top-ranked team in preseason has finished first on two occasions – 2012 Kentucky and 2014 Louisville. But both of those teams had to improve on their preseason rating to earn the ratings title at the end of the season.
As far as ratings eyesores go, Oklahoma State probably topped my personal list, although there are always plenty to go around. Mississippi State was Norlander’s favorite. Indeed, a team going from 208 to 83 may not be the best look for the ratings. Note, too, that TCU is listed at 130 after finishing 234th last season. If you’re in a decent conference and players are staying with the program and the coach isn’t getting fired, you can’t suck forever. That is the theory here.
But in the case of the Bulldogs and the Cowboys, the system likes high-usage guys that have been in the program for multiple seasons. Sure, Oklahoma State loses Marcus Smart and Markel Brown, but at least they know Le’Bryan Nash is capable of being a go-to guy. For Mississippi State, the ratings will turn to Craig Sword for credibility. Unfortunately, the high-usage shooting guard is battling back problems and may miss some games to start the season. Get well soon, buddy! I’ll be waiting to break out the #RickRayBandwagon hashtag until you’re back.
And that brings us to the injury/suspension portion of the show. Basically, if a player is not ruled out for the entire season, he is included as a returnee. So guys like Sword are in, as are more extreme cases of players expected to miss multiple weeks.
At any rate, these are the ratings and I’m sticking to them. Unless there is major personnel news in the next week, that is. They’re just a starting point to generate reasonable score and record predictions early in the season. Never let a number define your team, kiddos.
Next up, I’ll take a look at how various projection systems, including my own, did last season.