Perhaps a new format…
Rankings and Polls:
The current approach, with preseason media polls and rankings, i.e., the "beauty contest" methodology, is old, outdated, biased, and flatly inaccurate. It is based on the previous year's output, a program's history as a major team, conference bias, individual voter bias, and hype. Each year the polls are frontloaded with the perennial favorites, and each year several of those favorites tank. Yet it takes an act of Congress to dislodge them from the polls, and an avalanche of wins for an unranked team to enter the realm of the top teams. This season ranked teams are falling like leaves in November: ten ranked teams lost just this past week, twenty or so in the last two weeks. Perhaps now is the time for change.
"But we've always done it like this." "This is what we know."
A few folks have previously put forth ideas to improve the current setup.
1) Wait until after the nonconference portion of the season to assess which teams belong at the top.
In this way we have an 11-13 game history from which to assess SOS, RPI, W/L records, NPI/BPI, and the rest. By waiting until December to initialize the rankings, real data can be used rather than, or at least in conjunction with, what happened last season, eye tests, coaching resumes, and bias. Who cares what a "so-and-so-coached team" is potentially capable of; what have they done on the court this season? It took 5 losses, including 3 in a row, to finally knock Duke from the rankings this week. Yet under the current, predominantly unscientific, preferential, and bias-based methods used by both the coaches and writers polls, Duke will be back in the polls the following week, whereas a deserving newcomer such as Pitt, Valpo, St Marys, or even Arkansas LR will not get the time of day.
2) Let the metrics decide who deserves to be ranked.
Between Warren Nolan, Sagarin Rankings, Ken Pomeroy, Torval, StatsSheets, TeamRankings, RPI Forecast, Bracket Matrix, and several others, there is more than enough data to compile a metrics-driven ranking solution. Again, by waiting for the nonconference portion to play out, much of that data is already available, and the resulting rankings would be more accurate. That is not to say a data-driven poll is infallible; such a poll would still be updated weekly, as is done now. Several sites already publish composite rankings built from a handful of these source metrics and others.
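One simple way a composite, metrics-driven ranking could work is to average each team's rank across the available sources and sort by that average. The sketch below is only an illustration of that idea; the source names and numbers are hypothetical placeholders, not actual data from the sites named above.

```python
from statistics import mean

# Each source maps team -> that source's rank (1 = best).
# These sources and ranks are made-up placeholders for illustration.
sources = {
    "metric_a": {"Pitt": 18, "Valpo": 24, "St Marys": 15, "Duke": 30},
    "metric_b": {"Pitt": 22, "Valpo": 19, "St Marys": 12, "Duke": 28},
    "metric_c": {"Pitt": 20, "Valpo": 26, "St Marys": 14, "Duke": 35},
}

def composite_ranking(sources):
    """Average each team's rank across all sources; lower is better."""
    # Only rank teams that appear in every source.
    teams = set.intersection(*(set(s) for s in sources.values()))
    avg = {t: mean(s[t] for s in sources.values()) for t in teams}
    return sorted(avg.items(), key=lambda kv: kv[1])

for team, score in composite_ranking(sources):
    print(f"{team}: {score:.1f}")
```

A real version would weight the sources (a predictive metric and a results-only metric arguably should not count equally), but even a plain average removes any single voter's bias.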
3) RPI/SOS Stand-alone valuations.
Who is playing whom? How many cupcake games does it take to equal one quality game? By using a 3- or 5-year composite average RPI/SOS as a base, one can assign a beginning December/January value to each team and rate a top 25-50 index. Current in-season play and a team's current record then determine its up/down movement for the duration of the season.
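The idea above has two steps: seed a December value from a multi-year RPI/SOS average, then let the current record move teams up or down. A minimal sketch of that, with made-up numbers and an arbitrary `weight` parameter chosen purely for illustration:

```python
def baseline(rpi_history, sos_history):
    """Average RPI and SOS ranks over the prior 3-5 seasons (lower = stronger)."""
    rpi = sum(rpi_history) / len(rpi_history)
    sos = sum(sos_history) / len(sos_history)
    return (rpi + sos) / 2

def adjusted(base, wins, losses, weight=0.5):
    """Shift the baseline by current-season record; winning lowers the score."""
    games = wins + losses
    if games == 0:
        return base
    win_pct = wins / games
    # A .500 record leaves the baseline unchanged; better records lower it.
    return base * (1 - weight * (win_pct - 0.5))

# Example: hypothetical team with a strong 3-year base and an 11-2 start.
base = baseline([22, 31, 18], [15, 27, 20])
score = adjusted(base, wins=11, losses=2)
print(round(base, 1), "->", round(score, 1))
```

The weighting between history and current play is the whole debate, of course; the point is only that both inputs are measurable, so the seeding need not rest on last year's hype.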
Other notions for modifying the ratings structure have surfaced over the last several years. None are perfect, including the methodologies in use today, but improvements are needed.