This post is part of a series tracking the successes and failures of various NCAA Men's Basketball ranking systems and bracket models throughout the 2014 NCAA Tournament. Click here to check out the full series.
Hot dog, we have a wiener!
There were two upsets on Saturday. In the first, 7 seed Connecticut upended overall #1 seed Florida. In the second, 8 seed Kentucky took down 2 seed Wisconsin. As a result, the AP and USA Today NCAA Basketball polls issued all the way back on November 4, 2013 (using chalk as a tiebreaker) succeeded in picking a more successful bracket than any competing system, be it an advanced computer model, a betting market, or the opinions of the NCAA Selection Committee.
Perhaps more surprisingly (or perhaps not, as this is not really a new phenomenon), those ninety-seven Associated Press and USA Today voters in November beat out the opinions of those same ninety-seven voters from just three weeks ago.
The only remaining question is how big the victory will be. The preseason poll brackets can earn one last win on Monday night if Kentucky beats Connecticut; that would add another 320 points and move them even higher up ESPN's leaderboard. If Connecticut wins instead, their percentile rankings will remain pretty much the same.
Interestingly, while the USA Today and AP preseason poll systems may have won under March Madness bracket scoring rules, they were not the most accurate. Right now, both FiveThirtyEight and Vegas have a higher correct-pick percentage; the best the preseason polls can manage is a tie with those systems. So while brackets informed by the preseason polls may have been more profitable, FiveThirtyEight's and Vegas's success rates are more indicative of future performance.
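The gap between "most bracket points" and "most correct picks" comes down to round weighting. Under a standard doubling scheme (an assumption here; ESPN-style scoring awards 10, 20, 40, 80, 160, and 320 points per correct pick by round), a single correct championship pick is worth as much as 32 correct first-round picks. A minimal sketch with made-up pick counts shows how a less accurate bracket can still score more points:

```python
# Why bracket points and pick accuracy can diverge.
# Assumed ESPN-style scoring: each round's correct pick is worth
# 10, 20, 40, 80, 160, 320 points (rounds 1 through 6), so each
# round contributes up to 320 points, but one late pick outweighs
# many early ones.

ROUND_VALUES = [10, 20, 40, 80, 160, 320]   # points per correct pick
GAMES_PER_ROUND = [32, 16, 8, 4, 2, 1]      # 63 picks in total

def score(correct_per_round):
    """Total bracket points, given correct picks per round."""
    return sum(v * c for v, c in zip(ROUND_VALUES, correct_per_round))

def accuracy(correct_per_round):
    """Fraction of all 63 picks that were correct."""
    return sum(correct_per_round) / sum(GAMES_PER_ROUND)

# Hypothetical systems (NOT the real 2014 numbers):
# 'accurate' gets more picks right overall but misses the champion;
# 'preseason' gets fewer right but nails the title game.
accurate = [26, 11, 5, 2, 1, 0]    # 45 of 63 correct
preseason = [24, 10, 4, 2, 1, 1]   # 42 of 63 correct

print(score(accurate), round(accuracy(accurate), 3))     # 1000 0.714
print(score(preseason), round(accuracy(preseason), 3))   # 1240 0.667
```

The less accurate bracket wins on points (1240 to 1000) because it banked the 320-point championship pick, which is the same dynamic the preseason polls benefited from on Saturday.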
But only by a little bit. Note that the spread between the best and worst pickers is very narrow: only four correct picks. We shouldn't draw conclusions from one year of data, or from any sample with such a narrow distribution. Tune in next time, when we compare three to four years of Rating Systems Challenge results to see which systems have performed above and beyond over the long run.