Although it would be fitting for the final season of the BCS to end with a hopeless muddle of teams, we could end up with a clear No. 1 and No. 2 in the final two-team playoff. Alabama and Florida State have jumped well ahead of the rest of the field. Those two teams will be double-digit favorites in each of their remaining games, with the Tide's trip to No. 7 Auburn being the likeliest source of chaos.
Pick just about any of the advanced measures, and you'll get the Tide and Noles on top of the 125-team pile. Take a look at those two teams up against unbeaten Baylor, a team that has been close to Bama and Florida State in the numbers and also has enough tough games in its remaining schedule to create some noise:
[Table: Alabama, Florida State, and Baylor compared by F/+ Combined, Sagarin Predictor, SRS, Off. YPP, Def. YPP, Adj. Off. YPP, and Adj. Def. YPP]
The non-BCS rankings agree that Alabama and Florida State are the two best teams in college football. But the numbers don't agree on which of the two is better.
If the two teams are so close, both in the numbers and according to the eye test, how can Alabama be the clear No. 1 in the polls? Obviously Alabama started No. 1, but why are so few voters changing their minds? This week, the Tide received all 105 first-place votes in the BCS-impacting Harris Poll, as well as 58 of 62 in the Coaches Poll, and 56 of 59 in the AP Poll. This doesn't matter in practical BCS terms, since both are in line to play for the title, but it's still an example of groupthink.
So what is causing dozens of human voters to all reach the same conclusion on what should be a closely contested question? Let's look outside this season's results for the explanation.
Florida State has been out of the national conversation for the better part of a decade, and the Noles carry ACC baggage besides. We've heard a lot of "FSU is back" in recent years, but not until 2012 did the Noles actually start to look like their former, dominant selves.
Alabama, on the other hand, benefits from a gale-force tailwind. Pollsters remember the teams that have made them look good, as Nick Saban's charges have two years running.
So the natural question arises: should voters consider results from prior seasons when ranking teams?
For starters, it's not exactly fair. What a team did last year has nothing to do with its résumé this year. But what about as a factor in evaluating team quality?
On the one hand, it's easy to see how this is a terrible idea.
Take Ohio State, for example. The last two times the Bucks were in the national title game, they were drawn and quartered by SEC opponents. Is this a reason to keep them out of the title game if one of FSU or Alabama stumbles in the next month? Maybe in the sense that the Bucks come from the same unimposing Big Ten that inflated their records in 2006 and 2007. However, this Ohio State team has a different head coach, one with a terrific record in the postseason and a more modern offense than that of the man whose teams were clobbered. It would be unfair to punish Braxton Miller for Troy Smith's sins.
On the other hand, Alabama has such a consistently impressive track record that voters are hardly irrational to consider past seasons. The general criticism of using evidence from prior campaigns is that the turnover in college football is so rapid and inevitable that referencing a team from last year -- let alone one from four years ago -- is an exercise in relying upon irrelevant data. The Tide are something of an exception to this rule. The head coach is the same. Much of the coaching staff is the same. The offensive and defensive styles are relatively constant, although subject to a degree of evolution. The names on the field have changed, but Bama is such a machine of a program that Saban just replaces one high four- or five-star with another.
And as with all things Crimson Tide, there is a historical angle. The 1966 Alabama team was, like the 2013 squad, a two-time defending national champion. That team entered the season ranked No. 1, won every game (with all but one win by double digits), and was beaten out in the polls by a Notre Dame team that shut up shop at the end of its titanic clash with Michigan State, settling for a 10-10 tie in one of college football's most famous games.
The prospect of a two-time defending national champion in the modern era going unbeaten and not heading to the end with the No. 1 ranking is frankly unthinkable, but that's exactly what happened to the Tide in 1966. And Bama fans with a sense of history, i.e. 97 percent of them, will remind you of this at the drop of a houndstooth hat.
As Michael Weinreb pointed out in his excellent piece on Alabama and Notre Dame in 1966, the Tide faced a disadvantage in that they were an all-white team from a state whose governor was leading a campaign of massive resistance against school integration and political rights for African-Americans. That context hung over the voters' decision to vote for Notre Dame over Alabama. In an odd way, the fact that pollsters went outside of the results of the 1966 season acts as a precedent for voters this year looking outside of the results of the 2013 season to exempt Bama from comparison with Florida State.
College football's governing bodies put pollsters in the impossible position of having to pick the two best teams from a huge field of contestants who play few, if any, common opponents. That task is difficult in the best of times; it's impossible when Alabama and Florida State position themselves as essentially equal to one another and superior to every other team in the FBS. And Lord help the voters if Baylor does something inconvenient like wipe the floor with Texas Tech, Oklahoma State, TCU, and Texas.
With a limited sample size from which to separate the inseparable, voters have to look outside of the games played this year to make their decisions. In 2013, they are using Alabama's recent track record as a tie-breaker for the Tide. This use of data is fraught with danger, but without better ways to pick between two great teams, it's as good as any other.