9 potential problems with the College Football Playoff selection committee

The Playoff committee's first official set of rankings arrives October 28. We already have gripes.

1. Is a weekly top 25 necessary to finish with the best ranking?

The committee will release its first top 25 on October 28, with a new list weekly thereafter. The final one, the only one that decides the Playoff and the New Year's bowls, arrives December 7. So why create all the ones before it?

The answer's simple. ESPN owns the broadcast rights to the College Football Playoff and the four other major bowls selected by the committee. The network will reveal those top 25s on air each Tuesday night. A weekly top 25 is good business for all, including SB Nation, which will happily cover those rankings.

But when it comes to the quality of those rankings, think about confirmation bias. It's not easy to start your mind over from scratch each week.

The committee's protocol says, "Committee members will be required to discredit polls wherein initial rankings are established before competition has occurred." That's aimed at preseason polls, which allow voters to later slide teams around based on preconceptions. But an in-season ranking is established before all competition has occurred. November preconceptions can be just as faulty as August ones.

Is there any evidence that subjecting preliminary rankings to scrutiny, paranoia, and feedback from angry fans will make for a more objective final list?

2. The committee's membership is its own list of concerns.

With former Ole Miss quarterback Archie Manning out due to health reasons, here are the committee's 12 members:

  • Chairman Jeff Long, Arkansas athletic director
  • Barry Alvarez, Wisconsin athletic director and former head coach
  • Mike Gould, former Air Force superintendent
  • Pat Haden, USC athletic director
  • Tom Jernstedt, former NCAA executive
  • Oliver Luck, West Virginia athletic director
  • Tom Osborne, former Nebraska head coach and athletic director
  • Dan Radakovich, Clemson athletic director
  • Condoleezza Rice, former secretary of state
  • Mike Tranghese, former Big East commissioner
  • Steve Wieberg, former USA Today reporter
  • Ty Willingham, former FBS head coach

Six current or former ADs; no dedicated statistical analysis experts. You'd prefer six stats people and maybe an AD or two, not the other way around, right?

And the recusal list, which names the teams certain members aren't allowed to discuss or vote on ...

  • Air Force - Mike Gould
  • Arkansas - Jeff Long
  • Clemson - Dan Radakovich
  • Nebraska - Tom Osborne
  • USC - Pat Haden
  • Stanford - Condoleezza Rice
  • West Virginia - Oliver Luck
  • Wisconsin - Barry Alvarez

... shows seven have immediate ties to power-conference programs (Rice has taught classes at Stanford), with at least one each for all five power leagues. Only one, Gould, has a non-power school listed, and he's a retired non-AD. That matters for a lot of obvious reasons, as sort of spelled out by Luck:

"I think it makes a lot of sense to ask Barry Alvarez, 'Hey, you guys played Michigan last week, tell us what you think. Tell us what your coaches said.' I think it's an asset to listen to Pat Haden talk about a Pac-12 team."

If the fourth Playoff spot were to come down to either a 12-0 Marshall (an imperfect example with a weak schedule, but roll with it) or a two-loss SEC West team, members could ask Long how that SEC team looked in its win against Arkansas. Whose coaches know much about mid-majors?

(Rice's inclusion also raises another issue; no, not the sexist one.)

3. Just like the BCS, it won't use margin of victory.

How do you rank teams? It's not just deciding which are better. It's deciding how much things like win-loss records, head-to-head records, and strength of schedule matter. The committee, perhaps to its credit, codified those "criteria," with the following as tie-breakers between similar teams:

  • Championships won
  • Strength of schedule
  • Head-to-head competition (if it occurred)
  • Comparative outcomes of common opponents (without incenting margin of victory)

The first is straightforward. The second ... we'll get to in a moment. The third makes sense. But the fourth is an anti-stats flaw held over from the BCS:

"Nothing correlates as well to winning percentage as MOV, but that isn't politically correct."

Football is a man's man's game. It's about grit and fortitude, leadership and sacrifice. It's a battle. It's a war. No sensitive souls need apply.

But my goodness, if you don't call off the dogs in the fourth quarter, somebody might get their feelings hurt! And we can't take that chance!

4. No one knows what "strength of schedule" means.

Members have said since the beginning that SOS will be a major factor. Every time two power schools announce a home-and-home series beginning in like 2039, some analyst says the Playoff's encouraging teams to schedule tough.

And then there was this, on the only strength-of-schedule stat members are able to use:

"They separate out wins against teams with records above .500 and losses to those below .500."

When you're trying to differentiate Team A from Team B, your eye naturally gravitates toward whatever information you're given. Because it's available, the committee will look at the "above .500" and "below .500" records.

And they'll be using a number that values a win over a 7-5 team (by one point or by 50) the same as a win over a 12-1 team (by one or by 50). If you lose to a team that went 5-7 (by one or by 50), that's bad. If you lose to a team that went 7-5 (by one or by 50), that might be OK.

This might be even worse than simply providing a team's record.

There's no perfect way to rank schedules. All methods disagree at least a little bit.

But try to come up with a worse way to figure it out than assigning the same value to beating 2013 Florida State and beating 2013 UNLV, both of which finished above .500.

Committee members will be free to make up their own minds. So why bother with such a useless set of numbers? Does the choice to use those numbers suggest an unfamiliarity with quality data?
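To see just how little the binary cutoff distinguishes, here's a minimal sketch. The metric itself is an assumption inferred from the ".500" description quoted above, not the committee's actual formula; the records are final 2013 marks.

```python
# Sketch of the ".500 cutoff" credit described above (an assumed
# reconstruction, not the committee's actual formula).
def above_500_credit(opponent_wins, opponent_losses):
    """Return 1 if a win over this opponent counts as a win over an
    above-.500 team, 0 otherwise -- the only distinction the cutoff
    allows, regardless of opponent quality or margin."""
    return 1 if opponent_wins > opponent_losses else 0

# 2013 Florida State finished 14-0; 2013 UNLV finished 7-6.
fsu_credit = above_500_credit(14, 0)
unlv_credit = above_500_credit(7, 6)

print(fsu_credit == unlv_credit)  # True: identical credit for both wins
```

Under this cutoff, beating the eventual national champion and beating a 7-6 team are literally the same data point.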

5. Are we sure conference titles should mean bonus points?

The committee says it wants "an emphasis on winning conference championships." This is in part because 2011 non-conference-champion Alabama reached the BCS title game over conference-champion Oklahoma State, making everyone mad and hastening the Playoff's arrival. I don't know which team deserved to go.

The committee also says it wants "enough flexibility and discretion to select a non-champion or independent under circumstances where that particular [team] is unequivocally one of the four best teams in the country."

"Unequivocally" is key. When it gets down to team No. 4 vs. team No. 5 (likely the only tough call the committee will have to make, based on history), the one without a conference title could be out.

Why should winning a conference championship be weighted comparably to playing a tougher schedule? If a conference championship game is a quality victory we can add to a team's resume, shouldn't it already be included in that team's SOS metric, rather than treated as more informative than any other big game? Oh right, there is no SOS metric.

A conference championship is a trophy awarded to the winningest school in a group of schools that joined together at some point to pool revenue. A trophy is easy to point to as evidence of a team's quality. But they give out rivalry trophies and Chick-fil-A Kickoff trophies, too. Should those factor in?

Each team enters each season hoping for a conference title. Winning one is its own reward. It's hard to see how winning one is evidence that a team is one of the four "best" (the committee's ideal), though winning one could be evidence of a team being one of the four "most deserving" (not the committee's ideal).

6. Do losses degrade over time?

You know that observation about it being better to lose early in the season than to lose late? It's a myth. The polls more or less rank teams according to wins, with some fiddling based on strength of schedule. The month in which a team loses usually doesn't matter much.

Well, the committee could make that myth real on purpose:

Among some of the more curious things Osborne said Wednesday, one was that the ever-ambiguous metric of "momentum" would be considered by the committee.

"We want an emphasis of where those teams are at that moment," Osborne said.

Two similar losses being different just because of their dates seems asinine.

7. Do teams get demerits for injuries?

The committee will consider "other relevant factors such as key injuries that may have affected a team's performance during the season or likely will affect its postseason performance." That "postseason performance" part hasn't been explained much, but it sounds troublesome.

One speculative example some have thrown around is 1998's BCS title game, in which a Florida State team without injured quarterback Chris Weinke lost to Tennessee. Since Weinke's November injury lowered expectations of an FSU championship, would the committee have picked one of the five other one-loss power-conference teams as the No. 2 seed? If it had, it would've passed over a team that, even without Weinke, had a chance to tie the game with 89 seconds left.

One helpful way to factor injuries would be to, say, give 2014 Ohio State full credit for beating a Cincinnati team that still had its soon-to-be-injured starting quarterback. That win looked better at the time than it would over the following weeks. But that doesn't have much to do with injuries "that likely will affect" a team's "postseason performance."

A team's resume is its resume. Any ranking that attempts to guess how a team would perform in the Playoff, rather than guessing which teams have earned the right to play in it, is playing football god.

8. Do teams get bonus points for scheduling teams that used to be good?

"I think a lot of it is your intent to play a strong schedule in your non-conference. It's pretty easy for me to take a look at a schedule and see what the intent of the schedule is," Alvarez said months ago, reiterating in October.

At any school, one of the athletic director's critical jobs is scheduling future opponents. That requires balancing finances, easy wins that will keep the coach happy, and marquee games that will make fans and TV happy.

So it makes sense that Alvarez, an AD, sympathizes with "the intent" of a team's schedule. The implication is that if an AD tried to assemble a tough schedule, but the schedule ended up being not so tough, the team should get an E for the AD's effort. How was Oklahoma supposed to know in 2005 that Tennessee wouldn't count as an impressive opponent in 2014? The Vols had been 20-6 in 2003 and 2004!

But that doesn't matter. A team's opponents are its opponents. If it plays bad teams, so be it; it must beat those teams handily (without, uh, incenting margin of victory). Giving extra credit for beating a team with some big bowl wins a decade prior and a famous logo is bad evaluation.

Half of the committee's members are current or former ADs. While at least one, Luck, doesn't agree with Alvarez, this flawed thinking reveals the risk in including so many people who have the same job.

9. The lack of transparency is weird, at least.

The committee won't release rankings by each of its members, instead having Long explain the group's top 25.

That's not necessarily a problem, though analyzing individual ballots for whiffs of bias would get several fan bases through entire offseasons.

But it is odd, considering the number of times BCS-turned-Playoff exec Bill Hancock has touted the process' transparency. The committee's protocol says, "Polls that are taken into consideration by the selection committee must be completely open and transparent to the public." Shouldn't that include the committee's own rankings from the week prior?