Below are this week’s new S&P+ rankings.
A reminder: S&P+ is intended to be predictive and forward-looking.
Good predictive ratings are not résumé ratings, and they don’t give you bonus points for wins and losses. They simply compare expected output to actual output and adjust accordingly. That’s how a given team can win but plummet or lose and move up.
Through seven weeks, the S&P+ rankings are performing well, hitting 54 percent against the spread and 53 percent on the over/under point totals for the year.
As you would hope, the absolute error — the average size of miss between projection and reality — has settled into a healthy area as well.
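For anyone curious what those two metrics mean mechanically, here's a minimal sketch. The games and spreads below are invented for illustration, not actual S&P+ output:

```python
# Sketch: how an against-the-spread hit rate and absolute error are computed.
# The numbers below are made up for illustration.
games = [
    # (projected margin, actual margin, closing spread)
    (7.5, 10, 6.0),
    (-3.0, -14, -2.5),
    (21.0, 17, 24.0),
    (4.0, -3, 3.0),
]

# ATS: the projection "hits" when it lands on the same side of the
# spread as the actual result.
ats_hits = sum(
    1 for proj, actual, spread in games
    if (proj - spread) * (actual - spread) > 0
)
ats_pct = ats_hits / len(games)

# Absolute error: the average size of the miss between projection and reality.
abs_error = sum(abs(proj - actual) for proj, actual, _ in games) / len(games)
```

With these toy numbers, three of four projections land on the right side of the spread, and the projections miss actual margins by about six points on average.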
If you’re interested in a decent résumé ranking of sorts, I encourage you to visit this post on strength of schedule. I created a Resume S&P+ ranking and will be updating it on Mondays throughout the rest of the season.
Below, however, are the predictive ratings, the actual S&P+.
(You can find full unit rankings, plus a yearly archive, at Football Outsiders. The offense and defense pages will start getting updated in the coming weeks.)
2018 S&P+ rankings through 7 weeks
| Team | Rec. | S&P+ Rating | S&P+ Rank | Last Wk | Change |
|------|------|-------------|-----------|---------|--------|
| San Diego State | 5-1 | 4.1 | 51 | 54 | 3 |
| New Mexico State | 2-5 | -14.9 | 122 | 118 | -4 |
| San Jose State | 0-6 | -18.1 | 126 | 123 | -3 |
A quick reminder:
As non-conference play ends and conference play begins, the scoring margins tend to get closer on average. As a result, the overall spread of S&P+ ratings — which is distributed along the bell curve for scoring margins — tends to get smaller, too.
You’ll notice that Alabama’s S&P+ rating fell from plus-29.6 adjusted points per game to plus-27.0 despite obliterating Missouri on Saturday. That “fall” is a product of the scoring adjustment, not anything the Tide did on the field. Their percentile rating improved from 99.3 percent to 99.4 this week.
Because of this, you’ll also notice that all the top conferences’ average ratings fell, while all the bottom conferences rose. Same concept there.
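The rating-versus-percentile distinction is easy to see with a quick sketch. Assuming ratings sit on a roughly normal scoring-margin curve (the mean and standard deviations below are invented, not the actual S&P+ parameters), a narrower curve means the same percentile corresponds to a smaller raw rating:

```python
from math import erf, sqrt

def percentile(rating, mean=0.0, sd=12.0):
    """Percentile of a rating on a normal scoring-margin curve.

    The mean and sd here are illustrative, not the actual S&P+ parameters.
    """
    return 0.5 * (1 + erf((rating - mean) / (sd * sqrt(2))))

# When conference play compresses the curve (smaller sd), a team's raw
# rating can "fall" in points while its percentile holds steady or improves.
wide = percentile(29.6, sd=12.0)    # early-season, wider spread
narrow = percentile(27.0, sd=10.8)  # later, tighter spread
```

Under these made-up parameters, the lower raw rating on the tighter curve actually sits at a slightly higher percentile, which is exactly the Alabama effect described above.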
OK, now that that’s out of the way ...
Actual movement at/near the top!
For six weeks, things were orderly, at least by college football’s standards. Alabama was No. 1, three teams traded the No. 2-4 spots (Ohio State, Clemson, Georgia), Notre Dame began to rise when Ian Book took over at quarterback, the top 10 was filled in by a reliable set of names, etc.
Week 7 shook all of that up. Alabama’s still No. 1 for very obvious reasons, and Clemson timed its bye week pretty well, avoiding all the chaos to remain No. 2. But Georgia laid an egg at LSU, Ohio State looked as mediocre as possible against Minnesota, and Notre Dame tried to lose to Pitt. The result: a mini-shakeup!
- Oklahoma had the best bye week ever, jumping from sixth to third. And I promise there was no “They fired Mike Stoops” adjustment in the algorithm.
- Michigan, fresh off of its domination of Wisconsin, leap-frogged still-unbeaten Ohio State for fourth.
- Also, a non-shakeup shakeup of sorts: Georgia fell only to sixth because almost nobody in the No. 7-12 range took advantage of the opportunity to move up. Penn State and Washington remained basically where they were despite losing, and UCF did as well despite nearly losing.
The week’s top movers (good)
- Louisiana Tech (up 19 spots, from 90th to 71st)
- Marshall (up 14 spots, from 74th to 60th)
- Maryland (up 13 spots, from 68th to 55th)
- Miami (Ohio) (up 13 spots, from 79th to 66th)
- Buffalo (up 12 spots, from 62nd to 50th)
- EMU (up 12 spots, from 82nd to 70th)
- Duke (up 10 spots, from 42nd to 32nd)
- UAB (up 10 spots, from 59th to 49th)
- Seven teams up nine spots
C-USA teams have been crazy-volatile this year, but I wanted to point out that UAB, not even two years back from the dead, is a top-50 team. Damn, Bill Clark.
Top movers (bad)
- Arizona (down 17 spots, from 70th to 87th)
- USF (down 14 spots, from 27th to 41st)
- Missouri (down 14 spots, from 23rd to 37th)
- WVU (down 13 spots, from 11th to 24th)
- Cal (down 12 spots, from 53rd to 65th)
- Wyoming (down 11 spots, from 88th to 99th)
- Northwestern (down 10 spots, from 58th to 68th)
- TCU (down 10 spots, from 36th to 46th)
- Four teams down nine spots
WVU’s offense was on the field for just 42 snaps and had a 29 percent success rate in the Mountaineers’ 30-14 loss at Iowa State. Just four of Will Grier’s 22 passes resulted in successful plays. I’m honestly not sure how ISU’s defense improved only from 33rd to 30th in Def. S&P+, but WVU damn near fell all the way out of the top 25.
FBS conferences, ranked by average S&P+ rating:
- SEC (plus-9.8 adjusted points per game, down 1.9 points)
- Big 12 (plus-6.0, down 1.2)
- Big Ten (plus-5.8, down 1.1)
- Pac-12 (plus-4.2, down 0.6)
- ACC (plus-4.0, down 0.6)
- AAC (minus-0.2, same)
- Mountain West (minus-2.5, down 0.1)
- Sun Belt (minus-5.2, up 1.6)
- Conference USA (minus-6.7, up 1.5)
- MAC (minus-6.7, up 1.5)
Again, the scoring curve is the primary reason for the top conferences falling and the bottom conferences rising, but there was still movement within this movement.
Another reminder: I have made a few philosophical changes in this year’s S&P+ rankings.
When I get the chance (so, maybe in the offseason), I will update previous years of S&P+ rankings to reflect these formula changes, too.
- I changed the garbage time definition. S&P+ stops counting the major stats once the game has entered garbage time. Previously, that was when a game ceased to be within 27 points in the first quarter, 24 in the second, 21 in the third, and 16 in the fourth. Now I have expanded it: garbage time adjustments don’t begin until a game is outside of 43 points in the first quarter, 37 in the second, 27 in the third, and 21 in the fourth. That change came because of a piece I wrote about game states at Football Study Hall.
- Preseason projections will remain in the formulas all season. Fans hate this — it’s the biggest complaint I’ve heard regarding ESPN’s FPI formulas. Instinctively, I hate it, too. But here’s the thing: it makes projections more accurate. Our sample size for determining quality in a given season is tiny, and incorporating projection factors found in the preseason rankings decreases the overall error in projections. So I’m doing it.
- To counteract this conservative change, I’m also making S&P+ more reactive to results, especially early in the season. If I’m admitting that S&P+ needs previous-year performances to make it better, I’m also going to admit that S&P+ doesn’t know everything it needs to early in a season, and it’s going to react a bit more to actual results.
Basically, I’ve added a step to the rankings process: after the rankings are determined, I go back and project previous games based on those ratings, then adjust the ratings based on how well the projections fit (or don’t fit) the actual results.
The adjustment isn’t enormous, and it will diminish as the season unfolds.
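A minimal sketch of what a retro-fit step like that could look like. The blend weight, the miss-splitting scheme, and the numbers are all invented; the actual S&P+ machinery isn't public:

```python
# Sketch of a retro-fit step: project past games from the current ratings,
# measure the miss, and nudge each team's rating toward what the results
# implied. The weight and data here are made up for illustration.
def retro_adjust(ratings, games, weight=0.2):
    """ratings: {team: rating}; games: list of (home, away, actual_margin)."""
    adjusted = dict(ratings)
    for home, away, actual in games:
        projected = ratings[home] - ratings[away]  # ignoring home field
        miss = actual - projected
        # Split the miss between the two teams, scaled down by the weight
        # so the adjustment stays modest.
        adjusted[home] += weight * miss / 2
        adjusted[away] -= weight * miss / 2
    return adjusted

ratings = {"A": 10.0, "B": 2.0}
# Projected margin is 8; the actual margin of 18 nudges A up and B down.
new = retro_adjust(ratings, [("A", "B", 18)])
```

Shrinking `weight` over the course of the season would match the note above that the adjustment diminishes as the sample grows.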
Testing this process for past seasons improved performance against the spread a little and, more importantly, decreased absolute error (the difference between projections and reality) quite a bit. I wouldn’t have made the move if it didn’t appear to improve performance.
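Returning to the garbage-time change for a moment: the quarter-by-quarter thresholds amount to a simple lookup, sketched here against the expanded numbers given above.

```python
# Sketch of the expanded garbage-time definition: a game is in garbage time
# when the scoring margin exceeds the threshold for the current quarter,
# and major stats stop counting from there.
GARBAGE_THRESHOLDS = {1: 43, 2: 37, 3: 27, 4: 21}

def in_garbage_time(quarter, margin):
    """True if the current margin puts the game in garbage time."""
    return abs(margin) > GARBAGE_THRESHOLDS[quarter]
```

For example, a 24-point lead in the fourth quarter now counts as garbage time (24 > 21), while the same lead in the third quarter does not (24 ≤ 27).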