An article in The Toronto Star recently made the case that Team USA brought the wrong goalies to the Sochi Olympics.
Its argument wasn't that the goalies the team brought were inferior, but that they were too consistent. The Americans, as underdogs in the tournament, don't necessarily want consistency, the article argued -- they should favor the sometimes-great-and-sometimes-mediocre over the consistently good, because consistently good might not be enough to beat a superior team.
There are some interesting ideas to explore there. Given two goalies of equal talent, it's probably true that the less-skilled team should prefer the more variable goalie, the one who might have an amazing game and steal a win for them.
But how strong should that preference be? Should they be willing to take a goalie who stops 0.1% fewer shots overall if he has more of that variability that they crave? 0.5%?
Unfortunately, this is where The Star's method fell short. They didn't separate the goalie's variability from his ability at all -- they just asked "how often did this goalie have a great game?", which conflates the two.
In fact, the goalies they ended up preferring -- Bishop and Schneider -- actually had less variable save percentages over the period they studied (last year and this year). Their thesis implied that Team USA should sacrifice a bit of talent for variability, but they ended up making the opposite choice!
Let's see what we can figure out about how important goalie variance really is.
An average team
For starters, let's discuss the role of variance for an average team.
Suppose we have a team with an average offense and an average defense. We can calculate how often we'd win games where our goalie posted a certain save percentage (see sidebar).
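The sidebar's exact model isn't reproduced here, but a minimal sketch of that game-level calculation might look like the following. It assumes Poisson-distributed goal totals and round-number league rates (about 30 shots against and 2.7 goals scored per game) -- those rates are my assumptions, not the sidebar's actual inputs.

```python
import math

def poisson_pmf(lam, k):
    """P(X = k) for X ~ Poisson(lam)."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def win_prob(save_pct, shots_against=30.0, goals_for=2.7, max_goals=15):
    """Chance an average team wins a game in which its goalie stops
    `save_pct` of the shots. Both goal totals are modeled as Poisson,
    and a tie counts as half a win (an overtime coin flip). The shot
    and scoring rates are round-number assumptions."""
    goals_against_rate = shots_against * (1.0 - save_pct)
    p_win = 0.0
    for gf in range(max_goals + 1):
        p_gf = poisson_pmf(goals_for, gf)
        for ga in range(max_goals + 1):
            p = p_gf * poisson_pmf(goals_against_rate, ga)
            if gf > ga:
                p_win += p
            elif gf == ga:
                p_win += 0.5 * p
    return p_win

print(f"{win_prob(0.913):.3f}")
```

With these toy rates the number won't match the sidebar exactly, but the shape of the calculation is the same: fix a save percentage, derive an expected goals-against rate, and sum over the possible scorelines.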
So then we can think about the role of variance. If our goalie stops 91.3 percent of the shots he faces in every game, our team will win half the games. What if he performs like a more typical .913 goalie, with some good games and some bad ones?
To answer that properly, we need to know what a typical .913 goalie's distribution of results looks like. Over the last three years, eleven goalies have had a save percentage between .912 and .914 over a minimum of 20 games in a season. I pulled their results -- a total of 465 games -- and used that as the expected distribution for a generic .913 goalie.
So, for example, the distribution tells us our run-of-the-mill .913 goalie will post a save percentage between .880 and .895 in 34 of 465 games, or 7.3 percent of the time.
From our calculations, we find that a save percentage in that range leads to a win for our average team about 35 percent of the time. So we expect our goalie to win 35 percent of those 34 games, or about 12 of them.
Running through the same sort of arithmetic for all 465 appearances, we end up with an estimate that this typical goalie would win 239 of 465 games, or 51.4 percent -- about a game per season more than the impossibly consistent goalie who was always at .913.
Goaltender variance is actually helping our average team win more games.
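The weighting arithmetic above can be sketched like this. The bin counts and win probabilities below are invented stand-ins for the real 465-game sample -- only the 34-game .880-.895 bin and its 35 percent win rate echo the text.

```python
# Hypothetical bins: (save% range, games in bin, win probability for an
# average team at that level). Only the .880-.895 row comes from the text;
# the rest are made-up numbers for illustration.
bins = [
    (".850-.880", 30, 0.10),
    (".880-.895", 34, 0.35),
    (".895-.925", 180, 0.50),
    (".925-.945", 170, 0.60),
    (".945-1.00", 51, 0.80),
]

total_games = sum(games for _, games, _ in bins)
expected_wins = sum(games * p_win for _, games, p_win in bins)

print(f"{expected_wins:.1f} wins in {total_games} games "
      f"({expected_wins / total_games:.1%})")
```

The real calculation just uses finer bins and empirically derived win probabilities; the sum-of-products structure is identical.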
The reason is that the distribution is left-skewed. When we move from the always-.913 goalie to the more realistic distribution of results, we aren't just symmetrically adding games a couple ticks above or below .913; we're adding a bunch of games slightly above and an occasional game way below.
An average team wins more than half its games if it has four games at 94 percent and one at 80 percent, so spreading the results out like this works in its favor.
The more variable the goalie's performances are (at least within reason), the better the team does. If I take each game's save percentage and move it 30 percent farther from average, making the good games better and the bad games worse, the expected win percentage rises still further, to 52.2 percent.
It's not a large change -- picking a goalie with a save percentage 0.1 percent higher would have a similar impact -- but at least in principle, goaltender variance seems to help our average team win slightly more often.
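That "stretch" operation is easy to write down. Here is a sketch, using a made-up handful of game save percentages that average out to .913 (not real data):

```python
def stretch(sv_pcts, factor=1.3):
    """Move each game's save percentage `factor` times as far from the
    average, making the good games better and the bad games worse while
    leaving the mean unchanged."""
    mean = sum(sv_pcts) / len(sv_pcts)
    return [mean + factor * (s - mean) for s in sv_pcts]

# Made-up five-game log averaging .913.
games = [0.880, 0.900, 0.913, 0.930, 0.942]
stretched = stretch(games)
print([round(s, 4) for s in stretched])
```

Because the transformation is linear around the mean, the goalie's overall save percentage is untouched; only the game-to-game spread grows.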
An underdog team
What if we repeat the calculations for a team that's actually an underdog?
We'll keep the shots against the same, but make the offense worse. Dropping their scoring by 15 percent lowers them to about 2.3 goals per game, on a par with Calgary, New Jersey, or Florida. We'd expect this team to finish somewhere around 80 points per year; they're not a good team.
With a perfectly consistent .913 goalie, they'll win about 41.4 percent of their games. With a more realistic .913, they'll win about 44.3 percent, and with our super-variable .913, they'll win about 46.1 percent.
Variability is a bit more than twice as important for this team, but that's still a tiny factor -- even our super-variable goalie's unreasonably large spread is only as important as an increase in save percentage of about two tenths of a percent.
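A toy Poisson model makes the underdog comparison concrete. The rates here -- about 30 shots against and 2.7 goals per game for an average offense -- are my round-number assumptions, not the article's inputs, so the exact percentages will differ from those above; the point is only the direction of the change when scoring drops 15 percent.

```python
import math

def poisson_pmf(lam, k):
    """P(X = k) for X ~ Poisson(lam)."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def win_prob(goals_for, save_pct, shots_against=30.0, max_goals=15):
    # Poisson goal totals both ways; a tie counts as half a win.
    goals_against_rate = shots_against * (1.0 - save_pct)
    p_win = 0.0
    for gf in range(max_goals + 1):
        for ga in range(max_goals + 1):
            p = poisson_pmf(goals_for, gf) * poisson_pmf(goals_against_rate, ga)
            if gf > ga:
                p_win += p
            elif gf == ga:
                p_win += 0.5 * p
    return p_win

average = win_prob(2.7, 0.913)          # average offense
underdog = win_prob(2.7 * 0.85, 0.913)  # scoring cut 15%, about 2.3 goals/game
print(f"average {average:.3f}, underdog {underdog:.3f}")
```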
So while it's interesting to note that goalie consistency does not seem to be a desirable feature for an average or inferior team, I don't think variability should ever be the first or even the second thing a team looks at when choosing a goalie.
The effect is small in general, and goalie results come close enough to matching a simple coin-flip model that it's hard to be sure a goalie's past consistency or inconsistency tells us anything about how consistent he'll be going forward. Some goalies might be more consistent than others, but the differences are small enough that we'll have a hard time reliably identifying them.
There's a good case to be made that the USA should have brought Cory Schneider to the Olympics, but it's because he's performed really well, not because he's performed variably.