Colley's Bias Free Matrix Rankings: An Official Bowl Championship Series Ranking for 2001–2010
Advantages in this Method
First and foremost, the rankings are based only on results from the field, with absolutely no influence from opinion, past
performance, tradition, or any other biasing factor. This
is why there is no pre-season poll here. All teams are assumed
equal at the beginning of the year. If you include some kind of
human input, what's the point of a computer poll in the first
place? Garbage in, garbage out.
NOTE: Bear in mind that because there is no pre-season poll, the early
rankings will not look much like the press polls. The rankings are
based on results so far within
the season of play.
Second, strength of schedule has a strong influence on the final
ranking. Padding the schedule wins you
very little.
For instance, Wisconsin with 4 losses
finished the 2000 season well ahead of TCU with only 2
losses. That's because Wisconsin's Big 10 schedule was much,
much more difficult than TCU's WAC schedule.
Third, as with the NFL, NHL, NBA, and Major League Baseball, score margin does not matter at all in
determining ranking, so winning big, despite influencing
pollsters, does not influence this scheme. The object of
football is winning the game, not winning by a large margin. Other games have other metrics: in golf we have strokes; in Texas hold 'em we have winnings; in NASCAR we have points standings; but in football, we have one simple overriding metric: did you win the football game?
Ignoring margin of victory eliminates the need for ad
hoc score deflation methods and home/away adjustments. If
you have to go to great lengths to deflate scores, why use
scores?
What about home/away? Though reasonable arguments can be made
for a home/away factor, I do not know of a simple, mathematically
consistent means of rating the relative difficulty of playing at
the Swamp vs. playing at Wallace-Wade Stadium. The home
advantage for some teams is simply more than it is for others.
There are further complicating factors, such as home weather for a
northern team in November vs. home weather for a southern team in August.
Even the pollsters seem to forgive or forget big scores or
surprisingly close scores, home or away, after a few weeks.
Usually, after a few weeks, a W is a W and an L is an L, as it
should be anyway.
Fourth,
in this method, only very simple statistical principles, with absolutely no fine tuning or ad hoc adjustments, are
used to construct a system of 120 equations in 120 variables,
representing each team according only to its wins and losses
(see Ranking Method). The computer
simply solves those equations to arrive at a rating (and ranking)
for each team.
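To make the idea concrete, here is a minimal sketch of that linear system on a hypothetical 4-team round robin (the team names and game results are invented for illustration). For team i with n_i games, w_i wins, and l_i losses, each equation has the form (2 + n_i) r_i - sum_j n_ij r_j = 1 + (w_i - l_i)/2, where n_ij counts games between teams i and j; the real system is the same thing with 120 teams.

```python
# Sketch of the Colley linear system on a hypothetical 4-team round robin.
import numpy as np

teams = ["A", "B", "C", "D"]
idx = {t: k for k, t in enumerate(teams)}
# Hypothetical results, listed as (winner, loser):
games = [("A", "B"), ("A", "C"), ("A", "D"),
         ("B", "C"), ("B", "D"), ("C", "D")]

n = len(teams)
C = 2.0 * np.eye(n)   # every diagonal entry starts at 2
b = np.ones(n)        # every right-hand side starts at 1
for w, l in games:
    i, j = idx[w], idx[l]
    C[i, i] += 1.0    # each game raises both teams' diagonals...
    C[j, j] += 1.0
    C[i, j] -= 1.0    # ...and couples the two opponents
    C[j, i] -= 1.0
    b[i] += 0.5       # winner's right-hand side goes up,
    b[j] -= 0.5       # loser's goes down

r = np.linalg.solve(C, b)   # ratings; they always average 0.5
for t in sorted(teams, key=lambda t: -r[idx[t]]):
    print(f"{t}: {r[idx[t]]:.3f}")
```

Note that wins and losses only set the right-hand side; strength of schedule enters entirely through the off-diagonal coupling terms, which is why beating strong opponents raises a rating more than beating weak ones.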
Fifth, comparison between this scheme
and the final press polls (1998, 1999, 2000, 2001, 2002) shows that the scheme
produces sensible results.
The fractional ranking discrepancy between my system and the
polls varies between 0.200 and 0.298
in all cases, typically about a quarter. In other words, a typical
ranking difference between my poll and either press poll would be
around 1 place at a ranking of #4, and around 5 places at #20.
The discrepancy between the press polls themselves varies between 0.037 and 0.095 over
those years, so one might expect, for a given team, a ranking
disagreement of about 1 within the top 15.
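The arithmetic behind those place counts is simple enough to sketch: a fractional discrepancy d translates into an expected disagreement of roughly d times the rank. The d values below are just the figures quoted above (the press-vs-press value is the rough midpoint of the quoted range).

```python
# A fractional ranking discrepancy d implies an expected disagreement
# of roughly d * rank places for a team ranked at `rank`.
def expected_places(d, rank):
    return d * rank

# Computer poll vs. press polls: d is typically about a quarter.
print(round(expected_places(0.25, 4)))    # about 1 place at #4
print(round(expected_places(0.25, 20)))   # about 5 places at #20
# Press poll vs. press poll: d roughly 0.037-0.095, midpoint ~0.066,
# giving about 1 place of disagreement within the top 15.
print(round(expected_places(0.066, 15)))
```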
So the Coaches and AP pollsters agree with each other about 4
times better than they agree with me. However, one would expect
that the Coaches and AP polls agree very well for a very simple
artificial reason. The coaches read the AP poll, and the media
voters read the Coaches' poll, so there is statistical feedback
between the two polls, right or wrong. A computer scheme, like
this one, does not read other polls.
The bottom line in these comparisons is that my rankings have
agreed in all nine years with the media and coaches on the
national champion,
agreed with the media and coaches on the top two teams in 8 of 9 years,
most often agreed on the top 5, and
agreed on the top 10 within a place or two,
which I call a remarkable success, given the radically
different ranking systems in question. Since we don't
really know the "true" rankings of the teams, we cannot, in fact,
say whether the media polls or my rankings are better, but the
fact that the agreement is, in practice, quite good provides reason
to believe that neither is totally out to lunch.
So, here we have a scheme to rank college football teams that is
absolutely free from human influence or opinion, accounts for
schedule strength, ignores runaway scores, and yet produces
common sense results, which at the end of the season compare
very favorably with human rankings (and other computer rankings). What else do you want?