Wednesday, October 14, 2009

Poll Dancing, by Chris Miller

What’s the best way to rank college football teams?

It has been assumed and accepted for some time now that the AP and USA Today Coaches' polls are the gold standard for college football rankings. Regardless of your opinion on playoff vs. BCS (I'm pro-playoff myself), either system requires some sort of rankings to determine the seeding. So which poll really is the best? Is there a best way to objectively rank something so subjective, given that college football can never be a round-robin format? There are now alternative methods of ranking teams around the web, and I'll take a look at a few of them.

Mumme Poll (http://mummepoll.3sib.com/index.php)

The brainchild of two different college football blogs, the Mumme Poll is named after Hal Mumme. Mumme is well known for multiple reasons: (a) he's the former Kentucky Wildcats coach responsible for Tim Couch; (b) he sits atop a coaching tree consisting of Mike Leach, Tony Franklin, Sonny Dykes, and Chris Hatcher; (c) he abused his privilege to vote in the coaches' poll by voting Hawaii #1 in the final coaches' poll of 2007. Thus, the Mumme Poll.

How is it different from other polls? Glad you asked:

1. The first official ballots aren't cast until after the week 6 games.
2. Teams aren't ranked in order; pollsters merely select their top 12 teams, and teams are then ranked by the number of ballots they appear on.
3. A pollster suspected of bucking the system with outrageous rankings (see: Mumme) will be tossed from the voting.
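The ballot mechanics above reduce to a simple tally: count how many ballots each team appears on, nothing more. Here's a minimal Python sketch of that counting; the function name and the three abbreviated sample ballots are purely illustrative (real ballots would list 12 teams each):

```python
from collections import Counter

def mumme_rank(ballots):
    """Rank teams by how many top-12 ballots they appear on.

    ballots: a list of sets, each one voter's unordered top 12.
    Appearance on a ballot is binary, so order within a ballot
    never matters; ties in the tally are left unbroken here.
    """
    counts = Counter(team for ballot in ballots for team in ballot)
    return [team for team, _ in counts.most_common()]

# Three hypothetical ballots, truncated to 3 teams apiece for brevity
ballots = [
    {"Alabama", "Florida", "Texas"},
    {"Alabama", "Florida", "LSU"},
    {"Alabama", "Texas", "Virginia Tech"},
]
print(mumme_rank(ballots)[0])  # Alabama, the only team on all three
```

Because a team either makes a ballot or it doesn't, there's no way for a single voter's ordering stunt (Hawaii #1, say) to move a team more than one appearance in the tally.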

Is it reliable/valid?

There is limited evidence so far, as 2008 was the poll's inaugural season. However, the final 2008 poll had Florida #1, so take that for what you will. After week 5 of the 2009 season, the top 5 in the Mumme Poll:

1. Alabama
2. Florida
3. Texas
4. LSU
5. Virginia Tech

This is very similar to the AP and coaches' polls, except for Alabama at the top and Texas at #3. Based on the ballot style and the comparisons to current and past polls, the Mumme Poll looks to be reliable both week to week and at the end of the season, without the mainstream polls' tradition of keeping a team at its current ranking until it loses.

Summary

Advantages
o Pollsters mostly fans, who watch many games each weekend
o Over 300 voters currently (statistically, a sufficient sample size for a good study is > 30)
o Each pollster votes on top 5, then next 7 (top 12 total), in no particular order
o Rankings based on presence in top 12 or not (binary, so less room for gray areas)
o Subjective rankings (can weed out smaller, undefeated teams)
o Fluid (#1 not stuck there until they lose)

Disadvantages
o Fans are biased
o Subjective rankings (may be influenced by program tradition and/or media)

Who-beat (http://www.whobeat.net/)

Who-beat (est. 1995) is exactly what it sounds like: a rankings system based simply on who you have beaten and who those teams have beaten. It uses only wins to evaluate teams, so before the season, each team (even Syracuse) is ranked #1.

How it works:
1. Team A beats Team B and Team A earns 1 win
2. Team B beats 3 teams (3 more wins for Team A)
3. Those 3 teams beat a total of 15 teams (15 more wins for Team A)
4. Team A's total for that week is 19 wins (1 + 3 + 15)
5. #1 team in the rankings has the most total wins
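The five steps above amount to counting every win reachable through a chain of victories. Here's a sketch in Python, assuming (since the site doesn't spell it out) that each beaten team's wins are counted once no matter how many chains reach it, which also keeps head-to-head cycles from looping forever. The function name and sample schedule are mine, not the site's:

```python
from collections import defaultdict

def whobeat_scores(games):
    """Score each team by total wins reachable through chains of wins.

    games: list of (winner, loser) results.
    A team's score = its own wins plus the wins of every team it has
    beaten, directly or through any chain. Each team is counted once,
    so a cycle like A > B > C > A can't recurse forever.
    """
    beat = defaultdict(set)   # winner -> set of teams it beat
    wins = defaultdict(int)   # team -> number of wins
    teams = set()
    for winner, loser in games:
        beat[winner].add(loser)
        wins[winner] += 1
        teams.update((winner, loser))

    scores = {}
    for team in teams:
        seen, stack = set(), [team]
        while stack:          # DFS over the "beat" graph
            t = stack.pop()
            if t in seen:
                continue
            seen.add(t)
            stack.extend(beat[t])
        scores[team] = sum(wins[t] for t in seen)
    return scores

# The article's example: A beats B; B beats 3 teams; those beat 15 more
games = [("A", "B")]
games += [("B", f"C{i}") for i in range(3)]
games += [(f"C{i}", f"D{i}_{j}") for i in range(3) for j in range(5)]
print(whobeat_scores(games)["A"])  # 19, matching step 4 above
```

Run on the article's example, Team A scores 1 + 3 + 15 = 19 as described, and a winless team (Syracuse in week 1, say) scores 0 rather than being penalized.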

Is it reliable/valid?

Before the 2007 bowl games, the who-beat rankings looked like this:
1. LSU (won national title 38-24)
2. Oklahoma (lost in blowout in bowl game)
3. Virginia Tech (lost bowl game)
4. Missouri (won in blowout in bowl game)
5. Southern Cal (won in blowout in bowl game)
6. West Virginia (won in blowout in bowl game)
7. Ohio State (lost national title 24-38)

Before the 2008 bowl games, who-beat rankings looked like this:
1. Florida (won national title 24-14)
2. Oklahoma (lost national title 14-24)
3. Texas (won bowl game)
4. Alabama (lost bowl game in blowout)
5. Pittsburgh (lost bowl game)
6. Southern Cal (won bowl game in blowout)
7. Utah (won bowl game in blowout)

After week 6 of the 2009 season:
1. Alabama
2. Virginia Tech
3. Iowa
4. Florida
5. LSU
6. Washington
7. Ohio State

What can we learn from this look at who-beat? It appears that the #1 team in the who-beat rankings after the regular season will most likely win the national championship and should absolutely be in the game; after #1, who knows. It also looks like any team ranked below #5 in the final who-beat rankings does not deserve to be in the national title game (see Ohio State circa 2007). But based on this year's week 6 rankings, with 3-3 Washington in the top 6, these rankings are unreliable on a week-to-week basis.

Interesting observations:
- Ohio State (11-1) finished #7 in the 2007 post-regular-season rankings despite one of the best records in the country, and then had a poor showing in the national title game
- Oklahoma’s rankings may be enhanced by the Big 12 schedule and the conference title game
- USC’s rankings may be too low based on the Pac-10 schedule and lack of conference title game
- BCS “busters” Hawaii in 2007 and Utah in 2008 are ranked much lower in these rankings despite their undefeated regular seasons
- Maybe Texas deserved to play Florida in 2008: the Longhorns beat Oklahoma head-to-head, and only the Big 12's tiebreaker rules kept Texas out of the conference championship game, which might have provided the rankings boost needed to play Florida

Summary

Advantages
o Objective (eliminates bias)
o Doesn’t detract for losses
o Not affected by “style points” or blowouts
o Strongly favors strength of schedule (eliminates BCS “busters”)

Disadvantages
o Doesn’t detract for losses
o Strongly favors strength of schedule (may hurt BCS leagues with no conference title game)

Coaches'/AP poll (http://espn.go.com/college-football/rankings)

For years, these have been the gold standard for ranking football teams, but should they be? The AP poll is composed of more than 60 sportswriters from across the country, with a preseason poll and weekly polls. Sportswriters are generally regarded as experts in the sports they cover. But if sportswriter A covers Team A and writes a daily column on Team A, does sportswriter A really know enough about teams B through Z to rank them accurately? Or will the writer simply look at results, rank based on the previous week, and fall for "style point" rankings?
The coaches' poll is composed of more than 60 coaches from across the country, also with a preseason poll and weekly polls. Similar to sportswriter A, does Coach A really have time to research all the teams outside his conference and rank them accurately, without bias?

Are they reliable/valid?

Before 2007 bowl games:

AP Poll
1. Ohio State (lost national title in blowout)
2. LSU (won national title in blowout)
3. Oklahoma (lost bowl game in blowout)
4. Georgia (won bowl game in blowout)
5. Virginia Tech (lost bowl game)
6. USC (won bowl game in blowout)
7. Missouri (won bowl game in blowout)

Coaches’ Poll
1. Ohio State (see above)
2. LSU
3. Oklahoma
4. Georgia
5. Virginia Tech
6. USC
7. Missouri

Before 2008 bowl games:

AP Poll
1. Florida (won national title)
2. Oklahoma (lost national title)
3. Texas (won bowl game)
4. Alabama (lost bowl game in blowout)
5. USC (won bowl game in blowout)
6. Penn State (lost bowl game in blowout)
7. Utah (won bowl game in blowout)

Coaches’ Poll
1. Oklahoma (see above)
2. Florida
3. Texas
4. USC
5. Alabama
6. Penn State
7. Utah

After week 6 of 2009:

AP Poll
1. Florida
2. Alabama
3. Texas
4. Virginia Tech
5. Boise State
6. USC
7. Ohio State

Coaches’ Poll
1. Florida
2. Texas
3. Alabama
4. Virginia Tech
5. USC
6. Boise State
7. Ohio State

The two groups vote very similarly (maybe the result of the same biases and the same limited exposure to other teams?). The deserving national champion is usually in the top 2. Week-to-week rankings may not make complete sense, but over the course of the season the cream of the crop rises to the top, so these polls are quite reliable by the end of the year.

Summary

Advantages
o Respected for years
o Voters are professionals in the sport
o Reliable at the end of each year
o Detracts for losses
o Each has good number of voters, statistically

Disadvantages
o Static (#1 remains there until they lose)
o Detracts for losses (all losses are not created equal)
o Vulnerable to “style points” and blowouts
o First ballots are cast in the offseason based on assumptions from the previous year’s finish and # of players returning
o Coaches and sportswriters tend to weight their own teams and conferences higher, and they don't get to see as many games as fans do

Conclusion

Each ranking style has its unique strengths, and the Mumme Poll may actually be the most accurate way to rank teams, since its voters are fans who watch multiple games a week; however, there is limited evidence so far to support it. Who-beat looks to be GREAT when it comes to determining the actual national champion, but again there is limited evidence. The traditional polls have years of evidence behind them but are far too vulnerable to bias, and may not be the best method. It appears as though a combination of all of these would provide the best rankings system ... which actually validates the BCS. Hmmm…
