Monday, August 24, 2015

2015 College Football!

It's baaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaack! College football is back! I can't even fully articulate why, but college football is just my favorite. I grew up in Portland as a Blazers fan, I live in Seattle and am a Seahawks fan, I like the Mariners because they're something to do on a warm summer evening.

But nothing is quite like college football. It's just different! I'm still working on getting the week 1 excitement index up, and the team dashboards live, but in the meantime I have plenty to share! I'll also be tweeting out team dashboards for every team over the next few days @actuarygambler.


Who are the best and worst teams?

To the surprise of no one, Ohio State is #1. On one hand, they have three of the best quarterbacks in the country; on the other, their team is stacked basically everywhere. On the other end of the scale, FBS newcomer UNC Charlotte starts the season at the bottom. Welcome to the show, Charlotte!




Who plays the toughest schedule?

Bama. They have a brutal stretch mid-season of @Georgia, Arkansas, @Texas A&M, Tennessee, Bye, LSU, @Miss St. Yikes.


We can also dispel the idea that the SEC plays cupcake schedules. The SEC plays the toughest overall schedule (followed by the Pac-12), and they play a non-conference schedule on par with every other conference's.

If you're looking for teams to mock for playing a pansy non-conference schedule, here's that list: Mississippi State, Baylor, Arizona, Oklahoma State, NC State.



What's new this year in the model?

Uncertainty in team ratings, and a more developed CFP model! I'm also changing the name of the Watchability index to the Excitement index. Watchability never sat well with me as a word.

Uncertainty
After much consultation with my colleagues, I've added components to the model to account for the idea that, while a team's rating does represent an average estimate of that team's strength, there's uncertainty around that estimate. Most people think UCLA will be pretty good, and they probably will be! But they might not be, and we have no way of knowing that. To reflect that, when calculating outputs that involve simulating the season, the model samples each team's rating in each simulation from a given distribution. In some simulations UCLA is exactly what we thought, in some they're a little better, and in some they're a little worse.

What does this look like? Let's look at the distributions for UW and UCLA:


The model has UCLA rated probably between 0.800 and around 0.920, and UW probably between around 0.400 and 0.800. So every time it runs a simulation of the season, instead of using one fixed value each for UCLA and UW, it picks a random value from these distributions. For UW it's likely to pick a number around 0.550-0.600, but sometimes it runs a simulation with UW rated at 0.800! We have to recognize that there's a chance UW IS actually better than UCLA this year, and simulate some seasons that reflect this.
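Here's a minimal sketch of that sampling step in Python, assuming normal distributions whose means and spreads are just eyeballed from the ranges above (the real model's distributions come out of its own rating process):

```python
import random

# Hypothetical (mean, standard deviation) pairs, eyeballed from the ranges
# described above -- illustrative only, not the model's actual parameters.
RATING_DISTRIBUTIONS = {
    "UCLA": (0.860, 0.030),
    "UW":   (0.600, 0.100),
}

def sample_season_ratings(distributions):
    """Draw one rating per team to use for an entire simulated season."""
    return {
        team: random.gauss(mean, sd)
        for team, (mean, sd) in distributions.items()
    }

random.seed(2015)
for sim in range(5):
    ratings = sample_season_ratings(RATING_DISTRIBUTIONS)
    print(f"sim {sim + 1}: UCLA = {ratings['UCLA']:.3f}, UW = {ratings['UW']:.3f}")
```

Each simulated season draws one rating per team and keeps it for every game in that season, which is what lets a few simulations play out with UW as the stronger team.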

So that's the first new thing. It figures into some parts of the model's output, like CFP likelihoods and conference championships, but not into others, like single game odds. The reason it doesn't figure into single game odds is that they already have that uncertainty baked in. If UW and UCLA were to meet, the model would give UW a 14% chance to win that game. Inherent in that 14% are scenarios where UW is actually the better team this year.
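To put a rough number on those "UW is actually the better team" worlds, we can reuse the made-up distributions from the sketch above and count how often UW's draw comes out ahead of UCLA's; the exact share depends entirely on the assumed parameters:

```python
import random

# Same illustrative (mean, standard deviation) assumptions as the sketch above.
RATING_DISTRIBUTIONS = {"UCLA": (0.860, 0.030), "UW": (0.600, 0.100)}

random.seed(2015)
n_sims = 100_000
uw_ahead = sum(
    random.gauss(*RATING_DISTRIBUTIONS["UW"]) > random.gauss(*RATING_DISTRIBUTIONS["UCLA"])
    for _ in range(n_sims)
)
print(f"UW samples above UCLA in {uw_ahead / n_sims:.2%} of simulated seasons")
```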


CFP Model

The CFP committee are a crafty bunch. Last year when the rankings started, I watched all the talks by Jeff Long, tried to get an idea of what they cared about, and built an ad hoc model to try to predict their rankings. This year I took it up a notch. Based on their rankings, and their defense of those rankings, I identified these six things the committee cares about:


  • How a team has performed on the field (who they've beaten, and by how much)
  • How a team is predicted to perform in the future
  • Number of losses (losses to good teams are discounted)
  • Wins (of any kind) over good teams
  • General strength of schedule
  • Recency

Each of these things independently improves the predictive power of my CFP model. I used Stata and Excel to figure out how much to weight each element and generated "predictions" fit to last year's data. The scatter plot below shows how the model's predicted CFP ranking compared to the committee's actual ranking, in each of the committee's weekly rankings. For example, in week 9, Mississippi State was the predicted #1 and the actual #1; this is indicated by a dot at (1,1) on the chart. The dots are different sizes because some of them represent multiple observations. The CFP model correctly guessed the #1 team in each of the 8 rankings, so the dot at (1,1) is big.

You can see it all on the graph below; ultimately the correlation between predicted CFP ranking and actual CFP ranking was 0.878! Not bad when you're trying to use math to predict what a committee of people using a completely opaque process will do.
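For the curious, here's a rough Python sketch of what that fitting step looks like (the actual work was done in Stata and Excel): fit one weight per factor by least squares against the committee's rankings, then check how well the weighted combination tracks the actual order. Every feature value below is invented purely to show the mechanics.

```python
import numpy as np

# Toy training data: one row per (team, week) observation from last year's
# committee rankings.  Columns follow the six factors listed above, already
# scaled to comparable ranges; every number here is made up for illustration.
# [on-field results, predicted future performance, quality-adjusted losses,
#  wins over good teams, strength of schedule, recency-weighted results]
features = np.array([
    [0.96, 0.93, 0.0, 3.0, 0.82, 0.95],
    [0.92, 0.90, 0.0, 2.0, 0.75, 0.93],
    [0.90, 0.88, 1.0, 2.0, 0.86, 0.90],
    [0.87, 0.85, 1.0, 2.0, 0.70, 0.88],
    [0.85, 0.86, 1.0, 1.0, 0.78, 0.84],
    [0.83, 0.80, 2.0, 2.0, 0.88, 0.82],
    [0.80, 0.81, 1.0, 1.0, 0.65, 0.81],
    [0.78, 0.76, 2.0, 1.0, 0.72, 0.79],
    [0.76, 0.74, 2.0, 0.0, 0.60, 0.77],
    [0.73, 0.72, 3.0, 1.0, 0.74, 0.73],
])
actual_rank = np.arange(1, len(features) + 1, dtype=float)

# Fit an intercept plus one weight per factor by ordinary least squares --
# roughly what hand-tuning the weights in Stata/Excel boils down to.
X = np.column_stack([np.ones(len(features)), features])
weights, *_ = np.linalg.lstsq(X, actual_rank, rcond=None)

predicted_rank = X @ weights

# The post reports a 0.878 correlation between predicted and actual rankings
# across all of last year's weekly rankings; this toy data will fit much more
# tightly because there are only a few invented observations.
correlation = np.corrcoef(predicted_rank, actual_rank)[0, 1]
print("weights:", np.round(weights, 3))
print(f"correlation(predicted, actual) = {correlation:.3f}")
```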









