Tuesday, January 19, 2016

Thoughts on my college bowl pool

For the second year in a row I participated in an against-the-spread contest with some high school friends for the college bowl season. For the second year in a row I didn’t win (15/16 this year, 4/25 last year).

I did, however, get some good anecdotes:

Familiarity

This group (mainly from Columbus, Ohio, and mostly still living in the Midwest) has a bit of a Big Ten problem. Last year the group was 15% more confident in games involving Big Ten teams than in games without them, yet did slightly worse in those games against the spread: a 47.2% ATS win rate versus 48.2% for non-Big Ten games.

This year the group was 17% more confident when a Big Ten team was involved but won only 48.8% of those picks against the spread. The win percentage on non-Big Ten games was 53.4%.
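
For reference, these splits are just group averages. Below is a minimal sketch of the calculation, assuming a hypothetical picks table with one row per person per game and columns big_ten, confidence, and ats_win (none of these names come from the actual pool spreadsheet):

```python
import pandas as pd

# Hypothetical layout: one row per (person, game) with the wager and the result.
picks = pd.DataFrame({
    "big_ten": [True, True, False, False],  # does the game involve a Big Ten team?
    "confidence": [25, 18, 15, 22],         # points wagered on the pick
    "ats_win": [0, 1, 1, 0],                # 1 if the pick covered the spread
})

summary = picks.groupby("big_ten").agg(
    avg_confidence=("confidence", "mean"),
    ats_win_pct=("ats_win", "mean"),
)
print(summary)
```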

Confidence

Last year the group was most confident in what turned into losses, wagering an average of 21.0 points on picks that lost against 19.5 on picks that won. This year the pattern flipped, with winning picks averaging 21.6 points and losing picks 20.6.

As is evident in the chart below, however, there was no correlation between confidence and success.

[Chart: average confidence vs. pick success by game]

Confidence (pt. 2)

The game with the highest confidence was (inexplicably) the Outback Bowl between Northwestern and Tennessee, where 13 of 16 people picked the wrong team at an average confidence of 30.4. The least confident game was San Jose State vs. Georgia State, where 9 of 16 people picked correctly at an average confidence of 8.9.

Confidence (pt. 3)

After assigning the national championship game the lowest aggregate wager last year (8.8 out of a possible 39), people were feeling a bit luckier this year and wagered more on it than on 8 other bowls (16.4 out of a possible 41), despite not knowing which two teams would be playing in it.

14 of 16 got the Orange Bowl right, while only 2 of 16 got the Cotton Bowl correct (a Big Ten game), leaving 11 of the 16 with a viable championship game pick (5 Clemson, 6 Alabama). The Alabama picks averaged 19.8 points, above the 14.3 average for the losing picks, so maybe those people were on to something.

Thursday, November 5, 2015

Searching for sunk costs: UFAs vs. 7th round picks

In my last post I looked at coaches' propensity to give highly drafted players disproportionate playing time: higher draft picks play slightly more than their underlying ability (i.e., performance over the next few seasons) would predict. In this post I want to take a quick look at whether the same effect is visible at the margin between 7th round draft picks and undrafted free agents.

The underlying assumption here is that 7th round picks are not all that different from the undrafted free agents who get a look from teams. While I would love to validate that assumption, we don't have those populations in the available data. What we can compare is 7th round picks who have made NFL rosters to undrafted free agents who have made NFL rosters. As I noted in my previous post on this topic, this obscures the most likely place for the bias to show up – decisions about who makes the roster at all – but it can't really be helped.

Approach

From 1994 through 2010[1] we have 2,043 undrafted free agents and 540 7th round picks who made an NFL roster in at least one season. This analysis compares their playing time – games started count as 1, and games played but not started count as a position-specific fraction of a start – with two factors: whether they were drafted and how well they played over the next 3 seasons. Performance over those 3 seasons serves as a proxy for underlying skill. I am using the square root of that performance because I want a player with a 3-year line like 1-2-13 to be weighted close to a player whose line is 9-10-8: I am assuming both have a similar level of skill, but the 1-2-13 player may have been blocked from starting or overlooked because he was undrafted.
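
As a rough sketch of how the skill proxy behaves – assuming, based on the numbers in the Results section, that it is the square root of the three-season usage total – the compression looks like this:

```python
import numpy as np
import pandas as pd

# Two hypothetical players with the 3-year usage lines from the text.
players = pd.DataFrame({
    "drafted_7th": [0, 1],                    # 0 = undrafted, 1 = 7th round pick
    "next3_usage": [[1, 2, 13], [9, 10, 8]],  # usage in each of the next 3 seasons
})

# Square root of the 3-season total: sqrt(16) = 4.0 vs. sqrt(27) ~ 5.2,
# so the 1-2-13 player ends up much closer to the 9-10-8 player than
# the raw totals (16 vs. 27) would put him.
players["skill_proxy"] = players["next3_usage"].apply(lambda seasons: np.sqrt(sum(seasons)))
print(players[["drafted_7th", "skill_proxy"]])
```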

Results

As with the other analysis, it’s important to note first of all that the relationship here is not that meaningful (R = 0.36, R^2 = 0.13). For players who never play another season in the NFL, an undrafted one is expected to play the equivalent of 2.69 games while a 7th round pick would be expected to play 3.01. Being drafted alone moves the expectation by 0.32 games (p-value 0.03), more than 10% of the baseline. Compared to underlying skill, however, being drafted is much less meaningful. For the hypothetical “1-2-13” player above, underlying skill adds 3.29 games to the expectation (coefficient is 0.82 per unit, p-value 0.00).
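
For the curious, the regression behind these numbers looks roughly like the sketch below. The column names and the placeholder data are mine; the real inputs are the 2,583 players described above, and the fitted values quoted in the text (intercept 2.69, drafted effect 0.32, skill coefficient 0.82) come from that real data, not from this toy frame.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Toy stand-in for the real data set: one row per player with year-1 usage,
# a drafted indicator, and the sqrt skill proxy from the Approach section.
df = pd.DataFrame({
    "usage_yr1": rng.gamma(2.0, 2.0, size=500),
    "drafted_7th": rng.integers(0, 2, size=500),
    "skill_proxy": rng.gamma(1.5, 1.5, size=500),
})

model = smf.ols("usage_yr1 ~ drafted_7th + skill_proxy", data=df).fit()
print(model.summary())

# With the coefficients reported in the post, the hypothetical 1-2-13 player
# projects to 2.69 + 0.32 * drafted + 0.82 * sqrt(16) games of usage.
```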

Moving to players in their 2nd season, the effect of being drafted goes away completely (p-value 0.99) while underlying skill becomes more powerful (coefficient of 1.12 per unit, p-value 0.00).

Based on this analysis I am pretty confident that there is a weak positive effect of being drafted on playing time for rookies. Given the way it evaporates in the second season, I would not be surprised if it is strongest early in the first season on a per-game basis. I still believe there is a larger effect hidden in roster decisions that this data can't capture. If anyone has any idea how to get at that question, feel free to let me know.



[1] Only players for whom at least 3 subsequent seasons were possible (whether they played them or not) are eligible for this analysis.

Monday, May 11, 2015

Sunk cost and the NFL Draft

I’ve looked at the NFL Draft a lot since starting this blog. As the draft was here in Chicago this year, I found myself running into a number of jerseys on the street when I went out for lunch on Thursday and Friday. Even more surprising than the fact that people had travelled – in some cases from pretty far away according to the jerseys – was the fact that a lot of them were wearing jerseys of players who were disappointments if not outright busts. It got me thinking about sunk costs and whether teams are any better than their fans about cutting their losses.

To try to get at this we'll need to know how much teams value their draft picks – conveniently, we do know this via the draft value chart popularized by Jimmy Johnson – and then compare that to how much those players are used. Usage is a bit tricky, but I'm going to approximate it as games started (each worth 1 full game) plus games played but not started (each worth the position's 2014 average non-starter snaps divided by its 2014 average starter snaps).
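
In code, the usage approximation is just a weighted sum along these lines (a sketch; the snap-share weights shown are placeholders rather than the actual 2014 averages):

```python
# Placeholder weights: 2014 average non-starter snaps / average starter snaps, by position.
NON_STARTER_WEIGHT = {"RB": 0.45, "WR": 0.50, "OT": 0.30}

def usage(position, games_started, games_played):
    """Starts count as full games; appearances off the bench are discounted
    by the position's non-starter snap share."""
    games_off_bench = games_played - games_started
    return games_started + games_off_bench * NON_STARTER_WEIGHT[position]

# Example: a running back with 4 starts in 12 total appearances.
print(usage("RB", 4, 12))  # 4 + 8 * 0.45 = 7.6
```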

Before even getting to questions of usage, there is a significant disparity in the proportion of players from each round who end up making a roster.

Round    % on Roster Year 1
1        97%
2        94%
3        83%
4        81%
5        70%
6        62%
7        52%

I am guessing that most of this comes down to talent disparity, but there is certainly some aspect of sunk cost at work here. Lots of later round picks – to say nothing of undrafted players – never make it onto a roster and so never enter the rest of this analysis. They are not the topic here, though: I want to see whether a player's draft value still impacts playing time even after he makes a roster.

The first cut of this is simply to look at draft weight and usage, checking how much the former impacts the latter. The regressions for each of a player’s first 6 seasons are below:

Usage vs Draft Weight

Year    R^2     Intercept    Draft Weight Coefficient    Draft Weight P-Value
1       0.22    4.53         15.87                       0.00
2       0.16    6.88         13.87                       0.00
3       0.10    8.10         10.71                       0.00
4       0.08    8.84          9.18                       0.00
5       0.05    9.49          6.58                       0.00
6       0.04    9.88          6.14                       0.00

The draft weight is a significant variable throughout the first 6 years of a player’s career, but the strength of that relationship declines over time. The 1st year model explains 22% of the variation in usage while the 6th year model explains just 4%.
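
Reproducing the table is just a matter of repeating a simple OLS fit for each career year, along these lines (a sketch only; the column names and the scaling of draft weight from the Jimmy Johnson chart are assumptions on my part):

```python
import statsmodels.formula.api as smf

def yearly_fits(usage_df):
    """usage_df is assumed to have one row per player-season with columns
    career_year (1-6), usage, and draft_weight (draft value chart points,
    here assumed to be scaled so the 1st overall pick equals 1.0)."""
    rows = []
    for year in range(1, 7):
        season = usage_df[usage_df["career_year"] == year]
        fit = smf.ols("usage ~ draft_weight", data=season).fit()
        rows.append({
            "year": year,
            "r_squared": fit.rsquared,
            "intercept": fit.params["Intercept"],
            "coefficient": fit.params["draft_weight"],
            "p_value": fit.pvalues["draft_weight"],
        })
    return rows
```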

Wednesday, April 8, 2015

Buyer Beware: Are players on better college teams more likely to be busts in the NFL?

Since I’m dipping my toe back into the water with this post, I figured I should stay in safe territory and look at some NFL Draft-related stuff. Enjoy!

With the recent release of Trent Richardson, it seems pretty safe to say that he has been a bust in the NFL. His performance has underwhelmed at a position that is held in low regard in the league. This is especially surprising given that Richardson was held in extremely high regard coming out of college.

As I was reflecting on this I saw an opportunity to test one of my theories on the draft – that players on good teams, specifically those on good units, are overdrafted relative to those from worse units. A player from a good unit, the theory goes, benefits from skilled teammates drawing attention away (no double teams for the second-best DE or second-best WR), and a better team executes better in general, making all of its players look better.

Methodology

I'll be keeping things pretty simple for this one. For 1994-2010 (I don't have 2014 data yet and want to use 4 years of data for each player), each player's first 4 years of AV will be compared against the log regression on his draft position. Then I will check whether the sum of draft value spent on other players from the same school and same unit in that year or the next explains the over- or underperformance.
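
In code terms the test looks roughly like the sketch below. The column names are mine: av_4yr is first-4-year AV, pick is overall draft position, and the unit_draft_value columns are the summed draft value spent on same-school, same-unit teammates.

```python
import numpy as np
import statsmodels.formula.api as smf

def unit_effect(players):
    """players: one row per drafted player, 1994-2010, with the columns
    described in the lead-in above."""
    # Step 1: expected first-4-year AV from a log fit on draft position.
    players = players.assign(log_pick=np.log(players["pick"]))
    baseline = smf.ols("av_4yr ~ log_pick", data=players).fit()
    players = players.assign(resid=baseline.resid)

    # Step 2: does teammate draft value explain the over/underperformance?
    return smf.ols(
        "resid ~ unit_draft_value_same_year + unit_draft_value_next_year",
        data=players,
    ).fit()
```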

Results

It does not. 

The overall regression shows no relationship at all (R = 0.01) between a player's over- or underperformance and the draft value of same-unit teammates in the same year (p-value = 0.73) or the following year (p-value = 0.69). When I split the results out by position they were similarly underwhelming.

[Chart: R and R-squared by position]

There is an uptick in R and R-squared for QBs and offensive linemen, but it is extremely slight. It's possible this is related to the draft combine effect I noted a few years ago: QBs and tackles were among the positions for which predictions actually got worse after the combine, guards stayed in place, and there weren't enough centers on Mel Kiper's Big Board (which typically covers just the first round) to include in that analysis. Since these positions are relatively less influenced by raw physical skill than WR, DB and others, teams are more dependent on game film, where the quality of teammates could muddy the picture. This is all very speculative because, as noted above, the effect is very slight.

At least the way I approached it in this analysis, it appears that playing on a good team isn't the reason Trent Richardson was overdrafted; he's just a bust.

Monday, September 22, 2014

No league for old men


Back in December of last year I went through some calculations with the data set I stitched together from performance and salary data. You can check that post out here for a series of simple graphs that go a long way toward explaining team behavior and why certain things are the way they are in the NFL, particularly as they relate to player tenure and performance over time.

As I was reading through that post again recently, however, I was struck that it makes the implicit assumption (explicit above) that things simply are the way they are. What I mean is that the data is averaged over the whole period, while there may in fact be trends to observe by looking at the period year by year.

Charts

[Chart: average NFL player age by season, unweighted and weighted by games played and games started]

The average age of all NFL players[1] fluctuated between 26.5 and 27 for most of the post-1994 period (the salary cap era) before dropping almost without interruption from 2008 through 2013. Weighting by games played or games started – the other two lines on the chart – shows that the pattern holds for starters as well as backups. NFL rosters are roughly half a year younger than they were from 1994 to 2008.
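
The weighted lines are just games-weighted averages of age by season, roughly like this sketch (the column names are assumptions about the stitched-together data set, not its actual schema):

```python
import numpy as np
import pandas as pd

def average_age_by_season(rosters: pd.DataFrame) -> pd.DataFrame:
    """rosters: one row per player-season with columns season, age,
    games_played, and games_started."""
    def summarize(group):
        return pd.Series({
            "avg_age": group["age"].mean(),
            "avg_age_gp_weighted": np.average(group["age"], weights=group["games_played"]),
            "avg_age_gs_weighted": np.average(group["age"], weights=group["games_started"]),
        })
    return rosters.groupby("season").apply(summarize)
```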