Monday, December 2, 2013
Player performance curves and value for money
As mentioned in this space recently, I am the proud owner of a shiny new database full of player performance and salary data from 2003 to 2009. I will be trying to extend it in both directions as I have time. For now, however, the analysis will be applicable to that period’s decisions. Once 2010-2012 are added in it might provide a nice contrast in allocation and relative performance under the conditions of the new CBA.
The jumping-off point for this data set is getting a good baseline on the efficiency of spending in the NFL. How much does it cost to squeeze one unit of Approximate Value out of a given position? Approximate Value (AV) is a stat from Pro-Football-Reference.com developed by Doug Drinen that works by allocating a team’s offensive and defensive performance out to different positions based on various assumptions. The summaries Doug has produced introducing the stat are extremely helpful, but you won’t be at too much of a disadvantage if you just read on without understanding exactly how AV works. As he says in the introduction, the numbers are “simple, intuitive and approximate.”
In the post I wrote about spending on running backs (see here for more), I noted that prior to the current (2011) collective bargaining agreement, teams frequently failed to spend up to the level allowed under the salary cap. To correct for this, I represent allocation decisions as a percentage of actual team spending. Take the 2005 Seattle Seahawks, who spent roughly $67 million against the cap while being permitted up to $85.5 million. The $0.75 million cap number for Isaiah Kacyvenski – a fine linebacker and fellow 2011 Harvard Business School grad – works out to 0.9% of the salary cap, but the 1.1% of team spending is a better representation of the allocation decision. The assumption here is that teams were working under a budget set externally (by the owner, rather than the salary cap) and had to allocate scarce dollars within that budget. If you still have a problem with this approach, please do check out the article I referenced earlier.
To convert this into current cap dollars and give consistent “values” for various levels of performance, I will represent all dollar figures in 2013 cap dollars (% of team spending * 2013 salary cap). The relative difference between amounts won’t change and we get the benefit of a more relatable output than % of team spending.
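The normalization above can be sketched in a few lines. This is just an illustration using the Kacyvenski figures from the running back post; the 2013 salary cap value is my approximation, not a number from the original analysis.

```python
# Sketch of the normalization described above: express a player's cap
# number as a share of actual team spending, then re-scale to 2013 cap
# dollars. The ~$123M figure for the 2013 cap is approximate.

SALARY_CAP_2013 = 123_000_000

def share_of_spending(cap_number, team_spending):
    """Player's cap number as a fraction of what the team actually spent."""
    return cap_number / team_spending

def to_2013_dollars(cap_number, team_spending):
    """Re-express an allocation decision in 2013 cap dollars."""
    return share_of_spending(cap_number, team_spending) * SALARY_CAP_2013

# 2005 Seahawks: ~$67M spent against a permitted $85.5M cap.
pct_of_cap = 750_000 / 85_500_000                          # ~0.9% of the cap
pct_of_spending = share_of_spending(750_000, 67_000_000)   # ~1.1% of spending
print(round(pct_of_cap * 100, 1), round(pct_of_spending * 100, 1))
print(round(to_2013_dollars(750_000, 67_000_000)))         # ~$1.38M in 2013 dollars
```

Because every figure is divided by its own team's spending before rescaling, the relative differences between players are preserved exactly; only the units change.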
Finally, I thought about using age as the independent variable for the efficiency curves, but it obscures the distinction that matters most. Using tenure in the league rather than age captures the significant difference between a player on a rookie contract and any other player. As Brian Burke noted with respect to the Massey/Thaler paper on behavior in the NFL Draft, even if first-round picks are less efficient than later rounds (they are in the sample period I looked at), they are still better than equivalent-skill replacements in free agency. Don’t just take my word for it; read on and see.
The target of this analysis is the value teams get for their spending at different points in the average player’s career. To provide a baseline on the two parts of this let’s start with a quick look at performance and compensation along the course of a player’s career.
Elite performance (data from 2003 to 2009) peaks in seasons 4-6 of the average player’s career. Those 3 seasons collectively account for 36% of all All-Pro/Pro Bowl (AP/PB) seasons. The later seasons have a higher concentration of AP/PB seasons as lesser players drop out and only the elite and a bunch of kickers and punters are left – though the kickers and punters have been excluded from this data set, so that’s just a gratuitous shot at them.
The distribution of starters, on the other hand, peaks earlier and drops off faster than AP/PB. The proportion of starters in each of the first five seasons is, in order, 130%, 55%, 20%, 2% and 2% higher than the corresponding AP/PB proportion. Elite players, however, become a higher proportion of starters as their careers continue. Given the higher minimum salaries for veteran players, along with other benefits in the CBA, teams are less willing to settle for average output from longer-tenured players.
Unlike performance as defined by proportion of starters or proportion of AP/PB players, the count of all players in the league (again 2003 through 2009) decreases as tenure increases. The rate of decrease starts out in the 10-15% per season range, climbing erratically to over 20% from season 8 to 9 and over 30% from year 10 to 11. I would be a bit worried about data quality if there were more players in a given season than there were in the one before, except at the high end, where the data is affected by long-tenured holdovers from the pre-2003 period who are outliers without adding to the numbers in previous seasons.
League-wide spending (2003 through 2009) peaks in seasons 5 and 6, similar to the AP/PB distribution. Compared to AP/PB, the season-by-season proportions are very similar, with a gap in the first year (6% of salary but only 2.5% of AP/PB seasons) as the biggest disconnect outside of the late-career seasons. In seasons 2 through 12 the proportion of AP/PB seasons stays between 80% and 120% of the proportion of salary spent on players of that tenure.
This is the meat of it here, and the chart I will be picking apart and slicing different ways in future posts. If you want to know why teams invest scarce playing time in players who are inferior to potentially available alternatives, this is it. Production from rookies – and players on their rookie contract – is absurdly efficient compared to production from veterans. That this is the case even in a period in which some rookies were earning astronomical guarantees (JaMarcus Russell was… not efficient) is striking. Sure the teams are also trying to develop the player and this development plays a role in later success, but they also can’t afford to field a competitive team without significant contributions from players in their first few seasons.
In this same 2003 to 2009 period, teams that played in a Super Bowl averaged 236 AV. If a team were constructed solely of veterans (seasons 5 through 15 of their careers), they would average 163 AV at a cost of $756k per unit. 163 AV is a 5-win team.
But wait, aren’t Super Bowl participants more efficient in their spending? They certainly are.
Super Bowl participants pay $625k per AV for a player in seasons 5-15 of his career, meaning they could afford 197 AV for the full team at that rate. In the 195-200 AV range a team is roughly 50/50 to make the playoffs, with just 2 Super Bowl participants out of the 16 teams in that range. The only Super Bowl winner among this group is the 2007 New York Giants, the lowest AV and second-worst point differential of all Super Bowl winners in the last decade – second only to the 2011 Giants.
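The arithmetic behind those team-level AV figures is simple enough to check. A short sketch, assuming the budget being divided is the ~$123M 2013 salary cap (my inference; it is consistent with the 163 and 197 AV totals quoted above):

```python
# Back-of-envelope check of the cost-per-AV figures in the text.
# BUDGET is my assumption (~2013 salary cap); the cost-per-AV rates
# come from the 2003-2009 data discussed above.

BUDGET = 123_000_000

def affordable_av(cost_per_av):
    """Total AV a team could afford at a given cost per unit of AV."""
    return BUDGET / cost_per_av

print(round(affordable_av(756_000)))  # ~163 AV: all-veteran team at the league-wide rate
print(round(affordable_av(625_000)))  # ~197 AV: all-veteran team at the Super Bowl participants' rate
```

Either way, an all-veteran roster falls well short of the 236 AV a Super Bowl participant averages, which is the gap rookie-contract production has to fill.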
Players on their rookie contract are a better value than players not on their rookie contract. Seems simple enough. This relationship ought to be strengthened by the reduction in rookie salaries under the 2011 CBA as well as the prohibition on renegotiating prior to the end of a player’s third season.
Over the next few weeks or months I’ll take a look into just how much the value of rookies has changed under the new agreement. Time permitting, I will also examine the value of players who change teams – are they more or less efficient than “home grown” players – and the differences across positions. By the time the draft rolls around I might even be done.
Look, I don’t love AV, and maybe you don’t love AV, but we can all agree that it’s the best of a bad set of “comprehensive” stats in football. Even if it’s not the best, it is definitely the least bad one to which I have access. My main issue is with the static allocation of points between positions, and the fact that this allocation is based on the relative value of draft picks “spent” on players of each position according to the Jimmy Johnson draft value chart. My issues with that chart are numerous and well-documented.