The primary goal of an NFL front office, absent any special directives from ownership, is to put together a team that can win games. Different teams go about this in wildly different ways. The Washington Redskins of the early- to mid-Daniel Snyder tenure exemplified a win-now mentality focused on free agency. The Green Bay Packers and Pittsburgh Steelers have focused, to a fault, on homegrown players drafted into the organization. Just leaving it at that would be a nice quip, something you could probably squeeze down to a tweet.
As quickly as you might cite the Redskins, however, you could bring up the New Orleans Saints, who put together a Super Bowl-winning team around key free agent signings and had only 12 starters who began their careers with the team. On the other side of things, the 2006 Colts won the Super Bowl with 22 homegrown starters, while the 2004 49ers rode 22 homegrown starters all the way to a number one draft pick.
If the NFL didn’t have a salary cap, there might be a clear case that free agency is the better team-building strategy for the teams with the highest budgets. Why go through the effort of evaluating college players and trying to pick out the best ones when you can just wait a couple of years and sign them once they’ve proven themselves?
Despite anecdotal examples to the contrary, under the collective bargaining structure in place in the NFL – with a hard salary cap – players signed away from other teams are generally less effective than homegrown players, in terms of both performance and value for money.
To ground the analysis and get started on what is likely to be an indulgently long series of posts, let’s take a few moments to look at the output and make sure this hypothesis is even worth investigating. We will use data from the 2003-09 period (because that’s where our other data will come from) to show that teams built around homegrown players do outperform those built around imported players.
Using our definition of a starter and the database of where players began their careers, we can quickly pull together a summary of how many “homegrown” starters are on each team. To reduce the amount of luck influencing the comparison, we’ll look at Pythagorean winning percentage rather than the regular kind. Check out this article for a primer on Pythagorean winning percentage (PW%) and why it’s better suited to analysis like this.
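For reference, Pythagorean winning percentage is derived from points scored and allowed rather than from actual results. A minimal sketch, assuming the commonly cited NFL exponent of 2.37 (the exact exponent used in the linked primer may differ):

```python
def pythagorean_pct(points_for: float, points_against: float,
                    exponent: float = 2.37) -> float:
    """Expected winning percentage from points scored and allowed.

    The 2.37 exponent is the commonly cited NFL value; this is an
    assumption, not necessarily the exponent used in this series.
    """
    pf = points_for ** exponent
    pa = points_against ** exponent
    return pf / (pf + pa)

# A team outscoring its opponents 400-300 projects to roughly .664.
print(round(pythagorean_pct(400, 300), 3))
```

Because point differential is less noisy than win-loss record over a 16-game season, PW% gives a steadier measure of team quality for a regression like the one below.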
|Figure 1 - PW% by Homegrown Players|
To start with, let’s look at a regression of PW% on the number of players still with the team where they began their career. The relationship here is small but significant: with a t-stat of 2.26 and a p-value of 0.025, each incremental homegrown player has a small positive effect on PW%. The range from the team with the fewest homegrown players (2007 Washington with 20) to the team with the most (2009 Indianapolis with 57) would be expected to account for about 2.5 wins.
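Mechanically, this is a simple bivariate OLS. The sketch below uses made-up (homegrown count, PW%) pairs purely to illustrate the calculation – including converting the slope into a win total over a 16-game season – and is not the article’s actual 2003-09 data:

```python
from scipy.stats import linregress

# Hypothetical (homegrown players, Pythagorean win%) pairs --
# illustrative only, NOT the data used in the article.
homegrown = [20, 28, 33, 38, 45, 50, 57]
pw_pct = [0.40, 0.45, 0.48, 0.52, 0.55, 0.58, 0.62]

result = linregress(homegrown, pw_pct)

# slope * (range of homegrown counts) * 16 games per season
# ~= expected win difference between the extremes.
win_swing = result.slope * (max(homegrown) - min(homegrown)) * 16
print(f"slope={result.slope:.4f}, p={result.pvalue:.4f}, "
      f"wins={win_swing:.1f}")
```

The same slope-times-range arithmetic is what turns the regression coefficient into the "about 2.5 wins" figure quoted above.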
Using all players for the analysis, however, is problematic. As demonstrated in this previous article, the NFL skews heavily toward younger players, who are very likely to spend the first couple of years of their career with the same team. Almost 50% of the player-seasons in the 2003-09 sample we are using fall in Career Seasons 1 through 3, while starters are concentrated a bit later.
|Figure 2 - PW% by Homegrown Starters|
Looking at starters only gives a better opportunity to highlight the differences between teams. For all players on the roster, the maximum is almost 3x the minimum. For starters only, the maximum (2008 Indianapolis and 2009 San Diego with 25) is more than 4x the minimum (2003 Houston with 6). The relationship is also stronger (t-stat = 3.52, p-value = 0.0005). The gap between the lowest and highest counts would be expected to account for 3.8 wins.
|Figure 3 - PW% by Homegrown Starters (excl. QB)|
Looking just at the teams at the top and those at the bottom, there is a risk that this is simply picking up teams with a homegrown quarterback (Rivers and Manning in this case). Just to confirm, let’s take a quick look at homegrown starters excluding quarterbacks. The minimum of 5 (2003 Houston again) and maximum of 24 (2008 Indianapolis and 2009 San Diego, again) are each knocked down by one, as all of those teams had homegrown starting QBs. In total, 2/3 of the 224 team-seasons featured a homegrown starter at the position. Interestingly, the proportion of All-Pro or Pro Bowl QBs on their original team over this period is very similar, at 71%. The relationship between homegrown non-QB starters and PW% is stronger than when QBs are included: with a t-stat of 3.81 and a p-value of 0.0002, we can be fairly confident that the correlation is significant.
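Pulling the non-QB homegrown starter count out of a roster table is a straightforward filter-and-group. A sketch with a toy table and invented column names (the actual database schema isn’t shown in this series):

```python
import pandas as pd

# Toy roster of starters; column names are assumptions,
# not the schema of the database used in the article.
starters = pd.DataFrame({
    "team":        ["IND", "IND", "IND", "HOU", "HOU"],
    "position":    ["QB",  "WR",  "DE",  "QB",  "RB"],
    "origin_team": ["IND", "IND", "CHI", "CAR", "HOU"],
})

# Homegrown = still with the team where the career began.
homegrown = starters[starters["team"] == starters["origin_team"]]

# Drop quarterbacks, then count per team.
non_qb = homegrown[homegrown["position"] != "QB"]
counts = non_qb.groupby("team").size()
print(counts.to_dict())
```

With the same table, dropping the position filter would reproduce the all-starters counts used in the previous section.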
|Figure 4 - PW% by Homegrown Starters (excl. QB) 03-09 and 10-12|
If we look at the same data for the 2010-12 period – just to make sure the period for which we happen to have clean data is not an outlier – we can see there is still a relationship, but it is much weaker. Interestingly, the 10-12 data doesn’t show any new top-left-quadrant teams (success via free agency) but does show new bottom-right teams (rebuilding or bottoming out with young talent). There are also fewer free-agent-dominated teams in the 10-12 period (only 2 in 3 years as opposed to 15 in 7, though 2 of those 15 were the expansion Houston Texans in 2003 and 2004).
The analysis so far has looked at whether teams do better or worse depending on the number of homegrown players they have. While there is a significant relationship between these variables, its size is relatively small. This is, regrettably, not enough to answer the question of how decision makers should assemble their teams, so we will press on to look at the impact of changing teams on player performance in the next part of this series.
Click here for Part 2 (Performance)
Click here for Part 3 (Salary/Value)
I’m going to go ahead and assume, though, that if you got this far then you probably already know what it is.