Tuesday, June 24, 2014

The 2013 NBA Draft was historically bad



As the 2014 draft approaches, I thought it might be interesting to look back at last year’s version to see just how bad it was.


All Draft Picks - 1st Season

Year       Win Shares   Minutes Played
1998       56.9         40,838
1999       56.0         35,790
2000       32.2         33,170
2001       57.2         41,005
2002       39.5         33,039
2003       48.7         40,515
2004       58.7         40,391
2005       64.6         42,957
2006       50.5         37,159
2007       43.8         37,499
2008       79.7         50,720
2009       74.3         48,425
2010       40.4         33,951
2011       63.0         48,380
2012       53.6         40,967
2013       34.7         32,635
98-12 Avg  54.6         40,320


The top-line numbers don't look good. From 1998 to 2012, draft classes averaged 54.6 win shares in their first season; the 2013 class produced just 34.7. Only the 2000 draft – featuring the murderer's row of Kenyon Martin, Stromile Swift, Darius Miles, Marcus Fizer and Mike Miller as its top five – pulled in a lower total, with 32.2 win shares as rookies.

If we restrict the measurement to just the top 10 players selected in each draft, the 2000 edition is still the lowest of the 98-12 period with 13.5 win shares. The 2013 draft comes in with only 7.4, less than a third of the 15-year average of 26.0. The gap between an average top 10 and the 2013 top 10 (18.6 win shares) accounts for nearly all of the gap between the overall 98-12 average and the full 2013 draft (19.9 win shares).

For those who don't like win shares, we can check an even more basic measure: minutes played. 2013 is the lowest in minutes played for both the full draft and the top 10 alone.


Top 10 Picks - 1st Season

Year       Win Shares   Minutes Played
1998       36.4         20,354
1999       37.2         18,980
2000       13.5         14,723
2001       24.6         16,139
2002       28.9         18,391
2003       28.5         19,471
2004       32.2         16,763
2005       35.1         18,193
2006       15.8         14,112
2007       15.1         15,233
2008       35.8         21,059
2009       24.5         15,492
2010       21.4         17,503
2011       16.0         15,981
2012       24.9         17,553
2013        7.4         11,755
98-12 Avg  26.0         17,330
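To make those comparisons concrete, here is a small Python snippet that recomputes the averages and the gaps directly from the win share totals in the two tables above (the 2013 totals are hard-coded from the tables):

```python
# Recompute the 98-12 averages and the 2013 gaps from the table totals above.
full_draft_ws = {
    1998: 56.9, 1999: 56.0, 2000: 32.2, 2001: 57.2, 2002: 39.5,
    2003: 48.7, 2004: 58.7, 2005: 64.6, 2006: 50.5, 2007: 43.8,
    2008: 79.7, 2009: 74.3, 2010: 40.4, 2011: 63.0, 2012: 53.6,
}
top10_ws = {
    1998: 36.4, 1999: 37.2, 2000: 13.5, 2001: 24.6, 2002: 28.9,
    2003: 28.5, 2004: 32.2, 2005: 35.1, 2006: 15.8, 2007: 15.1,
    2008: 35.8, 2009: 24.5, 2010: 21.4, 2011: 16.0, 2012: 24.9,
}

avg_full = sum(full_draft_ws.values()) / len(full_draft_ws)   # ~54.6
avg_top10 = sum(top10_ws.values()) / len(top10_ws)            # ~26.0

full_gap = avg_full - 34.7    # full-draft shortfall for 2013: ~19.9 win shares
top10_gap = avg_top10 - 7.4   # top-10 shortfall for 2013:     ~18.6 win shares

print(f"Full-draft gap: {full_gap:.1f} WS, top-10 gap: {top10_gap:.1f} WS")
# Nearly all of the 2013 class's shortfall comes from the top 10 picks.
```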


Finally, here are a couple of graphs showing the cumulative build-up of win shares for 2013 compared to the average and the extremes from 1998-2012.

As the second chart shows, the 2013 draft looks especially bad when you track performance in the order the players were selected. The top of the first round produced less than a quarter of the win shares of the 2008 draft's top picks, and none of the first several selections came in and delivered anything close to the performance expected of them. With Anthony Bennett posting a negative win share total and most of the other top picks sitting at 1-2, Nerlens Noel, who sat out the season injured, looks like this group's best hope of producing an elite player.

For those wondering, the top five 2013 picks by win shares were Mason Plumlee (uh oh) with 4.7, Tim Hardaway Jr. with 3.1, Steven Adams and Kelly Olynyk at 2.9, and Cody Zeller with 2.6. It's hard to see anyone in this group becoming a regular All-Star.

Given the hype for the 2014 class, it seems more likely we'll be comparing it to the 2008 version than to last year's. GMs in the lottery certainly hope so.

Monday, June 16, 2014

The value of a Major League Baseball general manager (Review)



Lewis Pollis, a recently graduated Brown University senior and a past and future intern for several MLB teams, analyzed the value of a general manager in Major League Baseball for his senior thesis in economics. He also posted a summary version to Regressing, Deadspin's analytics-oriented sub-site.

The paper estimates the value of the top-performing GMs to be tens of millions of dollars higher than that of middling GMs, who are likewise tens of millions of dollars more valuable than the worst performers. From there the paper links this value differential to the current narrow band of salaries and advances the notion that “the best baseball operations employees are paid substantially less than they are worth to their teams.”[1]

What is being measured in this research is how well teams perform. That performance is measured across a narrow set of activities (signing free agents, making trades) and attributed to the specific GMs who conducted them, with some caveats about what can and can't be included and how it is measured. I am bought in on this approach: these are hard things to measure, and this seems like a good way to get at them.

The outcome of this performance measurement is a distribution of value generated from best to worst, and the conclusions noted above – the value that a good GM provides over a bad one – flow from this distribution. This is where I have a couple of issues.

A couple of issues

For one thing, Pollis notes that the specific GM is not an accurate predictor of how a trade or signing will work out[2]. If the GM is not a predictor – or, more accurately, if making good picks in the past doesn't make good picks any more likely in the future – then the value difference may be coming from luck rather than skill.
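As a sketch of the kind of persistence check I have in mind (not something from the paper; the file and column names here are hypothetical), you could split each GM's measured value into earlier and later years and see whether the two halves correlate:

```python
# Hypothetical persistence check: does a GM's measured value added in earlier
# years predict his measured value added in later years?
# The file name and columns (gm, year, value_added) are placeholders.
import pandas as pd

df = pd.read_csv("gm_value_by_year.csv")

early = df[df.year <= 2008].groupby("gm").value_added.mean()
late = df[df.year > 2008].groupby("gm").value_added.mean()

paired = pd.concat([early.rename("early"), late.rename("late")], axis=1).dropna()
print(paired["early"].corr(paired["late"]))  # near zero would point to luck, not skill
```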

In the posts I did looking at the NFL draft, I found no evidence of excess skill in selecting players. In the year-over-year data there were a few more streaks of bad selections than a purely random model would suggest, but the streaks of good picks were well within the expectations of the random model.
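For flavor, here is a rough sketch of that kind of streak test (not the original NFL analysis; the observed sequence below is made up): count how often a coin-flip model produces a run of good or bad picks at least as long as the one observed.

```python
# Streak test sketch: compare the longest observed run of good (1) or bad (0)
# picks against runs produced by a purely random coin-flip model.
import random

def longest_run(outcomes):
    """Length of the longest run of identical consecutive values."""
    best = cur = 1
    for prev, nxt in zip(outcomes, outcomes[1:]):
        cur = cur + 1 if nxt == prev else 1
        best = max(best, cur)
    return best

observed = [1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1]  # hypothetical pick outcomes
n_sims = 10_000
hits = sum(
    longest_run([random.randint(0, 1) for _ in observed]) >= longest_run(observed)
    for _ in range(n_sims)
)
print(f"Share of random sequences with an equal or longer streak: {hits / n_sims:.3f}")
```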

Another point offered in support of the model is that “subjectively speaking, the individual rankings seem more closely aligned with how well the GMs’ teams performed than with outsiders’ views of their decision-making processes.”[3] I find this somewhat worrisome: it implies the model may simply be a more complicated way of measuring team success. The model is built from wins above replacement, wins above replacement are highly correlated with team performance, and team performance is what tends to drive GM reputations. I'm not sure there is a good way around this, but I would be interested in any counter-intuitive results – maybe some GM who has had bad injury luck in the draft but shows up as a skilled evaluator in free agent signings and trades.

Finally, the paper notes that the correlation between GMs' measured ability in trades and their measured ability in free agent signings is effectively zero[4]. If the model were measuring the combination of skill in identifying good players and skill in paying them salaries advantageous to the team – and that is what I understand the model is attempting to measure – we should expect those skills to carry over between trades and free agent signings. I can't think of a skill-based explanation for why a GM would excel at one and not the other. I can think of a non-skill-based explanation: luck.
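A toy simulation (all parameters invented) makes the point: if a common skill drives results in both activities, the two measured abilities should correlate; if results are mostly noise, they won't.

```python
# Toy model: measured ability in trades vs. free agent signings, with and
# without a shared skill component. All parameters are invented.
import random
import statistics

def corr(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys) * len(xs))

def simulate(skill_weight, n_gms=200):
    trades, signings = [], []
    for _ in range(n_gms):
        skill = random.gauss(0, 1)
        trades.append(skill_weight * skill + random.gauss(0, 1))    # trade results
        signings.append(skill_weight * skill + random.gauss(0, 1))  # signing results
    return corr(trades, signings)

print("shared skill:", round(simulate(skill_weight=1.0), 2))  # around 0.5
print("pure luck:   ", round(simulate(skill_weight=0.0), 2))  # around 0.0
```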

Conclusions

I enjoyed this paper, and I think it is a great start at separating good from bad in GM performance. What I would like to see is more effort put into demonstrating that good or bad performance relates to the specific GM. That is far easier said than done and might require many more observations in the data set.

In the absence of additional data, the fact that skill across trades and free agent signings is not correlated leads me to suspect that GMs who measure highly were unusually lucky in these activities, while those who measure poorly were unusually unlucky. This fits squarely into the paradox of skill: the importance of luck in determining outcomes rises as the overall group becomes more skilled.

In baseball, as in football and investments, the decision-makers tend to be very smart people, and the organizations have developed sophisticated infrastructure to evaluate talent. As a result, individual teams are unlikely to be significantly more skillful in their evaluations than others, and outcomes will increasingly be defined by luck as the overall skill level rises. See here for a fuller explanation.
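A quick illustration of that dynamic (numbers invented): hold the luck term constant, shrink the spread in true skill, and watch how much of the variation in outcomes skill still explains.

```python
# Paradox of skill, illustrated: as the spread in true skill shrinks while
# luck stays constant, luck explains more of the observed outcomes.
# All numbers are invented for illustration.
import random
import statistics

def skill_share_of_outcomes(skill_spread, luck_spread=1.0, n_teams=10_000):
    skills = [random.gauss(0, skill_spread) for _ in range(n_teams)]
    outcomes = [s + random.gauss(0, luck_spread) for s in skills]
    return statistics.pvariance(skills) / statistics.pvariance(outcomes)

for spread in (2.0, 1.0, 0.5, 0.1):
    share = skill_share_of_outcomes(spread)
    print(f"skill spread {spread}: ~{share:.0%} of outcome variance from skill")
```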

My biggest disappointment with all of this is that the next iteration is unlikely to be available for public consumption with Pollis now working in-house for the Cincinnati Reds. I hope he still manages to publish occasionally, and I wish him and the Reds good luck.


[1] Page 62
[2] Pages 47-48
[3] Page 49
[4] Page 49