Vantage recently collaborated with Amin Elhassan of ESPN for a very interesting article on the 10 best screen setters in the league on a budget.
Due to the interest this piece generated, I wanted to follow up by providing some clarity on the importance of screens in generating efficient offense.
The "big news" out of the MIT/Sloan conference is that researchers can now recognize 80% of on-ball screens (with a sensitivity of 82%) using SportVU location data. I guess this makes them the Carmelo Anthony of analytics (OK, fine, even Carmelo probably recognizes more than 80% of on-ball screens ... he just doesn't use them).
While we have no doubt that the algorithms behind this are interesting, recognizing 80% of on-ball screens is like having 80% of a ball -- you're not getting any roll there. This is a perfect example of the disconnect between the analytics bubble and the real world: enormous time and effort spent producing something that has no value to players and coaches.
Even if we were talking about recognizing 100% of screens, this information isn't helpful at all without the context of screen effort (did the screener make contact with the defender or reroute the defender?), screen defense (did the defender hedge, soft show, etc.?), screen usage (did the ball-handler use the screen or split the defenders?), and screen outcome (did the player get an open shot, get a teammate an open shot, or develop a play that resulted in an otherwise effective outcome?).
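To make the context concrete, here is one way the four dimensions above could be represented as a data structure. This is an illustrative sketch only; the field names and category labels are assumptions, not Vantage's actual tracking schema.

```python
# Hypothetical representation of a tracked screen with its full context.
# Field names and category values are illustrative, not Vantage's schema.
from dataclasses import dataclass

@dataclass
class ScreenEvent:
    effort: str    # e.g. "contact", "reroute", "no_contact"
    defense: str   # e.g. "hedge", "soft_show", "switch", "under"
    usage: str     # e.g. "used", "refused", "split"
    outcome: str   # e.g. "open_shot", "teammate_open_shot", "reset"

screen = ScreenEvent(effort="contact", defense="hedge",
                     usage="used", outcome="open_shot")
print(screen.usage)  # prints "used"
```

Without all four fields, a raw count of recognized screens tells you very little about whether the screen actually did its job.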
Spoiler alert: Vantage already tracks every screen, and all of this context to boot. If you missed it, here is a background on Vantage screen analysis.
Recognizing screens is actually the easy part. We're analyzing why they are important and who is good at them.
Who Cares About Screens?
As the only legal method for blocking in basketball, screens are vital to creating open shots, favorable matchups, and forced rotations. Brand new research using Vantage data is verifying what coaches (and smart fans) have long understood: with a league full of world-class athletes, teams that don't screen effectively cannot score efficiently.
The guiding star for any offense is "Offensive Efficiency" or, in other words, Points Per 100 Possessions. Research by Vantage Contributor Lorel Buscher verifies that two Vantage metrics, "Received Screens Per Chance" and "Set Screen Points Per Chance," are significant predictors of Offensive Efficiency.
Set Screen Points Per Chance is the number of points generated through screens per offensive chance, while Received Screens Per Chance is the number of screens set per offensive chance. The upshot is that teams that set a lot of high-quality screens have more efficient offenses.
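Both metrics are simple per-chance rates. A minimal sketch of the arithmetic (the sample counts below are invented for illustration, not actual Vantage data):

```python
# Hypothetical illustration of the two per-chance metrics named above.
# The input numbers are invented; only the arithmetic is from the article.

def set_screen_points_per_chance(screen_points: float, chances: int) -> float:
    """Points generated through screens, per offensive chance."""
    return screen_points / chances

def received_screens_per_chance(screens: int, chances: int) -> float:
    """Screens set (received by teammates), per offensive chance."""
    return screens / chances

# e.g. a team generating 224 points off screens over 7,000 chances:
print(round(set_screen_points_per_chance(224, 7000), 3))  # prints 0.032
```

A value of .032, as shown, matches the league-leading rate discussed below.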
More detail is provided below for the statistics-inclined.
The top two teams in Set Screen Points Per Chance over the past 2.5 seasons are the Spurs and Thunder, each at .032. In other words, these teams have generated .032 points per offensive chance from their screens.
For the Spurs, Tiago Splitter and Tim Duncan each generated .101 points per chance through screens. For the Thunder, Kendrick Perkins (.115) and Nick Collison (.081) led the way.
Here is an example from each screen setter for the Thunder:
These are simple plays, but they require (1) a screener who creates space, (2) a ball handler defenders must respect (or do respect even if they should not), and (3) a shooter who can make an uncontested shot. We see these plays every night, and they are the backbone of efficient teams.
On the other end of the spectrum, the Jazz have struggled to generate points through screens. As a team, they've generated only .015 points per offensive chance through screens, less than half of what the top teams generate. The Jazz are also in the bottom four in number of screens set.
Even a team like the Heat that can rely on a dominant one-on-one player is still in the top eight in Set Screen Points Per Chance.
Researcher Lorel Buscher led the analysis on this project, employing a bootstrapped regression model to verify the importance of screens. Bootstrapping is a resampling method (drawing repeated samples with replacement) used to assign measures of accuracy to model estimates.
Through bootstrapping, we drew 100 random samples from the original data, treating each sample as independent of the others. To be deemed a significant predictor, a metric had to appear in at least 70% of the models built.
Once a metric was found to be not only significant but also a positive predictor of Offensive Efficiency, its relative importance (a weighted average of its estimated coefficient across all 100 models) was calculated.
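The procedure above can be sketched in a few lines. This is a simplified illustration under stated assumptions: the data here are synthetic, a Lasso fit stands in for whatever model-selection step was actually used (the article does not specify it), and all names are invented.

```python
# Sketch of a bootstrapped regression with a 70% appearance threshold.
# Synthetic data; the Lasso selection step is an assumption, not the
# article's actual method.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

n_obs, n_metrics = 90, 6            # e.g. 30 teams x 3 seasons, 6 candidate metrics
X = rng.normal(size=(n_obs, n_metrics))
y = 0.9 * X[:, 0] + 0.7 * X[:, 1] + rng.normal(scale=0.5, size=n_obs)

n_boot = 100
appearances = np.zeros(n_metrics)   # how often each metric enters a model
coef_sums = np.zeros(n_metrics)     # running sum of coefficients

for _ in range(n_boot):
    idx = rng.integers(0, n_obs, size=n_obs)       # resample with replacement
    model = Lasso(alpha=0.1).fit(X[idx], y[idx])   # sparse fit drops weak metrics
    appearances += model.coef_ != 0
    coef_sums += model.coef_

# Significant = appears in at least 70% of the 100 models; importance =
# average estimate across the models in which the metric appeared.
significant = appearances / n_boot >= 0.70
importance = coef_sums / np.maximum(appearances, 1)
print(significant, importance.round(2))
```

In this synthetic setup, the two metrics that truly drive `y` survive the 70% cut with high importance, mirroring how "Received Screens Per Chance" and "Set Screen Points Per Chance" emerged from the real analysis.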
Both "Received Screens Per Chance" and "Set Screen Points Per Chance" were positive predictors with relative importance measures exceeding 0.8.
Our clients have moved beyond the data-collection problem to the integration and analysis problems, and it is time for fans (and the analytics community itself) to do the same. Stay tuned as we roll out more discoveries from our data.