**Regular Season vs. Playoffs: Contest+ %**

First, let’s refresh on the definition of Contest+. We define Contest+ as any shot that is altered, blocked, or contested. Contest+ % is simply the percentage of shots that are altered, blocked, or contested. For an exact definition of those terms, review this article.
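The arithmetic is simple; here is a minimal sketch (the outcome labels are illustrative, not Vantage’s actual tagging format):

```python
from collections import Counter

def contest_plus_pct(shot_outcomes):
    """Share of defended shots that were altered, blocked, or contested."""
    counts = Counter(shot_outcomes)
    contested = counts["altered"] + counts["blocked"] + counts["contested"]
    return contested / len(shot_outcomes)

# Illustrative sample of tagged shot-defense outcomes
shots = ["contested", "open", "altered", "blocked", "open",
         "contested", "open", "contested", "altered", "open"]
print(f"Contest+ %: {contest_plus_pct(shots):.1%}")  # 6 of 10 -> 60.0%
```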

Now, let’s explore the methodology and results. I looked at Team Contest+ % for a sample of regular season games and the playoff games of the 2011-12 season. For those interested in just the results and conclusions, skip ahead. First, I conducted Levene’s test for equality of variance and found that we must reject the null hypothesis of equal variances in the two samples (p-value < .01). Next, I conducted Welch’s t-test to see if the means were statistically different. Here are the results:

t-Test: Two-Sample Assuming Unequal Variances

| | Reg Season | Playoffs |
| --- | --- | --- |
| Mean | 0.4729023 | 0.456445 |
| Variance | 0.0080041 | 0.004745 |
| Standard Deviation | 0.0894656 | 0.068886 |
| Observations | 905 | 168 |
| Hypothesized Mean Difference | 0 | |
| df | 283 | |
| t Stat | 2.7022436 | |
| P(T<=t) one-tail | 0.0036517 | |
| t Critical one-tail | 1.6502557 | |
| P(T<=t) two-tail | 0.0073034 | |
| t Critical two-tail | 1.9683819 | |

*Note: Playoff teams had a regular season Contest+ % of 47.5%.*
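The test above can be reproduced from the summary statistics alone. A minimal sketch, using only the standard library and the numbers taken straight from the table:

```python
import math

# Summary statistics from the table above
mean_rs, var_rs, n_rs = 0.4729023, 0.0080041, 905   # regular season
mean_po, var_po, n_po = 0.4564450, 0.0047450, 168   # playoffs

# Welch's t statistic (does not assume equal variances)
se = math.sqrt(var_rs / n_rs + var_po / n_po)
t_stat = (mean_rs - mean_po) / se

# Welch-Satterthwaite degrees of freedom
df = (var_rs / n_rs + var_po / n_po) ** 2 / (
    (var_rs / n_rs) ** 2 / (n_rs - 1) + (var_po / n_po) ** 2 / (n_po - 1)
)

print(f"t = {t_stat:.2f}, df = {df:.0f}")  # t ~ 2.70, df ~ 283
```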

Notice that Contest+ % in the playoffs has a *lower* average than in the regular season. Moreover, the difference in means is statistically significant (p-value < 0.01). So what do we make of this? Does this actually mean that teams contest a lower percentage of shots in the playoffs than in the regular season? The evidence certainly seems to say “yes.” Before reaching that conclusion, though, we should keep a few things in mind. First, the sample sizes are drastically different: there are nowhere near as many playoff games as regular season games. Second, there are fewer teams in the playoffs, and theoretically better ones that may simply be better at getting open looks. More than a quarter of the playoff games come from the Heat and the Celtics, who played 23 and 20 games, respectively, that year. Finally, the actual difference in averages is not that large (just 1.6 percentage points). Statistically significant or not, this isn’t a major difference. So while we may not be ready to conclude that players contest fewer shots in the playoffs, one thing we can safely say is that they aren’t contesting *more* shots. And *that* is surprising.

So what about our expectation that teams give more effort in the playoffs and, therefore, should contest more shots? The fact that players don’t contest more shots in the playoffs (despite what we assume to be marginally better effort) suggests one of three things (or some combination): (1) teams can’t shake bad habits in the playoffs that were developed in the regular season; (2) skill is the determining factor in contesting shots, not effort; or (3) offenses that are good at getting uncontested shots trump the marginal increase in contesting shots that greater defensive effort produces.

I’ll end this post with a graph of Contest+ % for each playoff game for the conference finalists:

As you may be able to tell from the graph, the Celtics had the smallest variance while the Spurs and Heat had the largest variances. The Thunder also had three of their four worst games in terms of contesting shots in the last three games of the Finals, and this general downward trend is consistent across the teams to a lesser degree.

In part 2 of this article, we’ll explore Open+ Frequency, how it changes from the regular season to the playoffs, and its relationship with Contest+%.


**Is LeBron Getting Lucky on Defense?**

That can’t be right, can it?

Over at Hickory High, Ian Levy calculates Expected Points per Shot (XPPS) based on shot locations. The calculation is simple: multiply each player’s total FGA from each location by the leaguewide expected points per shot at that location, add it up and then divide by the player’s total FGA. Levy then adjusts the total by the number of fouls a player draws using the average value of a pair of free throws (1.511). Then he compares this to the players’ actual points per shot.

Here at Vantage, we can take this metric one step further. Not only do we have shot location data, but we have shot defense data as well. Additionally, we even know *who* is guarding the shooter. So we can calculate *a defender’s* XPPS. Using this metric, we can see if the defender is getting lucky (or unlucky) with his defense. If the defender is contesting shots and forcing the shooter to take the shot from more difficult shot locations, we’d expect to see a very low XPPS.

Let’s look at LeBron’s defense and see if he has been lucky. Is he forcing his shooters into tough shots? First, we need to calculate the XPPS for the league.

Here is the league-wide XPPS on contested shots:

| Shot Location | XPPS | FGA |
| --- | --- | --- |
| a | 1.049048 | 1733 |
| b | 1.035159 | 3214 |
| c | 0.971154 | 2496 |
| d | 0.980715 | 3215 |
| e | 1.065789 | 1748 |
| f | 0.923077 | 26 |
| g | 0.777579 | 1677 |
| h | 0.700235 | 2125 |
| i | 0.781012 | 5119 |
| j | 0.719453 | 2046 |
| k | 0.738318 | 1712 |
| l | 1 | 12 |
| m | 0.47619 | 21 |
| n | 0.76643 | 2252 |
| o | 0.769585 | 868 |
| p | 0.783333 | 840 |
| q | 0.760017 | 2371 |
| r | 0.736842 | 19 |
| s | 1.0625 | 32 |
| t | 0.989032 | 5197 |
| u | 1.0197 | 5127 |
| v | 0.967966 | 4901 |
| w | 0.807692 | 52 |

(Note: See location chart at the bottom of this post.)

As we can see in the table, some locations have significantly higher XPPS for contested shots than others. We can do this for each type of shot defense.

Now let’s look at LeBron. How has he defended each location?

| Location | XPPS | FGA |
| --- | --- | --- |
| a | 1.241379 | 29 |
| b | 0.833333 | 54 |
| c | 1.241379 | 29 |
| d | 1.090909 | 66 |
| e | 1.2 | 25 |
| g | 0.526316 | 19 |
| h | 0.727273 | 22 |
| i | 0.878049 | 41 |
| j | 0.545455 | 22 |
| k | 0.714286 | 14 |
| n | 0.48 | 25 |
| o | 0.857143 | 14 |
| p | 0.571429 | 14 |
| q | 1 | 18 |
| s | 2 | 1 |
| t | 1.125 | 64 |
| u | 1 | 100 |
| v | 0.8 | 50 |
| w | 1 | 2 |
| Total | 0.934319 | 609 |

(Note: See location chart at the bottom of this post.)

We can then break this down even further by looking at the type of shot defense LeBron has played at each location.

After calculating LeBron’s expected points and actual points for each shot location and type of shot defense (his FGA at each location times the expected points there), we add it all up, include expected points on fouls, and then divide by total attempts plus missed foul shots. We find that LeBron’s XPPS is 1.01, compared to his actual PPS of 0.96.
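The weighted-average step can be sketched using the league values from the first table and LeBron’s attempts from the second (three locations shown for brevity; the full calculation also folds in expected points on fouls, omitted here):

```python
# League XPPS on contested shots at a few locations (from the first table)
league_xpps = {"a": 1.049048, "t": 0.989032, "u": 1.0197}

# Shots LeBron defended at those locations (from the second table)
lebron_fga = {"a": 29, "t": 64, "u": 100}

# Defender XPPS: FGA-weighted average of the league expected value
total_fga = sum(lebron_fga.values())
xpps = sum(league_xpps[loc] * fga for loc, fga in lebron_fga.items()) / total_fga
print(f"XPPS over these locations: {xpps:.3f}")  # ~1.014
```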

What does this mean? Our intuition is that LeBron has gotten slightly lucky: his actual points per shot allowed is lower than you would expect given where, and against what type of shot defense, opponents took their shots. We suspect the difference isn’t significant enough to say he’s due for any type of regression.

I plan to do a full analysis of all players in the NBA to ground the intuition with data, and to identify the luckiest and unluckiest defenders in the NBA. Stay tuned.


**Poking Holes in Oladipo with New Shot Defense Metrics?**

**Contest+**

Vantage tracks six levels of shot defense, including block, alter, and contest (defined as the defender being within 3 feet of the shooter with his hand up). Contest+ is the percentage of shots defended in which the player blocks the shot to his team’s possession, blocks it to the opponent’s possession, alters, or contests.

**Points Allowed Per Shot**

The number of points allowed per shot defended. This includes free throws resulting from fouls, and thus penalizes a player with a high Fouls Per Shot number.

**Shots Defended Per Chance**

The number of shots defended per defensive chance. This measures defensive activity and therefore gives context to box score counting stats, in which level of activity is key.

**FG% Against**

FG% by opponents on all shots defended.

Here is what NBA teams utilizing Vantage are seeing in the numbers from Oladipo, McLemore, Porter, Muhammad, and Carter-Williams:

The numbers seem to highlight some weaknesses in Oladipo’s shot defense: his foul rate is high, and he allowed shooters to hit almost 39% against him. As a result, his Points Allowed Per Shot was .897. For context, Tony Allen allows .894 Points Per Shot against NBA-level talent. Giving up more points to college shooters does not bode well for a player touted as an NBA-ready defender. However, a mitigating factor is his high Shots Defended Per Chance number and his high help rate (only Otto Porter averaged more helps per chance). Thus, many of his points allowed came when he was not guarding his primary man.

We can’t let Oladipo off the hook completely though. Watching the video of his non-contested defense shows him relying on his active hands too much in help rather than playing with his feet, merely waving as guys go by, and he needs to temper his aggressiveness (especially when tired) so that he gives up fewer good looks. Oladipo exhibits the capacity to *become *a good NBA defender, but he is not there yet.

Keep checking in or follow us on Twitter as we continue to introduce new statistics in the following 10 categories:

1) Scoring

2) Facilitation

3) Rebounding

4) Screening

5) Turnovers and Fouling

6) **Shot Defense**

7) Disruptions

8) On-Ball/Screen Defense:

- Keep in Front % (KIF%)
- Close Out Points Allowed
- Points Allowed Per Screen
- Effective Screen Defense Rate

9) Help/Double Team Defense

- Helps Per 100 Chances
- Double Teams Per 100 Chances
- Points Allowed Per Help/Double Team
- Effective Help/Double Team Rate

10) Movement and Involvement


**“Always Will Be About Buckets”**


A look at the move (here at Vantage Sports, we call these “post-acquisition moves”) just prior to a shot attempt yields insights into *how to get* buckets.

We first looked at the frequency of post-acquisition moves that led to field goals by counting the number of field goal attempts after each move. Catch-and-shoot attempts make up the majority of field goal attempts, but for the purposes of our analysis we will not consider them a post-acquisition move. The dominant move is a drive to the right or left (approximately 20% of the time). Ball screens, where the dribbler comes off an on-ball screen, are the third most popular move. Listed below are all of the post-acquisition moves that Vantage Sports tracks in the data set, sorted by frequency of field goal attempts. The breakdown between two-point and three-point field goal attempts follows.

Most post-acquisition drives lead to two-point field goal attempts, while ball screens are generally used to create space for a three-point attempt. These findings also show that crossovers, the move Uncle Drew uses to drive straight to the basket in Episode 2, are used in the NBA mostly to set up two-point attempts (approximately 3.5 crossovers leading to two-point attempts for every crossover leading to a three-point attempt).

Looking at the most frequently used post-acquisition moves, we can see what the average field goal percentage is after a certain play is made.

Clearly, the best option is a catch and shoot scenario, but let’s dive a bit deeper into the data.

**The Moves of Diminishing Returns**

One shot fake is not as successful before three-point attempts as before two-point attempts (28.63% for threes versus 42.47% for twos). Notably, one shot fake is the post-acquisition move that most frequently leads to fouled three-point attempts (besides catch-and-shoot attempts).

With shot fakes and jab steps, fewer is better. The diminishing returns on these moves are apparent in the field goal percentages. Going from one or two shot fakes to three or more costs approximately 4.25 percentage points of field goal percentage. The drop is even steeper with jab steps: one jab step yields 38.70% shooting, two drops the field goal percentage to 36.84%, and in the rare cases of three or more jab steps, 18.18% shooting from the field.

Facing up the defender (after posting up) from inside the arc produces a two-point field goal percentage well below average (approximately 6.88 percentage points below the average two-point field goal percentage). Beyond the arc, the gap shrinks to 3.86 percentage points below average. This suggests that despite facing a primary defender in these situations, face-ups are lower-percentage shots when closer to the basket.

**Which Move Is the Best?**

As in other studies, the most frequent moves dominate the averages. Without weighting for the number of observations of each move, the moves that produce the highest field goal percentage (excluding fast break and catch-and-shoot opportunities) for both two-point and three-point attempts are as follows:

Keeping in mind that jump stops and two jab steps combine for less than 2% of the three-point field goals in our sample, it is clear that ball screens are the optimal post-acquisition move for creating a high-percentage three-point shot.

The Euro step has broken into the game in a big way, and the numbers show why: it is more effective than straight drives, jump stops, or spins.

These data show just how hard it is to get buckets in the NBA when attempting to create one’s own shot.


**Professional and Collegiate Teams: The Future of Scouting, Player Analysis and Team Management**

Vantage products provide our team clients with actionable insight in scouting, player analysis, and team development. Our unprecedented data set, relevant video, and intuitive interfaces make advanced analytics simple and actionable.


**Visualizing Vantage’s Metrics**

The positive correlations are shown in blue, the negative correlations in red; the darker the color, the greater the magnitude of the correlation. For the pie graphs, the more filled they are, the higher the correlation. In addition, notice that the lines in the small graphs slope upward for positive correlations and downward for negative ones.

In the upper left corner is Offensive Efficiency followed by Received Screen Outcome Efficiency, Open+ FG%, Set Screen Points Per Chance, Set Screens Per Chance, Set Screen Outcome Efficiency, Screens Received Per Chance, Isolation Frequency, Roll-Pop%, Solid Screen%, Contest+ FG%, Open+ Frequency and Cut Efficiency. Please see the table at the bottom of this post for definitions.

Open+ FG% and Contest+ FG% have the highest correlations with Offensive Efficiency, followed by Set Screen Points Per Chance. This is not in the least bit surprising, since all three of these metrics are directly related to points. Likewise, statistics like Received Screen Outcome Efficiency and Set Screen Outcome Efficiency are highly correlated. Perhaps most interesting, though, is the correlation between Set Screens Per Chance (or Screens Received Per Chance) and Set Screen Points Per Chance. Does setting a lot of screens make teams more efficient? Or do more efficient teams just set more screens? Which metric causes the other?

Now let’s look at the correlations for some of Vantage’s defensive statistics.

In the upper left corner is Defensive Efficiency followed by Effective Screen Defense Rate, Contest+, Pressure Rate Per 100 Chances, Effective Help Rate, Turnovers Forced Per Chance, Defensive Moves Per Chance (also called Defensive Activity Rate), Inside Shots Against %, Close-Out Points Allowed, Effective Double Team Rate, Keep in Front % and Deflections Per 100 Chances. Again, please refer to the bottom of the post for definitions.

Turnovers Forced Per Chance and Close-Out Points Allowed are the most correlated with Defensive Efficiency. However, Close-Out Points Allowed is also the only metric that is on the scale of points allowed. For example, Effective Help Rate measures the share of help attempts that don’t end in a score, an assist+, or a missed Open+ shot; it is not directly tied to points allowed. This does not mean it isn’t a useful statistic, as it is more likely to be *predictive* than *reflective*.

Now let’s look at the relationship between switching on screens and how effective teams are at defending screens (Effective Screen Defense Rate).

Teams with points (each point is an individual game) in the upper right of the graph switch on screens frequently while remaining effective at defending screens. Theoretically, these teams need versatile defenders who can guard multiple positions in order to switch effectively. As the graph shows, not many teams switch on screens very often. One interesting team is the Knicks, who come the closest to having a cluster of points in the upper right corner; they appear to switch on screens more than most teams while still playing effective screen defense. Another interesting team is the Nuggets, whose points are scattered all over the place: they switch sometimes and not others, and they play both good and bad screen defense. Finally, it’s worth noting the scale of the graph, which runs from 0 to 0.4 with occasional games near or above 0.5. In almost every game, switching on a screen is the *less* likely event.

Let’s take a closer look at the graph above with a subset of 6 teams (the Bulls, Celtics, Clippers, Heat, Lakers and Thunder).

Each team is fit with a regression line along with a shaded region showing the 95% confidence interval for the fit. For most teams, an increase in switch % on screens corresponds with a decrease in Effective Screen Defense Rate. What this graph is really useful for, though, is showing the magnitude of that decrease. For example, the Lakers are significantly worse at defending screens the more they switch, while a team like the Thunder plays consistent screen defense whether they switch or not. In fact, the Thunder’s regression line shows a slight *increase* when they switch (meaning they play slightly better screen defense when switching).


**Screening Ability and Offensive Efficiency**

Due to the interest this piece generated, I wanted to follow up by providing some clarity on the importance of screens in generating efficient offense.

**Tracking Screens**

The “big news” out of the MIT/Sloan conference is that researchers can now recognize 80% of on-ball screens (with a sensitivity of 82%) using SportVu location data. I guess this makes them the Carmelo Anthony of analytics (OK, fine, even Carmelo probably recognizes more than 80% of on-ball screens … he just doesn’t use them).

While we have no doubt that the algorithms to accomplish this are very interesting, recognizing 80% of on-ball screens is like having 80% of a ball — you’re not getting any roll there. This is a perfect example of the disconnect between the analytics bubble and the real world. People spend all this time and effort in producing something that has no value to players and coaches.

Even if we were talking about recognizing 100% of screens, this information isn’t helpful at all without the context of screen effort (did the screener make contact with the defender or reroute the defender?), screen defense (did the defender hedge, soft show, etc.?), screen usage (did the ball-handler use the screen or split the defenders?), and screen outcome (did the player get an open shot, get a teammate an open shot, or develop a play that resulted in an otherwise effective outcome?).

Spoiler alert – Vantage already tracks every screen and all this context to boot. If you missed it, here is a background on Vantage screen analysis.

Recognizing screens is actually the easy part. We’re analyzing *why they are important* and *who is good at them*.

**Who Cares About Screens?**

As the only legal method for blocking in basketball, screens are vital to creating open shots, favorable matchups, and forced rotations. Brand new research using Vantage data is verifying what coaches (and smart fans) have long understood: with a league full of world-class athletes, teams that don’t screen effectively cannot score efficiently.

The guiding star for any offense is “Offensive Efficiency,” or, in other words, Points Per 100 Possessions. Research by Vantage Contributor Lorel Buscher verifies that two Vantage metrics, “Received Screens Per Chance” and “Set Screen Points Per Chance,” are significant predictors of Offensive Efficiency. Set Screen Points Per Chance is the number of points generated through screens per offensive chance, while Received Screens Per Chance is the number of screens received per offensive chance. The upshot is that teams that set a lot of high-quality screens have more efficient offenses.

More detail is provided below for the statistics-inclined.


The top two teams in Set Screen Points Per Chance over the past 2.5 seasons are the Spurs and Thunder, each at .032. In other words, these teams have generated .032 points per offensive chance from their screens.

Both Tiago Splitter and Tim Duncan generated .101 points per chance through screens. For the Thunder, Kendrick Perkins (.115) and Nick Collison (.081) led the way.

Here is an example from each screen setter for the Thunder:

These are simple plays that require (1) a screener who creates space, (2) a ball handler defenders must respect (or do respect even if they should not), and (3) a shooter who can make a non-contested shot. We see these plays every night, and they are the backbone of efficient teams.

On the other end of the spectrum, the Jazz have struggled to generate points through screens. As a team, they’ve only generated .015 points per offensive chance through screens, almost half of what the top teams generate. The Jazz are also in the bottom four in amount of screens set.

Even a team like the Heat that can rely on a dominant one-on-one player is still in the top eight in Set Screen Points Per Chance.

**The Nitty-Gritty**

Researcher Lorel Buscher led the analysis on this project, employing a bootstrapped regression model to verify the importance of screens. Bootstrapping is simply a resampling method that assigns measures of accuracy to sample estimates.

By bootstrapping, we were able to take 100 random samples from the original data, allowing each sample to be considered independent of the others. A metric was required to appear in at least 70% of the models built to be deemed a significant predictor. Once a metric was found to be not only significant but also a positive predictor of Offensive Efficiency, its relative importance (a weighted average of its recorded estimate across all 100 models) was calculated.

Both “Received Screens Per Chance” and “Set Screen Points Per Chance” were positive predictors with relative importance measures exceeding 0.8.
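The resampling logic can be sketched in a few lines. This is a toy version on synthetic data, not Buscher’s actual model (which involved many metrics and the relative-importance weighting described above):

```python
import random

random.seed(0)

# Synthetic per-game data: offensive efficiency driven by screens plus noise
n = 200
screens = [random.uniform(0.4, 0.8) for _ in range(n)]
off_eff = [95 + 20 * s + random.gauss(0, 2) for s in screens]

def ols_slope(x, y):
    """Closed-form slope of a one-predictor least-squares fit."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

# 100 bootstrap resamples; count how often the predictor comes out positive
positive = 0
for _ in range(100):
    idx = [random.randrange(n) for _ in range(n)]
    if ols_slope([screens[i] for i in idx], [off_eff[i] for i in idx]) > 0:
        positive += 1

print(f"Predictor positive in {positive} of 100 bootstrap models")
```

With a real positive relationship in the data, the predictor clears the 70% appearance threshold easily; with pure noise it would hover around chance.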

Our clients have moved beyond the data-collection problem to the integration and analysis problems, and it is time for fans (and the analytics community itself) to do the same. Stay tuned as we roll out more discoveries from our data.


**Who Protects the Basket?**

In a previous article, we looked at the value of contesting shots. However, rim protection entails more than just contesting shots; we need to look at how often the defender blocks shots as well. By combining a player’s contest/altered frequency with his block frequency, we can develop a rim protection rate statistic. So who is protecting the basket?
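Combining the two frequencies is straightforward; a minimal sketch (the input frequencies here are illustrative, not actual Vantage numbers):

```python
def rim_protection_rate(block_freq, contest_alter_freq):
    """Share of rim shots defended that were blocked, contested, or altered."""
    return block_freq + contest_alter_freq

# Illustrative: a big man who blocks 25% of rim shots against him
# and contests or alters another 38%
rate = rim_protection_rate(0.25, 0.38)
print(f"Rim protection rate: {rate:.1%}")  # 63.0%
```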

We can look at this a few ways. First, rather than just presenting a table of information, let’s look at rim protection in the form of a heat map.


We see some interesting, if not completely surprising, names here. Brandan Wright has long been a per-minute star, averaging a 21.2 Player Efficiency Rating over the last two years. He also finished with a 2.2 xRAPM last year and was on the plus side the year before. After missing the first 23 games with an injury, Wright is back and playing well. The assumption with Wright has long been that he’s too thin to muscle with the big boys down low. Yet despite his skinny frame, his long wingspan helps him block an above-average share of shots and contest whatever he doesn’t block. If Wright can stay healthy, he’s a player to keep an eye on.

We see that Asik is one of the better rim protectors as well. In fact, he protects the rim at a better rate AND allows a lower FG% at the rim on those contested shots than Dwight Howard. As has been mentioned in many rumors surrounding Asik, the Rockets are actually better defensively when he is on the court than when Dwight is. Does this mean Houston is trading away the wrong center? That would be taking it a bit far. Dwight is still much better offensively, and while he hasn’t been as good as Asik at protecting the rim, he isn’t chopped liver either.

Brook Lopez finished with the highest Contest+ so it is not surprising to see him leading the way in rim protection. While he only blocks shots 0.6% more than the average big man (PFs and Cs), he is able to get his hand up and at least contest or alter most shots.

When I first ran the numbers in the previous article for contest frequency, I was surprised to see Serge Ibaka with a relatively low contest/altered frequency. Alas, it was only because he was too busy blocking nearly 25% of the shots attempted against him near the basket!

However, perhaps the biggest surprise on this list is Tyson Chandler, the one-time DPOY. With a 45.8% rim protection rate, he comes in as one of the worst players in our sample, a full 8 percentage points below the average big man, as the “cold” blue box shows.

One thing I’m sure we all noticed when looking at rim protection rate is that many of the names we would expect to see at the top are actually not at the top. So are guys like Roy Hibbert, Tim Duncan, and Marc Gasol not as good as we expect? Is rim protection rate perhaps misleading? Is rim protection FG% actually the better statistic? It’s hard to answer the last question without having a larger sample size and observing how both rim protection rate and FG% vary over time. However, we can plot rim protection rate vs. rim protection FG% to see if there are significant differences.


Well that is pretty messy. The correlation coefficient is actually not as bad as the graph looks (-0.26) and the fact that the two statistics are negatively correlated is certainly a good thing — i.e. as rim protection rate goes up, rim protection FG% should go down. Theoretically at least. Still, it’s not a very strong correlation and leaves us asking the question, *Which of these metrics is stronger in predicting a team’s defensive efficiency?*

We also have one other metric we can look at: close frequency. What is close frequency? It is the percentage of a player’s shots defended that came near the basket. We can get an idea of which players are always near the basket and which players wander around a bit more and are guarding shots further away from the hoop.


In the graph above, the players with larger bubbles are defending a higher percentage of their shots near the basket. We see that past DPOYs like Marc Gasol, Tyson Chandler, and Tim Duncan, and perhaps this year’s DPOY Roy Hibbert, do not have the largest bubbles. The percentage of shots they have defended near the basket ranges from about 60% to 65%, which means that about 35% to 40% of the shots they have defended came at mid-range or near the 3-point line. Of course, with the 3-seconds-in-the-paint rule, no one can maintain a 100% rate here because ultimately you have to leave the paint. In the future, we will look at the distribution of each player’s shots defended along with his defensive usage (Vantage tracks shots defended per chance). We can also use the distribution of shots to develop an expected points per shot metric.

For now though, we can see which players are near the basket the most (close freq), which players attempt to protect the basket (RP rate) and which players successfully protect the basket (RP FG%).


Finally, as a refresher, it’s worth seeing visually the difference between an open shot under the basket and the same shot contested:

Almost every player in the league allows a higher FG% when the shot is open than when it is contested. And we can see a clear difference between the two types of shot defense.


**Defending the Three**


That was a trick question. Of course not! This video is not an example of good defense but rather of Paul George getting lucky that an open shooter missed a wide-open shot.

In a previous article, I developed a framework for calculating defensive XPPS based on Ian Levy’s Expected Points per Shot. Let’s apply this method but focus specifically on three-point shooting and determine which players are getting lucky. Have Paul George’s opponents missed a ton of open shots?

In order to understand what constitutes an open shot, let’s revisit some of Vantage’s definitions for shot defense.


**Steve Novak’s Defense – By The Numbers**

We can measure defensive ability across many different categories, including:

a) how well he keeps opposing drivers in front of him (keep in front% or KIF%)

b) how well he contests, alters, or blocks shots (Contest+%)

c) opposing players’ field goal percentage when he is defending (FG% Against)

d) how well he double teams (Double Team Effectiveness Rate or DBL)

e) how well he helps on defense (Help Effectiveness Rate or HELP)

In our sample (which includes the final quarter of last season and a little less than half the games played this season), Novak’s KIF% is 56%. This includes all situations, from screens to closeouts to isos. This puts him below the median for the Knicks, a little better than the league average, and just about 10 percentage points higher than Kobe Bryant (with apologies to morning radio hosts who still think Kobe is an adequate defender).

Our data has already shown the importance of getting a hand up on shooters. At 38%, Novak’s Contest+% is average for the Knicks but toward the bottom of the league.

In FG% Against, Novak at 42.11% is average for both the Knicks and the league.

DBL measures all attempted double teams as well as outcomes deemed effective (no points, no assist, etc.) and ineffective (assist, pass to open shot, crucial pass, foul, points). At only 6% DBL, Novak is the worst double-team defender on the Knicks, and watching the film is a painful experience. Novak’s inability to double quickly allows offenses to easily find open shooters. Furthermore, Novak allows opponents to split his double team once every five attempts, which precipitates a complete defensive breakdown.

HELP measures all attempts to help a teammate after a No Keep in Front and again takes into account effective (no points, keep in front, no assist, etc.) and ineffective outcomes. At 12.5% HELP, Novak is again the worst Knick.

