Friday, 22 December 2017

Tackling Success Rate & the Influence of Luck

About four years ago I wrote a post that speculated on the transfer price associated with a group of equally talented players whose success rate in a particular skill had actually been randomly generated.

Each was given a 10% chance of succeeding and 100 opportunities to succeed, and the "best" performers were ranked accordingly.

Of course, the difference in success rate was entirely down to randomness.

If you bought the "best" at a premium, you were paying for unsustainable luck. If you bought the "worst", you were getting a potential bargain, if the price reflected an apparent under-performance that would likely regress towards 10%.
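That original thought experiment is easy to reproduce. The sketch below is a minimal simulation of the setup described above; the seed, player count and variable names are my own choices, not taken from the original post.

```python
import random

random.seed(7)

N_PLAYERS = 100   # equally talented players
TRUE_RATE = 0.10  # every player's real chance of success
N_TRIALS = 100    # opportunities given to each player

# Simulate each player's success rate; any spread is pure randomness,
# since every player shares the same underlying 10% talent.
rates = []
for _ in range(N_PLAYERS):
    successes = sum(1 for _ in range(N_TRIALS) if random.random() < TRUE_RATE)
    rates.append(successes / N_TRIALS)

rates.sort(reverse=True)

# The "best" and "worst" performers differ noticeably despite
# identical underlying ability.
print(f"best: {rates[0]:.0%}, worst: {rates[-1]:.0%}, "
      f"group mean: {sum(rates) / len(rates):.0%}")
```

Run it a few times with different seeds and a spread of apparent "talent" appears every time, even though none exists.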

It's less straightforward when looking at real players.

Players play on different teams, with different tactical setups and different teammates. They probably have varied skill differentials across a variety of skill sets, and they have differing numbers of opportunities to demonstrate their talent or lack of it.

Attempting to partly account for the randomness in sampling is most applicable in on field events where there is a simple definition of success or failure.

In such areas as tackles made, raw counting numbers are much more a product of overall team talent and setup, so there has been a tendency to move on to the percentage of tackles won as an outward sign of competence.

Unlike the revolution in scoring and chance creation, where pre-shot parameters are modeled on historical precedent to create expected goals or chances, there is little prospect, given the available data, of similarly modeling expected tackles, dribbles or aerial duels, for example.

But we should at least try to account for the ever-present randomness, even in large samples; doing so partly transforms purely descriptive percentage counts into a more informed predictive metric capable of projecting future success rates.

It's easy to be impressed by the eye test that sees four successful tackles made by a player in a single half of football. But aside from draining the tension from the final minutes of a game by declaring said player "man of the match", as a projection of future performance it is riddled with "luck" and largely unrepresentative of future, larger-scale output.

To attempt to overcome this, we can work out what a distribution of outcomes would look like if there is no differential in a measured skill within a group of players. We can then compare this distribution to an actual distribution of outcomes where we suspect a differential exists.

For example, in the tackling ability of Premier League defenders.

We can then try to allow for the randomness that may exist in the observed success rates of players who have had differing opportunities to prove their tackling prowess, producing a more meaningful projection.

The more tackles a player has been involved in, the more signal and less noise his raw rate will contain; in smaller samples, noise will proliferate and perhaps produce extremes that will not be representative of any future output.


Here's the raw tackle success rate from the MCFC/Opta data dump from the 2011/12 season.

It lists the 140 defenders involved in the most tackles during the whole of that season. The left hand side of the plot has players with the most tackles, moving to the fewest at the right hand side, where more extreme rates, both apparently good and bad, begin to appear.


The second, identically scaled plot has attempted to regress the observed rate towards the mean for the group, based on the differing number of tackle attempts each defender has been involved in.

All of the small-sample extremes, either good or bad, are dragged closer to the group average, while the larger samples group slightly more tightly, but they were clustered more closely to the group mean to begin with.
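The mechanics of that regression can be sketched very simply: blend each player's observed record with a fixed number of league-average attempts. Both constants below are assumptions for illustration; the post doesn't specify the group rate or the strength of the regression.

```python
GROUP_RATE = 0.55      # assumed group-wide tackle success rate
PRIOR_ATTEMPTS = 100   # assumed regression strength: the number of
                       # league-average attempts mixed into each record

def regressed_rate(successes, attempts,
                   group_rate=GROUP_RATE, prior=PRIOR_ATTEMPTS):
    """Shrink an observed rate toward the group mean, weighted by attempts."""
    return (successes + prior * group_rate) / (attempts + prior)

# A 90% tackler on only 30 attempts is dragged a long way back toward
# the group mean...
print(f"27/30 observed (90%) -> {regressed_rate(27, 30):.1%} regressed")

# ...while a 60% tackler on 300 attempts barely moves.
print(f"180/300 observed (60%) -> {regressed_rate(180, 300):.1%} regressed")
```

The larger the sample, the less the observed rate moves, which is exactly the pattern the second plot shows: small-sample extremes collapse toward the pack while well-sampled players stay roughly where they were.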

The first plot illustrates the interplay between randomness and skill. It is at its most deceptive in smaller sample sizes. It is perfectly adequate as a descriptive stat for defenders, but deeply flawed as a projection of a defender's likely true tackling talent. And the two are often conflated.

The second plot tries to strip out the differing influence of randomness over different sample sizes, showing that there is probably a skill differential among tackling defenders, but it is nowhere near as wide as the raw stats imply, even after a season's worth of tackles.

And if you're rating or buying some of the 90%+ success rate tacklers based on just 30 or 40 interactions, you're probably staking your reputation on a hefty dose of unsustainable good fortune, as they fall back into the pack with greater exposure.
