
Monday, 15 January 2018

Arsenal Letting in Penalties Doesn't Defy the Odds.

Arsenal fans have been getting hot under the collar about penalties.

Penalty kicks have either been awarded (against Arsenal) when they shouldn't have been, not awarded (to Arsenal) when they should have been, or, when they have been conceded, they've gone in. A lot.

The latter has spawned the inevitable trivia titbit.


There's nothing wrong with such trivia as fuel for the banter engine between fans, but almost inevitably it quickly becomes evidence for an underlying problem that exclusively afflicts Arsenal.

Cue the Daily Mail: "Why is Arsenal's penalty saving record so poor?"

So let's add some context.

We're into familiar selective cutoff territory, where you pick a starting point in a sequence to make a trend appear much more extreme than it actually is.

As you'd probably guess, Arsenal saved a penalty just prior to the start of the run.

They also saved one Premier League penalty in each of the preceding two seasons, two more per season if you go back two further campaigns, and, obligingly, opponents' penalty takers also missed the target completely on a handful of other occasions.

If you shun the exclusivity of the Premier League, Arsenal keepers have made penalty saves in FA Cup shootouts and induced two misses in Community Shield shootouts, the latter as recently as 2017.

Over the history of the Premier League, 14% of penalties have been saved by the keeper. The remainder have gone wide, hit the post, been scored, or have seen an attempt made to pass the ball to a team mate (Arsenal, again).

Arsenal's overall Premier League penalty save rate is also 14%.

So you should ask if we're simply seeing a random streak that was likely to happen to someone, not necessarily Arsenal, over the course of Premier League history.

Arsenal have conceded nearly 100 Premier League penalties because they have been ever-present, respected members of the top flight.

Of the current Premier League members, 17 sides have had the opportunity to concede a run of 23 consecutive penalty goals.

If we simulate all the penalties faced by each of these teams using a generic penalty success rate, we find that, in half of the simulations, at least one side during the current history of the Premier League will have conceded a run of 23 penalty goals or more.
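The streak simulation described above can be sketched in a few lines. The per-team penalty counts and the generic conversion rate used below are illustrative assumptions (only Arsenal's figure of nearly 100 comes from the post), so the output is a demonstration of the method rather than a reproduction of the quoted 50% figure.

```python
import random

def longest_scoring_run(n_pens, p_score, rng):
    """Longest streak of consecutive converted penalties in n_pens attempts."""
    best = cur = 0
    for _ in range(n_pens):
        if rng.random() < p_score:
            cur += 1
            best = max(best, cur)
        else:
            cur = 0
    return best

def prob_any_run(team_pens, run_len=23, p_score=0.78, sims=10_000, seed=42):
    """Fraction of simulated histories in which at least one team
    concedes run_len or more consecutive penalty goals."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        if any(longest_scoring_run(n, p_score, rng) >= run_len for n in team_pens):
            hits += 1
    return hits / sims

# Hypothetical penalties-faced counts for 17 long-serving sides;
# Arsenal's ~95 is from the post, the rest are illustrative.
pens_faced = [95, 90, 88, 85, 82, 80, 78, 75, 72, 70, 68, 65, 60, 55, 50, 45, 40]
print(prob_any_run(pens_faced))
```

Seeding the generator keeps the estimate reproducible; increasing `sims` tightens it.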

Letting in penalty after penalty, sometimes up to and beyond 23 is something that is going to have happened slightly more often than not in the top flight, based on save rates.

Arsenal just happen to have had both the opportunity and the luck to have been reality's slightly odds-on winner.

Friday, 5 January 2018

Making xG More Accessible

When the outputs of probabilistically modelled expected goals met the mainstream media, a soft landing was always unlikely.

With a few exceptions, notably Sean Ingle, Michael Cox and John Burn-Murdoch, the reaction to the higher media profile of expected goals has ranged from the misguided to the downright hostile and dismissive.

Jeff Stelling's pub-worthy rant on Sky was entirely in keeping with how high the Soccer Saturday bar is set (Stelling can't really think that, though. Can he?).

Meanwhile, the Telegraph's "expected goals went through the roof" critique of Arsenal's back-foot point at home to Chelsea wildly overstated the likelihood of each attempt ending up in the net.

Despite the understandable irritation, much of the blame for the negative reception for xG must lie with our own enclosed community, which created the monster in the first place.

Parading not one, but sometimes two decimal places is often enough to lose an entire audience of arithmophobic football fans, who would otherwise be receptive to the information that xG can be used to portray.

Presenting Chelsea as 3.18 xG "winners" against a 1.33 xG Arsenal team in a game that actually finished 2-2 is an equally clunky and far from intuitive way of presenting a more nuanced evaluation of the balance of scoring opportunities created by each side.

Quoting the raw xG inputs may be fine in peer groupings, such as the OptaPro Forum, but if wider acceptance is craved for the concept of process versus outcome, a less number-based approach must be sought.

When Paul Merson says that "Arsenal deserve to be in front" he's simply giving a valued opinion based on decades of watching and participating in top class football.

And, ironically, when an xG model credits Team A with having accumulated more xG than Team B in the first half of a match, it is similarly drawing upon a large, historical pool of similar opportunities to quantify the balance of play, devoid of any cognitive bias or team allegiance.

Just as a detailed breakdown of Merson's neuron activity required to arrive at his conclusion would be both unnecessary and of very limited interest, merely quoting xG to a wider audience focuses entirely on the "clever" modelling, whilst completely ignoring any wider conclusion that could easily be expressed in football friendly terms.

I've been simulating the accumulated chance of a game being drawn or either team leading based on the individual xG of all goal attempts made up to the latest attempt, as a way of converting mere accumulated xG into a more palatable summary of a game.


Here's the simulated, attempt-based xG timeline for Arsenal versus Chelsea.

It plots how likely it is that, say, Chelsea lead after 45 minutes, given the xG of each team's attempts.

In this game, it's around a 50% chance that the attempts taken in the first half would have led to Chelsea scoring more goals than Arsenal.

It's around a 40% chance that the game is level (not necessarily scoreless) and around 10% that Arsenal lead.
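The attempt-by-attempt simulation behind these three figures can be sketched as follows: treat each goal attempt as an independent Bernoulli trial whose success probability is its xG, then count the simulated scorelines. The attempt xG lists below are hypothetical, purely to illustrate the method, not the actual data from the Arsenal v Chelsea match.

```python
import random

def match_state_probs(team_a_xgs, team_b_xgs, sims=20_000, seed=7):
    """Simulate each attempt as an independent Bernoulli trial with its xG
    as the scoring probability; return P(A lead), P(level), P(B lead)."""
    rng = random.Random(seed)
    a_lead = level = b_lead = 0
    for _ in range(sims):
        a = sum(rng.random() < xg for xg in team_a_xgs)
        b = sum(rng.random() < xg for xg in team_b_xgs)
        if a > b:
            a_lead += 1
        elif a == b:
            level += 1
        else:
            b_lead += 1
    return a_lead / sims, level / sims, b_lead / sims

# Illustrative first-half attempt xGs (invented for the example).
arsenal_attempts = [0.05, 0.08, 0.12, 0.03]
chelsea_attempts = [0.35, 0.10, 0.22, 0.07, 0.15]
print(match_state_probs(arsenal_attempts, chelsea_attempts))
```

The three returned probabilities always sum to one, which is what lets them be quoted as a single plain-English summary of the game state.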

So rather than quoting xG numbers to a largely unwilling audience, the game can be neatly summarised, from an xG perspective, in a manner that isn't far removed from the eye test and partly subjective opinion of a watching ex-professional.

"Chelsea leading is marginally the most likely current outcome, with Arsenal leading the least likely, based on goal attempts".

The value of xG is to accumulate process-driven information to hopefully make projections that are solidly based, rather than reliant upon possibly poorly processed and inevitably biased, raw, opinion-based evaluations.

But that shouldn't mean we can't/won't use our data to present equally digestible, but number-based, opinion as to who's more likely to be leading in a single match....and express it in varying degrees of certainty, but in plainer English and without recourse to any decimal points.

Saturday, 30 December 2017

Jeff Stelling was Right about xG....For the Wrong Reasons.

Love it or loathe it, totally get it or pack it away with opinions such as "foreign managers who don't know the Premier League are rubbish", or simply use it as one component in your predictive market of choice, there's no denying that expected goals made a mark in 2017.

Expected goals is most effective in the long term and in the aggregate, but there's an understandable desire to also parade it for individual games and individual chances.

Jeff Stelling, who only appears to think probabilistically when lying, fully clothed, in bed with a million pounds and a teddy wearing a Hartlepool shirt, may merely have been expressing the well-documented caveats of using xG for a single game when he derided the xG thoughts of the Premier League's senior statesman, Arsene Wenger.

Betting on probabilistic outcomes, what are the odds of that!

Using xG rather than actual goals in a single game is simply a more nuanced look at the team process that went into the 90 minutes.

It approaches the difficult question of who "deserved" to win from a larger sample size than goals alone (albeit one often twisted by game effects) and provides an answer in terms of likelihood, rather than the more palatable, but unattainable, level of certainty that has long been expected from TV experts.

1-0 wins can be subject to large amounts of random variation; there's probably even more if you have treated your fans to a 4-3 victory. A 7-0 win, meanwhile, leaves much less room for doubt as to who got their just rewards.

If you adopt a Pythagorean wins approach to the goals scored and allowed in these three single game scenarios, you would give a larger proportion of "Pythagorean wins" to the team that won 7-0 than you would the team that won 1-0 and by far the least to the side that triumphed 4-3.
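As a sketch of the idea, here's one smoothed Pythagorean variant. A goal is added to each side before applying the exponent so that shutouts don't all collapse to a share of exactly 1.0, and the exponent of 2 is purely illustrative (football-specific Pythagorean exponents are usually fitted lower).

```python
def pythagorean_share(goals_for, goals_against, exponent=2.0):
    """Smoothed Pythagorean win share: one goal is added to each side
    so a 7-0 win earns a larger share than a 1-0 win, rather than both
    collapsing to exactly 1.0."""
    gf = (goals_for + 1) ** exponent
    ga = (goals_against + 1) ** exponent
    return gf / (gf + ga)

# The three single-game scenarios from the post.
for score in [(7, 0), (1, 0), (4, 3)]:
    print(score, round(pythagorean_share(*score), 3))
```

With these assumptions the ordering matches the text: the 7-0 winner earns the largest share, the 1-0 winner less, and the 4-3 winner by far the least.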

So there is information to be extracted from even basic scorelines that goes beyond wins, draws and losses.

Individual xG chances take this approach a step further to give indications of whether a team that won 1-0 was fortunate to win or unlucky not to have scored a hatful in competition with the efforts of their defeated opponent.

The most visible flaw of xG can be in individual chances, because although the amount of information available to define an opportunity is large, it is still far from complete.

The broad sweep of xG probabilities, drawn from large historical precursors often trumps an eyetest opinion, particularly where probability is an unfamiliar concept to those using years of footballing knowledge, rather than mathematical models to estimate whether or not a chance should have been converted.

There are also relatively easy to spot examples where lack of collected data has, in a largely automated xG process, generated values that are at odds with reality.

Joe Allen

The above and below examples from Stoke's recent game with WBA illustrate the problems inherent in calculations made either without a visual check or without a more complete set of parameters.


Ramadan Sobhi

Looked at from the perspective of the WBA keeper, Ben Foster, the post-shot xG for Allen's goal is likely higher than the xG for Sobhi's strike, based on placement, power, location and deflection (or lack thereof).

But it is fairly obvious that the absence of Ben Foster himself for the latter shot has in reality elevated Sobhi's effort to a near 100% chance.

It is the equivalent of an unfieldable ball in baseball or an uncatchable pass in NFL-style football, simply because of the field position of the designated catcher or saver.

I don't have our xG2 values for each attempt (it's Christmas), but I suspect Foster will be expected to save Sobhi's effort more often than Allen's, in a model that is ignorant of his wayward positioning for the former attempt.

That would be harsh on Foster, acting out his role as auxiliary attacker, chasing an injury time equaliser.

Keeper metrics are based on the savability of attempts on target and once Sobhi got his attempt on target, the true chance of a goal being scored is around 99.9% (to allow for the possibility of the ball bursting prior to crossing the line).

Using Sobhi's goal to evaluate Foster's xG over- or under-performance would immediately put the keeper at an unfair disadvantage.

If we assume that the chance of finding the net with a weakly hit shot along the ground, aimed around the centre of the frame, with no deflection (which effectively changes the shot location), taken from wide of the post and level with the penalty spot, is relatively modest by historical precedent, then Foster will already be nearly a goal worse off when comparing his xG goals allowed with his actual goals allowed.

The reality was a shot, that through little fault of his own, Foster was entirely unable to save, whereas the majority of similar attempts upon which models are built, would have featured a more advantageously positioned keeper.

Numerous unrecorded aspects of a goal attempt can greatly change individual xG estimates while still retaining a usefulness when aggregated.

Body shape when shooting, from the striker's perspective, or a bizarre trajectory in the flight of the ball, for example, can change actual expected conversion rates, transforming seemingly identical chances into near-unsavable certainties or comfortable claims for the keeper.

It's likely that many post shot xG probabilities that are grouped in similar bins actually have a much wider range of true probabilities. They may not be as wrongly classified as the Foster example, but the implied accuracy inherent in multiple decimal places is bound to be an illusion.

There are a couple of ways to attempt to improve this conundrum.

Scrutinising each attempt is one labour intensive option, hoping that events largely even out in the aggregate is another (although randomness isn't always inherently fair).

A third option is to take indicators from the data we do have, that may help to highlight occasions where a chance may have been wrongly classified within a group of similarly computed xG values.

(This is unfortunately where I invoke a rare non-disclosure clause).

So what happens to our xG2 keeper ratings if we try to account for factors that we haven't recorded and therefore are absent in our model?

Generally, under-performing keepers improve, whilst remaining below par, and over-performers are similarly dragged partway towards a less extreme level.

De Gea and Bravo have been respectively among the best and worst shot stoppers of the last three seasons.

Using models that incorporate much of the post shot information available, such as shot type, power, placement, rudimentary trajectory, deflections etc, de Gea concedes 84 non penalty attempts against a model's average prediction of 95.

For Bravo the numbers are 25 allowed against 15 predicted.

If we concede that some of the attempts aggregated to make up the baseline for each keeper may have been misclassified, we can apply a correction, based on hints in the data we do have, that may reclassify those attempts more accurately.

De Gea's average expected number of goals allowed falls to 92 (still making him above average, but slightly less super human) and Bravo's is given a slightly more forgiving 19 expected goals, rather than 15.
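A minimal sketch of this kind of correction is a simple shrinkage of the model's expectation towards the observed total. The weight below is hypothetical, standing in for whatever the reclassification hints actually suggest (and the real method is undisclosed); a flat 0.3 just happens to land close to the corrected figures quoted above.

```python
def shrink_expected(model_xg, actual_goals, weight=0.3):
    """Pull the model's expected-goals-allowed part-way toward the observed
    total. `weight` is a hypothetical stand-in for how strongly the
    reclassification hints suggest the original grouping was wrong."""
    return model_xg + weight * (actual_goals - model_xg)

# Figures from the post: de Gea 84 allowed vs 95 expected, Bravo 25 vs 15.
for name, allowed, expected in [("de Gea", 84, 95), ("Bravo", 25, 15)]:
    print(name, round(shrink_expected(expected, allowed), 1))
```

Because the correction moves both extremes towards their observed totals, it regresses over- and under-performers alike, exactly the behaviour described in the text.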

Acknowledging that a model is incomplete has led to extremes being regressed towards the mean, and that's probably no bad thing if these models are to be used to evaluate and project player talent.



Expected Goals is a work in progress tool, not the strawman, full of cast iron claims, that opponents invariably make on the metric's behalf. If you accept the inevitable and often insurmountable limitations, xG can still add much value to any analysis.

Don't be like Jeff, approach xG with an open mind....and also don't go to bed in a suit.