Wednesday, 19 March 2014

Judging A Side Over One Season.

For much of Alan Pardew's reign at Newcastle, his side has provided ample topics of discussion for the analytical community. Their fifth-placed finish in 2011/12, with a goal difference of just 5, was widely seen as the product of a decent side getting a bit lucky, and their near relegation the following season merely served to reinforce some misconceptions around luck-based over-achievement immediately snapping back into ledger-balancing under-achievement.

This season the team has largely occupied the dead ground between Simon Gleave's Superior Seven and Threatened Thirteen. They currently hold 8th spot with little likely upside or downside, casting them as mere spectators to the title and Champions League tussles and the relegation grind. So it has been left to their off-field employees to guarantee column inches.

Over the previous two and three-quarter seasons, Newcastle's success rate, which counts draws as half wins, has averaged out at a shade over 50%. If the Premiership were allowed to meander on without a season's-end reckoning, Newcastle since the start of 2011/12 have recorded results typical of a side finishing a 38 game season in 8th or 9th spot, with a goal difference in the region of zero. So, despite the bouts of squad strengthening and weakening during the various windows, might Newcastle have been broadly a "best of the rest" team under Pardew, and have their seemingly up-and-down seasons simply been down to talent and luck interacting in differing doses?
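As a quick sketch of the metric (the win/draw record below is a hypothetical example, not Newcastle's actual figures), a success rate that counts draws as half wins can be computed like this:

```python
def success_rate(wins, draws, games):
    """Success rate where a draw counts as half a win."""
    return (wins + 0.5 * draws) / games

# A hypothetical record over two and three-quarter seasons (104 games)
print(success_rate(42, 21, 104))  # a shade over 0.5
```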

Simulating season-long results for one side or a whole league can quickly produce the range of possible outcomes we might expect from teams of a certain quality, and it is perhaps a sobering thought, especially for managers, that there is a non-trivial chance that the best side from the Threatened Thirteen could find itself struggling to top 40 points over a 38 game campaign.

There is almost a 10% chance that a side with a true success rate of 0.5 might record 41 or fewer points, the limit of Newcastle's achievement in 2012/13, and a 6% chance that they could rise to the highs achieved in the previous season.
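That kind of figure falls out of a simple Monte Carlo sketch. A 0.5 success rate is consistent with many win/draw mixes; the 35% win, 30% draw split below is an assumption for illustration, and different splits will nudge the tail probabilities slightly:

```python
import random

def simulate_points(p_win, p_draw, games=38, trials=100_000, seed=1):
    """Simulate league points totals for a side with fixed per-game
    win and draw probabilities (success rate = p_win + p_draw / 2)."""
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        points = 0
        for _ in range(games):
            r = rng.random()
            if r < p_win:
                points += 3          # win
            elif r < p_win + p_draw:
                points += 1          # draw
        totals.append(points)
    return totals

totals = simulate_points(0.35, 0.30)   # one mix giving a 0.5 success rate
p_low = sum(t <= 41 for t in totals) / len(totals)
print(f"chance of 41 points or fewer: {p_low:.1%}")  # roughly 10% for this mix
```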

So although a dissected breakdown of Pardew's previous two completed seasons would give an entirely factual record of the points per game performance of his side in the 2011/12 and 2012/13 seasons, we can be much less sure that that see-sawing record accurately tracked the talent of the team.

What we see isn't a cast-iron endorsement of quality, or the lack of it, but a combination of randomness and talent.

It would have been easy to over-react to the raw results posted by Newcastle in either season, and while injury, Europa League involvement, squad churn and tactical change almost certainly impacted their talent- and luck-based achievements, it seems unlikely that the 24 point difference between the 2012/13 team and the 2011/12 team truly expressed the quality gap between those two closely related editions of the side.

One way of trying to reconcile an apparent points-based performance improvement with any real shift in a side's actual abilities is to utilize the expertise of the bookmaking industry. At the start of the season, all of the spread firms publish the expected number of league points they think each side will gather by the end of the season. As the season progresses, these estimates are upgraded or downgraded in line with each side's current rate of points accumulation.

For example, prior to last Saturday's eventful head-to-head meeting with Newcastle, Hull had already posted 31 points, almost equal to the general preseason estimate of around 33 or 34 points for the full season. Their updated quote suggested they might finish with about 42 points, so from a performance perspective they were roughly 8 points ahead of where most bookmakers expected them to be.

We have also reached the second half of the season, where sides are increasingly completing their head-to-head appointments; by mid-March a team has typically played between 9 and 11 of its rivals both home and away. The chances of a team winning or drawing a game depend not only on the gap in quality between the teams, but also on the venue. So the expected success rate in a return fixture will depend partly on how each team has progressed or regressed over the 190 or so days between the meetings, but also on where each match was played.

Above, I've plotted the pre-game quoted success rates for actual Premiership matches, paired by home and away fixtures. So a home side expected to have a 60% success rate at home would likely be quoted at around 40% when they travelled to complete the reverse fixture.
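The fitted line itself isn't reproduced here, but a rough stand-in consistent with the 60%/40% example is a simple mirror around 0.5:

```python
def expected_return_rate(initial_rate):
    """Rough proxy for the paired-fixture relationship: a side quoted
    at success rate r in one fixture is quoted near 1 - r in the
    reverse fixture at the other venue. An approximation standing in
    for the actual line of best fit, which need not be perfectly
    symmetric."""
    return 1.0 - initial_rate

print(expected_return_rate(0.60))  # ~0.40, the 60% home side on its travels
```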

If we look at the actual differences in the quoted success rates for return matches, any deviation from this fairly strong general relationship can be, at least partly, attributed to the direction both sides have moved in over the intervening timescale.

The case of Crystal Palace (or more accurately, Tony Pulis) demonstrates the process. Up until March 19th, they had played 10 teams twice. Stoke, Hull, Manchester United, Norwich, WBA and Southampton each made a return visit to Palace, and Palace set off to reacquaint themselves at the homes of Arsenal, Spurs, Sunderland and Swansea.

In eight of those ten matches, the quoted pregame success rate in the return was greater than predicted by the success rate in the initial game and the line of best fit from the plot above. The success rate quoted for Palace by the bookmakers is, on average, 15% higher over the ten rematches compared to a typical example from the line of best fit. In short, the recent quotes imply a level of improvement compared to the Palace of the opening months of the season.
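The averaging step can be sketched as follows. The fixture pairs below are hypothetical, and the simple mirror around 0.5 stands in for the actual line of best fit:

```python
def average_shift(pairs):
    """Average gap between each quoted return-fixture success rate and
    the rate predicted from the initial fixture (here the mirror 1 - r
    stands in for the fitted line)."""
    residuals = [ret - (1.0 - first) for first, ret in pairs]
    return sum(residuals) / len(residuals)

# Ten hypothetical rematches for a side whose quotes have drifted upwards:
# (success rate quoted in the first meeting, rate quoted in the return)
pairs = [(0.30, 0.85), (0.55, 0.60), (0.45, 0.70), (0.60, 0.55),
         (0.35, 0.80), (0.50, 0.65), (0.40, 0.70), (0.65, 0.50),
         (0.25, 0.90), (0.55, 0.65)]
print(f"average shift: {average_shift(pairs):+.2f}")  # +0.15
```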

There are strength-of-schedule issues, as Palace may have played ten teams whose true ability has dipped, but this can be addressed by a least squares approach to the overall loss or gain in success rate of each side, measured against their actual repeat schedule. The order shuffles slightly if you apply this correction.
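One way to implement that correction (a sketch with hypothetical teams and numbers, not the exact method or data behind the table below) is to model each observed shift in a repeat pairing as the difference between the two sides' true shifts, then solve the resulting least squares problem, here with a simple iterative averaging scheme:

```python
def schedule_adjusted_shifts(observations, iterations=200):
    """Strength-of-schedule correction: model the observed shift in a
    repeat pairing as (team's true shift) - (opponent's true shift)
    and solve the least squares problem by Gauss-Seidel style
    averaging, centring the answers on zero at the end."""
    teams = {t for t, _, _ in observations} | {o for _, o, _ in observations}
    shift = {t: 0.0 for t in teams}
    for _ in range(iterations):
        for team in sorted(teams):
            total, count = 0.0, 0
            for t, o, observed in observations:
                if t == team:        # shift implied for t: observed + shift[o]
                    total += observed + shift[o]
                    count += 1
                elif o == team:      # shift implied for o: shift[t] - observed
                    total += shift[t] - observed
                    count += 1
            if count:
                shift[team] = total / count
    mean = sum(shift.values()) / len(shift)
    return {t: s - mean for t, s in shift.items()}

# Three hypothetical teams and the observed shifts in their repeat pairings;
# converges to Palace +0.04, Stoke -0.06, Hull +0.02
obs = [("Palace", "Stoke", 0.10), ("Palace", "Hull", 0.02),
       ("Stoke", "Hull", -0.08)]
for team, s in sorted(schedule_adjusted_shifts(obs).items()):
    print(f"{team}: {s:+.2f}")
```

Teams that happened to meet opponents whose own quotes have slumped get their raw shift trimmed, and vice versa, which is why the order shuffles slightly.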

Which Teams Have Impressed Enough to be Upgraded from Preseason Estimates?

Team: Average Shift in Success Rate per Game

Crystal Palace: 0.06
Liverpool: 0.04
Hull: 0.04
Arsenal: 0.04
Manchester City: 0.04
Chelsea: 0.03
Everton: 0.02
Newcastle: 0.01
Sunderland: 0.00
Southampton: 0.00
Manchester U: -0.01
Spurs: -0.01
WBA: -0.02
Norwich: -0.02
Stoke: -0.04
Swansea: -0.04
Cardiff: -0.04
Aston Villa: -0.04
Fulham: -0.05
WHU: -0.06

The table above shows the strength-of-schedule-corrected average raw success rate increase or decrease per game, as seen in the repeat fixtures played so far by each Premiership team. It's partly a reflection of perceived improvement (Liverpool) or decline (Manchester United) and partly a popularity contest.

Stoke are on course to better most preseason points estimates, but their marmite factor makes them a universally unpopular team, which may account for their apparent downgrading despite a better than expected transitional campaign. Even punters have to be offered inducements to side with Stoke! But hopefully, most sides in the table have been upgraded or downgraded predominantly on the bookmakers' interpretation of repeatable form.

To add context to the previous table, I've finally plotted the change in pregame quoted success rate against the rise or fall in expected final league points totals compared to the estimates at the start of the season. Fulham, for instance, are on course to be 10 points adrift of preseason estimates and they've been downgraded by an average success rate of ~0.05, slightly more than you would expect from the plot.

Palace are on course to gain about 36 points compared to the 31 quoted in August, but their position, well above the average line, appears to reflect extremely well on current manager Tony Pulis and less so on Ian Holloway, who was in charge for the bulk of 2013. Pulis once again demonstrates an ability to turn relegation fodder into lower-half scrappers who can take their season to the wire. When he was appointed, Palace had 4 points from a possible 33 and were rock bottom.

Steve Bruce's tenure at Hull is also endorsed by the betting movements. Not only were the Tigers underrated, but there appears to be cause for added optimism above and beyond their current position of relative safety.

Both Merseyside teams have gathered more points than expected, but Everton's upgrade still lies below the line of best fit, possibly indicating a reluctance to take their improvement wholly at face value. A caveat that also applies to Southampton.

Liverpool may be the most telling point on the plot. They were expected to gather around 66 league points when the season began; currently they may be able to top 80 points, an increase of around 16 points. However, if we project the average success rate upgrade currently applied to them over a full 38 game season, their points total increases by just around half of those 16 points. Outstanding, but partly unexpected, performance isn't being taken at face value on a match by match basis; it is being taken with a slight pinch of salt.

The Reds' uptick is as spectacular as the 38 game epic demonstrated by Newcastle when they nearly claimed a Champions League spot, but it is worth remembering that single season performances are a product of repeatable skills and also of less repeatable variation. Even with points in the bag, a team might not really be quite as good (or bad) as their record might indicate.
