The Problems of Estimating Win-Contributions in Football (Soccer)

Executive Summary

  • The practical problems of obtaining regression-based estimates of win-contributions are illustrated using data for the English Championship regular season in 2015/16.
  • The estimated regression models for both attacking and defensive play are subject to various problems – low explanatory power, “wrong” signs, statistical insignificance, and sequence effects.
  • But the ultimate problem with regression-based approaches to player rating systems is that they reflect statistical predictive power of individual skill-activities and this may not coincide with game importance from an expert coaching perspective.
  • My conclusion is that a regression-based approach to player rating systems in the invasion-territorial team sports is not recommended.

 

Developing a player rating system in the invasion-territorial team sports using win-contributions seems, at least in principle, a straightforward procedure involving two stages. The first stage is to estimate the team-level relationship between skill-activities and match outcomes in order to get the weightings to be applied to each type of contribution. The most obvious statistical procedure to use is multiple regression analysis. The second stage is to calculate the overall win-contributions of individual players as a linear combination of their skill-activity contributions, using the weightings estimated in the first stage. But although seemingly a straightforward multivariate problem statistically, this approach is fraught with practical difficulties. Indeed, I will argue that it is often so difficult to obtain an appropriate set of weightings that a regression-based approach to estimating player win-contributions is just not viable.
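As a minimal sketch of the first stage, suppose we have a match-level DataFrame team_matches with one row per team performance; the DataFrame and all column names here are hypothetical stand-ins for whatever data feed is being used:

```python
# Stage 1 sketch: estimate skill-activity weightings from team-level data.
# `team_matches` is a hypothetical DataFrame with one row per team
# performance; all column names are illustrative only.
import pandas as pd
import statsmodels.api as sm

def estimate_weightings(team_matches: pd.DataFrame, outcome: str,
                        predictors: list[str]) -> pd.Series:
    """Regress a team outcome on skill-activity totals and return the
    estimated coefficients, i.e. the candidate contribution weightings."""
    X = sm.add_constant(team_matches[predictors])
    fit = sm.OLS(team_matches[outcome], X).fit()
    print(fit.summary())  # inspect signs, significance and R-squared
    return fit.params.drop("const")
```

The second stage is then just a weighted sum of each player's own skill-activity totals using the returned coefficients.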

 

To demonstrate the difficulty of a regression-based player rating system in the invasion-territorial team sports, I am going to use football (soccer) and specifically data from the English Championship last season (2015/16). In the table below I have reported the results for four regression models estimated using Opta data for the 552 regular-season matches (i.e. 1,104 team performances). These four estimated regression models illustrate many of the problems that bedevil regression models of team performance in football.

 

The first issue is to decide on the appropriate measure of team performance. Using league points for individual matches would imply an outcome variable with only three possible values (win = 3, draw = 1, loss = 0) which is highly restrictive and not really amenable to linear regression. It would be more appropriate to use a form of limited dependent variable (LDV) estimation technique such as logistic regression. To avoid this problem I typically use goals scored and goals conceded as measures of attacking and defensive performance, respectively, estimating two separate regression models which can be combined subsequently. Given the low-scoring nature of football and the Poisson distribution of goals, linear regression remains a rather crude statistical tool but has the advantages of simplicity and ease of interpretation.
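Given the Poisson caveat, one quick robustness check is to refit the same specification as a Poisson regression and compare the two sets of results; a minimal sketch, reusing the hypothetical team_matches DataFrame from the sketch above:

```python
# Sketch: linear regression versus a Poisson GLM for goals scored.
# Poisson coefficients are on the log scale, so they serve as a sanity
# check on the OLS results rather than as drop-in linear weightings.
import statsmodels.api as sm

X = sm.add_constant(team_matches[["total_shots", "shot_accuracy"]])
ols = sm.OLS(team_matches["goals_scored"], X).fit()
poisson = sm.GLM(team_matches["goals_scored"], X,
                 family=sm.families.Poisson()).fit()
print(ols.params)
print(poisson.params)
```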

 

Estimated regression models (coefficients with standard errors in parentheses):

| Model | Attack (1) | Attack (2) | Attack (3) | Defence |
|---|---|---|---|---|
| Outcome | Goals Scored | Goals Scored | Total Shots | Goals Conceded |
| Total Shots | 0.0805894 (0.006886)** | 0.0698052 (0.006165)** | | |
| Shot Accuracy | 3.08113 (0.1916)** | 3.43328 (0.1940)** | | |
| Attempted Passes | -0.000960860 (0.0005349) | | 0.000752025 (0.002346) | 0.000554711 (0.0006363) |
| Pass Completion | 0.842777 (0.6249) | | 10.1551 (2.729)** | -0.146641 (0.6778) |
| Dribbles | 0.00485571 (0.005885) | | 0.0466076 (0.02582) | |
| Dribble Success Rate | 0.141340 (0.1925) | | 0.334019 (0.8458) | |
| Open Play Crosses | -0.0277967 (0.005021)** | | 0.213755 (0.02090)** | |
| Open Play Cross Success Rate | 0.251270 (0.2396) | | 6.75974 (1.032)** | |
| Attacking Duels | -0.0109001 (0.002935)** | | 0.0237501 (0.01283) | |
| Attacking Duel Success Rate | 0.365114 (0.3961) | | 6.69654 (1.727)** | |
| Yellow Cards | -0.0745656 (0.02191)** | | -0.226067 (0.09603)* | 0.00894283 (0.02583) |
| Red Cards | -0.403386 (0.1045)** | | -0.755729 (0.4587) | 0.376279 (0.1230)** |
| Total Clearances | | | | -0.0126510 (0.003719)** |
| Blocks | | | | -0.0103887 (0.01644) |
| Interceptions | | | | -0.0136031 (0.006029)* |
| Defensive Duels | | | | -0.00894992 (0.003265)** |
| Defensive Duel Success Rate | | | | 1.69258 (0.4240)** |
| R² | 32.74% | 26.94% | 26.20% | 6.26% |

* = significant at 5% level; ** = significant at 1% level

 

The Attack (1) model uses goals scored as the outcome variable with five skill-activities – shots, passes, dribbles, crosses and attacking duels – plus two disciplinary metrics (yellow cards and red cards). The five skill-activities are each measured by two metrics – an activity-level metric (i.e. number of attempts) and an effectiveness-ratio metric (i.e. proportion of successful outcomes). So, for example, in the case of shots the activity-level metric is total shots and the effectiveness-ratio metric is shot accuracy (i.e. the proportion of shots on target).

 

The Attack (1) model exemplifies a number of the problems in using regression analysis to derive a set of weightings for player rating systems:

  • Low goodness of fit – the R² statistic is only 32.7%, indicating that less than a third of the variation in goals scored can be explained by the five skill-activities and discipline
  • “Wrong” signs – the estimated coefficients for attempted passes, open play crosses and attacking duels are all negative
  • Statistical insignificance – half of the estimated coefficients are not statistically different from zero
  • Sequence effects – most of the goodness of fit in the Attack (1) model is due to the two end-of-sequence metrics, total shots and shot accuracy. As the Attack (2) model shows, total shots and shot accuracy jointly account for 26.9% of the variation in goals scored.

Similar problems of wrong signs and statistical insignificance occur in the Defence model, which only captures 6.3% of the variation in goals conceded across matches, in part because no goalkeeping metrics have been included. But of course if goalkeeping metrics such as the saves-to-shots ratio are included, these dominate in much the same way as shooting metrics dominate estimated regression models of goals scored.

 

One solution to the problem that regression models tend to attribute the highest weight to the end-of-sequence variables is to break the causal sequence into components to be estimated separately. The Attack (2) and Attack (3) models are an example of this approach, with the Attack (2) model estimating the relationship between goals scored (the final outcome) and shooting (total shots and shot accuracy), and the Attack (3) model estimating the relationship between total shots (the intermediate outcome) and passes, dribbles, crosses, attacking duels and discipline. This approach resolves some of the problems encountered in the Attack (1) model. Although goodness of fit remains low, with only 26.2% of the variation in total shots across matches explained by the Attack (3) model, all of the variables now have the expected signs, so that attempted passes, open play crosses and attacking duels now have positive coefficients. In addition, pass completion, open play cross success rate and attacking duel success rate are now statistically significant. But attempted passes, although now attributed a positive contribution, has a very small and statistically insignificant coefficient, reflecting an underlying characteristic of play in the English Championship: ball possession has little predictive power for goals scored and match outcomes.

And this remains the core problem with regression-based estimates of the weightings to be used in win-contribution player rating systems. Regression-based weightings reflect statistical predictive power, not game importance. Ultimately I have been driven to the conclusion that regression-based player rating systems are not to be recommended for the invasion-territorial team sports. An alternative approach is the subject of my next post.

The Practical Problems of Constructing Win-Contribution Player Rating Systems in the Invasion-Territorial Sports

Executive Summary

  • Effective data-based assessment of individual player performance in team sports must resolve the three basic conceptual problems of separability, multiplicity and measurability. These problems are most acute in the invasion-territorial sports.
  • In statistical terms, the win-contribution approach to player rating systems can be seen as a multivariate problem of identifying and combining a set of skill-activity performance metrics to model team performance.
  • Regression analysis is the simplest statistical method for estimating the skill-activity weightings to be used in a win-contribution player ratings system with multiple skill-activities.
  • There are three practical problems widely encountered when using the regression method: (i) defining an appropriate measure of team performance; (ii) the skill-activity coefficients often have the wrong sign and/or are statistically insignificant; and (iii) the weightings reflect relative predictive power which may not necessarily coincide with the relative game importance of the specific skill-activity.

 

Evaluating individual player performance in team sports using a systematic data-based approach faces three basic conceptual problems:

 

  1. Separability – team performance needs to be decomposed into individual player performances but the degree of separability of individual player performances depends crucially on the basic game structure of the sport. Separability is highest in the striking-and-fielding sports such as baseball and cricket in which the core of the game is a one-to-one contest between the batter and pitcher/bowler. In the invasion-territorial sports such as the various codes of football, hockey and basketball the interdependency of player actions and the necessity for tactical coordination of players makes separability much more problematic.
  2. Multiplicity – if the game structure is such that individual players specialise in one specific skill-activity which is the dominant component of their performance (e.g. pitching and hitting in baseball with fielding treated as of only secondary importance) then evaluating player performance comes down to identifying the best metric to measure the specific skill-activity performance. However, particularly in many of the invasion-territorial sports, players undertake a multiplicity of skill-activities so that the evaluation of player performance requires finding the appropriate combination of a set of performance metrics.
  3. Measurability – by definition, data-based player rating systems focus only on those aspects of player performance that are directly observable and measurable. To some this isn’t an issue and they will justify their position with the well-known dictum: “If you can’t measure it, you can’t manage it”. But this just isn’t true. Coaching and managing is about knowing the people for whom you are responsible and how they are performing, and learning how best to facilitate improvements in their performance. You are likely to be less effective as a coach and manager if you ignore available data on performance but likewise you will also be less effective if you focus only on the measurable aspects of performance. As always it is about using all the available evidence as best you can to improve performance. Motivation and resilience may not be directly observable and easily measurable but I doubt that there are many coaches who would argue that they are not important aspects of player performance.

As I have discussed in my previous post, there are two broad approaches to constructing player rating systems – the win-attribution approach and the win-contribution approach. The win-attribution approach, principally plus-minus scores, effectively finesses all three conceptual problems – separability, multiplicity and measurability – by focusing on outcome not process, and attributing the match score pro rata based on players’ game time. By contrast, the win-contribution approach focuses on the process of how the team performance is generated by individual player performance. And as a consequence, the win-contribution approach has to deal with the separability, multiplicity and measurability problems. Ultimately it comes down to:

  • Identifying the appropriate set of specific skill-activity performance metrics; and
  • Determining the best way of combining this set of performance metrics, particularly the weightings to be used to produce an overall composite index of player performance

 

From a statistical perspective the win-contribution approach to player rating systems is just a standard multivariate problem of determining the relationship between team performance (the outcome) and the aggregate contributions of players by skill-activity (the predictors). The simplest approach is to estimate a linear regression model of team performance:

Team Performance = a + b₁P₁ + b₂P₂ + … + bₖPₖ + u

where

P₁, P₂, …, Pₖ = skill-activity metrics (team totals)

b₁, b₂, …, bₖ = skill-activity weightings

a = intercept

u = random error term capturing non-systematic influences on team performance

The estimated regression coefficients can then be used to combine the skill-activity metrics for individual players to produce an overall measure of player performance.
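As a sketch of that final step, assuming a hypothetical players DataFrame whose columns carry each player's totals for the same P₁, …, Pₖ metrics used in the team model:

```python
# Sketch: overall player rating as the weighted sum b1*P1 + ... + bk*Pk.
# `players` and `weightings` are hypothetical; the intercept a and the
# error term u belong to the team-level model and are dropped here.
import pandas as pd

def rate_players(players: pd.DataFrame, weightings: pd.Series) -> pd.Series:
    cols = list(weightings.index)  # the skill-activity metrics P1..Pk
    return players[cols].mul(weightings, axis=1).sum(axis=1)
```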

 

In principle regression analysis offers a very straightforward method of creating a win-contribution player rating system for the invasion-territorial sports. However there are a number of practical problems in implementing the method to produce meaningful and useful player ratings.

 

Practical Problem 1: Defining an appropriate measure of team performance

This is not as straightforward as it might seem. If the regression model of team performance is to be estimated using season totals in a league, then total league points or win-percentage are the obvious outcome measures to use, but it is highly likely that data for several seasons will need to be combined in order to have enough degrees of freedom if you intend to use a large number of skill-activity metrics. The alternative approach is to use individual match data. In this case using a measure of match outcome is too restrictive. You run into all of the usual problems associated with limited dependent variable (LDV) models and are better advised to use logistic regression (or related approaches) rather than linear regression. If you want to keep using linear regression with individual match data, it is better to model team performance using scores, either a single model of the final margin or two separate models of scores for and scores against. In my work on player ratings in rugby union and rugby league, I have used individual match data and estimated two separate models for points scored and points conceded, and then combined these two models to create a model of the final margin. I found that this worked better than just estimating a single model for the final margin and seemed better able to identify the impact of different skill-activity metrics. Of course any score-based approach is more problematic in (association) football because it is such a low-scoring sport. I still tend to use goals scored and goals conceded as my outcome measures but I have also used own and opposition shots on target as outcome measures.
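For the match-outcome route, here is a minimal sketch of the LDV alternative, with scikit-learn's logistic regression standing in for whatever estimator you prefer (column names hypothetical):

```python
# Sketch: model win/draw/loss directly with multinomial logistic
# regression instead of forcing league points into a linear model.
from sklearn.linear_model import LogisticRegression

X = team_matches[["total_shots", "shot_accuracy", "pass_completion"]]
y = team_matches["result"]  # categorical labels: "win", "draw", "loss"
clf = LogisticRegression(max_iter=1000).fit(X, y)  # multinomial fit
# clf.coef_ holds one row of coefficients per outcome category.
```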

 

Practical Problem 2: The estimated regression coefficients may have the “wrong” sign and/or be statistically insignificant

When regression models of team performance are estimated, it is more likely than not that several of the skill-activity metrics will have coefficients with the “wrong” sign and/or coefficients that are not statistically significantly different from zero. There are two common reasons for wrong signs and/or statistical insignificance. First, skill-activity metrics usually suffer from a multicollinearity problem in which individual variables are highly correlated with each other either directly (i.e. simple bivariate correlations) or in linear combinations. For example, teams which defend more and make more tackles also tend to make more interceptions, clearances and blocks. High levels of multicollinearity can make estimated coefficients unstable, including being more prone to switching sign, as well as more imprecise (i.e. higher standard errors) and hence more likely to be statistically insignificant. Second, some skill-activity variables may be acting as a proxy for opposition skill-activities. For example, more defending partly reflects more attacking play by the opposition, and the more the opposition attacks, the more goals are likely to be conceded. As a consequence, defensive variables may be positively correlated with goals conceded even though more (and better) defending should be negatively correlated with goals conceded.
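Multicollinearity is at least easy to screen for before trusting any coefficients; a sketch using variance inflation factors on the same hypothetical data:

```python
# Sketch: variance inflation factors for the predictor set. VIFs well
# above ~10 flag predictors whose coefficients are likely to be unstable.
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

predictors = ["tackles", "interceptions", "clearances", "blocks"]  # illustrative
X = sm.add_constant(team_matches[predictors])
vifs = {col: variance_inflation_factor(X.values, i)
        for i, col in enumerate(X.columns) if col != "const"}
print(sorted(vifs.items(), key=lambda kv: -kv[1]))
```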

 

Practical Problem 3: Regression coefficients define the relative importance of contributions purely in terms of predictive power

Ultimately regression analysis is a technique for finding the linear combination of a set of variables that provides the best predictions of the outcome variable. So the estimated coefficients are indicative of the relative predictive power of each variable. However, predictive power does not necessarily equate to the relative game importance of contributions when you are dealing with processes comprising a sequence of different skill-activities. For example, in football the best predictor of goals scored is shots on target inside the box, and so inevitably in any linear regression model of goals scored the number of shots on target (especially inside the box) will have the highest weighting. But of course shots depend on passing and moving the ball forward successfully to create shooting opportunities, all of which in turn depends on winning possession of the ball in the first place. All of these skill-activities provide much less predictive power for goals scored because they are further back in the causal chain. Similarly, when it comes to goals conceded the dominant predictor is the goalkeeper’s saves-per-shot ratio, but the number of opposition shots allowed depends on defensive play such as tackles, interceptions, clearances and blocks. Defensive play is critical as a contribution to match success but will always be treated statistically as of only secondary importance as a predictor of match outcomes. One way around this within the linear regression method is to estimate hierarchical models that capture the sequential nature of the game.
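A sketch of that hierarchical idea, chaining an intermediate-outcome model into the final-outcome model (this mirrors the Attack (2)/Attack (3) split in my previous post; column names hypothetical):

```python
# Sketch: a two-step chain, goals <- shots <- build-up play.
import statsmodels.formula.api as smf

# Final outcome as a function of the end-of-sequence shooting metrics.
m_goals = smf.ols("goals_scored ~ total_shots + shot_accuracy",
                  data=team_matches).fit()

# Intermediate outcome as a function of the build-up skill-activities.
m_shots = smf.ols("total_shots ~ attempted_passes + pass_completion"
                  " + dribbles + open_play_crosses + attacking_duels",
                  data=team_matches).fit()

# Implied weight of a build-up metric on goals: its effect on shots
# multiplied by the effect of shots on goals (a simple chain rule).
implied = m_shots.params.drop("Intercept") * m_goals.params["total_shots"]
print(implied)
```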

 

Despite the practical problems, it may still be possible to use the regression method to produce a meaningful and useful player rating system. After estimating the initial regression model of team performance using the basic skill-activity metrics, it is vital to undertake a specification search to find a model with better properties, specifically statistically significant coefficients with the “correct” signs as well as good diagnostics (i.e. random residual variation). The specification search may involve the use of different functional forms such as logarithms and quadratics. It can also involve the transformation of the basic skill-activity metrics. For example, suppose you have data on the total number of successful passes and the total number of unsuccessful passes. Instead of using the data in this form, it might be better to transform the two variables into a total activity measure (i.e. the total number of attempted passes = successful passes + unsuccessful passes) and a success rate (successful passes as a % of attempted passes). A more radical solution would be to use factor analysis to reconstruct the original set of metrics into a smaller set of factors based on the collinearity between the initial variables.
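A sketch of the transformation step, with principal components as one simple stand-in for the factor analysis mentioned above (column names hypothetical):

```python
# Sketch: recast raw pass counts as an activity level plus a success
# rate, then compress a collinear block of metrics into two components.
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = team_matches.copy()  # hypothetical match-level DataFrame
df["attempted_passes"] = df["successful_passes"] + df["unsuccessful_passes"]
df["pass_completion"] = df["successful_passes"] / df["attempted_passes"]

collinear = ["tackles", "interceptions", "clearances", "blocks"]
components = PCA(n_components=2).fit_transform(
    StandardScaler().fit_transform(df[collinear]))
```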

 

The best way forward, as always with all practical problems, is to investigate alternatives to find out what works best in a specific context. So, in that spirit, my next post will be an exploration of using alternative regression-based player rating systems to identify the “best” outfield players in the Football League Championship last season.

 

29th September 2016

Player Rating Systems in the Invasion-Territorial Team Sports – What are the Issues?

Executive Summary

  • Player rating systems are important as a means of summarising the overall match performance of individual players.
  • Player performance in the invasion-territorial team sports is multi-dimensional so that a player rating system needs to be able to combine metrics for a number of skill-activities.
  • There are two broad approaches to the construction of player rating systems – win-attribution (or top-down/holistic) approaches and win-contribution (or bottom-up or atomistic) approaches.
  • The plus-minus approach is the most widely used win-attribution approach based on the points margin when a player is playing.
  • A player’s plus-minus score is sensitive to context especially the quality of team mates and opponents. This can be controlled using regression analysis to estimate adjusted plus-minus scores.
  • The plus-minus approach offers a relatively simple way of measuring player performance without the need for detailed player performance data. But the approach works best in high scoring sports with frequent player switches such as basketball and ice hockey.

 

A central issue in sports analytics is the construction of player rating systems particularly in the invasion-territorial team sports. Player rating systems are important as a means of summarising the overall match performance of individual players. Teams can use player rating systems to review performances of their own players as well as tracking the performance levels of potential acquisitions. Moneyball highlighted the possibilities of using performance metrics to inform player recruitment decisions. But the relatively simple game structure of baseball, in essence a series of one-to-one contests between hitters and pitchers, means that the analytical problem is reduced to finding the best metrics to capture hitting and pitching performances.

 

Once we move into invasion-territorial team sports, we are dealing with sports which involve the tactical coordination of players and player performance becomes multi-dimensional. The analytical problem is no longer restricted to identifying the best metric for a single skill-activity per player (i.e. pitching or hitting in baseball) but now involves identifying the full set of relevant skill-activities and creating appropriate metrics for each identified skill-activity.

 

There are essentially two broad approaches to constructing player rating systems when player performances are multi-dimensional. One approach is the win-contribution (or bottom-up or atomistic) approach which involves identifying all of the relevant skill-activities that contribute to the team’s win ratio, developing appropriate metrics for each of these skill-activities, and then combining the set of skill-activity metrics into a single composite measure of performance. Over the years many technical and practical problems have emerged in constructing win-contribution player rating systems. I plan to discuss these in more detail in a future blog. Suffice to say, the most general criticism of the win-contribution approach is the difficulty of identifying all of the relevant skill-activities particularly those that are not directly and/or easily observable such as teamwork and resilience.

 

The alternative approach is a more holistic or top-down approach that uses the match outcome as the ultimate summary metric for measuring team performance and then attributes the match outcome to those involved in its production. I call this the win-attribution approach to player rating systems. The analytical problem is now the choice of an attribution rule.

 

Plus-Minus Player Ratings

The best-known win-attribution approach is plus-minus which has been used for many years in both basketball and ice hockey. It is a very simple method. Just total up the points scored and the points conceded whenever a specific player is on court (or on the ice), and then subtract points conceded from points scored to give the points margin. This represents the player’s plus-minus score.

 

For those of you not familiar with the plus-minus approach, here’s a simple example. Consider the following fictitious data for the first three games of a basketball team with a roster of 10 players.

The results of the three games are:

Game 1: Won, 96 – 73

Game 2: Lost, 68 – 102

Game 3: Won, 109 – 57

The minutes played (Mins) for each player, and points scored (PS) and points conceded (PC) while each player is on court, are as follows:

 

| Player | Mins (G1) | PS (G1) | PC (G1) | Mins (G2) | PS (G2) | PC (G2) | Mins (G3) | PS (G3) | PC (G3) |
|---|---|---|---|---|---|---|---|---|---|
| P1 | 32 | 54 | 58 | 28 | 35 | 64 | 12 | 27 | 18 |
| P2 | 29 | 63 | 45 | 25 | 33 | 56 | 13 | 30 | 21 |
| P3 | 27 | 48 | 43 | 20 | 36 | 47 | 13 | 29 | 23 |
| P4 | 33 | 58 | 52 | 27 | 32 | 63 | 15 | 33 | 22 |
| P5 | 35 | 63 | 54 | 36 | 37 | 82 | 25 | 54 | 33 |
| P6 | 22 | 49 | 24 | 28 | 44 | 43 | 33 | 72 | 30 |
| P7 | 20 | 45 | 20 | 22 | 35 | 37 | 35 | 76 | 32 |
| P8 | 16 | 37 | 27 | 24 | 38 | 51 | 33 | 77 | 36 |
| P9 | 15 | 35 | 23 | 23 | 36 | 50 | 35 | 82 | 38 |
| P10 | 11 | 28 | 19 | 7 | 14 | 17 | 26 | 65 | 32 |

 

A player’s plus-minus score is just the points margin (= PS – PC). So in the case of player P1 in Game 1, he was on court for 32 minutes during which time 54 points were scored and 58 points were conceded. Hence his plus-minus score is -4 (= 54 – 58). Given that the team won the game with a points margin of 23, the plus-minus score indicates a well below average performance. The full set of plus-minus scores is as follows:

 

| Player | Game 1 | Game 2 | Game 3 | Total | Average Benchmark | Benchmark Deviation |
|---|---|---|---|---|---|---|
| P1 | -4 | -29 | 9 | -24 | 8.50 | -32.50 |
| P2 | 18 | -23 | 9 | 4 | 10.27 | -6.27 |
| P3 | 5 | -11 | 6 | 0 | 12.85 | -12.85 |
| P4 | 6 | -31 | 11 | -14 | 12.94 | -26.94 |
| P5 | 9 | -45 | 21 | -15 | 18.35 | -33.35 |
| P6 | 25 | 1 | 42 | 68 | 26.46 | 41.54 |
| P7 | 25 | -2 | 44 | 67 | 31.92 | 35.08 |
| P8 | 10 | -13 | 41 | 38 | 26.42 | 11.58 |
| P9 | 12 | -14 | 44 | 42 | 28.81 | 13.19 |
| P10 | 9 | -3 | 33 | 39 | 28.48 | 10.52 |

 

As well as the plus-minus scores for each player in each game, I have also reported the total plus-minus score for each player over the three games. I have also calculated an average benchmark for each player by allocating the final points margin for each game pro rata based on minutes played. So, for example, player P1 played 32 out of 48 minutes in Game 1, which ended with a 23-point winning margin. An average performance would have implied a plus-minus score of 15.33 (= 23 x 32/48). His average benchmarks in Games 2 and 3 were -19.83 (= -34 x 28/48) and 13.00 (= 52 x 12/48), respectively. Summing the average benchmarks for each game gives an overall average benchmark of 8.50 for player P1. The final column reports the deviation of each player’s actual plus-minus score from this benchmark.
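The arithmetic is simple enough to verify in a few lines; this snippet reproduces player P1's figures from the tables above (48-minute games):

```python
# Player P1's plus-minus, pro-rata benchmark and deviation (Games 1-3).
mins     = [32, 28, 12]   # minutes played
scored   = [54, 35, 27]   # points scored while on court
conceded = [58, 64, 18]   # points conceded while on court
margins  = [23, -34, 52]  # final margins of the three games

plus_minus = [s - c for s, c in zip(scored, conceded)]       # [-4, -29, 9]
benchmark = sum(m * t / 48 for m, t in zip(margins, mins))   # 8.50
deviation = sum(plus_minus) - benchmark                      # -32.50
```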

 

In this example players P1 – P5 were given the most game time in Games 1 and 2 but all five players have negative benchmark deviations. The allocation of game time in Game 3 better reflects the benchmark deviations with players P6 – P10 given much more game time.

 

Limitations and Extensions to Plus-Minus Player Ratings

The advantage of the plus-minus approach is its simplicity. It is not dependent on detailed player performance data but only requires information on the starting line-ups, the timing of player switches, and the timing of points scored and conceded. The very first piece of work that I did for Saracens in March 2010 was to rate their players using a plus-minus approach. I focused on positional combinations – front row, locks, back row, half backs, centres, and backs – and calculated the plus-minus scores for each combination. Brendan Venter, the Director of Rugby, was very positive on the results and commented that “your numbers correspond to our intuitions”. It was on the basis of this report that I was engaged to work as their data analyst for five years. The plus-minus approach was used for player ratings in the early stages of the 2010/11 season but was eventually discarded in favour of a win-contribution approach.

 

One of the problems with the simple plus-minus approach is that it will give high scores to players who regularly play with very good players. So, if a particular player was fortunate enough to be playing regularly alongside Michael Jordan, they would have had a high plus-minus score but this reflects the exceptional ability of their team mate more than their own performance. My dear friend, the late Trevor Slack, one of the top people in sport management and a prof at the University of Alberta in Edmonton, used to call it the Wayne Gretzky effect. Those of you who know their ice hockey history will know exactly what Trevor meant. Gretzky was one of the true greats of the NHL and brought the best out of his team mates whenever he was on the ice. The Edmonton Oilers won four Stanley Cups with Gretzky in the 1980s.

 

Similarly it can be argued that the basic plus-minus approach does not make any allowance for the quality of the opposing players. Rookie players given more game time against weaker opponents will have their plus-minus scores inflated, just as those players who get proportionately more game time against stronger opponents will see their plus-minus scores reduced. One way around the problems of controlling for the quality of team mates and opponents is to use Adjusted Plus-Minus, which involves using regression analysis to model the points margin during a “stint” (i.e. a time interval when no player switches are made) as a function of the own and opposing players on court. The estimated coefficients represent the adjusted plus-minus scores. There have also been various attempts to include other performance data to create real adjusted plus-minus scores which represent a hybrid of the win-attribution and win-contribution approaches.
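A minimal sketch of the adjusted plus-minus regression, with placeholder stint data standing in for the real thing and ridge regularisation because stint designs are sparse and highly collinear:

```python
# Sketch: adjusted plus-minus. Each stint is one observation; each player
# is a column coded +1 (on court, own team), -1 (on court, opposition)
# or 0 (off). The outcome is the points margin during the stint.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.choice([-1, 0, 1], size=(500, 30))  # placeholder stint design
y = rng.normal(size=500)                    # placeholder stint margins

apm = Ridge(alpha=100.0).fit(X, y)
# apm.coef_[j] is player j's adjusted plus-minus estimate, controlling
# for the quality of the team mates and opponents sharing their stints.
```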

 

Overall the plus-minus approach offers a relatively simple way of measuring player performance without the need for detailed player performance data. But the approach works best in high scoring sports with frequent player switches such as basketball and ice hockey. The plus-minus approach is not well suited to football (soccer) which is low scoring and teams are restricted to only three substitutions.

 

15th September 2016

The Real Lessons of Moneyball

Executive Summary

  • Moneyball was a game-changer in raising general awareness of the possibilities for data analytics in elite sport.
  • Always remember that Moneyball is only “based on a true story” and does not provide an authentic representation of how data analytics developed at the Oakland A’s.
  • The conflict between scouting and analytics is exaggerated for dramatic effect.
  • The real lesson of Moneyball is the value of an evidence-based approach. This goes beyond the immediate context of player recruitment in pro baseball to embrace all coaching decisions in all sports.


The publication of Moneyball in the Fall of 2003 proved to be a real game-changer both for sports analytics and myself personally. The book, and subsequent Hollywood film with an A-List cast, has probably done more than anything else to raise general awareness in elite sport of the potential competitive advantages to be gained from data analytics.

 

I was visiting the University of Michigan to give some presentations on what business could learn from elite sport in September 2003, just after Moneyball was first published. At that point I was making good progress in the analysis of player performance data in football (soccer) and had constructed what I later called a structural hierarchical model of invasion team sports. As I was being driven to Detroit Airport at the end of my visit, Richard Wolfe, a sport management prof, told me that I had to read Moneyball, saying “it’s you but baseball”. I picked up the book at the airport around 6pm that Friday night and had completed my first read by 6am the following morning. Here was someone actually using data analytics in elite sport to gain a competitive advantage. And there’s nothing like success to persuade others to adopt an innovation. I now had real evidence of what analytics could do; I just needed access to coaches to spread the word – easier said than done. I lost count in the coming months of the number of conversations I started with “Have you read Moneyball?” But people started to take notice and the invitations to meet directors and coaches began to follow.

 

The first coaching staff to invite me into the inner sanctum was at Bolton Wanderers managed by Sam Allardyce, now the England manager. I made a presentation on Moneyball and the implications for football in early October 2004 at their quarterly coaches’ away day organised by Mike Forde, the Performance Director who subsequently became Chelsea’s Performance Director. Big Sam remained pretty quiet during the presentation, restricting himself to points of information and summarising the discussions, but never revealing his opinion on what I was saying. It was Sam’s assistant, Phil Brown, currently manager at Southend United, who was the most vocal and concerned that I seemed to be advocating the use of algorithms for team selection (which I wasn’t). Bolton followed up by getting me to do some analysis of the FA Premier League including identifying the critical success factors in away performances. I also outlined an e-screening procedure for identifying prospective player acquisitions to be prioritised in the scouting process. Although it was something of an achievement to have a Premiership team ask for an analytics input in 2005, the frustration was that I was kept at arm’s length from the coaching staff and only received limited feedback on my reports. Being told that your report had provoked an “interesting discussion by the coaches” was satisfying but nothing more. What I really needed to know was what precisely had interested the coaches and how could I expand and improve the analysis to deal with any limitations they saw in it. It is an important lesson – data analytics only really works when there is full engagement between the coaches and the data analysts. My subsequent experience at Saracens showed how much I could improve the analysis by having direct contact with the coaches and being included in their discussions. As one senior member of the coaching staff at Saracens put it, I effectively became an “auxiliary member of the coaching staff” in the same way as the performance analysts and sports scientists.

 

Of course the biggest impact of Moneyball for me personally was to eventually connect with Billy Beane and to work with him in exploring the potential for analytics in football – the Oakland A’s own the MLS San Jose Earthquakes franchise. Seeing Billy and his staff at work at the A’s was a great education and allowed me to fully appreciate the “true story”. The book and the film are after all only “based on a true story” and make no claim, particularly the film, to be an authentic representation of the development of data analytics at the A’s.

 

Having seen close-up how the A’s actually operate, I’ve been better placed to respond to the criticisms of Moneyball. For example, a head of scouting at a leading European football club recently put to me that “perhaps Moneyball had become a bit of an albatross”. This head of scouting is a former player and is a very progressive individual, open to innovation to improve how things are done. But when we met he was initially very wary of adopting a more analytical approach to scouting since he thought that this would mean a reduced role for the scouts. He was won over when I pointed out that an evidence-based approach would actually enhance the role of scouts since their scouting reports would become key data that would be used in a much more meaningful way rather than just gathering dust as I suspect happens to most scouting reports.

 

The Oakland A’s managed by Billy Beane operate in a fundamentally different way to the Hollywood A’s managed by Brad Pitt. There is an over-emphasis in the film for dramatic effect on the conflict between the traditional scouting approach and the analytical approach. But in reality what differentiated the A’s under Billy Beane was the commitment to an evidence-based approach and a preparedness to question conventional wisdom rather than relying on gut instinct. It was the questioning of conventional wisdom that attracted Michael Lewis to the story in the first place. He started his professional life as a financial trader, trying to make profits on the financial markets by exploiting market inefficiencies caused through the over-reliance by other traders on conventional wisdom that had become outdated. Lewis applied the same lens to the MLB players’ labour market and saw Billy Beane as a kindred spirit, taking advantage of the over-reliance on traditional scouting methods. In particular, Billy Beane had recognised that the market didn’t factor into hitter salaries the ability to draw walks. Hitter salaries were mainly driven by batting and slugging averages. As economists say, walks were a “free lunch” because conventional wisdom saw them more as a pitcher error than as due to the hitter’s skill in selecting when to swing and when not. Two economists, Hakes and Sauer (Journal of Economic Perspectives, 2006), have shown that on-base percentage (OBP), which includes walks, had no significant effect on hitter salaries in the five seasons prior to the publication of Moneyball, but in 2004 OBP was the single most significant predictor of hitter salaries. Conventional wisdom had changed because of the publication of Moneyball and, just as economic theory predicted, the ensuing market correction meant that particular free lunch quickly disappeared.

 

What is also forgotten is that, as in so many success stories, success was a long time coming. The evidence-based culture at the A’s was not created by Billy Beane (the film is very misleading in this respect) although he has played a leading role in the use of analytics by the A’s. But the possibility of gaining a competitive advantage from using sabermetrics was first recognised by Sandy Alderson, Billy’s predecessor as GM, and a long-time admirer of the work of Bill James. It was Sandy Alderson who employed the consultant, Eric Walker, to develop some “Bill James-type stuff that would be proprietary to the A’s”. Alderson passed on Walker’s report to Billy. The rest, as they say, is history.

 

It is the value of an evidence-based approach to all coaching decisions that is the real lesson of Moneyball. It is a lesson that goes beyond the immediate context of Moneyball – player recruitment in pro baseball – and is transferable to all sports. Yes the principal applications of data analytics remain in the area of player recruitment but as my experiences in football, rugby union and other sports have shown, all coaching decisions can potentially be supported to a greater or lesser extent by “knowing the numbers”, systematically analysing the available evidence both quantitative measures and qualitative assessments, and always preferring analysis over anecdote when justifying a course of action. That’s the real lesson of Moneyball.

 

7th September 2016

Rio 2016 and the Marginal Gains from Data Analytics

Executive Summary

  • Team GB’s success at Rio 2016 continues the strong upward trend in “fundamental performance” evident since Atlanta 1996.
  • The upward trend in Olympic performance has resulted from a “perfect storm” of a number of mutually-reinforcing forces including National Lottery funding, performance-related resource allocation, the central focus on Olympic success, and the widespread adoption of a marginal-gains philosophy.
  • Data analytics has been one component of the marginal-gains philosophy in a number of Olympic sports.
  • A marginal-gains philosophy, specifically the use of data analytics, is always more likely in resource-constrained teams in need of a “David strategy” in order to compete effectively with resource-richer rivals.

 

The United Kingdom is rightly basking in the glory of an outstanding performance at the Rio Olympics. Finishing second in the medal table behind the USA and ahead of China is a phenomenal achievement and it is no idle boast to claim that Team GB is now a sporting superpower. Team GB’s medal haul represented its highest medal total in the summer Olympics with the exception of London 1908. The target set by UK Sport was to exceed the 47 medals achieved at Beijing 2008, the previous highest ever in an overseas Olympics. Team GB smashed this target with 67 medals and in so doing became the first ever team to increase their medal total immediately after hosting the previous Olympics.

 

The transformation in Team GB’s performance can be seen in Figures 1 and 2 which track the medal total and number of gold medals, respectively, at the summer Olympics since World War Two. I have included a 3-period (i.e. 8-year) moving average to provide an underlying benchmark of “fundamental performance” to control for random variation. When you look at the medal total you can see that Team GB’s performance is characterised until 1996 by a long-term cyclical pattern somewhat exaggerated in the 1976 – 1996 cycle by the effects of the boycotts at Moscow 1980 and Los Angeles 1984. From this perspective there is very strong evidence of a structural break after Atlanta 1996 with a strong upward trend in performance levels. By 2016 the fundamental medal total is estimated to have risen to 59.7 medals compared to only 19.7 medals in 1996.

 

Figure 1: Team GB, Medal Total, Summer Olympics, 1948 – 2016


 

The structural break is even clearer in Figure 2 when you consider only the number of gold medals won. Unlike the medal total, the fundamental performance for gold medals over the 40-year period 1956 – 1996 more or less flatlines around an average of 4.0 gold medals. By 2016 the fundamental gold medal count had risen to 25.0 gold medals, a truly astonishing transformation.

 

Figure 2: Team GB, Gold Medals, Summer Olympics, 1948 – 2016


 

So why has Team GB been so successful in the last 20 years? The instant post-mortem by media pundits has focused attention on at least seven factors:

  1. The introduction of National Lottery funding for elite sport in 1997
  2. The performance-related allocation of funding to different sports that rewards medal success and penalises underperformance, which in turn funnels down to supporting only the best individual and team medal prospects
  3. The exceptional athletic talent pool
  4. The ability to attract and retain the best coaches and support staff
  5. The widespread adoption of the “marginal-gains” philosophy that focuses on the continuous search for innovation in equipment and athletic preparation to improve performance
  6. A four-year funding cycle geared principally to supporting Olympic performance with European and World Championships increasingly seen as stepping stones
  7. A more level playing field as anti-doping efforts create fairer competition, benefitting those teams such as Team GB that have been much more committed to eradicating the use of performance-enhancing drugs

As always, a radical transformation in performance is due to a “perfect storm” (or what economists call “cumulative causation”) when a number of mutually-reinforcing forces come together to create a virtuous circle of improvement (or a vicious circle of decline in the case of a sharp drop in performance levels).

 

Data analytics is one component of the marginal-gains philosophy in a number of the Olympic sports. It was particularly noteworthy that the track cyclist, Mark Cavendish, paid explicit tribute to the role of data analysts in an interview the day after winning his silver medal in the Omnium event. In this respect the contribution of the English Institute of Sport (EIS) must be recognised. The EIS is an international centre of excellence in the provision of support services to the Olympic sports. The EIS has long been a pioneer in performance analysis and has been heavily involved in the development of data analytics in several Olympic sports. Through the EIS I was invited to be a “fresh pair of eyes” in one of the Olympic sports in which the performance analysts were seeking to develop their analytical capabilities. I was very impressed not only by the knowledge and commitment of the performance analysts I worked with but also by their attitude – their openness to new ideas and willingness to work with others outside their own sport to develop their own expertise. It was a great example of the marginal-gains philosophy in action.

 

I listened to Matthew Syed, the Times sports columnist, being interviewed on Radio 5 Live on why Team GB had done so well. Syed is always great value as someone who combines the analysis of the search for excellence in elite sport with the experience of having himself competed at the highest level in table tennis. He stressed the importance of the marginal-gains philosophy in Olympic sports and lamented the failure of football to embrace a similar philosophy. When asked why football did not seem to adopt the marginal-gains philosophy, Syed blamed the short-termism in football and the attachment to conventional ways of doing things. He has expanded on the limiting effects of conventional wisdom in football in his Times column today – “Conventional wisdom rules in football, but the game’s coaches need to be more innovative” (Times, 29th Aug 2016).

 

The importance of a long-term approach alongside a marginal-gains philosophy has been unwittingly recognised by some of Team GB’s competitors. Australia and others have criticised GB track cycling for producing Olympic results out-of-line with their performances in major championships in the run up to Rio. But it is no surprise given that Olympic performance is the be-all-and-end-all for funding provided via UK Sport. World and European success in any sport is always very satisfying but the harsh reality of the pursuit of Olympic excellence in the UK is that every other major championship has become a stepping stone, a valuable learning opportunity, en route to the next Olympic Games.

 

Perhaps it’s the economist in me but I keep coming back to financial incentives as a key element in the story of Team GB’s Olympic success. The marginal-gains philosophy is a “David strategy”, a means for resource-constrained organisations to compete with resource-richer rivals. Premiership football, with the enormous revenue streams generated from media rights, faces relatively few financial constraints in the pursuit of sporting success and can afford to throw money at the problem. If the team isn’t performing, buy more star players and/or sack the head coach and hire a new one. The marginal-gains philosophy, specifically the use of data analytics, is always more likely to be adopted by teams with resource constraints due to their small economic size, salary caps or a reliance on public funding. It’s why ultimately data analytics is always more likely to play a meaningful role in Olympic sports and rugby union rather than Premiership football (and why I’m working for AZ Alkmaar in the Dutch Eredivisie rather than a Premiership club in England!)

 

29th August 2016

The Importance of Defence in Winning Promotion to the Premier League

Executive Summary

  • In the 2015/16 Championship goals conceded were a much stronger predictor of league performance than goals scored.
  • The strongest teams defensively all finished in the top six.
  • Despite their attacking strength the promotion hopes of both Fulham and Brentford were fatally undermined by defensive weaknesses.
  • Keeping a clean sheet has a league points value more than double that of scoring a single goal.
  • Defensive efficiency (based on the ratio of opposition shots on target inside the penalty box relative to total defensive contributions) is a very strong predictor of goals conceded.
  • Improved defence is a cost-effective Moneyball strategy for improving league performance based on tactical organisation, coaching and practice.

 

It’s very early days in the Championship and a mug’s game to predict with any certainty who will be promoted with so few games played. But already there are ominous signs for the promotion prospects of Nottingham Forest, Burton Albion and Blackburn Rovers. Why? Quite simply they have been defensively weak in their first four games and a strong defence is a fundamental building block for any team with serious ambitions of getting promoted to the Premier League.

 

So what’s the evidence to support the assertion that defence is crucial to winning promotion to the Premier League? Well let’s look at the final league table for the Championship last season.

 

Final League Table, FL Championship, 2015/16

| Club | P | W | D | L | F | A | Pts |
|---|---|---|---|---|---|---|---|
| Burnley | 46 | 26 | 15 | 5 | 72 | 35 | 93 |
| Middlesbrough | 46 | 26 | 11 | 9 | 63 | 31 | 89 |
| Brighton & Hove Albion | 46 | 24 | 17 | 5 | 72 | 42 | 89 |
| Hull City | 46 | 24 | 11 | 11 | 69 | 35 | 83 |
| Derby County | 46 | 21 | 15 | 10 | 66 | 43 | 78 |
| Sheffield Wednesday | 46 | 19 | 17 | 10 | 66 | 45 | 74 |
| Ipswich Town | 46 | 18 | 15 | 13 | 53 | 51 | 69 |
| Cardiff City | 46 | 17 | 17 | 12 | 56 | 51 | 68 |
| Brentford | 46 | 19 | 8 | 19 | 72 | 67 | 65 |
| Birmingham City | 46 | 16 | 15 | 15 | 53 | 49 | 63 |
| Preston North End | 46 | 15 | 17 | 14 | 45 | 45 | 62 |
| Queens Park Rangers | 46 | 14 | 18 | 14 | 54 | 54 | 60 |
| Leeds United | 46 | 14 | 17 | 15 | 50 | 58 | 59 |
| Wolverhampton Wanderers | 46 | 14 | 16 | 16 | 53 | 58 | 58 |
| Blackburn Rovers | 46 | 13 | 16 | 17 | 46 | 46 | 55 |
| Nottingham Forest | 46 | 13 | 16 | 17 | 43 | 47 | 55 |
| Reading | 46 | 13 | 13 | 20 | 52 | 59 | 52 |
| Bristol City | 46 | 13 | 13 | 20 | 54 | 71 | 52 |
| Huddersfield Town | 46 | 13 | 12 | 21 | 59 | 70 | 51 |
| Fulham | 46 | 12 | 15 | 19 | 66 | 79 | 51 |
| Rotherham United | 46 | 13 | 10 | 23 | 53 | 71 | 49 |
| Charlton Athletic | 46 | 9 | 13 | 24 | 40 | 80 | 40 |
| MK Dons | 46 | 9 | 12 | 25 | 39 | 69 | 39 |
| Bolton Wanderers | 46 | 5 | 15 | 26 | 41 | 81 | 30 |

 

A casual inspection of the league table suggests that goals conceded rather than goals scored are the better predictor of league performance, and this is indeed the case. Simple regression analysis indicates that goals conceded alone explains 74.4% of the variation in league points whereas goals scored explains only 60.2%. The clubs finishing in the top six all ranked as the best defensively, and the three clubs winning promotion to the Premier League – Burnley, Middlesbrough and Hull City (via the play-offs) – had the three best defensive records in the Championship, averaging only 0.73 goals conceded per game, which represents a 39.6% performance gain compared to the league average of 1.21 goals conceded per game. By comparison, the three promoted clubs had a 23.9% performance gain in attack, averaging 1.50 goals scored per game. Middlesbrough in particular provide a very compelling case for the relative importance of defence over attack, having the best defensive record in the Championship last season but ranking only 8th in goals scored, and winning automatic promotion on goal difference over Brighton and Hove Albion due crucially to conceding 11 fewer goals.

 

Brentford and Fulham both demonstrated last season how a weak defence can seriously undermine a promotion push. Both clubs out-scored Middlesbrough and indeed Brentford were joint top scorers with Burnley and Brighton and Hove Albion. But Brentford could only finish 9th after averaging 1.46 goals conceded per game, the 8th worst defensive performance in the Championship and only marginally better than MK Dons who were relegated. Fulham performed even worse defensively, having the 3rd worst defensive record after the two relegated clubs, Bolton Wanderers and Charlton Athletic.

 

The importance of defence is often undervalued, as Anderson and Sally persuasively argue in The Numbers Game (Viking, London, 2013). It’s partly a form of decision bias: attacking success (i.e. goals scored) is a positive observable event whereas defensive success is all about non-occurrences, not allowing the opposition to have shots at goal and not conceding goals. We tend to over-emphasise positive observables and undervalue non-occurrences. Anderson and Sally call it the ‘inequality central to understanding football’: in football 0 > 1 because ‘goals that don’t happen are more valuable than those that do’ (p. 131). The expected value in terms of league points from keeping a clean sheet in a game is considerably higher than the expected league points from scoring a single goal. Anderson and Sally analysed the points value of goals scored and conceded in the Premier League over 10 seasons and found that a clean sheet had an expected points value of nearly 2.5 whereas scoring a single goal had an expected points value of just over 1.0. A very similar pattern was observed in the Championship last season.
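The Championship version of that calculation is straightforward once you have match-level data; a sketch, assuming a hypothetical matches DataFrame with goals_for, goals_against and points columns:

```python
# Sketch: expected league points from a clean sheet versus a single goal.
# `matches` is a hypothetical match-level DataFrame (one row per team
# per match) with columns goals_for, goals_against and points (3/1/0).
clean_sheets = matches.loc[matches["goals_against"] == 0, "points"].mean()
one_goal = matches.loc[matches["goals_for"] == 1, "points"].mean()
print(clean_sheets, one_goal)  # roughly 2.3 vs 1.1 in the 2015/16 data
```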

 

Points Value of Goals Scored and Conceded, FL Championship, 2015/16


 

Keeping a clean sheet yielded an expected return of 2.3 league points in the Championship last season whereas scoring a single goal yielded an expected return of 1.1 league points. The three promoted clubs had the most clean sheets with Middlesbrough amassing 22 clean sheets while Burnley and Hull City both had 20 clean sheets. In contrast, Fulham had only 4 clean sheets and Brentford had only 8 clean sheets, the same number as the bottom club, Bolton Wanderers.

 

Defence is an obvious Moneyball strategy in the sense that developing an effective defence tends to be a more cost-effective way of improving performance. The football players’ labour market tends to reflect the decision bias towards goals scored with strikers attracting a substantial salary and transfer-fee premium. Defenders have tended to be undervalued although perhaps at the top end the recent transfer of the young defender, John Stones, from Everton to Manchester City, may signal a market correction. But an effective defence is also cost-effective because ultimately it is down to tactical organisation especially good positional decision-making that can be improved through coaching and time on the training pitch. Defending is reactive and in some ways much more amenable to coaching and practice than attacking which is more creative and instinctive and hence much more difficult to coach.

 

Effective defending is partly about effort but as always it is not just the quantity of defensive activity but also the quality of that defensive activity. Defensive effort as measured by the total number of challenges, blocks, interceptions and clearances partly reflects possession share with struggling teams having to defend more. In addition good defending is as much about being in the right place at the right time and so isn’t necessarily reflected in tally counts of actual defensive contributions – what Anderson and Sally call the “Maldini Principle” or “dogs that don’t bark”. I have found that a useful measure of effective defence is the ratio of opposition shots on target inside the box (measured as the deviation from the league average) relative to defensive effort. I call this ratio “defensive efficiency” (and scale it by 10⁴ for presentational purposes) since it measures defensive output (shots allowed) relative to input (defensive effort). Good defence is about restricting the number of opposition shots at goal but not all shots at goal are of equal threat as expected goals analysis has highlighted. The most dangerous shots are those from inside the penalty box on target and so restricting this type of shot is the critical aspect of effective defence.
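One reading of that definition, which reproduces the Derby County figure in the table below, is sketched here (with hypothetical per-team columns sot_box_allowed and def_effort):

```python
# Sketch: defensive efficiency = (league average - team's opposition
# shots on target inside the box) / defensive effort, scaled by 10^4,
# so allowing fewer dangerous shots per unit of effort scores higher.
league_avg = teams["sot_box_allowed"].mean()  # about 2.78 in 2015/16
teams["def_efficiency"] = ((league_avg - teams["sot_box_allowed"])
                           / teams["def_effort"]) * 1e4
# Derby County: (2.78 - 1.935) / 94.674 * 1e4 is roughly 89.2, matching
# the table below.
```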

 

Defensive Performance, FL Championship, 2015/16, Ranked by Defensive Efficiency

| Club | Defensive Effort | Opposition Shots | Opposition Shots on Target Inside Box | Defensive Efficiency | Goals Conceded |
|---|---|---|---|---|---|
| Derby County | 94.674 | 11.891 | 1.935 | 89.17 | 43 |
| Hull City | 96.065 | 10.587 | 1.957 | 85.62 | 35 |
| Middlesbrough | 99.804 | 11.370 | 2.109 | 67.16 | 31 |
| Blackburn Rovers | 93.304 | 11.652 | 2.348 | 46.21 | 46 |
| Sheffield Wednesday | 92.065 | 11.000 | 2.370 | 44.47 | 45 |
| Burnley | 96.043 | 14.391 | 2.370 | 42.63 | 35 |
| Brighton and Hove Albion | 92.761 | 11.804 | 2.413 | 39.45 | 42 |
| Preston North End | 95.543 | 11.957 | 2.435 | 36.03 | 45 |
| Wolverhampton Wanderers | 104.739 | 13.500 | 2.435 | 32.86 | 58 |
| Queens Park Rangers | 103.043 | 12.261 | 2.609 | 16.53 | 54 |
| Reading | 96.826 | 10.783 | 2.739 | 4.12 | 59 |
| Ipswich Town | 93.957 | 13.348 | 2.761 | 1.93 | 51 |
| Nottingham Forest | 110.674 | 14.043 | 2.826 | -4.26 | 47 |
| Huddersfield Town | 95.304 | 11.065 | 2.848 | -7.22 | 70 |
| Cardiff City | 96.565 | 12.717 | 2.870 | -9.38 | 51 |
| Bristol City | 91.826 | 12.783 | 2.957 | -19.33 | 71 |
| Birmingham City | 98.761 | 14.109 | 3.087 | -31.18 | 49 |
| Leeds United | 93.891 | 12.783 | 3.087 | -32.80 | 58 |
| Brentford | 92.239 | 12.978 | 3.196 | -45.17 | 67 |
| Charlton Athletic | 100.348 | 16.587 | 3.283 | -50.19 | 80 |
| Bolton Wanderers | 100.609 | 14.435 | 3.413 | -63.02 | 81 |
| MK Dons | 92.826 | 14.065 | 3.457 | -72.99 | 69 |
| Fulham | 96.087 | 14.478 | 3.500 | -75.04 | 79 |
| Rotherham United | 100.652 | 13.804 | 3.696 | -91.07 | 71 |

 

There is virtually no correlation between defensive effort and goals conceded (r = 0.026) whereas defensive efficiency is very highly correlated with goals conceded (r = -0.849). Derby County (89.2), Hull City (85.6) and Middlesbrough (67.2) were the highest ranked teams in terms of defensive efficiency with all three teams being promoted to the Premiership. The lowest ranked teams were Rotherham United (-91.1), MK Dons (-73.0) and Bolton Wanderers (-63.0) with all three teams finishing in the bottom four.

 

So as Championship managers evaluate the early-season form of their teams, the message is very clear to Philippe Montanier at Nottingham Forest, Nigel Clough at Burton Albion, and Owen Coyle at Blackburn Rovers – improve your defence quickly or both your promotion hopes and job security will decline very rapidly.

 

23rd August 2016

Are Pogba and Stones Really Worth The Money?

Executive Summary

  • Statistical models of the football transfer market show a very high level of systematic variation in transfer fees.
  • Transfer-fee inflation tends to be closely associated with revenue growth, particularly the growth of TV media revenues.
  • Transfer valuations of individual players depend on five main value-drivers: player quality, selling club, buying club, current contract expiry date, and market conditions.
  • Player quality can be captured using five quality indicators: age, career experience, current appearance rates, current and career scoring rates, and international caps.
  • Comparative (or benchmark) valuations of players involve combining the quality indicators and other value-drivers of transfer fees using weights extracted statistically from actual transfer fees (via regression analysis).
  • Fundamental valuations of players involve estimating the incremental revenue value of player contributions on and off the field. In the invasion-territorial team sports this requires a player rating system to combine multi-dimensional performance data into a single composite measure of overall player performance.
  • My player valuation algorithm indicates that the differential in the transfer valuations of Pogba and Stones is justified by Pogba’s greater experience, his goals contribution, and the greater size and status of his previous club, Juventus.

 

This week saw the two Manchester clubs splash the cash, paying a combined total of £136.5m in transfer fees for just two players. Manchester City paid Everton £47.5m for John Stones while Manchester United paid Juventus £89m for Paul Pogba. Are Pogba and Stones really worth the money? It was just this type of question that got me into sports analytics 20 years ago. Working with my good friend and fellow applied economist and sports fanatic, Steve Dobson, we investigated the economics of the football players’ transfer market in England. In particular, we wanted to know just how rational football clubs were in setting transfer fees. We put together a dataset covering 1,350 transfers between English clubs during the period from July 1990 through to August 1996, which included Alan Shearer’s world record transfer from Blackburn Rovers to Newcastle United for £15 million. We published a couple of journal articles on our findings and subsequently extended our research to include player transfers in non-league football.

 

In common with other studies of the English football transfer market in the mid-1990s, we found that the transfer market was very rational, with our statistical model able to explain around 80% of the variation in transfer fees. Football clubs were using the available information on player quality in a very systematic way to set transfer fees. Also, because our data covered six seasons and four different divisions, we were able to look at trends in transfer fees over time and found some evidence that the rate of increase in transfer fees reflected revenue growth. The current transfer window reinforces the relationship between revenue growth and transfer fees. The size of the transfer fees paid for Pogba and Stones is just part of another surge in transfer-fee inflation fuelled by the massive jump in Premiership TV revenues.

 

When we first started to present our findings at economics conferences, the media took quite a bit of interest, with several articles on the theme of “boffins apply science to the beautiful game”. We were repeatedly asked if the statistical analysis of transfer fees could be used to value players. This prompted me to start developing the SOCCER TRANSFERS player valuation system, and this really marked my switch from academic data analysis into sports analytics. My focus moved from building a statistical model to explain the variation in 1,350 transfer fees to developing a system to use player and market data to value individual players. Ultimately I constructed a valuation process, a way of bringing together different types of information about players and then converting that information into a financial value. Regression analysis identified the relevant information as well as estimating the conversion rates (known as implicit or hedonic prices) for converting the different types of player information into financial values.

 

My player valuation algorithm initially identified four main value-drivers: player quality, size and divisional status of the selling club, size and divisional status of the buying club, and transfer market inflation. In the mid-1990s there were no player performance data available beyond appearances, goals scored and disciplinary records. So player quality had to be measured using five principal quality indicators – age, career experience, current appearance rates, current and career scoring rates, and international caps. And remember, at that time there were no websites with comprehensive player data. Instead the data had to be painstakingly extracted by hand from the various editions of the Rothmans (now Sky Sports) Football Yearbook. Overseas players were particularly difficult to value because of the difficulties in obtaining data on leagues outside the UK.
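
To give a flavour of how the implicit prices are extracted, here is a stylised sketch of a loglinear hedonic transfer-fee regression. The file name and all variable names are hypothetical, and this is not my actual specification:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# transfers.csv is hypothetical: one row per transfer, with the fee, the five
# quality indicators, and selling/buying club and season controls.
df = pd.read_csv("transfers.csv")

# Loglinear hedonic regression: the coefficients are the implicit prices of
# each value-driver; age enters as a quadratic to allow for peak value.
model = smf.ols(
    "np.log(fee) ~ age + I(age**2) + career_apps + current_app_rate"
    " + scoring_rate + intl_caps + seller_size + buyer_size + C(season)",
    data=df,
).fit()
print(model.params)
```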

 

Another problem I encountered was that the initial analysis was pre-Bosman. The Bosman ruling was delivered by the European Court of Justice in December 1995 but initially only applied to cross-border transfers. It was not until 1998 that UK domestic transfers became subject to Bosman free agency, with no transfer fees payable for out-of-contract players over the age of 23. Fortunately, as I started to provide player and squad valuations for clubs, financial institutions and the courts, I was able to get access to confidential information on contract expiry dates, which allowed me to construct an adjustment (formally a polynomial decay function) to capture the decline in transfer value as players entered the last two years of their contract.
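
The actual decay function is confidential, but an invented stand-in illustrates the shape of the adjustment – full value with two or more years remaining, tapering polynomially to zero at expiry:

```python
def contract_adjustment(months_remaining: float) -> float:
    """Illustrative polynomial decay applied to a player's benchmark value.
    Full value with 24+ months left on the contract, tapering to zero at
    expiry. The coefficients here are invented; the real function differs."""
    if months_remaining >= 24:
        return 1.0
    t = max(months_remaining, 0.0) / 24.0
    return 2 * t - t ** 2  # hypothetical quadratic taper on [0, 1]

print(contract_adjustment(30))  # 1.0
print(contract_adjustment(12))  # 0.75
print(contract_adjustment(3))   # ~0.23
```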

 

The revised version of the player valuation algorithm, which I still use today, takes the general form:

SOCCER TRANSFERS Player Valuation System

Blog 5 Graphic

Effectively this approach provides a comparative (or benchmark) valuation of players in which the statistical analysis of actual transfer fees yields estimates of the implicit prices of the various indicators of player quality as well as valuing the impact of differences in buying and selling clubs, transfer market inflation and the remaining length of contract. This algorithm still works incredibly well today. In particular, there is little improvement in accuracy from including the very detailed player performance data now produced by commercial companies such as Opta and ProZone. The indicators of player quality I used 20 years ago retain their predictive value. Over the years the player valuation algorithm has been used for a number of purposes, including assisting teams in their transfer dealings, determining the required level of player insurance cover, providing an input into the corporate valuation of clubs, estimating the player asset values as security in debt transactions, and resolving legal and tax disputes. A variant of the algorithm was also developed to provide player salary benchmarks in the Scottish Premier League.

 

Although detailed player performance data provides little improvement in comparative player valuations, it does, however, open up the possibility of providing fundamental valuations of players based on an estimation of the incremental revenue gains generated by a player’s contributions on and off the field. Top players are very expensive assets. In any other business there would be an investment appraisal process involving the projection of the future stream of value expected to be generated by the acquired asset relative to the financial costs incurred. While professional sports teams will apply this type of due diligence to stadium and other tangible investments, most have deemed investment in playing talent to be too complex to be amenable to this type of approach. But the American sports economist, Gerald Scully, showed in a paper published in the American Economic Review in 1974 that it is possible to calculate financial values of players based on their playing contributions. Using data from Major League Baseball in 1968 and 1969, Scully developed a two-stage procedure in which he first estimated a regression model of the relationship between batting and pitching metrics (Scully used the slugging average and the strikeout-to-walk ratio) and team win%, and then estimated a second regression model for the relationship between team win% and team revenue. Using these two regression models, Scully could then calculate how much each player contributed to team performance and team revenue.
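
In code, Scully’s two-stage logic looks something like the sketch below; the data file, the market-size control and the player figure are illustrative placeholders rather than Scully’s actual variables:

```python
import pandas as pd
import statsmodels.formula.api as smf

teams = pd.read_csv("mlb_team_seasons.csv")  # hypothetical team-season data

# Stage 1: performance metrics -> team win%
stage1 = smf.ols("win_pct ~ slugging_avg + so_bb_ratio", data=teams).fit()

# Stage 2: team win% -> team revenue (Scully's paper included further controls)
stage2 = smf.ols("revenue ~ win_pct + market_size", data=teams).fit()

# A hitter's marginal revenue product: his contribution to team slugging,
# pushed through both stages.
player_slugging_contribution = 0.020  # illustrative figure
mrp = (player_slugging_contribution
       * stage1.params["slugging_avg"]
       * stage2.params["win_pct"])
print(mrp)
```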

 

Of course, to apply Scully’s methodology to the invasion-territorial sports such as the various codes of football, hockey and basketball, where player performance is multi-dimensional, you need to develop composite player rating systems to measure a player’s overall contribution. And you also need to build in an estimate of the player’s image value given the importance of media, merchandising and sponsorship revenues. The complexities of developing player rating systems in the invasion-territorial sports will be the subject of several future blogs.

 

But, to come back to the original question, are Pogba and Stones really worth the money? It is impossible to answer that question fully without knowledge of the total financial obligations involved in both deals including salary costs, transfer fees and agent fees. But it is possible to compare the transfer valuations of both players using my valuation algorithm. What I can say is that if Stones is valued at £47.5m under current market conditions, then the estimated valuation for Pogba on the same basis would be £86.4m. The difference between the two valuations reflects Pogba’s greater experience, his career scoring rate of around one goal every five games (Stones has only scored one league goal), and the greater status of Juventus compared to Everton. At the very top end of the market, even small differences in the value-drivers translate into very large differences in fees, which can be captured statistically by using a loglinear valuation algorithm that is multiplicative in fee levels. There is a clear rationality in the comparative transfer valuations of the two players. Only time will tell whether or not the huge transfer fees are justified by the ultimate bottom line of any player transaction, whatever the sport: performance on the field.

 

13th August 2016

The Complexity of Team Cohesion

Executive Summary

  • Team cohesion has been highlighted in a number of studies as a key driver of team performance.
  • But it is very difficult to separate out the effects of team cohesion from team quality, as well as from momentum and feedback effects.
  • Crucially, the impact of team cohesion on team performance depends on how long the head coach has been with the team.
  • Signing new higher-quality players is a double-edged sword since team quality will rise but, at least initially, team cohesion will fall.
  • And changing the head coach will also involve disruption effects, particularly when there was a high level of team cohesion under the previous head coach and hence inevitable resistance to change.

 

Ben Darwin, the former Australian rugby international, now runs his own sports consultancy, Gain Line Analytics. The main focus of his work is team cohesion, which he measures by his own trademarked metric, the Team Work Index (TWI). Ben has found that TWI accounts for as much as 40% of on-field performance. I had a long Skype call with Ben when he was just starting out as an analyst and found him to be very personable and knowledgeable. His experience in elite team sport gives him a real insight into the dynamics of team building and how to create (and destroy) that critical sporting intangible, team spirit.

 

I don’t know exactly how Ben defines team cohesion (TWI is his intellectual property) but I am pretty sure that fundamentally it must be a measure of how much time players on a team have played together, what I would call team shared experience (TSE). The relationship between TSE and team performance has been the subject of several academic studies. One of the first on the subject was published in 2002 by Berman et al., who used basketball data and found a significant link between TSE and team performance in the NBA. Along with my co-author, Andy Lockett, I have just published a study in the Journal of Management Studies using data from the FA Premier League over the ten seasons from 1996 to 2006, and we also found that TSE was a significant driver of team performance.

 

A significant link between TSE and team performance is no surprise. The difficulty arises in unravelling the multitude of factors influencing team performance. The analytical problem is necessarily a multivariate one, with the estimated impact of TSE on team performance crucially affected by how you control for team quality as well as for the dynamics and feedback effects. Increased TSE will improve team performance but better team performance can mean higher TSE in the future as teams try to retain a successful squad. The relationship runs both ways. And, of course, the added complication is that the richest teams have the financial power to recruit and retain top-quality players. But how much of their success is down to recruiting the best players and how much is down to building team cohesion between these top players? That in turn raises the question of the role of the coaching staff in integrating a group of individual players both tactically and emotionally. It follows that the coaching input should also be included as a driver of team performance. Modelling all of these possible factors is a very complex analytical problem but crucial to producing insights into team performance that have practical relevance. No wonder it took Andy and me nearly ten years to complete our research and get it published in a top academic journal.

 

The most important finding of our model of team performance in Premiership football, after controlling for team quality using wage costs as well as average team age and career experience, is that it is not player TSE on its own that has the most significant impact on team performance. Rather it is the interaction of player TSE and the length of time that the head coach has spent with the team (i.e. coach TSE). In other words, it is the shared experience of players and coaches together that drives performance. And the effect remains strong even after allowing for the dynamics of team performance across seasons (i.e. momentum effects) as well as the previously discussed feedback effects. There is a complex interaction between player TSE and coach TSE, as shown in Figure 1 below. The figure uses three scenarios – low, moderate and high player TSE – to illustrate how the impact of an increase in player TSE on team performance changes as player TSE and coach TSE increase. The biggest impact of building team cohesion occurs when teams have relatively low levels of player TSE, and the impact increases the longer that the coach has been with the team.

Blog 4 Graphic
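
For readers who like to see the specification, a stylised sketch of this kind of interaction model is given below; the variable names are illustrative rather than those used in our published paper, and the momentum and feedback corrections are omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

team_seasons = pd.read_csv("team_seasons.csv")  # hypothetical panel of team-seasons

# The '*' term expands to both main effects plus the player TSE x coach TSE
# interaction, which is where the action is in our results.
model = smf.ols(
    "league_points ~ np.log(wage_bill) + avg_age + avg_experience"
    " + player_tse * coach_tse",
    data=team_seasons,
).fit()
print(model.params)
```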

Our model of team performance captures very well the trade-off facing teams when they recruit new players. Signing players of higher quality will increase team quality but will reduce team cohesion. Player turnover is a double-edged sword when player TSE is so important. And the same goes for changing the head coach, which immediately wipes out all of the player-coach TSE. The new head coach will start with zero shared experience with the existing squad. Our model actually shows that the negative disruption effect of a new coach will be highest when the team has a high level of player TSE. A group of players who have been together for a long period may be particularly resistant to the changes introduced by a new coach as well as potentially reacting negatively to the increased uncertainty about their status in the team.

 

6th August 2016

The Iceman Cometh – Assessing the Effectiveness of Different Playing Styles at Euro 2016

Executive Summary

  • At Euro 2016, distance covered had a small positive effect on match outcomes, principally through improved defence.
  • There is also evidence that teams that made greater use of a short passing game created more scoring opportunities and scored more goals.
  • Italy averaged the highest distance covered while Spain averaged the highest number of attempted passes. Germany ranked highly for both distance covered and attempted passes.
  • Against Iceland, England dominated possession and created many more goal attempts but lost to a team that compensated by working harder in terms of distance covered. It was a similar story in the Final, where France dominated possession but Portugal ran more.
  • No playing style dominated Euro 2016 beyond pragmatic football: playing the style that best suits the players available and is most likely to be effective against specific opponents.

 

Euro 2016 is unlikely to be remembered as a great festival of football. No team really shone in the way that Spain did with their tiki-taka football style between 2008 and 2012. The headlines were made more by the underdogs most notably Iceland who made it to the quarter finals knocking out England on their way, and of course Wales who reached the semi-finals. The success of both Iceland and Wales certainly made a case for the importance of team cohesion (the subject of my next blog). But did we learn anything from Euro 2016 about the effectiveness of different playing styles? Has the relative demise of Spain seen a swing back in favour of the hard-working artisan over the possession-loving artist?

 

The merits or otherwise of the possession game is of course where football analytics has its origins, with the pioneering work of Wing Commander Reep from the early 1950s onwards and his data-based advocacy of a direct playing style. Reep remains a controversial figure and many see him as a strong argument against the use of analytics in the beautiful game. Reep found overwhelming evidence over a lifetime of coding and analysing games that most goals scored came from possessions involving three passes or less, and inferred from this that the long-ball game was likely to be most effective. It took until 2005 for Hughes and Franks to provide the definitive analytical critique of Reep’s conclusions. Quite simply, Reep’s fallacy was to focus only on possessions with a successful outcome. Once you include all possessions in the analysis, not just those that ended in a goal attempt or a goal, both goal attempts and goals scored per possession tend to increase with the number of passes, implying the complete opposite of Reep’s conclusion on the relative merits of direct and possession-based playing styles. The ability to complete passes is a general indicator of team quality, and teams that complete more passes tend to be better able to create scoring opportunities. Spain’s success with tiki-taka football was not a statistical anomaly but confirmation par excellence of Reep’s misinterpretation of the evidence.
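
The selection effect is easy to demonstrate with a toy simulation (all parameters invented purely for illustration): even when longer passing moves convert better per possession, most goals still come from short possessions simply because short possessions vastly outnumber long ones.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Possession lengths are heavily skewed short (most have 0-3 passes), but the
# chance of a goal rises with the number of passes. All figures are invented.
passes = rng.geometric(p=0.5, size=n) - 1
p_goal = np.minimum(0.01 * (1 + passes), 0.20)
goal = rng.random(n) < p_goal

short = passes <= 3
print(f"share of goals from <=3 passes: {goal[short].sum() / goal.sum():.0%}")  # Reep's headline
print(f"goals per possession, <=3 passes: {goal[short].mean():.4f}")
print(f"goals per possession,  >3 passes: {goal[~short].mean():.4f}")           # the part Reep missed
```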

 

Of course, and not just in (association) football but in all of the invasion-territorial sports, there is no simple linear relationship between share of possession and match outcomes. It is not the quantity of possession that counts as much as the quality of the possession in terms of pitch location and how the possession is used. A similar argument can be made when it comes to distance covered. It is not the distance covered or even the amount of high-intensity work that matters as much as the usefulness of the physical effort. The amount of high-intensity work can often be inversely related to the quality of a player’s decision making. “Reading the game” (i.e. exceptional spatial awareness) can allow players to be effective with minimal physical effort. Indeed much the same can be said about defensive actions in general. Players can defend space effectively by being in the right place at the right time without actually making a defensive action in statistical terms. Tally counts of defensive actions and cumulative totals of distance covered may not necessarily reflect defensive effectiveness.

 

Bearing in mind the obvious limitations of tally counts of passes and total distance covered, were there any discernible patterns in the effectiveness of different playing styles by teams in Euro 2016? I have extracted the data from UEFA’s own published statistics on every game. To ensure comparability I have only used the data for normal time and excluded extra time in elimination games tied after 90 minutes. I have analysed differences across games using win-loss analysis (i.e. comparing mean differences between winning and losing performances using t tests and effect sizes) and correlation analysis across all games, as well as ranking teams by game averages, and using cluster analysis to categorise teams.

 

So what do the data tell us about Euro 2016? First of all, there is evidence that distance covered had a small positive effect on match outcomes, principally through improved defence. Across all 102 team performances the correlation between distance covered and goals conceded is -0.117. A similar effect is found when comparing winning and losing team performances. Winning teams averaged 108.2 km whereas losing teams averaged 107.2 km. Although the difference is not statistically significant, it is consistent with a small positive effect on winning (Cohen’s d = 0.249).
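
For anyone replicating the win-loss analysis, here is a minimal sketch of the effect-size calculation – Cohen’s d using the pooled standard deviation – together with the t test from scipy; the two arrays are illustrative stand-ins for the winning and losing distance data:

```python
import numpy as np
from scipy.stats import ttest_ind

def cohens_d(winners: np.ndarray, losers: np.ndarray) -> float:
    """Standardised mean difference using the pooled standard deviation."""
    n1, n2 = len(winners), len(losers)
    pooled_var = (((n1 - 1) * winners.var(ddof=1) + (n2 - 1) * losers.var(ddof=1))
                  / (n1 + n2 - 2))
    return (winners.mean() - losers.mean()) / np.sqrt(pooled_var)

# Illustrative stand-ins for distance covered (km) in winning and losing
# team performances; the real analysis uses all 102 performances.
winning_km = np.array([108.9, 107.5, 109.1, 106.8])
losing_km = np.array([107.0, 106.1, 108.2, 107.4])
print(cohens_d(winning_km, losing_km), ttest_ind(winning_km, losing_km).pvalue)
```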

 

In the case of passing, the win-loss analysis yields a small-to-medium effect for the ratio of short-to-long attempted passes relative to the opposition’s ratio (Cohen’s d = 0.315). Winning teams tended to attempt proportionately more short passes than long passes compared to their opponents. Correlation analysis picks up similar tendencies. Attempting more short passes has a small positive effect on goals scored (r = 0.126) but a large positive effect on the number of goal attempts (r = 0.523). By contrast, attempting more long passes has a small negative effect on both goals scored (r = -0.147) and goal attempts (r = -0.099).

 

Table 1 reports game averages and rankings for distance covered, total passes attempted and short passes attempted for all 24 teams. Perhaps surprisingly for some, Italy had by far the highest distance covered, averaging 114.7 km per game. The other top teams in terms of distance covered were Ukraine (112.1 km), Czech Republic (112.1 km), Germany (112.0 km) and Iceland (110.3 km). Germany also ranked highly in terms of attempted passes, both all attempted passes (638.7) and attempted short passes (141.0), second only to Spain who averaged 648.0 attempted passes with an average of 181.0 attempted short passes. England ranked third on attempted short passes (127.8) and fourth for all attempted passes (500.3). France, Portugal and Switzerland were the other most highly ranked passing teams.

 

Table 1: Distance Covered and Attempted Passes, Game Averages and Rankings, Normal Time Only, by Team, Euro 2016

Team Distance Covered (m) Total Passes Attempted Short Passes Attempted
Game Average Ranking Game Average Ranking Game Average Ranking
Albania 104,744 20 350.67 19 104.67 12
Austria 107,116 15 456.00 8 113.00 8
Belgium 104,381 21 443.40 11 113.60 7
Croatia 107,028 16 389.75 15 91.75 16
Czech Rep. 112,111 3 317.00 21 83.33 20
England 107,859 12 500.25 4 127.75 3
France 106,550 17 484.86 5 119.71 4
Germany 111,950 4 638.67 2 141.00 2
Hungary 107,227 14 443.75 10 97.25 15
Iceland 110,305 5 259.00 23 60.60 23
Italy 114,656 1 408.20 12 81.60 21
N. Ireland 108,516 8 230.00 24 51.50 24
Poland 108,343 9 370.00 17.5 88.80 19
Portugal 107,885 11 473.86 6 117.43 5
Rep. of Ireland 103,192 24 279.75 22 67.25 22
Romania 103,311 23 346.33 20 97.33 14
Russia 110,014 6 462.67 7 91.67 17
Slovakia 108,968 7 391.50 14 107.50 10
Spain 107,628 13 648.00 1 181.00 1
Sweden 105,354 19 397.00 13 100.67 13
Switzerland 108,300 10 510.75 3 116.50 6
Turkey 104,164 22 370.00 17.5 109.00 9
Ukraine 112,133 2 448.67 9 106.00 11
Wales 105,873 18 388.83 16 89.83 18
All 107,923   424.50 103.40

 

Figure 1 provides a useful categorisation of teams based on distance covered and attempted passes using cluster analysis. What really stands out are the outliers, particularly the Republic of Ireland (low distance covered, low short passes), Northern Ireland (medium distance covered, low short passes), Iceland (medium distance covered, low short passes), Italy (high distance covered, below-average short passes), Spain (average distance covered, high short passes) and Germany (high distance covered, high short passes). Wales were below average on both metrics while England were average for distance covered and above average for attempted short passes.

 

Figure 1: Cluster Analysis of Attempted Short Passes and Distance Covered, Game Averages, Normal Time Only, by Team, Euro 2016

Blog 3 Graphic

 

Table 2 summarises two specific games – England’s defeat by Iceland and the first 90 minutes in the Final between France and Portugal. In both games the teams that played more lost out to teams that ran more. Both England and France dominated possession, had higher pass completion rates and created more goal attempts yet failed to win. The two winning teams compensated for their lack of possession and limited goal threat by working harder in terms of distance covered. Both Iceland and Portugal covered around 4 km more than their opponents.

 

Table 2: England vs Iceland and France vs Portugal, Euro 2016, Selected Team Metrics, Normal Time Only

Normal Time Only Round of 16 Final
England Iceland France Portugal
Distance Covered (m) 105,234 109,147 105,749 110,206
Total Passes Attempted 525 243 585 461
Short Passes Attempted 121 59 105 117
Long Passes Attempted 68 61 46 63
Pass Completion 85.9% 71.2% 91.3% 85.9%
Goal Attempts 18 8 17 6
Goals Scored 1 2 0 0

 

Perhaps the winners of the tournament summed it up best by progressing to the Final largely on the performance of their artist supreme, Ronaldo, particularly when it mattered most. But the early loss of Ronaldo in the Final saw Portugal triumph through hard work and defensive organisation. Maybe the lesson of Euro 2016 is that no particular playing style dominated and ultimately it was a triumph of pragmatic football, playing the style that best suits the players available and most likely to be effective against the specific opponents. And one final thought with the new Premiership just around the corner – can we expect Chelsea under Conte to emulate the high work rate of his Italian team at Euro 2016?

 

Endnote on Methods: Cluster Analysis

Cluster analysis is an important exploratory technique that can often generate useful summary categorisations. It can produce very effective visualisations when clustering on two dimensions only, as in the case here where I have used distance covered and attempted short passes. If the analysis involves more than two dimensions, it is sometimes possible to use factor analysis to combine the original metrics into two factors that can then be clustered and visualised. In particular, using four clusters with factor rotation can often produce very neat four-quadrant categorisations that are easy to interpret. Two things to bear in mind when using cluster analysis:

  1. It is crucial to standardise the metrics before applying cluster analysis when you are using metrics with very different scales of measurement (see the sketch after this list). In my case, distance covered would have dominated the allocation of teams to clusters if I had not converted both metrics into Z scores before applying the clustering procedure (K-Means clustering). In fact there was a 25% difference in the allocation of teams to clusters between using standardised and unstandardised metrics. Figure 1 displays the clusters in terms of the original units of measurement but the clusters were determined using Z scores.
  2. Cluster analysis, like so many other statistical techniques, can be susceptible to the undue influence of extreme observations (i.e. outliers). This is certainly the case in the analysis of playing styles at Euro 2016. It is always advisable to explore the effects of using different numbers of clusters as well as comparing the effects of excluding the outliers.
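
To make point 1 concrete, here is a minimal sketch of the standardise-then-cluster step using scikit-learn and a handful of the Table 1 teams (the library choice is mine; any K-Means implementation will do):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# A few teams from Table 1: [distance covered (m), attempted short passes]
teams = ["Spain", "Germany", "Italy", "Iceland", "England", "Rep. of Ireland"]
X = np.array([
    [107_628, 181.0],
    [111_950, 141.0],
    [114_656, 81.6],
    [110_305, 60.6],
    [107_859, 127.8],
    [103_192, 67.3],
])

# Without standardisation, distance covered (in metres) would swamp the
# passing metric; Z scores put both dimensions on the same footing.
Z = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(Z)
print(dict(zip(teams, labels)))
```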

 

23rd July 2016

Learning from the Recent Successes in English Rugby

Executive Summary

  • An analytical mindset has been a key component in the successes of both Saracens under Mark McCall and England under Eddie Jones.
  • The analytical mindset at Saracens grew out of Brendan Venter’s evidence-based, people-centred coaching philosophy influenced by his medical background.
  • Analytics can never guarantee success but, if properly harnessed, the power of analytics is to improve decision making by replacing guesswork with hard evidence, and ensuring that systematic analysis takes precedence over selective anecdotes.

 

Saturday 3rd October 2015 marked an incredible low for English rugby when a 33-13 defeat by Australia at Twickenham saw England crash out of the Rugby World Cup in the group stages of a tournament they were hosting and in which they had confidently expected to go all the way to the Final. Wind the clock forward just under nine months to Saturday 25th June 2016, when England completed a historic 3-0 series whitewash Down Under with a 44-40 defeat of Australia in Sydney. And that came on the back of England winning the Six Nations Grand Slam in March. What a transformation in a relatively short space of time, and one that English football fans will want to see repeated sooner rather than later following England’s ignominious exit from Euro 2016 after a 2-1 defeat by Iceland just two days after the English rugby success in Sydney.

 

The turnaround in the fortunes of the England national rugby team has been masterminded by a new head coach, Eddie Jones. Beyond the observables – the changes in team and squad selections and coaching personnel, and match performances – along with the head coach’s media pronouncements, it is always difficult from the outside to know exactly how things have changed behind the scenes. But in the case of the regime change under Eddie Jones, it is clear that there are strong connections with the other major recent success in English rugby, the transformation of Saracens in just seven years from a team that had never won the Premiership into multiple Premiership champions, crowned Champions of Europe in May 2016. I was privileged to work with Saracens from March 2010 to May 2015 and saw first-hand how the club transformed its culture, its way of doing things. And one of the critical elements in that cultural change was the adoption of an evidence-based approach to coaching decisions. An analytical mindset is a common characteristic of both Saracens under Mark McCall and England under Eddie Jones. And of course two of Eddie’s assistant coaches at England, Paul Gustard and Steve Borthwick, were key people in the Saracens transformation.

 

The Saracens transformation dates back to 2009 and the appointment of the South African international, Brendan Venter, as Director of Rugby (in succession to Eddie Jones – it’s a small world). As well as a wealth of rugby experience in both South Africa and England, Brendan also brought a coaching philosophy strongly influenced by his medical background. As a medical practitioner, Brendan is committed to an evidence-based, people-centred approach. As he told me once, “I treat people not diseases, and I make my decisions using the best available evidence.” Brendan created a culture at Saracens that put an emphasis on really looking after the players as individuals, supporting and encouraging their personal development, and promoting a strong tribal bonding between players, coaches and support staff. An important appointment in this respect was the sports psychologist, David Priestley, who in a very quiet, unassuming and professional way did an incredible job in creating the people-centred Saracens way.

 

Brendan also instigated a more systematic approach to the analysis of games by the coaches. Supported by the performance analyst, Matt Wells, again another true professional massively respected within the club, the coaches recorded their observations for their own areas of responsibility from the game video to create expert data on player performance. It showed incredible commitment to painstakingly go through the game video to analyse every contribution of every Saracens player. Paul Gustard, the defence and lineout coach, would study every tackle and defensive play, and every lineout, and then systematically record his observations. The other coaches – Alex Sanderson, Dan Vickers and Andy Farrell (and subsequently Kevin Sorrell and Joe Shaw) – did the same for their areas of responsibility. Gradually a mass of expert data was built up and I was brought in to interrogate the data and analyse patterns across games. A reporting structure was created to feed into the coaches’ review of games. A central component of these reports was a set of team and player key performance indicators (KPIs) colour-coded using a traffic-lights system.

 

After Brendan returned to South Africa in January 2011, Mark McCall was promoted to Director of Rugby. Brendan had played with Mark at London Irish and brought him to Saracens. The transition was almost seamless particularly as Brendan retained a senior advisory role as Technical Director. Like Brendan, Mark embraces an evidence-based approach – he’s a law graduate. Under Mark the use of analytics greatly expanded into game preparation, particularly opposition analysis.

 

It said a lot about Saracens that they had the confidence to bring in an outsider, a university professor with no previous experience in professional rugby, albeit one who is a qualified football coach (UEFA B Licence) with many years of experience in applying analytics to sport. Brendan and Mark encouraged me to think out of the box and use analytics to challenge the coaches. Both of them were acutely aware of the problem of groupthink when a group of people work so closely together over an extended period of time and develop a collective view of the world. I remain in awe of just how hard the coaches at Saracens worked to get the best out of themselves in order to facilitate the players to get the best out of themselves. They embodied the notion of servant leadership, leading by serving the collective. And importantly they were open-minded and, like all great teachers (coaches after all are teachers), always looking to learn more so that they could become better coaches.

 

And it wasn’t just the coaches and the other support staff – sports scientists, medics, strength and conditioning, and psychologists – who embraced analytics and an evidence-based approach; the players did so too, none more so than the team captain, Steve Borthwick. Often known as the Professor of the Lineout, Steve was meticulous in his preparation for games. He worked closely with Paul Gustard in developing the lineout strategy for games. Steve also received all of my team and opposition reports, and regularly followed up with questions and comments. It is absolutely no surprise that Steve has formed such an effective coaching partnership with Eddie Jones, first at Japan and now with England. And it is no surprise either that Paul Gustard has also become an integral member of Eddie’s coaching staff. All three share an incredible commitment to excellence, capacity for work and attention to detail. All three have an analytical mindset just like the coaching staff at Saracens. Analytics can never guarantee success but, if properly harnessed, the power of analytics is to improve decision making by replacing guesswork with hard evidence, and ensuring that systematic analysis takes precedence over selective anecdotes. Perhaps English football should heed these lessons.

 

14th July 2016