Rio 2016 and the Marginal Gains from Data Analytics

Executive Summary

  • Team GB’s success at Rio 2016 continues the strong upward trend in “fundamental performance” evident since Atlanta 1996.
  • The upward trend in Olympic performance has resulted from a “perfect storm” of a number of mutually-reinforcing forces including National Lottery funding, performance-related resource allocation, the central focus on Olympic success, and the widespread adoption of a marginal-gains philosophy.
  • Data analytics has been one component of the marginal-gains philosophy in a number of Olympic sports.
  • A marginal-gains philosophy, and specifically the use of data analytics, is always more likely to be adopted by resource-constrained teams in need of a “David strategy” to compete effectively with resource-richer rivals.


The United Kingdom is rightly basking in the glory of an outstanding performance at the Rio Olympics. Finishing second in the medal table behind the USA and ahead of China is a phenomenal achievement and it is no idle boast to claim that Team GB is now a sporting superpower. Team GB’s medal haul represented its highest medal total in the summer Olympics with the exception of London 1908. The target set by UK Sport was to exceed the 47 medals achieved at Beijing 2008, the previous highest ever in an overseas Olympics. Team GB smashed this target with 67 medals and in so doing became the first ever team to increase their medal total immediately after hosting the previous Olympics.


The transformation in Team GB’s performance can be seen in Figures 1 and 2 which track the medal total and number of gold medals, respectively, at the summer Olympics since World War Two. I have included a 3-period (i.e. 8-year) moving average to provide an underlying benchmark of “fundamental performance” to control for random variation. When you look at the medal total you can see that Team GB’s performance is characterised until 1996 by a long-term cyclical pattern somewhat exaggerated in the 1976 – 1996 cycle by the effects of the boycotts at Moscow 1980 and Los Angeles 1984. From this perspective there is very strong evidence of a structural break after Atlanta 1996 with a strong upward trend in performance levels. By 2016 the fundamental medal total is estimated to have risen to 59.7 medals compared to only 19.7 medals in 1996.
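The “fundamental performance” benchmark is straightforward to reproduce. A minimal sketch in Python, using Team GB’s published summer Olympic medal totals from Seoul 1988 onwards (transcribed by hand, so treat the exact figures as my own tally):

```python
# Team GB summer Olympic medal totals, Seoul 1988 to Rio 2016
# (publicly reported figures, transcribed by hand)
medals = {1988: 24, 1992: 20, 1996: 15, 2000: 28,
          2004: 30, 2008: 47, 2012: 65, 2016: 67}

def fundamental_performance(series, window=3):
    """Trailing moving average over `window` Games (8 years for window=3)."""
    years = sorted(series)
    return {year: sum(series[y] for y in years[i - window + 1: i + 1]) / window
            for i, year in enumerate(years) if i >= window - 1}

trend = fundamental_performance(medals)
print(round(trend[1996], 1), round(trend[2016], 1))  # 19.7 59.7, as in Figure 1
```

The 3-period window recovers exactly the 19.7 (1996) and 59.7 (2016) fundamental medal totals quoted above.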


Figure 1: Team GB, Medal Total, Summer Olympics, 1948 – 2016



The structural break is even clearer in Figure 2 when you consider only the number of gold medals won. Unlike the medal total, the fundamental performance for gold medals over the 40-year period 1956 – 1996 more or less flatlines around an average of 4.0 gold medals. By 2016 the fundamental gold medal count had risen to 25.0 gold medals, a truly astonishing transformation.


Figure 2: Team GB, Gold Medals, Summer Olympics, 1948 – 2016



So why has Team GB been so successful in the last 20 years? The instant post-mortem by media pundits has focused attention on at least seven factors:

  1. The introduction of National Lottery funding for elite sport in 1997
  2. The performance-related allocation of funding to different sports that rewards medal success and penalises underperformance, which in turn funnels down to supporting only the best individual and team medal prospects
  3. The exceptional athletic talent pool
  4. The ability to attract and retain the best coaches and support staff
  5. The widespread adoption of the “marginal-gains” philosophy that focuses on the continuous search for innovation in equipment and athletic preparation to improve performance
  6. A four-year funding cycle geared principally to supporting Olympic performance with European and World Championships increasingly seen as stepping stones
  7. A more level playing field as anti-doping efforts create fairer competition, benefitting those teams such as Team GB that have been much more committed to eradicating the use of performance-enhancing drugs

As always, a radical transformation in performance is due to a “perfect storm” (or what economists call “cumulative causation”) when a number of mutually-reinforcing forces come together to create a virtuous circle of improvement (or a vicious circle of decline in the case of a sharp drop in performance levels).


Data analytics is one component of the marginal-gains philosophy in a number of the Olympic sports. It was particularly noteworthy that the track cyclist, Mark Cavendish, paid explicit tribute to the role of data analysts in an interview the day after winning his silver medal in the Omnium event. In this respect the contribution of the English Institute of Sport (EIS) must be recognised. The EIS is an international centre of excellence in the provision of support services to the Olympic sports. It has long been a pioneer in performance analysis and has been heavily involved in the development of data analytics in several Olympic sports. Through the EIS I was invited to be a “fresh pair of eyes” in one of the Olympic sports in which the performance analysts were seeking to develop their analytical capabilities. I was very impressed not only by the knowledge and commitment of the performance analysts I worked with but also by their attitude – their openness to new ideas and willingness to work with others outside their own sport to develop their own expertise. It was a great example of the marginal-gains philosophy in action.


I listened to Matthew Syed, the Times sports columnist, being interviewed on Radio 5 Live on why Team GB had done so well. Syed is always great value as someone who combines the analysis of the search for excellence in elite sport with the experience of having himself competed at the highest level in table tennis. He stressed the importance of the marginal-gains philosophy in Olympic sports and lamented the failure of football to embrace a similar philosophy. When asked why football did not seem to adopt the marginal-gains philosophy, Syed blamed the short-termism in football and the attachment to conventional ways of doing things. He has expanded on the limiting effects of conventional wisdom in football in his Times column today – “Conventional wisdom rules in football, but the game’s coaches need to be more innovative” (Times, 29th Aug 2016).


The importance of a long-term approach alongside a marginal-gains philosophy has been unwittingly recognised by some of Team GB’s competitors. Australia and others have criticised GB track cycling for producing Olympic results out-of-line with their performances in major championships in the run up to Rio. But it is no surprise given that Olympic performance is the be-all-and-end-all for funding provided via UK Sport. World and European success in any sport is always very satisfying but the harsh reality of the pursuit of Olympic excellence in the UK is that every other major championship has become a stepping stone, a valuable learning opportunity, en route to the next Olympic Games.


Perhaps it’s the economist in me but I keep coming back to financial incentives as a key element in the story of Team GB’s Olympic success. The marginal-gains philosophy is a “David strategy”, a means for resource-constrained organisations to compete with resource-richer rivals. Premiership football clubs, with the enormous revenue streams generated from their media rights, face relatively few financial constraints in the pursuit of sporting success and can afford to throw money at the problem. If the team isn’t performing, buy more star players and/or sack the head coach and hire a new one. The marginal-gains philosophy, specifically the use of data analytics, is always more likely to be adopted by teams with resource constraints due to their small economic size, salary caps or a reliance on public funding. It’s why ultimately data analytics is always more likely to play a meaningful role in Olympic sports and rugby union rather than Premiership football (and why I’m working for AZ Alkmaar in the Dutch Eredivisie rather than a Premiership club in England!)


29th August 2016

The Importance of Defence in Winning Promotion to the Premier League

Executive Summary

  • In the 2015/16 Championship goals conceded were a much stronger predictor of league performance than goals scored.
  • The strongest teams defensively all finished in the top six.
  • Despite their attacking strength the promotion hopes of both Fulham and Brentford were fatally undermined by defensive weaknesses.
  • Keeping a clean sheet has a league points value more than double that of scoring a single goal.
  • Defensive efficiency (based on the ratio of opposition shots on target inside the penalty box relative to total defensive contributions) is a very strong predictor of goals conceded.
  • Improved defence is a cost-effective Moneyball strategy for improving league performance based on tactical organisation, coaching and practice.


It’s very early days in the Championship and a mug’s game to predict with any certainty who will be promoted with so few games played. But already there are ominous signs for the promotion prospects of Nottingham Forest, Burton Albion and Blackburn Rovers. Why? Quite simply they have been defensively weak in their first four games and a strong defence is a fundamental building block for any team with serious ambitions of getting promoted to the Premier League.


So what’s the evidence to support the assertion that defence is crucial to winning promotion to the Premier League? Well let’s look at the final league table for the Championship last season.


Final League Table, FL Championship, 2015/16

  P W D L F A Pts
Burnley 46 26 15 5 72 35 93
Middlesbrough 46 26 11 9 63 31 89
Brighton & Hove Albion 46 24 17 5 72 42 89
Hull City 46 24 11 11 69 35 83
Derby County 46 21 15 10 66 43 78
Sheffield Wednesday 46 19 17 10 66 45 74
Ipswich Town 46 18 15 13 53 51 69
Cardiff City 46 17 17 12 56 51 68
Brentford 46 19 8 19 72 67 65
Birmingham City 46 16 15 15 53 49 63
Preston North End 46 15 17 14 45 45 62
Queens Park Rangers 46 14 18 14 54 54 60
Leeds United 46 14 17 15 50 58 59
Wolverhampton Wanderers 46 14 16 16 53 58 58
Blackburn Rovers 46 13 16 17 46 46 55
Nottingham Forest 46 13 16 17 43 47 55
Reading 46 13 13 20 52 59 52
Bristol City 46 13 13 20 54 71 52
Huddersfield Town 46 13 12 21 59 70 51
Fulham 46 12 15 19 66 79 51
Rotherham United 46 13 10 23 53 71 49
Charlton Athletic 46 9 13 24 40 80 40
MK Dons 46 9 12 25 39 69 39
Bolton Wanderers 46 5 15 26 41 81 30


A casual inspection of the league table suggests that goals conceded rather than goals scored are the better predictor of league performance, and this is indeed the case. Simple regression analysis indicates that goals conceded alone explains 74.4% of the variation in league points whereas goals scored explains only 60.2%. The clubs finishing in the top six all ranked as the best defensively, and the three clubs winning promotion to the Premier League – Burnley, Middlesbrough and Hull City (via the play-offs) – had the three best defensive records in the Championship. They averaged only 0.73 goals conceded per game, a 39.6% performance gain compared to the league average of 1.21 goals conceded per game; by comparison the three promoted clubs had only a 23.9% performance gain in attack, averaging 1.50 goals scored per game. Middlesbrough in particular provide a very compelling case for the relative importance of defence over attack: they had the best defensive record in the Championship last season despite ranking only 8th in goals scored, and won automatic promotion on goal difference over Brighton and Hove Albion due crucially to conceding 11 fewer goals.
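These figures can be checked directly from the final table. A quick sketch, with the table transcribed as (goals for, goals against, points) and explanatory power measured as the squared correlation from a one-variable regression:

```python
# (goals for, goals against, points) for each club, top to bottom,
# transcribed from the final 2015/16 Championship table above
table = [
    (72, 35, 93), (63, 31, 89), (72, 42, 89), (69, 35, 83), (66, 43, 78),
    (66, 45, 74), (53, 51, 69), (56, 51, 68), (72, 67, 65), (53, 49, 63),
    (45, 45, 62), (54, 54, 60), (50, 58, 59), (53, 58, 58), (46, 46, 55),
    (43, 47, 55), (52, 59, 52), (54, 71, 52), (59, 70, 51), (66, 79, 51),
    (53, 71, 49), (40, 80, 40), (39, 69, 39), (41, 81, 30),
]

def r_squared(xs, ys):
    """Explained variation from a one-variable linear regression of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov * cov / (var_x * var_y)

scored, conceded, points = zip(*table)
print(f"R-squared, goals scored:   {r_squared(scored, points):.3f}")
print(f"R-squared, goals conceded: {r_squared(conceded, points):.3f}")
```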


Brentford and Fulham both demonstrated last season how a weak defence can seriously undermine a promotion push. Both clubs out-scored Middlesbrough and indeed Brentford were joint top scorers with Burnley and Brighton and Hove Albion. But Brentford could only finish 9th after averaging 1.46 goals conceded per game, the 8th worst defensive performance in the Championship and only marginally better than MK Dons who were relegated. Fulham performed even worse defensively, having the 3rd worst defensive record after the two relegated clubs, Bolton Wanderers and Charlton Athletic.


The importance of defence is often undervalued, as Anderson and Sally persuasively argue in The Numbers Game (Viking, London, 2013). It’s partly a form of decision bias because attacking success (i.e. goals scored) is a positive observable event whereas defensive success is all about non-occurrences – not allowing the opposition to have shots at goal and not conceding goals. We tend to over-emphasise positive observables and undervalue non-occurrences. Anderson and Sally call it the ‘inequality central to understanding football’: in football 0 > 1 because ‘goals that don’t happen are more valuable than those that do’ (p. 131). The expected value in terms of league points from keeping a clean sheet in a game is considerably higher than the expected league points from scoring a single goal. Anderson and Sally analysed the points value of goals scored and conceded in the Premier League over 10 seasons and found that a clean sheet had an expected points value of nearly 2.5 whereas scoring a single goal had an expected points value of only just over 1.0. A very similar pattern was observed in the Championship last season.


Points Value of Goals Scored and Conceded, FL Championship, 2015/16



Keeping a clean sheet yielded an expected return of 2.3 league points in the Championship last season whereas scoring a single goal yielded an expected return of 1.1 league points. The three promoted clubs had the most clean sheets with Middlesbrough amassing 22 clean sheets while Burnley and Hull City both had 20 clean sheets. In contrast, Fulham had only 4 clean sheets and Brentford had only 8 clean sheets, the same number as the bottom club, Bolton Wanderers.
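These expected points values are simple conditional averages over match-level results. A sketch of the calculation, using a purely hypothetical handful of results rather than the actual Championship data:

```python
def match_points(goals_for, goals_against):
    """League points earned from a single result."""
    if goals_for > goals_against:
        return 3
    return 1 if goals_for == goals_against else 0

def expected_points(matches, condition):
    """Average points over the matches satisfying `condition`."""
    pts = [match_points(gf, ga) for gf, ga in matches if condition(gf, ga)]
    return sum(pts) / len(pts)

# A purely hypothetical handful of (goals for, goals against) results
sample = [(2, 0), (1, 1), (0, 0), (0, 2), (3, 1), (1, 0)]

clean_sheet_value = expected_points(sample, lambda gf, ga: ga == 0)
single_goal_value = expected_points(sample, lambda gf, ga: gf == 1)
print(clean_sheet_value, single_goal_value)
```

Run over a full season of match results, the same two conditional averages give the 2.3 and 1.1 figures quoted above.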


Defence is an obvious Moneyball strategy in the sense that developing an effective defence tends to be a more cost-effective way of improving performance. The football players’ labour market tends to reflect the decision bias towards goals scored with strikers attracting a substantial salary and transfer-fee premium. Defenders have tended to be undervalued although perhaps at the top end the recent transfer of the young defender, John Stones, from Everton to Manchester City, may signal a market correction. But an effective defence is also cost-effective because ultimately it is down to tactical organisation especially good positional decision-making that can be improved through coaching and time on the training pitch. Defending is reactive and in some ways much more amenable to coaching and practice than attacking which is more creative and instinctive and hence much more difficult to coach.


Effective defending is partly about effort but, as always, it is not just the quantity of defensive activity that matters but also its quality. Defensive effort as measured by the total number of challenges, blocks, interceptions and clearances partly reflects possession share, with struggling teams having to defend more. In addition, good defending is as much about being in the right place at the right time and so isn’t necessarily reflected in tally counts of actual defensive contributions – what Anderson and Sally call the “Maldini Principle” or “dogs that don’t bark”. I have found that a useful measure of effective defence is the ratio of opposition shots on target inside the box (measured as the deviation from the league average) relative to defensive effort. I call this ratio “defensive efficiency” (and scale it by 10,000 for presentational purposes) since it measures defensive output (shots allowed) relative to input (defensive effort). Good defence is about restricting the number of opposition shots at goal but not all shots at goal are of equal threat, as expected-goals analysis has highlighted. The most dangerous shots are those on target from inside the penalty box, and so restricting this type of shot is the critical aspect of effective defence.


Defensive Performance, FL Championship, 2015/16, Ranked by Defensive Efficiency

  Defensive Effort Opposition Shots Opposition Shots On Target Inside Box Defensive Efficiency Goals Conceded
Derby County 94.674 11.891 1.935 89.17 43
Hull City 96.065 10.587 1.957 85.62 35
Middlesbrough 99.804 11.370 2.109 67.16 31
Blackburn Rovers 93.304 11.652 2.348 46.21 46
Sheffield Wednesday 92.065 11.000 2.370 44.47 45
Burnley 96.043 14.391 2.370 42.63 35
Brighton and Hove Albion 92.761 11.804 2.413 39.45 42
Preston North End 95.543 11.957 2.435 36.03 45
Wolverhampton Wanderers 104.739 13.500 2.435 32.86 58
Queens Park Rangers 103.043 12.261 2.609 16.53 54
Reading 96.826 10.783 2.739 4.12 59
Ipswich Town 93.957 13.348 2.761 1.93 51
Nottingham Forest 110.674 14.043 2.826 -4.26 47
Huddersfield Town 95.304 11.065 2.848 -7.22 70
Cardiff City 96.565 12.717 2.870 -9.38 51
Bristol City 91.826 12.783 2.957 -19.33 71
Birmingham City 98.761 14.109 3.087 -31.18 49
Leeds United 93.891 12.783 3.087 -32.80 58
Brentford 92.239 12.978 3.196 -45.17 67
Charlton Athletic 100.348 16.587 3.283 -50.19 80
Bolton Wanderers 100.609 14.435 3.413 -63.02 81
MK Dons 92.826 14.065 3.457 -72.99 69
Fulham 96.087 14.478 3.500 -75.04 79
Rotherham United 100.652 13.804 3.696 -91.07 71


There is virtually no correlation between defensive effort and goals conceded (r = 0.026) whereas defensive efficiency is very highly correlated with goals conceded (r = -0.849). Derby County (89.2), Hull City (85.6) and Middlesbrough (67.2) were the highest ranked teams in terms of defensive efficiency with all three teams being promoted to the Premiership. The lowest ranked teams were Rotherham United (-91.1), MK Dons (-73.0) and Bolton Wanderers (-63.0) with all three teams finishing in the bottom four.
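For the record, the defensive-efficiency calculation can be reconstructed from the table above: the deviation of a team’s opposition shots on target inside the box from the league average, divided by defensive effort and scaled by 10,000. A sketch that also reproduces the correlation with goals conceded:

```python
# (defensive effort, opposition shots on target inside box, goals conceded),
# transcribed from the table above
teams = {
    "Derby County": (94.674, 1.935, 43),
    "Hull City": (96.065, 1.957, 35),
    "Middlesbrough": (99.804, 2.109, 31),
    "Blackburn Rovers": (93.304, 2.348, 46),
    "Sheffield Wednesday": (92.065, 2.370, 45),
    "Burnley": (96.043, 2.370, 35),
    "Brighton and Hove Albion": (92.761, 2.413, 42),
    "Preston North End": (95.543, 2.435, 45),
    "Wolverhampton Wanderers": (104.739, 2.435, 58),
    "Queens Park Rangers": (103.043, 2.609, 54),
    "Reading": (96.826, 2.739, 59),
    "Ipswich Town": (93.957, 2.761, 51),
    "Nottingham Forest": (110.674, 2.826, 47),
    "Huddersfield Town": (95.304, 2.848, 70),
    "Cardiff City": (96.565, 2.870, 51),
    "Bristol City": (91.826, 2.957, 71),
    "Birmingham City": (98.761, 3.087, 49),
    "Leeds United": (93.891, 3.087, 58),
    "Brentford": (92.239, 3.196, 67),
    "Charlton Athletic": (100.348, 3.283, 80),
    "Bolton Wanderers": (100.609, 3.413, 81),
    "MK Dons": (92.826, 3.457, 69),
    "Fulham": (96.087, 3.500, 79),
    "Rotherham United": (100.652, 3.696, 71),
}

league_avg = sum(shots for _, shots, _ in teams.values()) / len(teams)

def defensive_efficiency(effort, shots_on_target_inside_box):
    """Deviation from the league average relative to effort, scaled by 10,000."""
    return (league_avg - shots_on_target_inside_box) / effort * 1e4

def correlation(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

efficiency = {t: defensive_efficiency(e, s) for t, (e, s, _) in teams.items()}
conceded = [c for _, _, c in teams.values()]
r = correlation(list(efficiency.values()), conceded)
print(f"Derby County efficiency: {efficiency['Derby County']:.2f}")
print(f"Correlation with goals conceded: {r:.3f}")
```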


So as Championship managers evaluate the early-season form of their teams, the message is very clear to Philippe Montanier at Nottingham Forest, Nigel Clough at Burton Albion, and Owen Coyle at Blackburn Rovers – improve your defence quickly or both your promotion hopes and job security will decline very rapidly.


23rd August 2016

Are Pogba and Stones Really Worth The Money?

Executive Summary

  • Statistical models of the football transfer market show a very high level of systematic variation in transfer fees.
  • Transfer-fee inflation tends to be closely associated with revenue growth, particularly the growth of TV media revenues.
  • Transfer valuations of individual players depend on five main value-drivers: player quality, selling club, buying club, current contract expiry date, and market conditions.
  • Player quality can be captured using five quality indicators: age, career experience, current appearance rates, current and career scoring rates, and international caps
  • Comparative (or benchmark) valuations of players involve combining the quality indicators and other value-drivers of transfer fees using weights extracted statistically from actual transfer fees (via regression analysis).
  • Fundamental valuations of players involve estimating the incremental revenue value of player contributions on and off the field. In the invasion-territorial team sports this requires a player rating system to combine multi-dimensional performance data into a single composite measure of overall player performance.
  • My player valuation algorithm indicates that the differential in the transfer valuations of Pogba and Stones is justified by Pogba’s greater experience, his goals contribution, and the greater size and status of his previous club, Juventus.


This week saw the two Manchester clubs splash the cash, paying a combined total of £136.5m in transfer fees for just two players. Manchester City paid Everton £47.5m for John Stones while Manchester United paid Juventus £89m for Paul Pogba. Are Pogba and Stones really worth the money? It was just this type of question that got me into sports analytics 20 years ago. Working with my good friend and fellow applied economist and sports fanatic, Steve Dobson, we investigated the economics of the football players’ transfer market in England. In particular we wanted to know just how rational football clubs were in setting transfer fees. We put together a dataset covering 1,350 transfers between English clubs during the period from July 1990 through to August 1996, which included Alan Shearer’s world record transfer from Blackburn Rovers to Newcastle United for £15 million. We published a couple of journal articles on our findings and subsequently extended our research to include player transfers in non-league football.


In common with other studies of the English football transfer market in the mid-1990s, we found that the transfer market was very rational, with our statistical model able to explain around 80% of the variation in transfer fees. Football clubs were using the available information on player quality in a very systematic way to set transfer fees. Also, because our data covered six seasons and four different divisions, we were able to look at trends in transfer fees over time and found some evidence that the rate of increase in transfer fees reflected revenue growth. The current transfer window reinforces the relationship between revenue growth and transfer fees. The size of the transfer fees paid for Pogba and Stones is just part of another surge in transfer-fee inflation fuelled by the massive jump in Premiership TV revenues.


When we first started to present our findings at economics conferences, the media took quite a bit of interest, with several articles on the theme of “boffins apply science to the beautiful game”. We were repeatedly asked if the statistical analysis of transfer fees could be used to value players. This prompted me to start to develop the SOCCER TRANSFERS player valuation system, and this really marks my switch from academic data analysis into sports analytics. My focus moved from building a statistical model to explain the variation in 1,350 transfer fees to developing a system to use player and market data to value individual players. Ultimately I constructed a valuation process, a way of bringing together different types of information about players and then converting that information into a financial value. Regression analysis identified the relevant information as well as estimating the conversion rates (known as implicit or hedonic prices) for converting the different types of player information into financial values.


My player valuation algorithm initially identified four main value-drivers: player quality, size and divisional status of the selling club, size and divisional status of the buying club, and transfer market inflation. In the mid-1990s there were no player performance data available beyond appearances, goals scored and disciplinary records. So player quality had to be measured using five principal quality indicators – age, career experience, current appearance rates, current and career scoring rates, and international caps. And remember, at that time there were no websites with comprehensive player data. Instead the data had to be painstakingly extracted by hand from the various editions of the Rothmans (now Sky Sports) Football Yearbook. Overseas players were particularly difficult to value because of the difficulties in obtaining data on leagues outside the UK.


Another problem I encountered was that the initial analysis was pre-Bosman. The Bosman ruling was first published by the European Court of Justice in September 1995 but initially only applied to cross-border transfers. It was not until 1998 that UK domestic transfers became subject to Bosman free agency with no transfer fees payable for out-of-contract players over the age of 23. Fortunately as I started to provide player and squad valuations for clubs, financial institutions and the courts, I was able to get access to confidential information on contract expiry dates which allowed me to construct an adjustment (formally a polynomial decay function) to capture the decline in transfer value as players entered the last two years of their contract.


The revised version of the player valuation algorithm, which I still use today, takes the general form:

SOCCER TRANSFERS Player Valuation System


Effectively this approach provides a comparative (or benchmark) valuation of players in which the statistical analysis of actual transfer fees yields estimates of the implicit prices of the various indicators of player quality, as well as valuing the impact of differences in buying and selling clubs, transfer market inflation and the remaining length of contract. This algorithm still works incredibly well today. In particular there is little improvement in accuracy from including the very detailed player performance data now produced by commercial companies such as Opta and ProZone. The indicators of player quality I used 20 years ago retain their predictive value. Over the years the player valuation algorithm has been used for a number of purposes including assisting teams in their transfer dealings, determining the required level of player insurance cover, providing an input into the corporate valuation of clubs, estimating player asset values as security in debt transactions, and resolving legal and tax disputes. A variant of the algorithm was also developed to provide player salary benchmarks in the Scottish Premier League.
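To make the general form concrete, here is a heavily simplified sketch of a loglinear benchmark valuation. Every coefficient, the feature set and the exact shape of the contract decay are illustrative placeholders, not the estimates used in the SOCCER TRANSFERS system:

```python
import math

# Illustrative implicit prices for a loglinear benchmark valuation.
# Every coefficient here is a hypothetical placeholder; the real weights
# are estimated by regression on actual transfer fees.
BETA = {
    "age": -0.03,           # per year above a reference age
    "experience": 0.002,    # career league appearances
    "scoring_rate": 1.5,    # career goals per game
    "caps": 0.015,          # full international caps
    "selling_club": 0.4,    # size/status index of the selling club
    "buying_club": 0.3,     # size/status index of the buying club
}

def contract_adjustment(years_remaining):
    """Hypothetical polynomial decay over the final two contract years."""
    if years_remaining >= 2:
        return 1.0
    return max(years_remaining / 2, 0.0) ** 2

def benchmark_fee(features, base_fee_m=1.0, inflation_index=1.0, years_remaining=3.0):
    """Comparative valuation in £m: loglinear quality multiplier x market conditions."""
    log_multiplier = sum(BETA[k] * features.get(k, 0.0) for k in BETA)
    return (base_fee_m * inflation_index * math.exp(log_multiplier)
            * contract_adjustment(years_remaining))

# In a loglinear model, small differences in the value-drivers translate
# into multiplicative (hence large) differences in the fee
scorer = benchmark_fee({"scoring_rate": 0.2, "experience": 250})
non_scorer = benchmark_fee({"scoring_rate": 0.0, "experience": 100})
print(f"Fee ratio: {scorer / non_scorer:.2f}")
```

The loglinear form is what makes small differences in the value-drivers compound multiplicatively at the top end of the market.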


Although detailed player performance data provides little improvement in comparative player valuations, it does, however, open up the possibility of providing fundamental valuations of players based on an estimation of the incremental revenue gains generated by a player’s contributions on and off the field. Top players are very expensive assets. In any other business there would be an investment appraisal process involving the projection of the future stream of value expected to be generated by the acquired asset relative to the financial costs incurred. While professional sports teams will apply this type of due diligence to stadium and other tangible investments, most have deemed investment in playing talent to be too complex to be amenable to this type of approach. But the American sports economist, Gerald Scully, showed in a paper published in the American Economic Review in 1974 that it is possible to calculate financial values of players based on their playing contributions. Using data from Major League Baseball in 1968 and 1969, Scully developed a two-stage procedure in which he first estimated a regression model of the relationship between batting and pitching metrics (Scully used the slugging average and the strikeout-to-walk ratio) and team win%, and then estimated a second regression model for the relationship between team win% and team revenue. Using these two regression models, Scully could then calculate how much each player contributed to team performance and team revenue.


Of course, to apply Scully’s methodology to the invasion-territorial sports such as the various codes of football, hockey and basketball when player performance is multi-dimensional, you need to develop composite player rating systems to measure a player’s overall contribution. And you also need to build in an estimate of the player’s image value given the importance of media, merchandising and sponsorship revenues. The complexities of developing player rating systems in the invasion-territorial sports will be the subject of several future blogs.


But, to come back to the original question, are Pogba and Stones really worth the money? It is impossible to answer that question fully without knowledge of the total financial obligations involved in both deals including salary costs, transfer fees and agent fees. But it is possible to compare the transfer valuations of both players using my valuation algorithm. What I can say is that if Stones is valued at £47.5m under current market conditions, then the estimated valuation for Pogba on the same basis would be £86.4m. The difference between the two valuations reflects Pogba’s greater experience, his career scoring rate of around one goal every five games (Stones has only scored one league goal), and the greater status of Juventus compared to Everton. At the very top end of the market, even small differences in the value-drivers translate into exponentially large differences financially (which can be captured statistically by using a loglinear valuation algorithm). There is a clear rationality in the comparative transfer valuations of the two players. Only time will tell whether or not the huge transfer fees are justified by the ultimate bottom line in every player transaction whatever the sport, namely, performance on the field.


13th August 2016

The Complexity of Team Cohesion

Executive Summary

  • Team cohesion has been highlighted in a number of studies as a key driver of team performance
  • But it is very difficult to separate out the effects of team cohesion from team quality as well as momentum and feedback effects
  • Crucially the impact of team cohesion on team performance depends on how much time the head coach has been with the team
  • Signing new higher-quality players is a double-edged sword since team quality will rise but, at least initially, team cohesion will fall
  • And changing the head coach will also involve disruption effects particularly when there was a high level of team cohesion under the previous head coach and the inevitable resistance to change


Ben Darwin, the former Australian rugby international, now runs his own sports consultancy, Gain Line Analytics. The main focus of his work is team cohesion, which he measures by his own trademarked metric, the Team Work Index (TWI). Ben has found that TWI accounts for as much as 40% of on-field performance. I had a long Skype call with Ben when he was just starting out as an analyst and found him to be very personable and knowledgeable. His experience in elite team sport gives him a real insight into the dynamics of team building and how to create (and destroy) that critical sporting intangible, team spirit.


I don’t know exactly how Ben defines team cohesion (TWI is his intellectual property) but I am pretty sure that fundamentally it must be a measure of how much time the players on a team have played together – what I would call team shared experience (TSE). The relationship between TSE and team performance has been the subject of several academic studies. One of the first on the subject was published in 2002 by Berman et al., who used basketball data and found a significant link between TSE and team performance in the NBA. Along with my co-author, Andy Lockett, I have just published a study in the Journal of Management Studies using data from the FA Premier League over the ten years 1996 – 2006, and we also found that TSE was a significant driver of team performance.


A significant link between TSE and team performance is no surprise. The difficulty arises in unravelling the multitude of factors influencing team performance. The analytical problem is necessarily a multivariate one with the estimated impact of TSE on team performance crucially affected by how you control for team quality as well as the dynamics and feedback effects. Increased TSE will improve team performance but better team performance can mean higher TSE in the future as teams try to retain a successful squad. The relationship runs both ways. And, of course, the added complication is that the richest teams have the financial power to be better able to recruit and retain top quality players. But how much of their success is down to recruiting the best players and how much is down to building team cohesion between these top players? That in turn raises the question of the role of the coaching staff in integrating a group of individual players both tactically and emotionally. It follows that the coaching input should also be included as a driver of team performance. Modelling all of these possible factors is a very complex analytical problem but crucial to producing insights into team performance that have practical relevance. No wonder that it took Andy and myself nearly ten years to complete our research and get it published in a top academic journal.


The most important finding of our model of team performance in Premiership football, after controlling for team quality using wage costs as well as average team age and career experience, is that it is not player TSE on its own that has the most significant impact on team performance. Rather it is the interaction of player TSE and the length of time that the head coach has spent with the team (i.e. coach TSE). In other words, it is the shared experience of players and coaches together that drives performance. And the effect remains strong even after allowing for the dynamics of team performance across seasons (i.e. momentum effects) as well as the previously discussed feedback effects. There is a complex interaction between player TSE and coach TSE, as shown in Figure 1 below, which uses three different scenarios – low player TSE, moderate player TSE and high player TSE – to illustrate how the impact of an increase in player TSE on team performance changes as player TSE increases and as coach TSE increases. The biggest impact of building team cohesion occurs when teams have relatively low levels of player TSE, and the impact increases the longer that the coach has been with the team.

Figure 1: The impact of an increase in player TSE on team performance at different levels of coach TSE

Our model of team performance captures very well the trade-off facing teams when they recruit new players. Signing players of higher quality will increase team quality but will reduce team cohesion. Player turnover is a double-edged sword when player TSE is so important. And the same goes for changing the head coach which immediately wipes out all of the player-coach TSE. The new head coach will start with zero shared experience with the existing squad. Our model actually shows that the negative disruption effect of a new coach will be highest when the team has a high level of player TSE. A group of players who have been together for a long period may be particularly resistant to the changes introduced by a new coach as well as potentially reacting negatively to the increased uncertainty about their status in the team.
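The interaction at the heart of these findings can be written down in a stylised form. The coefficients below are illustrative placeholders, not our published estimates; the point is how the marginal effect of player TSE varies with coach tenure and with the existing level of player TSE:

```python
# A stylised version of the player TSE x coach TSE interaction.
# All coefficients are illustrative placeholders, not published estimates.
B_PLAYER, B_PLAYER_SQ, B_COACH, B_INTERACT = 0.10, -0.02, 0.05, 0.30

def performance(player_tse, coach_tse):
    return (B_PLAYER * player_tse + B_PLAYER_SQ * player_tse ** 2
            + B_COACH * coach_tse + B_INTERACT * player_tse * coach_tse)

def marginal_effect(player_tse, coach_tse):
    """Effect of a small increase in player TSE: d(performance)/d(player TSE)."""
    return B_PLAYER + 2 * B_PLAYER_SQ * player_tse + B_INTERACT * coach_tse

# Building cohesion pays most at low player TSE and with a long-serving coach;
# a new head coach (coach_tse = 0) wipes out the interaction term entirely
print(marginal_effect(player_tse=0.5, coach_tse=2.0))
print(marginal_effect(player_tse=0.5, coach_tse=0.0))
```

With a negative squared term and a positive interaction term, the sketch reproduces both patterns in Figure 1: diminishing returns to player TSE, and a bigger payoff the longer the coach has been in place.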


6th August 2016