Ranking Teams by Performance Rather than Results: Another Perspective on International Rugby Union Rankings for 2017

Originally Written: January 2018

Executive Summary

• Competitor ranking systems tend to be results-based
• Performance-based ranking systems are more useful for coaches because they provide a diagnostic tool for investigating the relative strengths and weaknesses of their own team/athletes and of opponents
• Performance-based rankings can be calculated using a structured hierarchy in which KPIs are combined into function-based factors and overall performance scores
• A performance-based ranking of international rugby union teams in 2017 suggests that the All Blacks remain significantly ahead of England, mainly due to their more effective running game

Most competitor ranking systems are results-based and use either generic ranking algorithms such as Elo ratings (first developed to rank chess players) or sport-specific algorithms, often developed by the governing bodies. As well as being of general interest to fans and the media, these rating systems can be of real practical significance when used to seed competitors in tournaments. Results-based ranking systems can be very sophisticated mathematically and usually incorporate adjustments for the quality of the opponent as well as home advantage and the status of matches/tournaments. They also tend to include results from both the current season and previous seasons, usually with declining weights so that current results count more heavily. A good example of an official results-based ranking system in team sports is the World Rugby rankings.
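As a concrete illustration of the results-based approach, the core Elo update can be sketched in a few lines. This is the generic chess-style formula (with a conventional K-factor of 32), not World Rugby's own algorithm, which adds opponent-quality weighting, home advantage and match-status adjustments:

```python
def elo_expected(r_a, r_b):
    """Expected score for competitor A against competitor B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=32):
    """Return updated ratings after a match.

    score_a is the actual result for A: 1 for a win, 0.5 for a draw, 0 for a loss.
    Ratings move in proportion to the gap between actual and expected result.
    """
    e_a = elo_expected(r_a, r_b)
    r_a_new = r_a + k * (score_a - e_a)
    r_b_new = r_b + k * ((1 - score_a) - (1 - e_a))
    return r_a_new, r_b_new
```

Note how the update is zero-sum: whatever rating one team gains, the other loses, which is why Elo-style systems measure results rather than the performances behind them.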
From a coaching perspective, results-based ranking systems are of very limited value beyond providing an overall comparison of competitor quality. What coaches really need to know is why their own team/athlete and their opponents are ranked as they are. Opposition analysis is about identifying the strengths and weaknesses of opponents in order to devise a game plan that maximises the opportunities created by opponent weaknesses and minimises the threats from opponent strengths (i.e. SWOT analysis). Opposition SWOT analysis requires a performance-based approach that brings together a set of KPIs covering the various aspects of performance. A performance-based ranking system can provide a very useful diagnostic tool that allows coaches to investigate systematically the relative strengths and weaknesses of their own team/athletes or opponents, and helps inform decisions on which areas to focus on in the more detailed observation-based analysis (i.e. video analysis and/or scouting).
As an example of a performance-based ranking system, I have produced a set of rankings for the 10 Tier 1 teams in international men’s rugby union (i.e. the teams comprising the Six Nations and the Rugby Championship) for 2017. These rankings are based on 36 KPIs calculated for every match involving a Tier 1 team between 1st January 2017 and 31st December 2017. In total the rankings use 118 Tier 1 team performances from 69 matches. The ranking system comprises a three-level structured hierarchy: a bottom-up approach that starts with the 36 KPIs, combines them into five function-based factors and, in turn, combines the factors into an overall performance score.

Blog 18.02 Graphic (Fig 1)

There are several alternative ways of combining the KPIs into function-based factors and an overall performance score. Broadly speaking, the choice is between using expert judgment or statistical methods (as I have discussed in previous posts on player rating systems). In the case of my performance rankings for international rugby union, I have used a statistical technique, factor analysis, to identify five factors based on the degree of correlation between the 36 KPIs. Effectively, factor analysis is a method of data reduction that exploits the common information across variables (as measured by the pairwise correlations). If two KPIs are highly correlated, this suggests that they are essentially providing two measures of the same information and so could be usefully combined into a single metric. Factor analysis extracts the different types of common information from the 36 KPIs and restructures this into a smaller set of independent factors. The five factors can be easily interpreted in tactical/functional terms (with the dominant KPIs indicated in parentheses):
Factor 1: Attack (metres gained, defenders beaten, line breaks, Opp 22 entry rate)
Factor 2: Defence (tackles made, tackle success rate, metres allowed)
Factor 3: Exit Play, Kicking and Errors (Own 22 exit rate, kicks in play, turnovers conceded)
Factor 4: Playing Style (carries, passes, phases per possession)
Factor 5: Discipline (penalties conceded)
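To make the factor-extraction step concrete, here is a minimal sketch of principal-factor extraction from a KPI matrix. The data below are synthetic stand-ins for the actual 118 performances and 36 KPIs: the number of KPIs, the latent structure and the loadings are illustrative assumptions, not the real dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 118 performances x 6 hypothetical KPIs driven by two latent
# "functions" (mimicking how, e.g., attacking KPIs such as metres gained
# and line breaks tend to correlate with each other).
latent = rng.normal(size=(118, 2))
loadings_true = np.array([[1.0, 0.0], [0.9, 0.1], [0.8, 0.0],
                          [0.0, 1.0], [0.1, 0.9], [0.0, 0.8]])
kpis = latent @ loadings_true.T + 0.3 * rng.normal(size=(118, 6))

# Principal-factor extraction: standardise the KPIs, eigendecompose their
# correlation matrix, and keep the components with the largest eigenvalues
# as the factors.
z = (kpis - kpis.mean(axis=0)) / kpis.std(axis=0)
corr = np.corrcoef(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)     # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]           # reorder largest-first
n_factors = 2
top = order[:n_factors]
loadings = eigvecs[:, top] * np.sqrt(eigvals[top])  # KPI-on-factor loadings

# Factor scores per performance: project standardised KPIs onto the factors.
scores = z @ eigvecs[:, top]
```

In the real rankings the equivalent of `scores` would be a 118 x 5 matrix of factor scores, one row per team performance, which is the input to the averaging and rescaling steps described below.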
The factors are calculated for every Tier 1 team performance in 2017, averaged for each Tier 1 team, adjusted for the quality of the opposition, rescaled to 0–100 with a performance score of 50 representing the average performance level of Tier 1 teams in 2017, and normalised so that around 95% of match performances fall in the 30–70 range. The results are reported in Table 1, with the results-based official World Rugby rankings included for comparison. (It should be noted that the official World Rugby rankings cover all the rugby-playing nations, allow for home advantage and include pre-2017 results, but exclude the tests between New Zealand and the British and Irish Lions.)
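The rescaling step can be sketched as a simple linear transformation: standardise the scores and map them to mean 50 and standard deviation 10, so that, under approximate normality, about 95% of performances land within two standard deviations of the mean, i.e. the 30–70 range. The exact transformation used for the published rankings is not specified, so this is just one standard way of achieving the stated properties:

```python
import numpy as np

def to_performance_scale(raw_scores):
    """Rescale raw factor/overall scores so the sample mean is 50 and the
    standard deviation is 10, putting ~95% of (approximately normal)
    performances in the 30-70 range."""
    raw = np.asarray(raw_scores, dtype=float)
    z = (raw - raw.mean()) / raw.std()
    return 50.0 + 10.0 * z
```

Applied to the per-performance factor scores, this yields the 0–100 scale reported in Table 1, with 50 fixed at the 2017 Tier 1 average by construction.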

Blog 18.02 Graphic (Table 1)

Despite the differences in approach between my performance rankings and the official World Rugby rankings, there is a reasonable amount of agreement. Based only on 2017 performances, however, the gap between New Zealand and England remains greater than the official rankings suggest. Also, Ireland rank above England on performance but not in the official rankings, suggesting that Ireland’s narrow win in Dublin in March to deny England consecutive Grand Slams was consistent with the relative performances of the two teams over the whole calendar year.
Of course, the advantage of the performance-based approach is that it can be used to investigate the principal sources of the performance differentials between teams. For example, England rank above New Zealand in three of the five factors (Factors 2, 3 and 5) and lag only slightly behind in another (Factor 4). The performance gap between England and the All Blacks is largely centred on Factor 1, Attack, and principally reflects the much more effective running game of the All Blacks, who averaged 517m gained per game in 2017 (the best Tier 1 game average) compared to a game average of 471m by England (which ranks only 5th best). It should also be noted that the All Blacks had a significantly more demanding schedule in 2017 in terms of opposition quality, with 8 out of 14 of their matches against top-5 teams (with the Lions classified as a top-5 equivalent), whereas England had only 2 of their 10 matches against top-5 opponents.