We went through the Cricinfo commentary for 122 ODI matches played in 2011 and 2012 and identified every ball on which a fielder had an opportunity to bring about a dismissal, whether by taking a catch, effecting a run-out, or making a stumping. We then used the commentary to classify the degree of difficulty of each opportunity (blinder, difficult, normal, absolute dolly), and estimated the probability that the dismissal would be made at each of these difficulty levels.
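For the curious, the tabulation step amounts to something like the following sketch. The records and field names here are made up purely for illustration; the real data came from hand-coding the commentary.

```python
from collections import defaultdict

# Hypothetical hand-coded records, one per identified dismissal opportunity,
# with a difficulty label and whether the chance was taken. The field names
# are assumptions for this sketch, not our actual data layout.
opportunities = [
    {"difficulty": "dolly", "taken": True},
    {"difficulty": "difficult", "taken": False},
    {"difficulty": "blinder", "taken": True},
]

def dismissal_rates(chances):
    """Estimate the probability that a chance of each difficulty is taken."""
    taken = defaultdict(int)
    total = defaultdict(int)
    for c in chances:
        total[c["difficulty"]] += 1
        taken[c["difficulty"]] += int(c["taken"])
    return {d: taken[d] / total[d] for d in total}

print(dismissal_rates(opportunities))
```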
We then used the same analysis that produces the first-innings score predictor in the WASP (see my previous post on that here) to calculate how much the batting team's expected first-innings score increases (or decreases) after each ball. Batters get credit (or discredit) for all of that increase (or decrease), whereas on balls where there is a fielding dismissal opportunity the credit or discredit is shared between the bowler and the fielder, with the fielder getting more of the credit for taking a blinder and more of the penalty for dropping a dolly.
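To make the sharing rule concrete, here is a rough sketch of how the split on a ball with a fielding chance might work. The weights are illustrative assumptions only, not the values used in our analysis; the point is simply that the fielder's share of the credit rises with the difficulty of a taken chance, and the fielder's share of the blame rises with the easiness of a dropped one.

```python
# Illustrative weights only, not the ones used in the actual analysis.
FIELDER_SHARE_IF_TAKEN = {"blinder": 0.8, "difficult": 0.5, "normal": 0.3, "dolly": 0.1}
FIELDER_SHARE_IF_DROPPED = {"blinder": 0.1, "difficult": 0.3, "normal": 0.5, "dolly": 0.8}

def split_credit(delta_expected_score, difficulty, taken):
    """Split the ball's change in the batting side's expected score
    between the bowler and the fielder."""
    share = (FIELDER_SHARE_IF_TAKEN if taken else FIELDER_SHARE_IF_DROPPED)[difficulty]
    fielder = share * delta_expected_score
    bowler = (1 - share) * delta_expected_score
    return bowler, fielder

# A blinder that cuts the batting team's expected score by 12 runs:
print(split_credit(-12.0, "blinder", taken=True))   # roughly: bowler -2.4, fielder -9.6
# A dropped dolly that adds 10 runs to the expected score:
print(split_credit(10.0, "dolly", taken=False))     # roughly: bowler 2.0, fielder 8.0
```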
We then calculated the distribution of these contributions across all batters, bowlers, and fielders in our database. What we found was that a batsman who is one standard deviation above average contributes about 8 runs more to his team than an average batsman; a bowler who is one s.d. above average contributes about 6 runs more (that is, he restricts the opposition's score by about 6 runs more than an average bowler); but a one-s.d.-above-average fielder contributes less than 2 extra runs. Eight runs may not sound like much, but an additional 8 runs can make quite a difference to the chances of a first-innings score being successfully chased. (UPDATE 2: Scoring 8 runs more than par, rather than par, pushes the chance of winning up from 50% to 56%.)
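The per-player figures come from aggregating those ball-by-ball credits and looking at how spread out the totals are across players. Something along these lines, again with a made-up data layout:

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical per-ball credit records from the steps above:
# (player, role, credit in expected runs). In the real analysis there is one
# row per ball (or per shared slice of a ball) in the database.
ball_credits = [
    ("Batter A", "bat", 1.4),
    ("Batter B", "bat", -0.6),
    ("Bowler A", "bowl", -0.9),
    ("Bowler B", "bowl", 0.3),
    ("Fielder A", "field", 0.2),
    ("Fielder B", "field", -0.1),
]

def contribution_spread(records, role):
    """Total each player's credits in the given role, then report the mean
    and standard deviation of those totals across players."""
    totals = defaultdict(float)
    for player, r, credit in records:
        if r == role:
            totals[player] += credit
    values = list(totals.values())
    return mean(values), stdev(values)

for role in ("bat", "bowl", "field"):
    print(role, contribution_spread(ball_credits, role))
```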
We still have some improvements to make to the analysis, but they are only going to further reduce the estimated relative importance of fielding.
There are two main reasons why catches and run-outs are not that important (notwithstanding the recent 2nd ODI between NZ and South Africa, where 5 run-outs tipped the balance in New Zealand's favour). The first is that a lot of the run-outs and catches in ODI games occur near the end of the innings, where their impact on the score is not so great. The second is that most of the opportunities that arise are ones that are (or should be) straightforward for an international cricketer. We all recall moments of fielding brilliance, but those opportunities simply don't come along often enough to make the contribution of a great fielder worth a place in the team for that reason alone.
There are a few caveats for any coaches looking to draw policy conclusions from this.
- We have only looked at dismissal chances. If we were able to get good data on ground fielding, it might make a difference. I suspect not, though.
- We have only looked at ODI cricket. I suspect the role of catching might be greater in test cricket. (I am showing my age here, but I continue to believe that Jeremy Coney should have been in the NZ team in the 1976-78 period, simply to make sure there was someone who could hold on to the slip catches that Richard Hadlee was generating and seeing continually dropped at that time.)
- It may be that fielding is more dependent on coaching and practice than on natural talent, relative to batting and bowling, and so the reason better-than-average fielders are not that much better than average is that coaches have correctly emphasised bringing all fielders up to a minimum standard and have not selected players who don't meet that threshold.
Not arguing with your conclusions, but it has always seemed to me that there are a number of ways to interpret the cliche. The one you've debunked is broadly "it's important to select top quality fielders because they have a large influence on match outcomes". One of many possible other interpretations is "an early dismissal by a non-routine catch, or an early drop of a routine catch is likely to influence the match outcome significantly". I wonder how/if the analysis would change if the data were limited to the first 35 overs of an innings, or to chances where the dismissal would have been (say) the 3rd through 7th of an innings.
Michael,
I agree that you can interpret the cliche in that way. But then it would also be true that "groundsmen win matches", "tosses win matches", "boundaries win matches", "singles win matches", etc. I do think there is something about a brilliant catch or a horrible drop that sticks in the mind more than any single cover drive, or even a seaming jaffa that earns an LBW, leading to the importance of catches being overstated in people's intuition.
I don't think the analysis would change at all if we looked only at the first 35 overs (of the first innings). Yes, there is much more room for variation in the expected score at the start of an innings, so catches count for much more (a wicket is worth about -30 runs on the first ball of an ODI). At the same time, bowlers' contributions to wickets count a lot more in the first 35 overs, as does the value to a batsman of not being dismissed, so restricting to the first 35 overs would just scale up the importance of batting, bowling, and fielding across the board.