Yesterday, I explained how gross domestic product (GDP) figures are actually measured. The 2.5% figure is arrived at by adopting the statistical technique of ‘difference-in-differences (DID) estimation’. “DID is used to estimate the effect of a specific intervention or treatment (such as a passage of law, enactment of a policy) by comparing the changes in outcomes over time between a population that is enrolled in a program (the intervention group) and a population that is not (the control group)” [see technical details here].
On page 9 of his paper, Dr Arvind Subramanian explains this logic in these words: “[h]ere the treatment is the methodology change in India; the treatment period is post-2011.”
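The mechanics of this comparison can be sketched in a few lines of Python. This is a minimal illustration of the classic two-group, two-period DID calculation; all the growth figures below are hypothetical and are not taken from Dr Subramanian’s paper.

```python
# Minimal 2x2 difference-in-differences sketch (illustrative numbers only).
# "Treated" = countries that changed methodology in 2011; "control" = those that did not.

def mean(xs):
    return sum(xs) / len(xs)

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """DID effect = (change in treated group) minus (change in control group)."""
    treated_change = mean(treated_post) - mean(treated_pre)
    control_change = mean(control_post) - mean(control_pre)
    return treated_change - control_change

# Hypothetical average growth rates (%) before and after the 2011 treatment date
treated_pre  = [7.0, 7.2, 6.8]
treated_post = [7.5, 7.6, 7.4]
control_pre  = [4.0, 4.1, 3.9]
control_post = [4.0, 4.2, 3.8]

effect = did_estimate(treated_pre, treated_post, control_pre, control_post)
print(round(effect, 2))  # prints 0.5
```

The logic is that whatever trend is common to both groups cancels out, leaving only the effect attributable to the treatment. That cancellation is exactly what the assumptions discussed below are meant to guarantee.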
However, certain assumptions must be met for the results derived from DID to be reliable. One of these assumptions is that there should be no crossover of members between the intervention group and the control group.
The 2016 survey by the United Nations Statistics Division (UNSD) clearly shows that member countries adopted the 2008 System of National Accounts (SNA) at different points in time. By fixing the treatment time at 2011 for a sample of 70-odd countries, Dr Subramanian’s study inadvertently violates this assumption. For example, the Eurostat countries, Korea and Singapore implemented the 2008 SNA only in 2014; as a result, they straddle both groups, sitting in the control group before their adoption and in the intervention group after it.
Furthermore, since the adoption of the 2008 SNA is still a work in progress, another critical assumption, namely that the composition of the intervention and control groups should be stable over time, is also violated, invalidating his panel results.
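The damage done by such crossover can be shown with a toy calculation (all numbers hypothetical). Suppose the methodology change genuinely adds 0.5 percentage points to measured growth. A late adopter parked in the control group carries the treatment effect into its post-period figures, dragging the DID estimate away from the true effect:

```python
# Illustrative sketch of control-group contamination (hypothetical numbers only).

def mean(xs):
    return sum(xs) / len(xs)

def did(t_pre, t_post, c_pre, c_post):
    return (mean(t_post) - mean(t_pre)) - (mean(c_post) - mean(c_pre))

TRUE_EFFECT = 0.5  # assumed boost (pp) from the methodology change

t_pre,  t_post  = [6.0, 6.0], [6.0 + TRUE_EFFECT, 6.0 + TRUE_EFFECT]
c_pre           = [3.0, 3.0]
clean_c_post    = [3.0, 3.0]
# A late adopter: classified as "control", but its post-period growth
# already includes the treatment effect.
dirty_c_post    = [3.0 + TRUE_EFFECT, 3.0]

print(did(t_pre, t_post, c_pre, clean_c_post))  # prints 0.5 (true effect recovered)
print(did(t_pre, t_post, c_pre, dirty_c_post))  # prints 0.25 (biased downwards)
```

With a clean control group the true effect of 0.5 is recovered; with one contaminated country the estimate falls to 0.25. The direction and size of the bias depend entirely on who crosses over and when, which is precisely why a fixed 2011 treatment date is problematic.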
Thus, my understanding is that Dr Subramanian’s results are biased and unreliable.
The bias in the 2.5% figure has two sources: first, the change in the composition of the intervention and control groups over the study period; and second, the existing bias in growth rates due to outdated base years in many member countries.
In conclusion, what lessons can be drawn from this controversy? The writer believes that the Central Statistical Organisation (CSO) did not cook the numbers and was only complying with the 2008 SNA. But the CSO did fail on many occasions to provide cogent responses to some very pointed questions posed by subject experts.
As a result, the discourse has turned noisy, prompting the chairman of the Economic Advisory Council to the Prime Minister (PMEAC) to defend what should normally be the CSO’s turf. It is advisable that the PMEAC’s response, which should ideally be a white paper, address not just Dr Subramanian’s findings but also the doubts raised by others.
Specifically, the PMEAC must explain its reservations on the use of double deflation or the Samuelson-Swamy theory of index numbers, the issues concerning MCA 21, and so on.
The constraints faced by the CSO also need attention. The ‘outcome budget’ (OB) 2016-17 for the Ministry of Statistics and Programme Implementation (MOSPI) paints a somber picture.
There is a manpower shortage in the Subordinate Statistical Service and the Field Operations Division (FOD), because of which sample surveys have to be conducted through contractual staff, affecting the quality of the data collected. The National Statistical Systems Training Academy, in turn, faces a shortage of faculty members for its training programmes.
Centralised selection creates a mismatch between the languages known to the selected staff and the languages required for statistical surveys in the field.
In the light of these facts, the suggestion that “GDP estimation must be revisited by an independent task force, comprising both national and international experts, with impeccable technical credentials and demonstrable stature” shows a gross disconnect with the practical realities faced by the CSO.
One hopes that the government will address some of these issues and contain the damage already done. That would perhaps be the best possible step towards a final closure.
(The writer is an economist in the banking sector. The views are personal)