This is a guest post by Professor Prabhat Jha and Lukasz Aleksandrowicz of Centre for Global Health Research.
Reliable cause of death (COD) statistics have transformed public health in the last century. These basic data have uncovered links between diseases and risk factors (such as smoking), and are essential to smart allocation of spending and planning of health programs. Whereas high-income countries have near-universal, robust death certification and medically certified COD, such systems are uncommon in low- and middle-income countries.
Most deaths worldwide occur outside the medical system and are consequently invisible in vital statistics. As we point out in this series, global indirect estimates of causes of death must rely on roughly 850 guesses for every one actual death with good cause information drawn from a representative sample of deaths.
Therefore, improving the coverage and quality of COD data is vital. Verbal autopsy (VA) is currently the best and most practical method of obtaining COD statistics in these settings. Its use in mortality research has increased steadily over the last decade, and it is now practicable in large, routine, national mortality surveys. Such surveys need to be representative (ie, a “true snapshot”) so as to capture rural and urban deaths and reflect the whole population accurately.
There is fresh debate over two competing VA methods – assigning the cause of death from the field results either by physicians or by automated algorithms. Physician-coded verbal autopsy (PCVA) has been the default and most widely used method. Computer-coded VA (CCVA) methods, although not yet commonly relied on for COD statistics, have the potential to improve speed and cost, particularly in large-scale surveys.
The relative superiority of these methods is not easily resolved, because verbal autopsy is used in areas with little access to medical care at the time of death. Too many past studies have compared rural unattended deaths against hospital deaths as a “gold standard”. But as shown in our series, in India the use of hospital data can grossly distort national estimates of COD.
Research groups have proposed approximations for a gold-standard diagnosis, such as physician assignment based on lay reporting of signs and symptoms, or hospital-assigned CODs based on clinical tests. Both have limitations, however: a diagnosis made from respondents’ recall, without the support of clinical tests, may be unreliable, while using hospital-based patients reduces external validity, as patients without access to medical services may have different symptomatology and recall of symptoms.
Large, comparative studies of the various assignment methods may be illuminating for relative performance. The challenge we undertook in our series of papers was to produce an updated, comparative look at the various VA coding methods.
We first assessed the performance of the Million Death Study, a large VA survey based on physician coding, against simple and appropriate metrics. These results indicated that population-level assignment by physicians, which was consistent between resamples and followed plausible age and sex patterns, is reliable. It also helped define performance criteria that can be applied to other VA-based national surveys, as now done in Afghanistan, Mozambique and Zambia, among other countries.
We also compared the performance of several CCVA methods against PCVA assignment on five datasets from various countries, together comprising over 24,000 deaths (about twice as many as in the earlier study by Chris Murray, which is also reviewed in this article).
We found that automated methods matched physician coding in individual COD assignment only about half the time. They fared better at reproducing physicians’ population-level proportions of CODs (called cause-specific mortality fractions, or CSMFs). These results were fairly consistent across the automated methods and the five datasets. The major limitation of this comparison is that it does not prove the validity of the physician-assigned CODs used as the reference standard, only that, for a given death, physician- and computer-assigned CODs do not sufficiently agree.
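The distinction between the two levels of comparison can be illustrated with a small sketch. The cause labels and assignments below are purely hypothetical, not data from the study: individual-level agreement is the fraction of deaths for which the two methods assign the same cause, while a CSMF is simply the proportion of all deaths attributed to each cause.

```python
from collections import Counter

# Hypothetical cause-of-death assignments for the same 8 deaths
# (illustrative labels only, not data from the study).
physician = ["cardiac", "cancer", "injury", "cardiac", "infection",
             "cancer", "cardiac", "infection"]
automated = ["cardiac", "infection", "injury", "cancer", "infection",
             "cancer", "infection", "cardiac"]

# Individual-level agreement: share of deaths with the same assigned cause.
agreement = sum(p == a for p, a in zip(physician, automated)) / len(physician)

def csmf(assignments):
    """Cause-specific mortality fractions: proportion of deaths per cause."""
    counts = Counter(assignments)
    total = len(assignments)
    return {cause: n / total for cause, n in counts.items()}

print(agreement)        # 0.5 in this toy example
print(csmf(physician))  # eg cardiac accounts for 3/8 of physician-coded deaths
print(csmf(automated))
```

As the toy numbers suggest, two methods can disagree on many individual deaths while still producing broadly similar population-level CSMFs, which is why both metrics are reported.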
From these results, we draw two conclusions for future VA work. First, and most important, is the need for much larger, national COD surveys in various populations using VA. Second, given the discrepancy between PCVA and CCVA methods, we argue for deferring to the more transparent and trusted assignment method; indeed, physician diagnosis remains the standard for medical care globally. As such, there is little basis for replacing current physician coding with CCVA.
Verbal autopsy will remain the most practical method of obtaining COD statistics in low-income countries for the foreseeable future. We therefore see a collaborative approach between physician and automated methods in future VA work, ideally combining the two to retain the expertise of physician coding while improving its speed and affordability. Automated methods themselves also have room for improvement; the most interesting possibility may be running several automated tools in parallel for more robust assignment.