It is estimated that at least fifty percent of published research is poorly conducted, and possibly poorly reported as well. This makes the research difficult to interpret and use; it is wasteful.
Systematic reviews are considered the gold standard for healthcare decision-making because they synthesize the best available evidence to answer health research questions. Given that healthcare professionals and public health analysts rely on them to make decisions, systematic reviews should be of the highest quality. Their authors should therefore use rigorous methods and avoid biased reporting.
It would be a shame to invest precious time and resources in a systematic review that is ultimately unusable. Worse still would be basing healthcare decisions and policy changes on its results. When it comes to systematic reviews, we simply cannot afford to waste effort.
The good news? Many tools and methodologies have been designed to help authors conduct methodologically sound, well-reported systematic reviews. These guidelines describe not only how to conduct and report a review, but also how to design it.
What tools are available?
The number and variety of tools for assessing the methodological rigor and reporting quality of systematic reviews are growing. But with so many tools available, which one(s) should we use?
Quality of conduct (methodological quality) tools were developed to assess how well a systematic review was designed and conducted, whereas reporting guidelines were designed to help authors report the methodology and findings of their systematic reviews appropriately.
In 1991, the ‘Overview Quality Assessment Questionnaire’ (OQAQ) became the first validated methodological quality tool to be published. It was succeeded in 2007 by ‘A Measurement Tool to Assess Systematic Reviews’ (AMSTAR). In 1999, the first meta-analysis reporting guideline, the ‘Quality of Reporting of Meta-analyses’ (QUOROM) Statement, was published; it was followed a decade later by the ‘Preferred Reporting Items for Systematic Reviews and Meta-Analyses’ (PRISMA) Statement.
Other tools have been developed for use by specific organizations. Cochrane, for example, has created the ‘Methodological Expectations of Cochrane Intervention Reviews’ (MECIR) standards to guide the conduct of Cochrane intervention reviews.
Selecting the right tools
Our recent study in Systematic Reviews found that the criteria used to assess the methodological quality or reporting quality of systematic reviews varied. Eight published tools were used in 80% of the studies, and the authors of the remaining studies created their own criteria. The most widely used tools were PRISMA and AMSTAR.
With the variety of tools available, there is confusion over which should be used to assess quality of conduct and which to assess quality of reporting.
Reporting guidelines such as PRISMA aim to improve the quality of reporting of systematic reviews; they should not, however, be used to gauge the methodological quality of a review. Likewise, we recommend against using quality of conduct tools, such as OQAQ, to assess the quality of reporting: while methodological criteria are important for improving conduct, they do not gauge how well a review is reported.
Other tools, such as ‘Risk of Bias in Systematic Reviews’ (ROBIS), were developed to complement AMSTAR. Risk of bias is distinct from methodological quality: it concerns limitations in the design, conduct, or analysis of research that can distort the findings. Although the two tools overlap somewhat in content, the majority of their criteria are distinct.
With these tools readily available, systematic reviewers miss an opportunity if they do not use them to appropriately design and evaluate their own methods.
In a companion study, we assessed systematic reviews’ compliance with reporting guidelines and quality assessment tools. We found that compliance with methodological quality and reporting criteria was variable: systematic review authors complied well with several items in the guidelines, but compliance with others requires major improvement.
Systematic review authors should keep up to date with the available tools and understand how to comply with their criteria before even beginning a review. To facilitate the use of these tools, funding agencies could endorse and implement the guidelines at the protocol stage, thus helping to improve the quality of both conduct and reporting.
Given all the tools available, it is unfortunate that the methodological quality and reporting of systematic reviews are not better. Efforts should be directed towards improving both wherever possible. Systematic review authors should follow appropriate reporting and methodological quality guidelines when they design and evaluate their research.