Recently I attended the EQUATOR scientific meeting and annual lecture in Paris. The theme of the day was ‘improving reporting to decrease the waste of research’. The focus was not on the familiar, and very important, topic of reporting of randomised controlled trials (RCTs), but on issues that receive less attention – how to report animal studies, observational studies, genetic studies, and prognostic & diagnostic studies.
We know a lot about the problems of poor reporting of RCTs and, quite rightly, put a lot of emphasis on improving their reporting, but we shouldn’t forget about ‘good reporting’ in other study designs either.
The EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network is an international initiative that launched in 2008. Set up to improve the reliability and value of the published health literature, the EQUATOR Network champions transparent and accurate reporting and the wider use of reporting guidelines. There’s lots of information on their website if you’re interested in finding out more about their history, who they are, or what they do.
Although the focus of this year’s meeting was not on RCTs, perhaps the best known of the reporting guidelines is for RCTs. The Consolidated Standards of Reporting Trials (CONSORT) statement, formed of a 25-item checklist and a flow diagram, is an evidence-based minimum set of recommendations for their reporting. Good reporting of clinical trials is vital – they will form the basis of clinical decisions – but more work is needed, as research has shown time and time again that the current quality of reporting is not sufficient. You can read more about the CONSORT group’s most recent meeting on our blog.
The evidence on the quality of reporting of other study designs is far less established. In addition to the CONSORT statement, EQUATOR hosts over 200 reporting guidelines covering different study designs (see table below for some of the more common ones). To name just a few, there are STROBE (for observational studies), PRISMA (for systematic reviews and meta-analyses), STARD (for diagnostic studies), CARE (for case reports), and ARRIVE (for animal studies).
Two common types of study that were covered at the meeting were observational studies and animal studies.
Speaking about animal experiments, Malcolm Macleod of the University of Edinburgh explained the issues and the significant potential for improvement in their design, conduct, analysis, and reporting. He spoke about the CAMARADES collaboration, which is working to address some of these issues by providing a framework for groups involved in systematic reviews and meta-analyses of data from animal research. They hope that by providing a precise and robust overview of existing data, they can help focus further research, thereby minimising unnecessary replication.
Doug Altman, of the University of Oxford, spoke about the need for better reporting in observational studies – where a researcher ‘observes’ what happens rather than trying to affect an outcome, as in cohort or case-control studies. Good reporting is vital, given that a large proportion of medical research is observational. Altman discussed whether STROBE has had an impact on this. The answer? We don’t know – we need empirical evidence.
So, where do we go from here?
What’s clear is that there’s still a problem with reporting across the medical literature and that this reduces the usefulness of research. The key questions now are what can be done to fix this and whose responsibility is it? The second question is perhaps easier to answer – it’s our collective responsibility – authors and reviewers, editors and publishers, and funders.
But what can we do to tackle this? We must all continue to raise awareness of reporting guidelines and increase their use. Authors should make use of them when writing up their research, and reviewers and editors should consult them when assessing and reviewing manuscripts, to ensure completeness of reporting – for all study designs, not just RCTs.
We also need more research, not only to identify the problems associated with reporting somewhat ‘neglected’ study designs, but also to identify interventions which work to improve reporting and are feasible to implement – perhaps through a simplified approach.