In our health care systems, more and more data about nearly everything are generated and collected every second. With the growing demands of documentation and administration, and with the advance of electronic tools and devices, almost all activities leave an electronic trace.
During routine care, physicians order procedures, note diagnoses, prescribe drugs, obtain lab results and record special encounters electronically. Wireless devices provide real-time information on vital signs, including temperature and blood pressure, heart and respiration rate, heart rate variability, or oxygen saturation.
Sometimes, patients volunteer to provide samples of their genome in huge biobanks, or take part in registries collecting detailed and often well-structured data.
In standardized routine follow-up visits, we record information on the outcome of surgery and other interventions, and document the patient’s well-being.
Costs are systematically monitored and controlled for administrative purposes.
It’s hard to believe that in a few years anyone will still use paper to note a patient’s blood pressure or temperature, or to take a medical history. It’s just as hard to believe that in a few years many clinical trials will be conducted without using such big data sets of routinely collected health data.
Using existing data structures and a learning healthcare system
All these data on procedures, diagnoses, treatments, patient characteristics, and costs can be used for clinical trials.
They can help to identify and recruit patients more efficiently, or to measure outcomes of randomized interventions. They may also help us better understand differences between patients who agree to be randomized and those who do not.
Trials can be completely embedded in data structures, such as registries, cohorts, or electronic health record systems, and provide real-world evidence of high validity and maximal generalizability.
We often would not need to spend time and enormous resources on setting up structures and monitoring data collection for a single trial that provides a single answer to a single research question. Instead, real-time evaluation of interventions may become possible: interventions could be continuously assessed in an evolutionary series of randomized experiments, each aiming to find out which intervention works better, moving towards a learning health care system.
Big data to improve trial research processes
Big data may also help improve the design and conduct of trials. For example, algorithms based on machine learning and artificial intelligence techniques could augment data monitoring, detect data accuracy problems, or improve other aspects that are also important for conventional trials not based on routinely collected patient data.
Better understanding of challenges and barriers, limitations and implications
The challenges and barriers, as well as the limitations and implications, of using routinely collected or big data for clinical trials were briefly outlined in a recent review in Trials. We now aim to collect articles reporting specifically on such issues in a brand new thematic series titled ‘Big data for randomized trials’.
We are seeking articles providing practical insights or guidance on the planning, setup, conduct and analysis of trials using big data.
We are interested in articles showing how using big data may make trials more feasible. We would like to better understand novel approaches, for example, how we can implement randomization directly into care systems to evaluate public health interventions, treatments, or alternative care strategies. Topics related to ethical implications, costs, or methodological aspects are very welcome.
Probably the biggest issue when using data that were not collected for research purposes is their accuracy. Where do we stand here? Can we offer researchers guidance on which data source is best used for which purpose?
Submit your work!
Overall, we are seeking articles providing practical insights or guidance on the planning, setup, conduct, and analysis of trials using big data (or routinely collected data; we make no specific distinction here). We are highly interested in empirical research on research studies.
Commentaries and opinion pieces are welcome as well, especially those explicitly addressing practical challenges and, ideally, offering solutions.
All articles should make clear the relevance of using big data for randomized trials and current research gaps, and they should describe how the results or opinions in the article can be used to improve trial design and conduct in the future.
You can find more information about this thematic series and the submission process via our call for papers page here: https://www.biomedcentral.com/collections/bigdatatrials