It is likely no surprise that misinformation and disinformation permeate the communication space around public health events. The COVID-19 pandemic has been no exception, and in fact, has highlighted how misinformation and disinformation can damage public health response efforts. In particular, damage to trust, heightened confusion, and intensified discord can widen the gap between effective public health interventions and public willingness to support them.
Misinformation about disease outbreaks spreads rapidly in information voids, periods when little verified information exists to ground the public in the reality of the current situation. While we are learning more every day, there are many unknowns about COVID-19. As a result, the continued lack of definitive information on the virus and its spread has created an opportunity for misinformation and disinformation to take root.
Recent research published by our team takes a close look at misinformation during October 2014, when cases of Ebola were identified in the US. We sought to understand misinformation in the context of an emerging, fear-inducing disease in order to draw lessons for future outbreaks or epidemics that might similarly frighten the public. Little did we know that we would be publishing this research in the midst of one of the largest and most severe pandemics in recent history. The lessons we learned from our analysis of Ebola misinformation have direct application to the COVID-19 pandemic today.
First, we learned that misinformation can come in many forms, making it harder for people to determine what information is correct. While about 5% of the tweets we reviewed were false, another 5% were partially true or subtle misinterpretations of true information. While it may seem simple to label information as either true or false, we found making this distinction to be incredibly nuanced. In the process of hand-coding thousands of tweets, even trained coders with public health expertise sometimes found that distinguishing between true and misleading information was challenging and required the combined judgement of several people. Many people may not be able to take the time to carefully and objectively verify nuanced half-truths, instead making judgements based on their own knowledge, experiences, and worldview.
Second, we found a strong tie between misinformation and political content. We had initially, and perhaps naively, thought that misinformation during a health event like an Ebola outbreak would primarily be framed in a health context. Instead, we observed great overlap in misinformation, political tweets, and tweets that seemed designed to induce societal discord. Additionally, both the 2014 Ebola outbreak and COVID-19 have occurred during contentious US election years, and there are indications that COVID-19 misinformation has been politicized. An important lesson emerging from this is that public health events, response activities, and communication efforts, while often apolitical in their inception, can be vehicles for messages that serve goals beyond ensuring the health of the public.
Finally, we found several different misinformation types, or tropes. Government conspiracies, false rumors of concerning viral characteristics (e.g., airborne transmission of the Ebola virus), and fake cures were the most frequent types. Today, we see these same rumor tropes appearing during the COVID-19 pandemic. In our sample of Ebola-related tweets, some rumors were refuted, but many were not, casting doubt on the ability of social media to self-correct effectively. More research must be done to determine the best ways to protect the public from these specific types of rumors and misinformation.
Developing solutions to reduce the spread of misinformation and disinformation from diverse sources, including public figures, will not be easy. Further research is needed to understand how best to reach populations resistant to traditional public health messaging and health authorities. Social media platforms must take essential steps to control the spread of misinformation, but the suggestion that technology companies should simply stop the dissemination of often nuanced and subtle misinformation overlooks the role of many other important stakeholders. It also overlooks our own need to take a hard look in the mirror and improve the ways that we, as members of both the public and the scientific community, consume and provide information to others.