The COVID-19 pandemic is precisely the time when we need factual health information communicated with clarity and compassion. Our fears, anxieties, and sense of urgency have led nearly all of us to search for answers. The Internet and social media platforms could be useful vehicles for quickly and efficiently getting accurate information out to the public.
Instead, the opposite seems to have happened.
In April 2020 alone, sources of health misinformation drew an estimated 460 million views on Facebook, according to Avaaz, an advocacy organization that published a report on the scale of health misinformation on the platform. The false claim that the American Medical Association (AMA) and hospitals were encouraging doctors to overcount COVID-19 deaths was viewed an estimated 160 million times.
The amount of misinformation on the Internet isn’t exactly news to anyone. It’s common knowledge that plenty of false or misleading information is posted on the web and social media platforms, sometimes deliberately and sometimes innocently. With so much misinformation, it’s becoming harder to discern fact from fiction. As Max Read wrote in a story about fakery on the internet, there’s an “uncanny sense that what you encounter online is not ‘real’ but is also undeniably not ‘fake,’ and indeed may be both at once, or in succession, as you turn it over in your head.”
If so many of us know to question whether something is true or false, why does so much misinformation continue to spread? The answer is a complicated mix of technical chicanery and the human tendency to share information that evokes a strong emotional response. An MIT study found that the nature of social media – including eye-catching headlines and the ability to “like” or share a story with a single click – leads people to spread misinformation without stopping to think about its veracity. This behavior, coupled with networks of “superspreaders” of health misinformation, has created a perfect storm that allows the problem to grow. Credible health information sources such as the World Health Organization (WHO) and the Centers for Disease Control and Prevention (CDC) were getting only about a quarter as many views in April 2020 as the top misinformation sites.
“Facebook’s algorithm rewards and encourages engagement with content that provokes strong emotions, which is exactly the kind of content we warn patients to doubt and carefully assess, since false information is often packaged as novel and sensational.”
How to stop the spread of misinformation
The MIT study revealed our inclination to quickly share information before we stop to think about whether it’s true. But the study also revealed a relatively simple solution: stop and think before you click to share. Just a little nudge reminding people to think critically made a difference. When study participants were first asked to judge the accuracy of a non-COVID-19 headline and then shown a group of COVID-19 headlines, they were three times less likely to share misinformation than the group that wasn’t prompted to think first.
What should you look for when trying to determine whether you’re reading fact, fiction, opinion, or an outright falsehood? It may not be practical to do all the sleuthing yourself, and luckily, you don’t have to. There are a number of reputable fact-checking websites that research the thousands of claims circulating on the Internet. One of the best-known fact-checking sites, Snopes.com, received roughly 10,000 COVID-19-related questions during the last two weeks of March alone. It’s hard for these sites to keep up with the sheer volume of claims to investigate, so making some common-sense strategies part of your own Internet routine helps:
Look for red flags.
- Is the story intended to create a strong emotional reaction, or to fit a particular point of view? MIT published research in 2018 showing that false news spreads faster on Twitter than true stories. False news typically uses “ragebait” to elicit fear, disgust, and surprise – emotions that prompt quick reactions.
- Some sites produce a confusing mix of legitimate and false stories, blurring the lines between real and fake. Once you start paying attention, you’ll be able to spot twisted facts, omissions of facts, or opinions presented as facts.
- Photos, videos, and infographics are easily manipulated, so don’t always believe what you’re seeing. You can try using Google Reverse Image Search to see where else images have appeared.
- The general rule to follow: if a story grabs your attention with a strong emotional reaction, slow down and take a closer look.
Consider the source.
This is one you should have learned in school. Use reputable sources. Check that dates are current.
- If you find a story on a website you’ve never heard of, or a site that promotes a particular point of view, try finding the same information on other sites. You may be surprised to see a range of websites putting a slightly different slant on the same basic information. If you find more reliable news sources are covering the same story, chances are better that professional standards of research have been followed.
- Follow the hyperlinks within a story to see what original resources were used – do you trust those sources?
- For stories posted on social media, go to the original source, or check out what else the author or group has published. Some information on social media simply isn’t sourced at all, or is dubiously sourced. Stop right there. Don’t share anything that doesn’t pass the smell test.
Ask the experts.
We’ve mentioned sites like Snopes.com, and there are many others.
- The International Fact-Checking Network (IFCN) at the Poynter Institute has a large coronavirus facts database that can be searched by keyword, country, or organization. Articles on the site also debunk false claims, such as the suggestion that oximeters used to gauge a patient’s oxygen levels can also gather personal data such as fingerprints.
- FactCheck.org offers coronavirus coverage including stories, videos, and conspiracy theories it has researched. It also lists sources of reliable coronavirus information such as the CDC, NIH, WHO, Johns Hopkins University, and more.
- The New York Times’ coronavirus tracking pages include maps and information about vaccines and treatments.
- Your local library. Librarians are trained to search reputable resources, and you don’t even need a computer – just call or stop by to ask your question.
Social media platforms need to step up their game to root out misinformation
Internet users need to exercise judgment to sort the hucksters from the truth. That has always been true, no matter what topic you’re looking into online. But social media platforms can and should do more to correct misinformation. We’re starting to see some action, such as when Instagram placed a “false information” screen over a July 29 post from the singer Madonna claiming that hydroxychloroquine was the cure for coronavirus, along with links to fact-checking sources that provide accurate information. But more needs to be done.
According to the Avaaz report, Facebook flags just a tiny fraction of the COVID-19 misinformation the organization found on the platform. If Facebook robustly reached out to users who have seen misinformation and provided them with fact-checked corrections, trust and confidence could be restored.
The Avaaz report also highlighted the way Facebook’s algorithms currently promote posts on the basis of how often they’re seen, without regard to whether the information is true. Organizations touting conspiracy theories and outright falsehoods have been able to take advantage of this system, using sensational headlines and other (often questionable) techniques to inflate their view counts until a post “goes viral.” Viral posts, by their very nature, keep getting shared – unless readers remember to stop and think before they click.