Half of debunked online disinformation targeting three prominent scientists remains live and unlabelled.
By Dr. Brian D. Owens
Anesthesiology, Critical Care Medicine
Virginia Mason Medical Center
Introduction
Social-media sites such as Facebook and Twitter are not doing enough to tackle online abuse and disinformation targeted at scientists, suggests a study by international campaign group Avaaz.
The analysis, published on 19 January, looked at disinformation posted about three high-profile scientists. It found that although all of the posts had been debunked by fact-checkers, online platforms had taken no action to address half of them.
“Two years into the pandemic, even though they have made important policy changes, the platforms, and Facebook in particular, are still failing to take significant action,” says Luca Nicotra, a campaign director for Avaaz who is based in Madrid.
Scientists under Attack
Online threats aimed at scientists have become a major problem during the COVID-19 pandemic. A survey by Nature last year found that many scientists who had spoken publicly about the disease had experienced attacks on their credibility or reputation, or had been threatened with violence. Some 15% had received death threats.
Nicotra and his colleagues looked at pandemic-related disinformation targeting three prominent scientists: Anthony Fauci, head of the US National Institute of Allergy and Infectious Diseases in Bethesda, Maryland; German virologist Christian Drosten; and Belgian virologist Marc Van Ranst. They checked posts across five social-media sites — Facebook, YouTube, Twitter, Instagram and Telegram.
Between January and June 2021, the authors identified 85 posts across the platforms that contained disinformation targeting the scientists and their institutions, and that had been debunked by several fact-checking organizations. By late July 2021, when the study concluded, 49% of the posts were still live and had not been removed or labelled with a warning about the fact-checkers’ findings. The posts had collectively racked up nearly 1.9 million interactions.
The failure to label debunked disinformation is a problem, Nicotra says, because unlabelled posts attract much more engagement than labelled ones. Labelling is a “very effective strategy” for fighting disinformation, he adds, “especially if users who have previously interacted with the content are also informed”.
Much of the Avaaz report focuses on Facebook, both because the platform’s size allows for better statistical analysis and because the other sites generally don’t provide access to the necessary data and tools.
“We know enough to say the same problem exists on the others, and it might even be worse,” says Nicotra. “But the lack of transparency makes our job more difficult.”
Problematic Posts
A spokesperson for Meta, the parent company of Facebook and Instagram, which is based in Menlo Park, California, says that the company has strict rules on misinformation about COVID-19 and vaccines, and does not allow death threats against anyone on the platforms. It has “removed more than 24 million pieces of content for violating those policies since the pandemic began, including content mentioned in this report”, the spokesperson says. “We’ve also added warning labels to more than 195 million pieces of additional COVID-19 content which don’t violate our policies but are still problematic. We will continue to take action against any of the content that breaks our rules.”
But Nicotra says that the platforms are still missing large numbers of problematic posts, especially outside the United States and Europe, and in languages other than English. In 2020, Facebook devoted just 13% of its budget for developing misinformation-detection algorithms to regions outside the United States, according to documents released by whistle-blower Frances Haugen, a former product manager for the company.
Another problem is that the algorithms that govern social media are designed to keep people engaged, and so tend to highlight content that is controversial or emotionally charged, says Nicotra. He says that new regulations, such as the European Union’s Digital Services Act — which requires companies to assess and act to reduce the risk of harm to society from their products — could force changes to the algorithms.
No Silver Bullet
“These are underlying problems with social-media platforms that we now see crop up with COVID, and with other crises they will potentially emerge again,” says Heidi Tworek, a historian who studies health communications at the University of British Columbia in Vancouver, Canada.
Although tweaks to algorithms and better enforcement of the companies’ own terms of service will help, Tworek says, there is no silver bullet that will solve the problems of online harassment and misinformation.
Some organizations have started working on ways to support scientists facing online harassment. In December 2021, the Australian Science Media Centre in Adelaide held a webinar that provided practical advice to scientists on how to protect themselves, including how to control privacy settings, and where and how to report abuse. The webinar also highlighted the need for institutions to provide support. “It’s an area that’s often been ignored, but they do have a responsibility of care to their employees,” says Lyndal Byford, the centre’s director of news and partnerships. The UK Science Media Centre (SMC) is planning to run a similar event on 24 February.
Fiona Fox, chief executive of the SMC in London, hopes efforts such as this will help researchers to feel safer talking about their work in public. “We can’t let this stop scientists from engaging with the media,” she says. “The public interest lies in good scientific communication.”
Originally published by Nature, 28 January 2022, DOI: https://doi.org/10.1038/d41586-022-00207-2, under the terms of a Creative Commons Attribution 2.0 Generic license.