The incorrect belief that the MMR vaccine is linked to autism came from an academic paper published in 1998 (and later retracted), while widespread, unevidenced beliefs about the harms of water fluoridation were spread by the print media, campaign groups and word of mouth.
Misleading information also exists today, but what has changed is the speed at which false data spreads and the large number of people who may end up reading it.
Unfortunately, the internet is increasingly full of false information and dubious news, and when it comes to understanding science and making health decisions, this can have life-or-death consequences. For example, people discouraged from getting vaccinated by misleading information online have ended up in hospital or even died.
However, removing information completely can feel a lot like censorship, especially for scientists whose careers are built on the understanding that facts can and should be questioned and that evidence changes.
That is why the Royal Society of London for Improving Natural Knowledge (or simply, the Royal Society), the oldest continuously operating scientific institution in the world according to the Encyclopedia Britannica, has given its point of view on the subject, trying to deal with the challenges posed by new ways of communicating information.
In a new report, the institution discourages social media companies from removing content that is “legal but harmful.” Instead, the report’s authors believe social networks should adjust their algorithms to prevent such content from going viral and to stop people from making money off false claims.
But not everyone agrees with that view, especially researchers who are experts at tracking how misinformation spreads online and how it harms people, such as the Center for Countering Digital Hate (CCDH). The organization maintains that there are cases, such as when content is highly harmful, clearly false and widely shared, in which removal is the better option.
An example of this, according to the CCDH, is Plandemic, a video that went viral at the beginning of the pandemic. It made dangerous and false claims designed to scare people away from effective ways of reducing the harm of the virus, such as vaccination and wearing masks, and was eventually removed from the platforms. When a sequel, Plandemic 2, later came out, social networks were better prepared: it was restricted and never achieved the reach of the first.
As a result, the Royal Society says: “Removing content can exacerbate feelings of mistrust and be exploited by others to promote misinformation content.” This “can do more harm than good by driving misinformation content… into harder-to-reach corners of the internet.”
But the fact that those corners are “harder to reach” is part of the point. If false information is pushed into the hidden corners of the internet, there is less risk that someone who is not already committed to potentially harmful beliefs, and who is not seeking them out, will stumble across them by chance.
Some of the conspiracy-driven protests originated not in the dark, obscure corners of the internet, but on Facebook. And there is little clear evidence that removing content drives people towards more harmful beliefs.
So, as the authors of the Royal Society report suggest, one way to tackle this problem is by making fake content harder to find and share, and less likely to automatically appear in someone’s feed.
That way, you “guarantee that people can still speak their minds”; they just aren’t guaranteed an audience of millions, explained Professor Gina Neff, a social scientist at the Oxford Internet Institute. “They can still post this information, but the platforms don’t have to make it go viral,” she added.
On the other hand, the Institute for Strategic Dialogue (ISD), a think tank that monitors extremism, notes that a substantial proportion of misinformation is based on the appropriation and misuse of genuine data and research.
“This is sometimes more dangerous than completely false information, because it can take much longer to disprove by explaining how and why it is a misinterpretation or misuse of the data,” the ISD spokesperson says. That’s where fact-checking, another tool supported by the Royal Society, comes into play.
According to the ISD, studies have shown that a small group of accounts spreading misinformation have had a “disproportionate influence on public debate on social media.” “Many of these accounts have been flagged by fact checkers as sharing false or misleading content on multiple occasions, but are still active,” the organization added.
Many disinformation experts see deplatforming as an important tool, and research on ISIS and the far right suggests it can be successful. An example is the case of David Icke, a prolific spreader of misinformation about Covid and anti-Semitic conspiracy theories.
After he was removed from YouTube, CCDH research found that his ability to reach people was greatly reduced. His videos remained on the alternative video platform BitChute, but their views fell from an average of 150,000 to 6,711 following the YouTube ban. On YouTube, 64 of his videos had been viewed 9.6 million times.
Professor Martin Innes, one of the authors of a study called Deplatforming disinformation: conspiracy theories and their control, says that “part of the problem is that current models of deplatforming need to be developed further. It’s not enough to simply remove a piece of content or a small number of accounts.”
Innes is referring to deplatforming, a method that attempts to shut out a group or individual by removing the platforms (such as speaking venues or websites) they use to share information or ideas.
Research on organized crime and counter-terrorism shows the need to disrupt the entire network, the professor added. However, he believes that “this level of sophistication is not yet built in” to how we deal with disinformation capable of endangering people.