Thursday, January 27

This is how easy it is for denialists to evade social networks’ anti-hoax controls

Governments are debating how to close public spaces to coronavirus and vaccine denialists. On social networks, this process began almost as soon as the pandemic spread across the world, with the aim of stopping the circulation of hoaxes about false cures, the supposed non-existence or mildness of COVID-19, or the alleged danger of the vaccine. The platforms rely on automatic detection algorithms whose mission is to analyze the global conversation and block misinformation or reduce the visibility of potentially dangerous comments. Two years later, however, it is still relatively easy to outwit these systems.

A new study by the NGO EU DisinfoLab, seen by this outlet, documents how a simple word-camouflage technique is enough for denialists to bypass the controls of Instagram, Facebook, Twitter, and YouTube, at least in Spanish. It consists of altering keywords such as “vaccine”, “COVID-19”, or “pandemic” just enough that they remain understandable to other users: minor modifications that nevertheless make them undetectable by the algorithms.

In this way, writing “b4kun4”, “v@kN4”, or “nacuva” instead of “vacuna” (vaccine); “k0B1T” or “C(o(v(i(d” instead of COVID-19; or “pl@πd€m1∆” or “@#plan#demia” instead of the denialist term “plandemia” is enough for these comments to slip under the radar, according to the analysis by the NGO, which specializes in tracking how disinformation spreads on digital platforms.
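The evasion described above can be illustrated with a minimal sketch. This is a hypothetical toy filter, not any platform’s real moderation system: a naive keyword blocklist misses the leetspeak variants, while even a simple normalization step (stripping separators and undoing common character substitutions) catches some of them.

```python
import re

# Illustrative blocklist of Spanish keywords targeted by the camouflage.
BLOCKLIST = {"vacuna", "covid", "plandemia"}

# Undo a small, illustrative subset of common character substitutions.
LEET_MAP = str.maketrans({"4": "a", "0": "o", "1": "i", "3": "e", "@": "a", "€": "e"})

def naive_flag(text: str) -> bool:
    """Flag a comment only if it contains an exact blocklisted word."""
    words = re.findall(r"\w+", text.lower())
    return any(w in BLOCKLIST for w in words)

def normalized_flag(text: str) -> bool:
    """Strip separator characters and undo substitutions before matching."""
    cleaned = re.sub(r"[^\w@€]+", "", text.lower()).translate(LEET_MAP)
    return any(term in cleaned for term in BLOCKLIST)

print(naive_flag("la vacuna es peligrosa"))       # True: exact match is caught
print(naive_flag("la v4cun4 es peligrosa"))       # False: camouflage evades it
print(normalized_flag("la v4cun4 es peligrosa"))  # True: normalization recovers it
print(normalized_flag("C(o(v(i(d"))               # True: separators stripped
```

Real moderation systems are far more sophisticated, but the sketch shows why the arms race favors the evaders: each new substitution scheme requires the filter’s normalization rules to be extended after the fact.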

The technique is not widespread, EU DisinfoLab notes, just as the denialists themselves are not. But it lets them keep occupying digital space and trying to spread hoaxes among other users. “The most worrying thing is that malicious actors are one step ahead and have already developed strategies to escape the platforms’ content moderation systems,” the NGO explains.

The organization’s study also shows that the content the networks recommend below such posts, as in the case of Facebook, includes videos of alleged doctors questioning the effectiveness of the health measures taken to control the virus.

On other occasions, the researchers found that the algorithms struggle to detect camouflaged keywords inserted into posts through graphics in attached videos or text overlaid on images. This has happened on Instagram.

The growth of health misinformation in parallel with the spread of the pandemic worries even the World Health Organization, which has called it an “infodemic”. The WHO acknowledges that hoaxes can directly affect public health, whether by causing personal harm through false cures involving dangerous products or by fueling doubts about the vaccine.

In its analysis, EU DisinfoLab stresses that “content moderation by Facebook, Instagram, Twitter, and YouTube sometimes seems haphazard, since the application of platform policies has flaws that let a great deal of misinformation through.” “According to our observations, many of the posts that contain camouflaged words within a disinformation message go completely unnoticed,” it adds.

The opacity of the networks regarding their moderation systems does not help. As the NGO recalls, they have no common rules on which content they tolerate, which they block, and which they penalize with reduced visibility. Sometimes these rules are not fully public, nor are the characteristics of the human teams that are supposed to oversee the algorithms’ work. Twitter, Meta, and YouTube have repeatedly refused to give this outlet details about those teams, and they do not answer questions about where the teams are located or how familiar they are with the Spanish socio-cultural context.

This outlet has contacted the networks mentioned in the EU DisinfoLab study. Meta (Facebook and Instagram) and Twitter had not responded by the time of publication. YouTube points to its transparency report on the videos it removes each quarter and the reasons for doing so: from July to September of this year, the platform removed 6.2 million videos globally (31,000 in Spain), the vast majority of them (5.9 million) taken down automatically. EU DisinfoLab tells this outlet that YouTube is the network where this word-camouflage practice is least prevalent.