Friday, February 3

YouTube warns that the new EU law could hamper its fight against hoaxes


The EU’s Digital Services Act has entered the final phase of negotiations. It is a package of measures with which the bloc aims to tackle the problems associated with large platforms, such as the harmful consequences their algorithms can have or the absence of common, clear rules for the moderation and removal of content. These measures, although well-intentioned, could hinder the battle against disinformation, according to a YouTube statement published this Wednesday to which elDiario.es has had access.


“As currently drafted, the Digital Services Act could require us to carry out a systematic and detailed assessment before launching any product or service. And that is something that could slow down our work protecting the YouTube community from urgent and complex threats, such as the misinformation associated with Covid-19,” says Neal Mohan, YouTube’s Chief Product Officer.

“Every time a new disinformation threat arises, we must act quickly,” Mohan stresses. “We are aware that, when it comes to combating disinformation, none of the parties to these negotiations wants to lengthen response times; however, there is a real risk of that happening if this proposal goes ahead.”

The new law proposes to increase the transparency of content moderation, forcing platforms to explain themselves to users not only when they delete their content but also when they reduce its visibility. The latter is one of YouTube’s most common strategies for curbing disinformation without trampling on freedom of expression.

There is a large number of videos that YouTube classifies as “potentially harmful” because they push the limits of its rules. However, the company does not consider them to violate those rules outright, so it does not delete them. It calls them “borderline videos,” and when it detects them, its policy is to limit their exposure: removing them from recommendation lists, cutting their appearance among related videos, or preventing the subscribers of the channel that publishes them from being notified of their release.

Under the new European rules, YouTube would have to inform the authors of those borderline videos of how their visibility has been penalized, when, and why. That information would spare frustration for the millions of users who participate in or earn a living on the platform, but it could also amount to an instruction manual on how to evade its defenses for those seeking to spread disinformation.

YouTube’s problem isn’t just YouTube

Clearly harmful videos are the ones that cause YouTube the fewest problems, since it deletes them automatically for violating its rules. According to the company’s data, 92% of the videos deleted in the last quarter of 2021 (more than 3.7 million) were identified by its algorithms, not by people. Of those, 34% received no views at all; another 40% had between one and ten.

Borderline videos are another story. According to an Oxford study, this type of content opened the great disinformation gap during the first wave of the pandemic: the researchers found that although YouTube’s algorithms prevented disinformation videos from going viral on the platform itself, they could do nothing when users shared them on other platforms. Posted on Facebook, they became perfect ammunition for spreading hoaxes, false cures, denialism, and far-right conspiracy theories about the pandemic.

Mohan singled out this cross-platform problem as one of the issues YouTube is working on in 2022. “Even if we do not recommend a doubtful video, it can receive views through other websites that link to or embed a YouTube video,” he acknowledged in a post summarizing his goals for the year. As a solution, the platform is considering “deactivating the share button or breaking the link” for such content, which would prevent it from being shared or embedded on other web pages. Its doubt, however, is whether that practice is compatible with freedom of expression.

Care must be taken to limit the spread of potentially damaging misinformation while leaving room for debate

Neal Mohan
YouTube Chief Product Officer

“Sharing a link is an active choice a person can make, as opposed to a more passive action like watching a recommended video,” he argues. “Context is also important: doubtful videos included in a research study or news story may require exceptions or different treatment. Care must be taken to limit the spread of potentially damaging misinformation while leaving room for debate and education on sensitive and controversial issues,” Mohan explains.

This policy has led specialists in tracking disinformation to take aim at YouTube. In an open letter published in February, more than 80 fact-checking organizations denounced that the company “makes no real effort” to “enforce policies that address the problem.” “YouTube allows unscrupulous actors to use its platform as a weapon to manipulate and exploit other people, and to organize and raise funds,” they asserted.

In Wednesday’s statement, Mohan notes that during 2020 YouTube updated its coronavirus disinformation policy ten times, changing the conditions not only for deleting videos but also for hiding borderline content. In his view, the new EU Digital Services Act being negotiated in Brussels could hinder that fight.



www.eldiario.es