Mercadona has decided to end the proceedings opened by the Spanish Data Protection Agency (AEPD) by paying the 2.5-million-euro fine proposed by that body in relation to the pilot project it tested for several months in 48 of the company's 1,640 stores, company sources told Europa Press.
The system, operating with the corresponding judicial authorization and after scientific validation, applied a first technological filter and a second visual verification to establish whether an identified person was subject to an in-force order barring them from the establishment. It then notified the Security Forces, who are responsible for enforcing that measure.
As the company explains, no information about anyone else was stored: all data was deleted in its entirety within 0.3 seconds, the duration of the whole process (roughly the time of a blink), which made it impossible to identify people not subject to a judicial ban on entering the establishment.
For the design and implementation of this measure, which sought to reinforce the security of both store staff and customers, the company maintained close contact with the relevant authorities from the outset. It also shared all the procedures of its Early Detection System with the AEPD before starting the test.
The company likewise states that “the strictest standards of transparency” were applied, with information campaigns both in the media and via posters in the 48 supermarkets. Furthermore, in each and every case it had prior judicial authorization, backed by more than thirty-seven final judgments with in-force orders barring individuals from the establishment that authorized the use of this technology.
Despite all this, and given the lack of clarity and the legal doubts revealed so far in the proceedings concerning this technology, Mercadona considers that “the most responsible and rigorous thing to do now is to terminate this pilot test.”
Facial recognition, one of the most hyped advances in the field of artificial intelligence, is on shaky ground. Numerous scientific and civil society groups have questioned its use because of the ethical doubts it raises, since some studies have shown that it performs significantly worse on women and racialized people, increasing false positives and false negatives. The main companies developing it, such as Amazon, IBM and Microsoft, have announced that they will stop researching in this field and stop selling facial recognition systems to police and law enforcement agencies until national parliaments establish clear rules on their use.