Members of the European Parliament (MEPs) have warned about the biases introduced by artificial intelligence (AI) algorithms, especially when they are used by police forces for mass surveillance.
MEPs insist that final decisions must always be made by humans and emphasize that those affected must be able to appeal such decisions.
The resolution, adopted with 377 votes in favor, warns that “many algorithm-based identification technologies make more mistakes when identifying and classifying racialized people or people belonging to certain ethnic communities, LGBTI people, children and the elderly, and also women”.
The resolution is not binding, but it sends a strong signal about how Parliament is likely to vote in the upcoming negotiations on the AI Act.
The European Commission’s proposed law restricts the use of remote biometric identification (including facial recognition technology) in public places unless it serves to fight “serious” crimes, such as kidnapping and terrorism.
“We call for a moratorium on the deployment of facial recognition systems for law enforcement, as the technology has proven ineffective and often leads to discriminatory results. We oppose AI-based behavior prediction techniques, as well as the processing of biometric data for mass surveillance,” said MEP Petar Vitanov.