Saturday, October 16

The UN warns of the spread of dangerous artificial intelligence systems and calls for a halt to their use


“The ability of artificial intelligence to serve the population is undeniable, but so is its ability to contribute to large-scale human rights violations in an almost undetectable way,” warns the United Nations High Commissioner for Human Rights, Michelle Bachelet. Her Office published a report this week warning that states are not paying enough attention to the spread of this technology despite the serious risks it entails. For this reason, it calls for a moratorium on the sale and use of some of these systems, such as emotion recognition, biometric identification, or those that require massive collection of personal data.


The UN cites numerous studies showing that AI reflects the biases of its programmers and can make classist, sexist or racist decisions. It warns, however, that governments are incorporating artificial intelligence into their security, judicial and social benefit systems without adequate safeguards.

In addition to calling for a halt to the systems most dangerous to human rights, the report includes a series of recommendations to states. Among them is the duty to “dramatically increase the transparency of their use of AI, including by adequately informing the public and affected people and allowing independent, external auditing of automated systems.”

The report collects examples from around the world and its recommendations are global. Spain, however, is no exception to the trend documented by the United Nations. The most widespread use is biometric identification: station and airport security services use facial recognition and body-shape algorithms to identify suspects. This year Renfe went so far as to put out a tender for a system capable of detecting travelers’ ethnicity, type of clothing or mood, although it was forced to cancel it because of the resulting controversy.

Biometric technologies, which are becoming a wild card for states and companies, urgently need more human rights guidelines

The trend goes beyond public security forces: a growing number of private companies are exploring the possibilities of this technology. Mercadona put into operation a facial recognition system that analyzed every customer entering its stores in order to detect the handful of people subject to court-imposed restraining orders (a total of 37 people in all of Spain). The Data Protection Agency fined Juan Roig’s company 2.5 million euros after ruling last July that the system was “disproportionate.” Faced with legal doubts about the mechanism, the company decided to abandon the project.

Biometric recognition is one of the areas of greatest concern to the United Nations, and one for which it has asked that the deployment of artificial intelligence be halted “at least until the responsible authorities can demonstrate compliance with privacy and data protection regulations and the absence of significant accuracy problems and discriminatory impacts.” “Biometric technologies, which are becoming a wild card for states, international organizations and technology companies, urgently need more human rights guidelines,” it adds.

The EU is also negotiating a moratorium like the one requested by the United Nations for biometric recognition systems. However, the draft presented by Brussels adds an exception for situations affecting security, which opens the door to its use by police and security forces. Sixty civil society organizations from across Europe have organized a campaign asking the European Commission to toughen the rules against “uses of biometrics that could lead to illegal mass surveillance.” So far 60,600 citizens have signed it.

Technologies without a scientific basis

Emotion recognition is an offshoot of facial recognition. Part of the artificial intelligence industry claims that its systems can detect and predict a person’s feelings from their face and gestures. The goal is to sell them as useful tools for, for example, border control, where they could raise alerts about people in whom they detect anxiety or guilt.

The problem is that this type of recognition lacks a scientific basis, according to a growing number of studies and researchers. The United Nations has taken up this warning and places emotion recognition among the artificial intelligence systems whose use it wants halted: “Emotion recognition systems operate under the premise that it is possible to automatically and systematically infer the emotional state of human beings from their facial expressions, which lacks a solid scientific basis.”

Along these lines, the UN asks that emotion recognition stop being used to rate employees at work or to assess their suitability for a given position, two of its other most widespread uses. It also emphasizes that, along with facial recognition, emotion recognition poses the greatest threat of massive and indiscriminate collection of data and images of people who are not suspected of any crime.

Social aid decided by machines

Another use of artificial intelligence to which the United Nations report draws states’ attention is the awarding of social benefits. “They are increasingly being used to help deliver public services, often with the stated goal of developing more efficient systems,” it explains. However, the opacity of their decision-making means they can become unfair “and a dystopia of cyber-surveillance.”

One of the main concerns regarding the use of artificial intelligence for public services is that it can be discriminatory, in particular with respect to marginalized groups

The UN recalls the case of the Netherlands, which has become an international reference point thanks to a court ruling that prohibited an algorithmic system designed to detect welfare fraud. The Dutch judges found that it infringed applicants’ right to privacy, as it collected large amounts of data about them. The danger of these systems is that they impose a double penalty on those forced to apply for benefits: their privacy is violated and “individual autonomy and choice is undermined,” the report says.

In Spain, several such algorithms are known to be used in public services, and the transparency of all of them has been questioned. The Police use the well-known VioGen, which assesses the risk faced by women who report gender-based violence; the Interior Ministry has refused to open the algorithm to auditing by independent organizations. Another case is that of the BOSCO program, which grants or denies the right to the “bono social,” the subsidized electricity tariff. The Government has refused to reveal its source code in response to a transparency request from Civio, and the case is currently in court.



www.eldiario.es
