The problem is already well defined. Researchers, technologists and the UN have been pointing out for years that behind the trade name "Artificial Intelligence" lie algorithms that can be as discriminatory as the people who program them: opaque systems that, under a false veneer of mathematical neutrality, can be as unfair as an exploitative boss or make decisions rooted in structural machismo. With the evidence in hand, it is now politics' turn.
The state budget for 2022, finally approved this Tuesday, includes the creation of a State Agency for the Supervision of Artificial Intelligence, independent and with its own funding. The body's mission will be to "minimize the risks" of the algorithms spreading throughout society, from social networks to the workplace, from Siri, Alexa and other virtual assistants that have landed in our homes to the systems that distribute public resources and aid.
The creation of this entity, already being called the Algorithms Agency, comes via an amendment from Más País-Equo agreed with the Government. The agreement includes little more than the Agency's general outline and its initial allocation, which will be five million euros. The pact has been welcomed by specialists, who ask that it not remain cosmetic and that the Agency get fully involved in "algorithm audits" so that harmful effects do not go unnoticed.
It is not an easy mission. Artificial Intelligence systems are often called "black boxes" because companies of every size, from the digital giants to the smallest startup, are very reluctant to reveal how they work. These are their intangible assets: the details of Google's search engine, the distribution of tasks among Glovo delivery riders or the selection of the next hot TikTok content are worth billions. There used to be just one secret formula, Coca-Cola's; now every digital company has its own.
No one would accept a pharmaceutical company refusing to have its products analyzed to verify that they are not dangerous. The same must apply to Artificial Intelligence
– Más País
"The intellectual property of companies must be respected, but no one would accept a pharmaceutical or food company refusing to have its products analyzed to verify that they are not dangerous. The same must happen with Artificial Intelligence," explains Héctor Tejero, political coordinator of Más País, in conversation with this outlet.
"We believe the Algorithms Agency should work much as public health authorities do: if you want to put a product on the market, you must demonstrate that it is not harmful and comply with all control regulations," he adds. "The best example is the rider law, which obliges companies to give workers access to their algorithms. If the law written to create this Agency establishes similar criteria, we will be on the right track."
Íñigo Errejón's party is in talks with the Secretary of State for Digitalization and Artificial Intelligence (SEDIA) over the drafting of the law that will define the functions of this State Agency. Part of the work will take inspiration from Brussels, since the EU is finalizing an Artificial Intelligence Regulation that will have algorithm auditing as its central axis. In July, SEDIA proposed to the EU institutions that Spain serve as the testing ground for the Regulation.
Convincing companies to allow public scrutiny of their algorithms will not be the only challenge facing the new Agency. It will also have to establish how to carry out these analyses, completely unexplored terrain for public administration. By comparison, bodies such as the Data Protection Agency, with which it is expected to work closely, have nearly three decades of experience behind them.
"Right now there is no protocol for what auditing an algorithm means, what steps must be followed, and so on. The Agency will have to create a standard," acknowledges Tejero. "Spain can become a small power in this field," he says.
Another of the Agency's aspirations is for public administrations to set an example and audit the algorithms they themselves use.
“The administration starts off on the wrong foot”
The creation of the Algorithms Agency is full of good intentions. The truth, however, is that public administrations have a long history of making it difficult for citizens to audit the systems they use, to the point of going to court to prevent it.
A contentious-administrative appeal on the transparency of the social energy subsidy, an aid scheme for vulnerable families, is currently pending before the Supreme Court, filed by the Civio Foundation, a pro-transparency organization. The Ministry of Ecological Transition refuses to hand over the algorithm's code, despite being urged to do so by the Transparency Council.
We already have many good practice guides. What we need are concrete tools, clear indications and compliance models
– Eticas Consulting
"It is clear that public administration is starting off on the wrong foot," laments Gemma Galdón, director of Eticas, a consulting firm specializing in analyzing the impact of Artificial Intelligence. "We have spent four years insisting that the Ministry of the Interior allow an audit of VioGén [the algorithm that assesses the risk level of gender violence cases] and they have not permitted it. In the end we decided to do it externally. The same happened with RisCanvi [an algorithm that predicts the risk of recidivism among inmates in Catalan prisons]," she says.
The creation of the Agency "is good news," but "the important thing is that it be endowed with specific mechanisms." "We already have plenty of generic good practice guides; what we need are concrete things: tools, clear guidelines and compliance models," the expert urges.