Monday, November 29

The Algorithm, the public perception of science

Since the beginning of time, learning has shaped the cultural heritage we inherit. Accumulated experience is a store of learning that can be distilled into a finite set of recommendations, letting us draw on this wealth of knowledge and experience without having to repeat the mistakes that gave rise to the learning in the first place. That is precisely what we have tried to build with the computers that today permeate most of our days and that, without a doubt, increasingly define how we interact, how we learn, and even the future of our planet. What cannot be measured cannot be controlled: this is one of the great paradigms underlying the scientific practice of projecting future scenarios. By building models from data, we can identify the main components that govern the behavior of the phenomena we want to understand.
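
The column itself contains no code, but as a purely illustrative sketch of "building models from data to identify main components," here is a principal component analysis (PCA) computed with NumPy. PCA is one concrete instance of this idea chosen for illustration; the synthetic measurements and every name in the snippet are assumptions for the example, not anything from the original text.

```python
# Illustrative sketch: identify the "main components" of a measured system
# from data, using principal component analysis (PCA). All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical measurements: 200 observations of 3 variables, two of which
# are driven by the same hidden factor.
latent = rng.normal(size=(200, 1))
data = np.hstack([
    latent + 0.1 * rng.normal(size=(200, 1)),
    2.0 * latent + 0.1 * rng.normal(size=(200, 1)),
    rng.normal(size=(200, 1)),
])

# Center the data and compute its covariance matrix.
centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)

# Eigenvectors of the covariance matrix are the principal components;
# eigenvalues say how much of the observed variation each one explains.
eigenvalues, eigenvectors = np.linalg.eigh(cov)
order = np.argsort(eigenvalues)[::-1]
explained = eigenvalues[order] / eigenvalues.sum()

print("Fraction of variance explained by each component:", explained.round(3))
print("Leading component (direction in measurement space):",
      eigenvectors[:, order[0]].round(2))
```

Run on these synthetic measurements, the first component captures most of the variance, which is exactly the sense in which a data-driven model can reveal the few factors that dominate a system's behavior.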

Yet neither physics nor today's mathematics is enough to support an analytical account of the phenomena that govern our everyday reality. We have managed to predict how the Moon will move with respect to the Sun, and astronomers are essentially never wrong about it anymore; but so far such models are of little use in figuring out, for example, how to mitigate cancer or alleviate psychiatric conditions, much less in knowing for certain the outcome of a popular election. We have ever-faster approaches to understanding complex systems, but all of them rely on computers and the algorithms that run on those clusters of transistors.

Last October, the IEEE's Spectrum magazine published a special report titled "Why Is AI So Dumb?". This public perception of science reflects how most societies feel about the sweeping expectations raised by the proliferation, and abuse, of the term "AI" in the media, as well as about its possible repercussions and its, so far, limited results. The issue discusses the past and uncertain future of AI and explains in considerable detail the emergence of the methodologies known as deep learning: their still narrow range of application and, among other things, the enormous importance of human participation, not only in the design of these algorithms but also in the subjectivity involved in treating behaviors rooted in human individuality as part of the learning that makes up a certain non-human intelligence of machines.
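
As a hedged illustration of what the report means by "deep learning," the sketch below trains a very small neural network from human-labeled examples using plain NumPy. The task (XOR), the architecture, and every parameter are assumptions made for this example; they are not taken from the Spectrum report or from the column.

```python
# Illustrative sketch: a tiny neural network whose weights are adjusted from
# human-provided examples, the basic mechanism behind "deep learning."
import numpy as np

rng = np.random.default_rng(1)

# Human-labeled examples: two binary inputs and the desired output (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units between input and output.
W1, b1 = rng.normal(scale=0.5, size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for step in range(5000):
    # Forward pass: compute the network's current predictions.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the cross-entropy loss, then nudge the
    # weights so the predictions move toward the human labels.
    grad_out = p - y
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_h = (grad_out @ W2.T) * (1.0 - h ** 2)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print("Predictions after training:", p.round(2).ravel())
```

The point of the sketch is the one the report stresses: the "intelligence" here is entirely shaped by the examples a human chose to label, and the network knows nothing beyond them.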

A central part of the articles invariably relies on examples and illustrations, vivid visual imagery, featuring robots of various kinds. I cannot deny a certain subjective proclivity, since I have dedicated myself to robotics for more than 30 years; even so, it seems to me a mistaken tendency to insist on "personifying" the most sophisticated reasoning capabilities as robots, when that artificial intelligence and its capabilities are increasingly available in almost all of our living spaces, with or without robots.

www.elfinanciero.com.mx
