There is a lot of talk about artificial intelligence and how robots will end up replacing us and taking our jobs. Leaving aside the debate about whether we really want to do these jobs (and whether we can avoid doing them), one of the keys to this revolution is machine vision, which goes far beyond putting a camera on a robot arm.
The human brain receives information about our environment through the eyes. What arrives is a soup of changes in light and size, which our brain is responsible for interpreting. For example, if we perceive a round shape that is rapidly getting larger, in a fraction of a second we duck and dodge a ball, long before we know it is a ball.
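This "looming" cue can be captured with a simple rule of thumb: if an object's apparent size is growing, the time until it reaches you is roughly its current size divided by how fast that size is growing. The sketch below illustrates the idea with invented numbers; real collision-avoidance systems measure apparent size from successive camera frames.

```python
# Toy "looming" detector: estimates time-to-contact from how fast an
# object's apparent size grows between two frames. This is a classic cue
# used by both brains and collision-avoidance systems. All numbers here
# are invented for illustration.

def time_to_contact(size_now: float, size_prev: float, dt: float) -> float:
    """tau = size / (rate of growth); smaller tau means impact sooner."""
    growth_rate = (size_now - size_prev) / dt
    if growth_rate <= 0:
        return float("inf")  # not getting bigger, so not approaching
    return size_now / growth_rate

# Apparent diameter (in pixels) of a ball in two frames 0.1 s apart.
tau = time_to_contact(size_now=120, size_prev=100, dt=0.1)
print(f"estimated time to contact: {tau:.1f} s")  # -> 0.6 s: duck!
```

Note that the estimate never needs to know what the object is or how far away it is, which is exactly why the reflex can fire "long before we know it is a ball".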
Artificial intelligence systems are capable of extracting information and drawing conclusions by examining large data sets. This is how Amazon deduces which products you might be interested in. However, these intelligent systems live inside a black box. In order for them to interact with the world they need "eyes", that is, artificial vision.
Artificial vision is the ability of machines to interpret and understand images. This allows them to read handwritten text, identify objects, and navigate their environment. None of this is in the future. These are things that, with some limitations, your smartphone, your robot vacuum cleaner and, even more surprisingly, self-driving cars can already do.
Machine vision should not be confused with computer image processing. For example, robotic cameras that travel through pipelines take images that are then processed and analyzed in great detail to pinpoint leaks.
Machine vision often doesn't need that much precision, just enough to make a decision. It is what allows an autonomous electric car, like the new Tesla models, to identify a pedestrian, distinguish them from a lamppost, and brake at the right time.
Machine vision and repetitive work
All over the world there are people who stand in front of a conveyor belt every day. Anything from newly manufactured plastic parts to fruit and vegetables can circulate on this belt. The job of these operators, usually underpaid, is to use their eyes and hands to locate the defective pieces or fruit and remove them from the conveyor.
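At its core, this sorting task is a stream of accept/reject decisions, which is exactly the kind of decision machine vision is good at. The sketch below shows the decision step only; the item names, the `blemish_ratio` feature, and the threshold are invented for illustration, and in a real system that ratio would come from an image-processing pipeline analyzing each camera frame.

```python
# Minimal sketch of a vision-based accept/reject decision on a conveyor.
# "blemish_ratio" stands in for a feature extracted from a camera image:
# 0.0 means flawless, 1.0 means fully defective. The threshold below is
# a hypothetical tolerance chosen for illustration.

REJECT_THRESHOLD = 0.15

def inspect(item_id: str, blemish_ratio: float) -> str:
    """Decide whether an item stays on the belt or is removed."""
    if blemish_ratio > REJECT_THRESHOLD:
        return f"{item_id}: reject"
    return f"{item_id}: accept"

# Simulated stream of items passing in front of the camera.
belt = [("apple-01", 0.02), ("apple-02", 0.40), ("part-07", 0.10)]
decisions = [inspect(name, ratio) for name, ratio in belt]
print(decisions)
# -> ['apple-01: accept', 'apple-02: reject', 'part-07: accept']
```

Unlike a human operator, this loop runs at the same speed and accuracy in hour one and hour eight of a shift.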
This is one example of a job that, thanks to artificial vision, a robot can do better. Manual fabrication or assembly work can also be automated when machine vision is added.
For example, many assembly lines use robots for welding and assembly, but they depend on every part being in an exact, predetermined position. With an artificial vision system, a machine can pick up the pieces, place them accurately, distinguish one from another, and assemble them, just as a human operator would.
These advances make machine vision one of the components of Industry 4.0, or the fourth industrial revolution, along with machine learning, the internet of things and 3D printing. The result is factories with less waste of materials and energy, and also with much less labor.
Robots with artificial vision also live outside factories. The robot vacuum cleaner, capable of moving around the house and avoiding obstacles while cleaning, is just the beginning. There are already models of service robots designed to interact with people in various settings, such as hospitals, schools or homes. For example, they can assist people with disabilities, helping them get out of bed or bathe.
Cars, drones and accidents
There are other applications of artificial vision closer to our daily lives. For example, machine vision allows drones to deliver packages in remote areas autonomously, without the need for a remote operator to direct them. This application has already been tested by the United States Postal Service (USPS), with packages of up to two kilos, even in adverse weather conditions.
The other side of the coin is autonomous military drones. Although remotely controlled drones have been around for years, machine vision would allow these aircraft to shoot or bomb on their own, making on-the-spot decisions about which target to engage.
The use of artificial vision in cars also has great advantages for the driver, since it allows the vehicle to analyze its surroundings, anticipate collisions and thus avoid accidents. Autonomous cars could improve traffic flow and free passengers from the task of driving, allowing them to watch movies or work on the road.
However, fully autonomous cars are not expected until 2030, for reasons that go beyond the technology, notably their legal ramifications. For example, what happens when the artificial vision system fails and there are victims? Who is responsible?
The robot eye that sees everything
Today’s mobile phones are already capable of recognizing handwritten text fairly accurately, something that was very expensive and complex just a decade ago. Computer vision is also what allows Google to search images by their content and find similar ones.
Machine vision has been used for years for facial recognition. It is the reason there are now automatic kiosks for passport control at many borders: the camera recognizes the traveler’s face and compares it to a database, allowing criminals or terrorists to be quickly identified.
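The comparison step works on numbers, not pictures: modern systems convert each face image into a numeric "embedding" vector and compare vectors. The sketch below shows that matching step with invented names and tiny 3-dimensional vectors; real embeddings have hundreds of dimensions, and the similarity threshold here is purely illustrative.

```python
import math

# Toy sketch of the matching step in facial recognition. Each face is
# represented by an embedding vector; we compare vectors with cosine
# similarity. The database entries and threshold are invented.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

database = {
    "alice": [0.9, 0.1, 0.2],
    "bob":   [0.1, 0.8, 0.5],
}

def identify(face_embedding, threshold=0.95):
    """Return the best database match above the threshold, else None."""
    best_name, best_score = None, 0.0
    for name, stored in database.items():
        score = cosine_similarity(face_embedding, stored)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

print(identify([0.88, 0.12, 0.21]))  # very close to "alice" -> alice
print(identify([0.5, 0.5, 0.5]))     # matches no one well -> None
```

The threshold is the crucial policy decision: set it too low and innocent travelers are flagged as matches; set it too high and the people the system is meant to catch walk through.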
There are proposals to use this same technology to identify people without the need for documents, which would eliminate cumbersome checkpoints, queues and, ultimately, passports and identity cards. Nevertheless, it is not all advantages.
It is debatable whether facial recognition stays within the limits of legitimate use and the right to privacy when applied to people who are not suspected of any crime. In addition, there is the risk of its use for other purposes.
A camera pointed at a crowd can be used to recognize and identify every person passing by, shopping in a store, or taking part in a demonstration.
This information could be used for political, commercial or criminal purposes. The scene in “Minority Report” where billboards identify Tom Cruise by name and try to sell him something as he walks by is closer than we think.
But when a machine is able to see and interpret reality, that technology can also be used to restore sight, in a sense, to visually impaired people. Today these people already have readers with handwriting recognition, and applications that tell them what is in front of the camera.
Before long we will see artificial eyes for humans with many more capabilities. In the next decade, machines that can see will change our lives. It is up to us to establish the limits so that the change is positive.