“Neuromorphic computing promises to empower smart devices that face the challenge of processing huge amounts of data in real time, while simultaneously adapting to unforeseen changes. And all this while meeting very demanding power-consumption and latency requirements.” This description is not ours; it comes from Intel, and it neatly sums up the purpose pursued by this exciting branch of engineering.
This discipline draws on our knowledge of the structure and functioning of the nervous systems of animals, which is admittedly limited, to design chips that emulate, as far as possible, the processing of an animal brain. Intel, IBM, HP and Google are some of the companies investing resources in the development of this technology, but the advances in neuromorphic computing also come from some of the most prestigious universities and research centers on the planet, such as MIT, Stanford or IMEC.
Intel released its neuromorphic Loihi processor four years ago, a chip manufactured on a 14 nm photolithography process that incorporates 128 cores and just over 130,000 neurons. Each of these artificial neurons can communicate with thousands of its neighbors, creating an intricate network that emulates the neural networks of our own brain. And just a few days ago Intel presented Loihi 2, a revision of its neuromorphic research processor that arrives accompanied by Lava, an open-source software development framework intended to make it easier to build applications that can benefit from the qualities of neuromorphic computing.
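To give an intuition for what these artificial neurons do, here is a minimal, illustrative sketch of a leaky integrate-and-fire neuron, the kind of spiking model that neuromorphic chips implement in hardware. This is a toy written in plain Python; the function name and all parameter values are our own assumptions for the example and have nothing to do with Intel's actual silicon or the Lava API.

```python
# Toy leaky integrate-and-fire (LIF) neuron. The membrane potential leaks
# each step, integrates the incoming current, and emits a spike when it
# crosses a threshold. Parameter values are illustrative, not Loihi's.

def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Return a spike train (1 = spike, 0 = silence) for a current trace."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = potential * leak + current  # leak, then integrate
        if potential >= threshold:
            spikes.append(1)   # spike: the event neighboring neurons receive
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)
    return spikes

# A constant drive of 0.4 makes the neuron fire periodically.
print(simulate_lif([0.4] * 10))  # → [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

Because neurons only exchange discrete spikes instead of continuously streaming values, a chip built from them can stay almost entirely idle between events, which is where the power savings discussed below come from.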
Loihi 2 is 10 times faster and incorporates 1 million neurons per chip
Before going any further, it is important to bear in mind that neuromorphic processors have not yet left the laboratory, at least not definitively. The companies and universities working in this area have many research projects under way, some of them in collaboration with organizations outside the academic world, and these projects pursue precisely the goal of demonstrating the commercial viability of these solutions. According to Intel, Loihi 2 brings us a little closer to that milestone.
This neuromorphic chip is, according to the information provided by Intel, up to 10 times faster than the original Loihi processor. Its architecture has been refined to facilitate the implementation of new algorithms and applications that can take advantage of neuromorphic computing, yet with lower power consumption than the first Loihi chip. However, the most impressive specification of Loihi 2 is a direct consequence of the improvements Intel has made in its manufacturing process: the higher transistor density has endowed it with 1 million artificial neurons per chip, far more than the 130,000 neurons of the original model.
The first Loihi 2 processors are being manufactured using a preview version of the Intel 4 node, a 7 nm process that uses extreme ultraviolet (EUV) photolithography and which, according to Intel, in practice delivers a transistor density comparable to, or even higher than, TSMC's 5 nm node. The lithography used to produce neuromorphic chips matters not only because it allows more artificial neurons to be integrated and microarchitectural improvements to be implemented; it also has a direct, beneficial impact on power consumption, a critical parameter in these processors.
It can’t do everything, but here neuromorphic computing can make a difference
This discipline represents an alternative to classical computing, but only in very specific applications that can benefit from a high degree of intrinsic parallelism and in which, in addition, it is necessary to minimize latency and reduce power consumption as much as possible. Intel, like the other companies and institutions promoting the development of neuromorphic computing, sees it as a complement to classic computers that offers important advantages in certain usage scenarios. Here are some of them:
- Optimization and search: neuromorphic algorithms can be designed to explore a large set of candidate solutions to a given problem and find those that satisfy specific requirements. This is very useful for finding the optimal route for a package delivery driver, or for planning the class schedules of an educational institution, among many other applications.
- Voice command identification: in early experimental tests, a Loihi chip has proven to be as accurate at recognizing voice commands as a GPU, but according to the Accenture researchers who conducted the experiment, Intel’s first-generation neuromorphic chip consumes up to 1,000 times less energy and responds 200 ms faster.
- Gesture recognition: experiments carried out by Intel researchers suggest that Loihi processors can learn and recognize the individualized gestures of a wide set of people very quickly. In this area, neuromorphic computing appears to be much more efficient than the classic artificial-intelligence algorithms we currently use.
- Robotics: two research groups from Rutgers University (United States) and TU Delft (Netherlands) have designed drone control applications using Loihi chips that have been shown to consume 75 times less power than control tools designed to run on a GPU, without negatively affecting its performance.
- Image retrieval: neuromorphic algorithms are proving very effective in processes that require identifying a set of heterogeneous objects based on their similarity to one or more reference models. In this scenario, according to Intel, neuromorphic systems are up to 24 times faster and 30 times more energy efficient than solutions that combine a GPU and a CPU.
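To make the image-retrieval use case concrete, the following sketch shows the underlying task conventionally: ranking a catalog of objects by their similarity to a reference feature vector. This plain-Python example only illustrates what the workload is, not how a neuromorphic chip performs it, and the object names and feature vectors are invented for the example.

```python
# Similarity-based retrieval: rank catalog items against a query vector.
# This runs on a CPU and merely illustrates the matching task that
# neuromorphic systems are reported to accelerate.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query, catalog):
    """Return (name, vector) pairs ranked by similarity to the query."""
    return sorted(catalog,
                  key=lambda item: cosine_similarity(query, item[1]),
                  reverse=True)

catalog = [
    ("mug",    [0.9, 0.1, 0.2]),
    ("bottle", [0.6, 0.5, 0.2]),
    ("chair",  [0.1, 0.9, 0.7]),
]
query = [0.85, 0.2, 0.15]  # feature vector of the object to identify
print([name for name, _ in retrieve(query, catalog)])  # → ['mug', 'bottle', 'chair']
```

A brute-force ranking like this scales linearly with the catalog size on a CPU; the appeal of the neuromorphic approach, as the figures above suggest, is doing this kind of massively parallel comparison with far lower energy and latency.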
More information | Intel