At its GTC 2022 event, Nvidia presented the Hopper GPU architecture, a new line of data center GPUs that the firm claims will accelerate the kinds of algorithms commonly used in data science.
Jen-Hsun Huang, one of the founders of Nvidia, held a webcast showing off the Nvidia H100 GPU, which packs 80 billion transistors and includes a component known as the Transformer Engine, designed to accelerate specific categories of AI models.
Another highlight is Nvidia’s MIG (Multi-Instance GPU) technology, which allows a single H100 to be partitioned into seven smaller, independent instances, each capable of handling a different type of job.
“Data centers are becoming AI factories that process and refine mountains of data to produce intelligence. Nvidia H100 is the engine of the global AI infrastructure that companies use to accelerate their businesses powered by this technology,” Huang explained in the presentation.
On the same day, the company also announced a new mapping platform that will give autonomous vehicles coverage of more than 300,000 miles of real roads in North America, Europe and Asia by 2024.
The platform, called Drive Map, should enable high levels of autonomous driving, according to the firm’s founder. In addition, it will not only be open to current Nvidia customers, but will also expand the company’s offerings for the automotive sector.
Isaac Nova Orin
During GTC, Nvidia also discussed Isaac Nova Orin, its compute and sensor architecture that is powered by Jetson AGX Orin hardware. According to the company, Isaac Nova Orin comes with all the computing hardware and sensors needed to design, build and test autonomy in autonomous mobile robots.
These types of robots can perceive and navigate their environment without direct supervision by an operator. The new computing architecture should be available later this year.
Lastly, the company introduced its Grace CPU Superchip, a solution that complements the Grace Hopper CPU-GPU Superchip announced last year. The Grace CPU Superchip incorporates two ARM-based CPUs connected to each other by a high-speed, low-latency interconnect.
According to Nvidia, this data center CPU is intended to deliver higher performance and double the memory bandwidth, as well as greater power efficiency, compared with current server chips.