
Nvidia Releases Xavier Chip with 20 Trillion Operations per Second

Nvidia launches the world’s first AI supercomputer chip.


Moore’s law has been slowing lately. As components shrink, quantum mechanical effects such as quantum tunneling limit how much smaller our chips can get. All is not lost, however. Nvidia CEO Jen-Hsun Huang says we’re on the cusp of a new Moore’s law-like curve of innovation, driven by CPUs with “accelerator kickers, mixed precision capabilities, new distributed frameworks for managing both AI and supercomputing applications, and an unprecedented level of data for training.”

One of these new innovations is Nvidia’s Xavier chip. Xavier is a single-chip computer with over 7 billion transistors, a count that exceeds most server-class CPUs. Nvidia is touting Xavier as the world’s first AI supercomputer chip: it can perform over 20 trillion operations per second while drawing a remarkable 20 watts. Fifty Xavier chips could deliver 1 petaflop, or one quadrillion operations per second, on just 1 kilowatt of power. By comparison, a conventional supercomputer with the same computing power would cost $2-4 million and draw between 100 and 500 kilowatts. The first petaflop supercomputer, built in 2008, cost $100 million.
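For scale, here is a quick back-of-envelope check of those figures. The per-chip numbers are the ones quoted above (20 trillion operations per second, 20 watts); the rest is simple arithmetic, not an Nvidia spec sheet.

```python
# Rough sanity check of the aggregate figures quoted in the article.
OPS_PER_CHIP = 20e12     # 20 trillion operations per second per Xavier chip
WATTS_PER_CHIP = 20      # roughly 20 W per chip
CHIPS = 50

total_ops = CHIPS * OPS_PER_CHIP      # 1e15 ops/s -> one quadrillion, i.e. petaflop-class
total_power = CHIPS * WATTS_PER_CHIP  # 1,000 W -> 1 kW

print(f"{total_ops:.0e} ops/s at {total_power / 1000:.0f} kW")
# -> 1e+15 ops/s at 1 kW
```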


Huang sees an approaching golden age of computing, with advances so quick and significant that they will outpace our predictions. Merging AI and supercomputing code lets traditional, hand-written routines be offloaded onto neural networks, producing computers that are both lower in power and far more capable. We’re already seeing increased use of AI in self-driving cars and in robots that learn through deep-learning trial and error.

“Deep learning is a supercomputing challenge and a supercomputing opportunity,” Huang says. “Modern supercomputers should be designed as AI supercomputers. This means a system has to be good at computational science and data science and that requires an architecture that is good for both. We want to be able to support models that are very large and process many of those across multiple nodes, so interconnectivity is important. We have shown GPUs that can be shared in this way across massive GPU sets of nodes and the supercomputers of the future will be balanced by these two computational approaches with this architecture.”
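To make the interconnectivity point concrete, here is a minimal, hypothetical sketch of the kind of multi-node communication Huang is describing: each node computes gradients for a large model on its own slice of the data, then an all-reduce over the interconnect averages them so every node applies the same update. It assumes mpi4py and NumPy, and the array size and names are made up; it is an illustration of the idea, not Nvidia’s software stack.

```python
# Minimal sketch of data-parallel gradient averaging across nodes
# (assumes mpi4py + NumPy; shapes and names are hypothetical).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
world_size = comm.Get_size()

# Each node computes gradients on its own shard of the training data...
local_grad = np.random.rand(1_000_000).astype(np.float32)

# ...then the interconnect sums them (all-reduce) and we divide by the node
# count, so every node ends up with the same averaged gradient and stays in sync.
global_grad = np.empty_like(local_grad)
comm.Allreduce(local_grad, global_grad, op=MPI.SUM)
global_grad /= world_size

if rank == 0:
    print(f"synchronized gradients across {world_size} node(s)")
```

Launched with something like `mpirun -np 4 python allreduce_sketch.py`, the cost of that all-reduce step is exactly where fast interconnects between GPU nodes pay off.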

source: Next Platform

David F.
A grad student in experimental physics, David is fascinated by science, space and technology. When not buried in lecture books, he enjoys movies, gaming and mountain biking.

