Nvidia has announced its DGX-1 Deep Learning System at the 2016 GPU Technology Conference. The system sports eight of Nvidia's newest Tesla P100 graphics cards (built on the GP100 GPU), providing up to 170 teraflops of half-precision performance.
That figure might not mean much to the average PC fan, but for context it is over twelve times the graphics performance of the Nvidia Titan X, the company's most expensive and powerful graphics card on the market.
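As a rough sanity check on the headline number, the aggregate figure can be divided across the eight GPUs (a back-of-the-envelope sketch using only the numbers quoted above, and assuming the throughput is split evenly):

```python
# Back-of-the-envelope arithmetic on the announced figures.
total_fp16_tflops = 170.0   # claimed aggregate half-precision throughput
num_gpus = 8                # Tesla GPUs in the DGX-1

per_gpu_tflops = total_fp16_tflops / num_gpus
print(f"Per-GPU FP16 throughput: {per_gpu_tflops:.2f} TFLOPS")  # 21.25 TFLOPS
```

That per-card result of roughly 21 teraflops of FP16 throughput gives a sense of how far ahead of a single consumer card each of the eight GPUs sits.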
The GP100 is built on TSMC's 16nm FinFET manufacturing process and is Nvidia's first GPU to use second-generation High Bandwidth Memory (HBM2). Nvidia has adopted both technologies ahead of Intel and AMD, though Samsung has been shipping chips on a comparable FinFET process since late 2015.
Rather than use the new manufacturing process to shrink the GPU, Nvidia has packed in far more transistors. Nvidia expects the chip to break performance records in the GPU industry, but it might be a while before consumers are able to purchase the card.
Nvidia DGX-1 “middleman” in quantum computing race
Moving back to the DGX-1 Deep Learning System: alongside the eight GPUs, Nvidia has included two 16-core Intel Xeon E5-2698 CPUs clocked at 2.3GHz, 512GB of DDR4 RAM, four 1.92TB SSDs in RAID, and dual 10GbE ports plus InfiniBand networking, according to Ars Technica. The system requires a 3,200W PSU, which Nvidia does not provide.
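For a quick tally of the quoted storage and memory figures (a minimal sketch using only the numbers listed above):

```python
# Totals from the DGX-1 spec rundown in the article.
ssd_count = 4
ssd_capacity_tb = 1.92      # per-drive capacity
ram_gb = 512

total_ssd_tb = ssd_count * ssd_capacity_tb
print(f"Total SSD capacity: {total_ssd_tb:.2f}TB")  # 7.68TB
print(f"System RAM: {ram_gb}GB")
```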
The final cost comes to $129,000, and the system will be available in June. We suspect no augmented reality or self-driving car manufacturers will be interested, unless they plan to sell their products at a massive loss or at an unsellable price point.
Deep learning companies might be more enthused by the product, as it offers much more performance than anything else on the market. Performance in this context means the deep learning machine can view, learn, and understand things more quickly.
Computing power is seen as the ultimate necessity for deep learning systems to thrive, which is why Google, IBM, and Microsoft are all investing in quantum computing. While quantum computing is still being tested, the DGX-1 could become the middleman for deep learning firms keen to use the fastest computing hardware available.