Nvidia unveils new deep learning system for supercomputers

Nvidia has announced its DGX-1 Deep Learning System at the 2016 GPU Technology Conference. The system sports eight of Nvidia’s newest Tesla GP100 graphics cards, providing up to 170 teraflops of half-precision performance.

That might not mean much to the average PC fan, but for context, it is more than twelve times the performance of the Nvidia Titan X, the company's most expensive and most powerful graphics card on the market.
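The article does not show how the "more than twelve times" figure is reached, but a plausible reconstruction follows, assuming a Titan X delivers roughly 7 teraflops of single-precision (FP32) compute and that the DGX-1's 170 half-precision (FP16) teraflops correspond to roughly 85 FP32 teraflops, since the GP100 runs FP16 at twice its FP32 rate. Both figures are assumptions, not numbers from the announcement.

```latex
% Rough arithmetic behind the "more than twelve times" claim (assumed figures):
\[
\underbrace{\frac{170\ \text{TFLOPS}_{\text{FP16}}}{2}}_{\approx\,85\ \text{TFLOPS}_{\text{FP32}}\ \text{(DGX-1)}}
\;\Big/\;
\underbrace{7\ \text{TFLOPS}_{\text{FP32}}}_{\text{Titan X}}
\;\approx\; 12\times
\]
```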


The Tesla GP100 is built on TSMC's 16nm FinFET manufacturing process and is the first GPU to use second-generation High Bandwidth Memory (HBM2). Nvidia has adopted both technologies ahead of Intel and AMD, though Samsung has been manufacturing chips on a comparable FinFET process since late 2015.

Rather than use the new manufacturing process simply to shrink the GPU, Nvidia has packed far more transistors onto the card. The company expects the graphics card to break performance records in the GPU industry, but it might be a while before consumers are able to purchase it.

Nvidia DGX-1 “middleman” in quantum computing race


Moving back to the DGX-1 Deep Learning System: according to Ars Technica, Nvidia pairs the eight GPUs with two 16-core Intel Xeon E5-2698 CPUs clocked at 2.3GHz, 512GB of DDR4 RAM, four 1.92TB SSDs in RAID, and dual 10GbE and InfiniBand network ports. The system requires a 3,200W PSU, which Nvidia does not supply.

The final cost comes to $129,000 for the system, and it will be available in June. We suspect few augmented reality or self-driving car manufacturers will be interested, unless they plan to sell their products at a massive loss or at an unsellable price point.

Deep learning companies might be more enthused by the product, as it offers far more performance than anything else on the market. Performance in this context means the deep learning machine can see, learn, and understand things at a faster rate.
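To make the half-precision angle concrete, here is a minimal, hypothetical sketch (not from the article, and written against today's PyTorch API rather than 2016-era tooling) of how a deep learning workload casts a model to FP16 so it runs on the GPU's faster half-precision path, which is where the DGX-1's headline 170-teraflop figure applies. The model shape and batch size are arbitrary illustrations.

```python
# Minimal sketch: running a forward pass in half precision (FP16) on a GPU.
# Illustrative only; the network, batch size, and framework are assumptions,
# not details from Nvidia's announcement.
import torch

# A toy network, moved to the GPU and cast to FP16 so the hardware's
# half-precision throughput can be used.
model = torch.nn.Sequential(
    torch.nn.Linear(4096, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 1000),
).cuda().half()

# A batch of FP16 input data on the same device.
x = torch.randn(256, 4096, device="cuda", dtype=torch.float16)

# The forward pass now runs in FP16; on hardware like the GP100 this roughly
# doubles arithmetic throughput versus FP32 on the same silicon.
logits = model(x)
print(logits.shape)  # torch.Size([256, 1000])
```

In practice, training frameworks typically mix FP16 compute with FP32 accumulation to keep the numerics stable, but the basic speed argument is the same.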

Computing power is seen as the ultimate necessity for deep learning systems to thrive, which is why Google, IBM, and Microsoft have all invested in quantum computing. While quantum computing is still being tested, the DGX-1 might become the middleman for deep learning firms keen to use the fastest computing hardware available.
