Overview
Swift Deep-Learning Training with Eight NVLink™ GPUs
With NVLink™ supporting up to 300GB/s of GPU-to-GPU communication, the D52G-4U shortens model-training time: processing 408GB of chest X-ray images up to 40x faster than a conventional server.
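To put the 300GB/s figure in perspective, a minimal back-of-envelope sketch is shown below. The payload size and the PCIe 3.0 x16 bandwidth figure are illustrative assumptions, not datasheet specifications:

```python
# Back-of-envelope estimate: time to move a gradient payload between GPUs
# at NVLink bandwidth vs. a conventional PCIe 3.0 x16 link.
# The PCIe figure and the 1 GB payload are assumptions for illustration.

NVLINK_GBPS = 300.0    # GB/s, peak GPU-to-GPU bandwidth quoted for NVLink
PCIE3_X16_GBPS = 16.0  # GB/s, approximate peak for a PCIe 3.0 x16 link (assumed)

def transfer_time_s(payload_gb: float, bandwidth_gbps: float) -> float:
    """Seconds to move `payload_gb` gigabytes at `bandwidth_gbps` GB/s."""
    return payload_gb / bandwidth_gbps

payload_gb = 1.0  # hypothetical per-step gradient exchange
t_nvlink = transfer_time_s(payload_gb, NVLINK_GBPS)
t_pcie = transfer_time_s(payload_gb, PCIE3_X16_GBPS)
print(f"NVLink: {t_nvlink * 1e3:.2f} ms, PCIe 3.0 x16: {t_pcie * 1e3:.2f} ms")
print(f"Link-level speedup: {t_pcie / t_nvlink:.1f}x")
```

This isolates only the interconnect term; the overall 40x training-time claim also depends on GPU count, model, and I/O, which this sketch does not model.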
Support for up to 10x Dual-Width 300W GPUs or 16x Single-Width 75W GPUs
As a purpose-built system for Artificial Intelligence (AI) and High-Performance Computing (HPC) workloads, the QuantaGrid D52G-4U delivers up to 896 Tensor TFLOPS for deep-learning training with eight* NVIDIA® Tesla V100 dual-width 10.5-inch GPUs, or up to 293 GOPS/watt of peak INT8 inferencing performance with sixteen NVIDIA® Tesla P4 GPUs and two Intel® Xeon® Scalable processors. Up to 56 double-precision TFLOPS of computing power accelerates HPC workloads in fields such as oil and gas, bioinformatics, and mechanical engineering. On top of this computing power, the D52G offers 2x 100Gb/s high-bandwidth, low-latency networking to expedite communication among GPU nodes.
Diverse GPU Topologies to Conquer Any Type of Parallel Computing Workload
The QuantaGrid D52G provides multiple GPU topologies on the same baseboard tray to meet different use cases.
High-Bandwidth, Low-Latency Networking Between GPU Nodes
The QuantaGrid D52G-4U has two or four additional PCIe Gen3 x16 LP-MD2 slots, providing optional 100Gb/s low-latency InfiniBand or Intel® Omni-Path connectivity for GPUDirect within the server or RDMA between GPU nodes.
NVMe SSD Support to Accelerate Deep Learning
The D52G-4U supports up to 8x NVMe SSDs, accelerating both training and inferencing with fast I/O data reading, since deep learning is a data-driven workload.
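As a rough illustration of why storage bandwidth matters for a data-driven workload, the sketch below estimates the pure read-I/O time per epoch for the 408GB dataset cited above. The per-drive read speeds and the assumption that reads scale linearly across drives are illustrative, not datasheet figures:

```python
# Rough illustration (assumed figures, not datasheet specs): seconds of pure
# read I/O needed to stream a training dataset once per epoch.

DATASET_GB = 408.0     # dataset size cited in the overview
NVME_READ_GBPS = 3.0   # GB/s, typical sequential read for one NVMe SSD (assumed)
SATA_READ_GBPS = 0.55  # GB/s, typical SATA SSD ceiling (assumed)

def epoch_io_seconds(dataset_gb: float, read_gbps: float, drives: int = 1) -> float:
    """Seconds of read I/O per epoch, assuming reads scale linearly across drives."""
    return dataset_gb / (read_gbps * drives)

print(f"1x SATA SSD : {epoch_io_seconds(DATASET_GB, SATA_READ_GBPS):8.1f} s/epoch")
print(f"1x NVMe SSD : {epoch_io_seconds(DATASET_GB, NVME_READ_GBPS):8.1f} s/epoch")
print(f"8x NVMe SSD : {epoch_io_seconds(DATASET_GB, NVME_READ_GBPS, drives=8):8.1f} s/epoch")
```

In practice caching, data-pipeline overlap, and file layout change these numbers, but the sketch shows how fast storage keeps the GPUs fed rather than idle.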