4x Xeon CPUs + 4x NVIDIA GPUs = AI, Deep Learning platform
Innovative multi-node structure
The chassis incorporates an Intel i5 vPro NUC-based low-power management node alongside four Xeon-D powered compute nodes.
Each Xeon compute node can accept a PCIe GPU co-processor (low-profile cards only).
Redundant internal inter-connectivity
The management node (node0) has two gigabit Ethernet connections, and the four Xeon compute nodes each have 2x 1GbE connections, all managed by an internal smart Layer 3 switch.
Three external 1GbE Ethernet connections allow external units to join the cluster environment and/or enable a daisy-chain configuration.
Easy and intuitive serviceability and management
Multiple I/O options available per node (behind the service panel)
Integrated, compact design
Weighing under 15 kg, the unit incorporates:
1x management node
4x compute nodes, each with a GPU slot
750W Power Supply
24-port managed smart switch (incl. WiFi)
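The node and link layout above can be sketched as a small inventory. This is an illustrative model only: the hostnames, roles and link counts below are assumptions based on the description on this page, not published V6XD-G defaults.

```python
# Sketch of the V6XD-G internal topology: one management node plus four
# compute nodes, each with two internal 1GbE links into the smart switch.
# Hostnames ("node0".."node4") are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Node:
    name: str       # e.g. "node0" for the management NUC
    role: str       # "management" or "compute"
    nics: int       # internal 1GbE links into the Layer 3 switch
    gpu_slot: bool  # low-profile PCIe slot for a GPU co-processor

cluster = [Node("node0", "management", 2, False)] + [
    Node(f"node{i}", "compute", 2, True) for i in range(1, 5)
]

# Two management links plus eight compute links terminate on the
# internal 24-port smart switch.
internal_links = sum(n.nics for n in cluster)
print(internal_links)  # 10
```

Counting the links this way makes the "redundant internal inter-connectivity" claim concrete: ten internal 1GbE links on a 24-port switch leaves headroom for the three external connections and future expansion.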
With endless hardware configurations and colour options, as well as personalised laser engraving, the V6XD-G will meet even the most specific technical needs while easily morphing into any company's visual identity.
When purchased alongside our recommended flight case, it meets cabin-size requirements for most continental and intercontinental airlines.
Tranquil PC recommend NVIDIA Quadro / Tesla GPU cards for use in the V6XD-G system.
The following cards have been tested and certified for use in the V6XD-G:
For high-performance AI / deep learning, Tranquil PC recommend the NVIDIA Tesla P4 GPU card: add four cards, one per Xeon compute node.
The INFERENCING ACCELERATOR
In the new era of AI and intelligent machines, deep learning is shaping our world like no other computing model in history. GPUs powered by the revolutionary NVIDIA Pascal™ architecture provide the computational engine for the new era of artificial intelligence, enabling amazing user experiences by accelerating deep learning applications at scale.
The NVIDIA Tesla P4 is powered by the revolutionary NVIDIA Pascal™ architecture and purpose-built to boost efficiency for scale-out servers running deep learning workloads, enabling smart, responsive AI-based services. It slashes inference latency by 15X in any hyperscale infrastructure and provides an incredible 60X better energy efficiency than CPUs. This unlocks a new wave of AI services previously impossible due to latency limitations.
The Tesla P4's small form factor and 50W/75W power footprint suit density-optimised, scale-out servers, letting hyperscale customers meet the exponential growth in demand for AI applications.
The V6XD-G is a small, minimalist product, but the engineering inside is far from simple.
The images below show the rear of the unit, with the master node and four Xeon compute nodes exposed.
We know that purchasing a Tranquil Cluster Server is an investment you want to be completely confident in. Everyone has different needs for their cluster, and we are here to help you choose the right one for you. Not only can we assist with the initial model selection, we can guide you through configuration and, finally, discuss a custom colour scheme and laser branding to your requirements.