
V6XD-G with NVIDIA Tesla GPU (1)

August 30, 2017

Some time ago we mentioned that a new variation on the popular V6XD (Xeon) Cluster was in development.  Now we are a little nearer to its release.  The V6XD-G, as it will be catalogued, provides the same master node controller and four Xeon Compute nodes, along with an internal managed switch and a high-reliability power supply, in a small and rugged cabinet - but it additionally supports a GPU card per Compute node.

Over the coming weeks, Tranquil will be posting a series of blogs showing how this high performance micro cluster was developed - so please keep your eyes on this blog.

 

Register your interest in the V6XD-G now


A range of professional and server grade GPU cards is now being tested for final certification, and one of these should be the state-of-the-art NVIDIA Tesla P4!  It's a perfect partner for the V6XD-G!
Here's what NVIDIA say about it:

 

The INFERENCING ACCELERATOR

 

In the new era of AI and intelligent machines, deep learning is shaping our world like no other computing model in history. GPUs powered by the revolutionary NVIDIA Pascal™ architecture provide the computational engine for the new era of artificial intelligence, enabling amazing user experiences by accelerating deep learning applications at scale.

The NVIDIA Tesla P4 is powered by the revolutionary NVIDIA Pascal™ architecture and purpose-built to boost efficiency for scale-out servers running deep learning workloads, enabling smart, responsive AI-based services. It slashes inference latency by 15X in any hyperscale infrastructure and provides an incredible 60X better energy efficiency than CPUs. This unlocks a new wave of AI services previously impossible due to latency limitations.

 

The Tesla P4’s small form factor and 50 W/75 W power footprint accelerate density-optimized, scale-out servers. It also provides an incredible 60X better energy efficiency than CPUs for deep learning inference workloads, letting hyperscale customers meet the exponential growth in demand for AI applications.
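
For anyone curious what a Compute node will actually see once its GPU card is fitted, here is a minimal sketch in CUDA (an illustration only, assuming the CUDA toolkit is installed on the node - not part of the product itself) that enumerates the visible devices and prints the name, memory and compute capability of each one:

// list_gpus.cu - enumerate the CUDA devices visible to a Compute node
// Build with: nvcc list_gpus.cu -o list_gpus   (assumes the CUDA toolkit is installed)
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        std::printf("CUDA error: %s\n", cudaGetErrorString(err));
        return 1;
    }
    std::printf("%d CUDA device(s) found\n", count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // Report the card name, total memory and compute capability
        std::printf("  Device %d: %s, %.1f GB, compute capability %d.%d\n",
                    i, prop.name,
                    prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
                    prop.major, prop.minor);
    }
    return 0;
}

On a node fitted with a Tesla P4 this simply reports the card by name; the nvidia-smi utility gives the same information without compiling anything.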
 
