Amazon Web Services, Inc. recently announced the availability of P2 instances, a new GPU instance type for Amazon Elastic Compute Cloud (EC2). The technology takes on the challenge of compute-intensive applications, such as artificial intelligence workloads, that require massive parallel floating point performance. According to Amazon, industries that rely on computational fluid dynamics, computational finance, seismic analysis, molecular modeling, genomics, and rendering will benefit from what are now the most powerful GPU instances available in the cloud, with up to 16 NVIDIA Tesla K80 GPUs.
The new instances promise to cut simulation times and lower costs:
“We’re able to leverage the massive amount of aggregate GPU memory and double precision floating point performance in Amazon EC2 P2 instances to fit more simulations into a single node, significantly reduce customer simulation times, and reduce the cost of running large simulations.”
As explained on the AWS website, P2 instances allow customers to build and deploy compute-intensive applications using the CUDA parallel computing platform or the OpenCL framework without up-front capital investments.
“To offer the best performance for these high performance computing applications, the largest P2 instance offers 16 GPUs with a combined 192 Gigabytes (GB) of video memory, 40,000 parallel processing cores, 70 teraflops of single precision floating point performance, over 23 teraflops of double precision floating point performance, and GPUDirect technology for higher bandwidth and lower latency peer-to-peer communication between GPUs. P2 instances also feature up to 732 GB of host memory, up to 64 vCPUs using custom Intel Xeon E5-2686 v4 (Broadwell) processors, dedicated network capacity for I/O operation, and enhanced networking through the Amazon EC2 Elastic Network Adaptor.”
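The aggregate figures in the quote above line up with per-GPU numbers for the Tesla K80, where each "GPU" counted by AWS is one GK210 chip (a K80 card carries two). A quick back-of-the-envelope check; the per-GPU figures are assumptions taken from NVIDIA's published K80 specifications, not from the article:

```python
# Sanity check of the p2.16xlarge aggregate specs quoted above.
# Per-GPU figures are assumptions based on NVIDIA's published Tesla K80
# specs (one "GPU" = one GK210 chip; a K80 card carries two).
GPUS = 16
MEM_GB_PER_GPU = 12      # GDDR5 memory per GK210
CORES_PER_GPU = 2496     # CUDA cores per GK210

total_mem_gb = GPUS * MEM_GB_PER_GPU   # 192 GB, matching the quote
total_cores = GPUS * CORES_PER_GPU     # 39,936 -- the quote rounds to 40,000

print(total_mem_gb, total_cores)       # prints: 192 39936
```

The "40,000 parallel processing cores" in the quote is simply 16 × 2,496 CUDA cores rounded up.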
Just plain cool use cases:
Clarifai provides image and video recognition APIs for some of the world’s most innovative companies:
“Deep learning plays a central role in our image and video classification APIs, and high performance GPUs in the AWS Cloud can vastly accelerate inference for our algorithms,” said Matthew Zeiler, CEO, Clarifai. “Amazon EC2 P2 instances will give us the agility to scale up to serving numerous models in parallel, delivering results faster than previously possible without massive capital expenditures, un-utilized hardware, and large data transfers. We will be able to leverage the massive aggregate single precision floating point processing capability of Amazon EC2 P2 instances to reduce inference times for our customers, and substantially reduce the cost of processing.”
MathWorks, the leading developer of mathematical computing software, helps millions of engineers, scientists, researchers, and students around the world analyze and design systems and products that are transforming the world:
“MATLAB users moving their analytics and simulation workloads onto the AWS Cloud require their analyses to be processed quickly,” said Silvina Grad-Freilich, Senior Product Manager, MathWorks. “The massive parallel floating point performance of Amazon EC2 P2 instances, combined with up to 64 vCPUs and 732 GB host memory, will enable customers to realize results faster and process larger datasets than was previously possible.”
MapD is a GPU database for interactive SQL querying and visualization of multi-billion record datasets:
“As the leader in GPU-powered databases and visual analytics applications, we are deeply invested in the emergence of large, cloud-based GPU instances and P2 is the most powerful we have seen,” said Todd Mostak, CEO and Founder, MapD. “Our performance on Amazon EC2 P2 instances is exceptional. On a dollar-to-dollar basis across a set of standard SQL benchmarks, MapD is 78 times faster on Amazon EC2 P2 instances than CPU-based solutions. Furthermore, these speedups were seen over multi-billion row datasets, speaking directly to our ability to deliver performance at scale with these instances. With this launch, our customers can now query and visualize billions of rows of data within milliseconds while enjoying the flexibility, scalability and reliability they have come to expect from AWS.”
With the growing need for GPU compute in AI workloads, and with data sets as large as those mentioned above, users need even higher GPU performance than was previously available. Matt Garman, Vice President of Amazon EC2, explains that P2 instances offer seven times the computational capacity for single precision floating point calculations, and 60 times more for double precision, than the largest G2 instance:
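Those multipliers are roughly consistent with published specs for the largest G2 instance (g2.8xlarge, with four GK104-class GPUs). A hedged sanity check; the G2 per-GPU figures below are assumptions drawn from NVIDIA's GRID K520 specifications, not from the article:

```python
# Rough sanity check of the claimed speedups over the largest G2 instance.
# G2 per-GPU figures are assumptions (GRID K520 / GK104 published specs);
# the P2 aggregate figures come from the article.
p2_fp32_tflops = 70.0
p2_fp64_tflops = 23.0

g2_gpus = 4              # g2.8xlarge exposes four GPUs
g2_fp32_per_gpu = 2.3    # approx. single precision TFLOPS per GK104
g2_fp64_ratio = 1 / 24   # GK104 double precision is ~1/24 of single

g2_fp32 = g2_gpus * g2_fp32_per_gpu   # ~9.2 TFLOPS aggregate
g2_fp64 = g2_fp32 * g2_fp64_ratio     # ~0.38 TFLOPS aggregate

print(round(p2_fp32_tflops / g2_fp32, 1))   # ~7.6x, in line with "seven times"
print(round(p2_fp64_tflops / g2_fp64, 1))   # ~60.0x
```

The 60x double precision gap reflects that the K80's GK210 chips were built for FP64 throughput, while the consumer-derived GK104 in G2 instances was not.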
“This allows top performance for compute-intensive workloads such as financial simulations, energy exploration and scientific computing.”
To learn more about AWS, visit https://aws.amazon.com.