The beauty (and savings) of running on-prem workloads with AWS GPU Spot Instances

November 29, 2016

Michele Borovac

Two of the hottest trends in technology and computing today are the worlds of Deep Learning (AI, machine learning, etc.) and Accelerated Computing (simulations, computations, etc.). It’s clear that these fields are already playing a pivotal role in technology (and our lives), and will continue to do so going forward. Robotics, self-driving cars, health and life sciences, earth and space simulations, financial derivative computations: these are just a few of the areas where these trends are playing a huge role.

The problem, however, is that traditional CPU-based computing isn’t effective for this type of processing, and purchasing on-premises GPU servers (which are effective) can be very expensive. For example, Nvidia’s DGX-1 can deliver up to 56x the performance of a CPU-only server and complete training operations 75x faster, but it also retails for $129,000 USD! It’s expensive, it limits usage to a small pool of on-premises users, and it locks you into that hardware for a 3+ year depreciation cycle. It’s powerful, but not without its own challenges.

Organizations have looked to avoid those challenges by renting GPU power. For example, Amazon lets you rent its p2.8xlarge instance type and Microsoft Azure its NC24 instance type, both of which pack multiple high-end NVIDIA GPUs. Unfortunately, one big challenge still stood in the way: getting your data to (and from) the cloud. Renting GPU hardware typically meant waiting long periods to upload data into the cloud for processing, then downloading the results back on-premises once you were done. This wasted a lot of time and ate into the cost savings, because the GPU instance had to keep running throughout.
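To see how transfer time eats into the savings, here is a back-of-envelope sketch. The dataset size and uplink speed below are purely illustrative assumptions; only the $7.20/hour On-Demand rate comes from the figures in this post.

```python
# Back-of-envelope estimate of how data transfer inflates GPU rental cost.
# Dataset size and uplink speed are hypothetical; the hourly rate is the
# On-Demand p2.8xlarge price cited in this post.

DATASET_GB = 2_000          # hypothetical 2 TB training dataset
UPLINK_MBPS = 100           # hypothetical 100 Mbit/s office uplink
GPU_RATE_USD_PER_HR = 7.20  # On-Demand p2.8xlarge rate

# Time to push the dataset over the wire (1 GB = 8,000 Mbit; seconds -> hours).
upload_hours = (DATASET_GB * 8_000) / UPLINK_MBPS / 3600

# Cost of the GPU instance sitting idle while the upload (and a matching
# download of results afterwards) completes.
idle_cost = 2 * upload_hours * GPU_RATE_USD_PER_HR

print(f"One-way transfer: {upload_hours:.1f} hours")
print(f"Idle GPU cost for the round trip: ${idle_cost:,.2f}")
```

Under these assumptions, a single round trip costs roughly 44 hours each way and $640 of idle GPU time, before any computation has even started.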

Until today, organizations have been stuck between a rock and a hard place when it came to GPU power. Velostrata changes all of this with a unique streaming-based approach to workload mobility, where your workloads begin running in the cloud within minutes, but your data stays on-premises. This eliminates the waiting downtime before and after computation, and keeps your rental costs down.

An additional advantage of Velostrata’s approach is that it opens the door to even more cost savings through GPU spot instances. Typically, the downside to spot instances is that Amazon can reclaim them abruptly, with only minutes of warning (and fixed-duration Spot Blocks run for at most six hours). That meant that using GPU spot instances for any kind of stateful app or computation wasn’t feasible without advanced customizations to keep data in sync.

But with Velostrata, since the data remains on-premises, you can easily rely on GPU spot instances to run workloads for brief (or unexpected) periods of time without any risk of data loss. Not only is it as seamless as a reserved GPU instance, but the cost savings can be huge, too. Renting the p2.8xlarge instance without a commitment can cost up to $7.20/hour, whereas a spot instance can cost as little as $2.20/hour, a savings of roughly 70%.
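The arithmetic behind that savings figure is simple enough to check. Both hourly rates come from this post; the 24-hour run length is just an illustrative assumption:

```python
# Spot vs. On-Demand savings for the p2.8xlarge rates quoted above.
ON_DEMAND = 7.20   # $/hour, On-Demand p2.8xlarge
SPOT = 2.20        # $/hour, example spot price

savings_pct = (ON_DEMAND - SPOT) / ON_DEMAND * 100

# Cost of a hypothetical 24-hour training run at each rate.
run_hours = 24
on_demand_cost = run_hours * ON_DEMAND
spot_cost = run_hours * SPOT

print(f"Savings: {savings_pct:.1f}%")
print(f"24h run: ${on_demand_cost:.2f} On-Demand vs ${spot_cost:.2f} spot")
```

At these rates, a day-long run drops from $172.80 to $52.80, roughly a 70% reduction.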

Combining Velostrata’s streaming-based approach to running workloads in the cloud with Amazon’s Spot GPU instances finally gives organizations the power to cost-effectively run complex workloads on GPUs in the cloud. Whether it’s training a deep learning neural network, running expansive Monte-Carlo simulations, or something completely new, Velostrata and the cloud are cost-effectively democratizing advanced computing so anyone can realize their ideas.

If you’d like to learn more, you can view a short demo of how easy Velostrata can make it to leverage GPU spot instances.