Hybrid Cloud: How to Move a Two-Tier Application to the Cloud (and Back)

By: Shahar Glixman | August 1, 2016

This blog post describes how I used Velostrata to move a four-VM, two-tier application (Dell Benchmark Factory) running on VMware in my private data center to the public cloud (AWS in this example) in a matter of minutes, so that I could test the application’s deployment, performance, and various optional configurations in the cloud.


Dell Benchmark Factory (DBF) is a database performance testing tool that enables IT managers to conduct database workload replay, benchmark performance, and test scalability. In my case, the system comprised the DBF controller VM (which also serves as an agent), two DBF agent VMs, and an MS-SQL Server VM.

I used the DBF controller to create a 1 TB workload on the SQL Server. I configured DBF as a two-tier application with three agents loading the SQL Server. All VMs were hosted on a single VMware ESX server.

Here is a vSphere view of the system running on-premises:

To start, I ran the DBF TPC-E benchmark for several hours on-premises as a baseline. The results show ~90 TPS with an average latency of 116 ms.

Looking at the SQL performance counters, it is clear that the CPU is the bottleneck. Since I don’t have spare ESX hosts in my private data center, I decided to test the application in the public cloud. Without Velostrata, a full migration of the application to the cloud would take a while, but I wanted to see how the application would behave in the cloud before actually deciding to migrate it. This is a perfect use case for Velostrata.


I had already deployed the Velostrata management console in my data center, which took less than 30 minutes (this will be described in a separate blog post). I had also created a Velostrata Cloud Extension (CE) in AWS, inside a virtual private cloud (VPC). A Cloud Extension is the infrastructure set up by Velostrata in the AWS VPC that serves as an intelligent proxy to the data center for workload VMs.

Velostrata’s “run-in-cloud” operation moves a VM from the data center to the cloud without moving its disks. This is done by creating a remote VM in the cloud and using smart streaming and caching algorithms to boot the cloud VM almost in real time. For more information on Velostrata’s technology, visit our Technology page.
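The post doesn’t disclose Velostrata’s internals, but the general idea of booting against remote storage can be illustrated with a minimal read-through block cache. Everything here (block size, class and function names) is a hypothetical sketch, not Velostrata’s implementation:

```python
# Illustrative sketch only: a minimal read-through block cache, loosely
# analogous to running a cloud VM whose disks still live on-premises.
# All names and the block size are hypothetical.

BLOCK_SIZE = 4096  # bytes; assumed for illustration


class ReadThroughCache:
    def __init__(self, fetch_remote):
        self.fetch_remote = fetch_remote  # callable: block index -> bytes
        self.cache = {}                   # block index -> bytes

    def read_block(self, index):
        # Serve from the local cache when possible; otherwise stream the
        # block from the remote datastore and keep a local copy.
        if index not in self.cache:
            self.cache[index] = self.fetch_remote(index)
        return self.cache[index]


# Usage: an in-memory dict stands in for the on-premises datastore.
remote = {i: bytes([i % 256]) * BLOCK_SIZE for i in range(8)}
cache = ReadThroughCache(remote.__getitem__)
first = cache.read_block(0)   # fetched over the "network"
second = cache.read_block(0)  # served from the local cache
```

This is why only the blocks a VM actually touches need to cross the wire at boot, rather than the whole disk.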

Here is a view of the Velostrata Cloud Extension VMs deployed in the cloud:

Now – let’s start the fun!

I will now run the 4 VMs in the cloud. To do this, I select the VMs from within vCenter, right-click -> Velostrata Operations -> Run in Cloud:

The Run in Cloud wizard starts, allowing me to configure the details of the operation.

I select the Cloud Extension I want (I can have several CEs – in different cloud providers, different regions and zones):

Next, I select the instance type. Here I selected large instances, so that CPU will not be the bottleneck, as it was in my data center.

I select a Storage Policy: “Write Isolation”

Velostrata provides two storage policy options:

Write Back – With this option, writes by managed VMs in the cloud are asynchronously and automatically synchronized back to the data center. This means that if I want to return the VM to the data center at some point, it will already be fully synchronized and I can do it in minutes.

Write Isolation – In this case, writes by managed VMs in the cloud are kept in the cloud, and not synchronized to the data center. In my case, since I only want to test the application in the cloud, I don’t need to synchronize the changes back to the data center.
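The difference between the two policies can be sketched in a few lines. This is a hypothetical model for illustration only (the class, names, and synchronous copy are my own simplifications, not Velostrata’s code):

```python
# Illustrative sketch of the two storage policies described above.
# Names and structure are hypothetical, not Velostrata's implementation.

class CloudDisk:
    def __init__(self, policy):
        assert policy in ("write-back", "write-isolation")
        self.policy = policy
        self.cloud_writes = {}     # writes that landed in the cloud
        self.datacenter_disk = {}  # the on-premises copy

    def write(self, block, data):
        self.cloud_writes[block] = data
        if self.policy == "write-back":
            # In reality this synchronization is asynchronous; it is
            # modeled here as an immediate copy for simplicity.
            self.datacenter_disk[block] = data
        # Under write-isolation, the on-premises disk stays untouched.


wb = CloudDisk("write-back")
wb.write(7, b"new data")  # also lands on the data center copy

wi = CloudDisk("write-isolation")
wi.write(7, b"new data")  # stays in the cloud only
```

With write isolation, discarding the cloud VMs after the test leaves the on-premises disks exactly as they were.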

Next, I select the security groups (AWS firewall rules):

Finally – select the cloud subnet and the Edge Node (Velostrata Cloud Extensions always have 2 Edge Nodes, usually located in separate cloud zones for redundancy):

That’s it! You’re ready to go. Here is the operation summary:

Click Finish.

Now the Velostrata system stops the data center VMs, takes a VMware snapshot and moves the VMs to the cloud.

Overall, moving the 4 VMs, with ~2 TB of disk, to the cloud took 10 minutes (including the time it took me to prepare a cup of coffee). By the way, even if the app had 10 TB disks, it would still have taken just 10 minutes, thanks to Velostrata’s intelligent streaming, optimization, and caching technologies:

And here, you can see the VMs in the cloud:

In general, I never have to connect to the AWS console, since the cloud instance information appears in my vCenter. I can now connect to the VMs in the cloud via RDP, just as I’m used to, and their DNS names stay the same.

At first, the VMs seem to be working a bit slowly while the Cloud Extension fetches data blocks from the VM storage in the data center and caches them, but in a short time my DBF system performs as well as it did in the data center.

I then connect to the cloud SQL VM, and I can see all disks:

I can now start the DBF test again – in the cloud.

I started DBF, and after 30 minutes saw higher throughput than in my data center (110 TPS in the cloud vs. 90 TPS in my data center):

Looking at the SQL VM, I see that CPU utilization is still at 100% – a clear bottleneck.

Since I want higher throughput, I will reconfigure the SQL VM with a larger instance type.

Velostrata lets me change the cloud instance type straight from vCenter: I right-click the VM -> Velostrata Operations -> Reconfigure Cloud Instance:

I select a larger instance type, m4.10xlarge:

After a quick reboot, I can connect to the SQL server again, and restart the test.

See below: DBF now reaches 200 TPS, more than double the on-premises result!
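As a quick sanity check on the figures quoted in this post, the gains work out as follows:

```python
# Throughput figures quoted in this post.
on_prem_tps = 90        # original on-premises baseline
cloud_tps = 110         # first cloud run, before resizing
big_instance_tps = 200  # after resizing the SQL VM to m4.10xlarge

gain_vs_cloud = big_instance_tps / cloud_tps      # ~1.82x over the first cloud run
gain_vs_on_prem = big_instance_tps / on_prem_tps  # ~2.22x over on-prem: "more than double"
```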

The SQL Server CPU is at ~50%.

When done with the test, I can return the VMs to my data center simply by right-clicking the VMs in vCenter and selecting Run On Premises.


Using Velostrata, I was able to finish testing my application’s performance in the cloud by lunchtime. I moved my 4 VMs from on-premises VMware to the AWS public cloud, performed two cycles of performance tests to rightsize my setup, tuned the VMs, and returned them to my data center.

Now, what can I get done before dinner? 

Shahar Glixman
Shahar is part of the founding team at Velostrata and is responsible for researching and directing performance analysis and optimization for the company. His career spans more than 20 years in various software engineering and R&D leadership roles, with a primary emphasis on performance and optimization. Most recently, Shahar was a senior R&D engineer on the recommendation team at Outbrain, developing infrastructure and algorithms for Big Data analytics. Prior to Outbrain, Shahar was a systems architect at Wanova. Prior to Wanova, Shahar held R&D management and development positions at Cisco, Actona, VSoft, and Medivision. Shahar holds a BSc in Computer Engineering from the Technion, Israel Institute of Technology.