Accelerating the Dev/Test Cycle in Production Environments Using the Cloud
July 28, 2016
One use case we frequently hear about from our customers involves leveraging the public cloud to provide a scalable and agile testing environment for stateful production workloads. For example, an IT operations manager may want to create a replica of a real production database, perform any desired testing and fixes, and then re-evaluate the application and results.
In many cases, these tests require resources that may not be available on premises, such as additional CPU, RAM and IOPS. Performance testing can be even more demanding from a resource perspective, especially when evaluating the effect that a capacity increase can have on application performance.
In this blog, I’ll explain how IT can use Velostrata to quickly and easily leverage the public cloud to speed up Dev/Test processes. Our environment includes a typical enterprise application running on a production PostgreSQL database server. Once the application has been modified or upgraded, it needs to be tested. Once tests complete successfully, the changes made to the original application can be merged, accepted, and then brought to production. In a traditional Dev/Test environment, a full replica of the production database must be created, and the modified application then executes against the replicated database. This database replication can be prohibitively expensive in both time and space. Velostrata addresses this challenge simply and easily.
Leveraging Public Cloud for Dev/Test
Velostrata makes it possible to extend your data center environment quickly and easily to the public cloud (AWS, Azure), so you can take advantage of its scalability and flexibility while avoiding an increased burden on your data center. With Velostrata, you can create a clone of the production system and move and run this clone in the cloud in minutes, leveraging Velostrata’s workload streaming and caching.
This approach reduces (and in some cases eliminates) the infrastructure footprint for:
- CPU and RAM required for the test workload
- Storage IOPS required by test activities
- Storage capacity required by test activities
Creating the Test Setup
Figure 1 illustrates the configuration of the test environment in AWS. First, a VPN is configured, and a Velostrata Cloud Extension is deployed using the built-in “add cloud extension” operation to connect our data center to AWS. For more information about how Velostrata works, download our whitepaper.
Next, in order to create an isolated test environment, we’ve created a dedicated security group in AWS VPC with the following security rules:
- Incoming: 5432 (PostgreSQL), 3389 (RDP)
- Outgoing: None
This allows the dev application to connect to the database, while keeping the test replica isolated from the production environment.
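In boto3 terms, the ingress rules above map to a security-group permission spec along these lines. This is a minimal sketch: the on-prem CIDR and the final API call are assumptions, not taken from our actual setup.

```python
# Ingress rules for the isolated test security group.
# ONPREM_CIDR is a hypothetical placeholder for the data center's address range.
ONPREM_CIDR = "10.0.0.0/16"

ingress_permissions = [
    {
        "IpProtocol": "tcp",
        "FromPort": 5432,  # PostgreSQL
        "ToPort": 5432,
        "IpRanges": [{"CidrIp": ONPREM_CIDR}],
    },
    {
        "IpProtocol": "tcp",
        "FromPort": 3389,  # RDP
        "ToPort": 3389,
        "IpRanges": [{"CidrIp": ONPREM_CIDR}],
    },
]

# With boto3, these would be applied to an existing group roughly as:
#   ec2.authorize_security_group_ingress(GroupId=sg_id,
#                                        IpPermissions=ingress_permissions)
# No egress permissions are defined, matching the "Outgoing: None" rule above.

print(sorted(p["FromPort"] for p in ingress_permissions))
```

Because the group allows only these two inbound ports and nothing outbound, the test replica can serve the dev application while staying unable to reach back into production.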
Figure 1: Configuring a Test Environment
Finally, we make a copy of the production database by taking a VMware snapshot and creating a linked clone from this snapshot. This is a quick operation that consumes virtually no resources, in terms of either CPU or on-prem disk capacity. Furthermore, linked clone creation doesn’t require stopping or rebooting the production VM.
Moving to Cloud
Once the linked clone is ready, we issue the “run in cloud” operation on the copy using Velostrata’s context menu within vCenter or via the corresponding PowerShell cmdlet.
We place the VM in the isolated subnet that we’ve created, and we select “Write Isolation” mode to keep all temporary data created by the application in the cloud:
One of Velostrata’s unique benefits is that only the blocks that are required for boot are transferred, while other blocks are brought on-demand or in the background if and when needed.
Our production VM contains a number of attached disks with a total size of 90GB. Normally, assuming a generic 35%–40% compression ratio, moving the VM content to AWS over a WAN link (ours is 10Mbps with 80ms RTT) would take around 12 hours.
However, since Velostrata only needs to fetch the blocks required for boot and application startup, less than 4GB are read from on-prem storage (reported in real time under the Velostrata monitoring tab in the vSphere UI; see the graphs below for detail). Furthermore, Velostrata applies compression and deduplication optimizations, so only 156MB must be transferred over the WAN link. In less than 8 minutes, the instance is running in the cloud, and most of that time is spent provisioning the cloud instance in AWS.
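These numbers are easy to sanity-check with back-of-the-envelope arithmetic. The sketch below ignores protocol overhead and assumes the link is fully utilized:

```python
def transfer_hours(size_gb: float, link_mbps: float, compression: float) -> float:
    """Estimated WAN transfer time in hours for a given payload.

    compression is the fraction of data removed (e.g. 0.375 for ~37.5%).
    """
    effective_gb = size_gb * (1 - compression)
    megabits = effective_gb * 1024 * 8  # GB -> megabits (binary units)
    return megabits / link_mbps / 3600

# Full 90GB image at ~37.5% compression over a 10Mbps link:
print(round(transfer_hours(90, 10, 0.375), 1))  # 12.8 hours, i.e. "around 12h"

# Velostrata's 156MB optimized payload over the same link:
print(round(156 * 8 / 10 / 60, 1))  # about 2.1 minutes of wire time
```

The roughly two minutes of wire time for 156MB fits comfortably inside the sub-8-minute total, with the remainder going to AWS instance provisioning.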
This offload allows us to spin up a number of test systems without affecting our on-prem capacity. Once the VM is running, we can perform functional testing.
So, now we’ve completed the functional testing, but what about performance testing? Can we experiment with multiple instance types to find out which is the most cost-effective one?
We definitely can. AWS provides a wide range of instance types balanced for different needs, such as C4 instances for CPU-intensive workloads or R3 instances for RAM-intensive workloads. All we need to do is reconfigure our cloud VM to use a larger instance, which can be done easily from the Velostrata plugin in the VMware vCenter management console:
Once the instance type is changed and the VM is rebooted, we can run the relevant performance tests to verify that the new dev code doesn’t introduce any regressions.
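Comparing instance types for cost-effectiveness can be reduced to a simple cost-per-work metric once each type has been benchmarked. The prices and throughput figures below are hypothetical placeholders, not real AWS pricing or measured results:

```python
# Pick the most cost-effective instance type for a benchmarked workload.
# Hourly prices and transactions-per-second values are hypothetical
# placeholders; real values would come from AWS pricing and your own tests.
candidates = {
    # name: (usd_per_hour, measured transactions per second)
    "c4.xlarge": (0.199, 1800),
    "c4.2xlarge": (0.398, 3300),
    "r3.xlarge": (0.333, 1900),
}

def cost_per_million_tx(price_per_hour: float, tps: float) -> float:
    """Dollars spent per million transactions at the measured rate."""
    tx_per_hour = tps * 3600
    return price_per_hour / tx_per_hour * 1_000_000

best = min(candidates, key=lambda name: cost_per_million_tx(*candidates[name]))
print(best)
```

A larger instance isn’t automatically cheaper per unit of work: with these sample numbers, doubling the hourly price buys less than double the throughput, so the smaller C4 wins.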
Integrating with an Automated Test Infrastructure
The flow above works well for demonstrations and on-demand testing. Most environments, however, employ periodic testing (e.g., a nightly run), which requires integration with an automated test environment.
This can be done easily using Velostrata PowerShell cmdlets. Using simple scripts, such as the one below, we can perform setup and teardown of the test copy.
We’ve seen that, by using Velostrata software, we can create a test copy of a production system in a very short time, without being limited by on-prem resources. This makes it possible to plan VM infrastructure more efficiently while gaining the flexibility to adapt to new requests or business changes.
In my next blog, we’ll scale up the system, focusing on multi-tiered, multi-VM applications with inter-VM dependencies.