How to Accelerate Your Transition to the Cloud

In 2020, it is no exaggeration to say that if your application is not cloud-based, customers may not consider using it at all; with public cloud adoption rates rising to 92% in 2018, it’s clear to see why. Having dedicated systems running native applications that require maintenance by individual organizations creates a coordination headache that cloud technology readily solves. I have been involved with companies that developed shrink-wrapped and downloadable software, applications that run entirely in the cloud in a Software-as-a-Service (SaaS) environment, and hybrids of the two models. Each has its own challenges.

Cloud Migration: Why Migrate in the First Place?

Perhaps the most important reason to get our application(s) into the cloud is the enhanced control we gain over deploying and updating them; this dovetails nicely with continuous delivery initiatives. There is a definite benefit to having everything under one roof from a product perspective. When I have managed releases that required shipping to customers, versus a single deploy-and-roll-out to hosts in the cloud, the cloud option was far easier to manage. I didn’t have to wonder whether customers had picked up the latest release or deployed it in a timely manner.

Code deployment can be faster because we decide which features to release and when. SaaS applications benefit from cloud deployment because smaller updates can be built and pushed more frequently. By deploying more often and in smaller chunks, the odds of a catastrophic issue getting out are considerably lower. What’s more, if an issue is found, testing and redeploying a fix can be done much more quickly.

Making a One-to-One Move: Lift and Shift

In many environments, if the application(s) in question are already set up in an environment that can be easily duplicated in the cloud, migration can be as simple as provisioning a cloud environment, sizing the system, and then pushing the application and its related components to the new server(s). This process of hosting a system in the cloud without any rework at all is called “Lift and Shift”.
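
As a minimal sketch of that last step (assuming the cloud server has already been provisioned and is reachable over SSH, and with the host, paths, and service name below as placeholders), pushing the application over unchanged can be as simple as:

    # Hypothetical illustration of the "push" step of a Lift and Shift:
    # copy the application as-is to the newly provisioned cloud server and
    # restart it there. Host, paths, and service name are placeholders;
    # assumes rsync and SSH access to the cloud instance.
    import subprocess

    CLOUD_HOST = "deploy@cloud-server.example.com"  # hypothetical instance

    def lift_and_shift(local_app_dir: str = "/opt/myapp/",
                       remote_app_dir: str = "/opt/myapp/") -> None:
        # Copy the application and its related components unchanged.
        subprocess.run(
            ["rsync", "-az", local_app_dir, f"{CLOUD_HOST}:{remote_app_dir}"],
            check=True,
        )
        # Restart the same service definition on the cloud host.
        subprocess.run(
            ["ssh", CLOUD_HOST, "sudo", "systemctl", "restart", "myapp"],
            check=True,
        )

    if __name__ == "__main__":
        lift_and_shift()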

There are some immediate benefits to this approach:

  •  First and foremost, there is no retrofitting of the application from dedicated hardware to cloud hosting. If it is determined that more memory, CPU, or storage is needed, the cloud instance can be resized to fit the system’s needs.
  •  Project teams can avoid or delay the heavy overhead (time & cost) of refactoring existing legacy applications to be cloud native.
  •  Additionally, Lift and Shift gives customers the protection of cloud-based system backup and recovery if needed.

There are challenges with this approach as well. If our application is inefficient on native hardware, it won’t be any more efficient in the cloud. In fact, while the ability to add resources may feel like a benefit, the cost per system could increase, meaning Lift and Shift could cost more in the long run.

Refactoring Applications for the Cloud

Another aspect of cloud migration is that applications can be separated into component pieces and deployed as microservices. The benefit here is that a monolithic application can be split into smaller applications, each of which can be deployed on the same server or on independent servers. This way, system-intensive components can be given the resources they need, while less intensive components can be set up on systems that do not require as much horsepower or storage.
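
To make this concrete, here is a minimal sketch. Assume a hypothetical monolith whose report-generation component is far more resource-hungry than the rest of the application; pulled out as its own small service (sketched here with Flask, with placeholder names and ports), that one piece can be sized, deployed, and scaled independently of everything else.

    # Hypothetical illustration: a resource-hungry reporting component,
    # split out of a monolith and run as its own small service so it can
    # be deployed and sized independently. Names and ports are placeholders.
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/reports/<report_id>")
    def build_report(report_id):
        # In the monolith, this logic competed for resources with everything
        # else; as a separate service it can live on a larger instance.
        return jsonify({"report": report_id, "status": "generated"})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5001)

The rest of the application then calls this service over HTTP, and each piece can be sized to its own workload.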

Additionally, this separation of labor means deployment no longer has to be “all or nothing”. Using a microservices model, if an update is made to a front-end component, there is no need to rebuild the entire system to take advantage of the change. Changing a back-end service can likewise be done without requiring a full rebuild and deploy. Setting up a front-end cluster or a back-end replication service can be separate operations, performed on an as-needed basis.

Speeding Up System Setup with Terraform and Containers

An additional benefit of cloud systems is that setup rules (defined with a tool such as Terraform) can spin up a new system with the needed parameters, and containers can be used to quickly stand up a base application for new customers. This allows for quick, repeatable setup of new environments. Depending on your cloud provider, there may be other tools that help make new system setup quick and efficient. Setup rules can also be used to create systems with multiple front ends behind a load balancer, or systems where individual microservices get their own dedicated resources and containers if desired.
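
As a minimal sketch of that kind of repeatable setup, assume the environment is described by Terraform configuration kept in a hypothetical ./infra directory, with a per-customer variables file; a small wrapper script can then create (or update) an environment with a single, repeatable command:

    # Hypothetical illustration: wrap the Terraform CLI so a new customer
    # environment can be created with one repeatable command. Assumes
    # Terraform is installed, ./infra holds the configuration, and the
    # per-customer .tfvars file name is a placeholder.
    import subprocess

    def provision_environment(customer: str, infra_dir: str = "./infra") -> None:
        """Spin up (or update) a cloud environment for the given customer."""
        var_file = f"{customer}.tfvars"  # hypothetical per-customer settings
        subprocess.run(["terraform", "init"], cwd=infra_dir, check=True)
        subprocess.run(
            ["terraform", "apply", "-auto-approve", f"-var-file={var_file}"],
            cwd=infra_dir,
            check=True,
        )

    if __name__ == "__main__":
        provision_environment("new-customer-staging")

Because the configuration lives in version control, every environment gets built the same way, and tearing one down is just as repeatable.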

Do You Have to Be “All In”?

A “Lift and Shift” arrangement brings everything over to a cloud server, and breaking a system up into smaller microservices can also make deployment easier, yet some people ask whether it makes sense to deploy some assets in the cloud and keep other assets on locally hosted equipment. While it is possible to make a setup that works this way, there are notable disadvantages. The first and largest is that cloud systems can leverage proximity for best performance. Many cloud providers host their services in geographical regions, so to get the best performance and throughput, it makes sense to set up machines in the same region and cloud segment where possible.

Is it possible to set up services where some of the application is in the cloud and other parts are on dedicated hardware? If latency and transaction delays are not critical, then yes, it is possible to set up a system where, for example, the entire front end is hosted in the cloud and the back end is hosted locally. However, be aware that the greater the distance between the cloud hosts and your native hardware, the greater the latency, to the point where transactions may time out or become unresponsive.
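
If you are weighing a split like this, it helps to measure before committing. Here is a minimal sketch, using only the Python standard library and a placeholder URL standing in for the on-premises back end, that samples the round-trip time a cloud-hosted front end would actually see:

    # Hypothetical illustration: sample the round-trip latency from a
    # cloud-hosted front end to an on-premises back end before committing
    # to a split deployment. The endpoint URL is a placeholder.
    import statistics
    import time
    import urllib.request

    BACKEND_URL = "https://backend.example.internal/health"  # hypothetical

    def sample_latency(samples: int = 10, timeout: float = 5.0) -> None:
        timings = []
        for _ in range(samples):
            start = time.perf_counter()
            with urllib.request.urlopen(BACKEND_URL, timeout=timeout) as response:
                response.read()
            timings.append((time.perf_counter() - start) * 1000)  # milliseconds
        print(f"median: {statistics.median(timings):.1f} ms, "
              f"max: {max(timings):.1f} ms")

    if __name__ == "__main__":
        sample_latency()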

If unsure about what and where to start, choose a smaller customer or, better yet, set up a staging site and experiment; this could be a great pilot project. Create a representative-sized cloud environment. Perform a Lift and Shift first to see if your system will run effectively as-is in the cloud. If it does, examine the system and see if there are components that could be separated out.

How Does This Relate to Testing?

It may seem like this is a lot of modification that mainly benefits the development environment, but what about testing? How will testing improve or benefit from this arrangement? One benefit I typically see is that, with the ability to put an environment in the cloud, setting up and getting ready to test is a much quicker process. Using containers such as Docker, an environment can be stood up with everything in place in a matter of minutes, rather than performing a lengthy setup and install on a dedicated machine. Hans Buwalda discusses the benefits of virtualization and containers at great length on the LogiGear blog. Scaling machines for performance or load testing is as simple as shutting down a machine, resizing it, and starting it up again. By setting up the rules for deploying the system and installing software, I can integrate these steps into a deployment pipeline (say, with Jenkins and the Blue Ocean plug-in) so that each machine gets set up, each environment spins up as expected, and the code I want to test gets pushed automatically with a minimum of fiddling on my part.
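
As a minimal sketch (assuming Docker is running locally, the docker Python SDK is installed, and the image name is a placeholder), spinning up a disposable test environment can look like this:

    # Hypothetical illustration: spin up a disposable test environment in a
    # container. Assumes Docker is running and the "docker" Python SDK is
    # installed; "myapp:latest" is a placeholder image name.
    import docker

    def start_test_environment(image: str = "myapp:latest"):
        client = docker.from_env()
        return client.containers.run(
            image,
            detach=True,                      # run in the background
            ports={"8080/tcp": 8080},         # expose the app to the test run
            environment={"APP_ENV": "test"},  # hypothetical test configuration
            name="app-under-test",
        )

    if __name__ == "__main__":
        container = start_test_environment()
        print(f"Test environment up: {container.short_id}")
        # ... run the test suite against http://localhost:8080 ...
        container.stop()
        container.remove()

The same container definition can then be reused from a pipeline step, so the environment a tester spins up locally matches what the pipeline builds and deploys.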

This benefits testers in that we spend less time on the minutiae of setup and more time actually testing and exploring features and changes. DevOps teams also benefit in that updates and rollouts can be performed faster. Overall, everyone wins if time and attention are invested in setting up and maintaining this kind of environment.

Conclusion

There are numerous options and approaches to getting our applications into the cloud. In some cases, performing a Lift and Shift may be the fastest way to get there, but it may not be the most efficient or cost-effective way in the long run. It may make sense to do a Lift and Shift to start and then refactor our application(s) so that we can pull apart the components and take advantage of what cloud infrastructure and its available tools can offer us. Additionally, the platform we use may well determine the trajectory of future development work, and the potential for vendor lock-in (and its associated costs) needs to be considered. At the end of the day, being in the cloud isn’t magic; we are still using someone else’s computing services and paying for them. The benefits of that arrangement, however, may make a big difference in how our organization uses its development and IT resources, producing more flexible applications and services for everyone.

Michael Larsen
Michael Larsen is a Senior Automation Engineer with LTG/PeopleFluent. Over the past three decades, he has been involved in software testing for a range of products and industries, including network routers & switches, virtual machines, capacitance touch devices, video games, and client/server, distributed database & web applications.

Michael is a Black Belt in the Miagi-Do School of Software Testing, helped start and facilitate the Americas chapter of Weekend Testing, is a former Chair of the Education Special Interest Group with the Association for Software Testing (AST), a lead instructor of the Black Box Software Testing courses through AST, and former Board Member and President of AST. Michael writes the TESTHEAD blog and can be found on Twitter at @mkltesthead. A list of books, articles, papers, and presentations can be seen at http://www.linkedin.com/in/mkltesthead.