How to Accelerate Your Transition to the Cloud: Pt 2

This article is a follow-up to How to Accelerate Your Transition to the Cloud, in which we discussed the reasoning for migrating to the cloud, considerations for moving (most notably “lift and shift”), and whether QA leaders need to be all in. It can be read online at LogiGear.com/blog

What Comes Next After “Lift and Shift”?

The idea of “lift and shift” is that we as an organization take our existing product and its infrastructure and put them into the cloud “as is.” We literally pick up our in-house hosted product and move it in its entirety up to the cloud, making no other changes. This is a common procedure, and many companies take this approach. However, if we stop there and make no further changes, we greatly limit how that product will run and how it can––or even whether it will––scale once it is up in the cloud.

I remember well the process of moving a product I was working on that had both on-premises and in-cloud hosting arrangements. Because of this, many of the changes we wanted to make had to be done in a measured, time-based manner, as a number of our customers’ security requirements obligated us to conform to their in-house hosting agreements. This was what made us perform the “lift and shift” process in the first place: it freed up resources and let us utilize the cloud while staying in sync with our on-premises customers and their needs. Over time, we realized that the platform we had chosen to use and develop would be greatly enhanced by moving away from a monolithic system and allowing microservices to perform discrete tasks.

Decoupling from the Main Application

Our first challenge was to determine how we could get the content that was once monolithic to display in the product through a microservices model. To help illustrate this: the product we initially designed was based on a dashboard and web page model, where widgets could be loaded and configured to display information. Initially it was a single product: it focused on a single database, used a dedicated open source search engine, and had some areas for customizing the look and feel of the product. There were few moving parts, and overall it was a system that could be hosted on a single platform if desired (our term for this arrangement was an “All-in-One Appliance”).
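To make that shape concrete, here is a minimal sketch of what such a dashboard-and-widget model might look like in code. This is purely illustrative: the class and field names are hypothetical, not the product’s actual schema.

```python
# A toy model of the original dashboard: widgets loaded into a page, each
# configured to display information from a single source. All names here
# are illustrative, not the product's actual schema.
from dataclasses import dataclass, field


@dataclass
class Widget:
    name: str
    data_source: str                     # in the monolith, effectively one database
    settings: dict = field(default_factory=dict)


@dataclass
class Dashboard:
    title: str
    widgets: list[Widget] = field(default_factory=list)


dashboard = Dashboard(
    title="Operations Overview",
    widgets=[
        Widget("recent-activity", data_source="primary-db"),
        Widget("search-results", data_source="search-engine",
               settings={"max_results": 10}),
    ],
)
```

In a monolith, every `data_source` ultimately points at the same place; the microservices question becomes what happens when each widget can point somewhere different.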

The issue, of course, comes into play when, by putting an application into the cloud and offering it to more people, a monolithic system starts to create bottlenecks. More to the point, when our original little company was acquired by a larger corporation, they saw the benefit of our “widget display” dashboard and asked whether we could expand the product so that their other applications could likewise be accessed through that same dashboard.

This was the beginning of what would be an odyssey that is still ongoing, but we have accomplished a lot toward making this request possible. In short, we realized this was no longer going to be a one-database, one-data-feed product. Instead, we would need to take advantage of various microservice technologies, leverage multiple servers and databases, and have those servers and services able to talk to each other.

Can You Hear Me Now? Good!

For those familiar with the old Verizon Wireless™ TV ads, there was a guy who walked around with a phone saying repeatedly, “Can you hear me now? Good!” I mention this specifically because, when you move to microservices from a more linear product, this is what you will spend a fair amount of your time doing. In our case, it was less about multiple products sharing information with each other (that comes later and has its own share of headaches); even getting a single dashboard to display information from other systems was a challenge!

One of the ways we set this up and tested it was with a piece of code shared between all of our servers: based on what was being displayed, we would load the URL to download the microapp we had created, and if all was successful, we would see a screen with some strings printed in the widgets in question. This was the smoke test we ran before any other type of testing. If we didn’t see the strings appear, we knew we couldn’t communicate with the downstream service. This was a much-needed first step: making sure basic communication worked reliably before we went into further testing.
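For illustration, here is a minimal sketch of that kind of smoke test, assuming hypothetical widget URLs and marker strings (our actual check was internal code, but the shape was the same): fetch each widget’s microapp URL and confirm the expected string comes back.

```python
import urllib.request

# Hypothetical widget endpoints and the marker string each one should render
# if the downstream microapp is reachable. URLs and markers are illustrative,
# not our actual configuration.
WIDGET_CHECKS = {
    "status-widget": ("https://dashboard.example.com/widgets/status", "STATUS_OK"),
    "search-widget": ("https://dashboard.example.com/widgets/search", "SEARCH_OK"),
}


def smoke_test() -> bool:
    """Fetch each widget URL and confirm its marker string appears."""
    all_ok = True
    for name, (url, marker) in WIDGET_CHECKS.items():
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                body = resp.read().decode("utf-8", errors="replace")
            ok = marker in body
        except OSError:
            ok = False  # unreachable downstream service counts as a failure
        print(f"{name}: {'PASS' if ok else 'FAIL -- downstream service unreachable'}")
        all_ok = all_ok and ok
    return all_ok


if __name__ == "__main__":
    raise SystemExit(0 if smoke_test() else 1)
```

A check like this runs in seconds and immediately tells you whether a failure is in the widget itself or in the communication path to the downstream service, which is exactly the question that matters before deeper testing begins.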

The Integration Challenge

With an initial lift and shift, there may be few additional items that need to be tested, or at least a finite number. By moving to a microservices model––especially if multiple organizations can create and leverage those microservices––the integration challenge becomes more pronounced. How many tools now need to work together? Whose fault is it if a service doesn’t display? If an underlying platform change is needed, will it have a ripple effect on others?

I well remember a time when we were working with a status tool that let people see whether individuals were available based on their last online actions. It worked great for our team: all timing elements worked well and we had no issues seeing how those close to us were doing. The issues came in with people located farther away, whose “local” servers were a long way from the platform server(s); we needed to make changes to ensure we were polling the microapp frequently enough to get an accurate picture of when they were actually there. This is a simple example, but imagine moving to microapps that deal with multiple groups and share data with those groups. How does the data integrity get verified? How do you know that what you have on machine 1 is representative of what’s on machine 2? Also, consider the number of polling requests that would need to be made. Is that an efficient process, or could it ultimately use up so many cycles as to be a drain on your resources and, ultimately, your hosting budget?
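As a rough sketch of that polling trade-off, assuming a hypothetical presence endpoint and made-up intervals (the real service and its payload were internal): poll each user’s status, flag data that has gone stale, and keep the request volume in view.

```python
import time
import urllib.request

# Illustrative values only; real intervals would be tuned per deployment.
POLL_INTERVAL_SECONDS = 30   # how often we ask the remote microapp
STALE_AFTER_SECONDS = 120    # how old a status can get before we distrust it

# Hypothetical presence endpoint.
STATUS_URL = "https://presence.example.com/status/{user_id}"


def poll_presence(user_ids: list[str], rounds: int = 10) -> None:
    """Poll each user's presence endpoint and flag stale data."""
    last_seen: dict[str, float] = {}
    for _ in range(rounds):
        for user_id in user_ids:
            try:
                url = STATUS_URL.format(user_id=user_id)
                with urllib.request.urlopen(url, timeout=5) as resp:
                    if resp.status == 200:
                        last_seen[user_id] = time.time()
            except OSError:
                pass  # a network failure is just a missed poll
            age = time.time() - last_seen.get(user_id, 0.0)
            if age > STALE_AFTER_SECONDS:
                print(f"{user_id}: presence data is stale ({age:.0f}s old)")
        time.sleep(POLL_INTERVAL_SECONDS)
```

The arithmetic is worth doing up front: at a 30-second interval, 10,000 users works out to 86,400 ÷ 30 × 10,000 = 28.8 million requests per day, which is exactly the kind of load that quietly inflates a hosting bill.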

When One Database Doesn’t Rule Them All

One of the bigger challenges I faced with a microservices approach was that updates required database migrations. When these systems are updated, the migrations often need to be run in a sequential manner. With a single database, this isn’t as difficult to keep track of as it is when you have multiple databases that need to stay in sync. One of our more involved testing projects went into creating what we called an “Appliance Update” checklist. We would poll a server and, in the process, get back a number of version strings. Those version strings told us which migrations had been run and which hadn’t, and helped us create a pipeline we could run to make those updates, whether there was one or 20. If the system in question was an “All-in-One Appliance,” this was typically a straightforward process. If it was a “Clustered Appliance” (meaning it might have multiple front ends, multiple back ends, and potentially dozens of microservices running), then the update required more care. With time, we were able to minimize the challenges, but they are still there, and we want to make sure we keep the system in sync as it grows more complex over time.
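Here is a minimal sketch of that idea, using hypothetical version strings: compare what each node reports as applied against the full ordered list of migrations, and derive the sequence each node still needs to run.

```python
# The full ordered list of migrations for the product. Version strings here
# are hypothetical, as is the cluster layout below.
ALL_MIGRATIONS = ["2023.01", "2023.02", "2023.03", "2023.04"]


def pending_migrations(applied: list[str]) -> list[str]:
    """Return the migrations not yet applied, in the order they must run."""
    applied_set = set(applied)
    return [m for m in ALL_MIGRATIONS if m not in applied_set]


def build_update_plan(servers: dict[str, list[str]]) -> dict[str, list[str]]:
    """Map each node (front end, back end, microservice) to its pending
    migrations so a pipeline can apply them sequentially per node."""
    return {name: pending_migrations(versions)
            for name, versions in servers.items()}


# Example: a small "Clustered Appliance" whose nodes have drifted apart.
cluster = {
    "frontend-1": ["2023.01", "2023.02", "2023.03"],
    "backend-1":  ["2023.01", "2023.02"],
    "search-svc": ["2023.01"],
}
for node, plan in build_update_plan(cluster).items():
    print(f"{node}: {'up to date' if not plan else ' -> '.join(plan)}")
```

The value of the checklist was less the code itself than the habit it enforced: every update started from an explicit, per-node picture of what had and hadn’t been run.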

Conclusion

While it might be easy to think that putting your infrastructure into the cloud with a lift and shift is a “one and done” procedure, over time the environment and the people using it will shape the system, and your software will need to adapt. A microservices approach allows for breaking up the original monolithic structure, but it comes with its own challenges and issues. It will take time and coordinated effort to get those systems working and testable; but, given that time and effort, those changes can reap big benefits in the scalability and adaptability of your applications. Are you ready to intelligently perform a lift and shift? Or do you need some help mid-stream? We’d love to help! Visit our contact us page and inquire today.

*Disclaimer: This article is not endorsed by, directly affiliated with, maintained, authorized, or sponsored by any of the companies mentioned in this blog (Verizon Wireless). All product and company names are the registered trademarks of their original owners. The use of any trade name or trademark is for identification and reference purposes only and does not imply any association with the trademark holder or their product brand.

Michael Larsen
Michael Larsen is a Senior Automation Engineer with LTG/PeopleFluent. Over the past three decades, he has been involved in software testing for a range of products and industries, including network routers & switches, virtual machines, capacitance touch devices, video games, and client/server, distributed database & web applications.

Michael is a Black Belt in the Miagi-Do School of Software Testing, helped start and facilitate the Americas chapter of Weekend Testing, is a former Chair of the Education Special Interest Group with the Association for Software Testing (AST), a lead instructor of the Black Box Software Testing courses through AST, and former Board Member and President of AST. Michael writes the TESTHEAD blog and can be found on Twitter at @mkltesthead. A list of books, articles, papers, and presentations can be seen at http://www.linkedin.com/in/mkltesthead.