The Cloud demands that we be as nimble as possible, delivering features and fixes in almost real-time fashion. Both customer and provider rely on software development that can maintain quality while being light on its feet and constantly moving. In addition, Cloud-oriented systems tend to be highly complex and dynamic in structure — more than our industry has ever seen before.
The traditional software development and testing models do not support this constant “diligence in motion”; therefore a new Cloud delivery model must be adopted. Traditional models worked reasonably well in the world of client/server, since users were most often internal resources: user experience was downplayed, and glitches were tolerated.
The lengthy cycle of requirements generation and feature development, followed by a set of testing cycles, leaves extended periods of time without testing. These gaps are incompatible with the needs of Cloud consumers, for whom an ongoing, reliable, uninterrupted experience is everything.
An effective Cloud software delivery model pivots on one key moment: the instant of feature release. A provider or customer must be able to fix or change application features on the fly; that is, all tests for a fix or new feature must be complete at the moment the feature is released.
The only way companies can realistically achieve this model is to have superior test sets that are fully automated, and to go about automation the right way. Otherwise, automation can quickly become unachievable and unmanageable.
“In the past 5 years, evaluating millions of tests for our clients, LogiGear has achieved automation percentages of over 95% of all tests.”
When automation efforts fail to achieve high percentages of automated tests, the method itself is often blamed. But when test automation follows specific guidelines, its success can be repeated again and again.
Guidelines for Successful Cloud Test Automation
When an automation team spends a disproportionate amount of time on automating tests, and the resulting automation ends up covering only about 30% of the tests, the automation policy has failed. A much higher percentage is needed to “test everything always” in Cloud applications. Additionally, automation should not dominate the test process: the actual automation development and, more importantly, the maintenance effort should have only a modest footprint in terms of time and resources.
While many testing organizations mistakenly approach automation from the perspective of tooling or programming, an approach centered on automation-friendly test design, combined with an agile test development process, yields far better results. When done right, the result is a set of automated tests with on-the-fly adaptability that readily meets the fast-changing requirements of testing in the Cloud.
Tools have their place in the process, but they frequently steal the center of attention and are viewed as panaceas. Primary focus goes to buying and learning “the tool” rather than spending the time, effort and cost involved in revisiting test design. But if a framework and test design process are not established, using a tool is like shooting in the dark: the Ready > Fire > Aim! approach racks up casualties quickly. The plan of attack must be mapped out before a proper weapon is selected; otherwise “fire” can, and as we have seen usually will, turn into “backfire”.
Establishing a test design process yields a larger pool of tests that are readily available, making development cycles more flexible. The approach aims to have at least 95 percent of tests automated, and 95 percent of testers’ time spent on test development, not automation.
These tests are not limited to regression or bug validation; they are calibrated to hunt for bugs through boundary conditions, state transitions, exploratory testing and negative tests.
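As a minimal sketch of what such a calibrated test can look like, the parameterized pytest below probes the boundaries of a hypothetical upload-quota rule and adds negative cases; the function, the limit and the values are invented for illustration, not taken from any real system.

```python
# A minimal sketch of boundary and negative testing, assuming a
# hypothetical rule: uploads larger than 0 MB and up to 100 MB are allowed.
import pytest

MAX_UPLOAD_MB = 100

def is_upload_allowed(size_mb: float) -> bool:
    """Illustrative system under test."""
    return 0 < size_mb <= MAX_UPLOAD_MB

# Boundary values just inside, on, and outside the limit, plus
# negative tests with degenerate and invalid inputs.
@pytest.mark.parametrize("size_mb, expected", [
    (99.9,  True),    # just under the limit
    (100.0, True),    # exactly on the boundary
    (100.1, False),   # just over the limit
    (0,     False),   # degenerate input
    (-5,    False),   # negative test: invalid input must be rejected
])
def test_upload_quota_boundaries(size_mb, expected):
    assert is_upload_allowed(size_mb) == expected
```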
The test design approach has six essential principles:
- No more than 5% of all tests should be executed manually. The cost of introducing automation is usually significant. Automating a high percentage of test cases maximizes the investment and makes a rewarding payoff far more likely.
- No more than 5% of all efforts around testing should involve automating the tests. Creating more and better test cases is key to proper test design. When testers spend significant portions of their time programming automation, test cases tend to be shallow, addressing only the basic functionality of the system. Allocating time for in-depth development allows testers to write more elaborate cases, using testing techniques such as decision tables (a sketch follows this list) or soap opera testing, as well as their imagination (a frequently underestimated asset). The result is better coverage with less effort at the tool end.
- Test development and automation must be fully separated. To make sure that test cases are sufficiently in-depth, a distinction must be made between the responsibilities of testers and programmers. For successful Cloud test automation, testers must be dedicated only to testing.
- Test cases must have a clear and differentiated scope. Each test case should be well-defined in its scope and purpose, and together test cases should map out comprehensive coverage while avoiding overlap and omissions.
- Tests must be written at the right level of abstraction. Tools for conducting tests must be flexible enough to handle both higher business levels and lower user interface (UI) levels on demand (see the layered-actions sketch after this list).
- Test methods must be simple. The method used to achieve effective test design, and the subsequent high-coverage automation, should be easy and straightforward. Most of all, it should not add to the complexity of automation.
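To illustrate the decision-table technique mentioned in the second principle, here is a minimal sketch in which the conditions and expected outcomes live in a table and one generic check replays every row; the login rules are hypothetical, chosen only for the example.

```python
# A sketch of decision-table testing: each row of the table is one
# combination of conditions plus the expected outcome, and a single
# generic test walks through all of them. The rules are hypothetical.

def login_decision(valid_user: bool, valid_password: bool, locked: bool) -> str:
    """Illustrative system under test."""
    if locked:
        return "account_locked"
    if valid_user and valid_password:
        return "success"
    return "invalid_credentials"

# (valid user?, valid password?, account locked?) -> expected result
DECISION_TABLE = [
    (True,  True,  False, "success"),
    (True,  False, False, "invalid_credentials"),
    (False, True,  False, "invalid_credentials"),
    (False, False, False, "invalid_credentials"),
    (True,  True,  True,  "account_locked"),
]

def test_login_decision_table():
    for valid_user, valid_password, locked, expected in DECISION_TABLE:
        assert login_decision(valid_user, valid_password, locked) == expected
```

Because the table is data, testers can extend coverage by adding rows without touching any automation code.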
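The fifth principle, choosing the right level of abstraction, is commonly realized by composing business-level actions out of low-level UI actions, so that test cases read in business terms while UI details stay in one place. The sketch below assumes hypothetical action and element names.

```python
# A sketch of layered test actions: low-level UI actions at the bottom,
# a business-level action built on top of them. All names are hypothetical;
# in a real suite the UI actions would drive a browser or application.

def click(element_id: str) -> None:
    print(f"click {element_id}")                 # low-level UI action

def type_text(element_id: str, text: str) -> None:
    print(f"type '{text}' into {element_id}")    # low-level UI action

def create_customer(name: str, email: str) -> None:
    """Business-level action: hides which fields and buttons are involved."""
    click("nav_customers")
    click("btn_new_customer")
    type_text("field_name", name)
    type_text("field_email", email)
    click("btn_save")

def test_create_customer():
    # The test speaks the business language; when the UI changes, only
    # the action implementations need maintenance, not every test case.
    create_customer("Ada Lovelace", "ada@example.com")
```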
These principles of test design ensure the adaptability required for successful automated testing in the Cloud. Software development companies that automate most of their testing, focus most of their effort on test development, and employ dedicated testers trained to create comprehensive, flexible test designs with useful, accessible results will be well equipped to travel at the speed of service.