I just got back from the 8th International Conference on Software QA and Testing for Embedded Systems, Oct 21 – 23. This was my first time at this conference and I’d give it a high grade for the overall quality of the content and speakers, and in particular, for the people who organize this conference. They are first class. Thanks to my new friends at SQS S.A., Spain — Sander Hanenberg, Jesús M. de la Maza and Begoña Laibara — for treating me well during the conference.
My keynote “Automation Coverage Less than 50%? Don’t Do It!” explored the common excuses for failing to achieve high-volume test automation output, and pinpointed the underlying barriers to test automation. My key messages were:
- You must achieve high-volume test automation covering more than 50% of your automatable test cases, or you will not get a good return on the expense.
- To become efficient at scaling your test volume, you need a good handle on managing the high rate of change (interface changes, functionality changes, etc.). This will help minimize the maintenance effort.
- Your automation technology must be easily extensible so that it can support new platforms and new object recognition capabilities as your company needs them (see the sketch just after this list).
- Ultimately, your tests must be highly reusable and scalable; this is what gets you to a high volume of tests.
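To make the extensibility point more concrete, here is a minimal Python sketch (the class names, platforms, and locators are invented for illustration and are not taken from any particular framework): tests talk to an abstract platform interface, so supporting a new platform means adding one adapter class rather than rewriting the tests.

```python
# Hypothetical sketch: tests call an abstract Platform interface, so
# new platform support means one new adapter class, not new tests.
from abc import ABC, abstractmethod


class Platform(ABC):
    """Object recognition and interaction for one target platform."""

    @abstractmethod
    def click(self, locator: str) -> None:
        """Click the object identified by locator."""

    @abstractmethod
    def enter_text(self, locator: str, text: str) -> None:
        """Type text into the object identified by locator."""


class DesktopPlatform(Platform):
    def click(self, locator):
        print(f"[desktop] click {locator}")

    def enter_text(self, locator, text):
        print(f"[desktop] type '{text}' into {locator}")


class EmbeddedPanelPlatform(Platform):
    """A newly supported platform: same interface, tests stay unchanged."""

    def click(self, locator):
        print(f"[panel] click {locator}")

    def enter_text(self, locator, text):
        print(f"[panel] type '{text}' into {locator}")


def login_test(ui: Platform):
    # The test itself is platform-agnostic.
    ui.enter_text("login.user", "qa_user")
    ui.enter_text("login.password", "secret")
    ui.click("login.submit")


if __name__ == "__main__":
    login_test(DesktopPlatform())
    login_test(EmbeddedPanelPlatform())
```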
Here is the summary of the key takeaways from the talk:
- You must fully understand your automation cost of ownership.
- You should not underestimate the challenge of keeping maintenance costs low (it’s not easy but you must keep it low).
- You need to get efficient—you must optimize your volume of tests to exceed 50% coverage or your return will be marginal.
- Efficiency is key, and it comes from excellent test design, a sound automation methodology (e.g., action-driven; a sketch follows this list), and a well-architected framework technology.
- You must minimize the amount of programming needed to create tests.
- High scalability comes from high reusability of common “actions” and a team-based staffing model.
- You must make your automation program highly visible and transparent so that management can fully measure it, which ultimately is what makes it manageable.
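To give a rough idea of what action-driven reuse looks like in practice, here is a small Python sketch (the action names and the toy “application” are invented for illustration): each test is just a list of action rows plus data, the same small pool of actions is reused across many tests, and adding a test requires no new programming.

```python
# Rough illustration of the action-driven idea: a test is a list of
# (action, arguments) rows, dispatched by a tiny engine, so common
# actions are reused and test creation needs little or no programming.
ACTIONS = {}


def action(name):
    """Register a reusable action under a keyword."""
    def register(fn):
        ACTIONS[name] = fn
        return fn
    return register


@action("login")
def login(app, user, password):
    app["user"] = user if password == "secret" else None


@action("add to cart")
def add_to_cart(app, item):
    app.setdefault("cart", []).append(item)


@action("check cart count")
def check_cart_count(app, expected):
    actual = len(app.get("cart", []))
    assert actual == int(expected), f"expected {expected}, got {actual}"


def run_test(name, rows):
    """Dispatch each row of the test to its registered action."""
    app = {}
    for keyword, *args in rows:
        ACTIONS[keyword](app, *args)
    print(f"PASS: {name}")


if __name__ == "__main__":
    # Two tests built from the same actions, differing only in data.
    run_test("buy one item", [
        ("login", "alice", "secret"),
        ("add to cart", "sensor kit"),
        ("check cart count", "1"),
    ])
    run_test("buy two items", [
        ("login", "bob", "secret"),
        ("add to cart", "sensor kit"),
        ("add to cart", "cable"),
        ("check cart count", "2"),
    ])
```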
Coincidentally, Jamie Tischart of McAfee also spoke at this conference, on a topic he calls “Fusion Testing” (I think Jamie coined the term): a lightweight testing approach of his own that combines classic testing techniques with lightweight principles such as Agile and XP to increase test execution by better utilizing test resources. He talked about the need for both exploratory testing and high-volume test automation, and how you must do both effectively. Fittingly, Jamie’s team at MXLogic-McAfee has been living this for years; they also formed a strategic partnership with LogiGear, now in its 4th year, to help implement this program with high-volume test automation, and they have been benefiting from the results. This was a real treat for me, because speaking about a subject is one thing, but hearing about success that comes from work we contributed to is a whole different level of satisfaction.
Another new friend I made is Chris C. Schotanus, who was Hans’ colleague at CMG. Chris recently completed another book, “TestFrame: An Approach to Structured Testing,” a copy of which he was kind enough to give me (thanks, Chris). I am sure it will be a great read, so it will be my next one. I’ll let you know how it goes.
To top things off, Heather and I also got a chance to see a UEFA Europa League football match (soccer for us in America) between Athletic Club (Bilbao, Spain) and CD Nacional (Madeira, Portugal), which Athletic Club won 2–1. It was great fun!
Hi again,
I am reading the news regularly now, and the more I read about high-volume test automation, the more questions I have. As I mentioned before, having automated different applications with different tools and written automation code from scratch, I have learned a lot and know well that theory and words are very often far from reality. Let me ask you one simple question: how do you analyze the test results produced by a high volume (close to a million, as you have mentioned before) of automated tests? Do you count the time for investigating the results? You know well that most automation will not report the exact problem spot; most often the error shows up in the next step rather than at the actual error in the application. Let me know. Thanks!
In my practice, high-volume test automation is method-centric, with keyword technology supporting it. As with any test automation technology, you are correct: analyzing test results is a huge and time-consuming task that must be accounted for as part of the cost of automation. Certainly, false negatives (a failing test does not necessarily mean there is a bug in the AUT) are common events in comparison-based automation. The key to the strategy is having a method to minimize the debugging and maintenance of the tests. This is what I normally refer to as handling “the high rate of change,” and it is unavoidable. If you spend more time analyzing and maintaining your tests than adding new ones, the automation solution will not scale or give you the return you wish for. All of this is not easy, but it is doable and must be done. As for theory being very different from reality, I also agree. Talking theory is easy! The valuable lesson only comes when you have actually done it!
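As one illustration of how result analysis can be kept manageable at high volume, here is a rough Python sketch (the result fields, tests, and error messages are made up for the example and are not from any specific tool): failed results are grouped by an error “signature” so that one underlying change, say a renamed control, is investigated once rather than once per failing test.

```python
# Hypothetical sketch: bucket failed results by (step, error) so one
# root cause that fails many tests surfaces as a single triage item.
from collections import defaultdict

results = [
    {"test": "login_ok",      "status": "fail", "step": "click login.submit",
     "error": "object not found: login.submit"},
    {"test": "login_bad_pwd", "status": "fail", "step": "click login.submit",
     "error": "object not found: login.submit"},
    {"test": "checkout",      "status": "fail", "step": "check total",
     "error": "expected 19.99, got 21.59"},
    {"test": "search",        "status": "pass", "step": "", "error": ""},
]


def signature(result):
    """Collapse similar failures into one bucket for triage."""
    return (result["step"], result["error"])


buckets = defaultdict(list)
for r in results:
    if r["status"] == "fail":
        buckets[signature(r)].append(r["test"])

# Report the biggest buckets first: the most likely common root causes.
for (step, error), tests in sorted(buckets.items(), key=lambda kv: -len(kv[1])):
    print(f"{len(tests):>3} test(s) | step: {step} | error: {error}")
    print(f"      affected: {', '.join(tests)}")
```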
Hello,
I have another tough question, one that stays hidden in the books and in the classrooms: designing automated tests (AT), writing AT, executing AT, etc. Here is the complete process in the automated test life cycle. Who does what within the QA team, or what is the best way to organize the process: DAT (Design AT) –> WAT (Write AT) –> EAT (Run/Execute AT) –> AAT (Analyze Results from AT) –> LBAT (Log Bugs based on AAT) –> MAT (Maintain AT)? From my experience, DAT and WAT are the work of both the QA and the automation engineer. The most complicated and challenging parts for managers are EAT and AAT. In your offshore model, who does this most important and most challenging part? Who tracks the validation of the results? There are tons of questions associated with this; please explain. Thanks,
Dmitry
In our high-volume automation model, the offshore team does all of the test design, writing, and maintenance. Test execution, failure analysis, and bug reporting can be done by both the onshore and offshore teams. The onshore team can then spend more time on exploratory testing, on giving direction and advice on new tests, and on reviewing tests.