Test Planning to Avoid Technical Debt

Technical debt is not something an organization intends to take on. As we get involved in a new venture, or look to develop a new opportunity in a legacy market, we believe we are making good choices and covering our bases when it comes to development and testing initiatives. However, as demands on our time and attention grow, technical debt becomes a persistent issue. It is common, but it doesn’t need to be inevitable. Like any other debt, it can creep up on us if we are not alert to its ramifications.

For the purposes of this article, we will define technical debt as “a concept in software development that reflects the implied cost of additional rework caused by choosing an easy (limited) solution now instead of using a better approach that would take longer.”

Technical debt is, for lack of a better phrase, a set of chains we choose to drag around when we do not address it. Often these chains accumulate because of a lack of planning, or a lack of understanding of how we might test what we are implementing. If we focus on a “development at all costs” approach without taking into account how we will test, we increase the odds that our testing efforts will be inadequate when the time comes, which in turn adds to our technical debt and makes the problem harder to solve.

The “Brick Wall” of Technical Debt: How Organizations Get Here

Looking back over the past 30 years at the variety of organizations I have been involved with, I can see situations where technical debt was managed well and situations where it was managed poorly. To be clear, none of the organizations I have been involved with (or am currently involved with) entirely avoids technical debt. Technical debt can come about for a variety of reasons. One common cause (and I have experienced it first-hand) is the notion that testing will catch up or be dealt with at a later time. If something has to be cut to meet the shipping date, it is often the testing effort. As a long-time software tester, I understand this frustration. I have seen organizations that were well positioned at one point find themselves in significant technical debt in short order. Looking back at these organizations, several common threads emerge.

First, there needs to be a willingness and a focus on making sure that testing can be completed in a timely manner. In a large organization with complex products, that alone may sound like a doomed proposition. How can I design tests that ensure we have everything tested and ready for release? How can we test in a way that is both effective and efficient? And to whom does the responsibility for paying off technical debt ultimately fall? With quality being the responsibility of the whole team in Agile, the whole software development team needs to address it, but who on the team specifically deals with what, and how?

Often we fail because we either have not planned for how we might best leverage testing, or we have not communicated in a meaningful way about how we might test something. As an example, I have often been in situations where a feature was implemented and then handed to me to test. I worked with the feature and tried to test it based on the attributes and details as the developer chose to implement them. What could have happened had I been involved in the planning discussion from the beginning? Had I been there to discuss how I might test the implementation, I could have provided feedback that might have steered the process in a different direction than the programmer was initially considering. For instance, rather than navigating through the system to find certain values and confirm they are present (doable, but time consuming), what if I were to suggest an API call that would get me the data directly? This is an example where suggesting ways to test helped the programmer go down a different avenue, and in the process allowed us to test effectively without relying on a cumbersome approach to arrive at the same confirmation.
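To make that API suggestion concrete, here is a minimal sketch in Python of what such a check can look like. The endpoint, field names, and values are hypothetical, invented purely for illustration; the point is that one direct request replaces a long walk through the UI.

```python
import requests

BASE_URL = "https://app.example.com"  # hypothetical application under test

def test_order_values_via_api():
    """Confirm stored values directly through an API call, rather than
    navigating screen by screen through the UI to find them."""
    # One request fetches the record we would otherwise hunt for manually.
    response = requests.get(f"{BASE_URL}/api/v1/orders/1042", timeout=10)
    assert response.status_code == 200

    order = response.json()
    # The same confirmation the UI walkthrough would give us, in seconds.
    assert order["status"] == "confirmed"
    assert order["total"] == 149.99
```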

Drawing on some of my own experiences in this sphere, here are a few ways I have seen technical debt addressed effectively.

Don’t Be Afraid to “Start Over”

Eliminate an over-reliance on specialized knowledge or limited exposure to the methods of automation and testing. At one company I worked with, there was an excellent, fairly comprehensive test automation solution. It was well constructed and covered most of our application’s capabilities. It was developed and maintained by a small team using specialized and optimized tools. It was a great system… when it was developed.

Over time, the system became difficult to maintain and fix because the people who built it left the company for other opportunities. Many of the steps and procedures were not well documented, rendering many details a virtual black box. As the rest of us tried to come up to speed, we kept clashing with areas abstracted in such a way that it took a long time to find the right way to do anything, and the people who had built the system were no longer there to answer questions or advise us on how to expand or modify it.

At one point, we finally decided it made more sense to start over and create a new system. A virtual bidding war ensued over what we should use. Ultimately, the testers and programmers got together and made a decision on how to tackle the project. As a unified group, we picked the solution that offered the best level of reusability between groups. A number of us were starting from zero with that approach (yes, I was one of those people), but enough team members were familiar with the new system to guide us and get us up to speed in a reasonable amount of time.

The key takeaway from this experience was that, while the choice may not have been a “perfect tool” for me, no tool is perfect. Having the ability to swap out tools as needed, or to let groups use readily available systems that suit their workflow, was an initial hindrance, but over time the unified system helped us conquer that section of technical debt, and working on a shared system helped prevent that debt from rising again. The tool being used is less important than the problem being solved. My team (or just I) might need something that doesn’t fit the tool hierarchy as it has been defined. I may well be the only person using a particular tool for a time, but as is often the case, when an area of need is identified, the first person to get there and address it has the upper hand in determining the approach going forward. In other words, just because a preferred tool or framework has been the go-to for the past decade, don’t assume it will always be that way, or that you are required to approach every problem with the same tool.

Get Involved With Test Design as Early as Possible

When I hear that an organization wants to “shift left,” what I hear is a focus and determination to get testing involved early. I’m all for it, but the next question I ask is, “How early?” Were it my decision, the time of requirements development would be ideal. Is that too early? I don’t think so.

I have found that being part of the initial story workshop allows me and other testers to provide input. As Jon Bach taught me many years ago, at story creation we are best able to “provoke the requirements.” This is where testers, with their experience and ability to consider different scenarios, can be invaluable.

One of the best reasons for getting testing involved this early is that testability can be designed into the system. At this stage I can ask questions about the new feature and what it can do. I can verify and validate the processes being described. Often, I can influence and encourage a more testable design. By addressing testability early in story development, I can ask for certain testing hooks to be made available. I can request that API calls be created (if that is relevant). I can also ask whether there are ways to set or delete values via database calls that can be run from my test setup (be that via API or direct calls from test scripts). With these abilities in place, creating automated tests or test scenarios in general is more straightforward and takes less time.
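As a sketch of what those hooks make possible, consider the following pytest fixture, assuming hypothetical seed-and-cleanup endpoints were built for testability. All the names here are illustrative, not from any real product.

```python
import pytest
import requests

BASE_URL = "https://app.example.com"  # hypothetical application under test

@pytest.fixture
def seeded_customer():
    """Create known test data through a hook, and remove it afterwards."""
    # A seeding endpoint like this is exactly the kind of hook worth
    # requesting during story development; without it, setup often means
    # slow, error-prone manual data entry.
    payload = {"name": "Test Customer", "plan": "trial"}
    created = requests.post(
        f"{BASE_URL}/api/v1/test-hooks/customers", json=payload, timeout=10
    ).json()
    yield created
    # Deleting the value afterwards keeps each test run independent.
    requests.delete(
        f"{BASE_URL}/api/v1/test-hooks/customers/{created['id']}", timeout=10
    )

def test_seeded_customer_is_on_trial_plan(seeded_customer):
    # With setup and teardown handled by hooks, the test itself stays short.
    response = requests.get(
        f"{BASE_URL}/api/v1/customers/{seeded_customer['id']}", timeout=10
    )
    assert response.json()["plan"] == "trial"
```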

The biggest challenge teams looking to combat technical debt will face is the urgent feature request that absolutely, positively has to go out as soon as possible. No joke, I have been in many meetings where a proposed feature is punctuated with, “We need it yesterday!” Sadly, this is an all too common occurrence. The issue that arises is the promise that we will “fix it the next time,” meaning that after this feature ships, we will come back and more thoroughly verify the feature or automate those steps. That deferred work is exactly how technical debt accumulates.

As a tester, it is critical that I step into this situation and ask, “OK, if this is needed yesterday, what do I need to do to help ensure that we can test it properly? What areas do we need to look at? What services are available that will allow me to perform this testing?” All too often, a “we need it yesterday” request comes in and everyone scrambles to deal with it. To use an analogy, we can treat this like a blown tire on a car. Yes, it is an issue that needs to be fixed immediately, but the problem is significantly compounded if we discover we don’t have a spare tire in the trunk, or the tools necessary to jack up the car and remove the wheel to put the spare in place. While we may not be able to anticipate the blown tire or when it will occur, we can certainly be prepared for that eventuality: make sure there is a spare in the trunk, that it is inflated, and that all the tools needed to change the tire are in the trunk as well.

That is what test planning provides in these situations. Note that I said “test planning,” but that doesn’t mean you need to write a traditional-style, 80-page test plan; there are many modernized approaches to test plans that can help you succeed here. It may be a brief discussion, or it may involve taking a little extra time to make sure there are testing hooks in place so that we can quickly and effectively test that emergency feature. I treat that extra time as insurance, and I do my best to make sure I can act on that planning and have those tools in place and ready when they are needed.

Being involved as early as possible on these issues will be a big help. It may not always be possible, but I have found that pilot/navigator programming sessions in these situations are beneficial. They allow us, again, to consider areas of testability that can keep urgent requests from getting out of hand and leaving loose threads that we have to clean up later.

How to Avoid Gaps Amidst a Digital Transformation

Currently, many organizations are facing challenges due to COVID. Older ways of working and operating are having to be jettisoned, in some cases temporarily and in others permanently, to allow for a different way of working. It is not too outlandish to say that COVID is exposing many areas where organizations have to adapt, effectively creating entirely new digital transformations in the way they operate. This may not have been intended, but it is an opportunity for those organizations that choose to embrace it. If there was resistance to practices such as paired programming and testing, examining new tools and approaches, considering AI or machine learning platforms, augmenting the software delivery pipeline, or re-examining how testing is performed, who does it, and when, this is where that can change. With so many people focused on remote work, new pathways for how and when work is accomplished are actively being considered, where before these avenues did not have to be.

So what can we as testers do here? First and foremost, we can be early. We can ask to be part of early feature planning and provide input on test design, what we would look for, and how we would accomplish the tasks at hand. This need not be exhaustive; even by asking questions about testability, accessibility, performance, and usability, and by clearly understanding the acceptance criteria for stories, many misunderstandings can be eliminated far earlier in the process. Additionally, with more communication and planning up front, we get a clearer picture of the areas that need work and the areas where we need to shuffle our priorities. It’s no guarantee that emergency situations won’t derail us from time to time, but it increases the chances that we will be able to adapt and handle those emergencies with the least amount of disruption.
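As one small illustration of what “clearly understanding the acceptance criteria” can yield, here is a sketch of a criterion pinned down as an executable check during story planning. The criterion, function, and names are hypothetical.

```python
from datetime import date

# Hypothetical acceptance criterion from a story workshop: "Exported report
# filenames include the report date as YYYY-MM-DD so downstream jobs can
# sort them chronologically."

def export_filename(report_name: str, report_date: date) -> str:
    """Stub of the behavior under discussion, for illustration only."""
    return f"{report_name}-{report_date.isoformat()}.csv"

def test_export_filename_includes_sortable_date():
    # Writing the criterion as a test this early surfaces ambiguities
    # (Which date format? What separator?) before any code is written.
    assert export_filename("sales", date(2020, 7, 4)) == "sales-2020-07-04.csv"
```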

Conclusion

Technical debt can be insidious, but it can be managed with time, focus, and diligence. Much like financial debt, often the first thing to do when you discover you are in a hole is to stop digging. Survey where you currently stand. Weigh the options you have to tackle the debt. Identify whether there are areas where the team is deficient in knowledge, skill, or tooling. Take the time needed to put the necessary steps in place to pay down the debt. This may be a slow process, and it may take months or even years to work off the accumulated technical debt. Still, with the steps I have described above, I believe you will be in a good place to make progress, and potentially tackle it sooner than you might think.

Michael Larsen
Michael Larsen is a Senior Automation Engineer with LTG/PeopleFluent. Over the past three decades, he has been involved in software testing for a range of products and industries, including network routers & switches, virtual machines, capacitance touch devices, video games, and client/server, distributed database & web applications.

Michael is a Black Belt in the Miagi-Do School of Software Testing, helped start and facilitate the Americas chapter of Weekend Testing, is a former Chair of the Education Special Interest Group with the Association for Software Testing (AST), a lead instructor of the Black Box Software Testing courses through AST, and former Board Member and President of AST. Michael writes the TESTHEAD blog and can be found on Twitter at @mkltesthead. A list of books, articles, papers, and presentations can be seen at http://www.linkedin.com/in/mkltesthead.