Spotlight Interview: Exploratory Testing with Jonathan Kohl

Based in Calgary, Alberta, Canada, Jonathan Kohl is a consultant, author, and speaker in the software industry. He draws on technical, philosophical and business concepts in his work. For LogiGear Magazine: The Exploratory Testing Issue, Kohl answers questions regarding the method:

1. Can you tell us a little bit about your take on Exploratory Testing (ET)? What do you do differently than James Bach, for example?

I’m influenced by Cem Kaner and James Bach, and I have used a lot of their work as a base to build from. In my work, you’ll often hear me cite their prior work, but one of the interesting things about an exploratory approach is that it is dependent on the individual. That makes it hard to compare one tester talking about ET with another. Sure, the ideas are similar, and you’ll hear common themes from many of us, but the individual experiences and knowledge collected over time are very different.

Rather than try to compare, I will tell you what I am focusing on in my work. I’m passionate about bringing ET to the masses, by making it approachable:

  • I actively work with teams to help them apply ET or complement their scripted testing with it
  • I write articles addressing questions that come up the most in my public work, often providing alternate explanations for an ET approach
  • I try to explain concepts in ways that people can understand and apply in their own work
  • I share experiences, tell stories, and provide live testing demonstrations
  • I explore areas where scripted testing and ET can work together
  • I explore how I can use automation tools within my exploratory test work, as described in my Man and Machine article
  • I hate it when a tester feels stupid. I want ET to empower testers and help them feel energized and excited about their work
  • I try to make my ET training as real-world and approachable as possible, and I constantly work at improving it

I’ve done some work as a commentator: there are a lot of approaches to ET out there. Some of them are well thought out, thorough, and cite prior work. Others, not so much. Still, it is fascinating to see the range of ideas, approaches, and personalities at play.

I have done some work categorizing the different approaches and strategies that I see. You have people like James Bach, who is constantly coming up with new ideas and approaches and is often pushing the envelope. You see that with others now too, particularly the authors of the Thoughts from the Test Eye blog: Rikard Edgren, Henrik Emilsson, and Martin Jansson are a great recent example. I call a lot of these ideas “abstract strategies.”

They spur testing thinkers on to get better and better and to expand knowledge. At the other end of the spectrum, you have that unknown tester who just discovered how to apply a new idea to their testing and pragmatically records it on their “cheatsheet” wiki page: “Copy this data and paste it in the app in various places,” or something along those lines. That’s a very concrete idea or strategy, and different personality types reach for different strategies at different times. There are others – I introduced some of these ideas here: http://www.kohl.ca/blog/archives/000188.html

I’m also slowly trying to develop productivity tools to aid ET. Why should the programmers get the benefits of cool technology advances while we still use stone-age tools? It’s taking more time than I would like, but I hope I’m at least contributing ideas in that space.

I’ve tried to bridge the gap between the theory that those of us who talk about ET like to indulge in and the testers on the front lines with their fingers on the keyboards. That has been surprising. For example, I have been working with others on a free, open source tool called Session Tester to help record and structure session-based testing. I started out with a simple design for capturing session notes, and then I looked at the Session-Based Test Management article by Jon and James Bach.

I’d tried out SBTM as described in the article several times in the past, but I had adapted it for different reasons. Some of it was too heavyweight for an Agile team, some testers got hung up on the word “charter,” and so on. I always kept the basics of SBTM sessions: a clear goal or charter, a time box, a reviewable result, a debrief with others, and the use of heuristics and other thinking tools to help with test idea generation during test sessions.

However, I tracked and recorded different things with different teams. When we started going through the design, we hit a crossroads: do we follow the SBTM document to the letter, or do we start basic and get feedback from real users on how they have implemented SBTM? We decided to go with the latter and let the community dictate what they wanted in the tool.
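
Those session basics map naturally onto a small data model. Here is a minimal sketch, in Python, of what a session record built around a charter, a time box, a reviewable result, and a debrief might look like; the names (TestSession, add_note, and so on) are hypothetical illustrations, not Session Tester’s actual design:

```python
# A minimal sketch of the SBTM session basics described above: a clear goal
# or charter, a time box, a reviewable result (the notes), and a debrief.
# All names here are hypothetical illustrations, not Session Tester's design.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class TestSession:
    charter: str                                    # the goal or "mission"
    time_box: timedelta                             # e.g. 60-120 focused minutes
    started_at: datetime = field(default_factory=datetime.now)
    notes: list[str] = field(default_factory=list)  # the reviewable result
    debriefed: bool = False                         # flipped after the debrief

    def add_note(self, note: str) -> None:
        """Timestamped note-taking during the session."""
        self.notes.append(f"{datetime.now():%H:%M} {note}")

    def time_remaining(self) -> timedelta:
        return self.time_box - (datetime.now() - self.started_at)

# Usage:
session = TestSession(
    charter="Explore the import feature with malformed CSV files",
    time_box=timedelta(minutes=90),
)
session.add_note("Import silently drops rows containing embedded newlines")
```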

The results were fascinating. Once I got feedback on a half-baked pre-release of the tool, I found that there were many variations of SBTM out there. Very few people chastised me for not supporting a “by the book” implementation of SBTM in the tool. Instead, I got a lot of encouragement to make the tool as flexible as possible to allow for variations.

I think this is great – people taking a tool, trying it, and adapting it to suit them. We have so many restrictive attitudes in software development – “do this, do that, and only once you are deemed a master should you try to adapt” – and it is refreshing to see what people are doing to adapt ET tools on their own, without permission from a guru. Wonderful!

I find that space fascinating. There can be an amazing contrast between what people like me are talking about and what is actually going on in the industry, inside shops that we don’t have visibility into. I have to admit I don’t always like what I see, but if it is working for that team, then that’s really all that matters. They don’t have to enjoy ET on all the levels I do.

2. How do you run your sessions in ET? Are they 120 minutes long or do you have a different approach?

I follow the basics of SBTM sessions: a clear goal or charter, a time box, a reviewable result, and, most importantly, a debrief. How that looks, what we report on, or what terms we use depends on the team, and on how I can get the concept working with that group of individuals. Often, just getting people comfortable with note-taking is the first hurdle. I’ve often replaced the term “charter” with “goal” or, more often, “mission.” Like “heuristic,” the word “charter” can provoke a negative reaction in some people. To me it’s just a word, so let’s get past the word, get on to the concept, and make it work.

The time box varies in length. I like to start people out with an hour, with an appropriate setup and cool-down period afterwards, and work up from there. It is amazing how exhausting fully concentrated testing work with note-taking can be over an uninterrupted period. Once people get used to that, we ramp up or down depending on energy levels and other factors. I rarely go over 120 minutes in my own work, but I have had testing sessions in pairs or trios that went on much longer because we were experiencing flow, we were discovering all sorts of interesting information, and we fed off each other’s energy.

3. How do you use heuristics in your ET process?

When I first started leading test projects, I found that my team was overwhelmed with paperwork stemming from test plans and test case management tools. I adjusted to try to meet the needs of various stakeholders on the team, and I thought: “Our mission as testers is to provide important information, like bug reports, to stakeholders. Hardly anyone reads our test plans, and no one else looks at our test cases. What are we in business to do? Find and report information quickly, or maintain and create heaps of paper?”

The testers I worked with and I decided we should focus on testing and do as little documentation as required. We adapted quite naturally and organically, but we broke some of the rules in the QA department. People liked our results, but they were uncomfortable with how we got there. My manager at the time had me read Testing Computer Software by Kaner, Falk, and Nguyen, and suddenly I had words to describe what we were doing: exploratory testing. Cool! Now I had a term and respected people in the testing community to back up my approach. That was powerful.

As I moved on and began to lead another team, I had trouble training them. They didn’t have the benefit of the experience and gradual adjustments my other team had. I needed to get them up to speed quickly, so I started creating “test philosophy” documents, how-to guides, bug taxonomies, and so on to accompany our test checklists.

I was used to using whiteboards as communication devices, so I got the biggest one I could find and stuck it up outside my office to help guide and track our testing work. This was all quite crude and rudimentary, but it worked. I started digging into Cem’s work to see what other options were out there, and from Cem I discovered James Bach’s work, and suddenly I had a new word to describe what we were doing: heuristic.

I was quite familiar with the term from both my undergrad work in logic and philosophy and from having my Dad as a teacher in elementary school. He was big into using heuristics to solve word problems in mathematics. When I read James’ work on heuristics, I started using them to help us plan and structure our testing. It wasn’t until I met James in person and we tested software together that I saw a heuristic being used in real time. A big lightbulb went off over my head.

I could use these much more in real time, as well as in planning and everything in between. Wow! How did I miss that? I knew about heuristics, and they worked great with those word problems in real time. In fact, I was already doing much of this, but using James’ work as a springboard really kick-started me. Why reinvent the wheel when there is such great prior work to draw on?

So I use heuristics for several purposes. One is to help plan and structure our testing: taking a seemingly endless world of possibilities and consciously narrowing the work down to something manageable and most likely to reveal important information. I also use them in real time, and I have a few of my own I like to pull out in different situations.

Heuristics are a memory aid and a possible way to solve a problem, so they help me in all sorts of situations – anywhere from hearing a trigger word such as “persistence” when interviewing a team member about a design (my heuristic is the question: “What gets persisted where?”) to using something like San Francisco Depot (SFDPOT) in a live testing demonstration or during my own planning work.

4. Do you use oracles? If yes, what has your experience been in using them?

Of course, that’s another word I got from James and Cem. It used to drive me crazy when I’d see a scripted test case with one expected result specified. My brain would immediately leap to different options and combinations. How ludicrous is it to just specify one result when there are lots of things we can use? How can I possibly pass this test case by looking at one tiny aspect? The work Cem and James described on oracles really resonated with me. Thankfully I wasn’t alone.

The oracle problem – finding sources of information that let you know whether your tests are passing or not – can be subtly difficult. My rule of thumb is to get a second source of information to verify things by. Sometimes that’s relatively simple: “There is a free government service that provides the currency exchange rates once a day that we can compare our calculations with.”
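
As an illustration of the simple case, here is a minimal sketch of such a second-source check for the exchange-rate example; fetch_reference_rate is a hypothetical stand-in for whatever independent service a team compares against:

```python
# A minimal sketch of a "second source of information" oracle for the
# currency example. fetch_reference_rate() is a hypothetical stand-in for
# the independent daily rate service mentioned above.

def fetch_reference_rate(base: str, quote: str) -> float:
    """Hypothetical call to an independent, once-a-day exchange-rate service."""
    raise NotImplementedError("replace with a real reference source")

def check_conversion(app_rate: float, base: str, quote: str,
                     tolerance: float = 0.005) -> bool:
    """Compare the application's rate against the independent oracle.

    A small tolerance absorbs legitimate differences (rounding, the daily
    publication lag) so the check only fails on meaningful disagreement.
    """
    reference = fetch_reference_rate(base, quote)
    return abs(app_rate - reference) / reference <= tolerance

# Usage: check_conversion(app_rate=1.3642, base="USD", quote="CAD")
```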

Sometimes it is complex, like working with data visualization techniques when doing performance testing. Why on earth are we seeing a spike in page load times every 20 minutes? That sort of oracle determination requires a combination of experience, knowledge and a willingness to research.
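
A minimal sketch of that kind of visualization-as-oracle, using synthetic data and matplotlib; plotting load times against elapsed time is often enough to make a periodic spike stand out:

```python
# A sketch of visualization as a performance-testing oracle: plot page load
# times over time and look for patterns, like the 20-minute spike mentioned
# above. The data here is synthetic, purely to show the technique.
import random
import matplotlib.pyplot as plt

minutes = list(range(240))  # four hours, one sample per minute
load_times = [
    0.8 + random.gauss(0, 0.05) + (2.5 if m % 20 == 0 else 0.0)  # periodic spike
    for m in minutes
]

plt.plot(minutes, load_times)
plt.xlabel("Elapsed time (minutes)")
plt.ylabel("Page load time (s)")
plt.title("Page load times: periodic spikes hint at a scheduled job")
plt.show()
```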

Some things that have surprised me:

  • Validating data sent from embedded devices in a test environment – discovering that the real thing worked far differently than a new design and its tests let on. The (passing) tests were using mocked-up data, and I decided to use the real thing to contrast with the data used in the tests. An engineer unlocked a storage space, pulled out some dusty devices, and helped me get them going. I needed them to generate real data, so we played with a temperature probe, just to get some sort of data generated. To generate the first batch of real data, I grabbed a soldering iron and put a can of Coke in the freezer. Once the soda pop was sufficiently cold, I alternated touching the hot soldering iron and the cold pop can to the probe. Real data was generated, immediately uncovering real problems in the system.
  • A tester’s gut feel, or intuition – faced with overwhelming evidence to the contrary, including passing tests and massive project inertia, great testers seem to be able to stand up and say: “Wait a minute. Something feels wrong. Give me a couple of hours to investigate.” Those are always surprising because they start with a suspicion or hunch, and investigation, within a good investigative structure around their testing, leads to real evidence.
  • Countless examples of the wrong oracle in a test case document – either the document was out of date, or it was just plain wrong to begin with. Using simple means to come up with an independent check on that oracle revealed that the test case was wrong. Even a little creativity within test cases, or using them as exploratory guidance references, can reveal interesting information.

5. How do you find the best testers with the best domain knowledge? What are some of the ways you go about finding this information out?

I find out if someone has a testing aptitude by asking about their interests, and their experiences in the past. I worked with a test manager who had worked for many years in a restaurant, and I was able to map some of the lessons they had learned there to their success as a tester. When you personalize and validate prior experience that the individual tester may not associate with technology, interesting things happen.

I’ve had sports coaches, history researchers, accountants, short-order cooks, programmers, database administrators, systems administrators, and all sorts of roles excel in testing. Two of the best who come to mind were an executive assistant who was brought in to help test a new system part-time, and a technical support analyst. These were both people with superb analytical, reporting and investigative skills, and you’d never have known it without giving them real testing problems to solve.

6. Lastly, can you tell us about a recent tour that you’ve done, and what you found in the way of bugs or problems using ET that you would not have found using traditional testing methods?

I feel that way about most of the past 13 years of doing exploratory testing. It’s hard to pick just one. Often, this occurs when there are intermittent bugs that people are struggling to reproduce. Hold on – one is coming to mind now…

Recently, I worked with a team that was transitioning from a highly clerical, scripted testing approach to an exploratory testing approach.

They were in a regulated environment, so documentation was still required. We had a transition period where we were using both systems – the emerging lightweight documentation system for ET and the older scripted system. The test lead asked me to help get some coverage on some of the test cases that hadn’t been converted over to checklists and guidance docs yet, so I happily obliged. It’s fun to do something a little different once in a while.

One of the test cases felt wrong. There wasn’t a lot of information beyond “go here, put in these inputs, and these outputs should show up,” and a note to use some tool to validate the results. I installed the tool and looked over the test case, and while it was passing, I was still confused. I tracked down the test case author, and they were a bit taken aback.

The test didn’t make much sense to them either. So I moved on and talked to a programmer, the Product Owner and a subject matter expert. They all felt that the test was out of date and not valid anymore. I had a hunch that there was something to it. Why write the test case in the first place? Also, I can’t just mark a test as passed or failed without understanding it, so I decided to spend a few minutes exploring.

Now for the embarrassingly simple part of this story: I had no idea what this feature did or what its purpose was, so I started off on a simple tour. What were all the functional aspects of this feature? I moved from area to area on the screen, clicking, typing, and moving the mouse around, but I still felt lost. So, in desperation, I used a “click frenzy” heuristic. The mouse started slowing down when I was in a certain region of the screen, so I homed in on that. All of a sudden: Bang! The application crashed. I went back to the programmer: “Is there any reason why clicking here over and over would cause a crash?”

That suddenly jogged his memory about that feature, and all these details from months ago rushed back. Now he knew what the test was about, and we brought in the Product Owner and Subject Matter Expert and started sharing knowledge. Of course, the feature had nothing to do with a click frenzy + crash, but that was enough of a primer to get the ideas and memories going. The scary part of all of this?

The test had been reported as passing for months, but the functionality had been broken all that time. That’s what happens when testers get bored of following the same test scripts over and over – they start to game the system instead of using their powerful brains.
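
For the curious, a “click frenzy” is easy to improvise with a desktop automation library. Here is a minimal sketch using pyautogui; the region coordinates and click counts are hypothetical (in the story above, the frenzy was done by hand):

```python
# A minimal sketch of a "click frenzy" as a quick exploratory aid, using the
# pyautogui desktop-automation library. The region coordinates below are
# hypothetical; in the story above, the frenzy was done by hand.
import random
import time

import pyautogui

# The suspect region of the screen: (left, top, width, height).
REGION = (400, 300, 200, 150)

def click_frenzy(region, clicks=200, delay=0.02):
    """Hammer random points inside a region to provoke timing and state bugs."""
    left, top, width, height = region
    for _ in range(clicks):
        x = random.randint(left, left + width)
        y = random.randint(top, top + height)
        pyautogui.click(x, y)
        time.sleep(delay)  # brief pause between clicks

click_frenzy(REGION)
```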

LogiGear
LogiGear Corporation provides global solutions for software testing, and offers public and corporate software testing training programs worldwide through LogiGear University. LogiGear is a leader in the integration of test automation, offshore resources and US project management for fast, cost-effective results. Since 1994, LogiGear has worked with companies ranging from Fortune 500 companies to early-stage start-ups, creating unique solutions to meet their clients’ needs. With facilities in the US and Viet Nam, LogiGear helps companies double their test coverage and improve software quality while reducing testing time and cutting costs.
