September 2, 2008

Don't test that

Here is the story. We received a complaint from a customer that she was able to access data she was not supposed to see. Reading her feedback, it seemed to me that this was something we usually test for, since the access control system is one of our prime selling points. So I asked the tester in charge of that particular feature to investigate the report. She came back reporting that she was able to reproduce the issue. She seemed tense, realizing the gravity of the situation.

By the way, I never blame my testers when a problem is spotted in the wild, no matter how serious it is. She knew this, so her nervousness was not about fear of blame, but rather about how she had overlooked such an important problem. I remained calm to show her that we were not playing the blame game. My testers know the drill for what comes next: I ask a series of questions about how we overlooked the issue, and then we start talking about how we can avoid missing similar problems in the future. She knew the questions I would be asking, so she tried to cut the conversation short by saying that she had wanted to test it, but the programmer had told her not to.
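
As an aside, the class of check that failed here can often be expressed as a simple role-versus-resource matrix. The sketch below is purely illustrative and not our actual test code; the base URL, endpoints, role names, and the login_as() helper are all hypothetical.

    # Illustrative sketch of a role-vs-resource access control check (Python).
    # The base URL, endpoints, role names, and login helper are hypothetical.
    import requests

    BASE_URL = "https://example.test"

    # Expected access: (role, endpoint) -> should the request succeed?
    PERMISSIONS = {
        ("caregiver", "/clients/123/notes"): True,
        ("caregiver", "/admin/billing"): False,
        ("administrator", "/admin/billing"): True,
    }

    def login_as(role):
        """Return a session authenticated as a test account for the given role."""
        session = requests.Session()
        session.post(BASE_URL + "/login",
                     data={"user": role + "_test", "password": "secret"})
        return session

    def test_access_matrix():
        failures = []
        for (role, endpoint), allowed in PERMISSIONS.items():
            status = login_as(role).get(BASE_URL + endpoint).status_code
            if (status == 200) != allowed:
                failures.append((role, endpoint, status))
        assert not failures, "Access control mismatches: %s" % failures

    if __name__ == "__main__":
        test_access_matrix()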

You, as a tester, have just discovered some serious problems with a new feature and reported them to the programmer. The programmer shrugs off your claims and tells you not to test those scenarios since, in his opinion, they are not important. What do you do?

There is a dilemma here. On one hand, I cannot accept that I or any of my testers would intentionally blind ourselves to certain problem scenarios. On the other hand, if we do test the forbidden zone, we risk pissing off the programmers and being accused of not using our limited time wisely. As rapid testers, we treat deadline stress as the norm.

There are a couple of ways I play this game. If I feel that these tests really are important to run right away, I try to convince my testers or the programmers with scenarios of the kinds of problems I expect. Sometimes I pull examples of similar scenarios reported to Customer Support.

If that does not work, I go for Plan B. When a programmer says "Don't test that", I take it to mean "Don't test that NOW". So instead of testing those scenarios right away, I wait until things calm down. In practice, that means if we do not anticipate serious problems in the areas we are already testing and we still have time before the release (which is pretty rare), we go ahead and test the forbidden zone. Otherwise we wait until after the release.

There have been cases where we did not return to these problems, due to time constraints or plain forgetfulness, and the worst happened: the problem was reported by a customer. In such cases my testers and I TRY (emphasis on try) not to scream "I told you so". When we decided not to test these scenarios, it was a shared understanding between the programmers and us. However, it is important to bring this new information to the attention of the programmers and their superiors so that we can avoid similar strategic mistakes. So instead of gloating over our hunch, I would say something like, "Although we agreed that this was not important enough to test, it seems we did not anticipate the current scenario". You have to admit that it was a hunch, since we could not come up with any credible scenarios to prove otherwise.

Blame games do not appeal to me. I believe that, as a Test Manager, it is important to trust the skills of my testers. They must be good, since I hired them and have been working with them to improve their craft. I intentionally avoid a sign-off process for their test strategies. I play more of an advisory role, recommending test ideas I feel were overlooked or complimenting them on a plausible intuition. I even help them come up with scenarios to communicate with the programmers, reproduce hard-to-reproduce problems, act as their peer tester, or speed up their testing efforts. They see me as a facilitator rather than a lawmaker. There are times I disagree with them, but my disagreement is not authoritarian; they always have autonomy over their testing. This in no way suggests that if an issue gets overlooked in the product I will use them as a scapegoat. I think that is clear from this writing.

I understand this is an unconventional way of managing a Test Team. Many managers I know prefer that their testers spell out all their test cases and have them approved. They rejoice when they were right and the programmers were wrong, or take punitive measures when issues pop up that (in their opinion) should have been tested. Many would argue that this is the only way to have control over what is being tested and to ensure that issues are not overlooked. However, I feel it is only an illusion of control. I fail to see how a Test Manager can know more about the test subject than the tester who spends more time studying it, exploring it, and experiencing it. It is inevitable that problems will sometimes be overlooked, but when that happens I will be there trying to understand it with my testers. I also make it a point to note when stakeholders are satisfied with our testing, and I have found that there are more cases of appreciation than of dissatisfaction over unnoticed issues.

13 comments:

  1. Thanks for sharing this story and your analysis.

    What did you do to the customer who reported the issue?

    I hope you thanked her for helping you become cognizant of the issue.

    When you say "customer" do you mean end users who buy the product or an organization which would pay you for your services and package what you deliver and then send it to the market?

    If she is an end user and is interested in a part-time job, you might want to ask her if she would be willing to offer more feedback.

    After all, how many people hit the Send Report button that Microsoft OS throws up, each time?

    How about the idea of checklists for your test team as compared to test cases?

    I do claims-based testing, which I learned from James Bach, and it seems to avoid such situations most of the time. You might want to try it: gather and list all the claims that your organization, business team, development team, and test team make about the product to other people, potential customers, or in advertisements. Test to find information about those claims. If it turns out that there exists a problem in the product which violates the claim, there might be more than one such problem. Hunt for it!

    Holding a tester responsible for this might not be a good idea, but educating the tester in this context is. A question that I ask myself is:

    What did I do differently from my previous testing efforts that I can call better testing?

    That's one of the ways, and you might want to discover other ways that suit you and the testers with whom you work.

    Also, you might want to consider blogging more often.

  2. This is the first time I am actually reading your blog, and I like your way of writing. It's very easy to read :)

  3. Pradeep,

    Excellent suggestions. I think I used the term "customer" inappropriately; I actually meant the end user. Our application is deployed live on our servers, and organizations pay us to use our web application. The end users are the staff who use our application to care for individuals who are mentally or developmentally challenged.


    > After all, how many people hit the Send Report button that Microsoft OS throws up, each time?

    We definitely thanked the end user. Our case is a little different from many other products. Our product is used by caregivers, and they take their profession extremely seriously because of the nature of the individuals they serve, so it is very common for our end users to hit the feedback button. In fact, my company hosts conferences with many of these organizations, specifically highlighting this feedback feature and why it is in their best interest to use it. We also have a great customer support team that gets back to each and every support request or piece of feedback (no automated replies), which encourages them further.

    This has helped our test team enormously in understanding the usage patterns of our end users and their expectations. We work very closely with customer support, who also assist us in coming up with scenarios. In fact, some of my testers also do part-time customer support work (although very little).

    So in a way, many of our end users are doing a part-time job improving our product (at no cost) :) We also take suggestions for new features very seriously. Many of our end users participate in online discussion forums about improving the product. This also happens during our conferences.


    > How about the idea of checklists for your test team as compared to test cases?

    That is a good suggestion. Our product is huge, yet we write zero test cases. We have checklists for some critical features and also for regression tests and post-deployment tests. However, checklists are not common for all new features or bug fixes. I have tried this with my team on several occasions and left it to them to decide to what extent to build these. For some features I encourage the checklists to be more informal than formal, since the features are ever-changing, and for others it is OK to skip them. I have yet to find a sweet spot for when and how many checklists to write. I am also trying to use mnemonics to assist with, or sometimes replace, these checklists. Not much progress there yet, but hopefully soon.


    > If it turns out that there exists a problem in the product which violates the claim, there might be more than one such problem. Hunt for it!

    Claims-based testing is one of our most important test strategies. If end users find problems with any of our claims, we risk losing our business because of the critical nature of the industry we serve. You made a very good point, and I think this is exactly where we went wrong: when we found some problems with one of our claims, we did not look further because the programmers downplayed the issues. This is, however, a rare case in our test team, because we usually have daily team discussions about the issues they have found and their strategies. I am also co-located with the team for instant assistance and updates. We encourage a lot of screaming and communication within the team :)


    > What did I do differently from my previous testing efforts that I can call better testing?

    This is something I really need to make a habit of. I usually analyze where we went wrong. I can understand how analyzing the other side of the spectrum can be just as important.


    > Also, you might want to consider blogging more often.

    Thanks. I will definitely try.

  4. Ahsan,

    Thanks. I am glad my writing appeals to you.

  5. Great write-up, Sajjadul. I am new to your blog and enjoyed reading the article.

    You have raised a very common scenario: developers saying "Don't test that". I usually stop testing the program in question, mark it as a risk, and mail my Manager and the development team if we are in the middle of or late in the project timeline. The reason is that I could forget to test it or may not have enough time to test it before release.

    The heuristic of mailing both my Manager and the development team has helped me in most cases. Usually the developer comes back to me to get his program tested, since I have raised an alarm about it.

    I also agree with your comments on the "blame game". I hope more Test Managers read your post. I feel the "blame game" destroys the team and the team's respect for their Manager.

    I also enjoyed your conversation with Pradeep in the comments section.

    Keep writing,

    -Sharath.B

  6. Hi,

    That was neatly written.
    A common scenario that we test managers face.

    I agree with your point of trusting our own testing team and trying to be a facilitator rather than a lawmaker.

    I too give a lot of freedom to my testing team members (though I keep an eye on some complex/critical things). By doing so:
    - They have freedom
    - They feel that their boss is not a "CONTROLLER"
    - They like working under their test manager
    - They are encouraged to think more and, in turn, be more productive
    - They help build a better product and, in turn, make the customer happy
    - At the end of the day the testers too are happy
    - I give them some guidelines and an approach on how to go about testing and, in turn, ask them to share their thoughts (introspect). By doing this, all the other peers learn something, and that in turn boosts the morale of the team.

    As Sharath rightly said, send an email or mention this item under "Risk Factors" in your status reports and circulate it to the higher authorities.

    Rgds,
    Raj

  7. Sharath,

    Thanks for sharing what works for you. This is exactly what we need.

    By the way, I think you and Pradeep took an excellent initiative in creating the testing demonstration videos. Keep up the good work.

  8. I feel there is a difference between 'letting the programming team know that a feature will not be tested' and 'testing the feature without informing the programming team until a serious issue is found in it'.

    The seriousness of the issue may vary.

    I would have followed the second option: take the programmer into confidence, test it without announcing it, and if no issues are found, fantastic.

    If issues are found, let the programmer know and slowly let all the stakeholders know.

    -Ajay

  9. Ajay,

    Of course, your choice would depend on the time constraints you are under as well as your testing mission. You will probably make the decision by analyzing the risks of not testing the questioned functionality.

    However, consider that the programmer or project manager may be more concerned about other areas of the system during your limited testing time, so you may have to balance your curiosity against the concerns of the team. If you spend significant time investigating areas that the team did not perceive as valuable, that may have other consequences.

    I would advise doing an initial investigation of the functionality that you suspect has important problems, and then deciding on the risks based on the information you have gathered. This initial investigation may span only a few hours (the time will largely depend on the type of application you are working on). Thanks.

  10. Hi

    I like your post!

    I am not sure how different your approach is from a popular testing strategy, Risk-Based Testing.

    Also, regarding why the tester did not test the feature: was it mentioned in the test plan that it would not be tested?
    Was the programmer a stakeholder?


    Out of curiosity, how critical did this bug eventually turn out to be?

  11. > After all, how many people hit the Send Report button that Microsoft OS throws up, each time?


    Let's try to break this question down:

    1. How many technical users?
    2. How many non-technical users?

  12. I especially liked your test management approach: giving more freedom to the testers and being their facilitator rather than their controller. That is exactly the definition of a mentor.
