For a long time I have been trying to deal with programmers who are not supportive of the testing cause. I suppose there is at least one in every company. They are usually senior programmers, but younger programmers are quick to learn, and pretty soon their population multiplies. They cannot comprehend how someone who knows so little about their code can possibly find important problems better than they can. Ironically, they are usually more cooperative with customers. Even then, I can't blame them, since I am certain they have had to deal with an abundance of weak, misguided testers. To them, testers add no value to their work. It took me a long time to realize that I would have to break this assumption before they would believe otherwise.
When I was a programmer I had my share of delusions. My constant complaints to management led them to believe that I should guide the testers. That was a tactical mistake (at least at the time). I considered myself a good programmer, and so my allegiance remained with the programmers. I assumed that to be better testers, the testers needed to be more like programmers, i.e. they needed to penetrate the black box and understand the internals of the software. Those who could not meet this condition were not good testers in my judgment. Thankfully, the sands of time made me wiser. Having executed and experienced the testing challenges myself, I am better equipped to deal with the obstacles. I guess I did it the unconventional way, i.e. I started as a Test Manager and then tried hard to learn testing. After a lot of disasters, though, I have finally become a tester. You truly need to be a tester to lead testers.
Now, getting back to the other nonbelievers, most of whom may never get the opportunity I got. Let me share an account of how I handled such a case in one project. I was debriefed on the project when it was close to release and was told to test it. My team and I never met or communicated with the programmers on this project. We were instructed that an online bug tracker was the preferred means of communication, since the programmers were extremely busy. They were very confident about their code and were skeptical about the value our testing would add. We were given a couple of days to test and report the problems. I usually do not negotiate deadlines; instead, I try to understand and prioritize what to test within that time frame. The discussion turned out to be quite interesting, but that is another story.
During the discussion and demonstration of the software, we tried to understand why they were so proud of it. It turned out they had already demonstrated the software to the customer and were in the final stage of fixing the bugs the customer had reported. So the problems we identified during the demonstration itself were brushed aside with "the customer did not complain about it" or "they will never do that". This response was very important for me to note. We then kick-started the testing.
I instructed my team to keep a log of every problem they encountered, no matter how trivial it seemed. I knew this approach would be met with sharp criticism from the programmers, so I cautioned the team not to report seemingly trivial problems until they discovered an important one. Then they could report the important problem with high priority, followed by the so-called trivial problems, all at the same time. This turned out to be extremely effective. It was as if the programmers were forgiving the testers for reporting low-priority problems because they got some important problems to deal with.
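For readers who like a tactic spelled out, here is a minimal sketch of that batching idea in Python. Everything in it (the Report class, the ReportBatcher, the file_fn callback) is my own illustration, not anything from the project's actual tracker:

    from dataclasses import dataclass, field

    @dataclass
    class Report:
        title: str
        severity: str  # "high" for important problems, "low" for trivial ones

    @dataclass
    class ReportBatcher:
        # Trivial reports are logged but held back until an important
        # problem is found; then everything is filed together, with the
        # important problem leading the batch.
        held: list = field(default_factory=list)

        def log(self, report: Report, file_fn) -> None:
            if report.severity == "high":
                file_fn(report)           # the important problem goes first
                for r in self.held:       # the held trivial ones ride along
                    file_fn(r)
                self.held.clear()
            else:
                self.held.append(report)  # keep the log, defer the filing

    batcher = ReportBatcher()
    batcher.log(Report("Tooltip typo", "low"), print)        # held, nothing filed
    batcher.log(Report("Data loss on save", "high"), print)  # both filed now

The point of the structure is psychological, not technical: the trivial reports always arrive in the shadow of something the programmers actually want to fix.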
We tried to report problems from the customer's perspective, describing the cases where the customer might find them a problem. Whenever a tester found a problem, he would call it out to the whole team. This avoided duplicate reports, giving the programmers less to complain about. It also let the other testers consider whether similar problems might exist in their own testing areas, and check for dependencies.
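Loosely, the shout-out acted as a shared de-duplication check before anything reached the tracker. A toy sketch of the same idea, with an assumed helper name (announce) and a deliberately naive signature:

    reported = set()  # shared by the whole team, like the verbal shout-out

    def announce(problem: str) -> bool:
        # Return True if the problem is new and worth filing.
        key = problem.strip().lower()  # naive signature; real matching is fuzzier
        if key in reported:
            return False               # someone already called this one out
        reported.add(key)
        return True

    assert announce("Crash when saving an empty file")
    assert not announce("crash when saving an empty file")  # duplicate, skipped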
We regrouped every two to three hours to discuss the important problems and our next charter. When the programmers' envoy told us that certain problems would not be fixed, we did not complain. If we felt the programmers had failed to realize the seriousness of certain problems, we simply elaborated the scenarios. We did not take it upon ourselves to dictate what must be fixed.
It all went pretty well. As the deadline approached, our time frame was extended, then extended again, and finally extended once more, so that we could keep testing and reporting problems they would fix. Some of the programmers even visited us in person to fix issues that were blocking our testing progress, and they appreciated our work. It was interesting that all this was achieved without insisting on more time, without claiming supreme authority over bug fixes, and without any confrontation with the programmers, while the testers had a ball.
Converting the nonbelievers (September 14, 2007)
There are a couple of very important takeaways from this post for every tester. If we just focus on our primary task with some tactics (like filing trivial issues along with critical ones, letting the programmers decide which defects should be fixed, sharing every finding with the whole team to remove duplication, revisiting the charter every couple of hours, and so on), everything else will follow. I also feel that programmers are becoming more mature, knowledgeable and open towards testing. If you as a tester demonstrate that you add value to the project and improve quality, programmers' attitude towards testing will certainly change.
Testing Geek,
This reply is long overdue and I am sorry for that. Thanks for extracting the key points from this long post.
Hopefully, in time, many unfortunate testers will come to enjoy their experiences as we do, but only if they want to.