March 29, 2011

Vote for my video to send me to EuroSTAR


This is a bit of an unconventional post for this blog. I hope you will forgive me :)

The message is simple. Watch my video above and if you like it (I hope you do), please vote for me here. Remember, I am "Video 6: Sajjadul Hakim" :)

Your vote can take me to the EuroSTAR 2011 conference. Many of you do not know that I will be speaking at CAST 2011 in Seattle, on this same topic, i.e. "Understanding Gut Feelings in Software Testing". No, there was no voting for that. Just the belief of James Bach that my talk may be worth listening to :)

Voting ends on Friday, April 1. So please hurry. Thanks to the support of many of you, I am already neck and neck with the leading contestants. It's an intense competition. Your vote can make a difference now!

Vote for "Video 6: Sajjadul Hakim" here.

August 27, 2010

Masters of Illusion - Fake Testers (Part-1)

Before you start, let me warn you that this is an experiment. I encourage opinions even if you do not agree. I am not sure how some readers will take this, but I assure you that my writing does not implicate anyone in particular.

Testing has come to be a mainstream job. There was a time when I wouldn't get enough resumes when I posted a job ad for a testing position. Not anymore. Now I am overwhelmed by the number of resumes I receive. But even then I am not able to hire testers who can do magnificent testing. I try to hire those who can learn to do magnificent testing. Even that is pretty hard to find. I am concerned about this predicament. I notice that most people I meet, testers or non-testers, seem to think they know testing and are clueless about how wrong they are.

Being a tester is about busting illusions, but what if the tester is the illusionist? Being a tester is about not being fooled, but what if the tester is the one fooling his audience? Things get worse when this illusionist is the one who is rewarded and encouraged in his workplace and community. Because of this he is even able to build followers and zombies. How do you recognize this fraud? I have identified some traits from my own experience communicating with testers. Do question my thoughts and knowledge. This post is for those who do not want to be fooled, and to encourage them to question the motives of such testers.

The attributes I outline below are ones that, in my opinion, fake testers often possess. However, having these attributes does not necessarily make someone a phony. Assuming it does is what Nassim Nicholas Taleb calls the Round-trip Fallacy, i.e. "these statements are not interchangeable".


Escape Demonstration

You would think it's obvious that testing should be taught by "demonstrating" testing. Apparently that is not the case. Take, for example, the ISTQB trainings. Every time a student tells me that the training helped him learn more about testing, I ask him if the instructor ever demonstrated testing to him or let him test something. Of course, the answer is always NO.

Demonstrating testing is extremely important for teaching testing. But it's extremely difficult to do if you have never tested before or if you don't practice testing. This is probably the most effective way to expose a phony. You can tell a lot about someone's skill if you watch him test. It's also a great way to judge testers during interviews.

Usually a fake tester will avoid public testing demonstrations, and you can forget about rapid testing sessions of an hour or half an hour. He will always find some excuse. Mentors such as James Bach, Michael Bolton and Pradeep Soundararajan do demonstrations all the time, and when they can't, they articulate their testing in their blogs or articles. I sometimes demonstrate one-hour rapid testing sessions to my testers on an application they choose for me. It's a great way to teach and also build my own demonstration skills. I find that demonstrating for the purpose of teaching is more difficult, because you have to continuously communicate your thoughts and actions to your audience. I have certainly become better at it.

When my testers cannot reproduce a problem reported by the end user, I try to do it myself and then explain to them how I figured it out. Even if I can't reproduce it, I still explain how I investigated. If my testers see a problem but can't figure out how they found it, I sit next to them and try it out with them. If my testers tell me they can't figure out how to test something, I either explain it or demonstrate it in person. You will not see that happening with fake trainers or Test Managers.

Here is a question for you. If you have never seen your Test Manager or Trainer demonstrate or articulate testing, then how do you know they are testers?


Weekend Testing (TM) -- Why Not?

Writing the last section got me thinking about Weekend Testing. If you haven't heard of it before, read what James Bach has to say about it. Oh, and it's no longer just an Indian thing.

I know I love testing and jump at the first testing opportunity, so why was I not participating in Weekend Testing? The obvious reason was that I was already working long hours during the weekdays, and the electricity going out every other hour made it just too frustrating to go on an online bug chase. An enticing bug chase. A mind-boggling discussion opportunity with culturally diverse sapient testers of the context-driven testing community. The more I thought about it, the more pathetic my excuses sounded. This was what I had always dreamed about. So I decided to give it a try one weekend, when I happened to still be in my office as the clock struck 3pm IST. It was just as awesome as I expected it to be. So I did it again, and then again.

I know that many testers feel shy or are afraid to participate because of the possibility of performing badly in public. It's kind of the same fear some have of questioning in public. I know my testers are pretty enthusiastic about it, but they are shy. I can tell, because every time I do a Weekend Testing session, they find out what I tested and start testing it at work on their own. They feel they want to practice a little more before jumping in. That is ok with me. In fact I want to speed up this process. So recently I ditched the weekly one-hour rapid testing sessions and started a mock Weekend Testing session at work. It's like we are all sitting in the same room but using Skype to communicate. We pick an application at random in 10 minutes, then test it for the next 30 minutes. Then 5 minutes for the experience report, followed by discussions, all on Skype. Sometimes when we have less time, we discuss without Skype. I am hoping that pretty soon more of my testers will gather the courage to participate in Weekend Testing.

Anyone can make up their own brand of Weekend Testing if it works for them. Even James Bach said on Twitter, "They can set up their own weekend tester thing with just their own friends".

If you have never done Weekend Testing, that doesn't necessarily make you a phony. But if you never plan to take part in Weekend Testing, you probably aren't enthusiastic about testing at all. Here is what James Bach said on Twitter: "It would be like a carpenter saying that he'd never build something for fun, at home. Such a man is in the wrong job."


I Don't Know -- The Forbidden Phrase

When faced with a question you don't know the answer to, it is ok to say "I don't know". There is no shame in it, even if you are an expert. I say it all the time. Then I also say "But I will find out". However, that is not my exit strategy. I do try hard to find out. Unfortunately, fake testers don't want to do that. I notice this during conversations and even in testing forums. Usually I have disagreements and debates with non-context-driven testers about what they preach as "Best Practices". When I present them with credible scenarios and question their beliefs, they usually avoid the question or cherry-pick the questions they "think" they can answer. But eventually they say something like they are busy, or label me a heretic, or just don't reply. Sometimes they give vague replies. I understand this could also be a side effect of my "Transpection" practices, introduced by James Bach in his blog:


"One of the techniques I use for my own technical education is to ask someone a question or present them with a problem, then think through the same issue while listening to them work it out."
Source: http://www.satisfice.com/blog/archives/62


When I read about "transpection", the idea did not feel foreign to me. But the post made me aware of the side effects of transpection. Also, I now have a name for it. I have only used transpection on the testers I coach and close associates. I don't think that was intentional though; I did it without thinking about my approach. I don't remember ever doing it in public, so I can safely rule it out as the cause of my unpleasant scenarios. However, as James mentions, it is a legitimate reason for someone to get irritated with you and look the other way.


To Be Continued...

And so ends Part 1 of "Masters of Illusion - Fake Testers". I have a lot more to say in Part 2 and Part 3. But I wanted to give you a break to ponder over what I have written so far.

Coming up in Part 2:
  • The Techie Tester: Fake testers are usually oblivious to the notion that there is more to testing than technology and process...
  • Exploratory Testing is not Predictable: They seem to assume that exploratory testing is uncertain (because testers make it up as they go) and scripted testing is not...


Picture reference: paradisescrapbookboutique.typepad.com

October 27, 2009

Reconnaissance - A Testing Story

Our application is so huge that we have teams of programmers assigned to separate modules. One day, the development Team Lead of the messaging module came up to me and said...

Team Lead: We are giving a release today with the changes to the messaging module.

Me: Today?

Time Warp -- About Two Months Ago:

The application has a messaging module. We added a feature to enable or disable this module for users. While testing the feature, we found that some user names did not appear in the UI and so could not be added to the recipient list of a message. On further investigation it turned out that this was a side effect of another module: the user creation module. Users are created by entering user profile data. It was assumed that once a user is created, some default user preferences would be inserted into the database. When implementing the new preference for the messaging module, the code was written assuming that the preference values would always be found in the database. But it turned out that the preference data was not inserted when users were created with another feature: importing user profiles in bulk from an Excel spreadsheet. This feature did not insert the default preferences. The feature was developed about five years ago. Why was this bug not detected earlier?

There aren't too many preferences. When dependent modules and features look for their respective preferences, the code assumes certain defaults. These defaults are the same as the default preferences that are inserted. So this went undetected all these years, without any noticeable side effects.

Since the release date was close, a quick fix was made that basically checked for the absence of the messaging preferences. This fix was marked to be refactored at a later time.
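In case it helps to picture it, here is a rough sketch of the shape that quick fix might have taken. This is not our actual code; the class, preference key and default are made up for illustration. The idea is simply that, instead of assuming the preference row exists, the messaging module falls back to a default when the row is missing.

import java.util.HashMap;
import java.util.Map;

public class MessagingPreferenceLookup {
    // Stand-in for the rows that normal user creation inserts into the preference table
    private static final Map<String, String> PREFERENCES_FROM_DB = new HashMap<>();
    private static final boolean DEFAULT_MESSAGING_ENABLED = true;

    static boolean isMessagingEnabled(String userId) {
        String value = PREFERENCES_FROM_DB.get(userId + ":messaging.enabled");
        // The quick fix: users created via the Excel import have no preference row,
        // so fall back to the same default that normal user creation would insert.
        return (value != null) ? Boolean.parseBoolean(value) : DEFAULT_MESSAGING_ENABLED;
    }

    public static void main(String[] args) {
        // A bulk-imported user has no row in the map, yet messaging still works
        System.out.println(isMessagingEnabled("user-imported-from-excel")); // prints: true
    }
}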


Time Warp -- About a Week Ago:

Some performance improvements were being made to the application. Since the messaging module is widely used in the application, some of its database queries were optimized. In one of the optimizations, the extra check for missing messaging preferences was removed. Of course, everyone knew that this also required a bug fix for creating new users, i.e. the default preferences should always be inserted into the database. So that fix was made to the Excel spreadsheet import feature. Since it was suspected that the bug in the messaging list (the one where the user names did not appear in the UI) might reappear, two testers were assigned to test it.


Time Warp -- Back to the Present:

I had been preoccupied the whole week trying to write automated test scripts using a combination of FitNesse, Selenium RC, JUnit and the test harness I created two years ago. I had abandoned it back then because of a lack of resources and the difficulty of maintaining it. I was now revisiting it with some new ideas I had picked up over the years. Because of this I was not able to give much time to following up on the testing that was done recently. This was an expected side effect, and my peers and testers are well aware of it.
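To give a flavour of what those scripts looked like, here is a minimal JUnit and Selenium RC sketch. The URL, locators and user name are invented for illustration; the real tests and the harness glue around them were more involved than this.

import com.thoughtworks.selenium.DefaultSelenium;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class MessagingRecipientListTest {
    private DefaultSelenium selenium;

    @Before
    public void setUp() {
        // Assumes a Selenium RC server is already running on localhost:4444
        selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://app.example.com/");
        selenium.start();
    }

    @Test
    public void importedUserAppearsInRecipientList() {
        selenium.open("/messaging/compose");
        selenium.type("id=recipient-search", "excel-imported-user");
        selenium.click("id=search-button");
        selenium.waitForPageToLoad("30000");
        // A user created through the bulk Excel import should still be selectable as a recipient
        assertTrue(selenium.isTextPresent("excel-imported-user"));
    }

    @After
    public void tearDown() {
        selenium.stop();
    }
}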

So now getting back to the conversation...

Team Lead: We are giving a release today with the changes to the messaging module.

Me: Today?

Team Lead: Yes. They (my testers) said it was ok.

It was a performance improvement. I had heard a little about the changes that were made, but had not debriefed the testers on their progress. I also play the role of a Test Toolsmith, and there are occasions when I disappear from testing to develop tools to speed up our testing effort. Most of the time these are not automated tests, but tools that assist in test acceleration. I can probably write about those another day. This is way more interesting to write about now.

My team uses Yammer to write short notes about what they are working on. Yammer is very similar to Twitter, but it lets you create closed groups with all your coworkers automatically, based on their company email address. It is kind of like Twitter for the workplace. Yammer helps keep me aware of what is going on with my team when I am away, and I can catch up with the testers about their tasks in chunks of time later in the day. It is not a foolproof practice (in fact it is far from it), but it works well for us during such times. When I put aside my Toolsmith hat after a week (or a couple of weeks), we become more agile and start interacting throughout the day.

Me: Ok. Let me talk to them about their tests just to followup.

Whatever hat I was wearing at that time, it was important that I got back to being a Test Manager immediately. I trust my testers, but I am paranoid by nature. Two of my good testers were working on this, so I wasn't anticipating a disaster. Yet I felt curious enough to have a debrief with one of the testers (the other tester was busy working on something else).

The debrief went well, and she seemed to have some good ideas about what to test. I also called one of the programmers over to talk a little bit about any dependencies he was aware of. We didn't go too far with that. So I continued to bounce my test ideas off the tester. At one point I asked her...

Me: What problems did you find?

Tester: I didn't find any problems.

Me: I mean, did you find any problems after they made the changes?

Tester: Actually, I didn't find ANY problems. (She looked and sounded disappointed)

Me: You did not find ANY problems?

Ok, here is the thing. This was not a big issue, and it is not that we always found problems after a code change. In fact, sometimes things are pretty rosy. But every time I hear this, I have bells ringing in my head. It is as if "I didn't find any problems" is itself a heuristic that makes me wonder. Were we looking for the right problems? Were there important tests we didn't think of? So we started talking a little more about other possible tests. She had checked those as well. Impressive. So, I asked her...

Me: How do you know that you did not verify the wrong data?

Illusions happen. I have witnessed quite a few myself. In her case, this was a list of user names and titles that were word-wrapped, and the readability was not that great.

Tester: I created the user names and titles by labeling them with the kind of test I was doing. That's how I made sure.

I just love it when they can explain why they did what they did. It may seem that I am just making life difficult for my tester, but hey, things like this can go wrong, and in my opinion we need to be conscious of it. During the discussion I realized that she had not tested the other ways of creating a user profile. Yes, there are six ways to do this.

This is an application that has been in development for more than five years. It has many features and not-so-obvious dependencies. A thorough regression test before each release is impossible with the limited resources we have, so we have to analyze dependencies to figure out what to test. Since the dependencies are so meshed together, checklists haven't been working out that well for us during regression. We figured mind maps would be a useful way to remind us of dependencies. The problem is that once we put detailed dependencies down on the mind map, it becomes very difficult to follow. We needed a tool to zoom in on specific nodes, hide the rest of the clutter and show their dependencies; then zoom in on a specific dependent node and see if it has its own set of dependencies; then maybe follow the trail if necessary. An interesting tool that does something similar with mind maps is PersonalBrain. It is not very intuitive and does need some getting used to. Its user interface was not exactly what we had in mind, but it is still helpful when you want to manage complicated dependencies. Here is a snapshot of what the dependencies of creating user profiles looked like (the first one is zoomed out, and the second one is zoomed in on Create User to reveal more branches)...




Getting back to the story. Luckily the Team Lead was walking by, so I told him that we had not tested all these other ways of creating a user profile. I knew there wasn't much time left before the release, and it would take a while to test all these features, so I wanted to see if he thought we deserved more time. He said they hadn't made any changes there, and these features were developed a long time ago. There were no signs of any problems. Ok, so clearly this did not have any priority for him, and with what I had analyzed so far I didn't think it had much priority either.

Back to the debrief with the tester. While I was talking to her, I had asked another tester (who is also a Test Toolsmith) to pull up the source code change logs. Now that we had exhausted our test ideas, I wanted to see if I could get any clues from looking at the code differences. It is worth noting that although I was once a programmer, I don't know much about this application's source code (which, by the way, is pretty huge). I never invested much time in reviewing it. I don't usually analyze the code changes, but since this was dubbed a very small change, and the code differences looked friendly enough, I thought I could dare to investigate.

I could only make out some query changes. The programmer was kind enough to explain them. He calmly reiterated that it was simply a performance tweak. He had removed a check for a null value in a particular table column. That was basically it. He explained that this value would have been null when importing user profiles from an Excel spreadsheet. Even that had been fixed by another programmer. Also, they would be running a migration script on the database to update these null values. I knew about the migration script before, but now it was all making sense. I realized that everyone was very focused on this one particular feature. So I went to talk to the programmer who had fixed the Excel import feature. He said that his fix was only for the import feature and would not affect any other features for creating user profiles. Well, that's what he said. There are six ways to create a user profile, and these were implemented about five years ago. The original programmers were long gone, and this programmer was assigned to fix this very simple bug for this very specific case. He didn't know about 'all' the other ways of creating user profiles, and said he had not reviewed the code for 'all' the other implementations either.

Suddenly I felt an urgency to test every known way of creating a user profile before the release. Clearly there wasn't any time left, but from the evidence I had gathered, even though there was little proof that a problem actually existed, there was just too much uncertainty and too much at stake. The problem with this change was that if it went live, and the other user profile creation procedures had the same bug, those users would suddenly stop appearing in the user lists used to send messages. That would affect a major portion of existing users, and since we had more than 50,000 users, it could result in a customer support nightmare.

So I quickly instructed another tester (who usually deals with the application's administration-related features) to start testing the other five ways of creating a user profile, while I continued the debrief (we had some new dependencies to discuss). I briefly told her what to look for, i.e. the database table values she should monitor, and to check the values of the messaging preferences when creating user profiles from the UI. Note that she was not a geek, nor was she very technical. But checking database values is not that difficult. Since we run Oracle, we use Oracle SQL Developer to open tables and filter values. This is very simple to do. In this particular case no complicated SQL query writing was required. However, if anyone did need specific SQL queries, someone with more technical experience would write them for them.
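For anyone curious about what that check amounts to, here is a small sketch of the kind of lookup involved, written as a JDBC program rather than clicked through in SQL Developer. The connection string, credentials, table and column names are placeholders, not our actual schema, and it assumes the Oracle JDBC driver is on the classpath.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class MessagingPreferenceCheck {
    public static void main(String[] args) throws Exception {
        long userId = Long.parseLong(args[0]); // the user id we extracted from the browser HTML
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521:ORCL", "testuser", "secret")) {
            String sql = "SELECT preference_key, preference_value "
                       + "FROM user_preferences WHERE user_id = ?";
            try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setLong(1, userId);
                try (ResultSet rs = stmt.executeQuery()) {
                    boolean found = false;
                    while (rs.next()) {
                        found = true;
                        System.out.println(rs.getString("preference_key") + " = "
                                + rs.getString("preference_value"));
                    }
                    if (!found) {
                        // This is the symptom we were hunting for:
                        // no default preference rows were ever inserted for this user
                        System.out.println("No preference rows found for user " + userId);
                    }
                }
            }
        }
    }
}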

After about ten to fifteen minutes, the results of one of the tests were in...

Tester: Are you sure there will be a new entry in the table?

Time for me to look into the results. We have two database servers running. We usually keep one server for ongoing development changes to the database tables. These changes are then turned into migration scripts by the programmers and database administrators. The migration scripts are run on the other server, which holds a snapshot similar to the live database. This is done to verify whether the migration scripts cause any side effects.

I wanted to make sure she was checking the right database. So we modified some values from the UI and checked whether the value changed in the corresponding table. It did. I wanted to make sure she was extracting the correct user ID from the browser HTML and verifying that in the database. She was. I wanted to make sure our test procedure was correct. So we did the Excel import test again and checked whether it entered the relevant values in the preference table. It did. Clearly there was a problem.

I informed the Team Lead of this and told him we were checking the other methods as well. He quickly went to investigate the code with his programmer. After a while the next test result came in. Same problem. Then the next. Same problem. All other methods of creating a user profile had the same problem. It was finally decided that the release would be postponed to another day, until these problems were fixed.

Picture reference: afrotc.com

November 15, 2008

Surviving a downsizing

I think it is fair to assume that if you don't have programmers, you will not have a product. So when your employers run into a financial crisis, will they prefer to let go of their testers before considering the programmers? A recent discussion thread at Software Testing Club got me thinking.

Testers are mostly service providers, i.e. they provide quality-related information about the product to help the stakeholders and programmers make critical decisions. They ask important questions that are generally overlooked by others. These are usually regarded as very important functions, since most others are not performing them. It is a different mindset and a different set of skills, which most programmers, project managers or customer support personnel cannot acquire very quickly.

There is a popular myth that testers with technical knowledge or automation skills may be preferred over those with manual (or sapient) testing skills. We must be careful not to undermine the value and skills of the so-called manual testers. It is rather questionable how much value automation testers add to the project (it depends on the kind of product, of course), since they are not really questioning the product the way the other testers are. The automation tasks may easily be taken over by the programmers, since it is programming, and learning new tools may not be that difficult. But if automation testers also have important sapient testing skills, then that may not be the case.

There is another popular myth that programmers will probably be better at testing their product, since they created it. I think creating and questioning your creation are two different skills that are very difficult to practice together. They require very different mindsets and experiences. However, I can understand that programmers can write very good unit tests since that requires intimate knowledge of the code.

In my opinion, during any financial crisis the employees who are more dynamic will be the survivors. Testers may actually have an edge here, since they would probably know about the business domain, about customer issues, about the overall product infrastructure, about product usability, about recommending important new features, etc. Testers who are not the quality police and are able to take on deadline challenges will probably be preferred when management decides to reconsider project timelines. Testers who are able to derive important feature-related information about the product by questioning and exploring, rather than always demanding spelled-out specifications, may seem more favorable. My point is that if management perceives the tester not as a liability but as someone facilitating the project in essential ways, it will be much harder for management to consider them for downsizing.

The value a tester brings to the team is about more than finding bugs. The product will always have bugs, even with testers, since testing cannot ensure the absence of bugs. A tester's value depends on how he questions the perceived value of the product. For example, if the tester is able to identify problems in the product that could disappoint or frustrate the end user, he has successfully defended the expected quality of the product. It is true that some companies do not experience this value from their testers, maybe because the testers do not have the required skills. If such testers want to survive, they need to start giving attention to these skills.

Unfortunately, downsizing may not always depend on performance; it may be more of a budget issue. So it may very well be the case that good testers are shown the door simply because the company cannot meet the budget. However, consider another angle to this predicament. If your company is not as huge as Microsoft, then chances are that you have far fewer testers than programmers. This ratio may tilt in favor of the testers under a strict budget constraint where a certain number of employees need to be laid off. Remember that not all programmers' skills are equally important to management. Programmers face an equal challenge to prove their worth, especially since they will probably be much higher in number than the testers, and therefore more likely candidates for downsizing. So if management values the work done by the testers, the ratio of layoffs may not favor the programmers. In the thread at Software Testing Club, Jim Hazen shared a very interesting experience depicting such a scenario:

I've been in Software for 20+ years, 20 of it in Testing. I have been through 3 merger/acquisitions and a few companies downsizing. I have survived some of those events, others I was not as lucky. Now I have seen situations where the whole development team was cut loose (because they did F'up badly) and the Test group kept intact (management figured that other developers in the company could be re-tasked and could take over the code, but that they were understaffed on testing as is and because the product was close to shipping they needed to keep the test staff). This was a unique situation.

You will notice that most of what I wrote above assumes you have a mature and responsible management that recognizes the non-monetary value added by testers. The kinds of value addition I wrote about are in no way worthless activities. Yet they may not receive the credit they deserve from immature management. Naive management finds it easier to perceive programmers and salespeople as adding monetary value, rather than testers. Nevertheless, understand that this is only in the narrow sense of adding value. In a recent blog post, Michael Bolton suggested some ways testers can add monetary value. Whenever I talk about testers adding value, I mean it in the broader sense. This will always be difficult for immature management to realize if they are strictly looking to quantify the value. But like Michael says, "there are lots of things in the world that we value in qualitative ways."

Downsizing decisions depend on the kind of company you work for. I do not work for employers who do not value my testing skills and my contributions. I would advise other testers to do the same. Of course, I also do my part in proving my worth to management.

Picture reference: msnbc.com

October 16, 2008

Software Testing for Dummies?

In my ongoing debates and discussions about saying no to testing certifications, I came across a very interesting comment:

BBST is an excellent course (so far what I have learned) and I think every tester should learn from it. But would you please tell me how many testers in Bangladesh actually learned or will learn all the lessons from this course. I know very few of them. Many tester will start learning very seriously but after few days he/she will lose interest because it's free and it will not give any direct output, many of us (especially Bangladeshi people, of course there are many exceptions) are always looking for instant output for their given effort. But most of them have a lot of potential.

I have a feeling this holds true even in other countries. Comments like this are pretty common, actually. The implication is that testing certifications provide an easier entry into this craft. They are easier to prepare for and pass. Also, since certifications are not free, people will take them seriously, or at least try hard to get a return on their investment. That would be a lot easier than reading the blogs and articles of veteran testers, or participating in online testing forums, or coming up with context-driven testing strategies, or blah blah blah. So fortunately there is a way to become INSTANT testers.

Unfortunately it is time to break the bad news. I believe...

Testing is not for the LAZY.
Testing is not for the IMPATIENT.
Testing is not for those who seek SHORTCUTS.
Testing is not for those who do not know how to LEARN quickly.
Testing is not for those who are not SHARP.
Testing is not for those who do not have PASSION for it.
Testing is not for those who cannot QUESTION.
Testing is not for those who are not INVESTIGATORS.
Testing is not for PROCESS freaks.
Testing is not for CONTROL freaks.
Testing is not for those who cannot be DYNAMIC.
Testing is not for those who cannot deal with UNCERTAINTIES.

I believe that if any tester (or wannabe) falls under any of the statements I just listed, they should rethink, or quit now! They should find another suitable profession that matches their characteristics. Go back to school, or go back to programming, or go back to customer support, or take an administrative job, or whatever, to build a career in something else. Certifications are a way of attracting many people with unfavorable characteristics into our profession.

Sometimes I have felt there were people who had potential. But while working with them I learned that they lacked the attributes I respect in a tester. Some were just plain lazy and wanted shortcuts to becoming skilled. There are no shortcuts.

As testers, we are investigators. We are told to investigate something whose complexities are least understood, i.e. software. We are able to spot and analyze clues to problems that are generally overlooked. We ask questions that were not perceived by others. We expose the illusions about the software. Our clients are fascinated by our reports. This can't be easy. That is why we need intelligent people in this profession. That is why I believe that testing is only for the talented.

Now that is a profession I am proud to be in. That is why people who have these qualities are successful.