Everyone likes to think that testing means thoroughly testing every corner and possibility of the application before it goes out the door. In theory, this would be a wonderful plan, but in reality it is horribly expensive! Most companies would sink under the cost of such an effort. So where do you find the savings?

It all starts with the Test Plan…

I once created what I considered to be an amazing performance test plan that totally freaked out my product manager. The cost of the plan would have been outrageous, and he was aghast that I would even propose it. I then explained to him that what I had produced was the menu. You don’t have to pick every item on the menu, but you do need to understand your options before deciding what you want and why. Rather than predetermining what to serve, I was laying out all the possibilities so that we could review them with all the stakeholders and determine which tests were high priority, which were nice to have and which were unlikely to ever be needed. Doing this process up front:

  • limits possible scope creep – how many times have you had stakeholders think of new tests or requirements when you are already halfway through executing your plan?
  • reduces unnecessary work by testers – testers start working on a test only to have it dropped halfway through because it turns out not to be needed, or because time or resources ran out. More discussion and review up front prevents this.
  • prevents missing business risk issues – sometimes an incorrect assumption is made that a test is not required, but it was never discussed and all the ramifications were never brought to light. By documenting all possibilities and the reasons for any exclusion, you have a history to refer back to.
  • ensures that business concerns are prioritized in the testing strategy – the priority should always be the customer impact and the risk to the company. The best way to ensure this is incorporated is to bring the business stakeholders into the QA process as early as possible.

Are you using a Test Case Priority rating system?

A priority rating ensures that the test team works on the critical items first, so that if the project timing starts to unravel, you know your team has already addressed the high priority items. The same priority rating can be used for test execution, so that the critical areas are tested and the important bugs are found as early as possible in the test cycle. By adding a priority rating, you are asking the testers to constantly ask themselves, “How important is this test that I am writing?” It can also become a great coaching tool for your team. If you can quickly see that a tester is writing mostly low priority test cases, it gives the manager an opportunity to mentor them on what is important in the functionality and help them write higher quality test cases. If a tester rates something as high priority that is not, it gives you an opportunity to ensure they have a clear understanding of their area.
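If any of your test cases are automated, one lightweight way to make the rating both visible and executable is a marker convention. The sketch below is only an illustration: it assumes pytest, the marker names p1 and p3 are invented, and the authenticate stub stands in for whatever your real application does.

```python
import pytest

# Hypothetical application hook; replace with real calls into your system.
def authenticate(user: str, password: str) -> bool:
    return password == "correct-password"

@pytest.mark.p1  # critical: block the release if this fails
def test_login_with_valid_credentials():
    assert authenticate("alice", "correct-password")

@pytest.mark.p3  # lower priority in this example (ratings here are arbitrary)
def test_login_rejects_wrong_password():
    assert not authenticate("alice", "wrong-password")
```

With the markers registered in pytest.ini, running `pytest -m p1` executes only the critical cases, which is exactly the behaviour you want when the schedule starts to slip.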

Create a test matrix with the test purpose and priority before writing any test cases

A test matrix should always have each test case’s purpose clearly defined. Sometimes this is a field of its own; other times it is incorporated into the test case title. Either way, it makes for faster analysis of the test cases and helps the tester remember what the goal of the test is. I can’t tell you how many test cases I have reviewed where my primary question at the end was “what are you testing here?” It seems simple and obvious, but it gets missed many times, and without it a manager-level analysis would take forever.
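One way to keep the matrix honest is to treat it as plain data, with the purpose, the priority and any exclusion reason as required fields. The sketch below is only an illustration; the field names, IDs and sample rows are invented, and the same structure works just as well as a spreadsheet or a page in your test management tool.

```python
from dataclasses import dataclass

@dataclass
class MatrixEntry:
    case_id: str
    purpose: str              # what the test is trying to prove
    priority: int             # 1 = critical ... 3 = nice to have
    status: str = "planned"   # planned | excluded
    reason: str = ""          # why it was excluded, for the history trail

matrix = [
    MatrixEntry("LOGIN-001", "Valid user can log in", priority=1),
    MatrixEntry("LOGIN-014", "Password field trims trailing spaces",
                priority=3, status="excluded",
                reason="Handled by the framework; agreed at matrix review"),
]

# Review view: critical, in-scope work first
for entry in sorted(matrix, key=lambda e: e.priority):
    if entry.status == "planned":
        print(f"[P{entry.priority}] {entry.case_id}: {entry.purpose}")
```

Sorting the planned entries by priority gives reviewers the critical-first view before a single detailed test case has been written, and the excluded rows keep the “why not” on record.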

Once the test matrix has been determined, you can invite the other teams (developers, business analysts, architects, product managers) to review the list and give feedback. If you give them hundreds of detailed test cases, they either won’t have time to go over them or it becomes a very expensive exercise, given the cost of the time required. By having the test matrix reviewed before the test cases are written, you eliminate the possibility of a tester spending hours writing test cases that a developer could have told you right away only exercise the same piece of code already covered by other test cases.

Look for redundancy in Common Processes

Common processes are processes that will be executed by a large number of test cases. A good example of this is the login screen. You can often eliminate the need for individual positive login scenarios by ensuring that the variety of positive logins is already covered by other test cases, and concentrate the dedicated login tests on the negative scenarios.
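In an automated suite, the same idea often shows up as a shared setup step: every test that starts from a logged-in state exercises the positive login path for free, so the dedicated login tests only need to cover the negatives. The sketch below assumes pytest; the login stub, fixture name and credentials are invented placeholders for your real application.

```python
import pytest

# Hypothetical stand-in for driving the real login screen.
def login(user: str, password: str) -> bool:
    return password == "correct-password"

@pytest.fixture
def logged_in_session():
    # Every test that uses this fixture exercises the positive login path,
    # so separate "valid login" cases are not needed.
    assert login("alice", "correct-password")
    return {"user": "alice"}

def test_view_account_summary(logged_in_session):
    # A functional test that happens to re-cover positive login via setup.
    assert logged_in_session["user"] == "alice"

@pytest.mark.parametrize("password", ["", "wrong", "correct-password "])
def test_login_rejects_bad_password(password):
    # The dedicated login tests focus on the negative scenarios only.
    assert not login("alice", password)
```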

Look for unnecessary alternative test cases

How likely is it that a customer will want to use the application in this manner? If the number of tests required or the negative ramifications for the customer are high, maybe a suggestion can be made to the designers to block or disable those actions so the customer can never perform them.

Do boundary test analysis

Do you really need to test every value in a drop-down list? Which values are the critical ones to test? When testing a date field, have you established for the test team which dates are the critical ones to test?
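As a concrete illustration, the sketch below applies boundary value analysis to a hypothetical quantity field that must accept values from 1 to 100: instead of a hundred test cases, six values at and just beyond the edges carry almost all of the risk. The accepts_quantity rule is an assumption standing in for your real validation.

```python
import pytest

def accepts_quantity(qty: int) -> bool:
    return 1 <= qty <= 100   # hypothetical validation rule under test

@pytest.mark.parametrize("qty,expected", [
    (0, False),    # just below the lower boundary
    (1, True),     # lower boundary
    (2, True),     # just above the lower boundary
    (99, True),    # just below the upper boundary
    (100, True),   # upper boundary
    (101, False),  # just above the upper boundary
])
def test_quantity_boundaries(qty, expected):
    assert accepts_quantity(qty) == expected
```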

Also see my post – How Much Testing Is Enough?