How Many Test Cases Do You Need? How Many Do You Have? | Sparks

A recent discussion with a senior figure in the software QA domain brought up this very question, one I hadn't really thought about in my seven years as a QA professional. After the discussion was over, I found some time to reflect on it. Does it matter how many test cases there are to test a particular application?

First, a bit of theoretical background…

Well, this is something we all know. According to the ISTQB Foundation Level syllabus (which, by the way, can be considered a comprehensive handbook for a test engineer) and the IEEE 829 test documentation standard, test cases emerge as part of the test design and documentation process.

Given a software system to test, the test engineers first analyze the system specification to find test conditions, which, in simple terms, are things that could be tested. Once the engineers are satisfied that they have found a good set of conditions covering the system's functional and non-functional aspects, they select which of these to actually use for testing and prioritize them based on the importance and risk level assigned to each one. (The expected result for each is determined against a 'test oracle', such as the specification itself.)

Then come the test cases: detailed specifications of how one or more test conditions are actually tested. A predetermined set of preconditions, inputs, steps to follow, and expected outputs is documented as a test case, which resides in a formal test specification.
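To make that concrete, here is a minimal sketch of how such a test case record could be modeled in Python. The field names are my own illustration of the parts listed above, not something mandated by IEEE 829:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One documented test case: preconditions, inputs/steps, expected output."""
    case_id: str            # unique identifier within the test specification
    condition: str          # the test condition this case exercises
    preconditions: list     # state required before execution
    steps: list             # ordered actions for the tester to perform
    expected_output: str    # the observable result that means 'pass'

tc = TestCase(
    case_id="TC-001",
    condition="The calculator sums two integers",
    preconditions=["Calculator application is open"],
    steps=["Enter 2", "Press '+'", "Enter 3", "Press '='"],
    expected_output="Display shows 5",
)
```

A structure like this is essentially what a row in a test management tool holds; the formal test specification is then the collection of such records.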

On top of these, test procedures are also documented to specify how the test cases should be executed. If test automation comes into the picture, the procedures may extend to automation scripts and the like.

But the most important thing to remember is that no matter how these standards and 'conventional ways of doing things' are defined, actual execution depends on several practical factors.

Practically, you can’t always do it by the book…

Unless you’re developing and testing your own product and you are not in a hurry to take it to the market, which is highly unlikely, there is a plethora of constraints to consider.

First and foremost, there is time. If you are working on a fixed-cost project, you have to devote the maximum amount of available time to testing itself, which leaves documentation with a smaller share. If the project is budgeted on a time-and-materials basis, the customer will want to see the latest iteration in working condition after each build or sprint, again leaving documentation and design a smaller chunk of the available time.

But anyway, you cannot test without doing any of these…

Yes, you have to analyze the system and identify the test conditions; otherwise there is little or no way of knowing what kind of beast you are dealing with. And yes, you have to prepare test cases; otherwise there is no way to communicate to the project's other stakeholders what you test and how you do it. Even if you do all the testing by yourself, keeping track of what to do and what has been done is always best practice.

You may use different means to keep track of these test cases, ranging from spreadsheet-based checklists to well-structured and mapped test cases in a test management tool. The means of managing test cases can be selected to suit the constraints relevant to the project.
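As a minimal illustration of the spreadsheet end of that range, a checklist can be as simple as a CSV with an ID, a description, and a status column. The columns and statuses here are my own example, not a standard:

```python
import csv
import io

# A spreadsheet-style checklist, one row per test case (illustrative data).
CHECKLIST = """id,description,status
TC-001,Sum of two positive numbers,passed
TC-002,Sum with a negative number,failed
TC-003,Sum at the zero boundary,not run
"""

rows = list(csv.DictReader(io.StringIO(CHECKLIST)))
remaining = [row["id"] for row in rows if row["status"] == "not run"]
print(remaining)  # ['TC-003']
```

Even this much gives you the two things the article argues for: a record of what to test and a record of what has been done.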

Which method should you choose?

The common rule of thumb is that the more detail you have, the better the test case. For example, 'Input the numbers 2 and 3, calculate the total; the correct output should be 5' is more clear-cut than simply 'test the sum of two numbers'.

Still, it makes no sense to spend three days composing test cases if you only have five days to test, fix bugs, and retest a particular application. How the test cases should be prepared has to be decided on a case-by-case basis.

The best way is to be ‘Efficient & Effective’!

As the management gurus put it, 'to be efficient is to do the thing right, and to be effective is to do the right thing'. Putting this into context, your testing can safely be considered effective if the test cases you have adequately cover the functional and non-functional aspects of the software under test.

Efficiency, however, is in a league of its own. Depending on the person, one tester might cover a certain function with just one test case whereas another might require several. If all the functional and non-functional aspects can be covered with, say, 10 test cases, nothing is gained by writing more than 10, and fewer would leave gaps in coverage.
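One common way a single test case covers several conditions is a table-driven test. This is only a sketch of the idea, again using a hypothetical `add` function:

```python
def add(a, b):
    """Hypothetical function under test."""
    return a + b

# Each row exercises a different condition: positive inputs,
# a negative input, and the zero boundary.
CASES = [
    ((2, 3), 5),
    ((-1, 1), 0),
    ((0, 0), 0),
]

def test_add_table():
    # One documented test case, several conditions covered:
    # efficiency without giving up coverage.
    for (a, b), expected in CASES:
        assert add(a, b) == expected, f"add({a}, {b}) != {expected}"

test_add_table()
```

Whether this counts as one case or three is exactly the per-person variation described above; what matters is that the conditions are all covered and none is checked twice.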

Turns out, it is a numbers game…

There is a specific case I want to cite as an example. For one particular web-based application, my test team had prepared about 90 test cases, excluding certain edge scenarios. We ensured all the test conditions were covered, and the test execution time was typically 32 man-hours.

Once the application finally passed through my team and was sent to the client, they wanted to test it again using some third-party testers. After about two weeks, this test team reported 80 new defects! Neither my test team nor the development team was convinced by this result, so we called a triage meeting to review the defects. After thoroughly analyzing all 80 reports, the team determined that only two of the 80 incidents were actually valid defects.

When we took this back to the third-party test team, we found out that they had prepared over 300 test cases, and that executing all of them had taken the team (we were told there were two engineers) about two whole weeks. Amazed by this result, we cordially requested that they share the test cases, just to get an idea of what they had done. Needless to say, even after months and several reminders, it was apparent that they did not want to share the test cases with us. The conclusion, therefore, was that they had exaggerated the work they did.

Ultimately, it comes back to the numbers game I was talking about. Presenting your management or the client with a large number of test cases and logging a large number of defects is one method some test teams resort to, just to 'show' that they did good work. Since management or a client will rarely dig deep to examine validity in such situations, it is a safe bet to make.

At the end of the day, ‘How many test cases should be there?’

The first thing to do is identify the test conditions. The time spent here is never wasted, since it also helps you understand the system better. Then you have to understand the nature of the project, including how the management and the client perceive testing.

From there, use the test design techniques you prefer to carefully craft the test cases. Remember, the set of test cases that comes out of this should be both efficient and effective.

The challenge is to prepare neither more nor fewer than the right number of test cases: just enough to provide complete coverage and achieve the greatest effectiveness.

Reviewing this work in collaboration with the development and business teams is always good practice; it helps bridge any gaps or misunderstandings. Beyond that, as long as the set of test cases you have covers all the functional and non-functional aspects of the software application under test, the number of test cases does not really matter.

Image Courtesy: unsplash.com/@theheartdept


Deshal Weerasinghe is an Associate QA Lead at Zone24x7. He usually writes at the Smoke Test. You can also follow him on LinkedIn.
