In every software company I have worked at over the span of 14 years, at some point there has been a battle over the difference between the severity and the priority of a defect. The second battle, which follows soon after, is over who controls the determination.
It is critical that everyone working in the Software Development Life Cycle has a complete understanding of the rating systems, and of who has the authority to manage them, before starting a release. Otherwise the amount of time spent squabbling over this will be staggering and demoralizing to the team.
One event I particularly remember was a troublesome release I was managing a few years back. I had called a meeting with the CTO and told him that there were too many critical defects still unresolved and that our release date was in jeopardy. The next day, I was told that the problem had been resolved! I was amazed – had some super developers worked all night to come up with some amazing fixes? No, the CTO had reviewed all the critical defects and downgraded them to medium severity. Problem solved!
Needless to say, I wasn’t impressed. The severity ratings of these defects had all been reviewed and agreed upon when they were logged, and changing the severity to meet the release criteria (no open critical defects) struck me as cheating. It is a bit like moving the goal posts to suit a bad release. If upper management had decided that the risk of missing the release date was greater than the risk of releasing the application with these defects, then that was the discussion that should have been taking place. But trying to make everyone believe that these defects had suddenly become less severe overnight was never going to convince anyone. What should have happened was a change in priority to “Deferred to next release”.
Issue 1 – How do you rate the Severity of a Defect?
The following is a typical grading breakdown used for severity:
1. Critical / Showstopper – The defect results in the failure of the complete software system, of a subsystem, or of a software unit (program or module) within the system.
2. Major / High – The defect results in the failure of the complete software system, of a subsystem, or of a software unit (program or module) within the system. There is no way to make the failed component(s) work; however, there are acceptable alternatives or workarounds which will yield the desired result.
3. Average / Medium – The defect does not result in a failure, but causes the system to produce incorrect, incomplete, or inconsistent results, or the defect impairs the system’s usability.
4. Minor / Cosmetic – The defect does not cause a failure, does not impair usability, and the desired processing results are easily obtained by working around the defect.
I prefer a more detailed approach that also takes into account the following types of impact, giving a more accurate assessment of the defect’s effect on the software:
1. Impact on the customer experience – This means that the defect results in features of the product being unusable. The importance of the feature being impacted will be reflected in the severity rating. For example, a problem in a minor report might not have a high impact, while errors in a critical report would be a showstopper for the release.
2. Impact on the business risk – These are defects that affect security, stability, performance or compliance standards and could represent a risk to the company’s reputation or a financial risk of a lawsuit.
3. Impact on testing – If large areas of the functionality cannot be tested because they are blocked by a defect, the test team could be at risk of missing their milestones for release.
You could break the grading system down to reflect each impact area. For example, a showstopper is either a) a critical customer feature is broken, b) a compliance standard is at risk, or c) over 100 test cases are blocked.
Another solution is to have 3 sub-ratings for each of the impact areas and the final severity for the defect is the highest score from one of these 3 impacts. For example, a security bug might have a minor impact on testing and minor impact on customer experience but would be a showstopper on business risk. The final severity for this would be a showstopper.
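The "highest score wins" rule above can be sketched in a few lines of code. This is a minimal illustration, assuming the article's 1–4 scale (1 = Critical / Showstopper down to 4 = Minor / Cosmetic); the function and parameter names are illustrative, not from any real defect-tracking tool:

```python
# Illustrative sketch of the "worst sub-rating wins" severity rule.
# Uses the article's 1-4 scale, where a LOWER number is MORE severe,
# so the overall severity is the minimum of the three sub-ratings.

SEVERITY_LABELS = {
    1: "Critical / Showstopper",
    2: "Major / High",
    3: "Average / Medium",
    4: "Minor / Cosmetic",
}

def overall_severity(customer: int, business_risk: int, testing: int) -> str:
    """Return the final severity label: the worst of the three impact ratings."""
    return SEVERITY_LABELS[min(customer, business_risk, testing)]

# The security-bug example from the text: minor impact on testing and on
# customer experience, but a showstopper on business risk.
print(overall_severity(customer=4, business_risk=1, testing=4))
# → Critical / Showstopper
```

The same idea works with any number of impact areas: rate each one independently, then let the worst rating determine the defect's overall severity.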
The severity speaks only to the impact the defect has on the solution. It does not indicate when the bug should be fixed, although severity is one of the factors used to set the priority.
Issue 2 – How do you determine the Priority of a Defect?
The priority refers to when the defect needs to be fixed. This is used by the development teams to schedule and plan their builds / hot fixes and work effort. It can also be used to flag the defects that will be deferred to later releases. The severity should be considered when setting the priority but it won’t be the only consideration. For example, the development work on a performance bug fix might be delayed until later in the cycle when code is more stable and more of the new features have been checked in. A defect that requires a substantial change to the code should be done earlier in the release rather than later to allow for more time for regression testing.
Issue 3 – Who determines Severity and who determines Priority of a Defect?
The severity is first set by the person logging the defect, based on their best judgement. This rating should then be reviewed and approved by the Defect Managers and the QA Manager. The product managers, business analysts and development team might also be consulted for their input. Once an agreement has been reached on the severity, it should NOT change based on pending deadlines or workload. It should only change if more details emerge or something about the defect itself changes (for example, a workaround is found).
Whether the defect results in a minor or major code change does NOT change the severity of the defect to the business or the customer. If the development team is allowed to change the severity based on the code impact, it shifts the focus away from creating quality software, which should always be based on what is best for the customer and the company. A bug found late in the testing cycle that requires a large code change to fix doesn’t suddenly become less severe because there is no time left. You might decide that the risk of fixing it for this release is too great, and then the priority might be to defer it.
Priority is set by development, and it is up to development to ensure that enough defects are fixed to meet the exit criteria for the release. Quality Assurance might request that the priority of some defects be changed to support the testing schedule and objectives.
By keeping the two ratings separate, the final release report will give an accurate portrait of the quality and risks in the release. By blurring the two ratings, you run the risk of losing clarity and understating the business risks, which puts the company at risk.