I love numbers.  Even more, I love puzzling them out to figure out what they are telling us.  They can often give us a great picture of where we stand on a project and where the problems are.  But without a correct understanding of what the numbers mean, they can be misused, with some very undesirable consequences!

I was working for a company where a suggestion was made for improving the overall quality of our software.  The suggestion was to track how many defects each developer was responsible for causing and how often their defects were reopened, and to factor this into their yearly bonus.  On the face of it, it seemed a reasonable way to reward good developers – right?  Fewer defects and a lower reopen rate = better developer; seems logical.  As the Manager of SQA, I was horrified, and I was able to successfully shoot it down before it got started.  My reasons:

1. Good developers will get the hardest code to work on.  You aren’t going to ask your best developer to work on typos and simple presentation issues; you are going to put your best people on the real gnarly stuff that needs a great mind.  The downside is that this work might produce more bugs, and these bugs might be the ones that tend to get reopened several times because the code is trickier.  Under this evaluation system, the junior developer with the easy projects is going to look great, and his spelling fixes will rarely get reopened!  Your best developers will be penalized for tackling the hardest projects.

2. The defect tracker will become a battleground.  The defect tracker needs to be used as a communication tool between the designers, the developers, and QA.  This is where you get to thrash out what happened between the design and reality; it is our “sandbox” to play in.  If the developers suddenly feel that every defect is a strike against them, QA will become the enemy instead of a partner in producing a good, solid product.  Every defect will be fought over.  QA will start to hesitate over every defect they add.  At first, they will just wander over and question the designer or the developer on the issue.  Maybe the developer will ask the QA analyst to hold off a day on logging it, promising to fix it in the next build.  Before you know it, the team will start avoiding the defect tracker and everything will be managed in verbal conversations.  Now you have no metrics at all to gauge how the project progressed and where the problem areas were.

3. The blame game takes over.  Defects are caused by a variety of reasons, not just developer error.  Sometimes the requirements are not fully spelled out, leaving the developer to guess what the designer or architect intended.  Use cases often cover the happy paths and some of the negative flows, but frequently omit or skimp on the alternative paths.  In modularized code, one module often doesn’t play nice with another, and debates start over which module has the problem.  QA often comes up with test cases that no one ever considered in the design stage.  If you place too much emphasis on whose fault it is, the team gets caught up in the blame game instead of solving the problem.  The goal in QA should be to not make it personal; it is all about building the best possible product.  We need to focus on finding fault in the SDLC process, not in individuals.

If there are underperformers on the team, it is up to the manager to figure this out and apply corrective action, not the rest of the team or the tools.  Good metrics will help us understand where our processes are failing, but they should not be used as a replacement for good management skills.  Sometimes simple ideas made with the best of intentions can unravel the whole team if you don’t think them through.