Measuring Software Quality

Software quality metrics must be among the top five discussed subjects if you’ve been working on a software engineering team. Discussions are usually endless, as there is no silver bullet. I was reading an article today which summarizes the typical issues teams face when deciding which metrics to track.

Should we track:

  • Number of tests?
  • Pass rate?
  • Plan versus actual?
  • Run to plan?
  • Pass to plan?
  • Number of defects?
  • Code coverage?
  • Functional coverage?
  • Performance figures?

They’re all reasonable metrics to track during the software development life cycle. They usually build confidence in our test progress and give us some hint about the quality of our software. However, the real quality of software reveals itself in the field, when actual customers use the product. Tracking field defects is a good start, but on its own it can be misleading, as it doesn’t tell you whether customers are actually using the product. From my perspective, the bottom line is revenue and how many bugs are reported against it. That’s my favorite KPI to track: number of bugs (reported from the field) per million dollars of revenue. The lower, the better, of course. It doesn’t reflect the raw quality of your software, but I think you would agree that good software sells like hot cakes and crappy software fails miserably. Correlating bugs and revenue is a reasonable indicator to track.
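As a minimal sketch of how this KPI could be computed (the function name and inputs are hypothetical, not from any particular tool):

```python
def bugs_per_million_revenue(field_bugs: int, revenue_usd: float) -> float:
    """Hypothetical KPI: field-reported bugs per million dollars of revenue.

    `field_bugs` counts only customer-reported problems that required a
    code change (see the definition below). Lower is better.
    """
    if revenue_usd <= 0:
        raise ValueError("revenue must be positive to normalize the bug count")
    return field_bugs / (revenue_usd / 1_000_000)

# Example: 42 field bugs against $12.5M revenue -> 3.36 bugs per $1M
print(bugs_per_million_revenue(42, 12_500_000))  # 3.36
```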

Keep in mind that I call a bug any problem reported by a customer that requires a code change. I’m not talking about problems that are reported as bugs by customers but actually turn out to be user errors, environment issues, requests for information, and so on. These problems are also important to track, but they indicate a different dimension of quality, i.e. usability, documentation, etc.
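One way to make that distinction concrete is to tag each field report with its resolution and count only those that needed a code change. The resolution categories below are illustrative assumptions, not a standard taxonomy:

```python
from dataclasses import dataclass
from enum import Enum

class Resolution(Enum):
    CODE_CHANGE = "code change"                 # counts as a bug for the KPI
    USER_ERROR = "user error"                   # usability signal, not a code bug
    ENVIRONMENT = "environment issue"           # deployment/infra signal
    INFO_REQUEST = "request for information"    # documentation signal

@dataclass
class FieldReport:
    report_id: str
    resolution: Resolution

def count_field_bugs(reports: list[FieldReport]) -> int:
    """Count only the field reports that required a code change."""
    return sum(1 for r in reports if r.resolution is Resolution.CODE_CHANGE)
```

Filtering this way keeps the KPI focused on code quality, while the non-code categories can feed separate usability or documentation metrics.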
