This blog post is part of an Atlassian blog series raising awareness about testing innovation within the QA community. You can find the other posts in this series under the QA Innovation tag.

This post was written by Bryce Day, CEO of Catch and the driving force behind the development of the highly successful QA management tool Enterprise Tester. Bryce has been involved in software testing since 1997.

A question I hear frequently is “what are the key statistics you use to manage your testing?” Look on forums and discussion groups and you’ll find a number of differing views on how best to manage your testing. I, too, have an opinion, but unlike many others I like to focus on what I consider the basics.

Testers tend to talk testing. That means they want to know things like the number of test cases, how many have passed or failed, which tests are assigned to whom, and how many times an issue has been tested and retested. While all of these numbers make for interesting statistical analysis, they don’t provide the real insight that the management team needs. In fact there is only one measure that means anything, and it’s a four-letter word: risk!

Forget everything else, including all those fancy statistics: risk is the only thing that matters.

Not convinced? Well, let me change your mind.

Testing versus Quality

All testers talk about testing their organisation’s product, but frankly, who cares about testing? What they should be worried about is quality!

Quality is very different to testing. Sure, you can validate a product’s quality through testing, but how many of you are called Quality Managers instead of Test Managers? Or how often do you hear in a team “how much more testing have you got to go?” instead of “what’s the current quality of this build?”

There’s no doubt about it. Measuring quality and understanding the quality profile of the product is the key to what we as testers do.

To me as a manager, quality is a reflection of how much risk I’m prepared to take. For example, I want to buy a high-quality car because my appetite for the risk of it breaking down is low, but I’m willing to purchase a low-quality $2 toy from a discount store because my appetite for the risk of it breaking is much higher. How many of you reading this have discussed your company’s risk profile around a product release with your project manager or management team? I would guess very few. So forget the current way of thinking about testing and think of it instead from a risk perspective.

How much is too much?

The question now becomes: how much risk are we willing to take on this project/product/sprint?

Agile guys might say that they bake quality into the process. Sure, that sounds great, but do they actually know the risk profile of the project? My bet is that they know about as much as the waterfall or iterative methodology teams do, which isn’t much.

The only way to understand the level of risk the project is taking on is to measure its risk profile, which is to say… you need to verify it or sample it.

Turning our attention to some theory:

Risk = Probability of Occurrence x Value of the Loss

So if I had a 50% probability of something occurring, and if it occurred I’d lose $500, then my risk would equal 0.5 × $500 = $250.
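To make the arithmetic concrete, here’s a minimal sketch in Python; the numbers are simply those from the example above.

```python
def risk(probability: float, loss: float) -> float:
    """Risk = probability of occurrence x value of the loss."""
    return probability * loss

# A 50% chance of a $500 loss carries $250 of risk.
print(risk(0.5, 500))  # 250.0
```

If we translate this into current testing speak: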

Risk = Test Case Importance x Requirement Importance

In the QA space we can use estimates to achieve something similar. And the more we estimate, the better we get at it!

Taking the ‘value of the loss’ part of the risk calculation, we can estimate it from the level of importance assigned to the requirement: if a requirement is rated Critical, the value of the loss can be assumed to be higher than for a requirement rated High. Turning our attention to test cases: since a test case is a step-by-step walkthrough of a process derived from the basic path, it can serve as an approximation of the ‘probability of occurrence’ part of the risk calculation. Risk becomes some form of Test Case Importance x Requirement Importance.
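As a rough sketch of how those estimates combine (the ordinal scale below is an assumption for illustration; use whatever ratings your team agrees on):

```python
# Map ordinal importance ratings onto numbers. These particular weights
# are illustrative assumptions, not a prescribed scale.
IMPORTANCE = {"Low": 1, "Medium": 2, "High": 3, "Critical": 4}

def risk_score(test_case_importance: str, requirement_importance: str) -> int:
    """Risk = Test Case Importance x Requirement Importance."""
    return IMPORTANCE[test_case_importance] * IMPORTANCE[requirement_importance]

# A High-importance test case covering a Critical requirement.
print(risk_score("High", "Critical"))  # 3 * 4 -> 12
```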

The ‘value of the loss’ factor is arguably the more important of the two, since a big loss matters more to me than a small one. Then again, a small loss occurring frequently can be just as bad, if not worse. Using a weighting factor, I can smooth the risk profile across the requirement and test case combinations we have. And this is what I’ve come up with: a derived risk rating.
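Here is a minimal sketch of how such a weighted rating could be computed; the exponent used as the weighting factor and the roll-up by averaging are illustrative assumptions, not the exact scheme behind Enterprise Tester.

```python
# Ordinal importance ratings, as in the earlier sketch.
IMPORTANCE = {"Low": 1, "Medium": 2, "High": 3, "Critical": 4}

def weighted_risk(test_case_importance: str, requirement_importance: str,
                  loss_exponent: float = 1.5) -> float:
    """Test Case Importance x (Requirement Importance ** loss_exponent).

    Raising the requirement side to a power greater than 1 weights the
    'value of the loss' more heavily; 1.5 is an assumed figure.
    """
    return (IMPORTANCE[test_case_importance]
            * IMPORTANCE[requirement_importance] ** loss_exponent)

# Smooth the profile by averaging the weighted scores across every
# requirement/test case combination into one project-level rating.
pairs = [("High", "Critical"), ("Low", "High"), ("Medium", "Medium")]
rating = sum(weighted_risk(tc, req) for tc, req in pairs) / len(pairs)
print(round(rating, 1))  # about 11.6
```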

Risk Rating

Using this derived risk rating, as a Quality Manager I can focus on reducing the risk level of the project to one that is acceptable to the management team.

A Polarising View?

It’s interesting to see the discussions this has started, both with my team and with others. Some think it’s just a new spin on an old idea; for others it contextualises something they have never been able to measure; and a few find it too simplistic. For me, though, it provides something tangible that I can measure and discuss at a management level. It gives me a yardstick against which I can measure all of our product development, and a common language for our team to discuss the risk profile of each of our products.