We just had our all-hands meeting, and I'm looking through the slides from our various development teams about their accomplishments for the year: from product releases to technology upgrades to some major refactors. I started imagining what our QA slide would look like, but I kept getting stuck.
At the end of it all, the most important measure of software development is whether we've delivered working software. But where exactly can the QA team take credit for that success?
Is success in the number of bugs found during testing?
When I started my new job, I was happy that I was able to catch a lot of bugs before the changes were deployed to production. One day, a developer asked me this: "Why is QA happy that there are a lot of bugs?"
I told him, "Well, it's my job to find bugs before someone else finds them."
And then he dryly replied, "I see."
It got me thinking later whether my mindset was actually correct. With the introduction of Shift Left, which pushes testing earlier in the process to find bugs as early as possible, the new goal is probably to "prevent as many bugs as you can."
But then, how does one even measure the number of bugs prevented? Do I count the number of clarifications I've cleared with the team before they started? Do I have to ask Doctor Strange to look into the universe where I didn't ask so I can compare the number of bugs?
Is success in how many test cases we have?
Having many test cases doesn't automatically mean greater test coverage. Each individual has their own way of breaking up test cases, so I don't think this metric gives much value.
Is success in how fast we execute our tests?
Manual test execution speed varies by person, right? It's another thing that depends on the individual. Some testers simply breeze through the test case and follow the steps as written, others get curious and do extra checks that aren't mentioned in the test case, and some want to understand the feature and how it's implemented before they even start.
It's sufficient that we run the right tests and complete them within the project timeline.
Is success in how much we maintain our test cases?
Let's be honest: only the QA team cares about the maintainability of test cases. Report this to the company as a success and it will barely register.
However, when we promote the test cases as living documentation, they become a valuable deliverable for the whole team. User stories, requirement documents, and technical documentation exist, but once the release is done, they stop getting updated. If there's a next release for the same feature, it gets a new set of documents. Test cases, on the other hand, as long as you organize and maintain them well, will serve as the key documentation everyone can use to get familiar with the system.
Is success in how much we explore new tools to improve how we test?
Maybe you created an internal tool to make testing a complex feature easier, or you discovered a tool that helps the testing process in general. These are real wins, but on their own they still don't answer the question.
The answer...?
I felt silly after writing everything above. Duh, of course, the QA team should take credit for the delivery of working software. The changes won't be deployed if we don't test them and give the green light. And while QA-specific initiatives and improvements may seem unimportant in this sea of developers, that probably comes down to a lack of visibility and familiarity with what we do and what matters to us.