
Think piece: How do you measure QA team success?

We just had our all-hands meeting, and I watched the slides from our various development teams about their accomplishments for the year - from product releases to technology upgrades to some major refactors. I started imagining what our QA slide would look like, but I kept getting stuck.

At the end of it all, the most important gauge of software development is whether we've delivered working software. But where exactly can the QA team take credit for that success?

Is success in the number of bugs found during testing?

When I was starting my new job, I was happy that I could catch a lot of bugs before the changes were deployed to production. One day, a developer asked me: "Why is QA happy that there are a lot of bugs?"

I told him, "Well, it's my job to find bugs before someone else finds them."

And then he dryly replied, "I see."

It got me thinking later whether my mindset was actually correct. With Shift Left, which moves testing earlier in the process so bugs are found as soon as possible, the goal is probably better stated as "prevent as many bugs as you can."

But then, how does one even measure the number of bugs prevented? Do I count the number of clarifications I've cleared with the team before they started? Do I have to ask Doctor Strange to look into the universe where I didn't ask so I can compare the number of bugs?

Is success in how many test cases we have?

Having many test cases doesn't automatically mean greater test coverage. Each person breaks up their test cases differently, so I don't think this metric gives much value.

Is success in how fast we execute our tests?

In terms of manual test execution, speed varies by person, right? It's another thing that depends on the individual. Some testers breeze through a test case and follow the steps as written; some get curious mid-run and do extra checks the test case doesn't mention; others try to understand the feature and how it's implemented before they run the test case at all.

It's enough that we run the right tests and complete them within the project timeline.

Is success in how much we maintain our test cases?

Let's be honest: only the QA team cares about the maintainability of the test cases. If you report this to the company as a success, they probably won't care.

However, when we promote test cases as living documentation, they become a valuable deliverable for the whole team. User stories, requirement documents, and technical documentation exist, but once the release is done, they stop getting updated. If there's a next release for the same feature, a new set of documents gets created. But test cases, as long as you organize and maintain them properly, will serve as the key documentation that anyone can use to get familiar with the system.

Is success in how much we explore new tools to improve how we test?

Maybe you created an internal tool to make testing a complex feature easier, or you discovered a tool that can help the testing process in general.

The answer...?

I felt silly after writing everything above. Duh, of course the QA team should take credit for the delivery of working software. The changes won't be deployed if we don't test them and give the green light. And if QA-specific initiatives and improvements seem unimportant in this sea of developers, that's probably less about their value and more about developers' lack of visibility into what we do and what matters to us.
