It's been more than a year now since I set up our bug tracking in JIRA. In it, I set up an Issue Root Cause custom field with the following options:

- Incorrect coding
- Unclear / Missing / Incorrect Requirements
- Unclear / Missing / Incorrect Design
- Insufficient / Duplicated / Incorrect Testing
- Deployment Issue
- Environment
- Third-party issue

My thinking when I listed these options was that it would make it easy to identify which team was responsible for the cause of a bug. They're pretty straightforward - Incorrect coding is, of course, when the developers didn't follow the expectations, Unclear / Missing / Incorrect Requirements is when there's a gap in the requirements, and so on. It was also the way things were done at my previous company, so that was my initial knowledge source. Recently, I've been reading a few articles about Shift Left, reducing silos, and generally how quality is a team activity...
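As a rough illustration of how that field gets used in practice, here's a minimal sketch of tallying resolved bugs by root cause with the Python jira client. The server URL, credentials, project key, and custom field id (customfield_10200) are placeholders I made up for the example, not my actual setup.

```python
# Sketch: tally resolved bugs by the "Issue Root Cause" custom field.
# Assumes the python "jira" package; URL, credentials, project key, and the
# custom field id are hypothetical placeholders, not the real configuration.
from collections import Counter
from jira import JIRA

jira = JIRA(server="https://example.atlassian.net",
            basic_auth=("me@example.com", "api-token"))

issues = jira.search_issues(
    'project = EXAMPLE AND issuetype = Bug AND resolution = Done',
    maxResults=500)

root_causes = Counter()
for issue in issues:
    # Single-select custom fields come back as an option object with .value
    cause = getattr(issue.fields, "customfield_10200", None)
    root_causes[cause.value if cause else "Not set"] += 1

for cause, count in root_causes.most_common():
    print(f"{cause}: {count}")
```

Something like this makes it easy to see at a glance which category dominates over a given period.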
We just had our all-hands meeting, and I was watching the slides from our various development teams about their accomplishments for the year - from the release of our products to technology upgrades to some major refactors. I started imagining what our QA slide would look like, but I kept getting stuck. At the end of it all, the most important gauge of software development in general is that we've delivered working software. But in what area can the QA team take credit for that success? Is success the number of bugs found during testing?

When I was starting my new job, I was happy that I was able to catch a lot of bugs before the changes were deployed to production. One day, a developer asked me this: "Why is QA happy that there are a lot of bugs?" I told him, "Well, it's my job to find bugs before someone else finds them." And then he dryly replied, "I see." It got me thinking later about whether my mindset was actually correct. With the introduction of...