It’s interesting to hear so many opinions on this from the people I speak with who work in agile environments. It’s an issue that often gets overlooked, and I believe it persists in agile environments as an old habit from previous working practices built around a verification model. I don’t believe we should ever see ‘bugs’ on a product backlog in the form of bugs. If you do, there may be a more fundamental problem with how the team works with the product owner: a sign that quality is being compromised in favour of meeting the goal, or that the team is influencing priority through defect thinking, where defects can be subjective. Quality should never be compromised to meet the goal of completing the story and/or sprint. If this is happening, it gives the business a false sense of progress and can be a dangerous path to mayhem, putting the product at risk.
Looking at the following diagrams, where bugs are recorded and added to an ever-growing backlog, it’s plain to see that the business gets the impression things are complete, and the effect this has on product revenue. The problem is that the backlog grows rapidly while items said to be complete may not be shippable. Taking the familiar agile iterative deployment approach at a very high level, the cost of not being able to ship can sometimes be catastrophic to ROI. Not captured in the diagrams is the loss of feedback from not being shippable, which can also reduce the ROI of the next shippable feature. Most people already in the agile world will be familiar with this iterative deployment model and its effects on ROI, but not all see how seriously the model is worsened when it comes to bugs. Carrying bugs over and adding them to the backlog can be very costly. The mindset of putting off until tomorrow what we can do today, in terms of quality, is not the path to progress and is fundamentally irresponsible.
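The ROI effect of delayed shipping can be sketched with a toy calculation (all numbers here are hypothetical and purely illustrative, not from the diagrams): a feature that ships on time starts earning revenue immediately, while one held back by carried-over bugs earns nothing until the delay is cleared.

```python
# Toy model (hypothetical numbers): total revenue over a 12-iteration
# horizon when each feature ships on time, versus when bug carry-over
# delays every feature's release by two iterations.

def cumulative_revenue(ship_delay, horizon=12, revenue_per_iteration=10):
    """Each feature ships at the end of its iteration plus any delay,
    then earns a fixed amount in every remaining iteration."""
    total = 0
    for feature in range(horizon):
        first_earning = feature + 1 + ship_delay   # iteration it starts earning
        earning_iterations = max(0, horizon - first_earning)
        total += earning_iterations * revenue_per_iteration
    return total

on_time = cumulative_revenue(ship_delay=0)
delayed = cumulative_revenue(ship_delay=2)
print(on_time, delayed)  # → 660 450
```

Even in this crude sketch, a two-iteration slip on every feature cuts total revenue by roughly a third, before accounting for the lost feedback the post mentions.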
Many agile teams have a product backlog but also maintain separate bug lists. Priority can only exist when there is one single list to work from; this is the same reasoning behind the Scrum ‘Product Owner’ not being plural. With separate lists, or multiple sources of work to pull from, priority is distorted and the value of the order of work can be significantly reduced. As simple and obvious as this sounds, some teams struggle with it, because creating a single prioritised list with stakeholders isn’t easy.
Beyond the reasons identified in the previous diagram for adding bugs to the existing backlog, maintaining separate bug lists can be very expensive. It reduces the value of production output and can promote low-value items above truly prioritised value. This in turn amounts to waste and, in some cases, can reduce revenue on a large scale.
Those who have separate bug lists, or bugs in the product backlog, need to question how and why these emerged. When questioning whether bugs exist, or where they exist, consider the following eight points:
1. When the product owner accepts a story, the story is complete. Complete means the product owner is satisfied it meets their requirements and is shippable in its current state, which in turn should be supported by the definition of done.
2. Scope can emerge during implementation and be negotiated with the product owner during a sprint.
3. Accepted stories with outstanding requirements have been de-scoped, and the remaining work re-prioritised against the backlog.
4. All items on the backlog are user stories, prioritised according to business value.
5. Tracking bugs demonstrates a lack of trust, which facilitates blame and finger-pointing.
6. Tracking bugs is an admission that quality has been, and can be, compromised.
7. Bugs lower morale and suggest to others outside the team that the team may not be on top of quality.
8. The team is responsible for quality; the product owner is responsible for scope.
The next question often asked when explaining the above is: “what about defects discovered post-implementation, surely these are bugs?”. My answer is usually a reference to points 1 and 3 above. If the specification has been met and accepted, how can these be defects against the specified desired behaviour? Such cases may be emerging requirements that need to be converted into stories and prioritised. If, on the other hand, behaviour has regressed through the introduction of a new story, we should evaluate the priority when re-introducing the requirement. If the story that introduced the change is still in progress, we should fix it immediately. If it is not in progress, we should consider the priority of the issue and turn it into a story, identifying the business value through the product owner. In such cases we should also ensure the case is covered by our automated tests, to get early feedback on any future changes.
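Covering a fixed regression with an automated test might look something like this (a hypothetical sketch; the function and scenario are illustrative, not from the original post): once the regression is fixed, a test pins the expected behaviour so any future change that reintroduces it fails fast.

```python
# Hypothetical example: a regression in a discount calculation once
# produced a negative price. After the fix, regression tests pin the
# behaviour so the next breaking change fails in CI immediately,
# rather than resurfacing later as a "bug" on a backlog.

def apply_discount(price, percent):
    """Illustrative production function: percentage discount, never negative."""
    discounted = price * (1 - percent / 100)
    return max(0.0, round(discounted, 2))

def test_full_discount_is_not_negative():
    # The original regression: a 100% discount yielded a negative price.
    assert apply_discount(19.99, 100) == 0.0

def test_discount_rounds_to_two_decimals():
    # Pin the rounding behaviour introduced by the fix.
    assert apply_discount(10.00, 33) == 6.7

# Run the regression tests directly (a test runner would discover these).
test_full_discount_is_not_negative()
test_discount_rounds_to_two_decimals()
print("regression tests passed")
```

The point is not the tool; it is that the fixed behaviour lives in an executable check rather than in a bug list nobody reads back.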
The question we really have to ask is why we record bugs at all. A behaviour I commonly see is testers recording bugs in bug-tracking tools. Recently I have been working with a team who have long lists of closed bugs in “Quality Center”, where all bugs found are fixed in line with the team’s definition of done. When I confronted this behaviour, I asked how often they look back through the previous bug lists and why they keep them. The answer was that they never look back, but keep the log in case anyone picks up on something that isn’t covered. To me this is counterproductive: it’s essentially a CYA log that has emerged from a verification-model environment with blame-culture thinking. Another way to look at it is that if you already have tests in place covering the cases mentioned, recording bugs in this manner is nothing but waste; waste generated by fear in the workplace.
I’d be interested in hearing your thoughts and feedback. The above is my own opinion from my own experience, although from speaking to others I know I’m not alone in these thoughts. If you’d like to add your opinion, please feel free to leave feedback below.