PM-Clinic Week 2 Summary
Topic: Prioritizing QA late in the schedule
QA does not have enough time to do a full test pass before the unmovable release date.
How do you determine what will be tested? How do you track it? It may involve things
like asking development what they think changed the most or the product marketing team
about the most important features from their perspective. Then whatever criteria we
arrive at has to be mapped to roughly 1,000 test scenarios. I'm looking both for a
strategy for approaching this at the team level (who should have authority over this
decision?) and for tactics for actually getting it done.
Signed, NEQNET (Not enough quality not enough time)
High level points from the discussion
- This situation is a crisis. Whatever happens, a leader must take responsibility
for going back after the fact and examining how this situation could have been avoided.
- Unmovable dates are not beyond questioning. Any date is movable; it's a matter
of cost. Someone should push back on the date and find out why it is supposedly
unmovable. There may be other options. Knowing the reason also gives everyone a leg
to stand on when asking people to work harder. Instead of saying "the date can't move,"
they can offer an actual reason: "we have to launch before the Super Bowl or
our advertising budget is lost."
- Sacrifice features before quality. Cut a whole area instead of taking away 10%
of the quality value of each piece of the project. This will hurt less in the long run.
- PM, dev and test (and possibly marketing) must negotiate on making these kinds of
decisions. One may drive, but all must contribute.
- There need to be criteria for prioritizing features and test cases. For test
cases, a simple set of criteria is: 1) the importance of the functionality to the
customer/business; 2) the likelihood of problems in an area; 3) the cost of running
the test cases for an area (some are nearly free (automated), others are quite
expensive). Even if you think these criteria are lame, make them better - you need
some kind of criteria.
- The PM, or whoever owns cross-functional decisions, should drive high-level
prioritization, focusing on customer scenarios. If you can identify which
customer scenarios ("printing files", "doing simple DB searches",
etc.) matter most, it's straightforward to map test cases to those scenarios
and prioritize them that way. (Defining good scenarios is its own topic - someone
should email me if they want to nominate it.)
- If the QA team is crunched because of high-level schedule issues, the whole team
should pitch in to help. It's not the QA team's fault that the dev and PM teams
missed their dates.
- The PM should work with the QA team to do risk analysis. What are the possible problems
if all the test cases are not run? Is the unmovable date worth the risks? This should
happen before the decision is made to keep the dates where they are.
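To make the prioritization bullets above concrete, here is a minimal sketch of one way to combine them: rank customer scenarios, cut whole low-ranked scenarios, then order the surviving test cases by the three criteria (importance, likelihood of problems, cost to run). All names, numbers, and the scoring formula are invented for illustration; they are not from the discussion.

```python
# Hypothetical sketch: rank customer scenarios, cut whole low-ranked
# scenarios, then score surviving test cases by the three criteria.
# All data, names, and the formula are assumptions for illustration.

def priority_score(importance, likelihood, cost_hours):
    """Higher importance and failure likelihood raise priority; cost
    lowers it (automated tests, with near-zero cost, float to the top)."""
    return (importance * likelihood) / (1 + cost_hours)

# Scenarios, ranked by whoever owns cross-functional decisions.
scenario_rank = ["printing files", "doing simple DB searches", "exporting reports"]

test_cases = [
    # (name, scenario, importance 1-5, likelihood 1-5, cost in hours)
    ("print_portrait", "printing files", 5, 4, 2),
    ("search_by_name", "doing simple DB searches", 5, 3, 0),  # automated
    ("export_pdf", "exporting reports", 2, 2, 6),
]

# Cut whole low-ranked scenarios rather than thinning every area...
kept_scenarios = set(scenario_rank[:2])
kept = [t for t in test_cases if t[1] in kept_scenarios]

# ...then run the survivors in descending score order.
kept.sort(key=lambda t: priority_score(t[2], t[3], t[4]), reverse=True)
print([t[0] for t in kept])
```

The point of the sketch is the shape of the decision, not the formula: whole scenarios are cut first (per the "cut a whole area" advice), and only then are individual cases ordered within what remains.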
Low level points from the discussion
- Review the bug database. Look for bugs fixed for this release that are high priority.
- Focus on areas, but do less boundary testing (international character sets, escaping
of characters, etc.) in each area. These tests often find few bugs but take a lot of time.
- Focus on component integration points. Major issues very commonly occur there.
- Consider QFE (quick fix engineering), a process for rolling out release updates
soon after a major release. It allows you to stage your test coverage to compensate
for the schedule.
Myk O'leary, Gareth Howell, Neil Enns, Brian Hagins, Andrew Stellman, Gwynne Stoddart,
Scott Berkun (thanks!)