Knowing when a software product is done, or is at least “good enough to ship,” is important.
Becoming skilled at making this determination takes serious thought, discipline, and work.
Assuming you have a software requirements document for the application, which defines exactly
what was to be developed and how it should behave, one key release criterion is that all
requirements have been met by the product you actually developed.
“But I don’t have a requirements document,” you say?
Then you’ll have a very tough time knowing whether the software you developed meets all the
requirements.
Requirements should initially drive your test plan and matrix of test cases. Requirements should
include not only look, feel, functionality, performance (both speed and size), and usability, but
also compatibility with various environments: operating systems, CPU platforms, I/O devices,
screen sizes, memory constraints, plugins if applicable, and so on.
All of these requirements should translate into test cases.
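For instance, a single requirement line might become a handful of test cases. Here’s a minimal sketch, assuming pytest and an invented requirement; the requirement number and the format_price function are illustrative, not from any particular project:

```python
# A minimal sketch, assuming pytest and a hypothetical requirement:
# "REQ-042: Prices shall display a currency symbol and two decimal places."

def format_price(amount_cents: int) -> str:
    """Illustrative code under test: render an integer number of cents."""
    return f"${amount_cents / 100:.2f}"

def test_req_042_price_formatting():
    # Each assertion traces back to REQ-042 in the test matrix.
    assert format_price(100) == "$1.00"
    assert format_price(99) == "$0.99"
    assert format_price(1234) == "$12.34"
```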
“But I don’t have a test plan or test cases,” you say?
Then you’re going to have a tough time objectively verifying that your application meets the
requirements.
The readiness criteria also need to include judging the reliability of the application. Again, your test
plan and test cases should include actual usage scenarios to cover the requirements, as well as
stressing the application: firing lots of input at it in a short period of time, dealing with loss of
network connections, dealing with lack of disk space, dealing with remote services being down
or providing invalid information, ensuring that your application is not vulnerable to various
forms of attack, and ensuring that no matter what happens you don’t corrupt the user’s data.
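As one sketch of what a “don’t corrupt the user’s data” test might look like, here is a write-then-rename save routine and a test that simulates a failure mid-save; the save_document function, its chunked interface, and the pytest usage are all assumptions for illustration:

```python
import os
import tempfile

import pytest

def save_document(path, chunks):
    """Write chunks to a temp file, then atomically rename it over the
    target, so a failure mid-write never clobbers the existing file."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            for chunk in chunks:
                f.write(chunk)
            f.flush()
            os.fsync(f.fileno())
        os.replace(tmp_path, path)  # atomic rename over the original
    except BaseException:
        os.unlink(tmp_path)  # clean up the partial temp file
        raise

def failing_chunks():
    yield b"partial "
    raise OSError("simulated: network connection lost")

def test_failed_save_never_corrupts_user_data(tmp_path):
    doc = tmp_path / "report.dat"
    doc.write_bytes(b"original data")
    with pytest.raises(OSError):
        save_document(str(doc), failing_chunks())
    # No matter what happens, the user's existing data must survive.
    assert doc.read_bytes() == b"original data"
```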
Of course, you should be keeping track of every bug you find and fix, and folding in additional
test cases to ensure those bugs don’t return the next time you tweak the source code. Each bug
should be evaluated to determine how severe it is and how likely a user is to encounter it.
This information is important to have when deciding whether you’ve reached your desired
quality level. The release criteria should involve looking at the number of severe and likely-seen
bugs, as well as the rate at which they are being found during the testing process.
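What those criteria look like will vary from team to team, but even a simple, explicit check beats a gut feeling. Here’s a sketch with entirely made-up fields and thresholds:

```python
# A sketch of a release-gate check; the Bug fields, severity scale,
# and thresholds are illustrative, not a recommendation.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Bug:
    severity: int    # 1 = showstopper ... 4 = cosmetic
    likelihood: int  # 1 = most users will hit it ... 4 = rare corner case
    found_on: date
    fixed: bool

def ready_to_ship(bugs, today, max_open_severe=0, max_found_last_week=3):
    """Two example criteria: no open severe-and-likely bugs, and a
    find rate that has tapered off during final testing."""
    open_severe = [b for b in bugs
                   if not b.fixed and b.severity <= 2 and b.likelihood <= 2]
    found_recently = [b for b in bugs
                      if b.found_on >= today - timedelta(days=7)]
    return (len(open_severe) <= max_open_severe
            and len(found_recently) <= max_found_last_week)
```

The point is not these particular numbers; it’s that the thresholds are written down, agreed on, and checked the same way every time.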
During the end game, it’s not fair to simply stop testing to avoid finding more “showstopping”
bugs. Testing needs to continue, and any bugs need to be reported and evaluated. A freeze on
new features needs to take place well before release, and the source code needs to be frozen for
some period of time during final testing, so that you’re not performing final testing on a moving
target.
One additional note on performance: the requirements should clearly state the performance
criteria that must be met, but improved performance cannot come at the expense of reliability.
As Dave Cutler, the father of Windows NT, once said to me:
“Yeah, it’s faster, but does it still get the right answer?”
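In test form, that means asserting correctness before reading the stopwatch. A sketch, assuming a hypothetical sort_records function and an invented two-second budget from the requirements:

```python
# A sketch, assuming the requirement "sorts 100,000 records in under
# 2 seconds"; sort_records and the budget are illustrative.
import random
import time

def sort_records(records):
    return sorted(records)  # stand-in for the real code under test

def test_performance_without_sacrificing_correctness():
    records = [random.randint(0, 10**9) for _ in range(100_000)]
    start = time.perf_counter()
    result = sort_records(records)
    elapsed = time.perf_counter() - start
    # Correctness first: does it still get the right answer?
    assert result == sorted(records)
    # Then the stated performance budget from the requirements.
    assert elapsed < 2.0
```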
Release criteria for a software (or firmware) product should be established at the beginning of
the project, before a line of code is written. Sure, they can change as market requirements,
hardware platforms, and the like change. But they need to be a clear and visible set of hurdles that the
team knows must be cleared before anyone says “Ship it!”
How do you know when you’re done? Release criteria. If you don’t have release criteria, it’s time
to establish them. Otherwise, how will you know when you’re done?