From the early days, it was clear that, like any other machine, software needed a way to verify that it worked properly and was built without defects.
Software development processes have been heavily inspired by manufacturing industry standards, and testing and quality control were introduced into the product development life cycle early on. Software companies therefore frequently have a quality assurance team that focuses on setting up processes to guarantee robust software and on tracking results.
Those processes usually include a quality control process where the quality of the built artifact is assessed before it can be considered ready for users.
The quality control process usually achieves such confidence through the execution of a test plan. This is usually a checklist that a dedicated team goes through during the various phases of production to ensure the software behaves as expected.
Test plans
A test plan is composed of multiple test cases, each specifying the following:
- Preconditions: What's necessary to be able to verify the case
- Steps: Actions that have to succeed when executed in the specified order
- Postconditions: In which state the system is expected to be at the end of the steps
For software where users log in with a username and password, and where we might want to allow them to reset those credentials, a sample test case might look like the following table:
| Test Case | 2.2 - Change User Password |
|---|---|
| Preconditions | The user is registered and can log in with a known username and password |
| Steps | 1. Log in with the current credentials. 2. Open the change password form. 3. Provide the current password and the new password, then confirm. |
| Postconditions | The user can log in with the new password; the old password is rejected |
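A test case like the one above can be sketched as a simple data structure. The names here are illustrative, not taken from any particular test-management tool:

```python
from dataclasses import dataclass, field


@dataclass
class TestCase:
    """A manual test case: preconditions, ordered steps, postconditions."""
    identifier: str
    title: str
    preconditions: list[str] = field(default_factory=list)
    steps: list[str] = field(default_factory=list)
    postconditions: list[str] = field(default_factory=list)


# Hypothetical encoding of test case 2.2 from the table above.
change_password = TestCase(
    identifier="2.2",
    title="Change User Password",
    preconditions=["The user is registered and logged in"],
    steps=[
        "Open the account settings page",
        "Enter the current password and the new password",
        "Confirm the change",
    ],
    postconditions=["The user can log in with the new password"],
)
```

A test plan is then just a collection of such objects that the quality control team walks through in order.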
These test cases are manually verified by a dedicated team. A sample of them is usually selected for execution during development, but most of them are checked only once the development team has declared the work done.
This means that once the team finishes its work, it can take days or weeks for the release to happen, as the whole software has to be verified by humans clicking buttons, with all the unpredictability that involves: humans can get distracted, press the wrong button, or receive a phone call in the middle of a test case.
As software usage became more widespread, and business-to-consumer products became the norm, consumers started to appreciate faster release cycles. Companies that updated their products with new features frequently were those that ended up dominating the market in the long term.
If you think about modern release cycles, we are now used to getting a new version of our favorite mobile application weekly. Such applications are probably so complex that they involve thousands of test cases. If all those cases had to be performed by a human, there would be no way for the company to provide you with frequent releases.
The worst thing you can do, by the way, is release a broken product. Your users will lose confidence and switch to more reliable competitors if they can't get their job done due to crashes or bugs. So how can we deliver such frequent releases without reducing our test coverage and thus incurring more bugs?
The solution came from automating the test process. While we learned how to detect defects by writing and executing test plans, it's only by automating them that we can scale to the number of cases needed to ensure robust software in the long term.
Instead of having humans test software, have some other software test it. What a person does in seconds can happen in milliseconds with software, and you can run thousands of tests in a few minutes.
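As a minimal sketch of what such automation looks like, here is test case 2.2 expressed as an automated test using Python's built-in `unittest` module. The `User` class is a hypothetical stand-in for the real application code:

```python
import unittest


class User:
    """Hypothetical stand-in for the application's user model."""

    def __init__(self, username, password):
        self.username = username
        self._password = password

    def check_password(self, password):
        return password == self._password

    def change_password(self, old, new):
        if not self.check_password(old):
            raise ValueError("wrong current password")
        self._password = new


class TestChangeUserPassword(unittest.TestCase):
    def setUp(self):
        # Precondition: a registered user with a known password.
        self.user = User("alice", "old-secret")

    def test_change_password(self):
        # Steps: change the password, providing the current one.
        self.user.change_password("old-secret", "new-secret")
        # Postconditions: the new password works, the old one is rejected.
        self.assertTrue(self.user.check_password("new-secret"))
        self.assertFalse(self.user.check_password("old-secret"))


if __name__ == "__main__":
    unittest.main()
```

Running this takes milliseconds, and a test runner can execute thousands of such cases unattended, which is what makes frequent releases feasible.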