Software Testing Strategies

You're reading from   Software Testing Strategies A testing guide for the 2020s

Product type: Paperback
Published: December 2023
Publisher: Packt
ISBN-13: 9781837638024
Length: 378 pages
Edition: 1st
Authors (2): Matthew Heusser and Michael Larsen
Table of Contents (22 chapters)

Preface
Part 1: The Practice of Software Testing
  Chapter 1: Testing and Designing Tests
  Chapter 2: Fundamental Issues in Tooling and Automation
  Chapter 3: Programmer-Facing Testing
  Chapter 4: Customer-Facing Tests
  Chapter 5: Specialized Testing
  Chapter 6: Testing Related Skills
  Chapter 7: Test Data Management
Part 2: Testing and Software Delivery
  Chapter 8: Delivery Models and Testing
  Chapter 9: The Puzzle Pieces of Good Testing
  Chapter 10: Putting Your Test Strategy Together
  Chapter 11: Lean Software Testing
Part 3: Practicing Politics
  Chapter 12: Case Studies and Experience Reports
  Chapter 13: Testing Activities or a Testing Role?
  Chapter 14: Philosophy and Ethics in Software Testing
  Chapter 15: Words and Language About Work
  Chapter 16: Testing Strategy Applied
Index
Other Books You May Enjoy

Our scope - beyond button-pushing

Patrick Bailey, a professor at Calvin College, once ran a study in which he asked people their role and, if they could do only one form of testing, what it would be. Bailey found, strikingly, that people tend to consider the kind of testing tied to their own role the most valuable. Programmers found unit testing most valuable, internal customers favored user acceptance testing, business analysts favored system testing, and so on.

Project managers, for example, tend to see work as an assembly line. Anyone with a mantra like “plan the work, work the plan” and a documentation bias is going to like the idea of writing the tests down and then executing them. When we have seen that tried, the results are, frankly, of lower value. So the company criticizes the button-pushers, maybe laughing at them. Thomas More wrote about this dynamic in his 1516 book Utopia. To borrow from that author: first we create bad testers, and then we punish them.

That may be the first mistake, and it used to be the most common. Today, we see people who watched that first mistake being made, know it is foolish, and now view all human testing as that sort of low-value, scripted exercise. This group sees testing as something else: automation of the GUI, lower-level unit checks, and perhaps API tests. All of those have their part and place in this book, but they fail to capture the improvisational element of good testing. We’ll cover that improvisational effort in Chapter 1, at the GUI level, where it is most obvious, and then try to maintain that spirit throughout the chapters that follow.

In other words, this book is not limited to “just” button-pushing testing. Still, something happens in the hands of a skilled tester that needs to be studied and applied at all levels, throughout the process.

The scope of our book is all the ways to find out the status of the software, quickly, by exercising the code. We’ll be the first to admit that is not a complete picture of software quality. It does not include how to create good requirements, how to perform code inspections, or how to pair program. We see testing as a feedback mechanism; it is not the only one, and there is more to quality than that feedback.

“Just testing”, we thought, was more than enough for one book. That feedback is important; it is often neglected, and it is often done poorly. The second part of the book covers integrating testing into a delivery process. The third part, “Practicing Politics”, covers how to give feedback that the organization can actually use.

If you’ve ever heard “Why didn’t we test that?”, “Why didn’t we find that bug?”, or, perhaps worst of all, “Okay, you found that bug and prioritized it as must-fix, and we insisted it could be delayed, but why didn’t you advocate more effectively?”, then this book is for you. This book is also for you if:

  • You keep seeing obvious bugs in software and wish other people had caught them.
  • You want to get better at finding information quickly and expressing it well enough to change the outcome of the process.
  • You want to find ways to use the information you uncover to reduce the bug injection rate.
  • You want to be able to diagnose and explain how you made key risk/reward tradeoffs.

Hopefully, by now you realize that testing is serious, grown-up, risk-management work. Any beginner can follow a process, and any mid-level tester can tie up most software teams in delays while they wait for fixes. The real value in testing lies beyond that: making smart risk/reward decisions about where to invest limited resources in risk management.

You can think of this as three levels of testing. On level one, you go through the basic motions of using the application in the most simplistic way; this is the “happy path.” The second level of tester is on a bug hunt. This person views their job as finding bugs or, as one person once put it, to “cackle with glee” as they “make developers cry.” Where the first level is probably overly agreeable, the second can be outright adversarial to development. It is on the third level that we ask how much time to invest in which risks in order to “not be fooled” about quality while minimizing disruption to delivery speed. In some cases, finding problems with the product early can even increase delivery speed.

What is Testing?

Not too long ago or far away, one of the authors, Matthew, was the testing steward at a health insurance company in West Michigan. The insurance company had contracted with a local but nationally known consultancy to come in and work on some software maintenance. One of the consultants, a brilliant coder, shoved the keyboard away in frustration, shouting, “This is totally untestable!”

Matt picked up the keyboard and mouse and started to exercise the software, saying, “Sure it is, watch!” This was followed by, “Oh, you don’t mean the code is untestable, you mean you have no way to have a computer run it through pre-defined exercises and check the results. You mean it isn’t … test-automate-able, maybe?”

This is another example of what Pat Bailey was talking about in his research at Calvin. To the programmer, “tests” were automated pieces of code that checked the functionality. To Matt, the employee of the insurance company, testing was the process of figuring out information about the software.

At almost exactly the same time as this story, one of our mentors, Dr Cem Kaner, was giving a talk on “Testing as a Social Science” (https://kaner.com/pdfs/KanerSocialScienceTASSQ.pdf). In that presentation, Dr Kaner defined software testing this way.

Software Testing is:

  • A technical investigation
  • Conducted to provide quality-related information
  • About a software product
  • To a stakeholder

This book tends to use that as an operating definition. Testing is where you try to find out whether the thing you built will actually do the things you are asking it to do. While you can fool yourself with a schedule, a bad design, and code that doesn’t work, testing is the first process designed to make sure that we are not fooled. Kaner went on to argue, “Much of the most significant testing work looks more like applied psychology, economics, business management (etc.) than like programming”.

This will probably upset a lot of programmers, and maybe some DevOps people too. After all, the holy grail of testing in the 2020s is to automate all the checks: run them all, quickly, and report results automatically with No Humans Involved, or NHI.

People seem to forget that the holy grail is a fiction, with a great deal of energy wasted searching for a legend. The yellow brick road led to an Emerald City ruled by a wizard who was a fraud. People forget that Zeitgeist, the spirit of the age, implies not truth but a sense of fashion that will go out of style.

It’s not that we are against automation or tooling. Our problem is the dearth of information on how to do testing well.

Given a test framework (or exercise) …

  • What test should we run (or write) first?
  • What does that test result tell us?
  • When should we stop?
  • When we stop, what do we know?

This sounds like an interactive exercise, where the results of the first test inform the second. It is possible to select some subset of that exercise, turn it into an algorithm, write it up in code, and run it routinely to see if what worked today runs differently tomorrow. Some people call this regression testing. It is even possible to create the automated checks before the code, an approach called Test-Driven Development (TDD). Even when the tests are automated, the process of figuring out what to run and what the results tell us is the process we are more interested in.
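To make that concrete, here is a minimal sketch in Python using pytest. Everything in it is hypothetical; the apply_discount function and both checks are invented purely for illustration, not taken from the book. Written before the code existed, checks like these would be TDD; re-run on every build afterward, they become a regression suite.

    # A minimal, hypothetical sketch of automated checks using pytest.
    import pytest

    def apply_discount(price: float, percent: float) -> float:
        """Return price reduced by percent, rejecting impossible rates."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    def test_ten_percent_off():
        # In TDD style, this check is written first and fails until the code exists.
        assert apply_discount(100.00, 10) == 90.00

    def test_rejects_impossible_discount():
        # A second check, probing an edge the first one ignores.
        with pytest.raises(ValueError):
            apply_discount(100.00, 150)

Run today, those checks pass; run tomorrow, after someone edits apply_discount, they tell you whether what worked yesterday still works, which is all a regression suite can promise. Deciding that these two checks, and not twenty others, are worth writing is the part no framework does for you.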

Some people make a distinction between the institutionalized, run-it-every-time bits of code and the feedback-driven process of exploration. In particular, Michael Bolton uses the term “checks” for the algorithmic, unthinking comparisons and “testing” for the more expansive activity that often includes checking. We find this helpful, in that “automated testing” loses some of the flavor of Dr Kaner’s definition that we saw earlier. To that, we would add Heusser’s First Observation: after the first time you run an automated test to see if it passes, it ceases to be a true test. Instead, it becomes automated change detection, and what programmers do is create change!
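One way to picture that observation is a golden-master (snapshot) check, sketched below in Python. The produce_report function and the file name are hypothetical stand-ins, not anything from the book; the point is that such a check can only detect that the output changed, not whether the change is a regression or an improvement.

    # A hypothetical golden-master check illustrating "automated change detection".
    import json
    from pathlib import Path

    GOLDEN = Path("golden_report.json")

    def produce_report() -> dict:
        # Stand-in for the real system under test.
        return {"total": 90.0, "currency": "USD"}

    def test_report_matches_golden_master():
        if not GOLDEN.exists():
            # First run: record today's output and declare it "correct".
            GOLDEN.write_text(json.dumps(produce_report()))
        expected = json.loads(GOLDEN.read_text())
        # Any difference at all fails the check; a human still has to decide
        # whether the change is a bug or new, intended behavior.
        assert produce_report() == expected

The check is thoughtless by design: it compares, it does not judge. The judging, Bolton’s “testing”, happens when a person looks at the red result and asks what the change means.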

Our goal for the reader is to be able to do more than memorize the material, and more than analyze software according to a set of rules to come up with test ideas. We believe that in this book we have synthesized a large amount of material that superficially disagrees with itself and is inconsistent. We’ve interpreted it through our own ideas, laying out the test approaches that have the most value and then giving advice on how to balance them. When you’re done with this book, you should be able to look at a user interface or an API, identify many different ways to test it, slice up your limited time across those approaches, and describe your choices to someone else.

Whew. It’s finally here. The book is finished and ready to go out.

This book is about our (Matthew Heusser’s and Michael Larsen’s) stories. We are proud of it. It includes how we test, why we test, what influenced our thinking about testing, and a few exercises for you. To do that, we had to do a few unconventional things, like shifting our “person” among I, you, we, Matt, and Michael. We had to write about opinions, which you may disagree with, and experiences, which may not be relevant to you. One of our biggest challenges was deciding what to cut, to understand what would be most valuable to you, the reader. After doing hundreds of podcast interviews, consulting broadly, and attending a hundred or so conferences, we think we have some idea of what that might be. Yet a book widens our potential audience for feedback even further. We look forward to hearing from you about what we could have phrased more carefully and what we should add, subtract, or change. For today, though…

On with the show.
