You should now have a better idea of what testing can look like across different companies and applications. In this section, we will review some methods for improving each of those situations.
Identifying improvement areas
The test mentality has a lot to do with asking questions and being curious. We started the chapter by asking some questions, and here are some more to help you discover areas where you could improve quality:
- Is there any repetition that could be reduced? If so, what? How could you automate those processes?
- How many tests do you have of each type? We will be talking about the test pyramid in the following chapters; what does yours look like? Are there any other types of tests that could be beneficial for you? Are there any tests that are at the wrong pyramid level (or in more than one)?
- How long does it take to sign off on each deployment? The ideal time should be under 15 minutes (which does not mean you cannot test further in other environments prior to or after deployment).
- How much do you rely on your current tests? Are you testing what you should be testing? For example, I have frequently seen tests related to downloading documents from the browser. Unless your app is a browser, you should be able to trust that the download works properly. Is there another way of testing that document without going through the pain of downloading it? Are there other tests that cover functionality you do not need to test?
- How much do you trust your tests? If tests are unreliable or out of date, are they adding any value? Keeping the test suite clean is highly recommended; if you use a version control system and a feature ever comes back, you can retrieve the old test code from the version history.
- Are you using any tools to track issues and understand where you do or do not need tests?
- Do you have the right tests for the right time? It is important to understand which tests to run and when to run each throughout development. We will discuss this further in the test pyramid chapters. We should also make sure we understand why we are testing or why we need a particular test. Avoid adding tests just because other companies have them, and avoid imposing a set of tests that are not needed yet.
- Lastly, if it is still hard to discern when something is needed, I highly recommend talking to a professional; a short consultation can give you more tailored advice.
Building the right team – testing roles and skills
Let us take a bit of time to define testing roles, as I have found that companies do not seem to agree on their definitions, and it is important to understand which people I am referring to throughout the book. I will also add some tips to help each of the roles grow in Chapter 12, Taking Your Testing to the Next Level.
Having a test expert help figure out the maturity of the company and what is needed to improve the quality of the product is particularly important. Test managers and test architects should be distinct positions. However, not all companies need both positions, and sometimes the job can be done by the same person. In some cases, automation is performed by developers, other times by developers in test or by QA; sometimes they are even called “automators” (which I believe to be a made-up word).
Rather than thinking of the following as “job positions,” you could also consider them “roles” that can be performed by different professionals as needed.
Test manager
A test manager makes sure the tests are performed correctly (created by the test or dev team) and good practices are implemented. They need to understand how and what to look for in the quality area. The position requires deep knowledge of testing and people skills.
Test architect
The architect designs the frameworks and tools to be used for testing and can give practical development advice to the people building test code. This position requires deep technical knowledge, experience in planning and building tools from scratch, and a thorough understanding of testing. Sometimes this position is covered by an SDET.
Software development engineer
Software development engineers (SDEs) are also known as developers. They are the people in charge of building features and, depending on the company, in charge of more or less of the test code.
Manual testers
Some people refer to manual testers as QA testers. They are knowledgeable and passionate about applications, the issues that could arise, and test methodologies. However, in some companies, QA testers also write some automation (generally for the user interface (UI)). Some companies invest in teaching them automation and provide tooling to help them achieve more in less time, including automated behavior-driven development (BDD), which turns the test definition into code, and visual testing, in which UI screens are compared automatically against an expected image.
SDET
SDETs are a rare species: developers with a testing mentality and passion.
Being stuck writing test code can be frustrating for most developers, as it is a repetitive and not always challenging task. When a company uses SDET as the title for the role I refer to here as QA, some people find themselves in that position expected only to write automation code, have an unpleasant experience, and move away from the title.
Instead, they should be empowered to identify processes to automate and tools they could write to keep improving their programming skills.
Many companies are starting to join a movement called “shift left” or “combined engineering,” in which SDE and SDET are combined into a single “software engineer” role that works on all tasks related to coding, including test code.
DevOps
The term DevOps is a combination of development and operations. A while back, tasks related to servers, deployments, and networks were done by a specialized team, sometimes referred to as “systems engineers.” Most of the time, this also included a degree of system testing and even security.
As more programming languages, tools, technologies, and techniques developed, roles became increasingly specialized. However, the issues discovered by this team were usually difficult for them to fix, as they were not part of the development team and fixing them required an understanding of code and features they were not familiar with.
This is where DevOps came in; in other words, developers doing this job for their own team rather than a separate team doing it across the company. In some companies, the “ops” bit is taken for granted and dropped from the word, which is what I will do throughout the book.
Other terms
Other terms related to testing are systems engineers or system verification testers (SVTs) (like “ops” but with more test knowledge and debugging capabilities), functional verification testers (FVTs) (a rather old term that involves end-to-end frontend and backend automation and testing), and integration testers.
For simplicity, they will all be referred to as SDETs in this book, even though there might be SDETs specialized in some areas as developers are.
Scaling
Horizontal scaling means that you add more machines to your testing, for example, using the cloud to test on different systems (more web browsers, mobile versus desktop, and different operating systems).
Vertical scaling means that you scale by adding more types of tests (security, accessibility, and performance). We will talk about the “test pyramid,” where each test falls on it, and what percentage of testing time should be spent at each level.
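To make the horizontal case concrete, here is a minimal sketch in Python using pytest and Selenium, with the same test parametrized across two browsers on a remote grid; the grid URL and the page under test are illustrative assumptions rather than recommendations:

    # Minimal sketch of horizontal scaling: one test, several browsers, remote machines.
    # The grid URL and the site under test are hypothetical placeholders.
    import pytest
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options as ChromeOptions
    from selenium.webdriver.firefox.options import Options as FirefoxOptions

    GRID_URL = "http://selenium-grid.example.com:4444/wd/hub"  # hypothetical cloud/grid endpoint

    @pytest.fixture(params=[ChromeOptions, FirefoxOptions], ids=["chrome", "firefox"])
    def browser(request):
        # One remote session per browser type; the grid decides which machine runs it.
        driver = webdriver.Remote(command_executor=GRID_URL, options=request.param())
        yield driver
        driver.quit()

    def test_home_page_title(browser):
        browser.get("https://example.com")
        assert "Example" in browser.title

Adding another operating system, device type, or browser then becomes a matter of adding a parameter (or a grid node) rather than writing new tests.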
Identifying the type of testing you need and what systems should be covered should be part of an architecture design. We will talk about diverse types of testing and tools for horizontal scaling in Chapter 2, Chapter 3, and Chapter 4.
Automating the automation
Time is the highest-valued currency you have; it is something you can never get back. This is one of the reasons I like automation: it saves time (mine or other people’s). And by automation, I do not just mean test automation but the automation of any repetitive process. That is what I mean by “automating the automation.”
In most companies, the “task automation experts” are the SDETs. That said, if you are a developer or a QA tester, you can also benefit greatly from this practice.
I have identified some basic steps for automating anything:
- Recognize automatable tasks:
The first step to automating something is to think about repetitive tasks. Then, you should consider how long it would take to automate that task and calculate how much time it would save if it were automated.
You can tell a company is doing well on test automation when you see this thought process reflected in it, rather than automating as many things as possible as a proof of skill or performance. The same concept can be extrapolated to any other repetitive task, including, of course, automation itself.
- Write some code that does that task for you:
Once you have a clear picture of the steps involved in the repetitive task, you should also have an idea of how to automate those steps. The exact language, style, and patterns are up to you to define.
- Identify when the code needs to be executed:
Sometimes we want to execute the code after something else happens; for example, testing automatically after a feature has been developed is common. Or, we can have automation that depends on some trigger, such as a page being refreshed or an application being launched. Other times, we want the execution to happen at a certain point in time, for example, every morning. That could be automated with a cron job (also known as a scheduled task) or a service.
- Identify success measures:
The next step is to identify our gain from this automation. What do we need to achieve with it? What is our best result metric? Sometimes, to verify that the automation has been executed and to check its success, we rely on logging. However, checking the logs could also be considered a manual task, and we will need to make sure we automate it if that is the case.
I suggest creating an alert if something has gone wrong, for example, a test case failing. We may also send a notification when everything has gone well, just to verify it has worked. An email with the details, a text message, or even a phone call could all be ways of automating the log check (more details on notifications in Chapter 11, How to Test a Time Machine (and Other Hard-to-Test Applications)). See the sketch after this list for how the last two steps could fit together.
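As a minimal sketch of those last two steps, here is a Python script that runs a hypothetical nightly check and emails the result instead of leaving it in the logs; the SMTP host, the addresses, and the pytest invocation are illustrative assumptions, not part of any specific toolchain:

    # Minimal sketch: run the automated task, then alert on the outcome.
    # SMTP_HOST, the addresses, and the pytest command are hypothetical placeholders.
    import smtplib
    import subprocess
    from email.message import EmailMessage

    SMTP_HOST = "smtp.example.com"    # hypothetical mail relay
    ALERT_TO = "qa-team@example.com"  # hypothetical distribution list

    def run_nightly_checks() -> subprocess.CompletedProcess:
        # The repetitive task we automated; here, simply a test suite invocation.
        return subprocess.run(["pytest", "tests/", "-q"], capture_output=True, text=True)

    def send_alert(subject: str, body: str) -> None:
        # Email is just one option; a text message or phone call API would work too.
        msg = EmailMessage()
        msg["Subject"] = subject
        msg["From"] = "automation@example.com"
        msg["To"] = ALERT_TO
        msg.set_content(body)
        with smtplib.SMTP(SMTP_HOST) as smtp:
            smtp.send_message(msg)

    if __name__ == "__main__":
        # "When to execute": schedule this script with a cron job / scheduled task,
        # e.g. every morning at 07:00: 0 7 * * * python3 /opt/automation/nightly_checks.py
        result = run_nightly_checks()
        if result.returncode != 0:
            send_alert("Nightly checks FAILED", result.stdout + result.stderr)
        else:
            send_alert("Nightly checks passed", "All checks green.")

The scheduled trigger covers the “when” step, and the alert replaces a manual read of the logs; whether you notify on success as well as failure is a team preference.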
Where to start?
When there are a lot of things a company or team could improve or even automate, I use a formula for prioritizing automation that we can also apply to other things:
(Time spent doing the task manually + monetary value (based on the cost of potential issues)) / (time it will take to build the automation).
I argue that we could also think of it this way:
(Time spent doing something before a change + monetary value (how much the change is needed)) / (time it will take to implement the change).
By “time spent,” I mean total time: if there are three steps, we need to multiply their time by the number of times those steps will potentially be repeated. Of course, there might be other factors to consider as well, for example, the scalability of the current versus future solutions and the people impacted, but I have found this to work for general cases and as a baseline.
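As a quick worked example of the formula, here is a small Python helper with purely illustrative numbers (they are assumptions, not benchmarks):

    # Worked example of the prioritization formula; all numbers are illustrative assumptions.
    def automation_score(manual_hours_per_run: float, runs: int,
                         monetary_value_hours: float, build_hours: float) -> float:
        """(total manual time + monetary value) / time to build the automation."""
        return (manual_hours_per_run * runs + monetary_value_hours) / build_hours

    # A check that takes 0.5 hours by hand, will be repeated about 200 times,
    # prevents issues worth roughly 40 hours, and would take 20 hours to automate:
    print(automation_score(0.5, 200, 40, 20))  # (100 + 40) / 20 = 7.0 -> worth automating

With everything expressed in hours, a score above 1 roughly means the automation pays for itself over those runs; the higher the score, the earlier it is worth tackling.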
Moving on
Imagine that all the feasible options in this chapter are checked and implemented, and one or more test frameworks are set up. The team is managing the new tests, which are carefully planned and executed as part of the CI/CD structure. The analytics are set up and create alerts for any problems or improvements. So, can you ever be done with testing?
I presume most people in quality will jump straight to this answer: “you are never done with testing.” However, in the same way that you might “be done” with development, you need to “be done” with testing. It is normal if you are not 100% happy with your project, but you have done your best, and (usually due to budget) maintenance is all there is left, at least for that project (and until some cool tool or technology comes along).
What to do then? As with development, at this point, we can move on to the next project. Of course, someone should stay to fix potential issues and provide support, but you should not need such a big team, or even the same skills, for this. Not worse or better, simply different. Some people enjoy being at the beginning of a project, and others prefer maintaining existing ones.
Development and testing go hand in hand; when there are fewer changes in development, there are fewer changes in test code.