Crafting Test-Driven Software with Python

Getting Started with Software Testing

Many think that the big step from "coding" to "software engineering" comes from having elegant architectures, well-defined execution plans, and software that drives big companies' processes. This view mostly comes from the classic world of industrial product development, where planning mattered more than execution, because execution was carried out by an assembly line and software was an expensive internal utility that only big companies could afford.

As the discipline of software development matured, it became clear that classic industrial best practices weren't always a great fit for it. Every software product is different, due to the technologies involved, the speed at which those technologies evolve, and, ultimately, the fact that different software has to do totally different things. Thus the idea developed that software development is closer to craftsmanship than to industry.

Once you accept that, given the very nature of software, it's very hard and not very effective to try to eliminate uncertainty and issues with tons of up-front preparation work, it becomes evident that the most important parts of software development are detecting defects and ensuring the software achieves the expected goals. Those two things are mostly accomplished through tests and a fitness function that can verify the software does what we really mean it to – the founding pieces of the whole Software Quality Control discipline, which is what this chapter will introduce and, in practice, what this book is all about.

In this chapter, we will go through testing software products and the best practices in quality control. We will also introduce automatic tests and how they are superseding manual testing. We will take a look at what Test-Driven Development (TDD) is and how to apply it in Python, giving some guidance on how to distinguish between the various categories of tests, how to implement them, and how to get the right balance between test efficacy and test cost.

In this chapter, we will cover the following:

  • Introducing software testing and quality control
  • Introducing automatic tests and test suites
  • Introducing test-driven development and unit tests
  • Understanding integration and functional tests
  • Understanding the testing pyramid and trophy

Technical requirements

Introducing software testing and quality control

From the early days, it was clear that like any other machine, software needed a way to verify it was working properly and was built with no defects.

Software development processes have been heavily inspired by manufacturing industry standards, and early on, testing and quality control were introduced into the product development life cycle. So software companies frequently have a quality assurance team that focuses on setting up processes to guarantee robust software and track results.

Those processes usually include a quality control process where the quality of the built artifact is assessed before it can be considered ready for users.

The quality control process usually achieves such confidence through the execution of a test plan. This is usually a checklist that a dedicated team goes through during the various phases of production to ensure the software behaves as expected.

Test plans

A test plan is composed of multiple test cases, each specifying the following:

  • Preconditions: What's necessary to be able to verify the case
  • Steps: Actions that have to succeed when executed in the specified order
  • Postconditions: In which state the system is expected to be at the end of the steps

A sample test case for software where users log in with a username and password, and where we want to allow them to change the password, might look like the following table:

Test Case: 2.2 - Change User Password

Preconditions:

  • A user, user1 exists
  • The user is logged in as user1
  • The user is at the main menu
# | Action                            | Expected Response                                    | Success / Fail
1 | Click the change password button. | The system shows a dialog to insert a new password.  |
2 | Enter newpass.                    | The dialog shows 7 asterisks in the password field.  |
3 | Click the OK button.              | The system shows a dialog with a success message.    |
4 | Wait 2 seconds.                   | The success dialog goes away.                        |

Postconditions:

  • The user1 password is now newpass

These test cases are manually verified by a dedicated team. A sample of them is usually selected to be executed during development, but most of them are checked only once the development team has declared the work done.

This means that once the team finishes its work, it takes days or weeks for the release to happen, as the whole piece of software has to be verified by humans clicking buttons, with all the unpredictability that involves: humans can get distracted, press the wrong button, or receive a phone call in the middle of a test case.

As software usage became more widespread, and business-to-consumer products became the norm, consumers started to appreciate faster release cycles. Companies that updated their products with new features frequently were those that ended up dominating the market in the long term.

If you think about modern release cycles, we are now used to getting a new version of our favorite mobile application weekly. Such applications are probably so complex that they involve thousands of test cases. If all those cases had to be performed by a human, there would be no way for the company to provide you with frequent releases.

The worst thing you can do, however, is release a broken product. Your users will lose confidence and switch to more reliable competitors if they can't get their job done due to crashes or bugs. So how can we deliver such frequent releases without reducing our test coverage and thus letting in more bugs?

The solution came from automating the test process. So while we learned how to detect defects by writing and executing test plans, it's only by making them automatic that we can scale them to the number of cases that will ensure robust software in the long term.

Instead of having humans test software, have some other software test it. What a person does in seconds can happen in milliseconds with software and you can run thousands of tests in a few minutes.

Introducing automatic tests and test suites

Automated testing is, in practice, the art of writing another piece of software to test an original piece of software.

As testing a whole piece of software has to take millions of variables and possible code paths into account, a single program trying to test another one would be very complex and hard to maintain. For this reason, it's usually convenient to split that program into smaller isolated programs, each being a test case.

Each test case contains all the instructions required to set up the target software in the state needed to exercise the area of interest, perform the test itself, verify the expected conditions, and finally reset the target software so that a subsequent test case can start from a known state.

When using the unittest module that comes with the Python Standard Library, each test case is declared by subclassing from the unittest.TestCase class and adding a method whose name starts with test, which will contain the test itself:

import unittest

class MyTestCase(unittest.TestCase):
    def test_one(self):
        pass

Trying to run our previous test will, however, do nothing:

$ python 01_automatictests.py
$

We declared our test case, but we have nothing that runs it.

Just as with manually executed tests, automatic tests need someone in charge of gathering all the test cases and running them. That's the role of a test runner.

Test runners usually involve a discovery phase (during which they detect all test cases) and a run phase (during which they run the discovered tests).

The unittest module provides all the components necessary to build a test runner that does both the discovery and execution of tests. For convenience, it even provides the unittest.main() method, which configures a test runner that, by default, will run the tests in the current module:

import unittest

class MyTestCase(unittest.TestCase):
    def test_one(self):
        pass

if __name__ == '__main__':
    unittest.main()

By adding a call to unittest.main() at the end of our tests, Python will automatically execute our tests when the module is invoked:

$ python 01_automatictests.py
.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK

We can confirm that the test we cared about was executed by using the -v option to print a more verbose output:

$ python 01_automatictests.py -v
test_one (__main__.MyTestCase) ... ok

----------------------------------------------------------------------
Ran 1 test in 0.000s

OK

During the discovery phase, unittest.main will look for all classes that inherit from unittest.TestCase within the module that is recognized as the main Python module (sys.modules['__main__']), and all those subclasses will be registered as test cases for the runner.
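To make those two phases concrete, here is a minimal sketch of roughly what unittest.main does on our behalf, driving the loader and the runner directly (a simplification for illustration, not the actual implementation):

import sys
import unittest

class MyTestCase(unittest.TestCase):
    def test_one(self):
        pass

if __name__ == '__main__':
    # Discovery phase: collect every test defined in the main module.
    suite = unittest.defaultTestLoader.loadTestsFromModule(
        sys.modules['__main__']
    )
    # Run phase: execute the collected tests and report the results.
    unittest.TextTestRunner(verbosity=2).run(suite)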

Individual tests are then defined by having methods with names starting with test in the test case classes. This means that if we add more methods with names that don't start with test, they won't be treated as tests:

class MyTestCase(unittest.TestCase):
    def test_one(self):
        pass

    def notatest(self):
        pass

Trying to start the test runner again will continue to run only the test_one test:

$ python 01_automatictests.py -v
test_one (__main__.MyTestCase) ... ok

----------------------------------------------------------------------
Ran 1 test in 0.000s

OK

In the previous example, only the test_one method was executed as a test, while notatest was recognized as not being a test but instead as a method that we are going to use ourselves in tests.

Being able to distinguish between tests (methods whose names start with test_) and other methods allows us to create helpers and utility methods within our test cases that the individual tests can reuse.
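As an example (a minimal sketch, not taken from the book), a helper that builds test data can be shared by several tests within the same test case:

import unittest

class UserTestCase(unittest.TestCase):
    def make_user(self, name):
        # Not a test: the name doesn't start with "test", so the
        # runner will never execute it directly.
        return {"username": name, "active": True}

    def test_user_is_active(self):
        user = self.make_user("user1")
        assert user["active"] is True

    def test_user_has_name(self):
        user = self.make_user("user1")
        assert user["username"] == "user1"

if __name__ == '__main__':
    unittest.main()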

Given that a test suite is a collection of multiple test cases, to grow our test suite, we need to be able to actually write more than one single TestCase subclass and run its tests.

Multiple test cases

We already know that unittest.main is the function in charge of executing our test suite, but how can we make it execute more than one TestCase?

The discovery phase of unittest.main (the phase during which unittest.main decides which tests to run) looks for all subclasses of unittest.TestCase.

The same way we had MyTestCase tests executed, adding more test cases is as simple as declaring more classes:

import unittest


class MyTestCase(unittest.TestCase):
    def test_one(self):
        pass

    def notatest(self):
        pass


class MySecondTestCase(unittest.TestCase):
    def test_two(self):
        pass


if __name__ == '__main__':
    unittest.main()

Running the 01_automatictests.py module again will lead to both test cases being verified:

$ python 01_automatictests.py -v
test_two (__main__.MySecondTestCase) ... ok
test_one (__main__.MyTestCase) ... ok

----------------------------------------------------------------------
Ran 2 tests in 0.000s

OK

If a test case is particularly complex, it can even be divided into multiple individual tests, each checking a specific subpart of it:

class MySecondTestCase(unittest.TestCase):
    def test_two(self):
        pass

    def test_two_part2(self):
        pass

This allows us to divide the test cases into smaller pieces and, when needed, share setup and teardown code between the individual tests. The individual tests will be executed by the test runner in alphabetical order, so in this case, test_two will be executed before test_two_part2:

$ python 01_automatictests.py -v
test_two (__main__.MySecondTestCase) ... ok
test_two_part2 (__main__.MySecondTestCase) ... ok
test_one (__main__.MyTestCase) ... ok

In that run of the tests, we can see that MySecondTestCase was actually executed before MyTestCase because "MyS" is less than "MyT".

Generally, though, it's a good idea to treat your tests as if they were executed in random order and not rely on any specific sequence of execution: other developers might add more test cases, add more individual tests to a case, or rename classes, and you want to allow those changes without additional issues. Relying on a specific execution order also limits your ability to parallelize your test suite and run test cases concurrently, which will become necessary as the suite grows.
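Regarding shared setup and teardown code, a minimal sketch (not from the book) using the setUp and tearDown hooks that unittest provides could look like this:

import unittest

class MySecondTestCase(unittest.TestCase):
    def setUp(self):
        # Runs before every individual test of this case.
        self.numbers = [1, 2, 3]

    def tearDown(self):
        # Runs after every individual test, even if it failed.
        self.numbers = None

    def test_two(self):
        assert sum(self.numbers) == 6

    def test_two_part2(self):
        assert len(self.numbers) == 3

if __name__ == '__main__':
    unittest.main()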

Once more tests are added, adding them all into the same class or file quickly gets confusing, so it's usually a good idea to start organizing tests.

Organizing tests

If you have more than a few tests, it's generally a good idea to group your test cases into multiple modules and create a tests directory where you can gather the whole test plan:

├── 02_tests
│   ├── tests_div.py
│   └── tests_sum.py

Those tests can be executed through the unittest discover mode, which will look for all modules with names matching test*.py within a target directory and will run all the contained test cases:

$ python -m unittest discover 02_tests -v
test_div0 (tests_div.TestDiv) ... ok
test_div1 (tests_div.TestDiv) ... ok
test_sum0 (tests_sum.TestSum) ... ok
test_sum1 (tests_sum.TestSum) ... ok

----------------------------------------------------------------------
Ran 4 tests in 0.000s

OK

You can even pick which tests to run by filtering them with a substring with the -k parameter; for example, -k sum will only run tests that contain "sum" in their names:

$ python -m unittest discover 02_tests -k sum -v
test_sum0 (tests_sum.TestSum) ... ok
test_sum1 (tests_sum.TestSum) ... ok

----------------------------------------------------------------------
Ran 2 tests in 0.000s

OK

And yes, you can nest tests further as long as you use Python packages:

├── 02_tests
│   ├── tests_div
│   │   ├── __init__.py
│   │   └── tests_div.py
│   └── tests_sum.py

Running unittest in discovery mode over that directory will properly navigate into the subpackages and find the TestDiv and TestSum classes declared inside the files, even when they are nested in subdirectories:

$ python -m unittest discover 02_tests -v
test_div0 (tests_div.tests_div.TestDiv) ... ok
test_div1 (tests_div.tests_div.TestDiv) ... ok
test_sum0 (tests_sum.TestSum) ... ok
test_sum1 (tests_sum.TestSum) ... ok

----------------------------------------------------------------------
Ran 4 tests in 0.000s

OK

Now that we know how to write tests, run them, and organize multiple tests into a test suite, we can introduce the concept of TDD and how unit tests allow us to put it into practice.

Introducing test-driven development and unit tests

Our tests in the previous section were all empty. The purpose was to showcase how a test suite can be made, executed, and organized in test cases and individual tests, but in the end, our tests did not test much.

Most individual tests are written following the "Arrange, Act, Assert" pattern:

  • First, prepare any state you will need to perform the action you want to try.
  • Then perform that action.
  • Finally, verify the consequences of the action are those that you expected.

In most cases, the action you are going to test is "calling a function," and for code that doesn't depend on any shared state, the state is usually all contained within the function arguments, so the Arrange phase might be omitted. The Assert phase then verifies that the called function did what you expected, which usually means checking the returned value and any side effects the function might have:

import unittest

class SomeTestCase(unittest.TestCase):
    def test_something(self):
        # Arrange phase, nothing to prepare here.

        # Act phase, call do_something
        result = do_something()

        # Assert phase, verify do_something did what we expect.
        assert result == "did something"

The test_something test is structured as a typical test with those three phases explicitly exposed, with the do_something call representing the Act phase and the final assert statement representing the Assert phase.
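For a case where the Arrange phase is not empty, a minimal sketch (an illustration using a plain Python list, not an example from the book) might look like this:

import unittest

class ShoppingCartTestCase(unittest.TestCase):
    def test_add_item(self):
        # Arrange: start from a known state.
        cart = []

        # Act: perform the action under test.
        cart.append("apple")

        # Assert: verify the expected consequences.
        assert cart == ["apple"]

if __name__ == '__main__':
    unittest.main()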

Now that we know how to structure tests properly, we can see how they are helpful in implementing TDD and how unit tests are usually expressed.

Test-driven development

Tests can do more than just validate that our code is doing what we expect. The TDD process argues that tests are essential to designing the code itself.

Writing tests before implementing the code forces us to reason about our requirements. We must express them explicitly, in a strict, well-defined way – clearly enough that a computer (computers are known for not being very flexible in understanding things) can understand them and state whether the code we are about to write satisfies them.

First, you write a test for your primary scenario – in this case, testing that adding 3 and 2 returns 5:

import unittest

class AdditionTestCase(unittest.TestCase):
    def test_main(self):
        result = addition(3, 2)
        assert result == 5

Then you make sure it fails, which proves you are really testing something:

$ python 03_tdd.py
E
======================================================================
ERROR: test_main (__main__.AdditionTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "03_tdd.py", line 5, in test_main
    result = addition(3, 2)
NameError: name 'addition' is not defined

----------------------------------------------------------------------
Ran 1 test in 0.000s

FAILED (errors=1)

Finally, you write the real code that is expected to make the test pass:

def addition(arg1, arg2):
    return arg1 + arg2

And confirm it makes your test pass:

$ python 03_tdd.py 
.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK

Once the test is done and it passes, we can revise our implementation and refactor the code. If the test still passes, it means we haven't changed the behavior and we are still doing what we wanted.

For example, we can change our addition function to unpack arguments instead of having to specify the two arguments it can receive:

def addition(*args):
    a1, a2 = args
    return a1 + a2

If our test still passes, it means we haven't changed the behavior, and it's still as good as before from that point of view:

$ python 03_tdd.py 
.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK

Test-driven development doesn't tell you when you have reached a robust code base that satisfies all your needs. Obviously, you should at least make sure there are enough tests to cover all your requirements.

But as testing guides us in the process of development, development should guide us in the process of testing.

Looking at the code helps us come up with more white-box tests; tests that we can think of because we know how the code works internally. And while those tests might not guarantee that we are satisfying more requirements, they help us guarantee that our code is robust in most conditions, including corner cases.

While historically test-first and test-driven were synonyms, today this is considered the major difference from the test-first approach: in TDD we don't expect to be able to write all the tests up front. Nor is that generally a good idea in the context of extreme programming practices, because you don't yet know what the interface you want to test will look like. What you want to test evolves as the code evolves, and the code will evolve after every passing test, as a passing test gives us a chance to refactor.

In our prior example, as we changed our addition function to accept a variable number of arguments, a reasonable question is, "But what happens if I pass three arguments? Or none?" Our requirements, expressed by the tests, consequently have to grow to cover a variable number of arguments:

    def test_threeargs(self):
        result = addition(3, 2, 1)
        assert result == 6

    def test_noargs(self):
        result = addition()
        assert result == 0

So, writing code helped us come up with more tests to verify the conditions that came to mind when looking at the code like a white box:

$ python 03_tdd.py
.EE
======================================================================
ERROR: test_noargs (__main__.AdditionTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "03_tdd.py", line 13, in test_noargs
    result = addition()
  File "03_tdd.py", line 18, in addition
    a1, a2 = args
ValueError: not enough values to unpack (expected 2, got 0)

======================================================================
ERROR: test_threeargs (__main__.AdditionTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "03_tdd.py", line 9, in test_threeargs
    result = addition(3, 2, 1)
  File "03_tdd.py", line 18, in addition
    a1, a2 = args
ValueError: too many values to unpack (expected 2)

----------------------------------------------------------------------
Ran 3 tests in 0.001s

FAILED (errors=2)

And adding those failing tests helps us come up with more, and better, code that now properly handles the cases where any number of arguments is passed to our addition function:

def addition(*args):
    total = 0
    for a in args:
        total += a
    return total

Our addition function will now just iterate over the provided arguments, adding them to the total. Thus if no argument is provided, it will just return 0 because nothing was added to it.

If we run our test suite again, we will be able to confirm that both our new tests now pass, and thus we achieved what we wanted to:

$ python 03_tdd.py 
...
----------------------------------------------------------------------
Ran 3 tests in 0.001s

OK

Writing tests and writing code should interleave continuously. If you find yourself spending all your time on one or the other, you are probably moving away from the benefits that TDD can give you, as the two phases are meant to support each other.

There are many kinds of tests you are going to write in your test suite during your development practice, but the most common one is probably going to be test units.

Test units

The immediate question once we know how to arrange our tests is usually "What should I test?" The answer is usually "It depends."

You usually want tests that assert that the feature you are providing to your users does what you expect. But those tests do nothing to guarantee that, internally, the components that collaborate to provide that feature behave correctly. The exposed feature might be working as a very lucky side effect of 200 different bugs in the underlying components.

So it's generally a good idea to test those units individually and verify that they all work as expected.

What are those units? Well, the answer is "it depends" again.

In most cases, you could argue that in procedural programming, the units are the individual functions, while in object-oriented programming, a unit might be a single class. But classes, even when we do our best to restrict them to a single responsibility, might cover multiple different behaviors depending on which method you call. They then act as multiple components in our system, and in such cases, those behaviors should be considered separate units.

In practice, a unit is the smallest testable entity that participates in your software.

If we have a piece of software that does "multiplication," we might implement it as a main function that fetches the two provided arguments and calls a multiply function to do the real job:

def main():
    import sys
    num1, num2 = sys.argv[1:]
    num1, num2 = int(num1), int(num2)
    print(multiply(num1, num2))


def multiply(num1, num2):
    total = 0
    for _ in range(num2):
        total = addition(total, num1)
    return total


def addition(*args):
    total = 0
    for a in args:
        total += a
    return total

In such a case, both addition and multiply are units of our software.

While addition can be tested in isolation, multiply must use addition to work. multiply is thus defined as a sociable unit, while addition is a solitary unit.

Sociable unit tests are frequently also referred to as component tests. Your architecture mostly defines the distinction between a sociable unit test and a component test and it's hard to state exactly when one name should be preferred over the other.

While sociable units usually lead to more complete testing, they are slower, require more effort during the Arrange phase, and are less isolated. This means that a change in addition can make a test of multiply fail, which tells us that there is a problem, but also makes it harder to guess where the problem lies exactly.

In the subsequent chapters, we will see how sociable units can be converted into solitary units by using test doubles. If you have complete test coverage for the underlying units, solitary unit tests can reach a level of guarantee similar to that of sociable units with much less effort and a faster test suite.
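To make the distinction concrete, here is a minimal sketch (reusing the addition and multiply functions from the example above) of a solitary unit test next to a sociable one:

import unittest

# The units under test, taken from the multiplication example above.
def addition(*args):
    total = 0
    for a in args:
        total += a
    return total

def multiply(num1, num2):
    total = 0
    for _ in range(num2):
        total = addition(total, num1)
    return total

class TestAddition(unittest.TestCase):
    def test_addition(self):
        # Solitary unit test: addition is exercised in isolation.
        assert addition(3, 2) == 5

class TestMultiply(unittest.TestCase):
    def test_multiply(self):
        # Sociable unit test: multiply relies on the real addition to work,
        # so a bug in addition would make this test fail too.
        assert multiply(3, 4) == 12

if __name__ == '__main__':
    unittest.main()

A failure in TestMultiply could therefore be caused by either unit, which is exactly the trade-off described above.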

Test units are usually great at testing software from a white-box perspective, but that's not the sole point of view we should account for in our testing strategy. They guarantee that the code does what the developer meant, but do little to guarantee that the code does what the user needs. Integration and functional tests are usually more effective at testing at that level of abstraction.

Understanding integration and functional tests

Testing all our software with solitary units can't guarantee that it's really working as expected. Unit testing confirms that the individual components work as expected, but doesn't give us any confidence about how effective they are when paired together.

It's like testing an engine by itself, testing the wheels by themselves, testing the gears, and then expecting the car to work. We wouldn't be accounting for any issues introduced in the assembly process.

So we need to verify that those modules work as expected when paired together.

That's exactly what integration tests are expected to do. They take the modules we tested individually and test them together.

Integration tests

The scope of integration tests is blurry. They might integrate two modules, or they might integrate tens of them. While they are more effective when integrating fewer modules, that also makes them more expensive as an approach, and most developers argue that the effort of testing all possible combinations of modules in isolation isn't usually worth the benefit.

The boundary between unit tests made of sociable units and integration tests is not easy to draw, and it usually depends on the architecture of the software itself. We could consider sociable unit tests to be those that test units living inside the same architectural component together, while integration tests are those that test different architectural components together.

Suppose an application involves two separate services: Authorization and Authentication. Authentication takes care of letting the user in and identifying them, while Authorization tells us what the user can do once they are authenticated. We can see this in the following code block:

class Authentication:
    USERS = [{"username": "user1", "password": "pwd1"}]

    def login(self, username, password):
        u = self.fetch_user(username)
        if not u or u["password"] != password:
            return None
        return u

    def fetch_user(self, username):
        for u in self.USERS:
            if u["username"] == username:
                return u
        else:
            return None


class Authorization:
    PERMISSIONS = [{"user": "user1",
                    "permissions": {"create", "edit", "delete"}}]

    def can(self, user, action):
        for u in self.PERMISSIONS:
            if u["user"] == user["username"]:
                return action in u["permissions"]
        else:
            return False

Our classes provide two primary methods: Authentication.login and Authorization.can. The first is in charge of authenticating a user with a username and password and returning the authenticated user, while the second is in charge of verifying that a user can perform a specific action. Tests for those methods can be considered unit tests.

So TestAuthentication.test_login will be a unit test that verifies the behavior of the Authentication.login unit, while TestAuthorization.test_can will be a unit test that verifies the behavior of the Authorization.can unit:

class TestAuthentication(unittest.TestCase):
    def test_login(self):
        auth = Authentication()
        auth.USERS = [{"username": "testuser", "password": "testpass"}]

        resp = auth.login("testuser", "testpass")

        assert resp == {"username": "testuser", "password": "testpass"}


class TestAuthorization(unittest.TestCase):
    def test_can(self):
        authz = Authorization()
        authz.PERMISSIONS = [{"user": "testuser",
                              "permissions": {"create"}}]

        resp = authz.can({"username": "testuser"}, "create")

        assert resp is True

Note the difference: TestAuthentication.test_login is a sociable unit test, as it depends on Authentication.fetch_user while testing Authentication.login, whereas TestAuthorization.test_can is a solitary unit test, as it doesn't depend on any other unit.

So where is the integration test?

The integration test will happen once we join those two components of our architecture (authorization and authentication) and test them together to confirm that we can actually have a user log in and verify their permissions:

class TestAuthorizeAuthenticatedUser(unittest.TestCase):
    def test_auth(self):
        auth = Authentication()
        authz = Authorization()
        auth.USERS = [{"username": "testuser", "password": "testpass"}]
        authz.PERMISSIONS = [{"user": "testuser",
                              "permissions": {"create"}}]

        u = auth.login("testuser", "testpass")
        resp = authz.can(u, "create")

        assert resp is True

Generally, it's important to be able to run your integration tests independently from your unit tests, as you will want to be able to run the unit tests continuously during development on every change:

$ python 05_integration.py TestAuthentication TestAuthorization
........
----------------------------------------------------------------------
Ran 8 tests in 0.000s

OK

While unit tests are usually run frequently during the development cycle, it's common to run your integration tests only when you've reached a stable point where all your unit tests pass:

$ python 05_integration.py TestAuthorizeAuthenticatedUser
.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK

Once you know that the units you wrote or modified do what you expect, running only the TestAuthorizeAuthenticatedUser case will confirm that those units work together as expected.
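Another common way to keep the two groups separate (a sketch of a general technique, not the approach used in the book's example, which simply selects test case names on the command line) is to guard the integration cases with an environment variable:

import os
import unittest

# Integration tests only run when RUN_INTEGRATION=1 is set in the
# environment, for example: RUN_INTEGRATION=1 python 05_integration.py
RUN_INTEGRATION = os.environ.get("RUN_INTEGRATION") == "1"

@unittest.skipUnless(RUN_INTEGRATION, "integration tests are disabled")
class TestAuthorizeAuthenticatedUser(unittest.TestCase):
    def test_auth(self):
        ...  # same test body as shown above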

Integration tests combine multiple components, but they come in many different kinds depending on their purpose, with the most common kind being functional tests.

Functional tests

Integration tests can be very diverse. As you start integrating more and more components, you move toward a higher level of abstraction, and in the end, you move so far from the underlying components that people feel the need to distinguish those kinds of tests as they offer different benefits, complexities, and execution times.

That's why names such as functional tests, end-to-end tests, system tests, and acceptance tests exist.

Overall, those are all forms of integration tests; what changes is their goal and purpose:

  • Functional tests tend to verify that we are exposing to our users the feature we actually intended. They don't care about intermediate results or side-effects; they just verify that the end result for the user is the one the specifications described, thus they are always black-box tests.
  • End-to-End (E2E) tests are a specific kind of functional test that involves the vertical integration of components. The most common E2E tests are where technologies such as Selenium are involved in accessing a real application instance through a web browser.
  • System tests are very similar to functional tests, but instead of testing a single feature, they usually test a whole journey of the user across the system. They usually simulate real usage patterns to verify that the system as a whole behaves as expected (a sketch of such a journey follows this list).
  • Acceptance tests are a kind of functional test that is meant to confirm that the implementation of the feature does behave as expected. They usually express the primary usage flow of the feature, leaving less common flows for other integration tests, and are frequently provided by the specifications themselves to help the developer confirm that they implemented what was expected.
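
As an illustration of the difference in intent (a sketch, not an example from the book), a system-style test might walk through a whole user journey using the Authentication and Authorization classes from the previous section, here assumed to be importable from a hypothetical auth module:

import unittest

# Hypothetical module name used only for this sketch; it is assumed to
# contain the Authentication and Authorization classes shown earlier.
from auth import Authentication, Authorization

class TestUserJourney(unittest.TestCase):
    def test_login_and_permissions(self):
        auth = Authentication()
        authz = Authorization()
        auth.USERS = [{"username": "testuser", "password": "testpass"}]
        authz.PERMISSIONS = [{"user": "testuser",
                              "permissions": {"create", "edit"}}]

        # The user logs in...
        user = auth.login("testuser", "testpass")
        assert user is not None

        # ...is allowed to create and edit content...
        assert authz.can(user, "create") is True
        assert authz.can(user, "edit") is True

        # ...but is not allowed to delete it.
        assert authz.can(user, "delete") is False

if __name__ == '__main__':
    unittest.main()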

But those are not the only kinds of integration tests people refer to; new categories are continuously being defined in an effort to distinguish the goals and responsibilities of tests. Component tests, contract tests, and many others are kinds of tests whose goal is to verify integration between different pieces of the software at different layers. Overall, you shouldn't be ashamed to ask your colleagues exactly what they mean when they use those names, because you will notice that each of them values different properties of those tests when classifying them into the different categories.

The general distinction to keep in mind is that unit and integration tests aim to test the implementation, while functional tests aim to test the behavior.

How you do that can involve the exact same technologies; it's just a matter of different goals. Properly covering the behavior of your software with the right kind of tests can be the difference between buggy software and reliable software. That's why there has been a long debate about how to structure test suites, leading to the testing pyramid and the testing trophy as the most widespread models of test distribution.

Understanding the testing pyramid and trophy

Given the need for different kinds of tests – unit, integration, and E2E – each with its own benefits and costs, the next immediate question is: how do we get the right balance?

Each kind of test comes with a benefit and a cost, so it's a matter of finding where we get the best return on investment:

  • E2E tests verify the real experience of what the user faces. They are, in theory, the most realistic kind of tests and can detect problems such as incompatibilities with specific platforms (for example, browsers) and exercise our system as a whole. But when something goes wrong, it is hard to spot where the problem lies. They are very slow and tend to be flaky (failing for reasons unrelated to our software, such as network conditions).
  • Integration tests usually provide a reasonable guarantee that the software is doing what it is expected to do and are fairly robust to internal implementation changes, requiring less frequent refactoring when the internals of the software change. But they can still get very slow if your system involves writes to database services, the rendering of page templates, routing HTTP requests, and generally slow parts. And when something goes wrong, we might have to go through tens of layers before being able to spot where the problem is.
  • Unit tests can be very fast (especially when talking of solitary units) and provide very pinpointed information about where problems are. But they can't always guarantee that the software as a whole does what it's expected to do and can make changing implementation details expensive because a change to internals that don't impact the software behavior might require changing tens of unit tests.

Each of them has its own pros and cons, and the development community has long argued how to get the right balance.

The two primary models that have emerged are the testing pyramid and the testing trophy, named after their shapes.

The testing pyramid

The testing pyramid originates from Mike Cohn's book Succeeding with Agile, whose two rules of thumb are to write tests with different granularities (so you should have unit, integration, E2E tests, and so on) and to write fewer tests the higher the level (so you should have tons of unit tests and only a few E2E tests).

While different people will argue about exactly which layers it contains, the testing pyramid can be simplified to look like this:

Figure 1.1 – Testing pyramid

The tip of the pyramid is narrow, meaning we should have fewer of those tests, while the base is wider, meaning we should cover most of our code with those kinds of tests. So the lower the layer, the more tests we should have.

The idea is that since unit tests are fast to run and expose pinpointed issues early on, you should have a lot of them, and shrink the number of tests at the higher layers, where they get slower and vaguer about what's broken.

The testing pyramid is probably the most widespread practice for organizing tests and usually pairs well with test-driven development as unit tests are the founding tool for the TDD process.

The other most widespread model is the testing trophy, which instead emphasizes integration tests.

The testing trophy

The testing trophy originates from a phrase by Guillermo Rauch, the author of Socket.io and many other famous JavaScript-based technologies. Guillermo stated that developers should "Write tests. Not too many. Mostly integration."

Like Mike Cohn, he clearly states that tests are the foundation of any effective software development practice, but he argues that they have diminishing returns and thus it's important to find the sweet spot where you get the best return on the time spent writing tests.

That sweet spot is expected to live in integration tests, because you usually need fewer of them to spot real problems, they are not too bound to implementation details, and they are still fast enough that you can afford to write quite a few of them.

So the testing trophy will look like this:

Figure 1.2 – Testing trophy

As you have probably noticed, the testing trophy also puts a lot of value on static tests, because its whole premise is that what really matters is the return on investment, and static checks are fairly cheap – to the point that most development environments run them in real time. Linters, type checkers, and more advanced kinds of static analyzers are cheap enough that it would do no good to ignore them, even if they are rarely able to spot bugs in your business logic.
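For instance (assuming the third-party flake8 linter and mypy type checker are installed, neither of which is covered in this book), running static checks over the test directory from the earlier examples is a single command each:

$ python -m pip install flake8 mypy
$ python -m flake8 02_tests
$ python -m mypy 02_tests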

Unit tests, on the other hand, can cost developers time, as they need to be adapted whenever internal implementation details change, even when those changes don't impact the final behavior of the software, and thus the effort spent on them should be kept under control.

Those two models are the most common ways to distribute your tests, but more best practices are involved when thinking of testing distribution and coverage.

Testing distribution and coverage

While the importance of testing is widely recognized, there is also general agreement that test suites have a diminishing return.

There is little point in wasting hours on testing plain getters and setters or testing internal/private methods. The sweet spot is said to be around 80% code coverage, even though I think that really depends on the language in use – the more expressive your language is, the less code you have to write to perform complex actions. And all complex actions should be properly tested, so in the case of Python, the sweet spot probably lies more in the range of 90%. But there are cases, such as porting projects from Python 2 to Python 3, where code coverage of 100% is the only way you can confirm that you haven't changed any behavior at all in the process of porting your code base.
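
As a quick reference (a sketch assuming the third-party coverage package is installed; coverage tooling for PyTest is covered later in the book), measuring coverage for the earlier test directory could look like this:

$ python -m pip install coverage
$ python -m coverage run -m unittest discover 02_tests
$ python -m coverage report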

Last but not least, most testing practices related to test-driven development cover the process only up to the release point. It's important to keep in mind that when the software is released, the testing process isn't finished.

Many teams forget to set up proper system tests and don't have a way to identify and reproduce issues that can only happen in production environments with real concurrent users and large amounts of data. Having staging environments and a suite to simulate incidents or real users' behaviors might be the only way to spot bugs that only happen after days of continuous use of the system. And some companies go as far as testing the production system with tools that inject real problems continuously for the sole purpose of verifying that the system is solid.

Summary

As we saw in the sections about integration tests, functional tests, and the testing pyramid/trophy models, there are many different visions of what should be tested, with which goals in mind, and how test suites should be organized. Getting this right can impact how much you trust your automatic test suite, and thus how much you evolve it because of the value it provides.

Learning to do proper automated testing is the gateway to major software development boosts, opening possibilities for practices such as continuous integration and continuous delivery, which would otherwise be impossible without a proper test suite.

But testing isn't easy; it comes with many side-effects that are not immediately obvious, and for which the software development industry started to provide tools and best practices only recently. So in the next chapters, we will look at some of those best practices and tools that can help you write a good, easily maintained test suite.
