Crafting Test-Driven Software with Python: Write test suites that scale with your applications' needs and complexity using Python and PyTest

Getting Started with Software Testing

Many think that the big step from "coding" to "software engineering" comes from having elegant architectures, well-defined execution plans, and software that drives the processes of big companies. This view mostly comes from the classic industrial product development world, where planning mattered more than execution, because execution was carried forward by an assembly line and software was an expensive internal utility that only big companies could afford.

As the discipline of software development moved forward and matured, it became clear that classic industrial best practices weren't always a great fit for it, because every software product is very different: the technologies involved, the speed at which those technologies evolve, and, in the end, the fact that different software has to do totally different things. Thus the idea developed that software development is more similar to craftsmanship than to industry.

If you accept that, given the very nature of software, it's very hard and not very effective to try to eliminate uncertainty and issues with tons of preparation work, it becomes evident that the most important part of software development is detecting defects and ensuring the software achieves the expected goals. Those two things are usually done by having tests and a fitness function that can verify the software does what we really mean it to do – the founding pieces of the whole software quality control discipline, which is what this chapter will introduce and, in practice, what this book is all about.

In this chapter, we will go through testing software products and the best practices in quality control. We will also introduce automatic tests and how they are superseding manual testing. We will take a look at what Test-Driven Development (TDD) is and how to apply it in Python, giving some guidance on how to distinguish between the various categories of tests, how to implement them, and how to get the right balance between test efficacy and test cost.

In this chapter, we will cover the following:

  • Introducing software testing and quality control
  • Introducing automatic tests and test suites
  • Introducing test-driven development and unit tests
  • Understanding integration and functional tests
  • Understanding the testing pyramid and trophy

Technical requirements

Introducing software testing and quality control

From the early days, it was clear that like any other machine, software needed a way to verify it was working properly and was built with no defects.

Software development processes have been heavily inspired by manufacturing industry standards, and early on, testing and quality control were introduced into the product development life cycle. So software companies frequently have a quality assurance team that focuses on setting up processes to guarantee robust software and track results.

Those processes usually include a quality control process where the quality of the built artifact is assessed before it can be considered ready for users.

The quality control process usually achieves such confidence through the execution of a test plan. This is usually a checklist that a dedicated team goes through during the various phases of production to ensure the software behaves as expected.

Test plans

A test plan is composed of multiple test cases, each specifying the following:

  • Preconditions: What's necessary to be able to verify the case
  • Steps: Actions that have to succeed when executed in the specified order
  • Postconditions: In which state the system is expected to be at the end of the steps

A sample test case for software where users log in with a username and password, and where we want to allow users to reset their password, might look like the following table:

Test Case: 2.2 - Change User Password

Preconditions:

  • A user, user1 exists
  • The user is logged in as user1
  • The user is at the main menu
# | Action | Expected Response | Success / Fail
1 | Click the change password button. | The system shows a dialog to insert a new password. |
2 | Enter newpass. | The dialog shows 7 asterisks in the password field. |
3 | Click the OK button. | The system shows a dialog with a success message. |
4 | Wait 2 seconds. | The success dialog goes away. |

Postconditions:

  • The user1 password is now newpass

Test plans are divided into cases that are manually verified by a dedicated team; a sample of the cases is usually selected to be executed during development, but most of them are checked only once the development team has declared the work done.

This means that once the team finishes its work, it takes days or weeks for the release to happen, as the whole software has to be verified by humans clicking buttons, with all the unpredictability that involves: humans can get distracted, press the wrong button, or receive phone calls in the middle of a test case.

As software usage became more widespread, and business-to-consumer products became the norm, consumers started to appreciate faster release cycles. Companies that updated their products with new features frequently were those that ended up dominating the market in the long term.

If you think about modern release cycles, we are now used to getting a new version of our favorite mobile application weekly. Such applications are probably so complex that they involve thousands of test cases. If all those cases had to be performed by a human, there would be no way for the company to provide you with frequent releases.

The worst thing you can do, however, is release a broken product. Your users will lose confidence and will switch to other, more reliable competitors if they can't get their job done due to crashes or bugs. So how can we deliver such frequent releases without reducing our test coverage and thus letting more bugs through?

The solution came from automating the test process. So while we learned how to detect defects by writing and executing test plans, it's only by making them automatic that we can scale them to the number of cases that will ensure robust software in the long term.

Instead of having humans test software, have some other software test it. What a person does in seconds can happen in milliseconds with software and you can run thousands of tests in a few minutes.

Introducing automatic tests and test suites

Automated testing is, in practice, the art of writing another piece of software to test an original piece of software.

As testing a whole piece of software has to take millions of variables and possible code paths into account, a single program trying to test another one would be very complex and hard to maintain. For this reason, it's usually convenient to split that program into smaller isolated programs, each being a test case.

Each test case contains all the instructions required to set up the target software in a state where the areas of interest for that case can be exercised, to perform the checks themselves, to verify all the expected conditions, and to reset the target software so that a subsequent test case can start from a known state.

When using the unittest module that comes with the Python Standard Library, each test case is declared by subclassing from the unittest.TestCase class and adding a method whose name starts with test, which will contain the test itself:

import unittest

class MyTestCase(unittest.TestCase):
    def test_one(self):
        pass

Trying to run our previous test will, however, do nothing:

$ python 01_automatictests.py
$

We declared our test case, but we have nothing that runs it.

As for manually executed tests, the automatic tests need someone in charge of gathering all test cases and running them all. That's the role of a test runner.

Test runners usually involve a discovery phase (during which they detect all test cases) and a run phase (during which they run the discovered tests).

The unittest module provides all the components necessary to build a test runner that does both the discovery and execution of tests. For convenience, it even provides the unittest.main() method, which configures a test runner that, by default, will run the tests in the current module:

import unittest

class MyTestCase(unittest.TestCase):
    def test_one(self):
        pass

if __name__ == '__main__':
    unittest.main()

By adding a call to unittest.main() at the end of our tests, Python will automatically execute our tests when the module is invoked:

$ python 01_automatictests.py
.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK

We can confirm that the test we cared about was executed by using the -v option to print a more verbose output:

$ python 01_automatictests.py -v
test_one (__main__.MyTestCase) ... ok

----------------------------------------------------------------------
Ran 1 test in 0.000s

OK

During the discovery phase, unittest.main will look for all classes that inherit from unittest.TestCase within the module that is recognized as the main Python module (sys.modules['__main__']), and all those subclasses will be registered as test cases for the runner.
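Those two phases can also be driven by hand with the loader and runner classes that unittest.main builds upon. The following is only a minimal sketch of that machinery (unittest.main already does all of this for you):

import sys
import unittest


class MyTestCase(unittest.TestCase):
    def test_one(self):
        pass


if __name__ == '__main__':
    # Discovery phase: collect every TestCase defined in the main module.
    loader = unittest.TestLoader()
    suite = loader.loadTestsFromModule(sys.modules['__main__'])

    # Run phase: execute the collected tests.
    unittest.TextTestRunner(verbosity=2).run(suite)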

Individual tests are then defined by having methods with names starting with test in the test case classes. This means that if we add more methods with names that don't start with test, they won't be treated as tests:

class MyTestCase(unittest.TestCase):
    def test_one(self):
        pass

    def notatest(self):
        pass

Trying to start the test runner again will continue to run only the test_one test:

$ python 01_automatictests.py -v
test_one (__main__.MyTestCase) ... ok

----------------------------------------------------------------------
Ran 1 test in 0.000s

OK

In the previous example, only the test_one method was executed as a test, while notatest was recognized as not being a test but instead as a method that we are going to use ourselves in tests.

Being able to distinguish between tests (methods whose names start with test_) and other methods allows us to create helpers and utility methods within our test cases that the individual tests can reuse.
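For example, a test case could factor a shared check into a helper whose name doesn't start with test, so the runner never collects it. The following is just a sketch, not code from the book:

import unittest


class MyTestCase(unittest.TestCase):
    def test_one(self):
        self.assert_valid_result("did something")

    def assert_valid_result(self, result):
        # Not named test*, so it is never collected as a test:
        # it is simply a reusable check shared by the real tests.
        assert isinstance(result, str)
        assert result != ""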

Given that a test suite is a collection of multiple test cases, to grow our test suite, we need to be able to actually write more than one single TestCase subclass and run its tests.

Multiple test cases

We already know that unittest.main is the function in charge of executing our test suite, but how can we make it execute more than one TestCase?

The discovery phase of unittest.main (the phase during which unittest.main decides which tests to run) looks for all subclasses of unittest.TestCase.

The same way we had MyTestCase tests executed, adding more test cases is as simple as declaring more classes:

import unittest


class MyTestCase(unittest.TestCase):
    def test_one(self):
        pass

    def notatest(self):
        pass


class MySecondTestCase(unittest.TestCase):
    def test_two(self):
        pass


if __name__ == '__main__':
    unittest.main()

Running the 01_automatictests.py module again will lead to both test cases being verified:

$ python 01_automatictests.py -v
test_two (__main__.MySecondTestCase) ... ok
test_one (__main__.MyTestCase) ... ok

----------------------------------------------------------------------
Ran 2 tests in 0.000s

OK

If a test case is particularly complex, it can even be divided into multiple individual tests, each checking a specific subpart of it:

class MySecondTestCase(unittest.TestCase):
    def test_two(self):
        pass

    def test_two_part2(self):
        pass

This allows us to divide the test cases into smaller pieces and eventually share setup and teardown code between the individual tests. The individual tests will be executed by the test runner in alphabetical order, so in this case, test_two will be executed before test_two_part2:

$ python 01_automatictests.py -v
test_two (__main__.MySecondTestCase) ... ok
test_two_part2 (__main__.MySecondTestCase) ... ok
test_one (__main__.MyTestCase) ... ok

In that run of the tests, we can see that MySecondTestCase was actually executed before MyTestCase because "MyS" is less than "MyT".

In any case, generally, it's a good idea to consider your tests as being executed in a random order and to not rely on any specific sequence of execution, because other developers might add more test cases, add more individual tests to a case, or rename classes, and you want to allow those changes with no additional issues. Especially since relying on a specific known execution order of your tests might limit your ability to parallelize your test suite and run test cases concurrently, which will be required as the size of your test suite grows.
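Returning to the shared setup and teardown code mentioned earlier, unittest provides dedicated hooks for it: the setUp and tearDown methods run before and after every individual test in the case. A minimal sketch of how they might be used:

import unittest


class MySecondTestCase(unittest.TestCase):
    def setUp(self):
        # Runs before every individual test in this case.
        self.shared_state = []

    def tearDown(self):
        # Runs after every individual test, even a failing one.
        self.shared_state.clear()

    def test_two(self):
        self.shared_state.append("part1")
        assert self.shared_state == ["part1"]

    def test_two_part2(self):
        # setUp prepared a fresh shared_state for this test too.
        assert self.shared_state == []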

Once more tests are added, adding them all into the same class or file quickly gets confusing, so it's usually a good idea to start organizing tests.

Organizing tests

If you have more than a few tests, it's generally a good idea to group your test cases into multiple modules and create a tests directory where you can gather the whole test plan:

├── 02_tests
│   ├── tests_div.py
│   └── tests_sum.py
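The contents of those two modules are not shown here; as a sketch, tests_sum.py might contain something like the following (the TestSum class and the test names are assumptions chosen only to match the discovery output shown next):

# tests_sum.py, hypothetical contents
import unittest


class TestSum(unittest.TestCase):
    def test_sum0(self):
        assert sum([]) == 0

    def test_sum1(self):
        assert sum([1, 2, 3]) == 6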

Those tests can be executed through the unittest discover mode, which will look for all modules with names matching test*.py within a target directory and will run all the contained test cases:

$ python -m unittest discover 02_tests -v
test_div0 (tests_div.TestDiv) ... ok
test_div1 (tests_div.TestDiv) ... ok
test_sum0 (tests_sum.TestSum) ... ok
test_sum1 (tests_sum.TestSum) ... ok

----------------------------------------------------------------------
Ran 4 tests in 0.000s

OK

You can even pick which tests to run by filtering them with a substring with the -k parameter; for example, -k sum will only run tests that contain "sum" in their names:

$ python -m unittest discover 02_tests -k sum -v
test_sum0 (tests_sum.TestSum) ... ok
test_sum1 (tests_sum.TestSum) ... ok

----------------------------------------------------------------------
Ran 2 tests in 0.000s

OK

And yes, you can nest tests further as long as you use Python packages:

├── 02_tests
│   ├── tests_div
│   │   ├── __init__.py
│   │   └── tests_div.py
│   └── tests_sum.py

Running tests structured like the previous directory tree will properly navigate into the subfolders and spot the nested tests.

So running unittest in discovery mode over that directory will properly find the TestDiv and TestSum classes declared inside the files, even when they are nested in subdirectories:

$ python -m unittest discover 02_tests -v
test_div0 (tests_div.tests_div.TestDiv) ... ok
test_div1 (tests_div.tests_div.TestDiv) ... ok
test_sum0 (tests_sum.TestSum) ... ok
test_sum1 (tests_sum.TestSum) ... ok

----------------------------------------------------------------------
Ran 4 tests in 0.000s

OK

Now that we know how to write tests, run them, and organize multiple tests in a test suite, we can start introducing the concept of TDD and how unit tests allow us to achieve it.

Introducing test-driven development and unit tests

Our tests in the previous section were all empty. The purpose was to showcase how a test suite can be made, executed, and organized in test cases and individual tests, but in the end, our tests did not test much.

Most individual tests are written following the "Arrange, Act, Assert" pattern:

  • First, prepare any state you will need to perform the action you want to try.
  • Then perform that action.
  • Finally, verify the consequences of the action are those that you expected.

Generally speaking, in most cases, the action you are going to test is "calling a function," and for code that doesn't depend on any shared state, the state is usually all contained within the function arguments, so the Arrange phase might be omitted. Finally, the Assert phase will verify that the called function did what you expected, which usually means verifying the returned value and any effect at a distance that function might have:

import unittest

class SomeTestCase(unittest.TestCase):
    def test_something(self):
        # Arrange phase, nothing to prepare here.

        # Act phase, call do_something
        result = do_something()

        # Assert phase, verify do_something did what we expect.
        assert result == "did something"

The test_something test is structured as a typical test, with the three phases explicitly exposed: the do_something call represents the Act phase and the final assert statement represents the Assert phase.

Now that we know how to structure tests properly, we can see how they are helpful in implementing TDD and how unit tests are usually expressed.

Test-driven development

Tests can do more than just validate that our code is doing what we expect. The TDD process argues that tests are essential in designing the code itself.

Writing tests before implementing the code itself forces us to reason about our requirements. We must explicitly express requirements in a strict, well-defined way – clearly enough that a computer (computers are known for not being very flexible in understanding things) can understand them and state whether the code we are going to write next satisfies those requirements.

First, you write a test for your primary scenario—in this case, testing that doing 3+2 does return 5 as the result:

import unittest

class AdditionTestCase(unittest.TestCase):
    def test_main(self):
        result = addition(3, 2)
        assert result == 5

Then you make sure it fails, which proves you are really testing something:

$ python 03_tdd.py 
E
======================================================================
ERROR: test_main (__main__.AdditionTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "03_tdd.py", line 5, in test_main
    result = addition(3, 2)
NameError: name 'addition' is not defined

----------------------------------------------------------------------
Ran 1 test in 0.000s

FAILED (errors=1)

Finally, you write the real code that is expected to make the test pass:

def addition(arg1, arg2):
    return arg1 + arg2

And confirm it makes your test pass:

$ python 03_tdd.py 
.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK

Once the test is done and it passes, we can revise our implementation and refactor the code. If the test still passes, it means we haven't changed the behavior and we are still doing what we wanted.

For example, we can change our addition function to unpack arguments instead of having to specify the two arguments it can receive:

def addition(*args):
    a1, a2 = args
    return a1 + a2

If our test still passes, it means we haven't changed the behavior, and it's still as good as before from that point of view:

$ python 03_tdd.py 
.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK

Test-driven development is silent about when you have reached a robust code base that satisfies all your needs. Obviously, you should at least make sure there are enough tests to cover all your requirements.

But as testing guides us in the process of development, development should guide us in the process of testing.

Looking at the code helps us come up with more white-box tests; tests that we can think of because we know how the code works internally. And while those tests might not guarantee that we are satisfying more requirements, they help us guarantee that our code is robust in most conditions, including corner cases.

While historically test-first and test-driven were synonyms, today this is considered the major difference from the test-first approach: in TDD, we don't expect to be able to write all our tests first. Nor is that generally a good idea in the context of extreme programming practices, because you don't yet know what the resulting interface that you want to test will be. What you want to test evolves as the code evolves, and we know that the code will evolve after every passing test, as a passing test gives us a chance for refactoring.

In our prior example, as we changed our addition function to accept a variable number of arguments, a reasonable question would be, "But what happens if I pass three arguments? Or none?" And our requirements, expressed by the tests, as a consequence, have to grow to support a variable number of arguments:

    def test_threeargs(self):
        result = addition(3, 2, 1)
        assert result == 6

    def test_noargs(self):
        result = addition()
        assert result == 0

So, writing code helped us come up with more tests to verify the conditions that came to mind when looking at the code like a white box:

$ python 03_tdd.py 
.EE
======================================================================
ERROR: test_noargs (__main__.AdditionTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "03_tdd.py", line 13, in test_noargs
    result = addition()
  File "03_tdd.py", line 18, in addition
    a1, a2 = args
ValueError: not enough values to unpack (expected 2, got 0)

======================================================================
ERROR: test_threeargs (__main__.AdditionTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "03_tdd.py", line 9, in test_threeargs
    result = addition(3, 2, 1)
  File "03_tdd.py", line 18, in addition
    a1, a2 = args
ValueError: too many values to unpack (expected 2)

----------------------------------------------------------------------
Ran 3 tests in 0.001s

FAILED (errors=2)

And adding those failing tests helps us come up with more, and better, code that now properly handles the cases where any number of arguments is passed to our addition function:

def addition(*args):
    total = 0
    for a in args:
        total += a
    return total

Our addition function will now just iterate over the provided arguments, adding them to the total. Thus if no argument is provided, it will just return 0 because nothing was added to it.

If we run our test suite again, we will be able to confirm that both our new tests now pass, and thus we achieved what we wanted to:

$ python 03_tdd.py 
...
----------------------------------------------------------------------
Ran 3 tests in 0.001s

OK
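With the suite green again, the refactoring step is available once more. As a sketch (this is not the book's code), the same behavior could now be expressed through Python's built-in sum function, with the three existing tests guarding the change:

def addition(*args):
    # Same observable behavior: 0 when called with no arguments,
    # the total of all the arguments otherwise.
    return sum(args)

If the tests still pass after a change like this, the behavior has been preserved.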

Writing tests and writing code should interleave continuously. If you find yourself spending all your time on one or the other, you are probably moving away from the benefits that TDD can give you, as the two phases are meant to support each other.

There are many kinds of tests you are going to write in your test suite during your development practice, but the most common one is probably going to be test units.

Test units

The immediate question, once we know how to arrange our tests, is usually "What should I test?" The answer to that is usually "It depends."

You usually want tests that assert that the feature you are providing to your users does what you expect. But those tests do nothing to guarantee that, internally, the components that collaborate to provide that feature behave correctly. The exposed feature might be working as a very lucky side effect of 200 different bugs in the underlying components.

So it's generally a good idea to test those units individually and verify that they all work as expected.

What are those units? Well, the answer is "it depends" again.

In most cases, you could argue that in procedural programming, the units are the individual functions, while in object-oriented programming, a unit might be defined as a single class. But classes, even though we usually do our best to keep them to a single responsibility, might cover multiple different behaviors depending on which method you call. So they actually act as multiple components in our system, and in such cases, those behaviors should be considered separate units.

In practice, a unit is the smallest testable entity that participates in your software.

If we have a piece of software that does "multiplication," we might implement it as a main function that fetches the two provided arguments and calls a multiply function to do the real job:

def main():
    import sys
    num1, num2 = sys.argv[1:]
    num1, num2 = int(num1), int(num2)
    print(multiply(num1, num2))


def multiply(num1, num2):
    total = 0
    for _ in range(num2):
        total = addition(total, num1)
    return total


def addition(*args):
    total = 0
    for a in args:
        total += a
    return total

In such a case, both addition and multiply are units of our software.

While addition can be tested in isolation, multiply must use addition to work. multiply is thus defined as a sociable unit, while addition is a solitary unit.
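As an illustration, unit tests for those two units might look like the following sketch; it is not the book's code and it assumes the three functions above are saved in a module named multiplication.py, which is a hypothetical name:

import unittest

from multiplication import addition, multiply  # hypothetical module name


class TestAddition(unittest.TestCase):
    def test_addition(self):
        # Solitary unit test: addition depends on no other unit.
        assert addition(3, 2, 1) == 6


class TestMultiply(unittest.TestCase):
    def test_multiply(self):
        # Sociable unit test: multiply relies on the real addition to work.
        assert multiply(3, 4) == 12


if __name__ == '__main__':
    unittest.main()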

Sociable unit tests are frequently also referred to as component tests. Your architecture mostly defines the distinction between a sociable unit test and a component test and it's hard to state exactly when one name should be preferred over the other.

While sociable units usually lead to more complete testing, they are slower, require more effort during the Arrange phase, and are less isolated. This means that a change in addition can make a test of multiply fail, which tells us that there is a problem, but also makes it harder to guess where the problem lies exactly.

In the subsequent chapters, we will see how sociable units can be converted into solitary units by using test doubles. If you have complete testing coverage for the underlying units, solitary unit tests can reach a level of guarantee that is similar to that of sociable units with much less effort and a faster test suite.
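As a small preview of that idea (again just a sketch built on the hypothetical multiplication.py module, with test doubles covered properly in the next chapters), the sociable multiply test could be made solitary by temporarily replacing addition with a double:

import unittest
from unittest import mock

from multiplication import multiply  # hypothetical module name


class TestMultiplySolitary(unittest.TestCase):
    def test_multiply_alone(self):
        # A stand-in for addition, so the test only exercises multiply's loop.
        fake_addition = mock.Mock(side_effect=lambda total, num: total + num)
        with mock.patch("multiplication.addition", fake_addition):
            assert multiply(3, 4) == 12
        # multiply should have delegated the summing work four times.
        assert fake_addition.call_count == 4


if __name__ == '__main__':
    unittest.main()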

Test units are usually great at testing software from a white-box perspective, but that's not the sole point of view we should account for in our testing strategy. Test units guarantee that the code does what the developer meant it to, but do little to guarantee that the code does what the user needs. Integration and functional tests are usually more effective in terms of testing at that level of abstraction.

Understanding integration and functional tests

Testing all our software with solitary units can't guarantee that it's really working as expected. Unit testing confirms that the single components are working as expected, but doesn't give us any confidence about their effectiveness when paired together.

It's like testing an engine by itself, testing the wheels by themselves, testing the gears, and then expecting the car to work. We wouldn't be accounting for any issues introduced in the assembly process.

So we have a need to verify that those modules do work as expected when paired together.

That's exactly what integration tests are expected to do. They take the modules we tested individually and test them together.

Integration tests

The scope of integration tests is blurry. They might integrate two modules, or they might integrate tens of them. While they are more effective when integrating fewer modules, that approach is also more expensive to push forward, and most developers argue that the effort of testing all possible combinations of modules in isolation isn't usually worth the benefit.

The boundary between unit tests made of sociable units and integration tests is not easy to pin down. It usually depends on the architecture of the software itself. We could consider sociable unit tests to be those that test together units living inside the same architectural component, while integration tests are those that test different architectural components together.

In our example application, two separate services are involved: Authorization and Authentication. Authentication takes care of letting the user in and identifying them, while Authorization tells us what the user can do once they are authenticated. We can see this in the following code block:

class Authentication:
    USERS = [{"username": "user1",
              "password": "pwd1"}]

    def login(self, username, password):
        u = self.fetch_user(username)
        if not u or u["password"] != password:
            return None
        return u

    def fetch_user(self, username):
        for u in self.USERS:
            if u["username"] == username:
                return u
        else:
            return None


class Authorization:
    PERMISSIONS = [{"user": "user1",
                    "permissions": {"create", "edit", "delete"}}]

    def can(self, user, action):
        for u in self.PERMISSIONS:
            if u["user"] == user["username"]:
                return action in u["permissions"]
        else:
            return False

Our classes expose two primary methods: Authentication.login and Authorization.can. The first is in charge of authenticating the user with a username and password and returning the authenticated user, while the second is in charge of verifying that a user can do a specific action. Tests for those methods can be considered unit tests.

So TestAuthentication.test_login will be a unit test that verifies the behavior of the Authentication.login unit, while TestAuthorization.test_can will be a unit test that verifies the behavior of the Authorization.can unit:

class TestAuthentication(unittest.TestCase):
    def test_login(self):
        auth = Authentication()
        auth.USERS = [{"username": "testuser", "password": "testpass"}]

        resp = auth.login("testuser", "testpass")

        assert resp == {"username": "testuser", "password": "testpass"}


class TestAuthorization(unittest.TestCase):
    def test_can(self):
        authz = Authorization()
        authz.PERMISSIONS = [{"user": "testuser",
                              "permissions": {"create"}}]

        resp = authz.can({"username": "testuser"}, "create")

        assert resp is True

Here, we have the notable difference that TestAuthentication.test_login is a sociable unit test as it depends on Authentication.fetch_user while testing Authentication.login, and TestAuthorization.test_can is instead a solitary unit test as it doesn't depend on any other unit.

So where is the integration test?

The integration test will happen once we join those two components of our architecture (authorization and authentication) and test them together to confirm that we can actually have a user log in and verify their permissions:

class TestAuthorizeAuthenticatedUser(unittest.TestCase):
    def test_auth(self):
        auth = Authentication()
        authz = Authorization()
        auth.USERS = [{"username": "testuser", "password": "testpass"}]
        authz.PERMISSIONS = [{"user": "testuser",
                              "permissions": {"create"}}]

        u = auth.login("testuser", "testpass")
        resp = authz.can(u, "create")

        assert resp is True

Generally, it's important to be able to run your integration tests independently from your unit tests, as you will want to be able to run the unit tests continuously during development on every change:

$ python 05_integration.py TestAuthentication TestAuthorization
........
----------------------------------------------------------------------
Ran 8 tests in 0.000s

OK

While unit tests are usually verified frequently during the development cycle, it's common to run your integration tests only when you've reached a stable point where your unit tests all pass:

$ python 05_integration.py TestAuthorizeAuthenticatedUser
.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK

Since you already know that the units you wrote or modified do what you expect, running only the TestAuthorizeAuthenticatedUser case will confirm that those entities work together as expected.

Integration tests integrate multiple components, but they actually divide themselves into many different kinds of tests depending on their purpose, with the most common kind being functional tests.

Functional tests

Integration tests can be very diverse. As you start integrating more and more components, you move toward a higher level of abstraction, and in the end, you move so far from the underlying components that people feel the need to distinguish those kinds of tests as they offer different benefits, complexities, and execution times.

That's where the names functional tests, end-to-end tests, system tests, acceptance tests, and so on come from.

Overall, those are all forms of integration tests; what changes is their goal and purpose:

  • Functional tests tend to verify that we are exposing to our users the feature we actually intended. They don't care about intermediate results or side-effects; they just verify that the end result for the user is the one the specifications described, thus they are always black-box tests.
  • End-to-End (E2E) tests are a specific kind of functional test that involves the vertical integration of components. The most common E2E tests are where technologies such as Selenium are involved in accessing a real application instance through a web browser.
  • System tests are very similar to functional tests themselves, but instead of testing a single feature, they usually test a whole journey of the user across the system. So they usually simulate real usage patterns of the user to verify that the system as a whole behaves as expected.
  • Acceptance tests are a kind of functional test that is meant to confirm that the implementation of the feature does behave as expected. They usually express the primary usage flow of the feature, leaving less common flows for other integration tests, and are frequently provided by the specifications themselves to help the developer confirm that they implemented what was expected.

But those are not the only kinds of integration tests that people refer to; new types are continuously defined in the effort to distinguish the goals and responsibilities of tests. Component tests, contract tests, and many others are kinds of tests whose goal is to verify integration between different pieces of the software at different layers. Overall, you shouldn't be ashamed of asking your colleagues what exactly they mean when they use those names, because you will notice that each of them values different properties of those tests when classifying them into the different categories.

The general distinction to keep in mind is that unit and integration tests aim to test the implementation, while functional tests aim to test the behavior.
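To make that distinction concrete, a functional test for the earlier multiplication program could exercise it the way a user would, through its command line, and only check the observable output. The following sketch assumes the program is saved as multiplication.py with a standard if __name__ == '__main__': main() guard; both are hypothetical details, not something shown earlier:

import subprocess
import sys
import unittest


class TestMultiplicationFeature(unittest.TestCase):
    def test_multiplication_from_the_command_line(self):
        # Black-box check: run the program as a user would and only
        # look at what it prints, not at how it computes the result.
        completed = subprocess.run(
            [sys.executable, "multiplication.py", "3", "4"],
            capture_output=True, text=True, check=True,
        )
        assert completed.stdout.strip() == "12"


if __name__ == '__main__':
    unittest.main()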

How you do that can easily involve the same exact technologies and it's just a matter of different goals. Properly covering the behavior of your software with the right kind of tests can be the difference between buggy software and reliable software. That's why there has been a long debate about how to structure test suites, leading to the testing pyramid and the testing trophy as the most widespread models of test distribution.

Understanding the testing pyramid and trophy

Given the need to provide different kinds of tests – unit, integration, and E2E – as each of them has different benefits and costs, the next immediate question is: how do we get the right balance?

Each kind of test comes with a benefit and a cost, so it's a matter of finding where we get the best return on investment:

  • E2E tests verify the real experience of what the user faces. They are, in theory, the most realistic kind of tests and can detect problems such as incompatibilities with specific platforms (for example, browsers) and exercise our system as a whole. But when something goes wrong, it is hard to spot where the problem lies. They are very slow and tend to be flaky (failing for reasons unrelated to our software, such as network conditions).
  • Integration tests usually provide a reasonable guarantee that the software is doing what it is expected to do and are fairly robust to internal implementation changes, requiring less frequent refactoring when the internals of the software change. But they can still get very slow if your system involves writes to database services, the rendering of page templates, routing HTTP requests, and generally slow parts. And when something goes wrong, we might have to go through tens of layers before being able to spot where the problem is.
  • Unit tests can be very fast (especially when talking of solitary units) and provide very pinpointed information about where problems are. But they can't always guarantee that the software as a whole does what it's expected to do and can make changing implementation details expensive because a change to internals that don't impact the software behavior might require changing tens of unit tests.

Each of them has its own pros and cons, and the development community has long argued how to get the right balance.

The two primary models that have emerged are the testing pyramid and the testing trophy, named after their shapes.

The testing pyramid

The testing pyramid originates from Mike Cohn's book Succeeding with Agile, where the two rules of thumb are "write tests with different granularities" (so you should have unit, integration, E2E tests, and so on) and "the higher the level, the fewer tests you should have" (so you should have tons of unit tests, and only a few E2E tests).

While different people will argue about which different layers are contained within it, the testing pyramid can be simplified to look like this:

Figure 1.1 – Testing pyramid

The tip of the pyramid is narrow, meaning we have fewer of those tests, while the base is wider, meaning we should cover most of the code with those kinds of tests. So, as we move down through the layers, the lower we get, the more tests we should have.

The idea is that as unit tests are fast to run and expose pinpointed issues early on, you should have a lot of them and shrink the number of tests as they move to higher layers and thus get slower and vaguer about what's broken.

The testing pyramid is probably the most widespread practice for organizing tests and usually pairs well with test-driven development as unit tests are the founding tool for the TDD process.

The other most widespread model is the testing trophy, which instead emphasizes integration tests.

The testing trophy

The testing trophy originates from a phrase by Guillermo Rauch, the author of Socket.io and many other famous JavaScript-based technologies. Guillermo stated that developers should "Write tests. Not too many. Mostly integration."

Like Mike Cohn, he clearly states that tests are the foundation of any effective software development practice, but he argues that they have a diminishing return and thus it's important to find the sweet spot where you get the best return on the time spent writing tests.

That sweet spot is expected to live in integration tests because you usually need fewer of them to spot real problems, they are not too bound to implementation details, and they are still fast enough that you can afford to write a few of them.

So the testing trophy will look like this:

Figure 1.2 – Testing trophy

As you probably saw, the testing trophy puts a lot of value on static tests too, because the whole idea of the testing trophy is that what is really of value is the return on investment, and static checks are fairly cheap, up to the point that most development environments run them in real time. Linters, type checkers, and more advanced kinds of type analyzers are cheap enough that it would do no good to ignore them even if they are rarely able to spot bugs in your business logic.
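As a small illustration of why static checks are such cheap wins (a sketch; mypy is only one example of such a tool and isn't specifically prescribed here), a couple of type annotations let a checker reject an incorrect call before any test even runs:

def addition(*args: int) -> int:
    total = 0
    for a in args:
        total += a
    return total


# A type checker such as mypy would reject this call for passing
# strings where ints are expected, without ever running the code.
wrong_result = addition("3", "2")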

Unit tests instead can cost developers time with the need to adapt them due to internal implementation detail changes that don't impact the final behavior of the software in any way, and thus the effort spent on them should be kept under control.

Those two models are the most common ways to distribute your tests, but more best practices are involved when thinking of testing distribution and coverage.

Testing distribution and coverage

While the importance of testing is widely recognized, there is also general agreement that test suites have a diminishing return.

There is little point in wasting hours on testing plain getters and setters or testing internal/private methods. The sweet spot is said to be around 80% code coverage, even though I think that really depends on the language in use – the more expressive your language is, the less code you have to write to perform complex actions, and all complex actions should be properly tested, so in the case of Python, the sweet spot probably lies closer to 90%. But there are cases, such as porting projects from Python 2 to Python 3, where code coverage of 100% is the only way you can confirm that you haven't changed any behavior at all in the process of porting your code base.
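Measuring coverage is usually automated. As a sketch (using the third-party coverage package, which this chapter isn't specifically prescribing), the unittest suite from earlier could be measured like this:

$ python -m pip install coverage
$ python -m coverage run -m unittest discover 02_tests
$ python -m coverage report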

Last but not least, most testing practices related to test-driven development cover the testing process only up to the release point. It's important to keep in mind that when the software is released, the testing process hasn't finished.

Many teams forget to set up proper system tests and don't have a way to identify and reproduce issues that can only happen in production environments with real concurrent users and large amounts of data. Having staging environments and a suite to simulate incidents or real users' behaviors might be the only way to spot bugs that only happen after days of continuous use of the system. And some companies go as far as testing the production system with tools that inject real problems continuously for the sole purpose of verifying that the system is solid.

Summary

As we saw in the sections about integration tests, functional tests, and the testing pyramid/trophy models, there are many different visions about what should be tested, with which goals in mind, and how test suites should be organized. Getting this right can impact how much you trust your automatic test suite, and thus how much you evolve it because it provides you with value.

Learning to do proper automated testing is the gateway to major software development boosts, opening possibilities for practices such as continuous integration and continuous delivery, which would otherwise be impossible without a proper test suite.

But testing isn't easy; it comes with many side-effects that are not immediately obvious, and for which the software development industry started to provide tools and best practices only recently. So in the next chapters, we will look at some of those best practices and tools that can help you write a good, easily maintained test suite.


Key benefits

  • Build robust Python applications using TDD and BDD methodologies
  • Test Python web applications using WebTest and web frameworks
  • Leverage PyTest to implement stringent testing mechanisms to ensure fault-tolerant applications

Description

Test-driven development (TDD) is a set of best practices that helps developers to build more scalable software and is used to increase the robustness of software by using automatic tests. This book shows you how to apply TDD practices effectively in Python projects. You’ll begin by learning about built-in unit tests and Mocks before covering rich frameworks like PyTest and web-based libraries such as WebTest and Robot Framework, discovering how Python allows you to embrace all modern testing practices with ease. Moving on, you’ll find out how to design tests and balance them with new feature development and learn how to create a complete test suite with PyTest. The book helps you adopt a hands-on approach to implementing TDD and associated methodologies that will have you up and running and make you more productive in no time. With the help of step-by-step explanations of essential concepts and practical examples, you’ll explore automatic tests and TDD best practices and get to grips with the methodologies and tools available in Python for creating effective and robust applications. By the end of this Python book, you will be able to write reliable test suites in Python to ensure the long-term resilience of your application using the range of libraries offered by Python for testing and development.

Who is this book for?

This book is for Python developers looking to get started with test-driven development and developers who want to learn about the testing tools available in Python. Developers who want to create web applications with Python and plan to implement TDD methodology with PyTest will find this book useful. Basic knowledge of Python programming is required.

What you will learn

  • Find out how tests can make your life easier as a developer and discover related best practices
  • Explore PyTest, the most widespread testing framework for Python
  • Get to grips with the most common PyTest plugins, including coverage, flaky, xdist, and picked
  • Write functional tests for WSGI web applications with WebTest
  • Run end-to-end tests for web applications using Robot Framework
  • Understand what test-driven development means and why it is important
  • Discover how to use the range of tools available in Python
  • Build reliable and robust applications
Product Details

Publication date: Feb 18, 2021
Length: 338 pages
Edition: 1st
Language: English
ISBN-13: 9781838642655




Table of Contents

Section 1: Software Testing and Test-Driven Development
Getting Started with Software Testing
Test Doubles with a Chat Application
Test-Driven Development while Creating a TODO List
Scaling the Test Suite
Section 2: PyTest for Python Testing
Introduction to PyTest
Dynamic and Parametric Tests and Fixtures
Fitness Function with a Contact Book Application
PyTest Essential Plugins
Managing Test Environments with Tox
Testing Documentation and Property-Based Testing
Section 3: Testing for the Web
Testing for the Web: WSGI versus HTTP
End-to-End Testing with the Robot Framework
About Packt
Other Books You May Enjoy


FAQs

What is the delivery time and cost of print book? Chevron down icon Chevron up icon

Shipping Details

USA:

'

Economy: Delivery to most addresses in the US within 10-15 business days

Premium: Trackable Delivery to most addresses in the US within 3-8 business days

UK:

Economy: Delivery to most addresses in the U.K. within 7-9 business days.
Shipments are not trackable.

Premium: Trackable delivery to most addresses in the U.K. within 3-4 business days.
Add one extra business day for deliveries to Northern Ireland and the Scottish Highlands and islands.

EU:

Premium: Trackable delivery to most EU destinations within 4-9 business days.

Australia:

Economy: Can deliver to P. O. Boxes and private residences.
Trackable service with delivery to addresses in Australia only.
Delivery time ranges from 7-9 business days for VIC and 8-10 business days for interstate metro areas.
Delivery time is up to 15 business days for remote areas of WA, NT & QLD.

Premium: Delivery to addresses in Australia only.
Trackable delivery to most P. O. Boxes and private residences in Australia within 4-5 days of dispatch, depending on the distance to the destination.

India:

Premium: Delivery to most Indian addresses within 5-6 business days

Rest of the World:

Premium: Countries in the American continent: Trackable delivery to most countries within 4-7 business days

Asia:

Premium: Delivery to most Asian addresses within 5-9 business days

Disclaimer:
All orders received before 5 PM U.K. time will start printing on the next business day, so the estimated delivery times also start from the next day. Orders received after 5 PM U.K. time (in our internal systems) on a business day, or at any time over the weekend, will begin printing on the second business day after. For example, an order placed at 11 AM today will begin printing tomorrow, whereas an order placed at 9 PM tonight will begin printing the day after tomorrow.


Unfortunately, due to several restrictions, we are unable to ship to the following countries:

  1. Afghanistan
  2. American Samoa
  3. Belarus
  4. Brunei Darussalam
  5. Central African Republic
  6. The Democratic Republic of Congo
  7. Eritrea
  8. Guinea-Bissau
  9. Iran
  10. Lebanon
  11. Libyan Arab Jamahiriya
  12. Somalia
  13. Sudan
  14. Russian Federation
  15. Syrian Arab Republic
  16. Ukraine
  17. Venezuela
What is a customs duty/charge?

Customs duties are charges levied on goods when they cross international borders. They are taxes imposed on imported goods, charged by special authorities and bodies created by local governments, and are meant to protect local industries, economies, and businesses.

Do I have to pay customs charges for my print book order?

Orders shipped to countries listed under the EU27 will not incur customs charges; these are paid by Packt as part of the order.

List of EU27 countries: www.gov.uk/eu-eea

Customs duty or localized taxes may apply to shipments to recipient countries outside of the EU27. These must be paid by the customer and are not included in the shipping charges applied to the order.

How do I know my customs duty charges?

The amount of duty payable varies greatly depending on the imported goods, the country of origin, and several other factors, such as the total invoice amount, dimensions such as weight, and other criteria applicable in your country.

For example:

  • If you live in Mexico and the declared value of your ordered items is over $50, you will have to pay an additional import tax of 19% (which, on a $50 value, amounts to $9.50) to the courier service in order to receive the package.
  • Whereas if you live in Turkey and the declared value of your ordered items is over €22, you will have to pay an additional import tax of 18% (which, on a €22 value, amounts to €3.96) to the courier service in order to receive the package, as illustrated in the sketch after this list.
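As a rough, purely illustrative sketch of how such figures come about (the estimated_import_duty helper and the threshold/rate values below are hypothetical and not part of Packt's checkout or courier systems), the duty is simply the local rate applied to the declared value once it exceeds the destination country's threshold:

def estimated_import_duty(declared_value, threshold, rate):
    # Hypothetical helper, not Packt's actual logic: no duty is due at or
    # below the destination country's threshold; above it, the local rate
    # is applied to the declared value and collected by the courier.
    if declared_value <= threshold:
        return 0.0
    return round(declared_value * rate, 2)

# The figures quoted above apply the rate to the threshold value itself:
# 19% of $50 is $9.50 (Mexico) and 18% of €22 is €3.96 (Turkey).
assert round(50 * 0.19, 2) == 9.50
assert round(22 * 0.18, 2) == 3.96

# For a $60 order shipped to Mexico, roughly $11.40 would be owed.
assert estimated_import_duty(60, 50, 0.19) == 11.40
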
How can I cancel my order?

Cancellation Policy for Published Printed Books:

You can cancel any order within 1 hour of placing it. Simply contact customercare@packt.com with your order details or payment transaction ID. If your order has already started the shipment process, we will do our best to stop it. However, if it is already on its way to you, you can contact us at customercare@packt.com once you receive it and use the returns and refunds process.

Please understand that Packt Publishing cannot provide refunds or cancel any order except in the cases described in our Return Policy (that is, where Packt Publishing agrees to replace your printed book because it arrived damaged or with a material defect); otherwise, Packt Publishing will not accept returns.

What is your returns and refunds policy?

Return Policy:

We want you to be happy with your purchase from Packtpub.com. We will not hassle you with returning print books to us. If the print book you receive from us is incorrect, damaged, doesn't work, or is unacceptably late, please contact the Customer Relations Team at customercare@packt.com with the order number and issue details, as explained below:

  1. If you ordered an eBook, video, or print book incorrectly or accidentally, please contact the Customer Relations Team at customercare@packt.com within one hour of placing the order and we will replace the item or refund its cost.
  2. If your eBook or video file is faulty, or a fault occurs while it is being made available to you (i.e. during download), contact the Customer Relations Team at customercare@packt.com within 14 days of purchase and they will resolve the issue for you.
  3. You will have a choice of replacement or refund for the problem items (damaged, defective, or incorrect).
  4. Once the Customer Care Team confirms that you will be refunded, you should receive the refund within 10 to 12 working days.
  5. If you are requesting a refund for only one book from a multi-item order, we will refund the appropriate single item.
  6. Where the items were shipped under a free shipping offer, there will be no shipping costs to refund.

In the unlikely event that your printed book arrives damaged or with a material defect, contact our Customer Relations Team at customercare@packt.com within 14 days of receipt, with appropriate evidence of the damage, and we will work with you to secure a replacement copy if necessary. Please note that each printed book you order from us is individually made on a print-on-demand basis by Packt's professional book-printing partner.

What tax is charged?

Currently, no tax is charged on the purchase of any print book (subject to change based on applicable laws and regulations). A localized VAT fee is charged only to our European and UK customers on the eBooks, videos, and subscriptions they buy. GST is charged to Indian customers for eBook and video purchases.

What payment methods can I use?

You can pay with the following card types:

  1. Visa Debit
  2. Visa Credit
  3. MasterCard
  4. PayPal