The comment at the top of the sample tests.py file states that the two tests will both pass when you run "manage.py test". So let's see what happens if we try that:
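What we get instead is an error. The exact message depends on the environment, but with Django 1.1 and an unset DATABASE_ENGINE the traceback ends roughly as shown here (the middle frames are omitted):

    $ python manage.py test
    Traceback (most recent call last):
      ...
    django.core.exceptions.ImproperlyConfigured: You haven't set the
    DATABASE_ENGINE setting yet.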
Oops, we seem to have gotten ahead of ourselves here. We created our new Django project and application, but never edited the settings file to specify any database information. Clearly we need to do that in order to run the tests.
But will the tests use the production database we specify in settings.py? That could be worrisome, since we might at some point code something in our tests that we wouldn't necessarily want done to our production data. Fortunately, it's not a problem. The Django test runner creates an entirely new database for running the tests, uses it for the duration of the tests, and deletes it at the end of the test run. The name of this database is test_ followed by the DATABASE_NAME specified in settings.py. So running tests will not interfere with production data.
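For example, given a hypothetical DATABASE_NAME setting like the following, the tests would run against a database named test_surveydb:

    DATABASE_NAME = 'surveydb'  # hypothetical production database name

    # "manage.py test" creates a database named 'test_surveydb', runs
    # the tests against it, and deletes it when the run completes;
    # 'surveydb' itself is never touched.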
In order to run the sample tests.py file, we first need to set appropriate values in settings.py for DATABASE_ENGINE, DATABASE_NAME, and whatever else may be required for the database we are using. Now would also be a good time to add our survey application and django.contrib.admin to INSTALLED_APPS, as we will need both of those as we proceed.
Once those changes have been made to settings.py, manage.py test works better:
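With the table creation and other setup messages trimmed, the output looks something like this (the timing will of course vary):

    $ python manage.py test
    Creating test database...
    ...
    ...................................
    ----------------------------------------------------------------------
    Ran 35 tests in 2.012s

    OK
    Destroying test database...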
That looks good. But what exactly got tested? Towards the end it says Ran 35 tests, so there were certainly more tests run than the two in our simple tests.py file. The other 33 tests come from the applications listed by default in settings.py: auth, content types, sessions, and sites. These Django "contrib" applications ship with their own tests, and by default, manage.py test runs the tests for all applications listed in INSTALLED_APPS.
Note

If you do not add django.contrib.admin to the INSTALLED_APPS list in settings.py, then manage.py test may report some test failures. With Django 1.1, some of the tests for django.contrib.auth rely on django.contrib.admin also being included in INSTALLED_APPS in order to pass. That inter-dependence may be fixed in the future, but for now it is easiest to avoid the possible failures by including django.contrib.admin in INSTALLED_APPS from the start. We will want to use it soon enough anyway.
It is possible to run just the tests for certain applications. To do this, specify the application names on the command line. For example, to run only the survey application tests:
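The command and a trimmed version of its output look something like this (only a few of the table and index messages are reproduced here):

    $ python manage.py test survey
    Creating test database...
    Creating table auth_permission
    Creating table auth_group
    Creating table auth_user
    ...
    Installing index for auth.Permission model
    ...
    ..
    ----------------------------------------------------------------------
    Ran 2 tests in 0.008s

    OK
    Destroying test database...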
There, Ran 2 tests looks right for our sample tests.py file. But what about all those messages about tables being created and indexes being installed? Why were the tables for the other applications created when their tests were not going to be run? The reason is that the test runner does not know what dependencies may exist between the application(s) being tested and the others listed in INSTALLED_APPS that are excluded from testing.
For example, our survey application could have a model with a ForeignKey to the django.contrib.auth User model, and the survey tests might rely on being able to add and query User entries. This would not work if the test runner neglected to create tables for the applications excluded from testing. Therefore, the test runner creates the tables for all applications listed in INSTALLED_APPS, even those for which no tests are going to be run.
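To make that concrete, here is a hypothetical sketch of such a model (our actual survey models come later; the fields here are made up purely for illustration):

    from django.contrib.auth.models import User
    from django.db import models

    class Survey(models.Model):
        title = models.CharField(max_length=60)
        # This ForeignKey ties the survey application to
        # django.contrib.auth: saving a Survey in a test requires the
        # auth_user table to exist, even when the auth tests themselves
        # are not being run.
        creator = models.ForeignKey(User)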
We now know how to run tests, how to limit the testing to just the application(s) we are interested in, and what a successful test run looks like. But, what about test failures? We're likely to encounter a fair number of those in real work, so it would be good to make sure we understand the test output when they occur. In the next section, then, we will introduce some deliberate breakage so that we can explore what failures look like and ensure that when we encounter real ones, we will know how to properly interpret what the test run is reporting.