Django 1.1 Testing and Debugging

Chapter 1. Django Testing Overview

How do you know when code you have written is working as intended? Well, you test it. But how? For a web application, you can test the code by manually bringing up the pages of your application in a web browser and verifying that they are correct. This involves more than a quick glance to see whether they have the correct content, as you must also ensure, for example, that all the links work and that any forms work properly. As you can imagine, this sort of manual testing quickly becomes impossible to rely on as an application grows beyond a few simple pages. For any non-trivial application, automated testing is essential.

Automated testing of Django applications makes use of the fundamental test support built into the Python language: doctests and unit tests. When you create a new Django application with manage.py startapp, one of the generated files contains a sample doctest and unit test, intended to jump-start your own test writing. In this chapter, we will begin our study of testing Django applications. Specifically, we will:

  • Examine in detail the contents of the sample tests.py file, reviewing the fundamentals of Python's test support as we do so

  • See how to use Django utilities to run the tests contained in tests.py

  • Learn how to interpret the output of the tests, both when the tests succeed and when they fail

  • Review the effects of the various command-line options that can be used when testing

Getting started: Creating a new application


Let's get started by creating a new Django project and application. Just so we have something consistent to work with throughout this book, let's assume we are setting out to create a new market-research type website. At this point, we don't need to decide much about this site except some names for the Django project and at least one application that it will include. As market_research is a bit long, let's shorten that to marketr for the project name. We can use django-admin.py to create a new Django project:

kmt@lbox:/dj_projects$ django-admin.py startproject marketr

Then, from within the new marketr directory, we can create a new Django application using the manage.py utility. One of the core applications for our market research project will be a survey application, so we will start by creating it:

kmt@lbox:/dj_projects/marketr$ python manage.py startapp survey

Now we have the basic skeleton of a Django project and application: a settings.py file, a urls.py file, the manage.py utility, and a survey directory containing .py files for models, views, and tests. There is nothing of substance placed in the auto-generated models and views files, but in the tests.py file there are two sample tests: one unit test and one doctest. We will examine each in detail next.
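
At this point, the directory tree should look roughly like the following sketch (the Django 1.1-era layout; minor details can differ between versions):

/dj_projects/marketr/
    __init__.py
    manage.py
    settings.py
    urls.py
    survey/
        __init__.py
        models.py
        tests.py
        views.py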

Understanding the sample unit test


The unit test is the first test contained in tests.py, which begins:

""" 
This file demonstrates two different styles of tests (one doctest and one unittest). These will both pass when you run "manage.py test". 

Replace these with more appropriate tests for your application. 
"""

from django.test import TestCase 

class SimpleTest(TestCase): 
    def test_basic_addition(self): 
        """ 
        Tests that 1 + 1 always equals 2. 
        """ 
        self.failUnlessEqual(1 + 1, 2) 

The unit test starts by importing TestCase from django.test. The django.test.TestCase class is based on Python's unittest.TestCase, so it provides everything from the underlying Python unittest.TestCase plus features useful for testing Django applications. These Django extensions to unittest.TestCase will be covered in detail in Chapter 3, Testing 1, 2, 3: Basic Unit Testing and Chapter 4, Getting Fancier: Django Unit Test Extensions. The sample unit test here doesn't actually need any of that support, but it does not hurt to base the sample test case on the Django class anyway.

The sample unit test then declares a SimpleTest class based on Django's TestCase, and defines a test method named test_basic_addition within that class. That method contains a single statement:

self.failUnlessEqual(1 + 1, 2)

As you might expect, that statement will cause the test case to report a failure unless the two provided arguments are equal. As coded, we'd expect that test to succeed. We'll verify that later in this chapter, when we get to actually running the tests. But first, let's take a closer look at the sample doctest.
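
As an aside, failUnlessEqual is one of the older names for this assertion in Python's unittest module; the same check can be written using the assertEqual alias, which later Python versions prefer:

self.assertEqual(1 + 1, 2)  # equivalent check; failUnlessEqual is an older alias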

Understanding the sample doctest


The doctest portion of the sample tests.py is:

__test__ = {"doctest": """
Another way to test that 1 + 1 is equal to 2.

>>> 1 + 1 == 2
True
"""}

That looks a bit more mysterious than the unit test half. For the sample doctest, a special variable, __test__, is declared. This variable is set to be a dictionary containing one key, doctest. This key is set to a string value that resembles a docstring containing a comment followed by what looks like a snippet from an interactive Python shell session.

The part that looks like an interactive Python shell session is what makes up the doctest. That is, lines that start with >>> will be executed (minus the >>> prefix) during the test, and the actual output produced will be compared to the expected output found in the doctest below the line that starts with >>>. If any actual output fails to match the expected output, the test fails. For this sample test, we would expect entering 1 + 1 == 2 in an interactive Python shell session to result in the interpreter producing the output True, so again it looks like this sample test should pass.

Note that doctests do not have to be defined by using this special __test__ dictionary. In fact, Python's doctest test runner looks for doctests within all the docstrings found in the file. In Python, a docstring is a string literal that is the first statement in a module, function, class, or method definition. Given that, you'd expect snippets from an interactive Python shell session found in the comment at the very top of this tests.py file to also be run as a doctest. This is another thing we can experiment with once we start running these tests, which we'll do next.
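
For instance, the doctest runner will also pick up a doctest placed in the docstring of a function defined in tests.py. The following helper is purely hypothetical (it is not part of the generated file), but it sketches what that looks like:

def average_rating(ratings):
    """
    Return the average of a sequence of numeric ratings.

    >>> average_rating([1, 2, 3])
    2.0
    """
    return float(sum(ratings)) / len(ratings)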

Running the sample tests


The comment at the top of the sample tests.py file states that the two tests will both pass when you run "manage.py test". So let's see what happens if we try that:

kmt@lbox:/dj_projects/marketr$ python manage.py test 
Creating test database... 
Traceback (most recent call last): 
  File "manage.py", line 11, in <module> 
    execute_manager(settings) 
  File "/usr/lib/python2.5/site-packages/django/core/management/__init__.py", line 362, in execute_manager 
    utility.execute() 
  File "/usr/lib/python2.5/site-packages/django/core/management/__init__.py", line 303, in execute 
    self.fetch_command(subcommand).run_from_argv(self.argv) 
  File "/usr/lib/python2.5/site-packages/django/core/management/base.py", line 195, in run_from_argv 
    self.execute(*args, **options.__dict__) 
  File "/usr/lib/python2.5/site-packages/django/core/management/base.py", line 222, in execute 
    output = self.handle(*args, **options) 
  File "/usr/lib/python2.5/site-packages/django/core/management/commands/test.py", line 23, in handle 
    failures = test_runner(test_labels, verbosity=verbosity, interactive=interactive) 
  File "/usr/lib/python2.5/site-packages/django/test/simple.py", line 191, in run_tests 
    connection.creation.create_test_db(verbosity, autoclobber=not interactive) 
  File "/usr/lib/python2.5/site-packages/django/db/backends/creation.py", line 327, in create_test_db 
    test_database_name = self._create_test_db(verbosity, autoclobber) 
  File "/usr/lib/python2.5/site-packages/django/db/backends/creation.py", line 363, in _create_test_db 
    cursor = self.connection.cursor() 
  File "/usr/lib/python2.5/site-packages/django/db/backends/dummy/base.py", line 15, in complain 
    raise ImproperlyConfigured, "You haven't set the DATABASE_ENGINE setting yet." 
django.core.exceptions.ImproperlyConfigured: You haven't set the DATABASE_ENGINE setting yet.

Oops, we seem to have gotten ahead of ourselves here. We created our new Django project and application, but never edited the settings file to specify any database information. Clearly we need to do that in order to run the tests.

But will the tests use the production database we specify in settings.py? That could be worrisome, since we might at some point code something in our tests that we wouldn't necessarily want to do to our production data. Fortunately, it's not a problem. The Django test runner creates an entirely new database for running the tests, uses it for the duration of the tests, and deletes it at the end of the test run. The name of this database is test_ followed by DATABASE_NAME specified in settings.py. So running tests will not interfere with production data.

In order to run the sample tests.py file, we need to first set appropriate values for DATABASE_ENGINE, DATABASE_NAME, and whatever else may be required for the database we are using in settings.py. Now would also be a good time to add our survey application and django.contrib.admin to INSTALLED_APPS, as we will need both of those as we proceed. Once those changes have been made to settings.py, manage.py test works better:
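
For example, using SQLite (a convenient choice since it requires no separate database server), the relevant portions of a Django 1.1 settings.py might look like the following sketch; the database path here is just a placeholder:

DATABASE_ENGINE = 'sqlite3'     # Django 1.1-era setting names
DATABASE_NAME = '/dj_projects/marketr/marketr.db'

INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'django.contrib.admin',     # added for the reasons noted below
    'survey',                   # our new application
)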

kmt@lbox:/dj_projects/marketr$ python manage.py test 
Creating test database... 
Creating table auth_permission 
Creating table auth_group 
Creating table auth_user 
Creating table auth_message 
Creating table django_content_type 
Creating table django_session 
Creating table django_site 
Creating table django_admin_log 
Installing index for auth.Permission model 
Installing index for auth.Message model 
Installing index for admin.LogEntry model 
................................... 
---------------------------------------------------------------------- 
Ran 35 tests in 2.012s 

OK 
Destroying test database...

That looks good. But what exactly got tested? Towards the end it says Ran 35 tests, so there were certainly more tests run than the two tests in our simple tests.py file. The other 33 tests are from the other applications listed by default in settings.py: auth, content types, sessions, and sites. These Django "contrib" applications ship with their own tests, and by default, manage.py test runs the tests for all applications listed in INSTALLED_APPS.

Note

Note that if you do not add django.contrib.admin to the INSTALLED_APPS list in settings.py, then manage.py test may report some test failures. With Django 1.1, some of the tests for django.contrib.auth rely on django.contrib.admin also being included in INSTALLED_APPS in order for the tests to pass. That inter-dependence may be fixed in the future, but for now it is easiest to avoid the possible errors by including django.contrib.admin in INSTALLED_APPS from the start. We will want to use it soon enough anyway.

It is possible to run just the tests for certain applications. To do this, specify the application names on the command line. For example, to run only the survey application tests:

kmt@lbox:/dj_projects/marketr$ python manage.py test survey 
Creating test database... 
Creating table auth_permission 
Creating table auth_group 
Creating table auth_user 
Creating table auth_message 
Creating table django_content_type 
Creating table django_session 
Creating table django_site 
Creating table django_admin_log 
Installing index for auth.Permission model 
Installing index for auth.Message model 
Installing index for admin.LogEntry model 
.. 
---------------------------------------------------------------------- 
Ran 2 tests in 0.039s 

OK 
Destroying test database... 

There—Ran 2 tests looks right for our sample tests.py file. But what about all those messages about tables being created and indexes being installed? Why were the tables for these applications created when their tests were not going to be run? The reason for this is that the test runner does not know what dependencies may exist between the application(s) that are going to be tested and others listed in INSTALLED_APPS that are not going to be tested.

For example, our survey application could have a model with a ForeignKey to the django.contrib.auth User model, and tests for the survey application may rely on being able to add and query User entries. This would not work if the test runner neglected to create tables for the applications excluded from testing. Therefore, the test runner creates the tables for all applications listed in INSTALLED_APPS, even those for which tests are not going to be run.
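
To make that concrete, such a model might look like the following sketch (purely hypothetical; we have not actually added anything to survey/models.py yet):

from django.db import models
from django.contrib.auth.models import User

class Survey(models.Model):
    title = models.CharField(max_length=100)
    # A relationship like this is why the auth tables must exist,
    # even when only the survey application's tests are run.
    created_by = models.ForeignKey(User)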

We now know how to run tests, how to limit the testing to just the application(s) we are interested in, and what a successful test run looks like. But, what about test failures? We're likely to encounter a fair number of those in real work, so it would be good to make sure we understand the test output when they occur. In the next section, then, we will introduce some deliberate breakage so that we can explore what failures look like and ensure that when we encounter real ones, we will know how to properly interpret what the test run is reporting.

Breaking things on purpose


Let's start by introducing a single, simple failure. Change the unit test to expect that adding 1 + 1 will result in 3 instead of 2. That is, change the single statement in the unit test to be: self.failUnlessEqual(1 + 1, 3).

Now when we run the tests, we will get a failure:

kmt@lbox:/dj_projects/marketr$ python manage.py test
Creating test database... 
Creating table auth_permission 
Creating table auth_group 
Creating table auth_user 
Creating table auth_message 
Creating table django_content_type 
Creating table django_session 
Creating table django_site 
Creating table django_admin_log 
Installing index for auth.Permission model
Installing index for auth.Message model 
Installing index for admin.LogEntry model 
...........................F.......
====================================================================== 
FAIL: test_basic_addition (survey.tests.SimpleTest) 
---------------------------------------------------------------------- 
Traceback (most recent call last): 
  File "/dj_projects/marketr/survey/tests.py", line 15, in test_basic_addition 
    self.failUnlessEqual(1 + 1, 3) 
AssertionError: 2 != 3 

---------------------------------------------------------------------- 
Ran 35 tests in 2.759s 

FAILED (failures=1) 
Destroying test database...

That looks pretty straightforward. The failure has produced a block of output starting with a line of equal signs and then the specifics of the test that has failed. The failing method is identified, as well as the class containing it. There is a Traceback that shows the exact line of code that has generated the failure, and the AssertionError shows details of the cause of the failure.

Notice the line above the equal signs—it contains a bunch of dots and one F. What does that mean? This is a line we overlooked in the earlier test output listings. If you go back and look at them now, you'll see there has always been a line with some number of dots after the last Installing index message. This line is generated as the tests are run, and what is printed depends on the test results: an F means a test failed, while a dot means a test passed. When there are enough tests that they take a while to run, this real-time progress update can be useful for getting a sense of how the run is going while it is in progress.

Finally at the end of the test output, we see FAILED (failures=1) instead of the OK we had seen previously. Any test failures make the overall test run outcome a failure instead of a success.

Next, let's see what a failing doctest looks like. If we restore the unit test to its original form and change the doctest to expect the Python interpreter to respond True to 1 + 1 == 3, running the tests (restricting the tests to only the survey application this time) will then produce this output:

kmt@lbox:/dj_projects/marketr$ python manage.py test survey 
Creating test database... 
Creating table auth_permission 
Creating table auth_group 
Creating table auth_user 
Creating table auth_message 
Creating table django_content_type 
Creating table django_session 
Creating table django_site 
Creating table django_admin_log 
Installing index for auth.Permission model 
Installing index for auth.Message model 
Installing index for admin.LogEntry model 
.F 
====================================================================== 
FAIL: Doctest: survey.tests.__test__.doctest 
---------------------------------------------------------------------- 
Traceback (most recent call last): 
  File "/usr/lib/python2.5/site-packages/django/test/_doctest.py", line 2180, in runTest 
    raise self.failureException(self.format_failure(new.getvalue())) 
AssertionError: Failed doctest test for survey.tests.__test__.doctest 
  File "/dj_projects/marketr/survey/tests.py", line unknown line number, in doctest 

---------------------------------------------------------------------- 
File "/dj_projects/marketr/survey/tests.py", line ?, in survey.tests.__test__.doctest 
Failed example: 
    1 + 1 == 3 
Expected: 
    True 
Got: 
    False 


---------------------------------------------------------------------- 
Ran 2 tests in 0.054s 

FAILED (failures=1) 
Destroying test database... 

The output from the failing doctest is a little more verbose and a bit less straightforward to interpret than the unit test failure. The failing doctest is identified as survey.tests.__test__.doctest—this means the key doctest in the __test__ dictionary defined within the survey/tests.py file. The Traceback portion of the output is not as useful as it was in the unit test case as the AssertionError simply notes that the doctest failed. Fortunately, details of what caused the failure are then provided, and you can see the content of the line that caused the failure, what output was expected, and what output was actually produced by executing the failing line.

Note, though, that the test runner does not pinpoint the line number within tests.py where the failure occurred. It reports unknown line number and line ? in different portions of the output. Is this a general problem with doctests or perhaps a result of the way in which this particular doctest is defined, as part of the __test__ dictionary? We can answer that question by putting a test in the docstring at the top of tests.py. Let's restore the sample doctest to its original state and change the top of the file to look like this:

""" 
This file demonstrates two different styles of tests (one doctest and one unittest). These will both pass when you run "manage.py test". 

Replace these with more appropriate tests for your application. 

>>> 1 + 1 == 3 
True
""" 

Then when we run the tests we get:

kmt@lbox:/dj_projects/marketr$ python manage.py test survey 
Creating test database... 
Creating table auth_permission 
Creating table auth_group 
Creating table auth_user 
Creating table auth_message 
Creating table django_content_type 
Creating table django_session 
Creating table django_site 
Creating table django_admin_log 
Installing index for auth.Permission model 
Installing index for auth.Message model 
Installing index for admin.LogEntry model 
.F. 
====================================================================== 
FAIL: Doctest: survey.tests 
---------------------------------------------------------------------- 
Traceback (most recent call last): 
  File "/usr/lib/python2.5/site-packages/django/test/_doctest.py", line 2180, in runTest 
    raise self.failureException(self.format_failure(new.getvalue())) 
AssertionError: Failed doctest test for survey.tests 
  File "/dj_projects/marketr/survey/tests.py", line 0, in tests 

---------------------------------------------------------------------- 
File "/dj_projects/marketr/survey/tests.py", line 7, in survey.tests 
Failed example: 
    1 + 1 == 3 
Expected: 
    True 
Got: 
    False 


---------------------------------------------------------------------- 
Ran 3 tests in 0.052s 

FAILED (failures=1) 
Destroying test database... 

Here line numbers are provided. The Traceback portion apparently identifies the line above the line where the docstring containing the failing test line begins (the docstring starts on line 1 while the traceback reports line 0). The detailed failure output identifies the actual line in the file that causes the failure, in this case line 7.

The inability to pinpoint line numbers is thus a side-effect of defining the doctest within the __test__ dictionary. While it doesn't cause much of a problem here, as it is trivial to see what line is causing the problem in our simple test, it's something to keep in mind when writing more substantial doctests to be placed in the __test__ dictionary. If multiple lines in the test are identical and one of them causes a failure, it may be difficult to identify which exact line is causing the problem, as the failure output won't identify the specific line number where the failure occurred.

So far all of the mistakes we have introduced into the sample tests have involved expected output not matching actual results. These are reported as test failures. In addition to test failures, we may sometimes encounter test errors. These are described next.

Test errors versus test failures


To see what a test error looks like, let's remove the failing doctest introduced in the previous section and introduce a different kind of mistake into our sample unit test. Let's assume that instead of wanting to test that 1 + 1 equals the literal 2, we want to test that it equals the result of a function, sum_args, that is supposed to return the sum of its arguments. But we're going to make a mistake and forget to import that function. So change self.failUnlessEqual to:

self.failUnlessEqual(1 + 1, sum_args(1, 1))

Now when the tests are run we see:

kmt@lbox:/dj_projects/marketr$ python manage.py test survey 
Creating test database... 
Creating table auth_permission 
Creating table auth_group 
Creating table auth_user 
Creating table auth_message 
Creating table django_content_type 
Creating table django_session 
Creating table django_site 
Creating table django_admin_log 
Installing index for auth.Permission model 
Installing index for auth.Message model 
Installing index for admin.LogEntry model 
E. 
====================================================================== 
ERROR: test_basic_addition (survey.tests.SimpleTest) 
---------------------------------------------------------------------- 
Traceback (most recent call last): 
  File "/dj_projects/marketr/survey/tests.py", line 15, in test_basic_addition 
    self.failUnlessEqual(1 + 1, sum_args(1, 1)) 
NameError: global name 'sum_args' is not defined 

---------------------------------------------------------------------- 
Ran 2 tests in 0.041s 

FAILED (errors=1) 
Destroying test database... 

The test runner encountered an exception before it even got to the point where it could compare 1 + 1 to the return value of sum_args, as sum_args was not imported. In this case, the error is in the test itself, but it would still have been reported as an error, not a failure, if the code in sum_args was what caused a problem. Failures mean actual results didn't match what was expected, whereas errors mean some other problem (exception) was encountered during the test run. Errors may imply a mistake in the test itself, but don't necessarily have to imply that.
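
For completeness, here is a sketch of what the intended code might have looked like. The sum_args function and its placement in a survey/utils.py module are hypothetical, not something generated for us:

# survey/utils.py (hypothetical)
def sum_args(*args):
    """Return the sum of all positional arguments."""
    return sum(args)

With that in place, adding the forgotten import at the top of tests.py would let the assertion run without error:

from survey.utils import sum_args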

Note that a similar error made in a doctest is reported as a failure, not an error. For example, we can change the doctest 1 + 1 line to:

>>> 1 + 1 == sum_args(1, 1) 

If we then run the tests, the output will be:

kmt@lbox:/dj_projects/marketr$ python manage.py test survey 
Creating test database... 
Creating table auth_permission 
Creating table auth_group 
Creating table auth_user 
Creating table auth_message 
Creating table django_content_type 
Creating table django_session 
Creating table django_site 
Creating table django_admin_log 
Installing index for auth.Permission model 
Installing index for auth.Message model 
Installing index for admin.LogEntry model 
EF 
====================================================================== 
ERROR: test_basic_addition (survey.tests.SimpleTest) 
---------------------------------------------------------------------- 
Traceback (most recent call last): 
  File "/dj_projects/marketr/survey/tests.py", line 15, in test_basic_addition 
    self.failUnlessEqual(1 + 1, sum_args(1, 1)) 
NameError: global name 'sum_args' is not defined 

====================================================================== 
FAIL: Doctest: survey.tests.__test__.doctest 
---------------------------------------------------------------------- 
Traceback (most recent call last): 
  File "/usr/lib/python2.5/site-packages/django/test/_doctest.py", line 2180, in runTest 
    raise self.failureException(self.format_failure(new.getvalue())) 
AssertionError: Failed doctest test for survey.tests.__test__.doctest 
 File "/dj_projects/marketr/survey/tests.py", line unknown line number, in doctest 

---------------------------------------------------------------------- 
File "/dj_projects/marketr/survey/tests.py", line ?, in survey.tests.__test__.doctest 
Failed example: 
    1 + 1 == sum_args(1, 1) 
Exception raised: 
    Traceback (most recent call last): 
      File "/usr/lib/python2.5/site-packages/django/test/_doctest.py", line 1267, in __run 
        compileflags, 1) in test.globs 
      File "<doctest survey.tests.__test__.doctest[0]>", line 1, in <module> 
        1 + 1 == sum_args(1, 1) 
    NameError: name 'sum_args' is not defined 


---------------------------------------------------------------------- 
Ran 2 tests in 0.044s 

FAILED (failures=1, errors=1) 
Destroying test database... 

Thus, the error versus failure distinction made for unit tests does not necessarily apply to doctests. So, if your tests include doctests, the summary of failure and error counts printed at the end doesn't necessarily reflect how many tests produced unexpected results (unit test failure count) or had some other error (unit test error count). However, in any case, neither failures nor errors are desired. The ultimate goal is to have zero for both, so if the difference between them is a bit fuzzy at times that's not such a big deal. It can be useful though, to understand under what circumstances one is reported instead of the other.

We have now seen how to run tests, and what the results look like for both overall success and a few failures and errors. Next we will examine the various command line options supported by the manage.py test command.

Command line options for running tests


Beyond specifying the exact applications to test on the command line, what other options are there for controlling the behavior of manage.py test? The easiest way to find out is to try running the command with the option --help:

kmt@lbox:/dj_projects/marketr$ python manage.py test --help
Usage: manage.py test [options] [appname ...]

Runs the test suite for the specified applications, or the entire site if no apps are specified.

Options:
  -v VERBOSITY, --verbosity=VERBOSITY
                        Verbosity level; 0=minimal output, 1=normal output,
                        2=all output
  --settings=SETTINGS   The Python path to a settings module, e.g.
                        "myproject.settings.main". If this isn't provided, the
                        DJANGO_SETTINGS_MODULE environment variable will be
                        used.
  --pythonpath=PYTHONPATH
                        A directory to add to the Python path, e.g.
                        "/home/djangoprojects/myproject".
  --traceback           Print traceback on exception
  --noinput             Tells Django to NOT prompt the user for input of any
                        kind.
  --version             show program's version number and exit
  -h, --help            show this help message and exit

Let's consider each of these in turn (excepting help, as we've already seen what it does):

Verbosity

Verbosity is a numeric value between 0 and 2. It controls how much output the tests produce. The default value is 1, so the output we have seen so far corresponds to specifying -v 1 or --verbosity=1. Setting verbosity to 0 suppresses all of the messages about creating the test database and tables, but not summary, failure, or error information. If we correct the last doctest failure introduced in the previous section and re-run the tests specifying -v0, we will see:

kmt@lbox:/dj_projects/marketr$ python manage.py test survey -v0 
====================================================================== 
ERROR: test_basic_addition (survey.tests.SimpleTest) 
---------------------------------------------------------------------- 
Traceback (most recent call last): 
  File "/dj_projects/marketr/survey/tests.py", line 15, in test_basic_addition 
    self.failUnlessEqual(1 + 1, sum_args(1, 1)) 
NameError: global name 'sum_args' is not defined 

---------------------------------------------------------------------- 
Ran 2 tests in 0.008s 

FAILED (errors=1) 

Setting verbosity to 2 produces a great deal more output. If we fix this remaining error and run the tests with verbosity set to its highest level, we will see:

kmt@lbox:/dj_projects/marketr$ python manage.py test survey --verbosity=2 
Creating test database... 
Processing auth.Permission model 
Creating table auth_permission 
Processing auth.Group model 
Creating table auth_group 

[...more snipped...]

Creating many-to-many tables for auth.Group model 
Creating many-to-many tables for auth.User model 
Running post-sync handlers for application auth 
Adding permission 'auth | permission | Can add permission' 
Adding permission 'auth | permission | Can change permission' 

[...more snipped...]

No custom SQL for auth.Permission model 
No custom SQL for auth.Group model 

[...more snipped...]

Installing index for auth.Permission model 
Installing index for auth.Message model 
Installing index for admin.LogEntry model 
Loading 'initial_data' fixtures... 
Checking '/usr/lib/python2.5/site-packages/django/contrib/auth/fixtures' for fixtures... 
Trying '/usr/lib/python2.5/site-packages/django/contrib/auth/fixtures' for initial_data.xml fixture 'initial_data'... 
No xml fixture 'initial_data' in '/usr/lib/python2.5/site-packages/django/contrib/auth/fixtures'. 

[....much more snipped...]
No fixtures found. 
test_basic_addition (survey.tests.SimpleTest) ... ok 
Doctest: survey.tests.__test__.doctest ... ok 

---------------------------------------------------------------------- 
Ran 2 tests in 0.004s 

OK 
Destroying test database...

As you can see, at this level of verbosity the command reports in excruciating detail all of what it is doing to set up the test database. In addition to the creation of database tables and indexes that we saw earlier, we now see that the database setup phase includes:

  1. Running post-syncdb signal handlers. The django.contrib.auth application, for example, uses this signal to automatically add permissions for models as each application is installed. Thus you see messages about permissions being created as the post-syncdb signal is sent for each application listed in INSTALLED_APPS.

  2. Running custom SQL for each model that has been created in the database. Based on the output, it does not look like any of the applications in INSTALLED_APPS use custom SQL.

  3. Loading initial_data fixtures. Initial data fixtures are a way to automatically pre-populate the database with some constant data. None of the applications we have listed in INSTALLED_APPS make use of this feature, but a great deal of output is produced as the test runner looks for initial data fixtures, which may be found under any of several different names. There are messages for each possible file that is checked and for whether anything was found. This output might come in handy at some point if we run into trouble with the test runner finding an initial data fixture (we'll cover fixtures in detail in Chapter 3; a minimal sketch of one appears after this list), but for now this output is not very interesting.
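
A minimal initial data fixture is just a file of serialized model instances that gets loaded automatically when the database is set up. For a hypothetical Survey model with a title field, a survey/fixtures/initial_data.json file might contain:

[
    {
        "model": "survey.survey",
        "pk": 1,
        "fields": {
            "title": "Customer Satisfaction"
        }
    }
]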

Once the test runner finishes initializing the database, it settles down to running the tests. At verbosity level 2, the line of dots, Fs, and Es we saw previously is replaced by a more detailed report of each test as it is run. The name of the test is printed, followed by three dots, then the test result, which will either be ok, ERROR, or FAIL. If there are any errors or failures, the detailed information about why they occurred will be printed at the end of the test run. So as you watch a long test run proceeding with verbosity set to 2, you will be able to see what tests are running into problems, but you will not get the details of the reasons why they occurred until the run completes.

Settings

You can pass the settings option to the test command to specify a settings file to use instead of the project's default. This can come in handy if, for example, you want to run tests using a database that's different from the one you normally use (either for speed of testing or to verify that your code runs correctly on different databases).
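
For example, to run the survey tests against a hypothetical alternate settings module named test_settings.py, you could run something like:

python manage.py test survey --settings=test_settings

The value is a Python import path rather than a file name, so its exact form depends on where the module lives relative to the Python path.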

Note the help text for this option states that the DJANGO_SETTINGS_MODULE environment variable will be used to locate the settings file if the settings option is not specified on the command line. This is only accurate when the test command is being run via the django-admin.py utility. When using manage.py test, the manage.py utility takes care of setting this environment variable to specify the settings.py file in the current directory.

Pythonpath

This option allows you to append an additional directory to the Python path used during the test run. It's primarily of use when using django-admin.py, where it is often necessary to add the project path to the standard Python path. The manage.py utility takes care of adding the project path to the Python path, so this option is not generally needed when using manage.py test.
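
For instance, running the tests via django-admin.py typically requires supplying both options yourself. Using this chapter's paths, that might look like:

django-admin.py test survey --settings=settings --pythonpath=/dj_projects/marketr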

Traceback

This option is not actually used by the test command. It is inherited as one of the default options supported by all django-admin.py (and manage.py) commands, but the test command never checks for it. Thus you can specify it, but it will have no effect.

Noinput

This option causes the test runner to not prompt for user input, which raises the question: When would the test runner require user input? We haven't encountered that so far. The test runner prompts for input during the test database creation if a database with the test database name already exists. For example, if you hit Ctrl + C during a test run, the test database may not be destroyed and you may encounter a message like this the next time you attempt to run tests:

kmt@lbox:/dj_projects/marketr$ python manage.py test 
Creating test database... 
Got an error creating the test database: (1007, "Can't create database 'test_marketr'; database exists") 
Type 'yes' if you would like to try deleting the test database 'test_marketr', or 'no' to cancel: 

If --noinput is passed on the command line, the prompt is not printed and the test runner proceeds as if the user had entered 'yes' in response. This is useful if you want to run the tests from an unattended script and ensure that the script does not hang while waiting for user input that will never be entered.
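
For example, an unattended script, such as a nightly test job, might invoke the tests as follows, so that a leftover test database from an interrupted run is deleted without stopping to ask:

python manage.py test --noinput -v0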

Version

This option reports the version of Django in use and then exits. Thus when using --version with manage.py or django-admin.py, you do not actually need to specify a subcommand such as test. In fact, due to a bug in the way Django processes command options, at the time of writing this book, if you do specify both --version and a subcommand, the version will get printed twice. That will likely get fixed at some point.

Summary


The overview of Django testing is now complete. In this chapter, we:

  • Looked in detail at the sample tests.py file generated when a new Django application is created

  • Learned how to run the provided sample tests

  • Experimented with introducing deliberate mistakes into the tests in order to see and understand what information is provided when tests fail or encounter errors

  • Finally, we examined all of the command line options that may be used with manage.py test

We will continue to build on this knowledge in the next chapter, as we focus on doctests in depth.

Key benefits

  • Develop Django applications quickly with fewer bugs through effective use of automated testing and debugging tools
  • Ensure your code is accurate and stable throughout development and production by using Django's test framework
  • Understand the working of code and its generated output with the help of debugging tools
  • Packed with detailed working examples that illustrate the techniques and tools for debugging

Description

Bugs are a time-consuming burden during software development. Django's built-in test framework and debugging support help lessen this burden. This book will teach you quick and efficient techniques for using Django and Python tools to eradicate bugs and ensure your Django application works correctly. It will walk you step by step through the development of a complete sample Django application. You will learn how best to test and debug models, views, URL configuration, templates, and template tags, and how to integrate with and make use of the rich external environment of test and debugging tools for Python and Django applications.

The book starts with a basic overview of testing and highlights areas to look out for while testing. You will learn about the different kinds of tests available, the pros and cons of each, and the details of the test extensions provided by Django that simplify the task of testing Django applications. You will also see an illustration of how external tools that provide even more sophisticated testing features can be integrated into Django's framework. On the debugging front, the book illustrates how to interpret the extensive debugging information provided by Django's debug error pages, and how to utilize logging and other external tools to learn what code is doing.

Who is this book for?

If you are a Django application developer who wants to quickly create robust applications that work well and are easy to maintain in the long term, this book is for you. It is the right pick if you want to be smartly tutored to make the best use of Django's rich testing and debugging support and make testing an effortless task. Basic knowledge of Python, Django, and the overall structure of a database-driven web application is assumed. However, the code samples are fully explained, so even beginners who are new to the area can learn a great deal from this book.

What you will learn

  • Build a complete application in manageable pieces that can be written, tested, and debugged individually
  • Come to grips with the nuances of testing and the pros and cons of each type of test
  • Simplify the task of testing web applications by using specific test extensions provided by Django
  • Integrate other test tools into Django's framework to obtain test coverage information and more easily test forms
  • Analyze the copious debug information provided by Django's debug error pages
  • Write your own add-on debugging aids
  • Easily acquire important information with the help of external tools such as the Django debug toolbar
  • Decipher code behavior by using logging, and effectively debug problems in production when debug error pages are not available
  • Learn what your code and other library support code actually does by skilled use of a debugger
  • Tackle problems external to your code with available fixes
  • Debug common problems that arise during the move from development to production

Product Details

Publication date: Apr 19, 2010
Length: 436 pages
Edition: 1st
Language: English
ISBN-13: 9781847197566
Vendor: Django

Table of Contents

1. Django Testing Overview
2. Does This Code Work? Doctests in Depth
3. Testing 1, 2, 3: Basic Unit Testing
4. Getting Fancier: Django Unit Test Extensions
5. Filling in the Blanks: Integrating Django and Other Test Tools
6. Django Debugging Overview
7. When the Wheels Fall Off: Understanding a Django Debug Page
8. When Problems Hide: Getting More Information
9. When You Don't Even Know What to Log: Using Debuggers
10. When All Else Fails: Getting Outside Help
11. When it's Time to Go Live: Moving to Production
