What happens when you run manage.py test?
You run your tests with the command manage.py test, but do you know what's going on under the hood? How does the test runner work, and how does it put those dots, E's, and F's on your screen?
As you learn how Django works, you discover ways to customize it, such as changing cookies, setting global headers, and logging requests. Likewise, once you understand how your tests run, you can customize the process, for example to load tests in a different order, configure test settings without a separate file, or block outgoing HTTP requests.
In this article, we'll work through customizing your tests' output, changing the display of test results from dots and letters to emoji.
But before we write any code, let's walk through the testing process itself.
Test output
Let's start by looking at the test output, using a project with a single empty test as a basis:
from django.test import TestCase


class ExampleTests(TestCase):
    def test_one(self):
        pass
When we run the tests, we get familiar output:
$ python manage.py test
Creating test database for alias 'default'...
System check identified no issues (0 silenced).
.
----------------------------------------------------------------------
Ran 1 test in 0.001s
OK
Destroying test database for alias 'default'...
To understand what is happening, we can ask for more detail by adding the -v 3 flag:
$ python manage.py test -v 3
Creating test database for alias 'default' ('file:memorydb_default?mode=memory&cache=shared')...
Operations to perform:
Synchronize unmigrated apps: core
Apply all migrations: (none)
Synchronizing apps without migrations:
Creating tables...
Running deferred SQL...
Running migrations:
No migrations to apply.
System check identified no issues (0 silenced).
test_one (example.core.tests.test_example.ExampleTests) ... ok
----------------------------------------------------------------------
Ran 1 test in 0.004s
OK
Destroying test database for alias 'default' ('file:memorydb_default?mode=memory&cache=shared')...
Great, plenty of detail! Now let's work through it.
On the first line we see the message "Creating test database…": this is how Django reports the creation of the test database. If your project uses multiple databases, you will see one line for each.
I'm using SQLite in this project, so Django has automatically appended mode=memory to the database name, making it an in-memory database. This makes database operations about ten times faster. Other databases, such as PostgreSQL, don't have such a mode, but there are other ways to run them in memory.
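If you ever want to change this behaviour, the test database name can be controlled through the TEST dictionary of the DATABASES setting. A minimal sketch with illustrative file names; leaving TEST["NAME"] unset keeps the default in-memory database for SQLite:

# settings.py (sketch): file names are illustrative
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": "db.sqlite3",
        "TEST": {
            # Naming a file here opts back into an on-disk test database.
            "NAME": "test_db.sqlite3",
        },
    },
}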
The second line, "Operations to perform", and the next few lines are the output of the migrate command being run against the test databases. It is identical to the output we would get from running manage.py migrate against an empty database. This is a small project without migrations, but if there were any, we would see one line of output per migration.
Next comes the line "System check identified no issues". It comes from Django's system check framework, which runs a series of "pre-flight checks" to make sure your project is configured correctly. You can run the checks on their own with manage.py check, and they also run automatically as part of most management commands. With tests, though, they are deferred until the test databases are ready, because some checks use database connections.
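Run on its own, the command prints the same summary we saw in the test output:

$ python manage.py check
System check identified no issues (0 silenced).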
You can write your own checks to detect configuration errors in your project. Since checks run earlier than tests, it sometimes makes sense to write a check rather than a separate test. I'd love to talk about this too, but it deserves an article of its own.
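As a small taste, here is a minimal sketch of a custom check using Django's checks framework; the check itself (warning about DEBUG) and its id are made up for illustration:

# example/checks.py (sketch): import this module somewhere that runs at
# startup, e.g. an AppConfig.ready() method, so the check gets registered.
from django.conf import settings
from django.core import checks


@checks.register(checks.Tags.security)
def debug_disabled(app_configs, **kwargs):
    errors = []
    if settings.DEBUG:
        errors.append(
            checks.Warning(
                "DEBUG should be disabled in deployed environments.",
                id="example.W001",
            )
        )
    return errors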
The next lines are about our tests. By default the test runner outputs one character per test, but at higher verbosity it prints a separate line per test. Here we have just one test, test_one, and when it finished, the test runner appended "ok" to its line.
The end of the run is marked with a line of dashes. If there had been any errors or failures, they would be shown before those dashes. They are followed by a short summary of the run and "OK", telling us the tests passed.
The last line tells us that the test database has been dropped.
As a result, we get the following sequence of steps:
Create the test databases.
Migrate the databases.
Run the system checks.
Run the tests.
Report the number of tests run and whether they passed or failed.
Destroy the test databases.
Let’s see what components within Django are responsible for these steps.
Django and unittest
As you probably already know, Django's testing framework extends the unittest framework from the Python standard library. Each component responsible for the steps above is either built into unittest or part of Django's extensions to it. We can picture this as a map, with unittest components on one side and Django components on the other.
We can find the components of each side by looking at the code.
The test management command
The first thing to look at is the test management command, which Django finds and executes when you run manage.py test. It lives in django.core.management.commands.test.
As management commands go, it is rather short: less than 100 lines. Its handle() method is responsible for handing the work over to a TestRunner class. Simplified to its three main lines, it looks like this:
def handle(self, *test_labels, **options):
    TestRunner = get_runner(settings, options['testrunner'])
    ...
    test_runner = TestRunner(**options)
    ...
    failures = test_runner.run_tests(test_labels)
    ...
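These same pieces can be used outside of manage.py too. A minimal sketch of resolving and running the configured runner directly, roughly as Django's documentation suggests (the test label is illustrative, and DJANGO_SETTINGS_MODULE must already be set):

import django
from django.conf import settings
from django.test.utils import get_runner

django.setup()
# get_runner() resolves settings.TEST_RUNNER, DiscoverRunner by default.
TestRunner = get_runner(settings)
test_runner = TestRunner(verbosity=2)
failures = test_runner.run_tests(["example.core.tests"])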
So what is this TestRunner class? It is the Django component that coordinates the testing process. It's customizable, but the default, and the only one Django itself ships, is django.test.runner.DiscoverRunner. Let's look at it next.
DiscoverRunner class
DiscoverRunner is the main coordinator of the testing process. It handles adding extra arguments to the management command, creating and calling its sub-components, and doing some environment setup.
It starts something like this:
class DiscoverRunner:
    """A Django test runner that uses unittest2 test discovery."""

    test_suite = unittest.TestSuite
    parallel_test_suite = ParallelTestSuite
    test_runner = unittest.TextTestRunner
    test_loader = unittest.defaultTestLoader
These class attributes point to other classes that perform different steps in the testing process. As you can see, most of them are unittest components.
Note that one of them is called test_runner, so we have two different things called a "test runner": DiscoverRunner from Django and TextTestRunner from unittest. DiscoverRunner does much more than TextTestRunner and has a different interface. Perhaps Django could have named DiscoverRunner something else, such as TestCoordinator, but it's a bit late for that now.
The main flow of DiscoverRunner lives in its run_tests() method. With a bunch of details removed, run_tests() looks something like this:
def run_tests(self, test_labels, extra_tests=None, **kwargs):
    self.setup_test_environment()
    suite = self.build_suite(test_labels, extra_tests)
    databases = self.get_databases(suite)
    old_config = self.setup_databases(aliases=databases)
    self.run_checks(databases)
    result = self.run_suite(suite)
    self.teardown_databases(old_config)
    self.teardown_test_environment()
    return self.suite_result(suite, result)
There are very few steps here. Many of the methods follow the steps in the list above:
setup_databases() creates the test databases, but only those needed by the selected tests, as filtered by get_databases(). So if you only run SimpleTestCases, which don't use databases, Django won't create any. Inside this method the databases are created and the migrate command is run.
run_checks() runs the system checks.
run_suite() runs the test suite, including all of its output.
teardown_databases() destroys the test databases.
And a couple more methods are worth mentioning (a sketch after this list uses the first pair):
setup_test_environment() and teardown_test_environment() set up and undo some settings, such as switching to the in-memory email backend.
suite_result() returns the number of failures back to the test management command.
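As a preview of the customization we'll get to shortly, here's a rough sketch of the "block outgoing HTTP requests" idea from the introduction, hooking into those two environment methods. Patching socket.socket is deliberately crude and would also break anything that legitimately needs a socket, such as LiveServerTestCase:

import socket

from django.test.runner import DiscoverRunner


class NoNetworkTestRunner(DiscoverRunner):
    def setup_test_environment(self, **kwargs):
        super().setup_test_environment(**kwargs)
        # Replace socket.socket so any attempt to open a connection fails loudly.
        self._real_socket = socket.socket

        def guarded_socket(*args, **kw):
            raise RuntimeError("Outgoing network access is blocked during tests")

        socket.socket = guarded_socket

    def teardown_test_environment(self, **kwargs):
        socket.socket = self._real_socket
        super().teardown_test_environment(**kwargs)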
All of these methods are worth reading to understand how the testing process is set up, but they are all Django code. The two methods that hand off to unittest components are build_suite() and run_suite(). Let's look at each of them.
build_suite()
build_suite() finds the tests to run and collects them into a "suite". It's a long method, but simplified it looks something like this:
def build_suite(self, test_labels=None, extra_tests=None, **kwargs):
    suite = self.test_suite()
    test_labels = test_labels or ['.']
    for label in test_labels:
        tests = self.test_loader.loadTestsFromName(label)
        suite.addTests(tests)
    if self.parallel > 1:
        suite = self.parallel_test_suite(suite, self.parallel, self.failfast)
    return suite
This method uses three of the four classes that, as we have seen, are accessed by DiscoverRunner:
test_suite is the unittest component that acts as a container for the tests to run.
parallel_test_suite is a wrapper around the test suite, used by Django's parallel testing feature.
test_loader is the unittest component that finds test modules on disk and loads them into a suite (see the sketch after this list).
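As a sketch of how these class attributes can be swapped out, here's a loader override that runs each test case's methods in reverse alphabetical order, one variant of the "load tests in a different order" idea from the introduction (the reversal is purely illustrative):

import unittest

from django.test.runner import DiscoverRunner


class ReversedTestLoader(unittest.TestLoader):
    def getTestCaseNames(self, testCaseClass):
        # unittest sorts test method names alphabetically; reverse that order.
        return list(reversed(super().getTestCaseNames(testCaseClass)))


class ReversedOrderRunner(DiscoverRunner):
    test_loader = ReversedTestLoader()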
run_suite()
The other DiscoverRunner method worth looking at is run_suite(). It's short enough that we don't need to simplify it:
def run_suite(self, suite, **kwargs):
    kwargs = self.get_test_runner_kwargs()
    runner = self.test_runner(**kwargs)
    return runner.run(suite)
Its only job is to create a test runner and tell it to run the assembled test suite. It uses the last of the unittest components referenced in the class attributes, unittest.TextTestRunner: the default test runner, which outputs results as text, as opposed to, say, an XML file for your CI system.
We will finish our little investigation by looking inside TextTestRunner.
TextTestRunner class
This unittest component takes a test case or suite and runs it. It starts like this:
class TextTestRunner(object):
    """A test runner class that displays results in textual form.

    It prints out the names of tests as they are run, errors as they
    occur, and a summary of the results at the end of the test run.
    """
    resultclass = TextTestResult

    def __init__(self, ..., resultclass=None, ...):
        ...
Like DiscoverRunner, it uses a class attribute to point to another class. TextTestResult is the class responsible for the default text output. Unlike DiscoverRunner's class references, though, we can override resultclass by passing an alternative to TextTestRunner.__init__().
We now know enough to customize the testing process. But first, let's finish our little investigation by updating the map.
Map
Now we can expand the map to show the classes we found.
Of course, we could add more detail, such as the contents of several important DiscoverRunner methods, but what we've found is already enough to implement many useful customizations.
How to customize
Django offers two ways to customize the test run:
Override the test management command with a custom subclass.
Override the DiscoverRunner class by pointing the TEST_RUNNER setting at a custom subclass.
Since the test command is such a thin wrapper, we'll spend most of our time subclassing DiscoverRunner. Because DiscoverRunner refers to its unittest components through class attributes, we can swap them out by overriding those attributes in our own subclass.
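For completeness, the first option can look something like this sketch; the app name example and the print call are illustrative. Because commands defined in installed apps take precedence over Django's built-in ones, dropping this file into one of your apps overrides the default test command:

# example/management/commands/test.py
from django.core.management.commands.test import Command as TestCommand


class Command(TestCommand):
    def handle(self, *test_labels, **options):
        print("Running tests with a customized command")
        super().handle(*test_labels, **options)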
Super Fast Test Runner
As a basic example, let's say we want to skip running the tests entirely and simply report that everything passed. We can do this by creating a DiscoverRunner subclass whose run_tests() method doesn't call super():
# example/test.py
from django.test.runner import DiscoverRunner


class SuperFastTestRunner(DiscoverRunner):
    def run_tests(self, *args, **kwargs):
        print("All tests passed! A+")
        failures = 0
        return failures
Then we point to it in our settings file:
TEST_RUNNER = "example.test.SuperFastTestRunner"
Now running manage.py test gives us a result in record time!
$ python manage.py test
All tests passed! A+
Great, very helpful!
Now let’s make it even more practical and move on to displaying test results as emoji!
Emoji output
We have already found that unittest's TextTestResult component is responsible for the output, and that DiscoverRunner can replace it by passing a resultclass to TextTestRunner.
Django already has options that swap in a different resultclass, for example the --debug-sql option, which prints the executed queries for failing tests.
DiscoverRunner.run_suite() creates the TextTestRunner with arguments from DiscoverRunner.get_test_runner_kwargs():
def get_test_runner_kwargs(self):
    return {
        'failfast': self.failfast,
        'resultclass': self.get_resultclass(),
        'verbosity': self.verbosity,
        'buffer': self.buffer,
    }
That, in turn, calls get_resultclass(), which returns a different class if either of two test command options was used (--debug-sql or --pdb):
def get_resultclass(self):
    if self.debug_sql:
        return DebugSQLTextTestResult
    elif self.pdb:
        return PDBDebugResult
If neither option is given, the method implicitly returns None, which tells TextTestRunner to fall back to its default resultclass. We can detect this None in our own subclass and substitute a TextTestResult subclass of our own:
class EmojiTestRunner(DiscoverRunner):
    def get_resultclass(self):
        klass = super().get_resultclass()
        if klass is None:
            return EmojiTestResult
        return klass
Our EmojiTestResult class extends TextTestResult and replaces the default "dots" output with emoji. It ends up fairly long, since there's a separate method for each type of result:
import unittest


class EmojiTestResult(unittest.TextTestResult):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # If the "dots" style was going to be used, show emoji instead
        self.emojis = self.dots
        self.dots = False

    def addSuccess(self, test):
        super().addSuccess(test)
        if self.emojis:
            self.stream.write('✅')
            self.stream.flush()

    def addError(self, test, err):
        super().addError(test, err)
        if self.emojis:
            self.stream.write('💥')
            self.stream.flush()

    def addFailure(self, test, err):
        super().addFailure(test, err)
        if self.emojis:
            self.stream.write('❌')
            self.stream.flush()

    def addSkip(self, test, reason):
        super().addSkip(test, reason)
        if self.emojis:
            self.stream.write("⏭")
            self.stream.flush()

    def addExpectedFailure(self, test, err):
        super().addExpectedFailure(test, err)
        if self.emojis:
            self.stream.write("❎")
            self.stream.flush()

    def addUnexpectedSuccess(self, test):
        super().addUnexpectedSuccess(test)
        if self.emojis:
            self.stream.write("✳️")
            self.stream.flush()

    def printErrors(self):
        # Start a new line after the emoji before printing error details
        if self.emojis:
            self.stream.writeln()
        super().printErrors()
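Assuming both classes live in the same example/test.py module as before, we point TEST_RUNNER at the new runner:

TEST_RUNNER = "example.test.EmojiTestRunner"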
With that in place, we can run the tests and see emoji:
$ python manage.py test
Creating test database for alias 'default'...
System check identified no issues (0 silenced).
💥❎❌⏭✅✅✅✳️
...
----------------------------------------------------------------------
Ran 8 tests in 0.003s
FAILED (failures=1, errors=1, skipped=1, expected failures=1, unexpected successes=1)
Destroying test database for alias 'default'...
Hooray!
Instead of a conclusion
After our spelunking, we've seen that the unittest architecture is relatively simple: we can swap in subclasses to change any behaviour of the test run.
This works well for project-specific customization, but it makes other people's customizations hard to reuse. Because the architecture is designed around inheritance rather than composition, combining customizations means using multiple inheritance across the web of classes, and whether that works depends on the quality of each implementation. This is largely why there is no plugin ecosystem for unittest.
I am only familiar with two libraries that provide custom DiscoverRunner subclasses:
unittest-xml-reporting provides XML output for your CI system.
django-slow-tests measures test execution times to find your slowest tests.
I haven't tried it, but combining the two probably wouldn't work, since they both override the result output process.
pytest, on the other hand, has a thriving ecosystem of more than 700 plugins. That's because its architecture uses composition, with hooks that work somewhat like Django's signals. Plugins register only for the hooks they need, and pytest calls each registered hook function at the appropriate point in the testing process. Much of pytest's built-in functionality is itself implemented as plugins.
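As a tiny illustration of that hook style, a plugin can be as small as a conftest.py defining a single hook function; reversing the collected tests here is arbitrary, just to show the shape:

# conftest.py: a minimal sketch of pytest's hook-based plugin style
def pytest_collection_modifyitems(config, items):
    # pytest calls this hook after collecting tests; plugins can reorder,
    # filter, or annotate the collected test items here.
    items.reverse()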
If you are interested in more detailed customization of the testing process, refer to pytest.
the end
Thank you for taking this journey with me. I hope you’ve learned something new about how Django runs your tests and how to customize them.