
XUnit Basics

The book has now been published and the content of this chapter has likely changed substantially.

About This Chapter

In the Test Automation Strategy narrative I introduced the "hard to change" decisions that we need to get right early in the project. This chapter serves two purposes. First, I introduce the xUnit terminology and diagramming notation used throughout this book. Second, I explain how the xUnit framework operates beneath the covers and why it was built that way. This can help the builder of a new Test Automation Framework (page X) understand how to port xUnit. It can also help test automaters understand how to use certain features of xUnit.

An Introduction to XUnit

The term xUnit is how we refer to any member of the family of Test Automation Frameworks used for automating Hand-Scripted Tests (see Scripted Test on page X) that share the common set of features described here. Most programming languages in widespread use today have at least one implementation of xUnit and Hand-Scripted Tests are usually automated using the same programming language as that used for building the system under test (SUT). Although this is not necessarily the case, it is usually much easier because our tests have easy access to the SUT API. By using a programming language with which the developers are familiar, the effort of learning how to automate Fully Automated Tests (see Goals of Test Automation on page X) is reduced. (See the sidebar Testing Stored Procs with JUnit (page X) for an example of using a testing framework in one language to test an SUT in another language.)

Common Features

Since most members of the xUnit family are implemented using object-oriented programming languages (OOPL), I will describe them first and then note where the non-OOPL members of the family differ.

All the members of the xUnit family implement a basic set of features. They all provide a way to:

- specify a test as a Test Method;
- specify the expected results within the Test Method in the form of calls to Assertion Methods;
- aggregate the tests into test suites that can be run as a single operation;
- run one or more tests to get a report on the results of the test run.

Many members of the family support Test Method Discovery (see Test Discovery on page X) so that we do not have to use Test Enumeration (page X) to manually add each Test Method we want to run to a test suite. Some members also support some form of Test Selection (page X) to run subsets of test methods based on some criteria.

The Bare Minimum

The bare minimum we need to understand about how xUnit operates is:

- how to define a test as a Test Method on a Testcase Class;
- how to build up the test fixture the test requires;
- how to collect the Test Methods into test suites;
- how to run the tests;
- how to interpret the test results.

Defining Tests

Each test is represented by a Test Method that implements a single Four-Phase Test (page X) by:

- setting up the test fixture (if required);
- exercising the SUT;
- verifying that the expected outcome has occurred;
- tearing down the test fixture (if required).
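The four phases (fixture setup, exercise, result verification, fixture teardown) can be seen in a single Test Method. The following framework-independent Ruby sketch marks each phase with a comment; the Flight class and the assert_equal helper are invented here for illustration — in a real xUnit member the assertion would be a call to one of the framework's Assertion Methods.

```ruby
# A framework-independent sketch of a Four-Phase Test.
# Flight and assert_equal are invented for illustration.
class Flight
  attr_reader :status
  def initialize; @status = :scheduled; end
  def cancel; @status = :cancelled; end
end

def assert_equal(expected, actual)
  raise "Expected #{expected.inspect}, got #{actual.inspect}" unless expected == actual
end

def test_cancel_sets_status
  # Phase 1: set up the test fixture
  flight = Flight.new
  # Phase 2: exercise the SUT
  flight.cancel
  # Phase 3: verify the expected outcome
  assert_equal(:cancelled, flight.status)
  # Phase 4: tear down (nothing to clean up for an in-memory fixture)
end

test_cancel_sets_status
```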




Fig. X: The static test structure as seen by a test automater.

The test automater sees only the static structure as they read or write tests. They write one Test Method with four distinct phases for each test in the Testcase Class. The Test Suite Factory is needed only when doing Test Enumeration. The runtime structure (shown greyed out) is left to their imagination.

The most common types of tests are the Simple Success Test (see Test Method), which verifies that the SUT has behaved correctly with valid inputs, and the Expected Exception Test (see Test Method), which verifies that the SUT raises an exception when used incorrectly. A special type of test, the Constructor Test (see Test Method), is used to verify that object constructor logic builds new objects correctly. There may be a need for both "simple success" and "expected exception" forms of the Constructor Test. The Test Methods that contain our test logic need to live somewhere, so we define them as methods of a Testcase Class. (Note that this class is called a test fixture in some variants of xUnit, probably because their creators assumed we would have a single Testcase Class per Fixture (page X).) We pass the name of the Testcase Class (or the module or assembly it resides in) to the Test Runner (page X) to run our tests.
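The test types just described can be sketched in a few lines of framework-independent Ruby; the Flight class and the assert helper are invented for illustration. Note how the same constructor is exercised in both its "simple success" and "expected exception" forms.

```ruby
# Sketches of the common test types (framework-independent;
# Flight and assert are invented for illustration).
class Flight
  attr_reader :number
  def initialize(number)
    raise ArgumentError, "number required" if number.nil?
    @number = number
  end
end

def assert(condition, message = "assertion failed")
  raise message unless condition
end

# Simple Success Test (here also a "simple success" Constructor Test):
# valid input, verify the resulting state.
def test_constructor_with_valid_number
  flight = Flight.new(42)
  assert(flight.number == 42)
end

# Expected Exception Test: invalid input, verify the SUT raises.
def test_constructor_with_nil_number
  begin
    Flight.new(nil)
    assert(false, "expected ArgumentError")
  rescue ArgumentError
    # expected; the test passes
  end
end

test_constructor_with_valid_number
test_constructor_with_nil_number
```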

What's a Fixture?

The test fixture is everything we need to have in place to exercise the SUT. Typically, this is at least an instance of the class whose method we are testing. It may also include other objects on which the SUT depends. Note that some members of the xUnit family call the Testcase Class the test fixture. This is probably on the assumption that all the Test Methods on the Testcase Class should use the same fixture. This unfortunate name collision makes discussing test fixtures particularly problematic. In this book I've tried to be consistent by using a different name for the Testcase Class and the test fixture it creates. I trust the reader to translate this terminology to the terminology of their member of the xUnit family.

Defining Suites of Tests

Most Test Runners "auto-magically" construct a test suite containing all the Test Methods on our Testcase Class. Often, this is all we need. Sometimes we want to run all the tests for an entire application; at other times we want to run just the tests for a specific subset of the functionality. Some members of the xUnit family and some third-party tools implement Testcase Class Discovery (see Test Discovery) in which the Test Runner finds the test suites by searching either the file system or an executable for test suites. If we do not have this capability, we need to use Test Suite Enumeration (see Test Enumeration) in which we define the overall test suite for the entire system or application as an aggregate of several smaller test suites. We do this by defining a special Test Suite Factory class whose suite method returns a Test Suite Object containing the Test Suite Objects to run. This collection of test suites into larger and larger Suites of Suites is commonly used to include the unit test suite for a class in the suite for its package or module, which is in turn included in the suite for the entire system. This hierarchical organization supports running suites of tests with varying degrees of completeness and provides a practical way for developers to run the subset of the tests most relevant to the software they are working on. It also allows them to run all the tests with a single command before they commit their changes into the source code Repository [SCM].

Running Tests

Tests are run using a Test Runner, of which several different kinds are available for most members of the xUnit family. A Graphical Test Runner (see Test Runner) provides a visual way for the user to specify, invoke and observe the results of running a test suite. Some allow the user to type in the name of a Test Suite Factory while others provide a graphical Test Tree Explorer (see Test Runner) that can be used to select a specific Test Method to execute from within a tree of test suites with the Test Methods as the leaves. Many Graphical Test Runners are integrated into an IDE to make running tests as easy as selecting the Run Test command from a context menu.

A Command-Line Test Runner (see Test Runner) is used to run tests from the command line; the name of the Test Suite Factory to be used to create the test suite is included as a command-line parameter. Command-Line Test Runners are most commonly used when invoking the Test Runner from Integration Build [SCM] scripts, or sometimes from within an IDE.

>ruby testrunner.rb c:/examples/tests/SmellHandlerTest.rb
Loaded suite SmellHandlerTest
Started
.....
Finished in 0.016 seconds.

5 tests, 6 assertions, 0 failures, 0 errors
>Exit code: 0

Test Results

Naturally, the main reason for running automated tests is to determine the results. For the results to be meaningful, we need a standard way to describe them. In general, members of the xUnit family follow the Hollywood Principle of "don't call us, we'll call you." In other words, "No news is good news"; the tests will "call you" when there is a problem. This allows us to focus on the test failures rather than inspecting a bunch of passing tests as they roll by.

Test results are classified into one of three categories, each of which is treated slightly differently. When a test runs without any errors or failures, it is considered to be successful. In general, xUnit does not do anything special for successful tests as there should be no need to examine any output when a Self-Checking Test (see Goals of Test Automation) passes.

A test is considered to have failed when an assertion fails. That is, the test asserts that something should be true by calling an Assertion Method, and it turns out not to be the case. When it fails, an Assertion Method throws an assertion failure exception (or whatever facsimile the language supports). The Test Automation Framework increments a counter for each failure and adds the failure details to a list of failures that can be examined after the test run is complete. The failure of a single test, while significant, does not prevent the remaining tests from being run; this is in keeping with the principle Keep Tests Independent (see Principles of Test Automation on page X).

A test is considered to have an error when either the SUT or the test itself fails in an unexpected way. Depending on the language being used, this could be an uncaught exception, a raised error, etc. As with assertion failures, the Test Automation Framework increments a counter for each error and adds the error details to a list of errors which can be examined after the test run is complete.

For each test error or test failure, xUnit records information that can be examined to help understand exactly what went wrong. As a minimum, the name of the Test Method and Testcase Class are recorded along with the nature of the problem (whether it was a failed assertion or a software error). In most Graphical Test Runners that are integrated with an IDE, one merely has to (double) click on the appropriate line in the traceback to be shown the source code that emitted the error or failure.

Because the name "test error" sounds more drastic than "test failure," some test automaters try to catch all the errors raised by the SUT and turn them into test failures. This is simply unnecessary. Ironically, in most cases it is easier to determine the cause of a test error than a test failure because the stack trace for a test error will typically pinpoint the problem code within the SUT, while the stack trace for a test failure shows only the location in the test where the failed assertion was made. It is, however, worthwhile using Guard Assertions (page X) to avoid executing code within the Test Method that would result in a test error being raised from the Test Method itself. (For example, before executing an assertion on the contents of a field of an object returned by the SUT, it is worthwhile to assertNotNull on the object reference to avoid a "null reference" error.) This is just a normal part of verifying the expected outcome of exercising the SUT and does not remove useful diagnostic tracebacks.
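A Guard Assertion can be sketched as follows; the find_flight method standing in for the SUT and the two assertion helpers are invented for illustration. Without the guard, a nil return value would surface as a confusing "null reference" test error on the line that follows it.

```ruby
# A sketch of a Guard Assertion (SUT and helpers invented for illustration).
def assert_not_nil(obj, message = "unexpected nil")
  raise message if obj.nil?
end

def assert_equal(expected, actual)
  raise "Expected #{expected.inspect}, got #{actual.inspect}" unless expected == actual
end

# Imagine find_flight is part of the SUT and may return nil.
def find_flight(number)
  { number: 42, status: :scheduled } if number == 42
end

def test_find_flight_returns_scheduled_flight
  flight = find_flight(42)
  # Guard Assertion: fail with a clear message rather than letting the
  # next line blow up with a "null reference" style test error.
  assert_not_nil(flight, "no flight returned")
  assert_equal(:scheduled, flight[:status])
end

test_find_flight_returns_scheduled_flight
```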

Under the xUnit Covers

The description thus far has focused on Test Methods and Testcase Classes with the odd mention of test suites. This simplified "compile time" view is enough for most people to get started writing automated unit tests in xUnit. It is possible to use xUnit without any further understanding of how the Test Automation Framework operates, but that is likely to lead to confusion when building and reusing test fixtures; it is therefore better to understand how xUnit actually runs the Test Methods. In most members of the xUnit family (NUnit is a notable exception, and there may be others I am unaware of; see the sidebar There's Always an Exception (page X) for more information), each test is represented at run time by a Testcase Object (page X) because it is a lot easier to manipulate tests if they are "first class" objects. The Testcase Objects are aggregated into Test Suite Objects that can be used to run many tests with a single user action.




Fig. X: The runtime test structure as seen by the Test Automation Framework.

At run time, the Test Runner asks the Testcase Class or a Test Suite Factory to instantiate one Testcase Object for each Test Method all wrapped up in a Test Suite Object. The Test Runner tells this Composite[GOF] object to run its tests and collect the results. Each Testcase Object runs one Test Method.

Test Commands

The Test Runner cannot possibly know how to call each Test Method individually. To avoid the need for this, most members of the xUnit family convert each Test Method into a Command[GOF] object with a run method. To create these Testcase Objects, the Test Runner calls the suite method of the Testcase Class to get a Test Suite Object. It then calls the run method via the standard test interface. The run method of a Testcase Object executes the specific Test Method for which it was instantiated and reports whether it passed or failed. The run method of a Test Suite Object iterates over all the members of the collection of tests keeping track of which ones passed and which ones failed.

Test Suite Objects

A Test Suite Object is a Composite object that implements the same standard test interface that all Testcase Objects implement. That interface (implicit in languages lacking a type or interface construct) requires provision of a run method. The expectation is that when run is invoked, all of the tests contained in the receiver will be run. In the case of a Testcase Object, it is itself a "test" and will run the corresponding Test Method. In the case of a Test Suite Object, that means invoking run on all of the Testcase Objects it contains. The value of using a Composite Command is that it makes running one or running many tests exactly the same.
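The Command and Composite structure described above can be sketched in a few lines of Ruby. All the names here are simplified stand-ins for the real framework classes: a Testcase Object wraps one test body behind a run method, and the Test Suite Object implements the same run interface by iterating over its children.

```ruby
# A minimal sketch of the Command/Composite structure of xUnit.
class TestCase                      # the Testcase Object: one per Test Method
  def initialize(test_method, &body)
    @name, @body = test_method, body
  end
  def run(result)                   # the standard test interface
    @body.call
    result[:passed] += 1
  rescue => e
    result[:failed] += 1
    result[:failures] << "#{@name}: #{e.message}"
  end
end

class TestSuite                     # the Test Suite Object: a Composite
  def initialize; @tests = []; end
  def <<(test); @tests << test; self; end
  def run(result)                   # same interface as TestCase#run
    @tests.each { |t| t.run(result) }
  end
end

suite = TestSuite.new
suite << TestCase.new(:test_passes) { }                          # no failure
suite << TestCase.new(:test_fails)  { raise "assertion failed" }

result = { passed: 0, failed: 0, failures: [] }
suite.run(result)   # running one test or many looks exactly the same
```

Because both classes expose the same run method, the Test Runner can invoke run on a single Testcase Object or on a deeply nested Suite of Suites without knowing the difference.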

So far, we have assumed that we already have the Test Suite Object instantiated but where did it come from? By convention, each Testcase Class acts as a Test Suite Factory by providing a class method called suite that returns a Test Suite Object containing one Testcase Object for each Test Method in the class. In languages that support some form of reflection, xUnit may use Test Method Discovery to discover the test methods automatically and construct the Test Suite Object containing them. Other members of the xUnit family require the test automater to implement the suite method themselves; this Test Enumeration takes more effort and is more likely to lead to Lost Tests (see Production Bugs on page X).
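In a reflective language, Test Method Discovery amounts to asking the Testcase Class for the methods whose names follow the test-method naming convention. The sketch below uses Ruby reflection; the FlightTest class is invented for illustration.

```ruby
# A sketch of Test Method Discovery via reflection: find every method
# whose name begins with "test_". FlightTest is invented for illustration.
class FlightTest
  def test_cancel; end
  def test_reschedule; end
  def helper_method; end   # not a test; must not be discovered
end

def discover_test_methods(testcase_class)
  testcase_class.public_instance_methods(false).grep(/\Atest_/).sort
end

discover_test_methods(FlightTest)  # => [:test_cancel, :test_reschedule]
```

A reflective suite method would then build one Testcase Object per discovered name, which is exactly what xUnit members with Test Method Discovery do on our behalf.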

XUnit in the Procedural World

Test Automation Frameworks and test-driven development only became popular once object-oriented programming became commonplace, and most members of the xUnit family are implemented in object-oriented programming languages that allow us to implement the concept of a Testcase Object. The lack of objects should not, however, keep us from testing procedural code, but it does make writing Self-Checking Tests more work and building generic, reusable Test Runners more difficult.

In the absence of objects or classes, Test Methods must be treated as global (public static) procedures. These are typically stored in files or modules (or whatever modularity mechanism the language supports). If the language supports the concept of procedure variables (also known as function pointers), we can define a generic Test Suite Procedure (see Test Suite Object) that takes an array of Test Methods (commonly called "test procedures") as an argument. Typically, the Test Methods must be aggregated into the arrays using Test Enumeration because very few non-object programming languages support reflection.
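The Test Suite Procedure idea can be sketched as follows. Ruby lambdas stand in for the function pointers of a procedural language; everything here is invented for illustration, and in a real procedural port the array would hold actual procedure addresses.

```ruby
# A sketch of a generic Test Suite Procedure, with Ruby lambdas standing
# in for function pointers. All names are invented for illustration.
test_add    = lambda { raise "1 + 1 should be 2" unless 1 + 1 == 2 }
test_concat = lambda { raise "bad concat" unless "a" + "b" == "ab" }

# Test Enumeration: each test procedure is added to the array by hand.
all_tests = [test_add, test_concat]

def run_test_suite(tests)            # the generic Test Suite Procedure
  failures = []
  tests.each_with_index do |test, i|
    begin
      test.call
    rescue => e
      failures << "test #{i}: #{e.message}"
    end
  end
  failures
end

run_test_suite(all_tests)  # => [] when every test procedure passes
```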

If the language does not support any way of treating Test Methods as data, the test suites must be defined by writing Test Suite Procedures that call Test Methods and/or other Test Suite Procedures. Test runs may be initiated by defining a main method on the module.

A final option is to encode the tests as data in a file and use a single Data-Driven Test (page X) interpreter to execute them. The main disadvantage of this approach is that it constrains the kinds of tests that can be run to those implemented by the Data-Driven Test interpreter which must be written anew for each SUT. It does have the advantage of moving the coding of the actual tests out of the developer arena and more into the end-user or tester arena and that makes it particularly appropriate for customer tests.
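A minimal Data-Driven Test interpreter can be sketched as follows. Each row of data names an operation, its inputs and the expected result; the interpreter (invented here, including the operations it supports) executes each row and records any mismatches. The constraint mentioned above is visible in the case statement: only the operations the interpreter knows about can be tested.

```ruby
# A sketch of a minimal Data-Driven Test interpreter (invented for
# illustration). Each data row is: operation, input, input, expected.
TEST_DATA = <<~DATA
  add, 2, 3, 5
  add, 10, -4, 6
  multiply, 3, 4, 12
DATA

def interpret_tests(data)
  failures = []
  data.each_line do |line|
    op, a, b, expected = line.split(",").map(&:strip)
    actual = case op
             when "add"      then Integer(a) + Integer(b)
             when "multiply" then Integer(a) * Integer(b)
             else raise "unknown operation: #{op}"
             end
    failures << line.strip unless actual == Integer(expected)
  end
  failures
end

interpret_tests(TEST_DATA)  # => [] when all the data rows pass
```

Because the rows are plain data rather than code, a tester or customer can add new cases without touching the interpreter, which is what makes this style appropriate for customer tests.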

What's Next?

In this chapter we have established our basic terminology for talking about how xUnit tests are put together. Now I turn our attention to constructing our first test fixture in the Transient Fixture Management narrative chapter.




Copyright © 2003-2008 Gerard Meszaros all rights reserved
