
Organizing Our Tests

The book has now been published and the content of this chapter has likely changed substantially.

About This Chapter

In the chapters concluding with the Using Test Doubles narrative we looked at various techniques for interacting with the system under test (SUT) for the purpose of verifying its behavior. In this chapter we turn our attention to the question of how to organize all this test code to make it easy to find and understand.

The basic unit of test code organization is the Test Method (page X). Deciding what to put in the Test Method and where to put it is central to the topic of test organization. When we only have a few tests, how we organize them isn't terribly important. When we have hundreds of tests, test organization becomes a critical factor in keeping our tests easy to understand and find.

I start by discussing what we should and should not include in a Test Method. Next, I discuss how to decide which Testcase Class (page X) each Test Method should be put on. Test naming depends heavily on how we have organized our tests, so we will look at naming after that. Then we will visit the topic of how to organize the Testcase Classes into test suites and where to put test code. The final topic is test code reuse in general and, specifically, where to put reusable test code.

Basic xUnit Mechanisms

The xUnit family of Test Automation Frameworks (page X) provides a number of features to help us organize our tests. The basic question "Where do I code my tests?" is answered by putting our test code into a Test Method on a Testcase Class. We then use either Test Discovery (page X) or Test Enumeration (page X) to create a Test Suite Object (page X) containing all the tests from the Testcase Class. The Test Runner (page X) invokes a method on the Test Suite Object to run all the Test Methods.
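
For example, in JUnit 3.x (one member of the xUnit family) a minimal Testcase Class might look like the following sketch; the Flight class and its methods are hypothetical:

   import junit.framework.TestCase;

   public class FlightTest extends TestCase {
      // Test Discovery finds each method whose name starts with "test" and
      // wraps it in a Testcase Object inside this class's Test Suite Object.
      public void testNewFlight_shouldStartOutUnscheduled() {
         Flight flight = new Flight();
         assertFalse(flight.isScheduled());
      }

      public void testSchedule_shouldEndUpInScheduledState() {
         Flight flight = new Flight();
         flight.schedule();
         assertTrue(flight.isScheduled());
      }
   }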

Right-sizing Test Methods

A test condition is something we need to prove the SUT really does; it can be described in terms of the starting state of the SUT, how we exercise the SUT, how we expect the SUT to respond and what the ending state of the SUT is expected to be. A Test Method is a sequence of statements in our test scripting language that results in exercising one or more test conditions. What should we include in a single test method?




Fig. X: The four phases of a typical test.

Each Test Method implements a Four-Phase Test (page X) verifying, ideally, a single test condition. Not all phases of the Four-Phase Test need be in the Test Method.
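
A sketch of a Test Method with the four phases called out explicitly (the Flight class, its Creation Method and its methods are hypothetical):

   public void testDeschedule_shouldEndUpInUnscheduledState() {
      // Phase 1: Setup - build the fixture this test condition needs
      Flight flight = createScheduledFlight();
      // Phase 2: Exercise - interact with the SUT
      flight.deschedule();
      // Phase 3: Verify - check the expected outcome
      assertFalse(flight.isScheduled());
      // Phase 4: Teardown - nothing to clean up for this in-memory fixture
   }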

Many xUnit purists insist on Single Condition Tests (see Principles of Test Automation on page X) because it gives them good Defect Localization (see Goals of Test Automation on page X): when a test fails, they know exactly what is wrong in the SUT because each test verifies exactly one test condition. This is very much in contrast with manual testing, where one tends to build long, involved multi-condition tests because of the overhead involved in setting up each test's preconditions. When creating xUnit-based automated tests, we have many ways of dealing with this frequently repeated fixture setup, as described in the Transient Fixture Management chapter, so we tend to favor Single Condition Tests. We call a test that verifies too many test conditions an Eager Test (see Assertion Roulette on page X) and consider it a code smell.

A Single Condition Test verifies a single test condition. That is, it executes a single code path through the SUT, and it should execute exactly the same path each time it runs; that is what makes it a Repeatable Test (see Goals of Test Automation). Yes, this means we need as many Test Methods as we have paths through the code, but how else can we expect to get full code coverage? What makes this manageable is that we Isolate the SUT (see Principles of Test Automation) when we write unit tests for each class, so we only have to focus on paths through a single object. Also, because each test should verify only a single path through the code, each Test Method should consist of strictly sequential statements that describe what should happen on that one path. (A Test Method that contains Conditional Test Logic (page X) is a sign of a test trying to accommodate different circumstances, either because it does not have control of all the indirect inputs of the SUT or because it is trying to verify complex expected states inline within the Test Method.) Another reason we Verify One Condition per Test (see Principles of Test Automation) is to Minimize Test Overlap (see Principles of Test Automation) so that we have fewer tests to modify when we modify the behavior of the SUT.

Brian Marick has an interesting compromise that I call "While We're At It" (he calls it "just for laughs", but I don't find that very intent-revealing): it leverages the test fixture we have already set up to make some additional checks and assertions. He clearly marks these with a comment to indicate that if changes to the SUT obsolete this part of the test, it can easily be deleted, saving the effort of maintaining the extra test code.
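
A sketch of what such a clearly marked extra check might look like, using the same hypothetical Flight example (getApproverCount is an invented query method):

   public void testSchedule_shouldEndUpInScheduledState() {
      Flight flight = createUnscheduledFlight();
      flight.schedule();
      assertTrue(flight.isScheduled());
      // While we're at it: not part of this test condition; delete this
      // assertion if changes to the SUT make it obsolete.
      assertEquals(0, flight.getApproverCount());
   }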

Test Methods and Testcase Classes

A Test Method needs to live on a Testcase Class. Should we put all our Test Methods onto a single Testcase Class for the application or should we create a Testcase Class for each Test Method? Of course, the right answer lies somewhere between these two extremes and it will vary throughout the life of our project.

Testcase Class per Class

When we write our first few Test Methods, we can put them all onto a single Testcase Class. As the number of Test Methods increases, we will likely want to split the Testcase Class into one Testcase Class per Class (page X) being tested to reduce the number of Test Methods per class. As those Testcase Classes get too big we usually split the classes further and we need to decide which Test Methods to include in each Testcase Class.




Fig. X: A production class with a single Testcase Class.

With Testcase Class per Class we have a single Testcase Class holding all the Test Methods for all the behavior of our SUT class. Each Test Method may need to create a different fixture, either in-line or by delegating to a Creation Method (page X).

Testcase Class per Feature

One school of thought is to put all the Test Methods that verify a particular feature of the SUT (a feature is one or more methods and attributes that collectively implement some capability of the SUT) into a single Testcase Class. This makes it easy to see all the test conditions for that feature. (Use of appropriate Test Naming Conventions helps achieve this.)




Fig. X: A production class with one Testcase Class for each feature.

With Testcase Class per Feature we have one Testcase Class for each major capability or feature supported by our SUT class. The Test Methods on that test class exercise various aspects of that feature after building whatever test fixture they require.

Testcase Class per Fixture

The opposing view is that one should group all the Test Methods that require the same test fixture (same pre-conditions) into one Testcase Class per Fixture (page X). This facilitates putting the test fixture setup code into the setUp method (Implicit Setup (page X)) but can result in scattering of the test conditions for each feature across many Testcase Classes.




Fig. X: A production class with one Testcase Class for each fixture.

With Testcase Class per Fixture we have one Testcase Class for each possible test fixture (test precondition) of our SUT class. The Test Methods on that test class exercise various features from the common starting point.
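
A sketch of Testcase Class per Fixture using Implicit Setup; the class and method names echo the outline shown later in this chapter, while the Flight class and its InvalidRequestException are hypothetical:

   import junit.framework.TestCase;

   public class TestScheduledFlight extends TestCase {
      private Flight scheduledFlight;

      protected void setUp() throws Exception {
         // Implicit Setup: every Test Method on this class starts from this fixture
         scheduledFlight = new Flight();
         scheduledFlight.schedule();
      }

      public void testDeschedule_shouldEndUpInUnscheduledState() {
         scheduledFlight.deschedule();
         assertFalse(scheduledFlight.isScheduled());
      }

      public void testSchedule_shouldThrowInvalidRequestEx() {
         try {
            scheduledFlight.schedule();
            fail("Expected InvalidRequestException");
         } catch (InvalidRequestException expected) {
            // scheduling an already scheduled flight should be rejected
         }
      }
   }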

Choosing a Test Method Organization Strategy

Clearly, there is no single "best practice" we can always follow; the best practice is the one most appropriate for the circumstance. Testcase Class per Fixture is commonly used when writing unit tests for stateful objects where each method needs to be tested in each state of the object. Testcase Class per Feature (page X) is more appropriate when we are writing customer tests against a Service Facade[CJ2EEP] so we can keep all the tests together. It is also more commonly used when using a Prebuilt Fixture (page X) because there is no fixture setup logic required in each test. When each test needs a slightly different fixture the right answer may be to use Testcase Class per Feature and use Delegated Setup (page X) to make setting up the fixtures easier.

Test Naming Conventions

Naming of Testcase Classes and Test Methods is crucial for making our tests easy to find and understand. We can make the test coverage more obvious by naming each Test Method systematically based on the test condition it verifies. Regardless of which test method organization scheme we use, we would like the combination of the names of the test package, the Testcase Class and the Test Method to convey at least the starting state of the SUT and how the SUT is exercised.

These items are the "input" part of the test condition. Obviously, this is a lot to communicate in just two names, but the reward is high if we can achieve it: we can tell exactly which test conditions we have tests for merely by looking at the names of the classes and methods in an outline view of our IDE.




Fig. X: A production class with one Testcase Class for each test fixture.

When using Testcase Class per Fixture, the class name can describe the fixture, leaving the method name available for describing the inputs and expected outputs.

The same structure can be shown as a text outline:
AllTests
-  suite()
TestAwaitingApprovalFlight
-  setUp()
-  testRequestApproval_shouldThrowInvalidRequestEx()
-  testSchedule_shouldThrowInvalidRequestEx()
-  testDeschedule_shouldThrowInvalidRequestEx()
-  testApprove_shouldEndUpInScheduledState()
-  testApproveWithNullArg_shouldThrowInvalidArg()
-  testApproveInvalidApprover_shouldThrowInvalidArg()
TestScheduledFlight
-  setUp()
-  testDeschedule_shouldEndUpInUnscheduleState()
-  testRequestApproval_shouldThrowInvalidRequestEx()
-  testSchedule_shouldThrowInvalidRequestEx()
-  testApprove_shouldThrowInvalidRequestEx()
TestUnscheduledFlight
-  setUp()
-  testRequestApproval_shouldEndUpInAwaitingApproval()
-  testSchedule_shouldEndUpInScheduledState()
-  testApprove_shouldThrowInvalidRequestEx()
-  testDeschedule_shouldThrowInvalidRequestEx()

This example also shows how useful it is to include the "expectations" side of the test condition: how we expect the SUT to respond and what state it should end up in.

These can be included in the name of the Test Method, prefixed by "should". If this makes the names too long, we can always find the expected outcome by looking at the body of the Test Method. (Many xUnit variants "encourage" us to start all our Test Method names with "test" by automatically detecting these methods and adding them to the Test Suite Object; this constrains our naming somewhat compared to variants that indicate Test Methods via method attributes or annotations.)

Organizing Test Suites

The Testcase Class acts as a Test Suite Factory (see Test Enumeration) when it returns a Test Suite Object containing a collection of Testcase Objects (page X), each representing a Test Method. This is the default organization mechanism provided by xUnit. Most Test Runners allow any class to act as a Test Suite Factory by implementing a Factory Method[GOF], typically called suite.




Fig. X: A Testcase Class acting as a Test Suite Factory.

By default, the Testcase Class acts as a Test Suite Factory to produce the Test Suite Object the Test Runner requires to execute our tests. We can also enumerate a specific set of tests we want to run by providing a Test Suite Factory that returns a Test Suite Object containing only the desired tests.
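
In JUnit 3.x, for example, a Test Suite Factory is simply a class whose static suite() Factory Method enumerates the tests it wants included; the suite name is invented and the Testcase Class names echo the earlier flight example:

   import junit.framework.Test;
   import junit.framework.TestSuite;

   public class FlightStateTests {
      // Test Enumeration: explicitly list the Testcase Classes to include
      public static Test suite() {
         TestSuite suite = new TestSuite("Flight state tests");
         suite.addTestSuite(TestAwaitingApprovalFlight.class);
         suite.addTestSuite(TestScheduledFlight.class);
         suite.addTestSuite(TestUnscheduledFlight.class);
         return suite;
      }
   }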

Running Groups of Tests

We often want to run groups of tests (test suites) but we don't want that to constrain how we organize them. A common convention is to create a special Test Suite Factory called AllTests for each package of tests. We don't need to stop there; we can create Named Test Suites (page X) for any collection of tests we want to be able to run together. A good example is a Subset Suite (see Named Test Suite) that allows us to run just those tests that need software deployed to the web server (or not deployed to the web server!). We usually have at least one Subset Suite for all the unit tests and another for just the customer tests, because the latter often take too long to execute. Some variants of xUnit support Test Selection (page X), which we can use instead of defining Subset Suites.

Such run-time groupings of tests are often based on the environment in which they need to run. We might have a Subset Suite that includes all the tests that can be run without the database and another for the ones that depend on the database. Ditto for a web server. If we have these various kinds of test suites in a test package, we can define "AllTests" as a Suite of Suites (see Test Suite Object). This ensures that a test added to one of the specialized suites is also run in AllTests without incurring extra test maintenance effort.
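
In JUnit 3.x, a Suite of Suites is just a Test Suite Factory whose suite() method adds the suites returned by the other factories; the subset suite names below are hypothetical:

   import junit.framework.Test;
   import junit.framework.TestSuite;

   public class AllTests {
      public static Test suite() {
         TestSuite suite = new TestSuite("AllTests");
         // Suite of Suites: a test added to either subset suite is run here too
         suite.addTest(DatabaseDependentTests.suite());   // hypothetical subset suite
         suite.addTest(InMemoryUnitTests.suite());        // hypothetical subset suite
         return suite;
      }
   }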

Running a Single Test

Suppose a Test Method in a Testcase Class is failing and we want to put a breakpoint on a particular method, but that method is called in every test. Our first reaction might be to muddle through by hitting "go" each time the breakpoint is hit until we are called from the test of interest. One option is to disable (by commenting out) all the other Test Methods so they are not run. Another is to rename all the other Test Methods so that the xUnit Test Discovery mechanism does not recognize them as tests. In variants of xUnit that use method attributes or annotations, we can add the "Ignore" attribute to a test method instead. Each of these approaches introduces the potential problem of a Lost Test (see Production Bugs on page X), although the "Ignore" approach at least reminds us that some tests are being ignored. In members of the xUnit family that provide a Test Tree Explorer (see Test Runner), we can simply select a single test to run from the hierarchy view of the test suite:




Fig. X: A Test Tree Explorer showing the structure of the tests in our suite.

We can use the Test Tree Explorer to drill down into the runtime structure of the test suite and run individual tests or sub-suites.

The same runtime structure can be shown as a text outline:
TestSuite("...flightstate.featuretests.AllTests")
   TestSuite("...flightstate.featuretests.TestApproveFlight")   
      TestApproveFlight("testScheduledState_shouldThrowIn..ReEx")
      TestApproveFlight("testUnsheduled_shouldEndUpInAwai..oval")
      TestApproveFlight("testAwaitingApproval_shouldThrow..stEx")
      TestApproveFlight("testWithNullArgument_shouldThrow..ntEx")
      TestApproveFlight("testWithInvalidApprover_shouldTh..ntEx")
   TestSuite("...flightstate.featuretests.TestDescheduleFlight")
      TestDescheduleFlight("testScheduled_shouldEndUpInSc..tate")
      TestDescheduleFlight("testUnscheduled_shouldThrowIn..stEx")
      TestDescheduleFlight("testAwaitingApproval_shouldTh..stEx")
   TestSuite("...flightstate.featuretests.TestRequestApproval")
      TestRequestApproval("testScheduledState_shouldThrow..stEx")
      TestRequestApproval("testUnsheduledState_shouldEndU..oval")
      TestRequestApproval("testAwaitingApprovalState_shou..stEx")
   TestSuite("...flightstate.featuretests.TestScheduleFlight")
      TestScheduleFlight("testUnscheduled_shouldEndUpInSc..uled")
      TestScheduleFlight("testScheduledState_shouldThrowI..stEx")
      TestScheduleFlight("testAwaitingApproval_shouldThro..stEx")

Fig. X: The structure of our Test Suite Objects with the Testcase Objects they contain.

Each Test Method is installed as the Pluggable Behavior[SBPP] of an instance of the Testcase Class on which it lives.

When none of these options is available we can still use a Test Suite Factory to run a single test! Wait a minute! Aren't test suites all about running groups of tests that live in different Testcase Classes? Well, yes, but that doesn't mean we can't use them for other purposes. We can define a Single Test Suite (see Named Test Suite), which I usually call "MyTest", that runs one particular test. This can be done by calling the constructor of the Testcase Class with the name of the specific Test Method as an argument.
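
In JUnit 3.x such a Single Test Suite might look like the following sketch; the Testcase Class and Test Method names are taken from the earlier outline, and we assume the Testcase Class provides a constructor that passes the test name to super:

   import junit.framework.Test;
   import junit.framework.TestSuite;

   public class MyTest {
      public static Test suite() {
         TestSuite suite = new TestSuite("Single test under debug");
         // One Testcase Object constructed with the name of the Test Method of interest:
         suite.addTest(new TestScheduledFlight("testDeschedule_shouldEndUpInUnscheduleState"));
         return suite;
      }
   }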

Test Code Reuse

Test Code Duplication (page X) can increase the cost of writing and maintaining tests significantly. There are a number of techniques for reusing test logic. The most important thing to note is that any reuse must not compromise the value of the Tests as Documentation (see Goals of Test Automation). I don't recommend reuse of the actual Test Method in different circumstances (e.g. with different fixtures) as this is typically a sign of a Flexible Test (see Conditional Test Logic on page X) that tests different things in different circumstances. Most test code reuse is achieved either through Implicit Setup or Test Utility Methods (page X). The big exception to this is the reuse of Test Doubles (page X) by many tests; we can treat these Test Double classes as a special kind of Test Helper (page X) when thinking about where to put them.

Test Utility Method Locations




Fig. X: The various places we can put Test Utility Methods.

The primary decision-making criterion is the desired scope of reuse of the Test Utility Methods.

Many variants of xUnit provide a special Testcase Superclass (page X), typically called "TestCase", from which all Testcase Classes should (and in some cases must) inherit, either directly or indirectly. If we have useful utility methods on our Testcase Class and we want to reuse them in other Testcase Classes, we may find it useful to create one or more Testcase Superclasses to inherit from instead of "TestCase". We need to be careful, however, if those methods need to see types or classes that live in various packages within the SUT, because we don't want our root Testcase Superclass to depend on them directly. We may be able to create a Testcase Superclass for each test package to keep our test class dependencies acyclic. The alternative is to create a Test Helper for each domain package and put it in the corresponding test package. That way, a Testcase Class is not forced to choose a single Testcase Superclass; it can simply use the appropriate Test Helper(s).
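
As a sketch, a package-specific Testcase Superclass might hold Creation Methods like those used in the earlier sketches; the Flight class and its methods are hypothetical:

   import junit.framework.TestCase;

   public abstract class FlightTestCase extends TestCase {
      // Test Utility Methods shared by the Testcase Classes in this test package:
      protected Flight createUnscheduledFlight() {
         return new Flight();
      }

      protected Flight createScheduledFlight() {
         Flight flight = new Flight();
         flight.schedule();
         return flight;
      }
   }

A Testcase Class in the same test package can then extend FlightTestCase instead of TestCase and call these utility methods directly.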

TestCase Inheritance and Reuse

The most common reason for inheriting from a Testcase Superclass is to gain access to Test Utility Methods. Another arises when testing frameworks and their plug-ins: it can be useful to create a conformance test that specifies the general behavior of a plug-in via a Template Method[GOF] that calls methods provided by a subclass specific to the kind of plug-in being tested, which supply the plug-in-specific details. This scenario is rare enough that I won't describe it further here; please refer to [FaT] for a more complete description.
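
A minimal sketch of the idea, assuming a hypothetical Plugin interface: the abstract Testcase Superclass supplies the Template Method and each plug-in's Testcase Class fills in the plug-in-specific details:

   public abstract class PluginConformanceTest extends TestCase {
      // Hook method: each concrete subclass supplies the plug-in to be tested
      protected abstract Plugin createPlugin();

      // Template Method: general behavior every plug-in must exhibit
      public void testInitialize_shouldReportReady() {
         Plugin plugin = createPlugin();
         plugin.initialize();
         assertTrue(plugin.isReady());
      }
   }

   public class CsvImporterConformanceTest extends PluginConformanceTest {
      protected Plugin createPlugin() {
         return new CsvImporterPlugin();   // hypothetical plug-in implementation
      }
   }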

Test File Organization

The next question is where we should put our Testcase Classes. It should go without saying that they should be stored in the source code Repository[SCM] along with the production code. Beyond that, there is quite a range of choices. The test packaging strategy we choose will depend very much on our environment, as many IDEs have constraints that make certain strategies unworkable. The key is to achieve the fundamental goals of having No Test Logic in Production Code (see Principles of Test Automation) while still being able to find the corresponding test for each piece of code or functionality.

Built in Self Test

With Built-in Self Test, the tests are included with the production code and can be run at any time; no provision is made for keeping them separate. Most organizations feel much more comfortable with No Test Logic in Production Code, so this may not be a good approach for them. Keeping the tests out of the production code is particularly important in memory-constrained environments where we don't want the tests taking up space.

Some development environments encourage us to keep the tests and the production code together. For example, SAP's ABAP Unit includes a special keyword "For Testing" that tells the system to disable the tests when the code is transported into the production environment.

Test Packages

If we decide to put the Testcase Classes into separate test packages, they can be organized in several ways. We can keep the tests separate by putting them into one or more test packages in the same source tree, or we can put them into the same logical package as the production code but physically store them in a parallel source tree. The latter approach is frequently used in Java because it gets around the problem of tests not being able to see "package protected" methods on the SUT. (There is another way to get around the visibility issue in Java: we can define our own test Security Manager to allow tests to get access to all methods on the SUT, not just the "package protected" ones. This solves the problem in a general way, but it requires a pretty good understanding of Java class loaders; other languages may not have the equivalent functionality, or the problem!) Some IDEs may prevent using the parallel-tree approach if they insist that a package be wholly contained within a single folder or project. When we use test sub-packages under each production code package, we may need a build-time test stripper to exclude them from production builds.
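
For example, a Java project using a parallel source tree might keep a production class and its Testcase Class in the same package but in separate trees; the paths below are hypothetical:

   src/com/clrstream/flightbooking/domain/Flight.java
   test/com/clrstream/flightbooking/domain/TestFlight.java

Both files declare "package com.clrstream.flightbooking.domain;", so TestFlight can see the "package protected" methods of Flight, while the build can simply exclude the test tree from the production deliverable.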

Test Dependencies

However we decide to store/manage the source code, we need to ensure that we don't have any Test Dependency in Production (see Test Logic in Production on page X) because even a test stripper cannot remove the tests if production code needs them to be present to run. This makes paying attention to our class dependencies important. We also don't want to have any Test Logic in Production as that means we aren't testing the same code we will be running in production. This is discussed in more detail in the Test Automation Strategy narrative.

What's Next?

Now that we've looked at how to organize our test code, there are a few more testing patterns that I would like to introduce in the Testing With Databases narrative chapter.




Copyright © 2003-2008 Gerard Meszaros all rights reserved
