
Transient Fixture Management

The book has now been published and the content of this chapter has likely changed substantially.

About This Chapter

In the Test Automation Strategy narrative, we looked at the strategic decisions that we need to make, including the definition of the term "fixture" and the selection of a test fixture strategy. We also established our basic xUnit terminology and diagramming notation in the XUnit Basics narrative chapter. This chapter builds on both of those chapters by focusing on the mechanics of implementing the chosen fixture strategy. There are several different ways to set up a Fresh Fixture (page X), and our decisions affect how much effort it takes to write the tests, the maintainability of our tests, and whether we achieve Tests as Documentation (see Goals of Test Automation on page X). Persistent Fresh Fixtures (see Fresh Fixture) are set up the same way as Transient Fresh Fixtures (see Fresh Fixture), but there are additional factors to consider related to fixture tear down. Shared Fixtures (page X) introduce an additional set of considerations. I will discuss Persistent Fresh Fixtures and Shared Fixtures in the next chapter.


Fig. X: Transient Fresh Fixture

Fresh Fixtures come in two flavors: Transient and Persistent. Both require fixture setup; the latter also require fixture tear down.

Test Fixture Terminology

Before we can talk about setting up a fixture, we need to agree upon what a fixture is.

What is a Fixture?

Every test consists of four parts as described in Four-Phase Test (page X). The first part is where we create the system under test (SUT) and everything it depends on and put them into the state required to exercise the SUT. In xUnit, we call everything we need in place to exercise the SUT the test fixture and the part of the test logic that we execute to set it up fixture setup.

The most common way to set up the fixture is front door fixture setup: calling the appropriate methods on the SUT to put it into the starting state. This may require constructing other objects and passing them to the SUT as arguments of method calls. When the state of the SUT is stored in other objects or components, we can do Back Door Setup (see Back Door Manipulation on page X) by inserting the necessary records directly into the other component on which the behavior of the SUT depends. We use Back Door Setup most often with databases or when we need to use a Mock Object (page X) or Test Double (page X); I will cover these in more detail in the chapters on Testing With Databases and Using Test Doubles.
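As a concrete illustration, here is a minimal, self-contained sketch of front door fixture setup. The Flight class below is a simplified stand-in invented for this example, not the book's actual code; the point is that the fixture reaches its starting state purely through the SUT's public interface.

```java
public class FrontDoorSetupSketch {
    // Simplified stand-in for the SUT (an assumption for illustration):
    static class Flight {
        private String status = "PROPOSED";
        void schedule() { status = "SCHEDULED"; } // front door state change
        String getStatus() { return status; }
    }

    public static void main(String[] args) {
        // Front door fixture setup: construct the SUT and call its own
        // methods to put it into the state the test requires.
        Flight flight = new Flight();
        flight.schedule();

        // Exercise SUT & verify outcome:
        if (!"SCHEDULED".equals(flight.getStatus()))
            throw new AssertionError("expected SCHEDULED");
        System.out.println(flight.getStatus());
    }
}
```

Back Door Setup would instead write the "scheduled" state directly into whatever store the SUT reads it from, bypassing the schedule() call entirely.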

It is worth noting that the term fixture is used to mean various things in different kinds of test automation. The xUnit variants for the Microsoft languages call the Testcase Class (page X) the test fixture. Most other variants of xUnit distinguish between the Testcase Class and the test fixture (or test context) it sets up. In Fit[FitB], the term fixture is used to mean the custom-built parts of the Data-Driven Test (page X) interpreter that we use to define our Higher Level Language (see Principles of Test Automation on page X). Whenever I say "test fixture" without further qualifying it I mean the stuff we set up before exercising the SUT. If I want to refer to the class that hosts the Test Methods (page X), whether it be in Java or C#, Ruby or VB, I'll say Testcase Class.

What is a Fresh Fixture?

In a Fresh Fixture strategy we set up a brand new fixture for every test we run. That is, each Testcase Object (page X) builds its own fixture before exercising the SUT and does so every time it is rerun. That is what makes the fixture "fresh". As a result, we completely avoid the problems associated with Interacting Tests (see Erratic Test on page X).


Fig. X: A pair of Fresh Fixtures, each with its creator.

A Fresh Fixture is built specifically for a single test, used once and then retired.

What is a Transient Fresh Fixture?

When our fixture is an in-memory fixture referenced only by local variables or instance variables (see the sidebar There's Always an Exception on page X), the fixture just "disappears" after every test courtesy of Garbage-Collected Teardown (page X). When fixtures are persistent, this is not the case, so we have some decisions to make about how we implement the Fresh Fixture strategy. We have two different ways to keep fixtures "fresh". The obvious option is to tear down the fixture after each test. The less obvious option is to leave the old fixture around and build a brand new fixture in a way that ensures it does not collide with the old one.

As most Fresh Fixtures we build are transient, I will cover that case first and then come back to managing persistent Fresh Fixtures.

Building Fresh Fixtures

Whether we are building a Transient Fresh Fixture or a Persistent Fresh Fixture, the choices we have for how to construct it are pretty much the same. The fixture setup logic includes the code needed to instantiate the SUT (this discussion assumes that the SUT is an object and not just static methods on a class), the code to put the SUT into the appropriate starting state, and the code to create and initialize the state of anything the SUT depends on or that will be passed to it as arguments. The most obvious way to set up a Fresh Fixture is Inline Setup (page X), where all the fixture setup logic is within the Test Method. It can also be constructed using Delegated Setup (page X) by calling Test Utility Methods (page X), or using Implicit Setup (page X), where the Test Automation Framework (page X) calls a special setUp method we have provided on our Testcase Class. We can also use a combination of the three, but let's start by looking at each one individually.

Inline Fixture Setup

In Inline Setup, the test does all the fixture setup within the body of the Test Method. We construct objects, call methods on them, construct the SUT and call methods on it to put it into a specific state. We do this all from within our Test Method. Think of Inline Setup as the do-it-yourself approach to fixture creation.

   public void testStatus_initial() {
      // inline setup:
      Airport departureAirport = new Airport("Calgary", "YYC");
      Airport destinationAirport = new Airport("Toronto", "YYZ");
      BigDecimal flightNumber = new BigDecimal("999");
      Flight flight = new Flight(flightNumber, departureAirport,
                                 destinationAirport);
      // Exercise SUT & verify outcome
      assertEquals(FlightState.PROPOSED, flight.getStatus());
      // tearDown:
      //    garbage-collected
   }

The main drawback of Inline Setup is that it tends to lead to a lot of Test Code Duplication (page X) because each Test Method needs to construct the SUT. Many of the Test Methods also need to do similar fixture set up. All this Test Code Duplication leads to High Test Maintenance Cost (page X) caused by Fragile Tests (page X). If the work to create the fixture is complex, it can also lead to Obscure Test (page X). A related problem is that Inline Setup tends to encourage Hard-Coded Test Data (see Obscure Test) within each Test Method because creating a local variable with an Intent Revealing Name[SBPP] may seem like too much work for the benefit.

All these test smells can be prevented by moving the code that sets up the fixture out of the Test Method to somewhere else. Where we move it to determines which of the alternate fixture set up strategies we have used.

Delegated Fixture Setup

A quick and easy way to reduce Test Code Duplication and the resulting Obscure Tests is to refactor our Test Methods to use Delegated Setup. We can use an Extract Method [Fowler] refactoring to move a sequence of statements used in several Test Methods into a Test Utility Method that we then call from those Test Methods. This is a very simple and safe refactoring, especially when we let our IDE do all the heavy lifting for us. When the extracted method contains logic to create an object on which our test depends, we call it a Creation Method (page X). Creation Methods (when referenced via a Test Helper (page X) class, they are often called the Object Mother (see Test Helper on page X) pattern) with Intent Revealing Names make the test's preconditions readily apparent to the reader while avoiding unnecessary Test Code Duplication. They allow both the test reader and test automater to focus on what is being created without being distracted by how it is created. The Creation Methods act as reusable building blocks for test fixture construction.

   public void testGetStatus_initial() {
      // setup:
      Flight flight = createAnonymousFlight();
      // Exercise SUT & verify outcome:
      assertEquals(FlightState.PROPOSED, flight.getStatus());
      // tearDown:
      //    garbage-collected
   }

One of the goals of these Creation Methods is to remove the need for every test to know the details of how the objects it requires are created. This goes a long way toward preventing Fragile Tests caused by changes to constructor method signatures or semantics. When a test does not care about the specific identity of the objects it is creating, we can use Anonymous Creation Methods (see Creation Method). They generate any unique keys required by the object being created. By using a Distinct Generated Value (see Generated Value on page X) we can guarantee that no other test instance that requires a similar object will end up accidentally using the same object as this test. This prevents many forms of the behavior smell Erratic Test, including Unrepeatable Test, Interacting Tests and Test Run War, even if we happen to be using a persistent object Repository [SCM] that enables Shared Fixtures.
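Here is a minimal sketch of what such an Anonymous Creation Method might look like, with a Distinct Generated Value supplying the flight number. The Airport and Flight classes are simplified stand-ins assumed for illustration, not the book's actual implementation.

```java
import java.math.BigDecimal;
import java.util.concurrent.atomic.AtomicLong;

public class FlightFixtureHelper {
    // Simplified stand-ins for the chapter's classes (assumptions):
    static class Airport {
        final String name, code;
        Airport(String name, String code) { this.name = name; this.code = code; }
    }
    static class Flight {
        private final BigDecimal number;
        Flight(BigDecimal number, Airport from, Airport to) { this.number = number; }
        BigDecimal getNumber() { return number; }
    }

    private static final AtomicLong nextNumber = new AtomicLong(1000);

    // Distinct Generated Value: every call yields a flight number no other
    // test will receive, so tests cannot collide on shared keys.
    static BigDecimal getUniqueFlightNumber() {
        return new BigDecimal(nextNumber.getAndIncrement());
    }

    // Anonymous Creation Method: the caller does not care which flight it
    // gets, only that it is a valid, freshly created one.
    static Flight createAnonymousFlight() {
        return new Flight(getUniqueFlightNumber(),
                          new Airport("Calgary", "YYC"),
                          new Airport("Toronto", "YYZ"));
    }

    public static void main(String[] args) {
        // Two anonymous flights never share a flight number:
        Flight a = createAnonymousFlight();
        Flight b = createAnonymousFlight();
        if (a.getNumber().equals(b.getNumber()))
            throw new AssertionError("flight numbers must be distinct");
    }
}
```

Because each test gets its own generated key, even tests that persist their objects cannot accidentally pick up another test's data.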

When a test does care about some of the attributes of the object being created, we use a Parameterized Anonymous Creation Method (see Creation Method); the method is passed any attributes that the test cares about (that are important to the test outcome) leaving all other attributes to be defaulted by the implementation of the Creation Method. My motto is:

"When it is not important for something to be seen in the test method, it is important that it not be seen in the test method!"
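The motto can be made concrete with a sketch of a Parameterized Anonymous Creation Method: only the attribute the test cares about appears as a parameter, and everything else is defaulted inside the method. All class and method names here are illustrative assumptions.

```java
public class ParameterizedCreationSketch {
    // Simplified stand-ins (assumptions for illustration):
    static class Airport {
        final String code;
        Airport(String code) { this.code = code; }
    }
    static class Flight {
        final Airport from, to;
        Flight(Airport from, Airport to) { this.from = from; this.to = to; }
    }

    // Parameterized Anonymous Creation Method: the test names only the
    // destination; the departure airport is a defaulted detail the test
    // reader never needs to see.
    static Flight createFlightTo(Airport destination) {
        Airport defaultDeparture = new Airport("YYC");
        return new Flight(defaultDeparture, destination);
    }

    public static void main(String[] args) {
        Airport yyz = new Airport("YYZ");
        Flight flight = createFlightTo(yyz);
        if (flight.to != yyz) throw new AssertionError();
        System.out.println(flight.to.code);
    }
}
```

The test body now mentions exactly one detail, the destination, which is exactly the detail that matters to its outcome.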

A common application of Delegated Setup is writing input validation tests for SUT methods that are expected to validate the attributes of an object argument. We need a separate test for each invalid attribute that should be detected. Building all these slightly invalid objects would be a lot of work using Inline Setup. We can reduce the effort and Test Code Duplication dramatically by using the pattern One Bad Attribute (see Derived Value on page X): we first call a Creation Method to create a valid object and then replace one attribute with an invalid value that should be rejected by the SUT. Similarly, we might create an object in the correct state by using a Named State Reaching Method (see Creation Method).
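A sketch of One Bad Attribute under assumed names: a Creation Method produces a known-valid object, and the test then corrupts exactly one attribute before handing the object to the (stand-in) validation logic.

```java
public class OneBadAttributeSketch {
    // Simplified argument object (an assumption for illustration):
    static class FlightDetails {
        String origin = "YYC";
        String destination = "YYZ";
        int seats = 120;
    }

    // Creation Method producing a fully valid object:
    static FlightDetails createValidFlightDetails() {
        return new FlightDetails();
    }

    // Stand-in for the SUT's validation logic:
    static boolean isValid(FlightDetails d) {
        return d.origin != null && d.destination != null && d.seats > 0;
    }

    public static void main(String[] args) {
        // One Bad Attribute: start valid, then break a single field.
        FlightDetails details = createValidFlightDetails();
        details.seats = -1;
        if (isValid(details))
            throw new AssertionError("object with bad seats should be rejected");
    }
}
```

Each validation test repeats this shape with a different single corrupted attribute, so the tests stay short and their intent stays visible.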

Some people prefer to Reuse Test for Fixture Setup (see Creation Method). That is, they call other tests directly within the setup part of their test. This is not an unreasonable thing to do as long as it is readily apparent to the test reader what the other test is setting up for this test. Unfortunately, very few tests are named in such a way as to convey this intention. So, if we value Tests as Documentation, we will want to consider wrapping the called test with a Creation Method that has an Intent Revealing Name so that the test reader can get a sense of what the fixture looks like.

The various Creation Methods can be kept as private methods on the Testcase Class, pulled up to a Testcase Superclass (page X) or moved to a Test Helper (page X). The "mother of all creation methods" is Object Mother (see Test Helper). This strategy-level pattern describes a family of approaches centered on the use of Creation Methods on one or more Test Helpers and may include Automated Teardown (page X) as well.

Implicit Fixture Setup

Most members of the xUnit family provide a convenient hook for calling code that needs to be run before every Test Method. Some members call a method with a specific name (e.g. setUp) while others call a method that has a specific annotation (e.g. "@Before" in JUnit) or method attribute (e.g. "[SetUp]" in NUnit). To avoid repeating this every time I need to refer to this mechanism, I will simply call it the setUp method regardless of how we indicate it to the Test Automation Framework. An empty default implementation is typically provided by the framework so we do not have to provide one.

In Implicit Setup we take advantage of this framework "hook" by putting all the fixture creation logic into the setUp method. Because every Test Method on the Testcase Class shares this fixture setup logic, all the Test Methods need to be able to use the fixture it creates. This certainly addresses the Test Code Duplication problem but it does have several consequences.

   Airport departureAirport;
   Airport destinationAirport;
   Flight flight;

   public void setUp() throws Exception {
      departureAirport = new Airport("Calgary", "YYC");
      destinationAirport = new Airport("Toronto", "YYZ");
      BigDecimal flightNumber = new BigDecimal("999");
      flight = new Flight(flightNumber, departureAirport,
                          destinationAirport);
   }

   public void testGetStatus_initial() {
      // implicit setup
      // Exercise SUT & verify outcome
      assertEquals(FlightState.PROPOSED, flight.getStatus());
   }

The first consequence is that it can make the tests harder to understand because we cannot see how the preconditions of the test (the test fixture) relate to the expected outcome within the Test Method; we have to look in the setUp method to see it. We can mitigate this by naming our Testcase Class based on the test fixture created in the setUp method. This only makes sense if all the Test Methods really need the same fixture; this is an example of Testcase Class per Fixture (page X). As mentioned earlier, several members of the xUnit family (VbUnit and NUnit to name two) use the term "TestFixture" to describe what we call Testcase Class in this book. This naming is probably based on the assumption that we are using a Testcase Class per Fixture.

Another consequence of using Implicit Setup is that we cannot use local variables to hold references to the objects in our fixture; we are forced to use instance variables to refer to any objects that are constructed in the setUp method and needed when exercising the SUT, verifying the expected outcome or tearing down the fixture. These instance variables act as global variables between the parts of the test, but as long as we stick to instance variables rather than class variables, the test fixture will be newly constructed for each test case in the Testcase Class. Most members of xUnit provide isolation between the fixtures created for each Test Method, but at least one, NUnit, does not; see the sidebar There's Always an Exception (page X) for more information. We should definitely give the variables Intent Revealing Names so that we do not need to keep referring back to the setUp method to understand what they hold.

Misuse of the SetUp Method

When you have a new hammer, everything looks like a nail!

Like any feature of any system, the setUp method can be abused. We should not feel obligated to use it just because it is provided. It is one of several code reuse mechanisms that are available to us. When object-oriented languages first came out, everyone was enamored with inheritance and tried to do all reuse using it. Over time, we learned when it was appropriate and when we should use other mechanisms like delegation. The setUp method is xUnit's inheritance.

The most common abuse of the setUp method is when it is used to build a General Fixture (see Obscure Test) with multiple distinct parts each dedicated for different Test Methods. This can lead to Slow Tests (page X) if we are building a Persistent Fresh Fixture but more importantly, it can lead to Obscure Tests.

If we do not adopt the practice of grouping the Test Methods into Testcase Classes based on identical fixtures but we do use the setUp method, we should build only the lowest common denominator part of the fixture in the setUp method. That is, we should put into the setUp method only the setup logic that will not cause problems in any of the tests. Even fixture setup code that does not cause problems for any of the Test Methods can still cause other problems if we use the setUp method to build a General Fixture instead of a Minimal Fixture (page X). A General Fixture is a common cause of Slow Tests because each test spends much more time than necessary building the test fixture. It also leads to Obscure Test because the test reader cannot easily see what part of the fixture a particular Test Method depends on. A General Fixture often becomes a Fragile Fixture (see Fragile Test) as the relationship between its various elements and the tests that use them is forgotten; changes made to the fixture to support a newly added test may cause existing tests to fail.

Note that if we use a class variable to hold the object, we may have crossed the line into the world of Persistent Fresh Fixtures; use of Lazy Setup (page X) to populate the variable takes us into the world of Shared Fixtures because later tests within the test suite may reuse the object(s) created in earlier tests and thus may become dependent on the changes the other test (should have) made to them.

Hybrid Fixture Setup

Thus far, I have presented these three styles of fixture construction as strict alternatives. In practice, there is value in combining them. It is pretty common to call some Creation Methods from within the Test Method and then to do some additional setup inline. The readability of the setUp method can also be improved if it calls Creation Methods to construct the fixture. An additional benefit is that the Creation Methods can be unit tested much more easily than either inline fixture construction logic or the setUp method. They can also be located on a class outside the Testcase Class hierarchy such as a Test Helper.
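A sketch of such a hybrid, using invented names: the setUp method delegates the shared part of the fixture to a Creation Method, while the Test Method does its test-specific setup inline. The assertEquals here is a local helper standing in for the framework's, so the example stays self-contained.

```java
public class HybridSetupSketch {
    // Simplified stand-ins (assumptions for illustration):
    static class Airport {
        final String code;
        Airport(String code) { this.code = code; }
    }
    static class Flight {
        private String status = "PROPOSED";
        final Airport from, to;
        Flight(Airport from, Airport to) { this.from = from; this.to = to; }
        void schedule() { status = "SCHEDULED"; }
        String getStatus() { return status; }
    }

    Flight flight;

    // Implicit setup delegating to a Creation Method:
    void setUp() {
        flight = createAnonymousFlight();
    }
    static Flight createAnonymousFlight() {
        return new Flight(new Airport("YYC"), new Airport("YYZ"));
    }

    // Test-specific setup is done inline, where the reader can see it:
    void testGetStatus_afterScheduling() {
        flight.schedule();
        assertEquals("SCHEDULED", flight.getStatus());
    }

    // Local stand-in for the framework's assertion:
    static void assertEquals(Object expected, Object actual) {
        if (!expected.equals(actual))
            throw new AssertionError(expected + " != " + actual);
    }

    public static void main(String[] args) {
        HybridSetupSketch test = new HybridSetupSketch();
        test.setUp();
        test.testGetStatus_afterScheduling();
        System.out.println("ok");
    }
}
```

The shared construction detail lives in one testable place, while the scheduling step that this test actually cares about stays visible in the Test Method.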

Tearing Down Transient Fresh Fixtures

The really nice thing about Transient Fresh Fixtures is that there is very little effort involved in fixture tear down. Most members of the xUnit family are implemented in languages that support garbage collection. As long as we hold our references to the fixture in variables that go out of scope, we can count on Garbage-Collected Teardown doing all the work for us. See the sidebar There's Always an Exception for a description of why this is not quite true in NUnit.

If we are using one of the few members of the xUnit family that do not support garbage collection, we may have to treat all Fresh Fixtures as persistent.

What's Next?

This chapter introduced techniques for setting up and tearing down an in-memory Fresh Fixture. With some planning and a bit of luck, this is all you should need for the large majority of your tests. Managing Fresh Fixtures is more complicated when the fixture is persisted either by the SUT or the test itself. In the next chapter, Persistent Fixture Management, I introduce additional techniques needed for managing persistent fixtures including Persistent Fresh Fixtures and Shared Fixtures.


Copyright © 2003-2008 Gerard Meszaros all rights reserved
