Monday, March 23, 2009

Using inheritance in your tests

In my previous post I described the new structure of my test code, but what happens when you have multiple tests that need the same setup code, with only a slight difference? For example, let's say that in my setup code I am setting up a call to a service that sends out email notifications. In most tests I would expect that service to be called; however, I now have a new test that needs the same basic setup but should verify the negative behavior (i.e. that emails are not sent out under a certain condition).

When this presents itself, I do the following (a code sketch follows the list):
  1. Create a new folder in Visual Studio named the same as your namespace
  2. Extract the tests from the class file into another class file.
  3. Name the new class file the same as the test name
  4. Inherit from the base class file (i.e. the class file we removed the tests from)
  5. Create a virtual void method in the base class named observe()
  6. Call the virtual method at the end of the Setup method in the base class
  7. Create an override method in the sub-class to set the conditions that would exercise the expected behavior
  8. Name the new Test method in the sub-class execute()
  9. Place all class files into the new folder
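
Here is a rough sketch of what that layout can look like, using the email notification example from the top of the post. All of the names (OrderProcessor, FakeNotificationService, when_an_order_is_processed, and_the_customer_has_opted_out) are made up for illustration, and I am assuming NUnit-style [SetUp]/[Test] attributes; the same shape works with any test framework.

    using NUnit.Framework;

    // Hand-rolled fake so the sketch stays self-contained; in practice this
    // could be a mock from whatever mocking framework you already use.
    public class FakeNotificationService
    {
        public bool SendWasCalled;
        public void Send(string message) { SendWasCalled = true; }
    }

    // Minimal system under test, just enough for the example to run.
    public class OrderProcessor
    {
        public void Process(FakeNotificationService notifications, bool customerHasOptedOut)
        {
            if (!customerHasOptedOut)
                notifications.Send("Your order has shipped.");
        }
    }

    // The base class file: shared setup, with the virtual observe() hook
    // called at the end of Setup (steps 5 and 6).
    public abstract class when_an_order_is_processed
    {
        protected FakeNotificationService notificationService;
        protected bool customerHasOptedOut;

        [SetUp]
        public void Setup()
        {
            notificationService = new FakeNotificationService();
            customerHasOptedOut = false;   // default context: emails go out

            observe();                     // descendants adjust the context here
        }

        protected virtual void observe() { }
    }

    // The extracted test class (steps 2-4, 7, 8): one test per class file,
    // named after the test, overriding observe() to set up the negative case.
    [TestFixture]
    public class and_the_customer_has_opted_out : when_an_order_is_processed
    {
        protected override void observe()
        {
            customerHasOptedOut = true;
        }

        [Test]
        public void execute()
        {
            new OrderProcessor().Process(notificationService, customerHasOptedOut);

            Assert.IsFalse(notificationService.SendWasCalled);
        }
    }

The positive test (an email is sent) would be another small class next to this one that leaves the default context alone and asserts that SendWasCalled is true.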

The downside of this approach is that you end up with quite a few test class files, because each class contains a single test. I am OK with this because I can glance at the test names in Visual Studio and easily see what each one covers.

The upside is that I can easily group my tests together and reuse setup code across multiple tests.

2 comments:

Sir Richard Hoare said...

How about composition, e.g. via a setup helper scoped to the appropriate folder level?
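
(For reference, a rough sketch of the setup-helper idea suggested here, reusing the fake and processor from the sketch in the post; the OrderTestHelper name and its members are hypothetical.)

    using NUnit.Framework;

    // Shared setup lives in a helper that tests compose, rather than inherit.
    public class OrderTestHelper
    {
        public readonly FakeNotificationService NotificationService = new FakeNotificationService();

        public void ProcessOrder(bool customerHasOptedOut)
        {
            new OrderProcessor().Process(NotificationService, customerHasOptedOut);
        }
    }

    [TestFixture]
    public class opted_out_customer_tests
    {
        [Test]
        public void no_email_is_sent()
        {
            var helper = new OrderTestHelper();
            helper.ProcessOrder(true);

            Assert.IsFalse(helper.NotificationService.SendWasCalled);
        }
    }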

Michael said...

Composition is great if you want to delegate responsibility for some behavior to another class; however, inheritance works better for me in the testing scenario in a number of ways. First, I can set up virtual methods, specific to the context of that test, that the descendant can override. This is useful for data setup scenarios where the data needed to support the test changes based on the context.

I can also customize any part of the test setup pipeline this way and set up higher-level contracts that the descendants must follow. This also tells me when I need to stop and create a new level of tests.
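
As a rough illustration, the base class from the sketch in the post could expose more than one hook, so each descendant overrides only the part of the setup pipeline it cares about; the establish_data() name here is made up for the example.

    using System.Collections.Generic;
    using NUnit.Framework;

    public abstract class when_an_order_is_processed_with_data
    {
        protected List<string> orderLines;
        protected bool customerHasOptedOut;

        [SetUp]
        public void Setup()
        {
            orderLines = new List<string>();
            customerHasOptedOut = false;

            establish_data();   // descendant supplies context-specific data
            observe();          // descendant adjusts context-specific conditions
        }

        // The base class defines the setup pipeline (the contract);
        // descendants override only the hooks they need.
        protected virtual void establish_data() { }
        protected virtual void observe() { }
    }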