Monday, March 23, 2009

Using inheritance in your tests

In my previous post I described the new structure of my test code, but what happens when you have multiple tests that share the same setup code with only a slight difference? For example, let's say that in my setup code I am setting up a call to a service that sends out email notifications. In most cases I expect that service to be called; however, I now have a new test that needs the same basic setup but verifies the negative behavior (i.e. that emails are not sent out under a certain condition).

When this situation presents itself, here is what I do (a sketch of the result follows the list):
  1. Create a new folder in Visual Studio named the same as your namespace
  2. Extract the tests from the class file into another class file.
  3. Name the new class file the same as the test name
  4. Inherit from the base class file (i.e. the class file we removed the tests from)
  5. Create a virtual void method in the base class named observe()
  6. Call the virtual method at the end of the Setup method in the base class
  7. Create an override method in the sub-class to set the conditions that would exercise the expected behavior
  8. Name the new Test method in the sub-class execute()
  9. Place all class files into the new folder
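
Here is a minimal sketch of what the layout looks like once these steps are done. The class names, the notification flag, and the assertion are illustrative only; the point is the base class exposing observe() as a hook that the Setup method calls last.


using NUnit.Framework;

namespace spec_for_notification_service
{
    // Base class: holds the shared setup code and exposes the observe() hook.
    public abstract class when_an_order_is_processed
    {
        protected bool notifications_enabled;

        [SetUp]
        public void context()
        {
            // shared setup that every spec in this folder relies on
            notifications_enabled = true;

            // called last so a sub-class can alter the conditions
            observe();
        }

        protected virtual void observe()
        {
        }
    }

    // Sub-class: one test, named for the behavior it exercises.
    [TestFixture]
    public class and_the_customer_has_opted_out_of_email : when_an_order_is_processed
    {
        protected override void observe()
        {
            // the slight difference from the shared setup
            notifications_enabled = false;
        }

        [Test]
        public void execute()
        {
            Assert.IsFalse(notifications_enabled);
        }
    }
}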

The downside of this approach is that you end up with quite a few test class files, because each class contains a single test. I am OK with this because I can scan the file names in Visual Studio and immediately see what each test covers.

The upside is that I can easily group my tests together and reuse setup code across multiple tests.

Sunday, March 22, 2009

My new way of writing tests

After my experience at the TDD Firestarter event in Tampa, FL I changed the way I write tests. Before this event I would write tests that would verify an expectation. This would manifest itself in the name of the test method as verify_that_the_result_of_x_equals_the_expected_value. When I left the TDD Firestarter event my tests looked a little more like context specifications. Below is a sample template of what a class file structure would look like in the new form.


namespace spec_for_message_processor
{
    [TestFixture]
    public class when_the_message_processor_processes_a_valid_message
    {
        [SetUp]
        public void context()
        {
        }

        [Test]
        public void should_call_the_packager_to_package_artifacts()
        {
        }
    }
}

Let's look at this structure and pick it apart a little.
  1. The namespace reflects the spec for what we are testing. This could be a class or a module where we need to test the interactions and the behavior of that class or module.
  2. The name of the class uses the "when x happens" wording. This wording places context around what you are testing.
  3. The name of the method states what should happen given the context; so in this example, when the message processor processes a valid message, it should call a packager to package some artifacts.
This approach reads well from top to bottom, and it makes it easier, in my opinion, to understand the intent and purpose of the code. In my next blog post I will discuss using inheritance in my tests and how that changes what we have written here.

Wednesday, March 18, 2009

An Experiment in Pair Programming

I decided to run an experiment at work: block out an hour a day and publish a request for anyone to pair program with me during that hour. The environment I work in is not very progressive in terms of TDD (Test Driven Development) and XP (Extreme Programming); in my view, people are a bit set in their ways and not very eager to learn new ways of doing things. I thought it would be a good idea to share some of the things I have learned with my co-workers, and what better way than to pair program?

The purpose of this blog post is to catalog my experiences during this experiment. To be honest, I have never worked in an XP environment, so this process is a bit new to me as well.

Day 1

On the first day I had three people show up to pair with me. I had only asked for one person, first come first served, but I was not going to turn people away, so we discussed the project I was working on and walked through the code a bit.

I thought this went well; however, it made me think that in the future I should post a wiki entry explaining what we will be working on that day. That way people have the opportunity to come prepared to write code. As I move through this experiment I am sure I will learn a ton.

Tuesday, January 27, 2009

100% Code Coverage is hard on Legacy Code

I have some legacy code that I am working on that has a simplified contract that looks like this:

virtual void ProcessBatch(OutputBatch batch)



The class that implements this contract is loaded via reflection, and this method is called based on the execution context. I had a requirement to send a message to a queue once this method completed successfully, so I created the following overload:
void ProcessBatch(OutputBatch batch, DataSourceQueue queue)
To make this work I extracted the execution code from the first method into my overload and then called the overload from the first method. This all works fine; however, I have no way to test that the first method calls the second one. To demonstrate, I wrote the following test:
[Test]
public void should_call_the_overload_method_taking_the_queue()
{
    device.ProcessBatch(batch);
    device.AssertWasCalled(x => x.ProcessBatch(Arg<OutputBatch>.Is.TypeOf, Arg<ProviderMailboxQueue>.Is.TypeOf));
}
I must be doing something wrong here, because my brain tells me this should work, but instead I get a message saying the method should have been called once but was not.
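
For reference, the extraction described above looks roughly like this. It is a simplified sketch; OutputDevice and GetQueue() are illustrative names standing in for the real class and for however the queue is actually obtained.


public class OutputDevice
{
    // original entry point, still invoked via reflection; it now just delegates to the overload
    public virtual void ProcessBatch(OutputBatch batch)
    {
        ProcessBatch(batch, GetQueue());
    }

    // the execution code that used to live in the method above, plus the new queue message
    public virtual void ProcessBatch(OutputBatch batch, DataSourceQueue queue)
    {
        // ... process the batch ...
        // ... then send the completion message to the queue ...
    }

    // placeholder for however the queue is obtained (illustrative only)
    protected virtual DataSourceQueue GetQueue()
    {
        return null;
    }
}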

Wednesday, January 21, 2009

So what reads better

I went to the TDD Firestarter this past weekend in Tampa, FL and I definitely walked away with some new techniques for the tool belt. Today I leveraged some of those techniques and I must say the test code reads better. There is no better way to express this than with an example.
I had written some test code approximately a week ago that made sure that file names complied with a file naming standard. I was using the RowTest feature in MbUnit, so I had only a single test to validate the file naming convention for different scenarios. My test looked like this:


[TestFixtureSetUp]
public override void Setup()
{
_mock = new MockRepository();
_testDataDir = TestHelper.GetTestDataDir(this.GetType());
_pathToMessageXml = Path.Combine(_testDataDir.FullName, "message.xml");
_pathToArBlob = Path.Combine(_testDataDir.FullName, "ARBlob.obj");
_pathToWriteCompressedFiles = Path.Combine(_testDataDir.FullName, CompressedFilesPath);

if (!Directory.Exists(_pathToWriteCompressedFiles)) Directory.CreateDirectory(_pathToWriteCompressedFiles);
}

[TestFixtureTearDown]
public override void TearDown()
{
Directory.Delete(_pathToWriteCompressedFiles, true);
}

[SetUp]
public override void TestSetup()
{

_settingsLookupService = _mock.StrictMock<TransmissionSettingsLookupService>();
_jobSystemDb = _mock.StrictMock<IJobSystemDatabase>();
_config = _mock.StrictMock<IConfig>();
_settings = new TestTransmissionSettings(_jobId, PackagingMethod.PGP,
File.ReadAllText(Path.Combine(_testDataDir.FullName, "publickeyusedtoencryptfile.txt")),
File.ReadAllText(Path.Combine(_testDataDir.FullName, "privatekeyusedtosignfile.txt")),
PassPhraseUsedToSignFile,
DataReturnType.FullDetailReport);

using (_mock.Record())
{

SetExpectationsOnConfig(_config, _testDataDir);
SetupResult.For(_jobSystemDb.IsValidJob(_jobId)).Return(true);
SetupResult.For(_settingsLookupService.GetTransmissionSettingsByKey(_jobId.ToString(),
_config)).IgnoreArguments().Return(_settings);
}
}

[RowTest]
[Row(new object[] { ExecutionContext.ProviderMailboxing, PackagingMethod.Zip,
@"PDF_\d+_\d+_\d{4}_\d{2}_\d{2}T\d{2}_\d{2}_\d{2}.zip", DataReturnType.FullDetailReport })]

[Row(new object[] { ExecutionContext.ProviderMailboxing, PackagingMethod.None,
@"835_\d+_\d+_\d{4}_\d{2}_\d{2}T\d{2}_\d{2}_\d{2}.edi", DataReturnType.Edi })]
public void verify_file_naming_convention_based_on_execution_context_packaging_method_and_data_return_type(
    string executionContext, string packagingMethod, string filePattern, string dataReturnType)
{
_settings.SetPackagingMethod(packagingMethod);
_settings.SetDataReturnType(dataReturnType);
var filePatternToExpect = new Regex(filePattern);

using (Stream stream = CreateMessageFromXml(_pathToMessageXml, executionContext))
{

using (_mock.Playback())
{
var factory = new ServiceContextDataReturnMessageFactory();
var parsedMessage = PDFRegenHelper.ParseMessage(stream);
DataReturnMessage message = factory.CreateMessage(parsedMessage, _settingsLookupService, _config,
_jobSystemDb);

IJobPackager packager = GetPackager(packagingMethod, message);
var packagedFile = packager.Package(GetFilesToPackage(dataReturnType),
message.FileNameAndPathOfCompressedFile);

Assert.IsTrue(filePatternToExpect.Match(packagedFile.Name).Success,
string.Format("Expected a file in the format {0}, but received a file with name {1}", filePattern,
packagedFile.Name));
}
}
}


Believe it or not, I received a requirement change for this on the Monday immediately following the TDD Firestarter event, so I figured there was no time like the present to put what I learned into action. My test now looks like this:


[TestFixture]
public class when_generating_the_pdf_file_name: using_the_pdf_regen_helper
{

[SetUp]
public override void Observe()
{
fileNamePattern = @"PDF_\d+_\d+_\d{4}_\d{2}_\d{2}T\d{2}_\d{2}_\d{2}.zip";
settings.DataReturnType = DataReturnType.FullDetailReport;

compressedFileName = PDFRegenHelper.GetCompressedFileName(ExecutionContext.ProviderMailboxing, settings, config);
filesInDirectory.AddRange(testDataDir.GetFiles("*.obj"));

packager = new JobZipPackager();
packagedFile = packager.Package(filesInDirectory, Path.Combine(testDataDir.FullName, @"PackagedFiles\" + compressedFileName));

}

[Test]
public void the_file_follows_the_standard_file_naming_convention()
{

Assert.IsTrue(Regex.IsMatch(packagedFile.Name, fileNamePattern), string.Format("The file {0} does not match the format of {1}", packagedFile.Name, fileNamePattern));

}
}


So tell me what you think... I am leaning toward the new method. The one drawback to this approach is that the differences are defined in the Observe method, which means the RowTest feature goes away in this context; maybe there is a clean way to combine RowTests with this approach, but at first glance I do not see it.

Wednesday, March 19, 2008

Code Reviews using Subversion

I am a big fan of code reviews. Code reviews are important for ensuring good design principles and for making sure that BAC (buggy ass code) doesn't make its way into your application. I have recently adopted a model that I am sure the open source community is familiar with, which is using Subversion patches as the means by which to conduct code reviews. The process is divided into a few component tasks. One of the biggest parts of this process is that the developer making the code change, major system enhancement, or whatever it may be, creates their own feature branch in Subversion. If you are not sure how to create a branch, the TortoiseSVN guide or the Subversion guide (if you are a command line junkie) is informative on this subject.


So let's step through a simple example. Let's say I need to cut a feature branch off of the trunk. I have a local working copy of the trunk in a directory c:\trunk. I then use my tool of choice to create a branch called NewFeatureBranch from this working copy (you are actually creating the branch from the Subversion trunk). Then you create a directory structure of your liking on your local machine to check this new branch out into. What I typically do is create a directory hierarchy that represents the release I am working toward; for example, if I am working on a feature in the 3.1 release of my product I will have a directory named c:\ProductName_V3x0x1\NewFeatureBranch.


The c:\ProductName_V3x0x1\NewFeatureBranch directory is the working copy you will make all of your changes in. Feel free to check in, roll back, or add new libraries in this working copy, or do whatever else you need to implement the feature. When you are ready for code review it is time to do two things. First, update your working copy of the trunk so that it is completely green; in other words, there are no differences between the working copy of the trunk and the Subversion repository. Next, merge from your branch into your working copy of the trunk (or whichever branch you cut from). When you perform this merge you may have conflicts and you will definitely have changes. Address all of the conflicts accordingly and then use Subversion to create a patch from the root of your project. A patch in Subversion terms is simply a collection of all of the diffs of your code. If you follow this process the patch will only contain your diffs, and this patch can be circulated for code review.


The person who receives your patch should always perform an update on the trunk (or wherever you cut your branch from) before performing the review. Once the update is complete the reviewer will, by convention, apply the patch to the root of the project unless you tell them otherwise. This enables the reviewer to see the diffs of your code, make comments in your code, and submit a patch back to you with code changes, comments, etc. This process has worked very well for me in the past, and I continue to use it today.



Friday, February 29, 2008

NANT Scripts Rock

I recently had the fortunate opportunity to be given a project of my own to work on. Agile practices are extremely important to me and I do my best to put them into practice every day. Currently we use CruiseControl.NET as our continuous integration server, and I wanted to get our tests integrated into the process, so I decided to create a new build script and a new CruiseControl project for this purpose. For those of you who are pausing and saying, "What, you do not have your tests integrated into your build?" I agree. Sometimes things take a little longer to get into place than we would like, but we are getting there slowly but surely.


I must say that after my experience today I have an even higher level of respect for build/release engineers. It took me all day to get the NAnt script to do what I wanted and to get the test results output I desired. As a practice we keep all of our test projects in a directory named tests at the root of our directory hierarchy. I created a new solution that contained only the tests I wanted to integrate into the build.


The build script runs MSBuild on the solution file and then does some magic to copy the directory structure into a build-temp directory, which is where nunit-console.exe is run against the test DLLs. The process is broken into a number of tasks: build the solution, copy the test project structure to the build-temp directory, run nunit-console.exe on the test DLLs, and then copy the results of those tests to the test-results directory. The script looks a little like this:


This target copies the built test projects into the build-temp directory


<target name="copy-tests" depends="build">
     <echo message="Copying tests to ${build.dir}\tests" />
     <mkdir dir="${build.dir}\tests"/>
     <copy todir="${build.dir}\tests">
        <fileset basedir="tests" failonempty="true">
          <include name="*PatternOfTestsToInclude*/**"/>
        </fileset>
    </copy>
</target>


This target iterates through the directories and executes nunit-console.exe on each test DLL, then moves the results of each test run to the ${test.results} directory

<target name="tests" depends="copy-tests">
     <echo message="Create directory ${results.dir}"/>
     <mkdir dir="${results.dir}"/>
     <echo message="Create directory ${test.results}"/>
     <mkdir dir="${test.results}"/>
     <echo message="Running Nunit Tests"/>
     <foreach item="Folder" property="foldername">
         <in>
             <items>
                 <include name="${build.dir}\tests\*"/>
             </items>
         </in>
        <do>
            <echo message="Iterating through folder ${foldername}\bin\${build-configuration}"/>
            <foreach item="File" property="filename">
                <in>
                    <items>
                         <include name="${foldername}\bin\${build-configuration}\*Test*.dll"/>
                    </items>
                </in>
        <do>
            <echo message="Running test for file ${filename}"/>
            <echo message="Writing test results to test-results.xml for dll ${filename}"/>
            <exec program="${nunit-console.exe}" failonerror="false" resultproperty="testresult">
                <arg value="${filename}"/>
                <arg value="/xml=test-results.xml" />
            </exec>
            <property name="niceFileName" value="${path::get-file-name-without-extension(filename)}"/>
            <move
                 file="test-results.xml"
                 tofile="${test.results}\${niceFileName}-test-results.xml"
                 overwrite="true"/>
            <fail message="Failures reported in unit test for ${filename}." unless="${int::parse(testresult)==0}" />
        </do>
    </foreach>
</do></foreach></target>


This approach is cool because I can inject into the script the pattern used to select the test groups that belong in my test suite. In my situation this makes sense because I only want the tests that are specific to my project. As long as you use standard naming conventions for your tests, you are good.

