I'm a big fan of NHibernate, and in particular of its Query Object implementation, the ICriteria API. I don't much care for 'magic strings' in my code, though.

LINQ provides a way to strongly type your queries, but there are still times when you want an opaque query language with an object-oriented interface.

With .Net 3.5 came Lambda Expressions, which allow you to strongly type the individual expressions in a query. I started writing some extension methods to allow me to use lambda expressions with ICriteria, so code like:

        .Add(Expression.Eq("Name", "smith"))

... turns into code like:

        .Add<Person>(p => p.Name == "smith")
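The core of the technique is walking the lambda's expression tree to recover the property name and value. Here is a minimal sketch of the idea (the `Person` class and `Parse` helper are illustrative, not the actual library code):

```csharp
using System;
using System.Collections.Generic;
using System.Linq.Expressions;

public class Person
{
    public string Name { get; set; }
}

public static class CriteriaHelper
{
    // Recover the pair ("Name", "smith") from p => p.Name == "smith".
    public static KeyValuePair<string, object> Parse<T>(
        Expression<Func<T, bool>> expression)
    {
        var equality = (BinaryExpression)expression.Body;   // the '==' node
        var member = (MemberExpression)equality.Left;       // p.Name
        object value = Expression.Lambda(equality.Right)    // evaluate the right-hand side
                                 .Compile()
                                 .DynamicInvoke();
        return new KeyValuePair<string, object>(member.Member.Name, value);
    }

    public static void Main()
    {
        var parsed = Parse<Person>(p => p.Name == "smith");
        Console.WriteLine(parsed.Key + " = " + parsed.Value);
    }
}
```

An extension method on ICriteria can then call the ordinary `Expression.Eq(name, value)` with the recovered pair.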

There turned out to be quite a few extension methods to write to cover all the different combinations of ICriteria that can be written (including DetachedCriteria, aliases, subqueries, and so on). I've packaged it into a project on Google Code for anyone who wants to use it:


An Alternative to CodeSmith

I'm a big fan of code-generation for automating boiler-plate code. I used to use CodeSmith for this, but the freeware version is no longer maintained. T4 is my new favourite tool for code generation, and there's a wealth of information available on using it on Oleg Sych's site.

Passing Command-Line Parameters to TextTransform

T4 ships with a command-line host (TextTransform.exe), but there's no obvious way to pass parameters to the templates.

You can pass parameters using the command-line option -a:

    -a mappingFile!c:\myPath\MyClass.hbm.xml
    -out c:\myGeneratedFiles\MyClass_Generated.cs

In order to retrieve the parameter from inside the template, you can use a little reflection:

private string GetCommandLineProperty(string propertyName)
{
    // "Parameters" is an internal property on the command-line host
    PropertyInfo parametersProperty = Host.GetType()
        .GetProperty("Parameters",
                BindingFlags.NonPublic | BindingFlags.Instance);

    StringCollection parameters = (StringCollection)parametersProperty
            .GetValue(Host, null);

    foreach (string parameter in parameters)
    {
        string[] values = parameter.Split('!');
        int length = values.Length;
        if (values[length - 2] == propertyName)
            return values[length - 1];
    }
    return null;
}

Replacing NHibernate Magic Strings

Now I can loop through my .hbm.xml files in my build, generating the 'magic strings' for the properties of each class (example template and project is linked below).

This turns code like:

IList<Media> mediaWithNameAndType = session
  .CreateCriteria(typeof(Media))
  .Add(Expression.Eq("OwningLibrary", this))
  .Add(Expression.Eq("Type", media.Type))
  .Add(Expression.Eq("Name", media.Name))
  .List<Media>();

Into code like:

IList<Media> mediaWithNameAndType = session
  .CreateCriteria(typeof(Media))
  .Add(Expression.Eq(Media.Properties.OwningLibrary, this))
  .Add(Expression.Eq(Media.Properties.Type, media.Type))
  .Add(Expression.Eq(Media.Properties.Name, media.Name))
  .List<Media>();
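Behind the scenes, the template just emits a nested class of string constants for each mapped class. A hypothetical sketch of the generated code (names taken from the example above; the real template output in the linked project may differ):

```csharp
// Generated code (sketch): one string constant per mapped property.
public partial class Media
{
    public static class Properties
    {
        public const string OwningLibrary = "OwningLibrary";
        public const string Type = "Type";
        public const string Name = "Name";
    }
}
```

A typo in a property name now becomes a compile error instead of a runtime query failure.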




In a previous post I demonstrated testing a Compact Framework application using Passive View. I also mentioned that I would use the same technique to test the presentation layer on other platforms. So to put that to the test I migrated that simple application to Silverlight.

Testing Platform

There are many more testing options for Silverlight than there are for the Compact Framework. If you're using the IDE, you might consider reading this for some of your options.

I chose to test my POCO controller classes inside the regular .Net framework using NUnit.

Instead of retargeting, I cross-compiled the controller class to the 'real' .Net framework. The view controls in Silverlight are a subset of the complete WPF controls, so they can be substituted in the test environment to simulate the runtime controls.

The only slight hindrance I came across was that when you attempt to use the WPF controls in NUnit you get the exception "InvalidOperationException : The calling thread must be STA". The controls insist on running on an STA thread, which requires a .config file for the test assembly containing the following:

        <add key="ApartmentState" value="STA" />
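For completeness, the whole test-assembly .config file might look like this (assuming NUnit 2.x, which reads the setting from a 'TestRunner' section):

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <sectionGroup name="NUnit">
      <section name="TestRunner"
               type="System.Configuration.NameValueSectionHandler" />
    </sectionGroup>
  </configSections>
  <NUnit>
    <TestRunner>
      <!-- WPF controls insist on an STA thread -->
      <add key="ApartmentState" value="STA" />
    </TestRunner>
  </NUnit>
</configuration>
```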

Passive View

As previously, the view is completely dumb. It contains no logic, just the controls that will be manipulated by the controller. The XAML is kept simple with mostly positioning logic.

<StackPanel x:Name="LayoutRoot">

    <TextBlock Text="Demonstration of Passive View to test
        a Silverlight client" />

    <TextBlock x:Name="Message" />

    <Button x:Name="ShowMessage"
        Content="Show Message"
        Width="150"  Margin="20"/>

</StackPanel>

I exposed the controls as public fields on the View class. These were populated in the 'Loaded' event (note, this only fires in the Silverlight runtime - not the tests), then the controller was wired up:

public class MainView : UserControl
{
    public Button ShowMessage;
    public TextBlock Message;

    public MainView()
    {
        Loaded +=
            new RoutedEventHandler(UserControl_Loaded);
    }

    private void UserControl_Loaded(object sender,
        RoutedEventArgs e)
    {
        ShowMessage = (Button)FindName("ShowMessage");
        Message = (TextBlock)FindName("Message");
        new MainController(this);
    }
}

During the tests a testable view class simply populates the fields with 'empty' controls allowing the controller logic to be tested, without requiring a full presentation runtime.

public class TestableView : MainView
{
    public TestableView()
    {
        ShowMessage = new Button();
        Message = new TextBlock();
        new MainController(this);
    }
}

Migration from Windows Forms

The only remaining task was to migrate the namespaces to WPF from Windows Forms. Mostly this was just a search/replace exercise (with the compiler telling me everywhere something had changed):

  • Namespace moved to System.Windows;
  • Enabled property is now IsEnabled property;
  • MessageBox.Show has slightly fewer options;
  • Setting colours now uses a Brush class;
  • etc., ...

With just a little more work, you could wrap each control in a common interface (e.g., IButton, IComboBox), then the same controller could be used to run a view on multiple platforms (Silverlight/WPF/Windows Forms). You might even be able to use the controller to run a web application using a framework like Visual WebGui (which makes coding a web application magically like coding a Windows Forms application).

For a larger application I would definitely do this extra work - it is bound to pay itself back at some point, not least of all because you could have stubbed control implementations in the tests instead of the 'real' ones.


The modifications to move the demo from Windows Forms to Silverlight were a rewriting of the View (inevitable), and some minor syntax changes in the controller.

On a larger application the ability to keep 90% of your code (and the tests!) when migrating to another platform is invaluable, and Passive View is the best way I've seen of writing (and testing) the presentation layer.

Silverlight Passive View Demo
download, unzip, run CommandPrompt.bat, and type 'NAnt'

View Silverlight Passive View Demo Online


Richard's "HTML is crap" post got me thinking about its real underlying message. Yes, technology has moved on a lot in recent years, to the point where the old reasons for choosing those types of application [HTML applications] have largely gone. However, this isn't the first or the last time software people will have this frustration, and here's why: too many business people come to us with software solutions rather than business problems! I've no idea why this has happened, other than some mistaken idea that software is easy and business is hard. In my experience, when customers bring me software solutions rather than business problems, the final software I'm able to deliver is invariably poorer than it should be. So the next question is: how can we stop people bringing us their half-baked solutions...


OK, so it's not crap. It's great for web sites, disseminating and formatting information, and linking content across the entire world. In fact, it's pretty amazing. But ... it is crap for writing applications.

Over the years we have jumped through all sort of hoops to get round the (extreme) limitations of HTML and browser technology. We have invented ways of maintaining user state by keeping cookies on browsers, by adding cryptic information to URLs, by posting hidden values in forms, and by restoring large state objects on the server to overcome some of these problems.

We have invented increasingly complicated frameworks to allow us to split presentation logic over two physical tiers with a (typically) compiled language running on the server while Javascript manipulates widgets on the client, and then found more and more ingenious ways of making this seem slick to the user. We have made code more complicated by the fact that these two separate tiers have to communicate in some fashion, and we convince ourselves that the solution is elegant.

As soon as we have to target more than one type of browser (or even different versions of the same browser) it is not going to look the same on what are essentially different platforms. (Not to mention the difficulties of actually writing and debugging for two versions of IE on the same machine.) You would have to be nuts to choose a platform where the output will be unpredictable, right?!

A common argument with the customer might go something like this:

Programmer: You have high expectations of usability. It is best to use a rich client.
Customer: It needs to be a web (HTML) application.
Programmer: Um ... why?
Customer: Because that's what we want.
Programmer: You do realise it will take twice as long to write, and be less usable?
Customer: It needs to be a web (HTML) application.
Three months later the programmer is blamed because the product isn't amazingly slick and is slightly behind the time-scales, and the customer is insisting that they need their spelling mistakes underlined 'just like MS Word'!

Another common reason for a customer to insist on a web application is for ease of deployment, but it is not hard to push rich applications to the desktop using technologies like ClickOnce these days either.

The only people who most often have a genuine need to target such a low common denominator are those writing applications that target the Internet (e.g., Amazon, eBay, etc.) But even then it is clear that for years the best-looking stuff on the web has been Flash content.

In summary:

Choose WPF;
Choose Windows Forms;
Choose Silverlight;
Choose Flash;
Just don't choose HTML.

Choosing to write an HTML application, especially when you are going to deploy in a company intranet, is nothing short of insanity these days ... it's long past the time for developers to make customers realise this.


The Challenge

You have a Compact Framework application to write. You've chosen to make it a rich client (Windows Forms), and you want a suite of automated tests covering the code.

Passive View

I investigated the options for testing directly on the Compact Framework itself, but this turns out to be tricky and ties you to running inside the emulator (a virtual machine).

However the Compact Framework uses a .Net feature called retargeting. This allows your Compact Framework code to be run in the full .Net environment as long as it doesn't reference any classes that are unique to the Compact Framework (e.g., the SIP).

If you use Passive View, you can take your controller class and test it in the regular .Net Framework using NUnit leaving only a tiny bit of code in the View that is specific to the Compact Framework.

Creating the View

In the example code below I created a simple GUI that would show or hide a message, allow the user to change the colour of the message from a drop-down, and would prompt the user for confirmation before hiding the message. This provides a simple enough example to cover in a blog, while being complex enough to demonstrate several useful techniques for testing using Passive View.

The view can be created directly in Visual Studio (with the Windows Mobile 6 SDK installed). The designer gives you an excellent rendition of a PDA client, making it easy to design a user interface (notwithstanding that WPF would have been nicer - but it hasn't been implemented on the Compact Framework yet).

To create the view:

  • Create a Form;
  • Create the controls;
  • Name the controls, and mark them with the modifier 'Public';
  • Wire up a controller when the View is instantiated.

To wire up a controller, create a controller class (which is going to hold all the application's presentation logic), and construct it when the View is instantiated.

internal class MainController
{
    private MainView _view;

    public MainController(MainView view)
    {
        _view = view;
    }
}

public partial class MainView : Form
{
    public MainView()
    {
        new MainController(this);
    }
}

And that's it. No other logic should exist in the View.

Handling User Input

Most of the user input can be handled directly on the View in the tests. To simulate someone typing into a text-box, simply set the Text property. To simulate someone making a selection from a drop-down, set the SelectedIndex property.

Simulating a button 'click' requires a tiny bit of reflection to invoke the protected method:

private void Click(Button button)
{
    if (!button.Enabled)
        Assert.Fail("Attempt to click button '"
            + button.Text
            + "' while it is not enabled");

    MethodInfo onClick = button.GetType()
        .GetMethod("OnClick",
            BindingFlags.NonPublic
                | BindingFlags.Instance);
    onClick.Invoke(button, new object[] { null });
}

You end up with a fairly readable test:

public void Test_WhenShowMessageIsClicked_Then_…
{
    MainView view = new TestableView();

    Click(view.ShowMessage);

    // assert on the view's state here, e.g. that the message is now visible
}

Handling the Visible Property

You can set most properties directly on the real framework classes. However, some properties don't always behave correctly outside of the Windows Forms runtime. The Visible property is one such case.

You could mock/stub the Visible property on the controls, however typically these might get called many times in a single test; mocking could make your tests very brittle. Alternatively you could have an adapter/proxy for each of the controls that let you simulate events/properties.

I find it easier to have a fake implementation I can use just inside the tests. In this example I wrapped all calls to the Visible property with a (virtual) method:

public partial class MainView : Form
{
    public virtual void SetVisible(
                            Control control,
                            bool    isVisible)
    {
        control.Visible = isVisible;
    }

    public virtual bool IsVisible(Control control)
    {
        return control.Visible;
    }
}

This was replaced with a fake implementation in the tests:

public class TestableView : MainView
{
    private Dictionary<Control, bool> _controlVisibility
            = new Dictionary<Control, bool>();

    public override void SetVisible(
                            Control control,
                            bool    isVisible)
    {
        _controlVisibility[control] = isVisible;
    }

    public override bool IsVisible(Control control)
    {
        if (!_controlVisibility.ContainsKey(control))
            Assert.Fail("Control '" + control.Name
                + "' visibility has not been set by the test");

        return _controlVisibility[control];
    }
}

Handling User Interaction

Another thing that needs to be tested is user interaction (e.g., clicking yes/no on a message box). While you could use a test double/stub for this, it does tend to get a little tedious (you typically end up with a different stub method for every test).

It is better to mock the user input by (in this case) creating a proxy for the message box class instead of calling the (hard-to-test) static methods directly:

public class DialogHandler
{
    public virtual DialogResult ShowMessageBox(
        string                  text,
        string                  caption,
        MessageBoxButtons       buttons,
        MessageBoxIcon          icon,
        MessageBoxDefaultButton defaultButton)
    {
        return MessageBox.Show(text, caption,
            buttons, icon, defaultButton);
    }
}

In this example I've used the excellent Rhino Mocks to mock the DialogHandler implementation. So you can set expectations, and their responses, directly in the tests:

public void Test_WhenHideMessageIsClicked_…
{
    MockRepository mocks = new MockRepository();
    DialogHandler dialogHandler =
        mocks.DynamicMock<DialogHandler>();
    MainView view = new TestableView(dialogHandler);

    // argument values here are illustrative
    Expect.Call(dialogHandler.ShowMessageBox(
                "Are you sure?",
                null,
                MessageBoxButtons.YesNo,
                MessageBoxIcon.Question,
                MessageBoxDefaultButton.Button1))
        .Return(DialogResult.Yes);
    mocks.ReplayAll();

    Click(view.HideMessage);

    mocks.VerifyAll();
}

Now you have the highest possible test-coverage on your presentation layer, without resorting to (heavyweight) full integration/acceptance tests.

The example code (link below) demonstrates each of the above sections using:

  • NAnt and NUnit - to automate the build and tests;
  • Retargeting - to allow the Compact Framework classes to be tested inside the regular .Net Framework;
  • Passive View - to allow testing of as much of the application as possible while not requiring the Windows Forms runtime;
  • Test Double - (specifically, a fake) to allow testing of properties that normally only work in the Windows Forms runtime;
  • Mocks - using the excellent Rhino Mocks to test user interaction in the tests.

Worth noting is that there is nothing here that is special to the Compact Framework. I would use exactly the same technique to test any presentation layer (e.g., WPF/Silverlight). The Compact Framework is merely a useful demonstration of where the real runtime is not easy to use, and these design patterns and testing techniques shine.

Passive View Demo
download, unzip, run CommandPrompt.bat, and type 'NAnt'


In my continuing adventures with legacy code, I've inherited a code base that initially had over 500 compiler warnings. Many of these warnings were pretty straightforward to fix, but some of them are proving a little more interesting. My current personal favourite is CS0252: 'possible unintended reference comparison'.
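For anyone who hasn't met CS0252, a minimal reproduction (the values here are illustrative):

```csharp
using System;

class Program
{
    static void Main()
    {
        // A string instance distinct from the interned literal "smith".
        object boxed = new string("smith".ToCharArray());
        string text = "smith";

        // This line triggers warning CS0252: because the left side is
        // typed 'object', '==' compares references, not string contents.
        bool byReference = boxed == text;

        // An explicit value comparison is almost always what was meant.
        bool byValue = string.Equals((string)boxed, text);

        Console.WriteLine(byReference);  // the two instances differ
        Console.WriteLine(byValue);      // the characters match
    }
}
```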

I've got around half a dozen places where it's really unclear exactly what the original programmer was intending. Was he/she really meaning to do a reference comparison? Unlikely. So now I've got an interesting predicament: do I change the code to remove the warning, or do I leave it as is and accept that, as it's a legacy system, the code must work fine? Safety (and, sadly, professionalism) must win, and in these situations I must leave the code as is until I get round to covering the relevant code with tests. However, it's like a real stone in my shoe, and I've got to ask: wouldn't it have been so much nicer if the original programmer had simply looked at the warning and thought, "I don't think that's quite right..."?

So the moral of the story is: please pay attention to your compiler warnings. They'll help you identify potential bugs in your code, and they might even help you write code that's easier to understand and consequently more maintainable.


In my professional life I'm finding it's more and more common for me to be working with a legacy code base. Whether that's because a system we've written is simply long-standing and has become legacy or whether it's because we've inherited a system to support and maintain, the problems you encounter are pretty similar. However I think it's worth pointing out one of the major differences, and why it's sometimes better to be involved in the latter situation rather than the former.

I'm a big fan of egoless programming, but trying to separate people from their code can be hard to achieve. Indeed, if you were responsible for creating the system that has become legacy, it can be extremely difficult to be entirely honest with yourself about what state the system is in. That's not to say people cannot admit their own mistakes, just that sometimes, because your head is filled with all the history of why certain decisions were taken, it can be difficult to look at things entirely objectively. Ultimately, the system is where it is, and how it got there is relatively unimportant.

As it transpires though, I'm currently involved in working with a legacy system where I've inherited the code. The original developers are gone and whilst their experience and skills are most definitely missed, their absence allows a certain amount of freedom in moving things forward.

I think being able to look at a system without the baggage that often builds up helps in two ways:

  1. You don't have any pre-existing priorities about the things you want to fix - I know that whenever I deliver any system there are always things I'm not happy with and this can sometimes cloud my judgement about what is really important.
  2. You don't worry about upsetting anyone - I know from painful experience that developers can be sensitive souls and sometimes they simply cannot be separated from their code.

So from my experience I'm convinced that it's always nicer not to have had a hand in the original system development if you're going to be moving it forward...


I've just recently downloaded Windows Live Writer to help me out with my blog posting... maybe it'll make me write more stuff!

This is just a wee test post to make sure it works like I want.


NAnt vs MSBuild

I like to use NAnt for building my .Net projects.

I am not a huge fan of the IDE (Visual Studio), but I am well aware that I am in the minority here.

So if I plan to use NAnt on a project, I've got to do it in such a way that the other developers can still use the features of the IDE, mainly intelli-sense and the Error List window.

I've been using a very small .csproj file (described below) which calls out to NAnt, and a small custom MSBuild task to log errors so that the IDE will display them correctly.

Adding the source-code

Starting with a blank project file, you can add all the C# files with a single MSBuild entry like this:

<Compile Include="**\*.cs" />

And that's it! Now you can browse all your C# code using the solution explorer. The intelli-sense will recognise your classes and start auto-expanding as you type.

I typically add nodes for each file-type I want to be able to edit: .cs, .build, .xml, .config, etc.
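Putting that together, the ItemGroup might look something like this (the file patterns are illustrative):

```xml
<ItemGroup>
  <!-- compiled source, picked up by intelli-sense -->
  <Compile Include="**\*.cs" />
  <!-- editable in the IDE, but not compiled -->
  <None Include="**\*.build" />
  <None Include="**\*.xml" />
  <None Include="**\*.config" />
</ItemGroup>
```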


Adding references

The IDE will use intelli-sense for the classes defined in your C# code files. You also want it to pick up external references. Add a line for each reference you need intelli-sense for in another ItemGroup:

<Reference Include="NHibernate" />

The IDE will look for the assemblies under the path defined in the <OutputPath> node near the top of your project file. If you want, you can add a <HintPath> sub-element:

<Reference Include="NHibernate">
  <!-- path to the assembly (illustrative) -->
  <HintPath>SDKs\NHibernate\NHibernate.dll</HintPath>
</Reference>

Now you'll have intelli-sense for your referenced assemblies too.

Running NAnt

When Visual Studio builds it runs the 'DefaultTargets' for the project (usually an MS-defined target named 'Build'). When Visual Studio cleans it runs a target named 'Clean', and when it re-builds it runs the target named 'Rebuild'.

You can change the 'DefaultTargets' for the project at the top of the project file:

<Project DefaultTargets="NAntBuild" ...

Then add a target that uses the Exec task to call NAnt:

<Target Name="NAntBuild">
  <Exec Command="SDKs\nant-0.85\bin\NAnt.exe" />
</Target>

Now the solution will build using NAnt. Add targets for 'Clean' and 'Rebuild' too if you want to do these via NAnt.
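For example, assuming your NAnt script has corresponding 'clean' and 'rebuild' targets (the NAnt target names here are an assumption):

```xml
<Target Name="Clean">
  <Exec Command="SDKs\nant-0.85\bin\NAnt.exe clean" />
</Target>

<Target Name="Rebuild">
  <Exec Command="SDKs\nant-0.85\bin\NAnt.exe rebuild" />
</Target>
```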

Note, when you customise your targets the IDE will give you a security warning. Since you customised it yourself, you can safely ignore this warning.

Error List

So far this works, but build errors are only found by examining the output window. It would be nice to have errors reported in the 'Error List' window so that you can just double-click them.

We wrote a custom MSBuild task called ExecParse, which inherits from 'Exec', to allow parsing of the output and logging of errors to the MSBuild engine.

The configuration for ExecParse takes a regular expression to search the output for, and allows logging of errors using text from captures in the regular expression.

A typical compiler error looks like:

[csc] c:\...\MyFile.cs(24,60): error CS0246: ...

So we can create a regular expression that captures this, and log an error using the MSBuild engine:

<UsingTask AssemblyFile="...\ExecParse.dll"
           TaskName="ExecParse.ExecParse" />

<PropertyGroup>
  <ExecParseConfiguration>
    <Search>\[csc\] (.*?)\((\d+),(\d+)\): error ([^:]*): (.*?)[\n\r]</Search>
  </ExecParseConfiguration>
</PropertyGroup>

<Target Name="NAntBuild">
  <ExecParse Command="SDKs\nant-0.85\bin\NAnt.exe"
             Configuration="$(ExecParseConfiguration)" />
</Target>

Now you have a NAnt build, building within the IDE, using intelli-sense, and logging errors to the 'Error List' window.




NHibernate uses an Identity Map to maintain a single instance of each persistent entity in memory. This allows you to run round a domain model without having to worry about which instance of an entity you are modifying.

NHibernate also uses Lazy Load. The default implementation for this uses Dynamic Proxy (DP) to generate a class at runtime that inherits from your real class. This generated class intercepts all of the overridable methods and properties in your class to load the instance from the database if required.

What Identity Map Allows

Identity Map allows you to load up the same object twice but to have your references point to the same underlying instance.

The following example NUnit test demonstrates loading 3 objects, two of them with an ID of 1, and the other with an ID of 2:

public void TestIdentityMap()
{
    MyClass myInstance1 = Session.Load<MyClass>(1);
    MyClass myInstance2 = Session.Load<MyClass>(1);
    MyClass myInstance3 = Session.Load<MyClass>(2);

    Assert.AreSame(myInstance1, myInstance2);
    Assert.AreNotSame(myInstance1, myInstance3);
}

The following diagram shows how the references relate to the objects created on the heap.

The Identity Map ensures that there is only one instance of the object with ID equal to 1 in memory.

Under the hood of NHibernate

The basic principle of Identity Map is easy to pick up. However, there is a further complication because of the implementation of the proxies used for Lazy Load.

When you ask NHibernate for an object, it returns you an instance of a class created at runtime by DP. This class intercepts any overridable methods/properties and redirects them to a second object instance that has the 'real' class type.
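To make the redirection concrete, here is a hand-rolled stand-in for the kind of class Dynamic Proxy generates at runtime (a sketch only; the real generated class intercepts every virtual member, and `Entity` and `EntityProxy` are illustrative names):

```csharp
using System;

public class Entity
{
    public virtual string Name { get; set; }
}

// Sketch of a runtime-generated proxy: it subclasses the real type and
// forwards every virtual member to a second, lazily-loaded instance.
public class EntityProxy : Entity
{
    private readonly Func<Entity> _load;  // stands in for the database hit
    private Entity _real;

    public EntityProxy(Func<Entity> load)
    {
        _load = load;
    }

    private Entity Real
    {
        get
        {
            if (_real == null)
                _real = _load();          // load on first access
            return _real;
        }
    }

    public override string Name
    {
        get { return Real.Name; }         // intercepted and redirected
        set { Real.Name = value; }
    }
}
```

Note that there really are two objects on the heap: the proxy your reference points to, and the 'real' instance it delegates to.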

So in the above example, the following is actually created on the heap:

Limitations of the current implementation

Since there are really two instances created on the heap when NHibernate gives you a proxy, there is an identity problem that you could run into.

Problem 1, the 'this' pointer:

Consider the following class:

public class MyClass
{
    public virtual bool IsMe(MyClass anotherMyClass)
    {
        return anotherMyClass == this;
    }
}

The following test will fail because the reference myInstance points to the proxy, while the 'this' pointer is the real class:

public void TestIsMe()
{
    MyClass myInstance = Session.Load<MyClass>(1);

    Assert.IsTrue(myInstance.IsMe(myInstance));
    // The above line will fail
}
It is easy to imagine an example where a child entity notifies its parent of an event, and the parent then updates all its children except the child that initiated the event. The parent or child might have to be careful about reference comparison in such an example.

Problem 2, private methods/fields:

NHibernate relies on DP intercepting access to an instance of a class, and redirecting these calls to the 'real' instance. However DP is powerless to intercept private field and method access.

Situations where this could occur might be:

  1. sibling objects of a common parent object that communicate with each other;
  2. parent/child objects of the same type that communicate with each other (e.g., a tree);
  3. static methods that reference an instance of the same type.

Consider the following class:

public class MyClass
{
    private int _calculation = -1;

    public virtual int Calculation
    {
        get { return _calculation; }
    }

    public static void SetCalculation(
                                MyClass instance,
                                int calculation)
    {
        instance._calculation = calculation;
    }
}

The following test will fail because the static method sets the field on the proxy object rather than on the 'real' instance, while the property accessor is intercepted and redirected to the 'real' instance.

public void TestCalculation()
{
    MyClass myInstance = _session.Load<MyClass>(1);
    MyClass.SetCalculation(myInstance, 7);

    Assert.AreEqual(7, myInstance.Calculation);
    // the above line will fail
}

It is not uncommon for a class to have an interface for talking to other instances of itself while keeping that interface private from outsiders.


I suspect NHibernate could be improved to make the proxies self-proxy. If this could be done then there would be no second instance created, and no potential identity problem.

To avoid problems with the 'this' pointer, simply try to avoid using it for comparison with another object.

To avoid problems with one instance of a class talking to another instance of the same class, prefer virtual methods over non-virtual ones to allow Dynamic Proxy to intercept any calls made on the instance. (i.e., avoid using private fields or methods from one instance to another instance of the same class.)
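For example, the earlier MyClass could route the static method through a virtual member so a proxy gets the chance to intercept the write (a sketch; the setter here is an addition to the original class):

```csharp
public class MyClass
{
    private int _calculation = -1;

    // Both accessors are virtual, so a runtime proxy can intercept
    // reads and writes and forward them to the 'real' instance.
    public virtual int Calculation
    {
        get { return _calculation; }
        set { _calculation = value; }
    }

    public static void SetCalculation(MyClass instance, int calculation)
    {
        // Goes through the virtual property instead of the private field,
        // so the write can no longer bypass a proxy.
        instance.Calculation = calculation;
    }
}
```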

A complete example of the above, with failing NUnit tests, can be downloaded here:
