Broloco

As I explained previously, I tend to complete an Iteration 0 before the rest of the development team starts work on a project, as I feel it helps ensure the team can be as productive as possible from day one.  Normally I spend this time coding alone, consulting with others as required, putting the structure of the system in place.  I make decisions about the main structural patterns we're going to use (e.g., Domain Model, Service Layer, etc.) and the technologies (e.g., NUnit, NHibernate, Spring.NET, etc.), but I try to defer as many decisions as I reasonably can.  If I don't spend this time, I've found that Iteration 1 is spent arguing about the best way to do just about everything, and frankly, when you're working on a fixed-price contract you just can't afford that.

Once Iteration 0 is complete and the rest of the development team starts, they concentrate on cranking out functionality as quickly as they can.  They don't really need to know about the dependency injection framework, or the fancy AOP aspects intercepting calls to provide comprehensive logging across all the exposed services.  The project naturally gathers pace, and for most of the developers the plumbing code is a bit like magic: it just works!  They don't know how and, to be honest, they don't need to.  It's at this point that the warning signs appear.  If you're the Technical Lead and nobody on your development team is starting to ask those awkward questions that make you re-evaluate your decisions, like "why is the logging done like this?" or "why aren't we using Fluent NHibernate?", start to worry.  If nobody on the development team other than you is taking the time to understand what's going on under the hood and questioning your decisions, rest assured that if you step off that project, monsters will be eating your bridges in no time at all.

Now, on one of the systems I've built, when I stepped off and went back later to make a bug fix, there was no sign of monsters.  So the question is: what was different?  The answer is that on that project one of the less experienced developers on the team started asking awkward questions, and I started answering them.  Some of the questions were straightforward to answer, but some made me really question the decisions I'd made.  The less experienced developer started to understand not only how things were put together but why.  They started to learn, fast!  When the time came for me to step off the project they were capable of assuming the role of Technical Lead, and the "Sim City" effect didn't happen.  Sure, when I returned things had changed, but there were no monsters in sight.


I tend to fulfil the technical lead role on the projects I work on.  This role means that I'm ultimately responsible, technically, for the system.  Put simply: the buck stops with me!

As such, at the start of each new project I tend to spend a week or so working long hours getting the "framework" of the application in place, attempting to provide the simplest end-to-end example I can of how the system will be structured.  I also used to write a document that explained the shape of the system, but I've found, through costly experience, that nothing matters more than an end-to-end example in code.

Despite protestations from the "truly agile", I believe this time is well spent getting the various architectural elements of the system, the levels of testing, and the tools we are going to use in place, and thankfully I'm not alone in this.  Over the course of the project these decisions will bend, and some may be completely reversed, but largely they'll stay with the project until its dying day.  Once the framework is in place, more junior developers can be productive very quickly and concentrate primarily on business functionality rather than technical "plumbing".

Typically, I work on the project through to completion, staying on as a senior programmer.  Unfortunately, sometimes I'm forced to "step off" a project before its completion.  It is when this happens that I tend to see something I've lovingly termed the "Sim City" effect.

For those who remember playing Sim City this scenario may sound familiar…

In the older versions of Sim City (I've no idea about the newer versions as I moved on to Championship Manager when I hit 12!), you could spend the first couple of hours of the game getting into loads of debt building your beautiful city utopia.  All shiny and new, you marvelled at how good your Police and Fire Service coverage was, and you watched the taxes roll in from happy citizens.  It was around this time that typically your mother shouted you for dinner and you left the city to its own devices.  An hour or so would go by and eventually you'd head back to your bedroom and your beautiful utopia.  But what's this?  Monsters eating your previously beautifully crafted bridges, mass civil unrest, and power cuts all over the city.  Despair typically follows... followed by a shutdown!

The same feeling of despair I got when I returned to my once-utopian city can also occur when I return to a project I've started but never got to finish.  My once beautifully crafted tests are now broken or, worse, ignored.  CruiseControl is shot to bits, and where previously RhinoMocks nicely mocked out underlying system layers to facilitate true unit tests, these have been superseded by hand-cranked stub classes that have roughly as many lines of code as there are stars in the universe!

Martin Fowler talked about Test Cancer and, despite some rather emotive language, I think that is just one of the "monsters eating your bridges" that you can see.  Architectural and methodological deviation in general tends to be commonplace.  That's not to say it happens all the time, and it's worth remembering that utopia is in the eye of the beholder, but in my experience the "Sim City" effect is all too common.


Introduction

If you have a Domain Model, you're going to have code that validates rules and checks for inconsistencies. Methods in the Domain Model are going to have to indicate failure in some way (often corresponding to an exception scenario in a use case).

My personal preference is always to start with the simplest way of handling business logic errors: throw an exception.
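
For example, a domain method enforcing a uniqueness rule might look something like this (a minimal sketch; Library, Book, and BookTitleNotUniqueException are illustrative names, chosen to match the presentation example further down):

using System;
using System.Collections.Generic;

public class BookTitleNotUniqueException : Exception
{
    public BookTitleNotUniqueException(string title)
        : base("A book titled '" + title + "' already exists") { }
}

public class Book
{
    public string Title;
    public string Author;
    public string Description;
}

public class Library
{
    private readonly IList<Book> _books = new List<Book>();

    public Book CreateBook(string title, string author, string description)
    {
        // Business rule: book titles must be unique; fail fast if not.
        foreach (Book existing in _books)
            if (existing.Title == title)
                throw new BookTitleNotUniqueException(title);

        var newBook = new Book { Title = title, Author = author, Description = description };
        _books.Add(newBook);
        return newBook;
    }
}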

Avoid Return Codes

We know that the wrong way to indicate a problem is by using an error return-code from each function. The call stack can be quite deep, and you cannot (and should not) rely on the caller of each function to check every return value.
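
To see why, consider a contrived sketch (the names are illustrative): with return codes, nothing forces any caller in the stack to notice the failure.

using System.Collections.Generic;

public class ReturnCodeLibrary
{
    public enum CreateResult { Success, TitleNotUnique }

    private readonly List<string> _titles = new List<string>();

    public CreateResult CreateBook(string title)
    {
        if (_titles.Contains(title))
            return CreateResult.TitleNotUnique;   // every caller must remember this check

        _titles.Add(title);
        return CreateResult.Success;
    }
}

// Somewhere up a deep call stack:
//     library.CreateBook(title);   // result discarded; the failure silently vanishes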

We also know that failing fast is (perhaps unintuitively) the way to write more stable applications.

Exceptions are a language mechanism that is perfect for indicating that something has failed. The semantics allow you to 'get the hell out of Dodge' without having to worry about handling the failure, or how deep in the call stack you are. If the exception is not handled correctly, it will expose the problem in an obvious way.

Additionally, throwing an exception means that you don't have to worry about the code that would have been executed next. This can be an advantage where:

  • The subsequent code would otherwise have failed (e.g., if the validation found that a string was not unique and inserting it into a database would cause a unique-key violation);
  • The subsequent code used a non-transactional resource (e.g., a synchronous external web service, or the file system).

Catching Exceptions in the Presentation

When you're handling exceptions in the Presentation layer you have an opportunity to concisely handle the specific scenarios you know about while elegantly ignoring any others:

public void CreateButton_OnClick()
{
    try
    {
        library.CreateBook( View.BookTitle,
                            View.BookAuthor,
                            View.BookDescription);
    }
    catch (BookTitleNotUniqueException)
    {
        View.ShowMessage("The supplied book title is already in use");
    }
}

The above try-catch construct provides a way to indicate that there is a programmed response when the supplied book title is not unique. More importantly, it indicates that any other error should automatically be propagated up the call chain.

Service Layer

Many applications still separate the presentation from the domain with a (remote) Service Layer. However, most remoting technologies (e.g., WCF Web Services) don't play well with exceptions.

Typically the Service Layer will catch the exception, roll back any transaction, and use a custom DTO or framework-provided mechanism to pass the details of the exception to the client. (WCF supplies the FaultContract mechanism for this.)

The DTO should be turned back into an exception on the client-side (using AOP/interception) so that it will still be propagated up the call chain if it is not handled. When using WCF it is advisable to convert the FaultException object back into a regular exception to prevent coupling your presentation layer to WCF.
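
For illustration, the service-side translation might look something like this (BusinessFault and the service names are assumptions made for the sketch; BookTitleNotUniqueException is the same illustrative exception as in the presentation example above):

using System;
using System.Runtime.Serialization;
using System.ServiceModel;

// DTO carrying the failure details across the wire.
[DataContract]
public class BusinessFault
{
    [DataMember]
    public string Message { get; set; }
}

[ServiceContract]
public interface IBookService
{
    [OperationContract]
    [FaultContract(typeof(BusinessFault))]
    void CreateBook(string title, string author, string description);
}

public class BookService : IBookService
{
    public void CreateBook(string title, string author, string description)
    {
        try
        {
            // ... call into the domain, e.g. library.CreateBook(...) ...
        }
        catch (BookTitleNotUniqueException ex)
        {
            // Roll back the transaction here, then translate the domain
            // exception into a typed SOAP fault for the client.
            throw new FaultException<BusinessFault>(
                new BusinessFault { Message = ex.Message });
        }
    }
}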

Where you have no Service Layer, you'll probably need to call a method (probably in the constructor of your custom exception class) to indicate to any transaction handler that the transaction should be rolled back.
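
A sketch of that idea, with an entirely hypothetical transaction handler standing in for whatever component wraps each call in a transaction:

using System;

public class TransactionHandler
{
    [ThreadStatic]
    private static TransactionHandler _current;

    public static TransactionHandler Current
    {
        get { return _current ?? (_current = new TransactionHandler()); }
    }

    public bool RollbackOnly { get; private set; }

    public void SetRollbackOnly()
    {
        RollbackOnly = true;   // the transaction wrapper checks this before committing
    }
}

// Base class for business rule failures: merely constructing one flags
// the current transaction for rollback.
public abstract class BusinessException : Exception
{
    protected BusinessException(string message)
        : base(message)
    {
        TransactionHandler.Current.SetRollbackOnly();
    }
}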

Multiple Failures

A common scenario where failing fast is not immediately suitable is where the client wants multiple validation errors handled in a single call.

In these cases I try to wrap the multiple checks into a fluent interface that throws a single exception that contains a collection of validation errors. The code might look something like:

new ValidationBuilder()
    .Add(ValidateAddressLine1())
    .Add(ValidateAddressLine2())
    .Add(ValidateAddressLine3())
    .Add(ValidatePostcode())
    .ThrowOnError();
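
A minimal sketch of how the builder itself might hang together, assuming (for illustration) that each Validate method returns null on success and an error message on failure; the real shape may differ:

using System;
using System.Collections.Generic;

public class ValidationBuilder
{
    private readonly List<string> _errors = new List<string>();

    // Collects a failure; a null/empty result means the check passed.
    public ValidationBuilder Add(string error)
    {
        if (!string.IsNullOrEmpty(error))
            _errors.Add(error);
        return this;
    }

    // Fails fast with a single exception carrying every collected failure.
    public void ThrowOnError()
    {
        if (_errors.Count > 0)
            throw new ValidationException(_errors);
    }
}

public class ValidationException : Exception
{
    public ValidationException(IList<string> errors)
        : base("One or more validation errors occurred")
    {
        Errors = new List<string>(errors).AsReadOnly();
    }

    public IList<string> Errors { get; private set; }
}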

While this is a common scenario in Internet applications (e.g., when filling out your car insurance quote online), it is rarely required for every screen of an average internal LOB application.

I personally try to avoid this until the client asks for it.  The logic required to continue validating typically becomes harder to write when you can't assume that the preceding validations succeeded.

Common Arguments

There seems to be conflicting advice on using exceptions, with common advice being to avoid them for anything other than runtime errors in the infrastructure (e.g., disk full, connection broken, etc.).  I am not aware of a technical reason not to use exceptions for business logic errors.

Others have sophisticated solutions for passing notifications out from the domain that avoid the use of exceptions.  I try to avoid resorting to more complicated solutions until they are required: exceptions are a simple mechanism that is easily understood by programmers.

Less interesting arguments usually revolve around performance.  Any performance loss from throwing an exception is usually insignificant compared to the time taken by the business transaction itself.

Summary

Exceptions are the perfect mechanism for reporting inconsistencies in domain logic. I believe judicious use of them will make your applications more stable, and make your code neater and easier to maintain.

Be wary of anyone telling you that this is not what exceptions are for ... this is exactly what they are perfect for!


Reading Ayende's recent blog post got me thinking about a situation I regularly find myself in when building systems.  Whilst Ayende's post mentions that he's not really interested in whether a child is responsible for creating itself or whether a parent is responsible for creating a child, I personally always favour the latter.

However, whilst some objects within a domain model have an obvious parent that is also within the domain model, some simply don't.  In these situations the question of who the responsible party is becomes slightly more confusing: as a child object, I know I'm not the responsible party for creating myself, but I don't really have anyone who is.

In these situations I create a notional "parent" for these orphans.  I call this object the "System" or "Mother" object, as the only reason it exists is to act as a parent for objects that have no natural parent.
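
As a sketch (Currency here is just a hypothetical example of an object with no natural parent):

public class Currency
{
    public string IsoCode { get; private set; }

    internal Currency(string isoCode)
    {
        IsoCode = isoCode;
    }
}

// The notional parent: the only reason it exists is to act as the
// responsible (creating) party for otherwise parentless objects.
public class Mother
{
    public Currency CreateCurrency(string isoCode)
    {
        // validation/uniqueness rules for new currencies would live here
        return new Currency(isoCode);
    }
}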

So who is the responsible party when nobody loves me?  Simple, someone always loves you! :-)


I've found that the best way to get to understand a particular technology is to play around with it.  As such, when I started working on a WCF project for one of my customers, I created a simple test harness that let me play about with WCF Service Contracts and WCF Data Contracts quickly and easily.  Anyway, I've made my WCF test harness available here:

http://code.google.com/p/koln/

The test harness uses RhinoMocks to mock out my WCF Services and NUnit to enable a simple set of automated unit tests.
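
For a flavour of the style, a test might look something like this (ILibraryService and LibraryPresenter are illustrative stand-ins, not necessarily the types in the harness):

using NUnit.Framework;
using Rhino.Mocks;

public interface ILibraryService
{
    void CreateBook(string title, string author, string description);
}

public class LibraryPresenter
{
    private readonly ILibraryService _service;

    public LibraryPresenter(ILibraryService service)
    {
        _service = service;
    }

    public void CreateBook(string title, string author, string description)
    {
        _service.CreateBook(title, author, description);
    }
}

[TestFixture]
public class LibraryPresenterTests
{
    [Test]
    public void CreateBook_Calls_The_Service()
    {
        // RhinoMocks generates a mock of the service contract, so no
        // real service (or network) is needed for the test.
        ILibraryService service = MockRepository.GenerateMock<ILibraryService>();

        new LibraryPresenter(service).CreateBook("Title", "Author", "Description");

        // Verify the presenter forwarded the call to the service.
        service.AssertWasCalled(
            s => s.CreateBook("Title", "Author", "Description"));
    }
}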


(or Using NHibernate and WCF)

Introduction

If you have a Domain Model and a rich client (e.g., WinForms/WPF), you might want to expose your application's functionality across a Service Layer/Remote Facade. Your Service Layer will need to return objects containing domain information, and I prefer to re-use my Domain Model classes directly, rather than invent unnecessary DTO objects (DRY).

Typically your Domain Model might be entities mapped to a relational database using NHibernate (or some other ORM), and a Service Layer exposed using WCF (or some other remoting technology).

This should be simple; however, there are a few issues you might run into.

Lazy Load

Most good ORM solutions will use Lazy Load to allow the domain objects to access each other using their associations while silently taking care of the data-access concern.

When you come to serialise an object that came from a connected Domain Model, you now have a problem. With Lazy Load it's like pulling a loose thread: you could potentially load the entire object model into memory. You need a way to indicate the 'section' of the object graph that you want.

Lazy Load is also often implemented using interception, which in turn usually means that the object might actually be a generated proxy. Some remoting technologies do not play well with auto-generated classes; you might need to transform them back to their 'real' class to send them across the wire.

(WCF can generate a System.ServiceModel.CommunicationException: ... Add any types not known statically to the list of known types - for example, by using the KnownTypeAttribute attribute or by adding them to the list of known types passed to DataContractSerializer)
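
One way to hand back the 'real' class is to unwrap the proxy before serialising; a small sketch using NHibernate's proxy API (safe only once the proxy has been initialised):

using NHibernate.Proxy;

public static class ProxyUtil
{
    // Unwraps an NHibernate proxy to its underlying 'real' instance;
    // non-proxies are returned unchanged.
    public static object Unproxy(object maybeProxy)
    {
        var proxy = maybeProxy as INHibernateProxy;
        return proxy != null
            ? proxy.HibernateLazyInitializer.GetImplementation()
            : maybeProxy;
    }
}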

In addition, the serialisation of the objects often occurs after the service call is complete. If the database connection is already closed then you may be unable to Lazy Load more of the domain.

(NHibernate can generate a NHibernate.LazyInitializationException: failed to lazily initialize a collection, no session or session was closed)

Domain Object Graph Depth

Some ORM solutions allow you to 'disconnect' a portion of the object graph to prevent Lazy Load issues, but then you have to handle two further issues:

1. Loaded Graph is Too Deep ...

During a single service call the domain objects might load a deep graph to solve a given problem, and then return a single object as its result. In this case the loaded object graph might be much larger than you want to send across the wire (for performance reasons, or perhaps security reasons).

2. Loaded Graph is Too Shallow ...

Other times the graph will only be partially loaded, and you want to return a result from a service call that includes deeper information. For example, a call to GetPersonDetail() might need the Person object, its parents, children, addresses, ... or might specifically exclude some personal details. In addition, you need to be careful not to fall into the select N+1 problem while fetching all the objects you want (see the fetch sketch below).

In both of these cases you again need to indicate the portion of the object graph that needs to be sent across the wire.
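
On the loading side, most ORMs at least let you pull a deeper graph in a single round trip. A sketch using NHibernate's criteria API and the Person/Children association from the examples below:

using System.Collections.Generic;
using NHibernate;

public class PersonRepository
{
    private readonly ISession _session;

    public PersonRepository(ISession session)
    {
        _session = session;
    }

    // Eagerly fetches each Person's Children collection in one joined
    // query, avoiding a further SELECT per person (the select N+1 problem).
    public IList<Person> GetAllWithChildren()
    {
        return _session
            .CreateCriteria(typeof(Person))
            .SetFetchMode("Children", FetchMode.Eager)
            .List<Person>();
    }
}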

Cyclic Object Graphs

Most connected Domain Models are full of cyclic references, with bi-directional associations between objects being common. A typical object graph for these classes:

public class Person
{
    // 1:m association between Person and Child
    public IList<Child> Children;
    // ...
}

public class Child
{
    // m:1 association (back pointer)
    public Person Parent;
    // ...
}

... might look like this in memory:

However, if you're going to try exposing your services using SOAP, you're going to find that cyclic graphs like this are not allowed. (I can only assume the guys who defined SOAP (WS-I) were asleep the day they decided that no one would need to send cyclic graphs across the wire.)

To send this across the wire you might need to transform the above graph into something like this:

Solving Using a Graph Utility Class

My preferred option for sorting these problems is to use a utility class that allows you to quickly specify a graph of objects to 'copy'.

With a sprinkling of Lambda Expressions and extension methods, you can have code that looks something like this:

// Copies a person, its children (each with its parent),
// and the grandchildren.
Person personCopyToReturnAcrossWcf =
    myRootPersonObject
        .Graph()
        .Add(p => p.Children, new Graph<Child>()
            .Add(c => c.Parent)
            .Add(c => c.Grandchildren))
        .Copy();

The source for the utility class can be found here: link
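
For illustration, here is a hypothetical sketch of how such a utility could be put together. It is an assumption-laden reconstruction rather than the actual source: it supposes public fields (as in the Person/Child example above) and parameterless constructors, and copies scalar members always, associations only when explicitly added:

using System;
using System.Collections.Generic;
using System.Linq.Expressions;
using System.Reflection;

public class Graph<T> where T : class, new()
{
    private readonly T _root;   // set when created via the Graph() extension method
    private readonly List<Action<T, T>> _associations = new List<Action<T, T>>();

    public Graph() { }
    public Graph(T root) { _root = root; }

    // Single reference, shallow-copied (scalar members only).
    public Graph<T> Add<TChild>(Expression<Func<T, TChild>> member)
        where TChild : class, new()
    {
        return Add(member, new Graph<TChild>());
    }

    // Single reference, copied according to a child graph specification.
    public Graph<T> Add<TChild>(Expression<Func<T, TChild>> member, Graph<TChild> childGraph)
        where TChild : class, new()
    {
        FieldInfo field = (FieldInfo)((MemberExpression)member.Body).Member;
        Func<T, TChild> get = member.Compile();
        _associations.Add((source, copy) =>
        {
            TChild child = get(source);
            if (child != null)
                field.SetValue(copy, childGraph.Copy(child));
        });
        return this;
    }

    // Collection association, elements shallow-copied.
    public Graph<T> Add<TChild>(Expression<Func<T, IList<TChild>>> member)
        where TChild : class, new()
    {
        return Add(member, new Graph<TChild>());
    }

    // Collection association, each element copied per the child graph.
    public Graph<T> Add<TChild>(Expression<Func<T, IList<TChild>>> member, Graph<TChild> childGraph)
        where TChild : class, new()
    {
        FieldInfo field = (FieldInfo)((MemberExpression)member.Body).Member;
        Func<T, IList<TChild>> get = member.Compile();
        _associations.Add((source, copy) =>
        {
            IList<TChild> children = get(source);
            if (children == null)
                return;
            var copies = new List<TChild>();
            foreach (TChild child in children)
                copies.Add(childGraph.Copy(child));
            field.SetValue(copy, copies);
        });
        return this;
    }

    public T Copy()
    {
        return Copy(_root);
    }

    public T Copy(T source)
    {
        var copy = new T();

        // Copy scalar members; associations stay null unless specified.
        foreach (FieldInfo field in typeof(T).GetFields())
            if (field.FieldType.IsValueType || field.FieldType == typeof(string))
                field.SetValue(copy, field.GetValue(source));

        foreach (var copyAssociation in _associations)
            copyAssociation(source, copy);

        return copy;
    }
}

public static class GraphExtensions
{
    // Enables the myRootPersonObject.Graph() syntax used above.
    public static Graph<T> Graph<T>(this T root) where T : class, new()
    {
        return new Graph<T>(root);
    }
}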

Finally

Some people dislike the idea of sending the 'real' domain objects across the wire. Instead they create DTO objects for everything that has to be passed back from a service.

My personal preference is to only create DTO classes when they are needed, hopefully saving time on creating both the DTO objects and the code to map my domain properties onto them.  Reporting is an example where this usually breaks down, and I find myself creating DTO classes to send information to a client.

Ultimately, whether you use graphing or DTOs, you still have to be aware of all of the above issues when using a Domain Model with a Service Layer/Remote Facade.
