
As I explained previously, I tend to complete an Iteration 0 before the rest of the development team starts work on a project, as I feel it helps ensure the team can be as productive as possible from day one.  Normally I spend this time coding alone, consulting with others as required, putting the structure of the system in place.  I make decisions about the main structural patterns we're going to use (e.g., Domain Model, Service Layer, etc.) and the technologies (e.g., NUnit, NHibernate, Spring.NET, etc.), though I try to defer as many decisions as I reasonably can.  If I don't spend this time, I've found that Iteration 1 is spent arguing about the best way to do just about everything, and frankly, when you're working on a fixed-price contract, you just can't afford that.

Once Iteration 0 is complete and the rest of the development team starts, they concentrate on cranking out functionality as quickly as they can.  They don't really need to know about the dependency injection framework, or the fancy AOP aspects intercepting calls to provide comprehensive logging for all the exposed services.  The project naturally gathers pace, and for most of the developers the plumbing code is a bit like magic: it just works!  They don't know how and, to be honest, they don't need to.  It's at this point that the warning signs appear.  If you're the Technical Lead and nobody on your development team is starting to ask the awkward questions that make you re-evaluate your decisions, questions like "why is the logging like this?" or "why aren't we using Fluent NHibernate?", start to worry.  If nobody on the development team other than you is taking the time to understand what's going on under the hood and questioning your decisions, rest assured that if you step off that project, monsters will be eating your bridges in no time at all.

Now, on one of the systems I've built, when I stepped off and went back later to make a bug fix, there was no sign of monsters.  So the question is: what was different?  The answer is that on that project one of the less experienced developers on the team started asking awkward questions, and I started answering them.  Some of the questions were straightforward to answer, but some made me really question the decisions I'd made.  The less experienced developer started to understand not only how things were put together but why.  They started to learn, fast!  When the time came for me to step off the project, they were capable of assuming the role of Technical Lead, and the "Sim City" effect didn't happen.  Sure, when I returned things had changed, but there were no monsters in sight.


I tend to fulfil the technical lead role on the projects I work on, which means I'm ultimately responsible for the system from a technical point of view.  Put simply: the buck stops with me!

As such, at the start of each new project I tend to spend a week or so working long hours getting the “framework” of the application in place, attempting to provide the simplest end-to-end example I can of how the system will be structured.  I used to write a document explaining the shape of the system too, but I’ve found, through costly experience, that nothing matters more than an end-to-end example in code.

Despite protestations from the “truly agile”, I believe this time is well spent getting the various architectural elements of the system, the levels of testing, and the tools we are going to use in place, and thankfully I’m not alone in this.  Over the course of the project these decisions will bend, and some may be completely reversed, but largely they’ll stay with the project until its dying day.  Once the framework is in place, more junior developers can be productive very quickly and concentrate primarily on business functionality rather than technical “plumbing”.

Typically, I work as a senior programmer on the project through to completion.  Unfortunately, sometimes I’m forced to “step off” a project before its completion.  It is when this happens that I tend to see something I’ve lovingly termed the “Sim City” effect.

For those who remember playing Sim City this scenario may sound familiar…

In the older versions of Sim City (I’ve no idea about the newer versions as I moved on to Championship Manager when I hit 12!), you could spend the first couple of hours of the game getting into loads of debt building your beautiful city utopia.  All shiny and new, you marvelled at how good your Police and Fire Service coverage was, and you watched the taxes roll in from happy citizens.  It was around this time that your mother typically shouted you for dinner and you left the city to its own devices.  An hour or so would go by before you eventually headed back to your bedroom and your beautiful utopia.  But what’s this?  Monsters eating your previously beautifully crafted bridges, mass civil unrest, and power cuts all over the city.  Despair typically follows… followed by a shutdown!

The same feeling of despair I got when I returned to my once-utopian city can also occur when I return to a project I've started but never got to finish.  My once beautifully crafted tests are now broken or, worse, ignored.  CruiseControl is shot to bits, and where previously RhinoMocks nicely mocked out underlying system layers to facilitate true unit tests, these have been superseded by hand-cranked stub classes that have roughly as many lines of code as there are stars in the universe!

Martin Fowler talked about Test Cancer and, despite some rather emotive language, I think that is just one of the “monsters eating your bridges” that you can see.  Architectural and methodological deviation in general tends to be commonplace.  That’s not to say it happens all the time, and it’s worth remembering that utopia is in the eye of the beholder, but in my experience the “Sim City” effect is all too common.


Introduction

If you have a Domain Model, you're going to have code that validates rules and checks for inconsistencies. Methods in the Domain Model are going to have to indicate failure in some way (often corresponding to an exception scenario in a use case).

My personal preference is always to start with the simplest way of handling business logic errors: throw an exception.

Avoid Return Codes

We know that the wrong way to indicate a problem is by using an error return-code from each function. The call stack can be quite deep, and you cannot (and should not) rely on the caller of each function to check every return value.
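As a small illustration (all of the types here are hypothetical, invented for the sketch), consider how easily a return code disappears somewhere in the call stack:

public class Order { }

public class OrderService
{
    // Returns false when there is insufficient stock.
    public bool TryReserveStock(Order order)
    {
        // ... check stock levels ...
        return false;
    }

    public void ProcessOrder(Order order)
    {
        // Bug: the return value is never checked, so the failure is
        // silently lost and the customer is charged regardless.
        TryReserveStock(order);
        ChargeCustomer(order);
    }

    private void ChargeCustomer(Order order)
    {
        // ...
    }
}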

We also know that failing fast is (perhaps unintuitively) the way to write more stable applications.

Exceptions are a language mechanism that is perfect for indicating that something has failed. The semantics allow you to 'get the hell out of Dodge' without having to worry about handling the failure, or how deep in the call stack you are. If the exception is not handled correctly, it will expose the problem in an obvious way.

Additionally, throwing an exception means that you don't have to worry about the code that would have been executed next. This can be an advantage where:

  • The subsequent code would otherwise have failed (e.g., if validation found that a string was not unique, inserting it into the database would cause a unique-key violation; the sketch below illustrates this);
  • The subsequent code would have used a non-transactional resource (e.g., a synchronous external web service, or the file system).
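The first point above is exactly the scenario used in the rest of this post. A minimal sketch of the domain side, reusing the Library and BookTitleNotUniqueException names from the presentation example below (the IBookRepository and Book types are hypothetical plumbing of my own):

using System;

public class BookTitleNotUniqueException : Exception
{
    public BookTitleNotUniqueException(string title)
        : base("The book title '" + title + "' is already in use")
    {
    }
}

public class Library
{
    private readonly IBookRepository repository;

    public Library(IBookRepository repository)
    {
        this.repository = repository;
    }

    public void CreateBook(string title, string author, string description)
    {
        // Fail fast: throwing here guarantees the Save below never
        // runs, so the unique-key violation can never occur.
        if (repository.TitleExists(title))
            throw new BookTitleNotUniqueException(title);

        repository.Save(new Book(title, author, description));
    }
}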

Catching Exceptions in the Presentation

When you're handling exceptions in the Presentation layer you have an opportunity to concisely handle the specific scenarios you know about while elegantly ignoring any others:

public void CreateButton_OnClick()
{
    try
    {
        library.CreateBook( View.BookTitle,
                            View.BookAuthor,
                            View.BookDescription);
    }
    catch (BookTitleNotUniqueException)
    {
        View.ShowMessage("The supplied book title is already in use");
    }
}

The above try-catch construct provides a way to indicate that there is a programmed response when the supplied title is not unique. More importantly, it indicates that any other error should automatically be propagated up the call chain.

Service Layer

Many applications still separate the presentation from the domain with a (remote) Service Layer. However, most remoting technologies (e.g., WCF Web Services) don't play well with exceptions.

Typically the Service Layer will catch the exception, roll back any transaction, and use a custom DTO or a framework-provided mechanism to pass the details of the exception to the client. (WCF supplies the FaultContract mechanism for this.)

The DTO should be turned back into an exception on the client side (using AOP/interception) so that it will still be propagated up the call chain if it is not handled. When using WCF, it is advisable to convert the FaultException object back into a regular exception to prevent coupling your presentation layer to WCF.
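A minimal sketch of what that might look like; the fault DTO, service contract, and client wrapper below are all hypothetical names of mine, and in practice the translation would usually live in an interceptor rather than being hand-written per call:

using System.Runtime.Serialization;
using System.ServiceModel;

// Hypothetical fault DTO carried across the wire.
[DataContract]
public class BookTitleNotUniqueFault
{
    [DataMember]
    public string Title { get; set; }
}

// Hypothetical service contract declaring the fault it can raise.
[ServiceContract]
public interface ILibraryService
{
    [OperationContract]
    [FaultContract(typeof(BookTitleNotUniqueFault))]
    void CreateBook(string title, string author, string description);
}

// Client-side wrapper that turns the WCF fault back into a regular
// domain exception, keeping the presentation layer free of WCF types.
public class LibraryClient
{
    private readonly ILibraryService proxy;

    public LibraryClient(ILibraryService proxy)
    {
        this.proxy = proxy;
    }

    public void CreateBook(string title, string author, string description)
    {
        try
        {
            proxy.CreateBook(title, author, description);
        }
        catch (FaultException<BookTitleNotUniqueFault> fault)
        {
            // Re-thrown as a domain exception so that, if unhandled, it
            // still propagates up the call chain in the normal way.
            throw new BookTitleNotUniqueException(fault.Detail.Title);
        }
    }
}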

Where you have no Service Layer, you'll probably need to call a method (probably in the constructor of your custom exception class) to indicate to any transaction handler that the transaction should be rolled back.
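A sketch of how that might be arranged; CurrentTransactionHandler is entirely hypothetical here, standing in for whatever transaction management your application actually uses:

using System;

// Hypothetical hook into the application's transaction handling
// (e.g., an AOP transaction aspect or a unit-of-work).
public static class CurrentTransactionHandler
{
    public static void SetRollbackOnly()
    {
        // Mark the ambient transaction as rollback-only so that it
        // is never committed, however the exception is handled.
    }
}

// Constructing any business exception flags the transaction for
// rollback before the exception even begins to propagate.
public abstract class BusinessException : Exception
{
    protected BusinessException(string message)
        : base(message)
    {
        CurrentTransactionHandler.SetRollbackOnly();
    }
}

The BookTitleNotUniqueException sketched earlier would then derive from BusinessException rather than Exception.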

Multiple Failures

A common scenario where failing fast is not immediately suitable is where the client wants multiple validation errors handled in a single call.

In these cases I try to wrap the multiple checks into a fluent interface that throws a single exception containing a collection of validation errors. The code might look something like this:

new ValidationBuilder()
    .Add(ValidateAddressLine1())
    .Add(ValidateAddressLine2())
    .Add(ValidateAddressLine3())
    .Add(ValidatePostcode())
    .ThrowOnError();
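A minimal sketch of such a builder, assuming each Validate* method returns null on success and an error message on failure (both assumptions of mine; the real shape would depend on your validation code):

using System;
using System.Collections.Generic;

public class ValidationBuilder
{
    private readonly List<string> errors = new List<string>();

    public ValidationBuilder Add(string error)
    {
        if (error != null)
            errors.Add(error);

        // Returning 'this' is what makes the interface fluent.
        return this;
    }

    public void ThrowOnError()
    {
        // A single exception carries every failure back to the caller.
        if (errors.Count > 0)
            throw new ValidationException(errors);
    }
}

public class ValidationException : Exception
{
    public IList<string> Errors { get; private set; }

    public ValidationException(IList<string> errors)
        : base("Validation failed with " + errors.Count + " error(s)")
    {
        Errors = errors;
    }
}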

While this is a common scenario in Internet applications (e.g., when filling out a car insurance quote online), it is rarely required for every screen of an average internal LOB application.

I personally try to avoid this until the client asks for it. The logic required to continue validating after a failure typically becomes harder to write when you can't assume that the preceding validation succeeded.

Common Arguments

There seems to be conflicting advice on using exceptions, with the common advice being to avoid them for anything other than runtime errors in the infrastructure (e.g., disk full, connection broken, etc.). I am not aware of a technical reason for not allowing exceptions to be used for business logic errors.

Others have sophisticated solutions for passing notifications out from the domain that avoid the use of exceptions. I try to avoid resorting to more complicated solutions until they are required - exceptions are a simple mechanism that is easily understood by programmers.

Less interesting arguments usually revolve around performance. Any performance loss from throwing an exception is usually insignificant compared to the length of time taken by the business transaction itself.

Summary

Exceptions are the perfect mechanism for reporting inconsistencies in domain logic. I believe judicious use of them will make your applications more stable, and make your code neater and easier to maintain.

Be wary of anyone telling you that this is not what exceptions are for ... this is exactly what they are perfect for!
