Broloco

Introduction

Most software development teams are now Agile ... or at least they're calling themselves Agile. In addition, I see many claiming to practice all the right techniques. I often find, however, that when you look behind the scenes they are making fundamental mistakes in their implementation.

Continuous Integration is arguably the most important practice you need to be Agile. If you're not constantly integrating new functionality into the existing system, and verifying that it doesn't break the existing functionality, then you are going to struggle to deliver incrementally.

Build on the Developer's Machine

A not uncommon mistake is to put the automated build on a server ... and only on the server. I've seen teams spend time getting the 'build server' set up in such a way that it's the only machine in the department that can actually run the build (and the tests).

This sometimes stems from a failure to understand that the build tool is not the scheduler that runs on the server (e.g., CCNet); the build itself is usually driven by a command-line tool like NAnt or MSBuild that anyone can run. (This kind of misunderstanding can lead to a discussion like the following: link.)

Incidentally, I'm a huge fan of CCNet, and have used it almost everywhere I've worked. However, as James Shore has pointed out before, people can have a tendency to be distracted from the actual practice of Continuous Integration in favour of getting a build server working.

Integrate with ALL Code

So you've got the build going, and people are running the build locally before checking in. So now you're practising Continuous Integration, right? Well ... maybe.

Despite TDD not being a new technique, I think it is (unfortunately) still far from mainstream. A team might try to introduce (or increase the amount of) automated testing but, as James Shore points out, it is fundamental that the entire team agrees (see step 5 here) to practice Continuous Integration. It is not only important that they all buy into the practice, but that they all understand it and what it entails.

When a developer adds a new feature to the existing code, they typically have to modify an existing implementation. They will try hard (sometimes very hard) not to break the existing implementation. Unfortunately, some developers don't extend that care to the tests.

I've seen developers treat tests as 'special' code that is immune to these rules. It is sometimes seen as acceptable to ignore tests (using the [Ignore] attribute), remove the [Test] attribute completely so they don't even get reported as ignored, or worst of all comment them out or delete them! The effect of this has been described as Test Cancer.
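
To make this concrete, here's a minimal NUnit-style sketch of the kind of 'special' treatment I mean (the fixture and test names are invented for illustration):

[TestFixture]
public class BookTests
{
    // Quietly parked: the test still exists, but never fails the build.
    [Test]
    [Ignore("Broke when the pricing rules changed - will fix later")]
    public void Price_Includes_Discount()
    {
        // ...
    }

    // Worse: the [Test] attribute has been removed, so the test isn't
    // even reported as ignored - it just silently never runs.
    public void Title_Must_Be_Unique()
    {
        // ...
    }

    // Worst of all: the test has simply been commented out (or deleted).
    //[Test]
    //public void Author_Is_Mandatory()
    //{
    //}
}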

The excuse is often that it was 'pragmatic' ... where 'pragmatic' is used as a euphemism for being reckless or irresponsible. If you're practising Continuous Integration properly, then you should be integrating with all the existing code, and that includes the tests.

The Implementation as a Second-Class Citizen

Accepting that tests should be treated with the same priority as the implementation doesn't go far enough, in my opinion. For those practising TDD, the tests are significantly more important than the implementation.

Each test that gets added to the solution acts as an invariant, and might change little during the development of the product. It is the implementation that changes constantly to accommodate each new requirement as the project progresses. It is the implementation where pragmatic approaches like faking and hard-coding are acceptable until there is a test that demonstrates the need for it to evolve past the simplest solution.

You can see this effect in action very clearly while watching one of Uncle Bob's Katas (e.g., the Prime Numbers Kata); the tests, once written, are static, but the implementation is open to constant redesign/rework. (Note, I'm avoiding the much abused term Refactor here deliberately).
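
To illustrate the shape of this (this is not Uncle Bob's actual kata code, just a deliberately trivial sketch): the test, once written, doesn't change, while the hard-coded implementation only evolves when a later test forces it to.

[TestFixture]
public class PrimeTests
{
    [Test]
    public void Two_Is_Prime()
    {
        Assert.IsTrue(Primes.IsPrime(2));
    }
}

public static class Primes
{
    public static bool IsPrime(int candidate)
    {
        // Deliberately the simplest (hard-coded) thing that passes the
        // current test; it only evolves when a new test forces it to.
        return true;
    }
}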

Once you've embraced the tests and are continually integrating with them, the implementation becomes a secondary concern: an output designed (and redesigned) to satisfy all the tests. The implementation becomes the second-class citizen that is open to faking and hard-coding, while the battery of tests means "you just don't have to worry where you walk".

Summary

In order to say you are practising Continuous Integration correctly, the following should be true:

  • The whole team collectively agrees to the practice;
  • The automated build (including tests) is runnable on any developer's machine;
  • The tests are treated with at least the same priority as the implementing code;
  • There is nothing more important than a broken build!

Do not think of tests as second-class citizens in the code. Instead strive towards the tests being the input, and the implementation becoming an output that passes the tests.

Finally, I don't think I've said anything new here; just reiterated many people's well worn points that have stood the test of time. I firmly believe that ignoring these points can cause a sharp decline in project quality, and that teams should bear them in mind before claiming to be Agile, or claiming they are practising Continuous Integration.


As I explained previously, I tend to complete an Iteration 0 prior to the rest of the development team starting work on a project, as I feel it helps ensure that the development team can be as productive as possible when they start.  Normally I spend this time coding alone, but consulting with others as required, putting the structure of the system in place.  I make decisions about the main structural patterns we're going to use (e.g., Domain Model, Service Layer, etc.) and the technologies (e.g., NUnit, NHibernate, Spring.NET, etc.), but I try to defer as many as I reasonably can.  If I don't spend this time, I've found that Iteration 1 is spent arguing about the best way to do just about everything and, frankly, when you're working on a fixed-price contract you just can't afford that.

Once Iteration 0 is complete and the rest of the development team start, they concentrate on cranking out functionality as quickly as they can.  They don't really need to know about the dependency injection framework, or the fancy AOP aspects that intercept calls to provide comprehensive logging across all the exposed services.  The project naturally gathers pace and, for most of the developers, the plumbing code is a bit like magic: it just works!  They don't know how and, to be honest, they don't need to.  It's at this point that the warning signs appear.  If you're the Technical Lead and nobody on your development team is starting to ask those awkward questions that make you re-evaluate your view on things ("why is the logging like this?", "why aren't we using Fluent NHibernate?"), start to worry.  If nobody on the development team other than you is taking the time to understand what's going on under the hood and questioning your decisions, rest assured that if you step off that project, monsters will be eating your bridges in no time at all.

Now, on one of the systems I've built, when I stepped off and went back later to make a bug fix, there was no sign of monsters.  So the question is: what was different?  The answer is that on that project one of the less experienced developers on the development team started asking awkward questions, and I started answering them.  Some of the questions were straightforward to answer, but some made me really question the decisions I'd made.  The less experienced developer started to understand not only how things were put together but why.  They started to learn, fast!  When the time came for me to step off the project, they were capable of assuming the role of Technical Lead, and the "Sim City" effect didn't happen.  Sure, when I returned things had changed, but there were no monsters in sight.


I tend to fulfil the technical lead role on the projects I work on.  This role means that I’m overall technically responsible for the system.  Put simply: the buck stops with me!

As such, at the start of each new project I tend to spend a week or so working long hours getting the “framework” of the application in place and attempting to provide the simplest end-to-end example I can of how the system will be structured.  I also used to write a document that explained the shape of the system, but I’ve found, through costly experience, that nothing matters more than an end-to-end example in code.

Despite protestations from the “truly agile”, I believe this time is well spent getting the various architectural elements of the system, the levels of testing, and the tools we are going to use in place, and thankfully I’m not alone in this.  Over the course of the project these decisions will bend, and some may be completely reversed, but largely they’ll stay with the project until its dying day.  Once the framework is in place, more junior developers can be productive very quickly and concentrate primarily on business functionality rather than technical “plumbing”.

Typically, I then work on the project through to completion as a senior programmer.  Unfortunately, sometimes I’m forced to “step off” a project before its completion.  It is when this happens that I tend to see something I’ve lovingly termed the “Sim City” effect.

For those who remember playing Sim City this scenario may sound familiar…

In the older versions of Sim City (I’ve no idea about the newer versions as I moved on to Championship Manager when I hit 12!), you could spend the first couple of hours of the game getting into loads of debt building your beautiful city utopia.  All shiny and new, you marvelled at how good your Police and Fire Service coverage was, and you watched the taxes roll in from happy citizens.  It was around this time that your mother typically shouted you in for dinner and you left the city to its own devices.  An hour or so would go by and eventually you’d head back to your bedroom and your beautiful utopia.  But what’s this?  Monsters eating your previously beautifully crafted bridges, mass civil unrest, and power cuts all over the city.  Despair typically follows… followed by a shutdown!

So the same feeling of despair I got when I returned to my once utopian city can also occur when I return to a project I’ve started but never got to finish.  My once beautifully crafted tests are now broken or, worse, ignored.  CruiseControl is shot to bits, and where previously RhinoMocks nicely mocked out underlying system layers to facilitate true unit tests, these have been superseded by hand-cranked stub classes with roughly as many lines of code as there are stars in the universe!

Martin Fowler talked about Test Cancer and, despite some rather emotive language, I think that is just one of the “monsters eating your bridges” that you can see.  Architectural and methodological deviation in general tends to be commonplace.  That’s not to say it happens all the time, and it’s worth remembering that utopia is in the eye of the beholder, but in my experience the “Sim City” effect is all too common.


Introduction

If you have a Domain Model, you're going to have code that validates rules and checks for inconsistencies. Methods in the Domain Model are going to have to indicate failure in some way (often corresponding to an exception scenario in a use case).

My personal preference is always to start with the simplest way of handling business logic errors: throw an exception.
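
As a minimal sketch of what I mean (the Library class and its in-memory book collection are invented for illustration; BookTitleNotUniqueException is the same exception caught in the presentation example later in this post):

// (assumes using System; and using System.Collections.Generic;)
public class BookTitleNotUniqueException : Exception
{
    public BookTitleNotUniqueException(string title)
        : base("A book with the title '" + title + "' already exists.")
    {
    }
}

public class Book
{
    public Book(string title, string author, string description)
    {
        Title = title;
        Author = author;
        Description = description;
    }

    public string Title { get; private set; }
    public string Author { get; private set; }
    public string Description { get; private set; }
}

public class Library
{
    private readonly IDictionary<string, Book> booksByTitle =
        new Dictionary<string, Book>();

    public void CreateBook(string title, string author, string description)
    {
        // Check the business rule and fail fast if it is broken; callers
        // further up the stack decide how (or whether) to handle it.
        if (booksByTitle.ContainsKey(title))
            throw new BookTitleNotUniqueException(title);

        booksByTitle.Add(title, new Book(title, author, description));
    }
}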

Avoid Return Codes

We know that the wrong way to indicate a problem is by using an error return-code from each function. The call stack can be quite deep, and you cannot (and should not) rely on the caller of each function to check every return value.

We also know that failing fast is (perhaps unintuitively) the way to write more stable applications.

Exceptions are a language mechanism that is perfect for indicating that something has failed. The semantics allow you to 'get the hell out of Dodge' without having to worry about handling the failure, or how deep in the call stack you are. If the exception is not handled correctly, it will expose the problem in an obvious way.

Additionally, throwing an exception means that you don't have to worry about the code that would have been executed next. This can be an advantage where:

  • The subsequent code would otherwise have failed (e.g., if the validation found that a string was not unique and inserting it into a database would cause a unique-key violation);
  • The subsequent code would have used a non-transactional resource (e.g., a synchronous call to an external web service, or the file system).

Catching Exceptions in the Presentation

When you're handling exceptions in the Presentation layer you have an opportunity to concisely handle the specific scenarios you know about while elegantly ignoring any others:

public void CreateButton_OnClick()
{
    try
    {
        library.CreateBook( View.BookTitle,
                            View.BookAuthor,
                            View.BookDescription);
    }
    catch (BookTitleNotUniqueException)
    {
        View.ShowMessage("The supplied book title is already in use");
    }
}
        

The above try-catch construct provides a way to indicate that there is a programmed response when the supplied name is not unique. More importantly, it indicates that any other error should automatically be propagated up the call chain.

Service Layer

Many applications still separate the presentation from the domain with a (remote) Service Layer. However, most remoting technologies (e.g., WCF Web Services) don't play well with exceptions.

Typically the Service Layer will catch the exception, roll back any transaction, and use a custom DTO or a framework-provided mechanism to pass the details of the exception to the client (WCF supplies the FaultContract mechanism for this).

The DTO should be turned back into an exception on the client-side (using AOP/interception) so that it will still be propagated up the call chain if it is not handled. When using WCF it is advisable to convert the FaultException object back into a regular exception to prevent coupling your presentation layer to WCF.
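
As a rough sketch of the shape this takes (the contract, fault, and wrapper names are my own inventions, and the Library/BookTitleNotUniqueException types are the ones from the sketch above; FaultContract, FaultException<T>, and FaultReason are the standard WCF pieces):

// (assumes using System.ServiceModel; and using System.Runtime.Serialization;)
[ServiceContract]
public interface ILibraryService
{
    [OperationContract]
    [FaultContract(typeof(BookTitleNotUniqueFault))]
    void CreateBook(string title, string author, string description);
}

[DataContract]
public class BookTitleNotUniqueFault
{
    [DataMember]
    public string Title { get; set; }
}

// Server side: catch the domain exception and convert it to a fault
// (transaction roll-back omitted for brevity).
public class LibraryService : ILibraryService
{
    private readonly Library library = new Library();

    public void CreateBook(string title, string author, string description)
    {
        try
        {
            library.CreateBook(title, author, description);
        }
        catch (BookTitleNotUniqueException)
        {
            throw new FaultException<BookTitleNotUniqueFault>(
                new BookTitleNotUniqueFault { Title = title },
                new FaultReason("The supplied book title is already in use"));
        }
    }
}

// Client side: a thin wrapper turns the fault back into the original
// exception, so the presentation layer never has to reference WCF.
public class LibraryServiceClientWrapper
{
    private readonly ILibraryService proxy;

    public LibraryServiceClientWrapper(ILibraryService proxy)
    {
        this.proxy = proxy;
    }

    public void CreateBook(string title, string author, string description)
    {
        try
        {
            proxy.CreateBook(title, author, description);
        }
        catch (FaultException<BookTitleNotUniqueFault> fault)
        {
            throw new BookTitleNotUniqueException(fault.Detail.Title);
        }
    }
}

The presentation code from earlier can then keep catching BookTitleNotUniqueException and never needs to know that WCF is involved.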

Where you have no Service Layer, you'll probably need to call a method (probably in the constructor of your custom exception class) to indicate to any transaction handler that the transaction should be rolled back.
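
One (purely illustrative) way of doing that is a base business exception that notifies an ambient transaction handler from its constructor; the ITransactionHandler and TransactionContext types below are assumptions, not part of any particular framework:

public interface ITransactionHandler
{
    void MarkForRollback();
}

public static class TransactionContext
{
    [ThreadStatic]
    private static ITransactionHandler current;

    public static ITransactionHandler Current
    {
        get { return current; }
        set { current = value; }
    }
}

public abstract class BusinessException : Exception
{
    protected BusinessException(string message)
        : base(message)
    {
        // Tell whatever is managing the current transaction that it must
        // roll back, regardless of where this exception is finally caught.
        if (TransactionContext.Current != null)
            TransactionContext.Current.MarkForRollback();
    }
}

A domain exception such as BookTitleNotUniqueException could then derive from BusinessException instead of Exception.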

Multiple Failures

A common scenario where failing fast is not immediately suitable is where the client wants multiple validation errors handled in a single call.

In these cases I try to wrap the multiple checks into a fluent interface that throws a single exception that contains a collection of validation errors. The code might look something like:

new ValidationBuilder()
    .Add(ValidateAddressLine1())
    .Add(ValidateAddressLine2())
    .Add(ValidateAddressLine3())
    .Add(ValidatePostcode())
    .ThrowOnError();
        

While this is a common scenario to find in Internet applications (e.g., when filling out your car insurance quote online), it is rarely required for every screen of an average internal LOB application.

I personally try to avoid this until the client asks for it. The validation logic typically becomes harder to write when you can't assume that the preceding checks were successful.
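
For completeness, here's a minimal sketch of what the ValidationBuilder above might look like (it assumes each Validate* method returns an error message, or null when its check passes; the ValidationException type is invented):

// (assumes using System; and using System.Collections.Generic;)
public class ValidationBuilder
{
    private readonly List<string> errors = new List<string>();

    // Each Validate* method is assumed to return an error message,
    // or null when its check passes; only failures are collected.
    public ValidationBuilder Add(string error)
    {
        if (error != null)
            errors.Add(error);
        return this;
    }

    // Throw a single exception carrying every validation error.
    public void ThrowOnError()
    {
        if (errors.Count > 0)
            throw new ValidationException(errors);
    }
}

public class ValidationException : Exception
{
    public ValidationException(List<string> errors)
        : base("Validation failed: " + string.Join("; ", errors.ToArray()))
    {
        Errors = errors.AsReadOnly();
    }

    public IList<string> Errors { get; private set; }
}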

Common Arguments

There seems to be conflicting advice on using exceptions, with common advice being to avoid them for anything other than runtime errors in the infrastructure (e.g., disk full, connection broken, etc.). I am not aware of a technical reason for not allowing exceptions to be used for business logic errors.

Others have sophisticated solutions for passing notifications out from the domain that avoid the use of exceptions. I try to avoid resorting to more complicated solutions until they are required; exceptions are a simple mechanism that is easily understood by programmers.

Less interesting arguments, however, usually revolve around performance. Any performance loss from throwing an exception is usually insignificant by comparison to the length of time taken for the business transaction itself.

Summary

Exceptions are the perfect mechanism for reporting inconsistencies in domain logic. I believe judicious use of them will make your applications more stable, and make your code neater and easier to maintain.

Be wary of anyone telling you that this is not what exceptions are for ... this is exactly what they are perfect for!


Reading Ayende’s recent blog post got me thinking about a situation I regularly find myself in when building systems.  Whilst Ayende’s post mentions that he’s not really interested in whether a child is responsible for creating itself or whether a parent is responsible for creating a child, I personally always favour the latter.

However, whilst some objects within a domain model have an obvious parent that is also within the domain model, some simply don’t.  In these situations the question of who the responsible party is becomes slightly more confusing: as a child object I know I’m not the responsible party, but there isn’t really anyone else who is.

In these situations I create a notional “parent” for these orphans.  I call this object the “System” or “Mother” object, as the only reason it exists is to act as a parent for objects that have no obvious parent of their own.
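
As a sketch of what I mean (the LibrarySystem and Currency types below are purely illustrative), the “System” object simply takes on the creation responsibility a real parent would otherwise have:

// (assumes using System.Collections.Generic;)
public class LibrarySystem
{
    private readonly IList<Currency> currencies = new List<Currency>();

    // The "System"/"Mother" object takes on the creation responsibility
    // that a real parent would otherwise have.
    public Currency CreateCurrency(string isoCode, string name)
    {
        Currency currency = new Currency(isoCode, name);
        currencies.Add(currency);
        return currency;
    }
}

public class Currency
{
    // internal so that only the notional parent creates instances
    internal Currency(string isoCode, string name)
    {
        IsoCode = isoCode;
        Name = name;
    }

    public string IsoCode { get; private set; }
    public string Name { get; private set; }
}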

So who is the responsible party when nobody loves me?  Simple, someone always loves you! :-)
