tag:blogger.com,1999:blog-339669342024-03-14T05:58:22.234+00:00brolocoA blog on software development.Neil Loganhttp://www.blogger.com/profile/04189198374043005196noreply@blogger.comBlogger22125tag:blogger.com,1999:blog-33966934.post-44414537541704872922010-03-02T07:00:00.000+00:002010-03-02T07:02:03.658+00:00Tests are not Second-Class Citizens<h3>Introduction</h3>
<p>
Most software development teams are now Agile ... or at least they're calling themselves Agile.
In addition, I see many claiming to practice all the right techniques.
I often find, however, that when you look behind the scenes they are making fundamental mistakes
in their implementation.
</p>
<p>
<a target="_blank" href="http://martinfowler.com/articles/continuousIntegration.html">Continuous Integration</a>
is arguably the most important practice you need to be Agile. If you're
not constantly integrating new functionality into the existing system, and verifying that it
doesn't break the existing functionality, then you are going to struggle to deliver incrementally.
</p>
<h3>Build on the Developer's Machine</h3>
<p>
A not uncommon mistake is to put the automated build on a server ... and <i>only</i> on the server.
I've seen teams spend time getting the 'build server' set up in such a way that it's the only
machine in the department that can actually run the build (and the tests).
</p>
<p>
This sometimes stems from a misunderstanding: the <i>build</i> tool is not
the scheduler that runs on the server (e.g., CCNet). It's usually a command-line driven
tool like NAnt or MSBuild. (This kind of misunderstanding can lead to a discussion like the following:
<a target="_blank" href="http://old.nabble.com/Nant-VS-CC.NEt-td27460124.html">link</a>.)
</p>
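<p>
To make the distinction concrete, the build script itself might look something like this
hypothetical NAnt file (target names, paths, and tool locations are illustrative).
The point is that any developer, as well as CCNet, can run the same script from the command line:
</p>
<code><pre>
&lt;project name="MyProduct" default="build"&gt;

  &lt;!-- The whole build: compile, then test. CCNet merely schedules
       this same script; it holds no build logic of its own. --&gt;
  &lt;target name="build" depends="compile, test" /&gt;

  &lt;target name="compile"&gt;
    &lt;exec program="msbuild.exe"
          commandline="MyProduct.sln /p:Configuration=Release" /&gt;
  &lt;/target&gt;

  &lt;target name="test" depends="compile"&gt;
    &lt;exec program="nunit-console.exe"
          commandline="Build\MyProduct.Tests.dll" /&gt;
  &lt;/target&gt;

&lt;/project&gt;
</pre></code>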
<p>
Incidentally, I'm a huge fan of CCNet, and have used it almost everywhere I've worked.
However, as James Shore has pointed out before, people can have a tendency to be distracted
from the actual
<a target="_blank" href="http://jamesshore.com/Blog/Continuous-Integration-is-an-Attitude.html">practice</a>
of Continuous Integration in favour of getting a build server working.
</p>
<h3>Integrate with ALL Code</h3>
<p>
So you've got the build going, and people are running the build locally before checking in.
So now you're practising Continuous Integration, right? Well ... maybe.
</p>
<p>
Despite TDD not being a new technique, I think it is (unfortunately) still far from mainstream.
A team might try to introduce (or increase the amount of) automated testing but, as James Shore points
out, it is fundamental that the entire team agrees (see step 5
<a target="_blank" href="http://jamesshore.com/Blog/Continuous-Integration-on-a-Dollar-a-Day.html">here</a>)
to practice Continuous Integration.
It is not only important that they all buy into the practice, but that they all understand
the practice and what it entails.
</p>
<p>
When developers add a new feature to the existing code,
they typically have to modify an existing implementation.
They will try hard (sometimes very hard) not to break the existing implementation. Unfortunately,
some developers might not extend that care to the tests.
</p>
<p>
I've seen developers treat tests as 'special' code that is immune to these rules.
It is sometimes seen as acceptable to ignore tests (using the [Ignore] attribute), remove the
[Test] attribute completely so they don't even get reported as ignored, or worst of all comment
them out or delete them! The effect of this has been described as
<a target="_blank" href="http://martinfowler.com/bliki/TestCancer.html">Test Cancer</a>.
</p>
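<p>
To illustrate (this is a hypothetical NUnit fixture with invented method names),
here are the stages of decay, in increasing order of severity:
</p>
<code><pre>
using NUnit.Framework;

[TestFixture]
public class LibraryTests
{
    // A healthy test: it runs on every build and fails loudly when broken.
    [Test]
    public void Creating_a_book_adds_it_to_the_library() { /* ... */ }

    // Stage one: [Ignore] skips the test, but at least the test runner
    // still reports it as ignored.
    [Test]
    [Ignore("Broke when CreateBook changed; 'to be fixed later'")]
    public void Book_titles_must_be_unique() { /* ... */ }

    // Stage two: with the [Test] attribute removed, the runner never even
    // sees this method, so nothing is reported at all.
    public void Books_can_be_archived() { /* ... */ }

    // Stage three: commented out or deleted entirely ... Test Cancer.
    // [Test]
    // public void Deleting_a_book_releases_its_loans() { /* ... */ }
}
</pre></code>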
<p>
The excuse is often that it was 'pragmatic' ... where 'pragmatic'
is used as a euphemism for being reckless or irresponsible.
If you're practising Continuous Integration properly, then you should be integrating with
all the existing code, and that includes the tests.
</p>
<h3>The Implementation as a Second-Class Citizen</h3>
<p>
Accepting that tests should be treated with the same priority as the implementation
doesn't go far enough, in my opinion.
For those practising TDD the tests are significantly <b>more</b> important than the implementation.
</p>
<p>
Each test that gets added to the solution acts as an invariant, and might change little
during the development of the product.
It is the implementation that changes constantly to accommodate each new requirement
as the project progresses.
It is the implementation where pragmatic approaches like
<a target="_blank" href="http://c2.com/cgi/wiki/wiki?FakeIt">faking</a>
and
<a target="_blank" href="http://ayende.com/Blog/archive/2008/08/21/Enabling-change-by-hard-coding-everything-the-smart-way.aspx">hard-coding</a>
are acceptable until there is a test that demonstrates the need for it to evolve past
the simplest solution.
</p>
<p>
You can see this effect in action very clearly while watching one of Uncle Bob's Katas
(e.g., the <a target="_blank" href="http://butunclebob.com/ArticleS.UncleBob.ThePrimeFactorsKata">Prime Factors Kata</a>);
the tests, once written, are static, but the implementation
is open to constant redesign/rework.
(Note, I'm avoiding the much abused term
<a target="_blank" href="http://twitter.com/jeremydmiller/status/9744558574">Refactor</a>
here deliberately).
</p>
<p>
Once you've embraced the tests and are continually integrating with them, the
implementation becomes a secondary concern that is an output designed (and
redesigned) to satisfy all the tests.
The implementation becomes the second-class citizen that is open to faking and hard-coding,
while the battery of tests means
<a target="_blank" href="http://twitter.com/KentBeck/status/8635742385">"you just don't have to worry where you walk"</a>.
</p>
<h3>Summary</h3>
<p>
In order to say you are practising Continuous Integration correctly, the following should be true:
<ul>
<li>The whole team collectively agrees to the practice;</li>
<li>The automated build (including tests) must be runnable on a developer's machine;</li>
<li>The tests are treated with at least the same priority as the implementing code;</li>
<li>There is <u>nothing</u> more important than a broken build!</li>
</ul>
</p>
<p>
Do not think of tests as second-class citizens in the code.
Instead strive towards the tests being the input, and the
implementation becoming an output that passes the tests.
</p>
<p>
Finally, I don't think I've said anything new here; I've just reiterated
many people's well-worn points that have stood the test of time.
I firmly believe that ignoring these points can cause a sharp
decline in project quality, and that teams should bear
them in mind before claiming to be Agile, or claiming they
are practising Continuous Integration.
</p>Richard Brownhttp://www.blogger.com/profile/00271013570243148544noreply@blogger.com5tag:blogger.com,1999:blog-33966934.post-2084048644893952572009-06-20T18:19:00.002+01:002009-06-22T08:45:37.025+01:00The "Sim City" effect: warning signs and avoidance<p>As I explained <a href="http://broloco.blogspot.com/2009/06/sim-city-effect_16.html" target="_blank">previously</a>, I tend to complete an Iteration 0 before the rest of the development team starts work on a project, as I feel it helps ensure that they can be as productive as possible when they start.  Normally I spend this time coding alone, consulting with others as required, putting the structure of the system in place.  I make decisions about the main structural patterns we're going to use (e.g., Domain Model, Service Layer, etc.) and the technologies (e.g., NUnit, NHibernate, Spring.NET, etc.), but I try to defer as many as I reasonably can.  If I don't spend this time, I've found that Iteration 1 is spent arguing about the best way to do just about everything, and frankly, when you're working on a fixed-price contract, you just can't afford that.</p> <p>Once Iteration 0 is complete and the rest of the development team starts, they concentrate on cranking out functionality as quickly as they can.  They don't really need to know about the dependency injection framework, or the fancy AOP aspects intercepting calls to provide comprehensive logging for all the exposed services.  The project naturally gathers pace, and for most of the developers the plumbing code is a bit like magic: it just works!  They don't know how and, to be honest, they don't need to.  It's at this point that the warning signs appear.  If you're the Technical Lead and nobody on your development team is starting to ask those awkward questions that make you re-evaluate your view on things, like "why is the logging like this?" or "why aren't we using Fluent NHibernate?", start to worry.
If nobody on the development team other than you is taking the time to understand what's going on under the hood and to question your decisions, rest assured that if you step off that project, monsters will be eating your bridges in no time at all.</p> <p>Now, on one of the systems I've built, when I stepped off and went back later to make a bug fix, there was no sign of monsters.  So the question is: what was different?  The answer is that on that project one of the less experienced developers started asking awkward questions, and I started answering them.  Some of the questions were straightforward to answer, but some made me really question the decisions I'd made.  The less experienced developer started to understand not only how things were put together but why.  They started to learn, fast!  When the time came for me to step off the project they were capable of assuming the role of Technical Lead, and the "Sim City" effect didn't happen.  Sure, when I returned things had changed, but there were no monsters in sight.</p>Neil Loganhttp://www.blogger.com/profile/04189198374043005196noreply@blogger.com0tag:blogger.com,1999:blog-33966934.post-57752822243236293022009-06-16T08:03:00.001+01:002009-06-16T08:03:02.340+01:00The “Sim City” effect<p>I tend to fulfil the technical lead role on the projects I work on.  This role means that I’m overall technically responsible for the system.  Put simply: the buck stops with me!</p> <p>As such, at the start of each new project I tend to spend a week or so working long hours getting the “framework” of the application in place, attempting to provide the simplest end-to-end example of how the system will be structured that I can.
I also used to write a document that explained the shape of the system, but I’ve found, through costly experience, that nothing matters more than an end-to-end example in code.</p> <p>Despite protestations from the “truly agile”, I believe this time is well spent getting the various architectural elements of the system, the levels of testing, and the tools we are going to use in place, and thankfully I’m not alone in <a href="http://codebetter.com/blogs/jeremy.miller/archive/2006/10/02/My-Gameplan-for-Starting-a-New-Project-from-Scratch.aspx">this</a>.  Over the course of the project these decisions will bend, and some may be completely reversed, but largely they’ll stay with the project until its dying day.  Once the framework is in place, more junior developers can be productive very quickly and concentrate primarily on business functionality rather than technical “plumbing”.</p> <p>Typically, I work on the project through to completion as a senior programmer.  Unfortunately, sometimes I’m forced to “step off” a project before its completion.  It is when this happens that I tend to see something I’ve lovingly termed the “Sim City” effect.</p> <p>For those who remember playing Sim City this scenario may sound familiar…</p> <p>In the older versions of Sim City (I’ve no idea about the newer versions as I moved on to Championship Manager when I hit 12!), you could spend the first couple of hours of the game getting into loads of debt building your beautiful city utopia.  All shiny and new, you marvelled at how good your Police and Fire Service coverage was, and you watched the taxes roll in from happy citizens.  It was around this time that your mother typically shouted you for dinner and you left the city to its own devices.  An hour or so would go by and eventually you’d head back to your bedroom and your beautiful utopia.  But what’s this?
Monsters eating your previously beautifully crafted bridges, mass civil unrest, and power cuts all over the city.  Despair typically follows… followed by a shutdown!</p> <p>The same feeling of despair I got when I returned to my once utopian city can also occur when I return to a project I've started but never got to finish.  My once beautifully crafted tests are now broken or, worse, ignored.  CruiseControl is shot to bits, and where previously RhinoMocks nicely mocked out underlying system layers to facilitate true unit tests, these have been superseded by hand-cranked stub classes that have roughly as many lines of code as there are stars in the universe!  </p> <p>Martin Fowler talked about <a href="http://www.martinfowler.com/bliki/TestCancer.html">Test Cancer</a> and, despite some rather emotive language, I think that is just one of the “monsters eating your bridges” that you can see.  Architectural and methodological deviation in general tends to be commonplace.  That’s not to say it happens all the time, and it’s worth remembering that utopia is in the eye of the beholder, but in my experience the “Sim City” effect is all too common.</p> Neil Loganhttp://www.blogger.com/profile/04189198374043005196noreply@blogger.com2tag:blogger.com,1999:blog-33966934.post-16514396290230295962009-06-15T18:33:00.001+01:002009-06-18T16:36:51.197+01:00Throw Exceptions In Your Domain<h3>Introduction</h3>
<p>
If you have a Domain Model,
you're going to have code that validates rules and
checks for inconsistencies.
Methods in the Domain Model are going to have to indicate
failure in some way (often corresponding to an exception
scenario in a use case).
</p>
<p>
My personal preference is always to start
with the simplest way of handling business logic errors:
throw an exception.
</p>
<h3>Avoid Return Codes</h3>
<p>
We know that the
wrong way
to indicate a problem is by
using an error return-code from each function.
The call-stack can be quite deep, and you cannot
(and should not) rely on the caller of each function
to check each return value.
</p>
<p>
We also know that
<a target="_blank" href="http://martinfowler.com/ieeeSoftware/failFast.pdf">failing fast</a>
is (perhaps unintuitively)
the way to write more stable applications.
</p>
<p>
<a target="_blank" href="http://c2.com/cgi/wiki?UseExceptionsInsteadOfErrorValues">Exceptions</a>
are a language mechanism that is perfect for
indicating that something has failed. The semantics
allow you to 'get the hell out of Dodge' without
having to worry about handling the failure,
or how deep in the call stack you are. If the exception
is not handled correctly, it will expose the problem in an
obvious way.
</p>
<p>
Additionally, throwing an exception means that you
don't have to worry about the code that would have been executed
next. This can be an advantage where:
<ul>
<li>
The subsequent code would otherwise have failed (e.g.,
if the validation found that a string was not unique and
inserting it into a database would cause a unique-key
violation);
</li>
<li>
The subsequent code used a non-transactional resource
(e.g., a synchronous external web service, or the
file system.)
</li>
</ul>
</p>
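<p>
For illustration, the domain side of the book example used in the next section might look
like this sketch (the Library, Book, and exception classes are invented for the example):
</p>
<code><pre>
using System;
using System.Collections.Generic;
using System.Linq;

public class Book
{
    public string Title;
    public string Author;
    public string Description;

    public Book(string title, string author, string description)
    {
        Title = title;
        Author = author;
        Description = description;
    }
}

public class BookTitleNotUniqueException : Exception
{
    public BookTitleNotUniqueException(string title)
        : base("A book with the title '" + title + "' already exists") { }
}

public class Library
{
    private readonly List&lt;Book&gt; books = new List&lt;Book&gt;();

    public void CreateBook(string title, string author, string description)
    {
        // Fail fast: the instant the business rule is violated we throw,
        // abandoning the rest of the operation no matter how deep we are.
        if (books.Any(b =&gt; b.Title == title))
            throw new BookTitleNotUniqueException(title);

        books.Add(new Book(title, author, description));
    }
}
</pre></code>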
<h3>Catching Exceptions in the Presentation</h3>
<p>
When you're handling exceptions in the Presentation layer
you have an opportunity to concisely handle the specific
scenarios you know about while elegantly ignoring any others:
</p>
<code><pre>
public void CreateButton_OnClick()
{
    try
    {
        library.CreateBook( View.BookTitle,
                            View.BookAuthor,
                            View.BookDescription);
    }
    catch (BookTitleNotUniqueException)
    {
        View.ShowMessage("The supplied book title is already in use");
    }
}
</pre></code>
<p>
The above try-catch construct provides a way to indicate that there
is a programmed response when the supplied name is not unique.
More importantly, it indicates that any other error should
automatically be propagated up the call chain.
</p>
<h3>Service Layer</h3>
<p>
Many applications still separate the presentation
from the domain with a (remote) Service
Layer. However, most remoting technologies
(e.g., WCF Web Services)
don't play well with exceptions.
</p>
<p>
Typically the Service Layer will catch the exception,
roll-back any transaction, and
use a custom DTO or framework-provided mechanism
to pass the details of the exception to the client.
(WCF provides the FaultContract attribute and FaultException class for this.)
</p>
<p>
The DTO should be turned back into an exception
on the client-side (using AOP/interception) so that it will
still be propagated up the call chain if it
is not handled.
When using WCF it is advisable to
<a target="_blank" href="http://www.olegsych.com/2008/07/simplifying-wcf-using-exceptions-as-faults/">
convert the FaultException object
</a>
back into a regular exception to prevent coupling your presentation
layer to WCF.
</p>
<p>
Where you have no Service Layer, you'll probably need to
call a method (probably in the constructor
of your custom exception class) to indicate to any
transaction handler that the transaction should be rolled
back.
</p>
<h3>Multiple Failures</h3>
<p>
A common scenario where
<a target="_blank" href="http://martinfowler.com/ieeeSoftware/failFast.pdf">failing fast</a>
is not immediately suitable is where the client wants
multiple validation errors handled in a single call.
</p>
<p>
In these cases I try to wrap the multiple checks into
a fluent interface that throws a single exception
that contains a collection of validation errors.
The code might look something like:
</p>
<code><pre>
new ValidationBuilder()
    .Add(ValidateAddressLine1())
    .Add(ValidateAddressLine2())
    .Add(ValidateAddressLine3())
    .Add(ValidatePostcode())
    .ThrowOnError();
</pre></code>
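<p>
A minimal sketch of such a builder, assuming each Validate method returns an
error message (or null when the check passes), might be:
</p>
<code><pre>
using System;
using System.Collections.Generic;

public class ValidationException : Exception
{
    public readonly IList&lt;string&gt; Errors;

    public ValidationException(IList&lt;string&gt; errors)
        : base("Validation failed with " + errors.Count + " error(s)")
    {
        Errors = errors;
    }
}

public class ValidationBuilder
{
    private readonly List&lt;string&gt; errors = new List&lt;string&gt;();

    // Each Add() records a failure; a null argument means 'check passed'.
    public ValidationBuilder Add(string error)
    {
        if (error != null)
            errors.Add(error);
        return this;
    }

    // A single exception carries the whole collection of errors back
    // to the presentation layer in one go.
    public void ThrowOnError()
    {
        if (errors.Count &gt; 0)
            throw new ValidationException(errors);
    }
}
</pre></code>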
<p>
While this is a common scenario in Internet applications
(e.g., when filling out your car insurance quote online),
it is rarely required for every screen of an average internal
LOB application.
</p>
<p>
I personally try to avoid this until the client
asks for it. The validation logic typically becomes
harder to write when you can't assume that the
preceding checks were successful.
</p>
<h3>Common Arguments</h3>
<p>
There seems to be conflicting advice on using exceptions,
with common advice being to avoid them for anything other
than runtime errors in the infrastructure
(e.g., disk full, connection broken, etc.)
I am not aware of a technical reason for not allowing exceptions to
be used for business logic errors.
</p>
<p>
Others have
<a target="_blank" href="http://www.udidahan.com/2008/02/29/how-to-create-fully-encapsulated-domain-models/">sophisticated solutions</a>
for passing
<a target="_blank" href="http://www.martinfowler.com/eaaDev/Notification.html">notifications</a>
out from the domain that avoid
the use of exceptions.
I try to avoid resorting to more complicated solutions until
they are required; exceptions are a simple mechanism
that is easily understood by programmers.
</p>
<p>
Less interesting arguments, however, usually revolve
around performance. Any performance loss from throwing
an exception is usually insignificant compared
to the time taken by the business transaction
itself.
</p>
<h3>Summary</h3>
<p>
Exceptions are the perfect mechanism for reporting
inconsistencies in domain logic.
I believe judicious use of them will make your
applications more stable, and make your code
neater and easier to maintain.
</p>
<p>
Be wary of anyone telling you that this is not what
exceptions are for ... this is <i>exactly</i> what
they are perfect for!
</p>Richard Brownhttp://www.blogger.com/profile/00271013570243148544noreply@blogger.com0tag:blogger.com,1999:blog-33966934.post-51837774833932738852009-05-20T10:29:00.001+01:002009-05-20T10:29:50.507+01:00Who is the responsible party when nobody loves me?<p>Reading <a href="http://ayende.com/Blog/archive/2009/05/18/who-is-the-responsible-party.aspx">Ayende’s recent blog post</a> got me thinking about a situation I regularly find myself in when building systems.  Whilst Ayende’s post mentions that he’s not really interested in whether a child is responsible for creating itself or whether a parent is responsible for creating a child, I personally always favour the latter.  </p> <p>However, whilst some objects within a domain model have an obvious parent that is also within the domain model, some simply don’t.  In these situations the question of who is the responsible party is slightly more confusing: as a child object I know I’m not the responsible party, but I don’t really have anyone who is.  </p> <p>In these situations I create a notional “parent” for these orphans.  I call this object the “System” or “Mother” object, as the only reason it exists is to act as a parent for objects that have no natural parent.</p> <p>So who is the responsible party when nobody loves me?  Simple: someone always loves you! :-)</p> Neil Loganhttp://www.blogger.com/profile/04189198374043005196noreply@blogger.com0tag:blogger.com,1999:blog-33966934.post-47904593836642831502009-02-09T08:40:00.001+00:002009-02-09T08:40:11.841+00:00WCF Test Harness<p>I've found that the best way to get to understand a particular technology is to play around with it. As such, when I started working on a WCF project for one of my customers I created a simple test harness that let me play about with WCF Service Contracts and WCF Data Contracts quickly and easily.
Anyway, I've made my WCF test harness available here: </p> <p><a href="http://code.google.com/p/koln/">http://code.google.com/p/koln/</a></p> <p>The test harness uses RhinoMocks to mock out my WCF Services and NUnit to enable a simple set of automated unit tests.</p> Neil Loganhttp://www.blogger.com/profile/04189198374043005196noreply@blogger.com0tag:blogger.com,1999:blog-33966934.post-26656542074009194222009-02-05T11:20:00.000+00:002009-02-05T11:21:44.542+00:00Sending Domain Objects Across the Wire<h4>(or Using NHibernate and WCF)</h4>
<h3>Introduction</h3>
<p>
If you have a
<a target="_blank" href="http://martinfowler.com/eaaCatalog/domainModel.html">Domain Model</a>,
and a rich client (e.g., WinForms/WPF),
you might want to expose your application's functionality
across a
<a target="_blank" href="http://martinfowler.com/eaaCatalog/serviceLayer.html">Service Layer</a>/
<a target="_blank" href="http://martinfowler.com/eaaCatalog/remoteFacade.html">Remote Facade</a>.
Your Service Layer will
need to return objects containing domain information, and I prefer to re-use my
Domain Model classes directly, rather than invent unnecessary
<a target="_blank" href="http://martinfowler.com/eaaCatalog/dataTransferObject.html">DTO</a>
objects (DRY).
</p>
<p>
Typically your Domain Model might be entities mapped to a relational
database using NHibernate (or some other ORM),
and a Service Layer exposed using WCF (or some other remoting technology).
</p>
<p>
This <i>should</i> be simple; however, there are a few issues you
might run into.
</p>
<h3>Lazy Load</h3>
<p>
Most good ORM solutions will use
<a target="_blank" href="http://martinfowler.com/eaaCatalog/lazyLoad.html">Lazy Load</a>
to allow the domain
objects to access each other using their associations
while silently taking care of
the data-access concern.
</p>
<p>
When you come to serialise an object that came from a connected
Domain Model you now have a problem. With Lazy Load it's like pulling
a loose thread that could potentially load up the entire object
model into memory. You need a way to indicate the 'section' of the object
graph that you want.
</p>
<p>
Lazy Load is also often implemented using
<a target="_blank" href="http://ayende.com/Blog/archive/2007/07/02/7-Approaches-for-AOP-in-.Net.aspx">interception</a>,
which in turn usually means that the object
might actually be a generated proxy. Some remoting
technologies do not play well with auto-generated classes;
you might need to transform them back to their 'real' class
to send them across the wire.
</p>
<p>
<i style="font-size:smaller;">
(WCF can generate a System.ServiceModel.CommunicationException: ...
Add any types not known statically to the list of known types -
for example, by using the KnownTypeAttribute attribute or by adding
them to the list of known types passed to DataContractSerializer)
</i>
</p>
<p>
In addition, the serialisation of the objects often occurs after the service
call is complete. If the database connection is already closed then you may
be unable to Lazy Load more of the domain.
</p>
<p>
<i style="font-size:smaller;">
(NHibernate can generate a NHibernate.LazyInitializationException:
failed to lazily initialize a collection, no session or session was closed)
</i>
</p>
<h3>Domain Object Graph Depth</h3>
<p>
Some ORM solutions allow you to 'disconnect' a portion of the
object graph to prevent Lazy Load issues, but then you have to handle
two further issues:
</p>
<h4 style="font-size:smaller;">1. Loaded Graph is Too Deep ...</h4>
<p>
During a single service call the domain objects might
load a deep graph to solve a given problem, and then return
a single object as its result. In this case the loaded object
graph might be much larger than you want to send across the wire.
(Either for performance reasons, or security reasons perhaps.)
</p>
<h4 style="font-size:smaller;">2. Loaded Graph is Too Shallow ...</h4>
<p>
Other times the graph will only be partially loaded, and you want
to return a result from a service call that loads deeper information.
For example, a call to GetPersonDetail() might need the Person object,
and its parents, children, addresses, ... or specifically exclude
some personal details. In addition, you need to be careful not
to fall into the
<a target="_blank" href="http://ayende.com/Blog/archive/2008/12/01/solving-the-select-n1-problem.aspx">select N+1</a>
problem while fetching all the objects you want.
</p>
<p>
In both of these cases you again need to indicate the portion of
the object graph that needs to be sent across the wire.
</p>
<h3>Cyclic Object Graphs</h3>
<p>
Most connected Domain Models are full of cyclic references,
with bi-directional associations between objects being common.
A typical object-graph for these classes:
</p>
<code><pre>
public class Person
{
    <i style="color:green">// 1:m association between Person and Child</i>
    public IList<Child> Children;
    ...

public class Child
{
    <i style="color:green">// m:1 association (back pointer)</i>
    public Person Parent;
    ...
</pre></code>
<p>
... might look like this in memory:
</p>
<center>
<img src="http://3.bp.blogspot.com/_tKpxmnWzIIw/SYq8TkyM-3I/AAAAAAAAACY/9RajFJGTGIM/s320/ObjectGraphCyclic.jpg" />
</center>
<p>
However, if you're going to try exposing your services using SOAP
you're going to find out that cyclic graphs like this are not allowed.
<i>(I can only assume the guys that defined SOAP (WS-I) were asleep the day
they decided that no-one would need to send cyclic graphs across the
wire.)</i>
</p>
<p>
To send this across the wire
you might need to transform the above graph into something like this:
</p>
<center>
<img src="http://3.bp.blogspot.com/_tKpxmnWzIIw/SYq9keXFrrI/AAAAAAAAACg/ZGMts41nWho/s320/ObjectGraphExpanded.jpg" />
</center>
<h3>Solving Using a Graph Utility Class</h3>
<p>
My preferred option for sorting these problems is to use a utility class that
allows you to quickly specify a graph of objects to 'copy'.
</p>
<p>
With a sprinkling of Lambda Expressions and extension methods
you can have code that looks something like this:
</p>
<code><pre>
<i style="color:green">// Copies a person, its children (with its parent),
// and the grandchildren.</i>
Person personCopyToReturnAcrossWcf =
    myRootPersonObject
        .Graph()
        .Add(p => p.Children, new Graph<Child>()
            .Add(c => c.Parent)
            .Add(c => c.Grandchildren))
        .Copy();
</pre></code>
<p>
<i>
The source for the utility class can be found here:
<a target="_blank" href="http://code.google.com/p/atlanta/source/browse/trunk/Source/Application/Domain/DomainBase/Graph.cs">link</a>
</i>
</p>
<h3>Finally</h3>
<p>
Some people dislike the idea of sending the 'real' domain objects across the wire.
Instead they create DTO objects for everything that has to be passed back from a service.
</p>
<p>
My personal preference is to only create DTO classes when they are needed, hopefully saving time
on creating both the DTO objects and the code to map my domain properties to them. Reporting
is an example where this usually breaks down and I find myself creating DTO classes
to send information to a client.
</p>
<p>
Ultimately, whether you use graphing or DTOs, you still have to be aware of all of the above issues
when using a Domain Model and a Service Layer/Remote Facade.
</p>Richard Brownhttp://www.blogger.com/profile/00271013570243148544noreply@blogger.com4tag:blogger.com,1999:blog-33966934.post-34511187925666210632008-12-23T10:44:00.002+00:002009-01-22T13:45:24.020+00:00Using Lambda Expressions with NHibernate<p>
I'm a big fan of NHibernate, and also of its
<a target="_blank" href="http://martinfowler.com/eaaCatalog/queryObject.html">Query Object</a>
the <a target="_blank" href="http://www.nhforge.org/doc/nh/en/index.html#querycriteria">ICriteria</a> API.
I don't much care for 'magic strings' in my code though.
</p>
<p>
LINQ provides a way to strongly type your queries, but there are still times
when you want to build queries up dynamically through an object-oriented interface.
</p>
<p>
With .Net 3.5 came Lambda Expressions, which allow you to strongly type
the individual expressions in a query. I started writing some extension methods
to allow me to use lambda expressions
with ICriteria, so code like:
</p>
<code><pre>
mySession
    .CreateCriteria(typeof(Person))
    .Add(Expression.Eq("Name", "smith"))
    .List<Person>();
</pre></code>
<p>
... turns into code like:
</p>
<code><pre>
mySession
    .CreateCriteria(typeof(Person))
    .Add<Person>(p => p.Name == "smith")
    .List<Person>();
</pre></code>
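<p>
Under the hood, an extension method like this can pick the property name and value
out of the lambda's expression tree. A simplified sketch (handling only the basic
'property == constant' case, with invented names) looks like:
</p>
<code><pre>
using System;
using System.Collections.Generic;
using System.Linq.Expressions;

public static class LambdaParser
{
    // Turns p => p.Name == "smith" into ("Name", "smith"), i.e. the two
    // arguments that Expression.Eq() wants. The real extension methods
    // also cope with captured variables, aliases, subqueries, and so on.
    public static KeyValuePair&lt;string, object&gt; Parse&lt;T&gt;(
        Expression&lt;Func&lt;T, bool&gt;&gt; expression)
    {
        var body = (BinaryExpression)expression.Body;   // p.Name == "smith"
        var member = (MemberExpression)body.Left;       // p.Name
        var constant = (ConstantExpression)body.Right;  // "smith"
        return new KeyValuePair&lt;string, object&gt;(
            member.Member.Name, constant.Value);
    }
}
</pre></code>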
<p>
There turned out to be quite a few extension methods to write to cover all
the different combinations of ICriteria that can be written
(including DetachedCriteria, aliases, subqueries, ...).
I've packaged it into a project on Google Code for anyone who wants to use it:
<ul>
<li><a href="http://nhlambdaextensions.googlecode.com/files/NhLambdaExtensions.html">Documentation</a>;</li>
<li><a href="http://code.google.com/p/nhlambdaextensions/downloads/list">Download</a>;</li>
<li><a href="http://code.google.com/p/nhlambdaextensions/">Project home</a>.</li>
</ul>
</p>Richard Brownhttp://www.blogger.com/profile/00271013570243148544noreply@blogger.com3tag:blogger.com,1999:blog-33966934.post-63958625507025625652008-11-24T16:59:00.003+00:002008-11-24T17:04:34.185+00:00Using T4 Command-Line Parameters - Generating NHibernate Magic Strings<h3>An Alternative to CodeSmith</h3>
<p>
I'm a big fan of code-generation for automating boiler-plate code.
I used to use
<a target="_blank" href="http://www.codesmithtools.com/freeware.aspx">CodeSmith</a>
for this, but the freeware version is no longer maintained.
<a target="_blank" href="http://www.microsoft.com/downloads/details.aspx?familyid=693EE22D-4BB1-450D-835C-59EEBCB9F2AE&displaylang=en">T4</a>
is my new favourite tool for code generation, and there's a wealth of information
available on using it on
<a target="_blank" href="http://www.olegsych.com/2007/12/text-template-transformation-toolkit">Oleg Sych's</a>
site.
</p>
<h3>Passing Command-Line Parameters to TextTransform</h3>
<p>
The tools come with a command-line host for T4
(<a target="_blank" href="http://msdn.microsoft.com/en-us/library/bb126461.aspx">TextTransform.exe</a>),
but there's no obvious way to pass parameters to the templates.
</p>
<p>
You can pass parameters using the command-line option -a:
</p>
<code><pre>
TextTransform.exe
    -a mappingFile!c:\myPath\MyClass.hbm.xml
    -out c:\myGeneratedFiles\MyClass_Generated.cs
    c:\myTemplates\GenerateNHibernateProperties.tt
</pre></code>
<p>
In order to retrieve the parameter from inside the template, you can
use a little reflection:
</p>
<code><pre>
private string GetCommandLineProperty(string propertyName)
{
    PropertyInfo parametersProperty =
        Host.GetType()
            .GetProperty("Parameters",
                BindingFlags.NonPublic | BindingFlags.Instance);

    StringCollection parameters =
        (StringCollection)parametersProperty.GetValue(Host, null);

    foreach (string parameter in parameters)
    {
        string[] values = parameter.Split('!');
        int length = values.Length;
        if (values[length - 2] == propertyName)
            return values[length - 1];
    }

    return null;
}
</pre></code>
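<p>
Inside the template, the helper can then be called with the name that was
passed on the command line:
</p>
<code><pre>
string mappingFile =
    GetCommandLineProperty("mappingFile");
</pre></code>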
<h3>Replacing NHibernate Magic Strings</h3>
<p>
Now I can loop through my .hbm.xml files in my build, generating
the 'magic strings' for the properties of each class (an example
template and project are linked below).
</p>
<p>
This turns code like:
</p>
<code><pre>
IList<Media> mediaWithNameAndType =
    DomainRegistry.Repository
        .CreateQuery<Media>()
        .Add(Expression.Eq("OwningLibrary", this))
        .Add(Expression.Eq("Type", media.Type))
        .Add(Expression.Eq("Name", media.Name))
        .List<Media>();
</pre></code>
<p>
Into code like:
</p>
<code><pre>
IList<Media> mediaWithNameAndType =
    DomainRegistry.Repository
        .CreateQuery<Media>()
        .Add(Expression.Eq(Media.Properties.OwningLibrary, this))
        .Add(Expression.Eq(Media.Properties.Type, media.Type))
        .Add(Expression.Eq(Media.Properties.Name, media.Name))
        .List<Media>();
</pre></code>
<p>
Some links:
</p>
<ul>
<li><a target="_blank" href="http://atlanta.googlecode.com/svn-history/r166/trunk/Tools/Templates/Utility.tt"
>Example template to retrieve TextTransform.exe parameters</a></li>
<li><a target="_blank" href="http://atlanta.googlecode.com/svn-history/r166/trunk/Tools/Templates/StaticProperties.tt"
>Example template to generate NHibernate magic strings</a></li>
<li><a target="_blank" href="http://code.google.com/p/atlanta/"
>Example project that generates NHibernate Magic Strings in the build</a></li>
</ul>Richard Brownhttp://www.blogger.com/profile/00271013570243148544noreply@blogger.com1tag:blogger.com,1999:blog-33966934.post-66737059342268074492008-11-19T10:19:00.001+00:002008-11-19T10:38:16.988+00:00Test Silverlight using Passive View<h3>Introduction</h3>
<p>
In a
<a target="_blank" href="http://broloco.blogspot.com/2008/10/test-compact-framework-code-with.html">previous post</a>
I demonstrated testing a Compact Framework application
using Passive View. I also mentioned that I would use the same technique to
test the presentation layer on other platforms. So to put that to the test
I migrated that simple application to Silverlight.
</p>
<h3>Testing Platform</h3>
<p>
There are many more testing options for Silverlight than there are for the
Compact Framework. If you're using the IDE, you might consider reading
<a target="_blank" href="http://weblogs.asp.net/nunitaddin/archive/2008/05/01/silverlight-nunit-projects.aspx">this</a>
for some of your options.
</p>
<p>
I chose to test my
<a target="_blank" href="http://en.wikipedia.org/wiki/Plain_Old_CLR_Object">POCO</a>
controller classes inside the regular .Net framework using NUnit.
</p>
<p>
Instead of retargeting, I cross-compiled the controller class to the 'real' .Net framework.
The view controls in Silverlight are a subset of the complete WPF controls, so they can be
substituted in the test environment to simulate the runtime controls.
</p>
<p>
The only slight hindrance I came across was that
when you attempt to use the WPF controls in NUnit you get the exception
"InvalidOperationException : The calling thread must be STA".
The controls insist on running on an STA thread, which requires a .config file for
the test assembly containing the following:
</p>
<code><pre>
<NUnit>
    <TestRunner>
        <add key="ApartmentState" value="STA" />
    </TestRunner>
</NUnit>
</pre></code>
<h3>Passive View</h3>
<p>
As previously, the view is completely dumb. It contains no logic, just the controls
that will be manipulated by the controller. The XAML is kept simple with mostly positioning
logic.
</p>
<code><pre>
<StackPanel x:Name="LayoutRoot"
            Background="White">
    <TextBlock>
        Demonstration of Passive View to test
        Silverlight client
    </TextBlock>
    <Button x:Name="ShowMessage"
            Content="Show Message"
            Width="150" Margin="20"/>
    ...
</pre></code>
<p>
I exposed the controls as public fields on the View class. These were populated
in the 'Loaded' event (note, this only fires in the Silverlight runtime - not the tests),
then the controller was wired up:
</p>
<code><pre>
public class MainView : UserControl
{
    public Button ShowMessage;
    public TextBlock Message;
    ...

    public MainView()
    {
        Loaded +=
            new RoutedEventHandler(UserControl_Loaded);
    }

    private void UserControl_Loaded(object sender,
                                    RoutedEventArgs e)
    {
        ShowMessage = (Button)FindName("ShowMessage");
        Message = (TextBlock)FindName("Message");
        ...
        new MainController(this);
        ...
</pre></code>
<p>
During the tests, a testable view class simply populates the fields with
'empty' controls, allowing the controller logic to be tested without
requiring a full presentation runtime.
</p>
<code><pre>
public class TestableView : MainView
{
    public TestableView()
    {
        ShowMessage = new Button();
        Message = new TextBlock();
        ...
        new MainController(this);
        ...
</pre></code>
<h3>Migration from Windows Forms</h3>
<p>
The only remaining task was to migrate the namespaces to WPF from Windows Forms.
Mostly this was just a search/replace exercise (with the compiler telling me everywhere
something had changed):
<ul>
<li>Namespace moved to System.Windows;</li>
<li>Enabled property is now IsEnabled property;</li>
<li>MessageBox.Show has slightly fewer options;</li>
<li>Setting colours now uses a Brush class;</li>
<li>etc., ...</li>
</ul>
</p>
<p>
With just a little more work, you could wrap each control in a common interface
(e.g., IButton, IComboBox), then the same controller could be used to run
a view on multiple platforms (Silverlight/WPF/Windows Forms).
You might even be able to use the controller to run a web application using a
framework like
<a target="_blank" href="http://www.visualwebgui.com/">Visual WebGui</a>
(which makes coding a web application magically like coding a Windows Forms application).
</p>
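<p>
A minimal sketch of what such a common interface might look like (the names
IButton and ButtonAdapter here are illustrative - they are not part of the
demo code):
</p>
<code><pre>
public interface IButton
{
    bool IsEnabled { get; set; }
    event EventHandler Clicked;
}

// Adapter wrapping the Silverlight/WPF Button
public class ButtonAdapter : IButton
{
    private readonly Button _button;

    public ButtonAdapter(Button button)
    {
        _button = button;
        _button.Click += (s, e) =>
            Clicked(this, EventArgs.Empty);
    }

    public bool IsEnabled
    {
        get { return _button.IsEnabled; }
        set { _button.IsEnabled = value; }
    }

    public event EventHandler Clicked = delegate { };
}
</pre></code>
<p>
A Windows Forms adapter would map IsEnabled to the Enabled property, and the
tests could use a trivial stub implementation instead of a real control.
</p>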
<p>
For a larger application I would definitely do this extra work - it is bound to pay for
itself at some point, not least because you could have stubbed control implementations
in the tests instead of the 'real' ones.
</p>
<h3>Summary</h3>
<p>
The modifications to move the demo from Windows Forms to Silverlight were
a rewriting of the View (inevitable), and some minor syntax changes in the controller.
</p>
<p>
On a larger application the ability to keep 90% of your code (and the tests!) when migrating to another
platform is invaluable, and Passive View is the best way I've seen of writing (and testing)
the presentation layer.
</p>
<br>
<p>
<a target="_blank" href="http://rgb-playground.googlecode.com/files/SilverlightPassiveViewDemo.zip">Silverlight Passive View Demo</a>
<br>
<i>download, unzip, run CommandPrompt.bat, and type 'NAnt'</i>
</p>
<p>
<a target="_blank" href="http://silverlight.services.live.com/invoke/84306/SilverlightPassiveViewDemo/iframe.html">View Silverlight Passive View Demo Online</a>
</p>Richard Brownhttp://www.blogger.com/profile/00271013570243148544noreply@blogger.com0tag:blogger.com,1999:blog-33966934.post-13666281500872759182008-11-16T09:43:00.003+00:002008-11-16T09:53:24.055+00:00Bring me problems not solutions...Following Richard's "HTML is Crap" post, I got to thinking about its real underlying message. Yes, technology has moved on a lot in recent years, to the point where the old reasons for choosing those types of application [HTML applications] have largely gone. However, this isn't the first or the last time software people will have this frustration, and here's why: too many business people come with software solutions and not business problems!
I've no idea why this has happened, other than some mistaken idea that software is easy and business is hard. In my experience, when customers bring me software solutions rather than business problems, the solutions I'm able to deliver to them are invariably poorer than they should be.
So the next question is how can we stop people bringing us their half-baked solutions...Neil Loganhttp://www.blogger.com/profile/04189198374043005196noreply@blogger.com0tag:blogger.com,1999:blog-33966934.post-65415873146049685842008-11-13T15:30:00.002+00:002008-11-13T16:43:09.683+00:00HTML is Crap<p>
OK, so it's not crap. It's great for web sites, disseminating and formatting
information, and linking content across the entire world. In fact, it's pretty amazing.
But ... it <b>is</b> crap for writing applications.
</p>
<p>
Over the years we have jumped through all sorts of hoops to get round the (extreme) limitations
of HTML and browser technology. To overcome some of these limitations, we have invented ways of
maintaining user state: keeping cookies on browsers, adding cryptic information to URLs,
posting hidden values in forms, and restoring large state objects on the server.
</p>
<p>
We have invented increasingly complicated frameworks to allow us to split presentation logic
over two physical tiers with a (typically) compiled language running on the server while
Javascript manipulates widgets on the client, and then found more and more ingenious ways of making
this seem slick to the user. We have made code more complicated by the fact that these two
separate tiers have to communicate in some fashion, and we convince ourselves that the solution
is elegant.
</p>
<p>
As soon as we have to target more than one type of browser (or even different versions of
the same browser) it is
<a target="_blank" href="http://www.alistapart.com/articles/12lessonsCSSandstandards">not going to look the same</a>
on what are essentially different platforms.
(Not to mention the difficulties of actually writing and debugging for two versions of IE on the same machine.)
You would have to be nuts to choose a platform where the output will be unpredictable, right?!
</p>
<p>
<style>
.hic_person { text-align:right; vertical-align:top; font-style:italic; font-size:11px; font-family:verdana; }
.hic_statement { font-size:11px; font-family:verdana; }
</style>
A common argument with the customer might go something like this:
<table>
<tr>
<td class="hic_person">Programmer:</td>
<td class="hic_statement">You have high expectations of usability. It is best to use a rich client.</td>
</tr>
<tr>
<td class="hic_person">Customer:</td>
<td class="hic_statement">It needs to be a web (HTML) application.</td>
</tr>
<tr>
<td class="hic_person">Programmer:</td>
<td class="hic_statement">Um ... why?</td>
</tr>
<tr>
<td class="hic_person">Customer:</td>
<td class="hic_statement">Because that's what we want.</td>
</tr>
<tr>
<td class="hic_person">Programmer:</td>
<td class="hic_statement">You do realise it will take twice as long to write, and be less usable?</td>
</tr>
<tr>
<td class="hic_person">Customer:</td>
<td class="hic_statement">It needs to be a web (HTML) application.</td>
</tr>
</table>
Three months later the programmer is blamed for the product not being amazingly slick
and for being slightly behind the time-scales, while the customer insists that they <i>need</i>
their spelling mistakes underlined 'just like MS Word'!
</p>
<p>
Another common reason for a customer to insist on a web application is for ease of
deployment, but it is not hard to push rich applications to the desktop using technologies
like
<a target="_blank" href="http://en.wikipedia.org/wiki/ClickOnce">ClickOnce</a> these days either.
</p>
<p>
The people who most often have a genuine need to target such a low common denominator are those
writing applications for the whole Internet (e.g., Amazon, eBay, etc.).
But even then it is clear that for years the best-looking stuff on the web has been Flash content.
</p>
<p>
<style>
.hic_indent { padding-left:15px; }
</style>
In summary:
<div class="hic_indent">Choose WPF;</div>
<div class="hic_indent">Choose Windows Forms;</div>
<div class="hic_indent">Choose Silverlight;</div>
<div class="hic_indent">Choose Flash;</div>
<div class="hic_indent">Just don't choose HTML.</div>
</p>
<p>
Choosing to write an HTML application, especially when you are going to deploy in a company intranet,
is nothing short of insanity these days ... it's long past the time for developers to make customers realise
this.
</p>Richard Brownhttp://www.blogger.com/profile/00271013570243148544noreply@blogger.com4tag:blogger.com,1999:blog-33966934.post-76572610902114158242008-10-24T13:27:00.006+01:002008-10-29T18:05:05.915+00:00Test Compact Framework Code with Passive View<h3>The Challenge</h3>
<p>
You have a Compact Framework application to write. You've chosen to make it a rich client
(Windows Forms), and you want a suite of automated tests covering the code.
</p>
<h3>Passive View</h3>
<p>
I investigated the options for testing directly on the Compact Framework itself, but this turns out to be tricky
and ties you to running inside the emulator (a virtual machine).
</p>
<p>
However the Compact Framework uses a .Net feature called
<a target="_blank" href="http://msdn.microsoft.com/en-us/magazine/cc163387.aspx">retargeting</a>.
This allows your Compact Framework code to be run in the full .Net environment as long as it doesn't reference
any classes that are unique to the Compact Framework (e.g., the SIP).
</p>
<p>
If you use <a target="_blank" href="http://martinfowler.com/eaaDev/PassiveScreen.html">Passive View</a>,
you can take your controller class and test it in the regular .Net
Framework using NUnit leaving only a tiny bit of code in the View that is specific to the
Compact Framework.
</p>
<h3>Creating the View</h3>
<p>
In the example code below I created a simple GUI that would show or hide a message,
allow the user to change the colour of the message from a drop-down, and would prompt
the user for confirmation before hiding the message. This provides a simple enough example
to cover in a blog, while being complex enough to demonstrate several useful techniques
for testing using Passive View.
</p>
<p>
The view can be created directly in Visual Studio (with the Windows Mobile 6 SDK installed).
The designer gives you an excellent rendition of a PDA client making it easy to design a
user interface (notwithstanding that WPF would have been nicer - but hasn't been implemented
on the Compact Framework yet).
</p>
<p>
To create the view:
<ul>
<li>Create a Form;</li>
<li>Create the controls;</li>
<li>Name the controls, and mark them with the modifier 'Public';</li>
<li>Wire up a controller when the View is instantiated.</li>
</ul>
</p>
<p>
<img src="http://3.bp.blogspot.com/_tKpxmnWzIIw/SQG_qnChr_I/AAAAAAAAAB0/KhuYqipokFo/s320/PassiveViewDemoDesigner.jpg"/>
</p>
<p>
To wire up a controller, create a controller class (which is going to hold all the
application's presentation logic), and construct it when the View is instantiated.
</p>
<code><pre>
internal class MainController
{
    private MainView _view;

    public MainController(MainView view)
    {
        _view = view;
        ...
</pre></code>
<code><pre>
public partial class MainView : Form
{
    public MainView()
    {
        InitializeComponent();
        <b>new MainController(this);</b>
        ...
</pre></code>
<p>
And that's it. No other logic should exist in the View.
</p>
<h3>Handling User Input</h3>
<p>
Most of the user input can be handled directly on the View in the tests.
To simulate someone typing into a text-box, simply set the Text property.
To simulate someone making a selection from a drop-down, set the SelectedIndex
property.
</p>
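<p>
For example (the control names here are illustrative):
</p>
<code><pre>
// simulate the user typing into a text-box
view.UserName.Text = "smith";

// simulate the user selecting the second
// item in a drop-down
view.ColourSelection.SelectedIndex = 1;
</pre></code>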
<p>
Simulating a button 'click' requires a tiny bit of reflection to invoke the protected method:
</p>
<code><pre>
private void Click(Button button)
{
    if (!button.Enabled)
        Assert.Fail("Attempt to click button '"
            + button.Text
            + "' while it is not enabled");

    MethodInfo onClick = button.GetType()
        .GetMethod("OnClick",
            BindingFlags.NonPublic | BindingFlags.Instance);
    onClick.Invoke(button, new object[] { null });
}
</pre></code>
<p>
You end up with a fairly readable test:
</p>
<code><pre>
[Test]
public void Test_WhenShowMessageIsClicked_Then_MessageIsDisplayed_And_ColourSelectionIsPopulated()
{
    MainView view = new TestableView();

    Click(view.ShowMessage);

    Assert.AreEqual(false,
        view.ShowMessage.Enabled);
    ...
</pre></code>
<h3>Handling the Visible Property</h3>
<p>
You can set most properties directly on the real framework classes. However,
some properties don't always behave correctly outside of the Windows Forms runtime.
The <i>Visible</i> property is one such case.
</p>
<p>
You could mock/stub the <i>Visible</i> property on the controls; however, these typically
get called many times in a single test, so mocking could make your tests very brittle.
Alternatively, you could have an adapter/proxy
for each of the controls that lets you simulate events/properties.
</p>
<p>
I find it easier to have a fake implementation I can use just inside the tests.
In this example I wrapped all calls to the <i>Visible</i> property with a (virtual) method:
</p>
<code><pre>
public partial class MainView : Form
{
    ...
    public virtual void SetVisible(
        Control control,
        bool isVisible)
    {
        control.Visible = isVisible;
    }

    public virtual bool IsVisible(Control control)
    {
        return control.Visible;
    }
    ...
</pre></code>
<p>
This was replaced with a fake implementation in the tests:
</p>
<code><pre>
public class TestableView : MainView
{
    ...
    private Dictionary<Control, bool> _controlVisibility
        = new Dictionary<Control, bool>();

    public override void SetVisible(
        Control control,
        bool isVisible)
    {
        _controlVisibility[control] = isVisible;
    }

    public override bool IsVisible(Control control)
    {
        if (!_controlVisibility.ContainsKey(control))
            Assert.Fail("Control '" + control.Name
                + "' visibility has not been set by SetVisible()");
        return _controlVisibility[control];
    }
    ...
</pre></code>
<h3>Handling User Interaction</h3>
<p>
Another thing that needs to be tested is user interaction (e.g., clicking yes/no
on a message box). While you could use a test double/stub
for this, it does tend to get a little tedious (you typically end up with a different stub method for every test).
</p>
<p>
It is better to mock the user input by (in this case) creating a proxy to the message box class instead of going to
the (hard to test) static methods directly:
</p>
<code><pre>
public class DialogHandler
{
    public virtual DialogResult ShowMessageBox(
        string text,
        string caption,
        MessageBoxButtons buttons,
        MessageBoxIcon icon,
        MessageBoxDefaultButton defaultButton)
    ...
</pre></code>
<p>
In this example I've used the excellent
<a target="_blank" href="http://ayende.com/projects/rhino-mocks.aspx">Rhino Mocks</a>
to mock the DialogHandler implementation.
So you can set expectations, and their responses, directly in the tests:
</p>
<code><pre>
[Test]
public void Test_WhenHideMessageIsClicked_AndUserConfirms_Then_MessageIsHidden()
{
    MockRepository mocks = new MockRepository();
    DialogHandler dialogHandler =
        mocks.StrictMock<DialogHandler>();
    MainView view = new TestableView(dialogHandler);

    Expect
        .Call(dialogHandler
            .ShowMessageBox(
                "Are you sure?",
                "Check",
                MessageBoxButtons.YesNo,
                MessageBoxIcon.Question,
                MessageBoxDefaultButton.Button1))
        .Return(DialogResult.Yes);
    mocks.ReplayAll();

    Click(view.ShowMessage);
    Click(view.HideMessage);

    mocks.VerifyAll();
    ...
</pre></code>
<h3>Summary</h3>
<p>
Now you have the highest possible test-coverage on your presentation layer, without resorting to
(heavyweight) full integration/acceptance tests.
</p>
<p>
The example code (link below) demonstrates each of the above sections using:
<ul>
<li>
<a target="_blank" href="http://nant.sourceforge.net/">NAnt</a>
and
<a target="_blank" href="http://www.nunit.org/index.php">NUnit</a>
- to automate the build and tests;
</li>
<li>
<a target="_blank" href="http://msdn.microsoft.com/en-us/magazine/cc163387.aspx">Retargeting</a>
- to allow the Compact Framework classes to be tested inside the regular .Net Framework;
</li>
<li>
<a target="_blank" href="http://martinfowler.com/eaaDev/PassiveScreen.html">Passive View</a>
- to allow testing of as much of the application as possible while not requiring the Windows
Forms runtime;
</li>
<li>
<a target="_blank" href="http://xunitpatterns.com/Test%20Double.html">Test Double</a>
- (specifically a
<a target="_blank" href="http://xunitpatterns.com/Fake%20Object.html">fake</a>
) to allow testing of properties that normally only work in the Windows Forms runtime;
</li>
<li>
<a target="_blank" href="http://xunitpatterns.com/Mock%20Object.html">Mocks</a>
- using the excellent
<a target="_blank" href="http://ayende.com/projects/rhino-mocks.aspx">Rhino Mocks</a>
to test user interaction in the tests.
</li>
</ul>
</p>
<p>
Worth noting is that there is nothing here that is special to the Compact Framework. I would use
exactly the same technique to test any presentation layer (e.g., WPF/Silverlight).
The Compact Framework is merely a useful
demonstration of where the <i>real</i> runtime is not easy to use, and these design patterns
and testing techniques shine.
</p>
<br>
<p>
<a target="_blank" href="http://rgb-playground.googlecode.com/files/PassiveViewDemo.zip">Passive View Demo</a>
<br>
<i>download, unzip, run CommandPrompt.bat, and type 'NAnt'</i>
</p>Richard Brownhttp://www.blogger.com/profile/00271013570243148544noreply@blogger.com0tag:blogger.com,1999:blog-33966934.post-36125559966888416832008-09-30T14:29:00.003+01:002008-09-30T15:42:18.102+01:00Working effectively with legacy code - Zero Warnings Are Acceptable<p>In my continuing adventures with legacy code I've inherited a code base that initially had over 500 compiler warnings. Now many of these warnings were pretty straightforward to fix but some of them are proving a little more interesting. My current personal favourite is <a href="http://msdn.microsoft.com/en-us/library/f6dtw2ah(VS.80).aspx" target="_blank">CS0252</a>. </p><p>I've got around half a dozen places where it's really unclear as to exactly what the original programmer was intending to do. Was he/she really meaning to do a reference comparison? Unlikely. So now I've got an interesting predicament. Do I change the code to remove the warning or do I leave the code as is and accept that as it's a legacy system the code must work fine. Safety (and sadly professionalism) must win and in these situations I must leave the code as is until I get round to covering the relevant code with tests. However it's like a real stone in my shoe and I've got to ask: wouldn't it have been so much nicer if the original programmer simply looked at the warning and thought, "I don't think that's quite right...."?</p><p>So the moral of the story is please pay attention to your compiler warnings. They'll help you identify potential bugs in your code and they might even help write code that's easier to understand and consequently more maintainable.</p>Neil Loganhttp://www.blogger.com/profile/04189198374043005196noreply@blogger.com0tag:blogger.com,1999:blog-33966934.post-22797987064535427092008-09-30T14:11:00.002+01:002008-09-30T15:06:49.100+01:00Working effectively with legacy code - Egoless Programming<p>In my professional life I'm finding it's more and more common for me to be working with a legacy code base. 
Whether that's because a system we've written is simply long-standing and has become legacy or whether it's because we've inherited a system to support and maintain, the problems you encounter are pretty similar. However, I think it's worth pointing out one of the major differences, and why it's sometimes better to be involved in the latter situation rather than the former.</p><p>I'm a big fan of <a href="http://www.codinghorror.com/blog/archives/000584.html" target="_blank">egoless programming</a> but trying to separate people from their code can be hard to achieve. Indeed, if you were responsible for creating the system that has become legacy it can be extremely difficult to be entirely honest with yourself about what state the system is in. Now that's not to say people cannot admit their own mistakes, just that sometimes because your head is filled with all the history of why certain decisions were taken it can be difficult to look at things entirely objectively. Ultimately, the system is where it is and how it got there is relatively unimportant. </p><p>As it transpires though, I'm currently involved in working with a legacy system where I've inherited the code. The original developers are gone and whilst their experience and skills are most definitely missed, their absence allows a certain amount of freedom in moving things forward. </p><p>I think being able to look at a system without the baggage that often builds up helps in two ways:</p><ol><li>You don't have any pre-existing priorities about the things you want to fix - I know that whenever I deliver any system there are always things I'm not happy with and this can sometimes cloud my judgement about what is really important. </li><li>You don't worry about upsetting anyone - I know from painful experience that developers can be sensitive souls and sometimes they simply cannot be separated from their code. 
</li></ol><p>So from my experience I'm convinced that it's always nicer not to have had a hand in the original system development if you're going to be moving it forward...</p>Neil Loganhttp://www.blogger.com/profile/04189198374043005196noreply@blogger.com0tag:blogger.com,1999:blog-33966934.post-40667819694157551302008-03-18T09:53:00.001+00:002008-03-27T12:56:12.329+00:00Using NAnt with Visual Studio<h3>NAnt vs MSBuild</h3>
<p>
I like to use NAnt for building my .Net projects.
</p>
<p>
I am not a huge fan of the IDE (Visual Studio),
but I am well aware that I am in the minority here.
</p>
<p>
So if I plan to use NAnt on a project,
I've got to do it in such a way that the other developers can still use the features of the IDE, mainly
intelli-sense and the Error List window.
</p>
<p>
I've been using a very small .csproj file (described below) which calls out to
NAnt, and a small custom MSBuild task to log errors so that the IDE will display them correctly.
</p>
<h3>Adding the source-code</h3>
<p>
Starting with a blank project file, you can add all the C# files with a single MSBuild entry like this:
</p>
<code><pre>
<Compile Include="**\*.cs" />
</pre></code>
<p>
And that's it! Now you can browse all your C# code using the solution explorer.
The intelli-sense will recognise your classes and start auto-expanding as you type.
</p>
<p>
I typically add nodes for each file-type I want to be able to edit: .cs, .build, .xml, .config, etc.
</p>
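<p>
For example (the exact patterns will vary by project; the None item type keeps
non-compiled files visible in the solution explorer):
</p>
<code><pre>
<ItemGroup>
    <Compile Include="**\*.cs" />
    <None Include="**\*.build" />
    <None Include="**\*.xml" />
    <None Include="**\*.config" />
</ItemGroup>
</pre></code>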
<h3>Intelli-Sense</h3>
<p>
The IDE will use intelli-sense for the classes defined in your C# code files. You also want it to pick
up external references. Add a line for each reference you need intelli-sense for in another ItemGroup:
</p>
<code><pre>
<Reference Include="NHibernate" />
</pre></code>
<p>
The IDE will look for the assemblies under the path defined in the <OutputPath> node
near the top of your project file. If you want, you can add a <HintPath> sub-element:
</p>
<code><pre>
<Reference Include="NHibernate">
    <HintPath>SDKs\NHibernate\NHibernate.dll</HintPath>
</Reference>
</pre></code>
<p>
Now you'll have intelli-sense for your referenced assemblies too.
</p>
<h3>Running NAnt</h3>
<p>
When Visual Studio builds it runs the 'DefaultTargets' for the project (usually an MS-defined target named 'Build').
When Visual Studio cleans it runs a target named 'Clean', and when it re-builds it runs the
target named 'Rebuild'.
</p>
<p>
You can change the 'DefaultTargets' for the project at the top of the project file:
</p>
<code><pre>
<Project DefaultTargets="NAntBuild" ...
</pre></code>
<p>
Then add a target that uses the Exec task to call NAnt:
</p>
<code><pre>
<Target Name="NAntBuild">
    <Exec Command="SDKs\nant-0.85\bin\NAnt.exe" />
</Target>
</pre></code>
<p>
Now the solution will build using NAnt. Add targets for 'Clean' and 'Rebuild' too
if you want to do these via NAnt.
</p>
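<p>
A sketch of what those extra targets might look like, assuming the NAnt build
file exposes matching 'clean' and 'rebuild' targets:
</p>
<code><pre>
<Target Name="Clean">
    <Exec Command="SDKs\nant-0.85\bin\NAnt.exe clean" />
</Target>

<Target Name="Rebuild">
    <Exec Command="SDKs\nant-0.85\bin\NAnt.exe rebuild" />
</Target>
</pre></code>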
<p><i>
Note, when you customise your targets the IDE will give you a security warning.
Since you customised it yourself, you can safely ignore this warning.
</i></p>
<h3>Error List</h3>
<p>
So far this works, but build errors are only found by examining the output window.
It would be nice to have errors reported in the 'Error List' window so that you can just
double-click them.
</p>
<p>
We wrote a custom MSBuild task called ExecParse, which inherits from 'Exec', to allow parsing
of the output and logging of errors to the MSBuild engine.
</p>
<p>
The configuration for ExecParse takes a
<a target="_blank" href="http://msdn2.microsoft.com/en-us/library/1400241x(VS.85).aspx" >regular expression</a>
to search the output for, and allows logging of errors using text from captures in the regular expression.
</p>
<p>
A typical compiler error looks like:
</p>
<code><pre><div style="display:inline;" nowrap="true">
[csc] c:\...\MyFile.cs(24,60): error CS0246: ...
</div></pre></code>
<p>
So we can create a regular expression that captures this, and log an error
using the MSBuild engine:
</p>
<code><pre>
<UsingTask AssemblyFile="...\ExecParse.dll"
           TaskName="ExecParse.ExecParse" />
...
<PropertyGroup>
    <ExecParseConfiguration>
        <Error>
            <Search>\[csc\] (.*?)\((\d+),(\d+)\): error ([^:]*): (.*?)[\n\r]</Search>
            <File>$1</File>
            <LineNumber>$2</LineNumber>
            <ColumnNumber>$3</ColumnNumber>
            <Subcategory>$4</Subcategory>
            <Message>$5</Message>
        </Error>
    </ExecParseConfiguration>
</PropertyGroup>
...
<Target Name="NAntBuild">
    <ExecParse Command="SDKs\nant-0.85\bin\NAnt.exe"
               Configuration="$(ExecParseConfiguration)" />
</Target>
</pre></code>
<p>
Now you have a NAnt build, building within the IDE, using intelli-sense,
and logging errors to the 'Error List' window.
</p>
<br>
<h3>Some links:</h3>
<p>
<ul>
<li>
<a target="_blank" href="http://atlanta.googlecode.com/svn/trunk/Atlanta.csproj">A complete example project file</a>
</li>
<li>
<a target="_blank" href="http://code.google.com/p/atlanta/">A (usually) working project on GoogleCode</a>
</li>
<p></p>
<li>
<a target="_blank" href="http://code.google.com/p/execparse">ExecParse project home</a>
</li>
<li>
<a target="_blank" href="http://code.google.com/p/execparse/downloads/list">ExecParse download</a>
</li>
</ul>
</p>Richard Brownhttp://www.blogger.com/profile/00271013570243148544noreply@blogger.com1tag:blogger.com,1999:blog-33966934.post-70821309912587009252008-01-03T13:04:00.002+00:002008-04-01T15:47:57.149+01:00NHibernate, Identity Map, and Proxies<h3>Introduction</h3>
<p>
NHibernate uses an <a href="http://martinfowler.com/eaaCatalog/identityMap.html" target="_blank">Identity Map</a>
to maintain a single instance of each persistent entity in memory.
This allows you to run round a domain model without having to worry about <i>which</i> instance of
an entity you are modifying.
</p>
<p>
NHibernate also uses <a href="http://martinfowler.com/eaaCatalog/lazyLoad.html" target="_blank">Lazy Load</a>.
The default implementation for this uses
<a href="http://www.castleproject.org/dynamicproxy/index.html" target="_blank">Dynamic Proxy</a> (DP)
to generate a class at runtime that inherits from your real class. This generated class intercepts all
of the overridable methods and properties in your class to load the instance from the database if required.
</p>
<h3>What Identity Map Allows</h3>
<p>
Identity Map allows you to load up the same object twice but to have your references point to the same
underlying instance.
</p>
<p>
The following example NUnit test demonstrates loading three objects: two with an ID
of 1, and one with an ID of 2:
</p>
<pre><code>
[Test]
public void TestIdentityMap()
{
    MyClass myInstance1 = Session.Load<MyClass>(1);
    MyClass myInstance2 = Session.Load<MyClass>(1);
    MyClass myInstance3 = Session.Load<MyClass>(2);
    Assert.AreSame(myInstance1, myInstance2);
    Assert.AreNotSame(myInstance1, myInstance3);
}
</code></pre>
<p>
The following diagram shows how the references relate to the objects created on the heap.
</p>
<p>
<img src="http://freespace.virgin.net/richard.brown0308/NHibernateObjectReferences1.gif" />
</p>
<p>
The Identity Map ensures that there is only one instance of the object with ID equal to 1 in memory.
</p>
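<p>
The bookkeeping behind this behaviour is conceptually simple. As a toy illustration (this is
<i>not</i> NHibernate's implementation; the class and method names are invented for the sketch),
an identity map is little more than a dictionary keyed by ID:
</p>

```csharp
using System;
using System.Collections.Generic;

// Toy identity map: at most one in-memory instance per ID.
// Illustration only - not NHibernate's actual implementation.
public class IdentityMap<T> where T : class, new()
{
    private readonly Dictionary<int, T> _loaded = new Dictionary<int, T>();

    public T Load(int id)
    {
        T instance;
        if (!_loaded.TryGetValue(id, out instance))
        {
            // A real ORM would populate the instance from the database here.
            instance = new T();
            _loaded[id] = instance;
        }
        return instance;
    }
}
```

<p>
Loading the same ID twice returns the same reference, which is exactly what the test above asserts.
</p>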
<h3>Under the hood of NHibernate</h3>
<p>
The basic principle of Identity Map is easy to pick up. However, there is a further complication because
of the implementation of the proxies used for Lazy Load.
</p>
<p>
When you ask NHibernate for an object, it returns you an instance of a class created at runtime by DP.
This class intercepts any overridable methods/properties and redirects them to a second
object instance that has the 'real' class type.
</p>
<p>
So in the above example, the following is actually created on the heap:
</p>
<p>
<img src="http://freespace.virgin.net/richard.brown0308/NHibernateObjectReferences2.gif" />
</p>
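<p>
In code terms, the generated proxy behaves roughly like the hand-written sketch below (illustrative
only; the real class is generated by DP at runtime and is considerably more involved):
</p>

```csharp
using System;

public class MyClass
{
    public virtual bool IsMe(MyClass other) { return other == this; }
}

// Roughly what a generated lazy-load proxy does: forward overridable
// calls to a second, 'real' instance that is loaded on first use.
public class MyClassProxy : MyClass
{
    private MyClass _target;

    public override bool IsMe(MyClass other)
    {
        if (_target == null) _target = LoadFromDatabase();
        // The call is forwarded, so inside the real instance 'this'
        // refers to _target, not to the proxy the caller holds.
        return _target.IsMe(other);
    }

    private MyClass LoadFromDatabase() { return new MyClass(); }
}
```

<p>
This forwarding is what creates the two-instance structure shown in the diagram, and it is the root
cause of the identity problems described next.
</p>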
<h3>Limitations of the current implementation</h3>
<p>
Since there are really two instances created on the heap when NHibernate gives you a proxy,
there is an identity problem that you could run into.
</p>
<h4>Problem 1, the 'this' pointer:</h4>
<p>
Consider the following class:
</p>
<pre><code>
public class MyClass
{
    public virtual bool IsMe(MyClass anotherMyClass)
    {
        return anotherMyClass == this;
    }
}
</code></pre>
<p>
The following test will fail because the reference myInstance points to the proxy,
while the 'this' pointer inside IsMe refers to the 'real' instance:
</p>
<pre><code>
[Test]
public void TestIsMe()
{
    MyClass myInstance = Session.Load<MyClass>(1);
    Assert.IsTrue(myInstance.IsMe(myInstance));
    // The above line will fail
}
</code></pre>
<p>
It is easy to imagine an example where a child entity notifies its parent of an event,
and the parent then updates all its children except the child that initiated the event.
The parent or child might have to be careful about reference comparison in such an example.
</p>
<h4>Problem 2, private methods/fields:</h4>
<p>
NHibernate relies on DP intercepting access to an instance of a class and redirecting
those calls to the 'real' instance. However, DP cannot intercept access to private fields
and methods.
</p>
<p>
Situations where this could occur might be:
<ol>
<li style="MARGIN-BOTTOM: 10px">sibling objects of a common parent object that communicate with each other;</li>
<li style="MARGIN-BOTTOM: 10px">parent/child objects of the same type that communicate with each other (e.g., a tree);</li>
<li style="MARGIN-BOTTOM: 10px">static methods that reference an instance of the same type.</li>
</ol>
</p>
<p>
Consider the following class:
</p>
<pre><code>
public class MyClass
{
    private int _calculation = -1;

    public virtual int Calculation
    {
        get { return _calculation; }
    }

    public static void SetCalculation(
        MyClass instance,
        int calculation)
    {
        instance._calculation = calculation;
    }
}
</code></pre>
<p>
The following test will fail because the static method sets the field on the proxy object
rather than on the 'real' instance,
while the property accessor is intercepted and redirected to the 'real' instance:
</p>
<pre><code>
[Test]
public void TestCalculation()
{
    MyClass myInstance = Session.Load<MyClass>(1);
    MyClass.SetCalculation(myInstance, 7);
    Assert.AreEqual(7, myInstance.Calculation);
    // the above line will fail
}
</code></pre>
<p>
It is not uncommon for a class to have an interface for talking to other instances of itself
while keeping that interface private from outsiders.
</p>
<h3>Conclusion</h3>
<p>
I suspect NHibernate could be improved
to make the proxies self-proxy. If this could be done then there would be no second instance created,
and no potential identity problem.
</p>
<p>
To avoid problems with the 'this' pointer, simply try to avoid using it for comparison with another
object.
</p>
<p>
To avoid problems with one instance of a class talking to another instance of the same class,
prefer virtual methods over non-virtual ones to allow Dynamic Proxy to
intercept any calls made on the instance.
(i.e., avoid accessing private fields or methods of one instance from another instance of the same class.)
</p>
<p>
A complete example of the above, with failing NUnit tests, can be downloaded here:
<a href="http://freespace.virgin.net/richard.brown0308/NHibernateProxyDemo.zip" target="_blank">NHibernateProxyDemo.zip</a>
</p>Richard Brownhttp://www.blogger.com/profile/00271013570243148544noreply@blogger.com1tag:blogger.com,1999:blog-33966934.post-1170091441739877632007-01-29T17:23:00.000+00:002007-01-29T17:24:01.750+00:00NAntScript<h3>NAnt Tasks</h3>
<p>
NAnt allows you to combine tasks to automate your build process. Tasks can be grouped together into
targets, and targets can call other targets (much like functions/methods in any other language).
</p>
<p>
If you need to do something that cannot be done by an existing task, you have two options:
<ol>
<li>Create a target that combines existing tasks;</li>
<li>Write a custom NAnt task.</li>
</ol>
</p>
<p>
Option 1 has some limitations: you can create a target, give it a name, and call it in your
script using <code><call target="myTarget"/></code>. However, a target cannot declare a list of
parameters, so you have to rely on setting agreed-upon properties before calling the target
if you want to pass information to it.
</p>
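<p>
For example (the target and property names below are invented for illustration), passing a value
into a target relies on an informal convention between caller and callee:
</p>

```xml
<!-- NAnt targets take no parameters, so the caller sets an agreed property first. -->
<property name="deploy.dir" value="builds\latest" />
<call target="deploy" />

<target name="deploy">
    <!-- The target reads the property the caller was expected to set. -->
    <echo message="Deploying to ${deploy.dir}" />
</target>
```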
<p>
Option 2 allows you to create a new task for NAnt. Custom tasks are considerably more effort to write
and maintain because they require knowledge of a .NET language (e.g., C#). If you want to do
something that cannot be done by combining existing tasks, however, they are probably
your only option.
</p>
<p>
Sometimes you want to write a task that <i>can</i> be done with existing tasks (e.g., check whether a target
is up to date; if it isn't, generate some code, compile it, and register the assembly
in the GAC), but without resorting to C#.
We thought a middle ground would be nice, so we wrote a couple of custom NAnt tasks that allow
you to create new custom NAnt tasks that are written using regular NAnt script.
</p>
<h3>Enter NAntScript</h3>
<p>
NAntScript is an open source project we wrote to allow NAnt users to script custom NAnt tasks using only
existing NAnt tasks (no C# required).
</p>
<p>
Two custom tasks are provided:
<ul>
<li><b>taskdef</b> - defines a scripted custom task;</li>
<li><b>tdc</b> - compiles a collection of taskdef nodes in files into an assembly.</li>
</ul>
</p>
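<p>
As a rough sketch of the idea (the attribute names below are guesses for illustration only; see the
project pages below for the actual syntax), a scripted task is defined with <b>taskdef</b> and then
compiled into a loadable task assembly with <b>tdc</b>:
</p>

```xml
<!-- Hypothetical sketch only - consult the NAntScript documentation for the real syntax. -->
<taskdef name="saygreeting">
    <!-- The task body is ordinary NAnt script. -->
    <echo message="Hello from a scripted task" />
</taskdef>

<!-- Compile the taskdef definitions into an assembly that NAnt can load. -->
<tdc output="ScriptedTasks.dll" />
```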
<p>
Now you can write custom NAnt tasks without resorting to C#.
</p>
<br>
<p>
<ul>
<li>
<a target="_blank" href="http://code.google.com/p/nantscript">NAntScript project home</a>
</li>
<li>
<a target="_blank" href="http://code.google.com/p/nantscript/downloads/list">NAntScript downloads</a>
</li>
</ul>
</p>Richard Brownhttp://www.blogger.com/profile/00271013570243148544noreply@blogger.com0tag:blogger.com,1999:blog-33966934.post-1159959422202085502006-10-04T11:55:00.000+01:002006-10-04T14:44:41.123+01:00What is Analysis?<h3>Introduction</h3>
<p>
Every software project needs to gather requirements, perform analysis and design, implement, and
finally deliver a working system.
</p>
<p>
These days we have a lot of tools to get us through the implementation and delivery, from
TDD through to automated installers (e.g., WiX). We have a maturing vocabulary of Design Patterns to aid communication
in the design phase. However, analysis is often left as a hand-waving exercise with
less clearly defined tools and methods.
</p>
<h3>Gathering Requirements</h3>
<p>
All too often, requirements gathering is merely a verbatim recording of wish
lists dictated by the customer. These requirements are often recorded by
people with lofty titles such as Business Analyst; ironic, considering the documents
rarely contain much, if any, analysis of the problem.
</p>
<p>
The requirements lists are usually driven out of a more traditional waterfall
style development, and serve two main purposes:
<ul>
<li>define the scope of the functionality;</li>
<li>define a contract.</li>
</ul>
</p>
<p>
The first of these points (scope) cannot be defined up-front. Software development
is now recognised as new product development, and not a manufacturing
process. Requirements change. Always. The way to confront this problem is to
gather a few requirements at a time, implement, deliver, and iterate.
</p>
<p>
The second point is born out of a naïve company's need to be able to prove
that what they have delivered is legally acceptable, despite the fact it will
invariably not be what the customer wants. The preference should of course be for
"customer collaboration over contract negotiation" [Agile].
</p>
<p>
The interesting question to ask when confronted by one of these bullet-list style
requirements documents is "how does this help me solve the problem?" These documents
rarely (if ever) contribute to a solution; instead they allow nervous project managers
to relax, content in the mistaken belief that the solution can be proven correct.
</p>
<h3>Turn to an Alternative</h3>
<p>
So we know to avoid requirements lists. We are educated enough to use a more modern
approach to documenting requirements such as use cases, user stories, scenarios, etc.
Typically I choose use cases. All we need to do is describe the steps that an actor
performs to complete a task, <i>then</i> we have accurately described our requirements, right?
</p>
<p>
Well, perhaps ... but ... it's still not necessarily analysis.
</p>
<p>
These could still essentially
be verbatim recordings of the customer's description of how a given actor performs
a task. But if that's all that is recorded, then you've missed a trick: the opportunity
to actually analyse the problem and start down the road to a solution.
</p>
<p>
<i>Note: I am not suggesting that these use cases are useless, just that they could be so
much more. If someone's already done this work then it can still be useful for verifying
that delivered software actually solves the problem at hand. However, without analysis,
they still don't provide progress to a solution.</i>
</p>
<h3>Ubiquitous Language</h3>
<p>
The task of analysis is to define the objects in your proposed model. Rather than defining
them and then forgetting them, they should become the nouns that you continually use when discussing
functionality.
</p>
<p>
Use these nouns when discussing requirements with the customer. Use them when writing your use cases.
Use them in every technical note, and every phone call.
In short, use them everywhere; they then become the Ubiquitous Language [Evans] of the system.
</p>
<p>
Building the analysis model is a creative process, and as such does not have easily defined rules.
Often the customer will already have names for items in the problem domain; these might translate
directly to objects in the analysis model. Other times the customer might
have several different names for related items, and it is the job of the analyst to abstract these to
a single object in the analysis model. Often the customer will have concepts that require a new
noun to be invented. These are the tools of the analyst: mapping, abstraction, and invention.
</p>
<p>
In addition to being used everywhere, the Ubiquitous Language should be persisted along with any other documentation.
It may be defined using a UML class diagram. It may be defined using a dictionary/glossary style
document. It may be a combination of a class diagram and supporting text. What is important is that
the definitions are concrete, and presentable to the customer.
</p>
<p>
Once the initial model is in place, the use cases can be fleshed out. It is important that the use cases
use the language of the analysis model. This starts to extract the actions that can be performed
on the objects, and starts to highlight the relationships and constraints between them.
</p>
<p>
The process of defining the model, the objects and their relationships, the properties of the objects,
the actions that can be performed on them, and the constraints on their use - <i>that's</i> analysis.
</p>
<h3>A Simple Example</h3>
<p>
Suppose we take a look at the process of hill-starting a car (we'll assume for simplicity it's an
automatic).
</p>
<p>
The use case (recorded verbatim) might read:
<ul>
<li>Actor starts the car;</li>
<li>Actor secures the car, and puts it in gear;</li>
<li>Actor checks there's nothing coming;</li>
<li>Actor allows the car to move off.</li>
</ul>
</p>
<p>
<i>Okay, so it's a slightly artificial example - but that's only obvious to us because we know what a
car is, and how it works. If we'd never seen or heard of a car before, then we wouldn't know whether
these steps read appropriately or not.</i>
</p>
<p>
This use case describes the process of getting the car moving, but does not bring us any closer
to designing a car. Instead we should be analysing the scenario to try and extract the key objects
in the domain.
</p>
<p>
If we start asking questions as to how someone starts the car, we might come up with the idea that a car
has a "starting component" - we might even make the jump to calling it an "Ignition". We now have our second
object in the domain (the first was a Car), we have a relationship defined (a Car 'has an' Ignition), we
might define the actions (activate?), and a constraint (the Ignition can only be activated when the Car
is not already started).
</p>
<p>
We might very quickly end up with a rich new language (Car, Ignition, Brake, Gear Stick, ...), the relationships between
them (a Car 'has a' Brake), some constraints (the Gear Stick is in one of park, drive, ...), and the actions that
can be performed on them (the Brake can be 'applied'). If we'd never seen a car before, the analyst might
need to invent a new name for an item - although we're comfortable with the term nowadays, do you think the
first person to hear the pedal called an "Accelerator" thought it was an ideal name? I doubt it, but it
has become an accepted part of the Ubiquitous Language of driving a Car.
</p>
<p>
The use case might now read:
</p>
<ul>
<li>Actor activates the Ignition;</li>
<li>Actor applies the Brake, and moves the Gear Stick from park to drive;</li>
<li>Actor views the Mirrors; (needs an exception scenario for when they're not clear)</li>
<li>Actor releases the Brake, and applies the Accelerator.</li>
</ul>
<p>
Notice the nouns are capitalised. Notice there are clearly defined verbs on the nouns. We are now a step closer
to building a car; we know what a car has, and can start to design some of its components. We have something
that is giving us a step towards a solution.
</p>
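<p>
To see why this matters to a designer, the nouns, verbs, and constraints above map almost directly
onto types. The following C# sketch is purely illustrative (the class and member names are invented
from the example, not taken from any real system):
</p>

```csharp
using System;

// Illustrative sketch: nouns of the Ubiquitous Language become types,
// verbs become methods, and constraints become guard clauses.
public enum GearPosition { Park, Drive }

public class Ignition
{
    public bool IsActivated { get; private set; }

    public void Activate()
    {
        // Constraint: the Ignition can only be activated when not already started.
        if (IsActivated)
            throw new InvalidOperationException("The Car is already started.");
        IsActivated = true;
    }
}

public class Brake
{
    public bool IsApplied { get; private set; }
    public void Apply() { IsApplied = true; }
    public void Release() { IsApplied = false; }
}

public class Car
{
    // A Car 'has an' Ignition and 'has a' Brake.
    public Ignition Ignition { get; private set; }
    public Brake Brake { get; private set; }
    public GearPosition GearStick { get; set; }

    public Car()
    {
        Ignition = new Ignition();
        Brake = new Brake();
        GearStick = GearPosition.Park;
    }
}
```

<p>
Even at this toy scale, the code forces the questions that analysis should answer: which nouns exist,
how they relate, and which operations are legal when.
</p>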
<h3>Summary</h3>
<p>
Define the nouns, and use them religiously, not just in your use cases, but in every
conversation you have with the customer. Describe the relationships between the
objects in this model, and the constraints. Construct your use cases using these
names as proper nouns, defining the kinds of operations that can be performed.
</p>
<p>
If you do this you'll have analysed the problem, and proposed a solution.
</p>
<p>
Avoid bullet-lists of requirements recorded verbatim. They try to limit scope.
They try to define a contract. In short, they are anti-agile!
</p>
<br>
<p>
<ul>
<li>
[Agile] - "Manifesto for Agile Software Development"
(<a target="_blank" href="http://www.agilemanifesto.org/">http://www.agilemanifesto.org/</a>)
</li>
<li>
[Evans] - "Domain-Driven Design: Tackling Complexity in the Heart of Software", Addison-Wesley, 2003
(<a target="_blank" href="http://www.domaindrivendesign.org/books/index.html">http://www.domaindrivendesign.org/books/index.html</a>)
</li>
</ul>
</p>Richard Brownhttp://www.blogger.com/profile/00271013570243148544noreply@blogger.com0tag:blogger.com,1999:blog-33966934.post-1158042347287155082006-09-12T07:09:00.000+01:002006-10-06T09:27:24.593+01:00To spec or not to spec...<P>For a few years every project I have started has, what some of my colleagues and I have termed, a Technical Architecture Specification. This document sits alongside the standard Requirements/Design Specification, and aims to describe the "how" of the system rather than the "what". It describes the "shape" of the system and is written with express reference to the design patterns used and as such requires a little bit of knowledge to make best use of it.</P>
<p>Unfortunately, though, every Technical Architecture Specification I have ever written has been criticized. Not for its content, I may add, but for the very nature of the document itself. I've been faced with the same arguments time after time: where's the benefit in this document; will the customer understand it; why all these design patterns? So, of late, I've started to have second thoughts as to whether I should produce this document at all. After all, it doesn't really cause me any problems if I don't write it; I know how the system should work!</p>
<p>Thank goodness, then, that I read this blog entry, which helped erase my doubts: (<a href="http://www.ddj.com/blog/architectblog/archives/2006/09/who_needs_a_sof.html">http://www.ddj.com/blog/architectblog/archives/2006/09/who_needs_a_sof.html</a>)</p>Neil Loganhttp://www.blogger.com/profile/04189198374043005196noreply@blogger.com0tag:blogger.com,1999:blog-33966934.post-1157892149267630342006-09-10T13:37:00.000+01:002006-10-04T14:42:29.966+01:00In search of a metaphorSince software development first reared its head way back in the 50s and 60s, the industry has
been in search of a metaphor that fits. <p style="margin-bottom: 0cm;">For years the industry flogged (and continues to flog) the idea that software development is a manufacturing discipline. I, for one, firmly believe that software development is <b><u>NOT</u><span style="text-decoration: none;"> </span></b><span style="text-decoration: none;"><span style="">a manufacturing discipline.</span></span> </p> <p style="margin-bottom: 0cm; text-decoration: none;"> One of my favorite definitions of manufacturing reads as follows:</p> <ul><li><p style="margin-bottom: 0cm; text-decoration: none;"> to produce in a mechanical way without inspiration or originality.</p> </li></ul> <p style="margin-bottom: 0cm; text-decoration: none;"> Software development couldn't be further from this definition if it tried! The development of software requires inspiration and creativity, and the only element of the process that could truly be considered mechanical is the bit where we hit F5, or type nant at the command line, and our design becomes reality.</p> <p style="margin-bottom: 0cm; text-decoration: none;"> Yet people both outside and inside our industry still cling to the idea that software development can be reduced to the mechanical. This idea has seen countless management gurus make a fortune selling methodologies that promise “follow our process and every software development project you undertake will be a success”. </p> <p style="margin-bottom: 0cm; text-decoration: none;"> What the software development industry seems unwilling to accept is that software development is hard. No amount of process or fancy tooling takes much away from this inherent difficulty. Sure, it may become slightly easier, but ultimately, if the developers don't engage their brains, every project will fail. </p> <p style="margin-bottom: 0cm; text-decoration: none;"> Fundamentally, I believe software development is a creative and communicative process.
It requires those involved to think, to generate ideas, and to communicate those ideas to others. </p> <p style="margin-bottom: 0cm; text-decoration: none;"> For too long now, software development has considered itself the younger brother of the engineering and manufacturing disciplines, and has attempted to follow their example. However, when I look at software development today, I cannot help but think it's time we stopped trying to justify our existence by desperately relating ourselves to others.
</p><p style="margin-bottom: 0cm; text-decoration: none;">Software development is starting to grow up, and it's time we followed our own path.</p>Neil Loganhttp://www.blogger.com/profile/04189198374043005196noreply@blogger.com0