Every software project needs to gather requirements, perform analysis and design, implement, and finally deliver a working system.

These days we have a lot of tools to get us through the implementation and delivery, from TDD through to automated installers (e.g., WiX). We have a maturing vocabulary of Design Patterns to aid communication in a design phase. However, analysis is often left as a hand-waving exercise, with less clearly defined tools and methods.

Gathering Requirements

All too often, requirements gathering is merely the verbatim recording of wish lists dictated by the customer. These requirements are often recorded by people with lofty titles such as Business Analyst; ironic, considering they rarely contain much, if any, analysis of the problem.

The requirements lists are usually driven out of a more traditional waterfall-style development, and serve two main purposes:

  • define the scope of the functionality;
  • define a contract.

The first of these points (scope) cannot be defined up-front. Software development is now recognised as new product development, and not a manufacturing process. Requirements change. Always. The way to confront this problem is to gather a few requirements at a time, implement, deliver, and iterate.

The second point is born out of a naïve company's need to be able to prove that what they have delivered is legally acceptable, despite the fact it will invariably not be what the customer wants. The preference should of course be for "customer collaboration over contract negotiation" [Agile].

The interesting question to ask when confronted by one of these bullet-list style requirements documents is "how does this help me solve the problem?" These documents rarely (if ever) contribute to a solution - instead they allow nervous project managers to relax, content in their mistaken belief that the solution can be proven correct.

Turn to an Alternative

So we know to avoid requirements lists. We are educated enough to use a more modern approach to documenting requirements such as use cases, user stories, scenarios, etc. Typically I choose use cases. All we need to do is describe the steps that an actor performs to complete a task, then we have accurately described our requirements, right?

Well, perhaps ... but ... it's still not necessarily analysis.

These could still essentially be verbatim recordings of the customer's description of how a given actor performs a task. But if that's all that is recorded then you've missed a trick - you've missed the opportunity to actually analyse the problem and start down the road to a solution.

Note: I am not suggesting that these use cases are useless, just that they could be so much more. If someone's already done this work then it can still be useful for verifying that delivered software actually solves the problem at hand. However, without analysis, they still don't provide progress to a solution.

Ubiquitous Language

The task of analysis is to define the objects in your proposed model. Rather than defining them and then forgetting them, make them the nouns that you continually use when discussing functionality.

Use these nouns when discussing requirements with the customer. Use them when writing your use cases. Use them in every technical note, and every phone call. In short, use them everywhere; they then become the Ubiquitous Language [Evans] of the system.

Building the analysis model is a creative process, and as such does not have easily defined rules. Often the customer will already have names for items in the problem domain; these might translate directly to objects in the analysis model. Other times the customer might have several different names for related items, and it is the job of the analyst to abstract these to a single object in the analysis model. Often the customer will have concepts that require a new noun to be invented. These are the tools of the analyst: mapping, abstraction, and invention.

In addition to being used, the Ubiquitous Language should be persisted along with any other documentation. It may be defined using a UML class diagram. It may be defined using a dictionary/glossary style document. It may be a combination of a class diagram and supporting text. What is important is that the definitions are concrete, and presentable to the customer.
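For instance, a glossary-style entry might look something like this (a hypothetical sales domain, invented purely for illustration):

```
Order     - a Customer's request for one or more Products; an Order 'has a' Customer.
Product   - a single item offered for sale, identified by a unique Product Code.
Customer  - the person or organisation placing an Order.
```

Each entry names the noun, defines it in the customer's terms, and records its relationships to the other nouns in the model.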

Once the initial model is in place, the use cases can be fleshed out. It is important that the use cases use the language of the analysis model. This starts to extract the actions that can be performed on the objects, and starts to highlight the relationships and constraints between them.

The process of defining the model, the objects and their relationships, the properties of the objects, the actions that can be performed on them, and the constraints on their use - that's analysis.

A Simple Example

Suppose we take a look at the process of hill-starting a car (we'll assume for simplicity it's an automatic).

The use case (recorded verbatim) might read:

  • Actor starts the car;
  • Actor secures the car, and puts it in gear;
  • Actor checks there's nothing coming;
  • Actor allows the car to move off.

Okay, so it's a slightly artificial example - but that's only obvious to us because we know what a car is, and how it works. If we'd never seen or heard of a car before, then we wouldn't know whether these steps read appropriately or not.

This use case describes the process of getting the car moving, but does not bring us any closer to designing a car. Instead we should be analysing the scenario to try and extract the key objects in the domain.

If we start asking questions as to how someone starts the car, we might come up with the idea that a car has a "starting component" - we might even make the jump to calling it an "Ignition". We now have our second object in the domain (the first was a Car), we have a relationship defined (a Car 'has an' Ignition), we might define the actions (activate?), and a constraint (the Ignition can only be activated when the Car is not already started).

We might very quickly end up with a rich new language (Car, Ignition, Brake, Gear Stick, ...), the relationships between them (a Car 'has a' Brake), some constraints (the Gear Stick is in one of park, drive, ...), and the actions that can be performed on them (the Brake can be 'applied'). If we'd never seen a car before, the analysis might need to invent a new name for an item - although we're comfortable with the term nowadays, do you think the first person to hear the pedal called an "Accelerator" thought it was an ideal name? I doubt it, but it has become an accepted part of the Ubiquitous Language of driving a Car.

The use case might now read:

  • Actor activates the Ignition;
  • Actor applies the Brake, and moves the Gear Stick from park to drive;
  • Actor views the Mirrors; (needs an exception scenario for when they're not empty)
  • Actor releases the Brake, and applies the Accelerator.

Notice the nouns are capitalised. Notice there are clearly defined verbs on the nouns. We are now a step closer to building a car; we know what a car has, and can start to design some of its components. We have something that is giving us a step towards a solution.
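As a sketch, the model that falls out of this analysis translates almost directly into code. The following is illustrative only (Python here purely for illustration; all the class and method names are inventions derived from the example above, and a real Car model would of course be richer):

```python
class Ignition:
    """The 'starting component' of a Car (a name invented during analysis)."""

    def __init__(self):
        self.active = False

    def activate(self):
        # Constraint: the Ignition can only be activated when the
        # Car is not already started.
        if self.active:
            raise RuntimeError("the Car is already started")
        self.active = True


class Brake:
    def __init__(self):
        self.applied = True  # assume the Car begins secured

    def apply(self):
        self.applied = True

    def release(self):
        self.applied = False


class GearStick:
    # Constraint: the Gear Stick is in exactly one of a fixed set of positions.
    POSITIONS = ("park", "reverse", "neutral", "drive")

    def __init__(self):
        self.position = "park"

    def move_to(self, position):
        if position not in self.POSITIONS:
            raise ValueError("unknown Gear Stick position: %s" % position)
        self.position = position


class Car:
    # Relationships: a Car 'has an' Ignition, 'has a' Brake, 'has a' Gear Stick.
    def __init__(self):
        self.ignition = Ignition()
        self.brake = Brake()
        self.gear_stick = GearStick()


# The hill-start use case, step by step:
car = Car()
car.ignition.activate()          # Actor activates the Ignition
car.brake.apply()                # Actor applies the Brake...
car.gear_stick.move_to("drive")  # ...and moves the Gear Stick from park to drive
# (viewing the Mirrors is omitted from this sketch)
car.brake.release()              # Actor releases the Brake
```

Notice how the nouns become classes, the verbs become methods, and the constraints become preconditions - the use case now reads as a sequence of operations on the model.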


Define the nouns, and use them religiously, not just in your use cases, but in every conversation you have with the customer. Describe the relationships between the objects in this model, and the constraints. Construct your use cases using these names as proper nouns, defining the kinds of operations that can be performed.

If you do this you'll have analysed the problem, and proposed a solution.

Avoid bullet-lists of requirements recorded verbatim. They try to limit scope. They try to define a contract. In short, they are anti-agile!


For a few years, every project I have started has had what some of my colleagues and I have termed a Technical Architecture Specification. This document sits alongside the standard Requirements/Design Specification, and aims to describe the "how" of the system rather than the "what". It describes the "shape" of the system and is written with express reference to the design patterns used, and as such requires a little background knowledge to make best use of it.

Unfortunately, though, every Technical Architecture Specification I have ever written has been criticised. Not for its content, I may add, but for the very nature of the document itself. I've been faced with the same arguments time after time: where's the benefit in this document? Will the customer understand it? Why all these design patterns? So, of late, I've started to have second thoughts as to whether I should produce this document at all. After all, it doesn't really cause me any problems if I don't write it; I know how the system should work!

Thank goodness, then, that I read this blog entry to help erase my doubts.


Since software development first reared its head way back in the 50s and 60s, the industry has been in search of a metaphor that fits.

For years the industry flogged (and continues to flog) the idea that software development is a manufacturing discipline. I, for one, firmly believe that software development is NOT a manufacturing discipline.

One of my favorite definitions of manufacturing reads as follows:

  • to produce in a mechanical way without inspiration or originality.

Software development couldn't be further from this definition if it tried! The development of software requires inspiration and creativity, and the only element of the process that could truly be considered mechanical is the bit where we hit F5 or type nant at the command line and our design becomes reality.

Yet people both outside and inside our industry still cling to the idea that software development can be reduced to the mechanical. This idea has seen countless management gurus make a fortune selling methodologies that promise “follow our process and every software development project you undertake will be a success”.

What the software development industry seems unwilling to accept is that software development is hard. No amount of process or fancy tooling takes much away from this inherent difficulty. Sure, it may become slightly easier, but ultimately, if the developers don't engage their brains and work, every project will fail.

Fundamentally I believe software development is a creative and communicative process. It requires those involved to think, generate ideas and communicate those ideas to others.

For too long now, software development has considered itself the younger brother of the engineering and manufacturing disciplines, and has attempted to follow their examples. However, when I look at software development today, I cannot help but think it's time we stopped trying to justify our existence by desperately relating ourselves to others.

Software development is starting to grow up, and it's time we followed our own path.
