Archive for the 'Software Development Lifecycle' Category

Importance of web page look ‘n’ feel

The look ‘n’ feel of your website is important, but it is less important than the text-based content. In most commercial websites the role of the traditional graphic designer is relatively minor; the role of the information architect is central.

This article focuses on look and feel.

  • “To look good is to be good - that’s the primary test when people assess a Web site’s credibility” – B.J. Fogg, Ph.D (Stanford University 2002) [link]
  • “Uniformity is an inherent part of a usable web site design” – Sigma Infotech [link]
  • “Complex and beautiful may win awards, but ugly and simple might just win the marathon.” – Gerry McGovern [link]
  • “Consistency is one of the most powerful usability principles”, “users spend most of their time on other websites.” – Jakob Nielsen [link]
Figure 1 – Scott Adams ‘Dilbert’ on web design (lifted from here)


Do:

  • Ensure page layout and content style are part of the design
  • Decide on tone, phrasing and naming conventions for all language used on the site
  • Decide on the page flow and use the same flow for all pages
  • Template as much of the layout as possible (e.g. Master pages)
  • Use cascading style sheets (CSS)
  • Create reusable page components (e.g. User Controls / Server Controls)
  • Seek the advice of an imagery expert when using graphics / icons


Don’t:

  • Design as you go
  • Implement each page with no regard to how other aspects of the application work
  • Recreate components that have already been written for other parts of the application
  • Use inline styles (unless there is a good reason)
  • Confuse the user with poor use of language / symbols
  • Resize, stretch, crop or distort images when displaying them as part of your application (unless this is the purpose of the application)

There are several other key elements that shouldn’t be neglected in the design phase of a project.

  • Ensure consistent feedback is given to the user (in terms of error and success messages)
  • Adopt the keep it simple, stupid (KISS) approach to design
  • Ask non-developers to test your application – usable web pages don’t require a manual to operate them
  • If you need to use a picture, get it sized and formatted for web site usage

Further reading:
9 Essential Principles for Good Web Design


What are managers/leaders?


Figure 1 – Toilet, shamelessly stolen from Tame the bear

Okay, so why are we looking at a picture of a toilet? Well, it is quite simple.

Think about your organisation.

How cost effective are toilets in your organisation?

The answer is – extremely cost effective! Yes, there is legislation that ensures your employer provides such bathroom facilities, but imagine your office/building without toilets. You’d need to go home every few hours, or walk to a public facility… this certainly won’t help productivity!

Toilets provide a valued service, although few of us, apart from those in the facilities management trade, ever think about it!

Turning our attentions back to Managers…

How cost effective are managers in your organisation?

It is not something you can easily measure as they don’t necessarily produce any tangible products. Do managers provide a service? Yes… they provide a service to their team.

It is important to realise that, although the members of a team may appear on an organisational chart to work for the manager, it is more realistic to suggest that the manager works for the team.

The other important axis of management – leadership.

Leadership is a skill that excellent managers possess. Leadership is not about counting beans, measuring performance and chairing meetings.

Leadership is about:

  • communicating a shared goal or vision to the team
  • motivating the team
  • ensuring the team has the resources to achieve its goal

Just a little taster of what I’ve been learning over the last few weeks.


Use Cases: estimating effort, without using a finger in the air

I’ve been writing over the last month or so about design and processes.

It has been a big challenge to find a suitable development process for some of our larger high risk projects. After a long discussion with the key stakeholders, it was agreed that it would be a good idea to roll out a process that can be tailored rather than a mandated rigid process. Such a process can then be flexible enough to keep most of the people happy. It can also be adopted at different stages in the process, so even if you are about to start development and haven’t applied it to your requirements elicitation phase, you should be able to make it work.


Surprisingly, the standard/process/convention (for want of a better name) that was decided on was OpenUP… for those that don’t know much about it, please do go and research its background in more detail than I am about to give here. Essentially it is the ‘open source’ version of the Rational Unified Process (RUP).

It is a truly tremendous leap for my organisation to take, as the waterfall model seems to be the only one our ancient organisational processes seem to work with. My team have been using iterative approaches for many years, but it has been largely against the grain… some people still build software in the same way they’d build an oil rig :-)

Figure 1: The OpenUP process in a nutshell

The main reason for utilising this process is its flexibility, and its lean approach to documentation. Its ethos is: if you think what you are about to document is not going to be read by anyone, don’t write it!

I am also a fan of its ‘code to quality’ ethos. This would certainly have helped us out of many a hole in previous projects where we didn’t apply OpenUP. Coding to quality means that no matter what you build, be it a small scoping study or prototype, you build it to the same standard you would build the final product. This means that you don’t have to spend a vast amount of time refactoring your starting block. Of course, it will be commented and written to a defined coding standard, so the person who wrote it first won’t become a critical resource on the project.

The more you read about OpenUP, the more you’ll probably say – “Don’t we already do iterative processes?”, or “We do that at the moment, it’s just that we don’t document it at all/in the same way”. If you are saying those things, it bodes well for you, as adoption will be easy.

OpenUP also talks about Use Cases. As an organisation we’ve been using Enterprise Architect for many years to design them… but we’ve never used them to come up with estimates of how much effort would be required to produce (design, implement, test, maintain) them. This brings me onto the topic of this post (sorry for the long preamble).

Using use cases to inform effort

First things first – this method will only work well if:

  1. Use cases are produced in the requirements capture/design phase of your project (if you don’t have any use cases, you can’t do much with this method :))
  2. You use use cases in the ‘traditional’ way, i.e. you are modelling user scenarios [user goal-level] (e.g. user clicks button, system displays dialog etc.) and not system scenarios (system processes batch b and passes result to class a, class a performs operation d, system calculates x) [system goal-level]. A good example is on Mike Cohn’s use case estimation page.

The weight (or complexity) of a use case is determined by the number of different use case transactions in the interaction between the actor and the system to be built.

According to Jacobson’s use case points method [1], the criteria to assign a weight to a use case are*:

  • Simple use case - 1 to 3 transactions, weight = 5
  • Average use case - 4 to 7 transactions, weight = 10
  • Complex use case - more than 7 transactions, weight = 15
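As a minimal sketch (the function name is my own), the weighting criteria above could be expressed as:

```python
def use_case_weight(transactions: int) -> int:
    """Weight a use case by its transaction count, per Jacobson's
    use case points criteria [1]."""
    if transactions <= 3:
        return 5   # simple use case (1 to 3 transactions)
    if transactions <= 7:
        return 10  # average use case (4 to 7 transactions)
    return 15      # complex use case (more than 7 transactions)
```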

The same can be applied to your actors in the system. There is no hard and fast way to assess an actor (that I’m currently aware of) so you need to use judgement.

  • Simple – e.g. Another system through an API, weight = 1
  • Average- e.g. Another system through a protocol or A person through a text-based user interface, weight = 2
  • Complex – e.g. A person through a graphical user interface, weight = 3

Figure 2: How use case effort estimation works [1] 

*I would argue that the three levels of granularity are not enough if you find that your use cases are too low level. In that case you could use judgement and assign a weight between 1 and 15 based on the number of transactions. This would mean, for example, that you could assign the dead easy ones a weighting of 2, and the medium complexity ones 13.

Okay, so what is a transaction in this context?

A use case transaction is a round trip from the actor’s stimulus to the system’s response. The best guide on this is Remi-Armand and Eef’s article [3]. It is important to clearly understand this, as it is an important step in estimating the effort of your use cases.

How does it work?

Once you’ve assessed the complexity of your use cases you end up with your Unadjusted Use Case Weight (UUCW):

Use case complexity   Weight   Number of use cases   Product (Weight * #Use Cases)
Simple                5        10                    50
Average               10       13                    130
Complex               15       6                     90
                               TOTAL                 270

Table 1: An example table of use case weights/products/totals

You now need to assess the complexity of your actors in the system. You will then end up with your Unadjusted Actor Weight (UAW):

Actor complexity   Weight   Number of actors   Product (Weight * #Actors)
Simple             1        3                  3
Average            2        2                  4
Complex            3        1                  3
                            TOTAL              10

Table 2: An example table of actors weights/products/totals

Now you can work out the Unadjusted Use Case Points (UUCP) for your project, by adding the two totals together:

  • UUCP = UUCW + UAW
  • 280 = 270 + 10
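The calculation so far can be sketched in a few lines, using the counts from the two example tables:

```python
# Each entry is (weight, number of items in that complexity band),
# taken from the example Tables 1 and 2.
use_cases = [(5, 10), (10, 13), (15, 6)]  # simple, average, complex
actors = [(1, 3), (2, 2), (3, 1)]         # simple, average, complex

uucw = sum(w * n for w, n in use_cases)  # Unadjusted Use Case Weight
uaw = sum(w * n for w, n in actors)      # Unadjusted Actor Weight
uucp = uucw + uaw                        # Unadjusted Use Case Points
print(uucw, uaw, uucp)  # 270 10 280
```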

The black art of the metric is the understanding of the next process. The total effort to develop a system is influenced by factors beyond the collection of use cases that describe the functionality of the intended system, therefore it is necessary to adjust the UUCP by the technical and environmental complexity. This is essentially how this particular method models the real world.

Technical Complexity

Factor                       Weight
Distributed system           2
Performance objectives       1
End-user efficiency          1
Complex processing           1
Reusable code                1
Easy to install              0.5
Easy to use                  0.5
Portability                  2
Easy to change               1
Concurrent use               1
Security features            1
Access for third parties     1
Training needs               1

Table 3: Technical Complexity Weights [1]

Each factor is assigned a value between 0 and 5 depending on its assumed influence on the project.

  • 0 means no influence
  • 3 is average influence
  • 5 is large influence

The technical complexity factor (TCF) is calculated by multiplying the value (the influence) of each factor in Table 3 by its weight, and then adding all these products to get the sum called the TFactor. Finally, the following formula is applied:

TCF = 0.6 + (0.01 * TFactor)
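A minimal sketch of that formula. The influence ratings below are entirely made up for illustration (I haven’t shown my project’s workings); the weights follow Table 3’s order:

```python
def tcf(ratings):
    """Technical Complexity Factor from (weight, influence) pairs,
    where influence is the 0-5 rating assigned to each factor."""
    tfactor = sum(weight * influence for weight, influence in ratings)
    return 0.6 + 0.01 * tfactor

# Hypothetical influence ratings (0-5), one per factor in Table 3's order.
example = [(2, 3), (1, 4), (1, 3), (1, 4), (1, 3), (0.5, 2), (0.5, 4),
           (2, 0), (1, 3), (1, 3), (1, 2), (1, 1), (1, 2)]
# TFactor = 34, so TCF = 0.6 + 0.34 = 0.94 for this made-up project
```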

Environmental Complexity

Factor                                  Weight
Familiar with the development process   1.5
Application experience                  0.5
Object-oriented experience              1
Lead analyst capability                 0.5
Motivation                              1
Stable requirements                     2
Part-time staff                         -1
Difficult programming language          -1

Table 4: Environmental Complexity Weights [1]

Each environmental factor is assigned a value between 0 and 5 depending on its assumed impact on the project.

  • 0 means no impact
  • 3 is average impact
  • 5 is large impact

The environmental factor (EF) is calculated similarly, by multiplying the value (the impact) of each factor in Table 4 by its weight and adding all the products to get the sum called the EFactor. The following formula is applied:

EF = 1.4 + (-0.03 * EFactor)
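The same sketch for the environmental side, again with made-up impact ratings purely for illustration (weights in Table 4’s order; note that part-time staff and a difficult language carry negative weights, so they pull the EF up and the estimate with it):

```python
def ef(ratings):
    """Environmental Factor from (weight, impact) pairs,
    where impact is the 0-5 rating assigned to each factor."""
    efactor = sum(weight * impact for weight, impact in ratings)
    return 1.4 - 0.03 * efactor  # i.e. 1.4 + (-0.03 * EFactor)

# Hypothetical impact ratings (0-5), one per factor in Table 4's order.
example = [(1.5, 4), (0.5, 3), (1, 4), (0.5, 3),
           (1, 4), (2, 3), (-1, 1), (-1, 1)]
# EFactor = 21, so EF = 1.4 - 0.63 = 0.77 for this made-up project
```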

So… what do I get from all that?

You need to calculate the final (adjusted) Use Case Points (UCP) total, which will inform the effort required (with a little tuning).

This is done with the following formula:

  • UCP = UUCP * TCF * EF
  • 378 = 280 * 1.5 * 0.9

NB: I’ve calculated 1.5 for TCF and 0.9 for EF but not shown the workings as part of this article.

Great, so how do I know the effort?

Another piece of black art ensues. You need to put a figure on how many hours your team would take to design, implement and test one use case point. It really is *that* simple. Multiply the Use Case Points by that figure and you have your very rough estimate.

  • Estimated Effort = UCP * #hours to implement one use case point
  • 3780 hours = 378 * 10
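Putting it all together with the figures from this worked example (TCF and EF taken as stated, since their workings aren’t shown here):

```python
# End-to-end use case points calculation for the worked example.
uucp = 270 + 10               # UUCW + UAW = 280
tcf_value, ef_value = 1.5, 0.9
ucp = uucp * tcf_value * ef_value  # 280 * 1.5 * 0.9 = 378

hours_per_point = 10          # team-specific figure, tuned from past projects
estimated_effort = ucp * hours_per_point  # 3780 hours
```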

If you look at my references, you can do some further reading on how you should tune this ‘number of hours for each use case point’ figure based on past projects.

Figure 3: The cone of uncertainty

Don’t forget the dreaded cone of uncertainty… estimates done at the beginning of the project are destined to be out. As my organisation adopts OpenUP, we can go back and rework our estimates to reflect reality with each iteration. I hope this will prove most useful… but the jury is out :-)



  1. Jacobson, Ivar et al., Object-Oriented Software Engineering. A Use Case Driven Approach, revised printing, Addison-Wesley 1993.
  2. Cockburn, Alistair, Writing Effective Use Cases, Addison-Wesley, 2001.
  3. Remi-Armand Collaris, Eef Dekker, Software cost estimation using use case points: Getting use case transactions straight, IBM 2009
  4. Mike Cohn, Estimating With Use Case Points, Mountain Goat Software 2005 (published in Methods & Tools magazine, a free global software development resource).

Why can’t/don’t software developers design?

It’s nothing new… you’ve got a team of excellent software developers, but when you try to put anything on paper, they scurry out of the room as if they’d been promised extra-strength coffee somewhere else.

This leads me to point out the following articles:

Enjoy :-)


Software Development Lifecycle - which one do you choose, and why?

Having spent most of this afternoon racking my brains for some sort of holy grail of software development lifecycles that will ensure success in every possible scenario… I quickly remembered my Software Design lecturer’s words…

"If you can read, understand and interpret the ISO 12207 (International Standard of Software Lifecycle Processes) document then you are well on your way to being as confused about things as everyone else seems to be"

The document, unless you want to read all 18 pages of it, is basically an overview of what needs to happen in a Software Development Lifecycle… My favourite quote from it is:

"The standard is flexible and usable with: any life cycle model (such as, Waterfall, incremental, evolutionary, Spiral, or other); any software engineering method (object-oriented design, structured coding, top-down testing, or other); or any programming language (Ada, assembly, machine, or other). These are very much dependent upon the software project and state-of-the-technology, and their selection is left to the user of the standard."

Okay, so the joke is on me… the document doesn’t really give me anything that I didn’t already know. As a software team leader or software project manager it is down to you to make the difficult decision as to which approach to adopt.

Right, so I reckon you can decide based on:

  1. Software Engineering Method (OO, Structured etc.)
  2. State-of-the-technology used on the project (e.g. workflow technologies, AJAX, C++)
  3. Past experience
  4. Company ethos
  5. Delivery timescale

1 and 2 are basically driven by your user and system requirements… as you can’t build what you want in any way you like, you need to pick a selection of technologies that can deliver your project in the required timescale. Well, I guess you could do it any way you liked if it was a student project, if you had a ridiculously big budget, or if it was part of the project to work out the most effective solution in every conceivable technology.

3 only applies if you’ve ‘been there and got the T-shirt’… It won’t give you much if all your projects have been run in the same way, as you don’t have the visibility of other methodologies… so more than likely you’ll adapt your current way of thinking and tweak it to improve efficiency. This is not necessarily a bad thing.

4 is more down to the organisation you work in… are there rigid corporate standards to adhere to? … do you have to talk to twenty people to get a slight change to policy approved? Can you really decide how you wish to develop your product? If you can… then if it all goes well, a hero you will become… I hate to think about the price of failure when a new development strategy is adopted.

5 is really important, and I think it is not really considered in enough detail… so you want to adopt a certain development strategy… how does this impact your development team, and more importantly your stakeholders… does your customer insist that you follow a certain strategy? are you tied down to a contract that means you cannot perform a true iterative approach?

Hmm… so all I’ve done so far is to help stir up the murky waters a little more.

Is change for the point of change a bad thing?

The development strategies under consideration (for the purposes of this argument) are:

  1. Waterfall, and for those in a comedy mood: waterfall 2006 conference :-D
  2. Agile (e.g. Scrum, Lean etc.) … I’m also going to group iterative or incremental development into this category
  3. Automated Software Synthesis - you need ACM access to read the full article, but you can search for free results :-)

The waterfall model has many critics… many of the criticisms are valid; however, don’t be too quick to ignore what it has to offer. If you know what you are building, and the requirements are 90% stable, then it might be exactly what you are looking for. In terms of fixed-price contracts, I think the waterfall model has a great deal to offer: the future can be planned, and progress can be measured in deliverables rather than other, intangible means.

Agile development is (IMHO) the new word for a structured, defined and industry-standard form of iterative or incremental software development. To be honest with you, if it wasn’t for reading Andrew Woodward’s blog, I wouldn’t have had the foggiest idea about what it could offer. I also cast an eye over Hillel Glazer’s Agile CMMi blog… since my organisation is a CMMi level 3 operation now :-). Where to begin with Agile… well, it is a daunting task for any PM or Software Team Lead to take in all the information I’ve just been through… and getting your development team to adopt a new way of thinking will be just as daunting as the learning.

"Although there are some evidences that CMMI and Agile can coexist, the overall impression of people dealing with process improvement is that there are still important cultural differences between the two communities" - from CMMI: Less Hyped Than Agile but Equally Popular

Oh boy have you got an uphill battle if you, like me, are in a CMMi ordained organisation… or perhaps not, it depends on how you approach the change.

Automated Software Synthesis is something I had never considered until I utilised K2 [blackpearl] and K2 [blackpoint]. Essentially your design when using those particular technologies is a high level business process that you wish to implement. You can’t really go about doing a detailed UML model of how each of the classes in the system will work - because it engineers itself through the tools provided, be it K2 Studio or Visual Studio. Yes you can write your own code, but it forms a small part of your development if you can utilise the technology to your advantage. You still need to gather requirements though… which is really important! Other such examples include rapid application development tools that can write your code based on a detailed design… I’ve got no experience of these.

I’m still none the wiser, and the deadline looms in the distance… and upper management are after those darn estimates.

…. SpittingCAML
