
Use Cases: estimating effort, without using a finger in the air

I’ve been writing over the last month or so about design and processes.

It has been a big challenge to find a suitable development process for some of our larger, higher-risk projects. After a long discussion with the key stakeholders, it was agreed that it would be better to roll out a process that can be tailored rather than to mandate a rigid one. A tailorable process is flexible enough to keep most people happy, and it can be adopted at different stages: even if you are about to start development and haven’t applied it to your requirements elicitation phase, you should still be able to make it work.

OpenUP

Surprisingly, the standard/process/convention (for want of a better name) that was decided on was OpenUP… for those that don’t know much about it, please do go and research its background in more detail than I am about to give here. Essentially it is the ‘open source’ version of the Rational Unified Process (RUP).

It is a truly tremendous leap for my organisation to take, as the waterfall model seems to be the only one our ancient organisational processes can cope with. My team have been using iterative approaches for many years, but it has largely been against the grain… some people still build software the same way they’d build an oil rig :-)

Figure 1: The OpenUP process in a nutshell

The main reason for adopting this process is its flexibility and its lean approach to documentation; its ethos is that if you think what you are about to document is not going to be read by anyone, don’t write it!

I am also a fan of its ‘code to quality’ ethos. This would certainly have helped us out of many a hole on previous projects where we didn’t apply OpenUP. Coding to quality means that no matter what you build, be it a small scoping study or a prototype, you build it to the same standard as the final product. This means you don’t have to spend a vast amount of time refactoring your starting block. Of course, it will be commented and written to a defined coding standard, so the person who wrote it first won’t become a critical resource on the project.

The more you read about OpenUP, the more you’ll probably say – “Don’t we already do iterative processes?”, or “We do that at the moment, it’s just that we don’t document it at all/in the same way”. If you are saying those things, it bodes well for you, as adoption will be easy.

OpenUP also talks about Use Cases. As an organisation we’ve been using Enterprise Architect for many years to design them… but we’ve never used them to come up with estimates of how much effort would be required to produce (design, implement, test, maintain) them. This brings me onto the topic of this post (sorry for the long preamble).

Using use cases to inform effort

First things first – this method will only work well if:

  1. Use cases are produced in the requirements capture/design phase of your project (if you don’t have any use cases, you can’t do much with this method :))
  2. You use use cases in the ‘traditional’ way, i.e. you are modelling user scenarios [user goal-level] (e.g. user clicks button, system displays dialog, etc.) and not system scenarios [system goal-level] (system processes batch b and passes result to class a, class a performs operation d, system calculates x). A good example is on Mike Cohn’s use case estimation page [4].

The weight (or complexity) of a use case is determined by the number of different use case transactions in the interaction between the actor and the system to be built.

According to Jacobson’s use case points method [1], the criteria to assign a weight to a use case are*:

  • Simple use case - 1 to 3 transactions, weight = 5
  • Average use case - 4 to 7 transactions, weight = 10
  • Complex use case - more than 7 transactions, weight = 15
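
To make the mapping from transactions to weight concrete, here is a minimal sketch in Python (the language and the function name are my own choices, not part of the method):

    def use_case_weight(transactions):
        """Map a use case's transaction count to its weight [1]."""
        if transactions <= 3:
            return 5    # simple use case
        elif transactions <= 7:
            return 10   # average use case
        else:
            return 15   # complex use case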

The same can be applied to your actors in the system. There is no hard and fast way to assess an actor (that I’m currently aware of) so you need to use judgement.

  • Simple – e.g. another system through an API, weight = 1
  • Average – e.g. another system through a protocol, or a person through a text-based user interface, weight = 2
  • Complex – e.g. a person through a graphical user interface, weight = 3
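
Actors can be handled the same way; a small illustrative sketch (the category labels are my own, not a standard vocabulary):

    # Actor weights, keyed by how the actor interacts with the system
    ACTOR_WEIGHTS = {
        "api": 1,       # simple: another system through an API
        "protocol": 2,  # average: another system through a protocol, or a person via a text UI
        "gui": 3,       # complex: a person through a graphical user interface
    }

    def actor_weight(kind):
        """Look up the weight for an actor category."""
        return ACTOR_WEIGHTS[kind]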

Figure 2: How use case effort estimation works [1]

*I would argue that the three levels of granularity are not enough if you find that your use cases are too low level, in which case you could use judgement and assign a weight between 1 and 15 based on the number of transactions. This would mean, for example, that you could assign the dead easy ones a weighting of 2 and the medium-complexity ones 13.

Okay, so what is a transaction in this context?

A use case transaction is a round trip between the actor and the system. The best guide on this is Remi-Armand and Eef’s article [3]. It is worth understanding transactions clearly, as counting them correctly is a key step in estimating the effort of your use cases.

How does it work?

Once you’ve assessed the complexity of your use cases you end up with your Unadjusted Use Case Weight (UUCW):

Use case complexity | Weight | Number of use cases | Product (Weight * #Use Cases)
Simple              |   5    |         10          |     50
Average             |  10    |         13          |    130
Complex             |  15    |          6          |     90
TOTAL               |        |                     |    270

Table 1: An example table of use case weights/products/totals

You now need to assess the complexity of your actors in the system. You will then end up with your Unadjusted Actor Weight (UAW):

Actor complexity | Weight | Number of actors | Product (Weight * #Actors)
Simple           |   1    |        3         |     3
Average          |   2    |        2         |     4
Complex          |   3    |        1         |     3
TOTAL            |        |                  |    10

Table 2: An example table of actor weights/products/totals

Now you can work out the Unadjusted Use Case Points (UUCP) for your project. This is calculated as follows:

  • UUCP = UUCW + UAW
  • 280 = 270 + 10
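
As a quick sanity check, here is a sketch that reproduces the worked example from Tables 1 and 2 and the UUCP sum above (the counts are just the example values, not a recommendation):

    # (weight, number of use cases) pairs from Table 1
    use_case_rows = [(5, 10), (10, 13), (15, 6)]
    uucw = sum(weight * count for weight, count in use_case_rows)   # 270

    # (weight, number of actors) pairs from Table 2
    actor_rows = [(1, 3), (2, 2), (3, 1)]
    uaw = sum(weight * count for weight, count in actor_rows)       # 10

    uucp = uucw + uaw                                               # 280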

The black art of the metric lies in the next step. The total effort to develop a system is influenced by factors beyond the collection of use cases that describe the functionality of the intended system, so it is necessary to adjust the UUCP for technical and environmental complexity. This is essentially how this particular method models the real world.

Technical Complexity

Factor | Description               | Weight
T1     | Distributed system        | 2
T2     | Performance objectives    | 2
T3     | End-user efficiency       | 1
T4     | Complex processing        | 1
T5     | Reusable code             | 1
T6     | Easy to install           | 0.5
T7     | Easy to use               | 0.5
T8     | Portable                  | 2
T9     | Easy to change            | 1
T10    | Concurrent use            | 1
T11    | Security                  | 1
T12    | Access for third parties  | 1
T13    | Training needs            | 1

Table 3: Technical Complexity Weights [1]

Each factor is assigned a value between 0 and 5 depending on its assumed influence on the project.

  • 0 means no influence
  • 3 is average influence
  • 5 is large influence

The technical complexity factor (TCF) is calculated by multiplying the value (the influence) of each factor in Table 3 by its weight and then adding all of these products to get a sum called the TFactor. Finally, the following formula is applied:

TCF = 0.6 + (0.01 * TFactor)
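
Assuming you have scored each of T1–T13 between 0 and 5, the calculation can be sketched as follows (the dictionary layout and function name are mine):

    # Technical factor weights from Table 3
    TECH_WEIGHTS = {
        "T1": 2, "T2": 2, "T3": 1, "T4": 1, "T5": 1, "T6": 0.5, "T7": 0.5,
        "T8": 2, "T9": 1, "T10": 1, "T11": 1, "T12": 1, "T13": 1,
    }

    def tcf(influences):
        """Technical Complexity Factor from a dict of 0-5 influence scores."""
        tfactor = sum(weight * influences.get(factor, 0)
                      for factor, weight in TECH_WEIGHTS.items())
        return 0.6 + 0.01 * tfactor

For example, scoring every factor at 3 (average influence) gives a TFactor of 45 and a TCF of 1.05.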

Environmental Complexity

Factor | Description                           | Weight
E1     | Familiar with the development process | 1.5
E2     | Application experience                | 0.5
E3     | Object-oriented experience            | 1
E4     | Lead analyst capability               | 0.5
E5     | Motivation                            | 1
E6     | Stable requirements                   | 2
E7     | Part-time staff                       | -1
E8     | Difficult programming language        | -1

Table 4: Environmental Complexity Weights [1]

Each environmental factor is assigned a value between 0 and 5 depending on its assumed impact on the project.

  • 0 means no impact
  • 3 is average impact
  • 5 is large impact

The environmental factor (EF) is calculated in the same way: multiply the value (the impact) of each factor in Table 4 by its weight and add all the products to get a sum called the EFactor. The following formula is applied:

EF = 1.4 + (-0.03 * EFactor)
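
The same shape of sketch works for the environmental factors (again, the names are mine):

    # Environmental factor weights from Table 4
    ENV_WEIGHTS = {
        "E1": 1.5, "E2": 0.5, "E3": 1,  "E4": 0.5,
        "E5": 1,   "E6": 2,   "E7": -1, "E8": -1,
    }

    def ef(impacts):
        """Environmental Factor from a dict of 0-5 impact scores."""
        efactor = sum(weight * impacts.get(factor, 0)
                      for factor, weight in ENV_WEIGHTS.items())
        return 1.4 + (-0.03 * efactor)

Note that E7 and E8 carry negative weights, so high scores for part-time staff or a difficult programming language push the EF, and therefore the estimate, up.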

So… what do I get from all that?

You need to calculate the final (adjusted) Use Case Points (UCP) total, which will inform the effort required (with a little tuning).

This is done with the following formula:

  • UCP = UUCP * TCF * EF
  • 378 = 280 * 1.5 * 0.9

NB: I’ve calculated 1.5 for TCF and 0.9 for EF but not shown the workings as part of this article.

Great, so how do I know the effort?

Another piece of black art ensues. You need to put a figure on how many hours your team would take to design, implement and test one use case point. It really is *that* simple. Multiply the Use Case Points by that figure and you have your very rough estimate.

  • Estimated Effort = UCP * #hours to implement one use case point
  • 3780 hours = 378 * 10
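
Pulling it all together, here is a rough end-to-end sketch using the example figures from this post (the 10 hours per use case point is purely illustrative and must be tuned to your own team):

    uucp = 280        # UUCW (270) + UAW (10) from the example tables
    tcf = 1.5         # example TCF from this post (workings not shown)
    ef = 0.9          # example EF from this post (workings not shown)

    ucp = uucp * tcf * ef                      # 378.0
    hours_per_ucp = 10                         # team-specific productivity figure
    estimated_effort = ucp * hours_per_ucp     # 3780.0 hours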

If you look at my references, you can do some further reading on how you should tune this ‘number of hours for each use case point’ figure based on past projects.

Figure 3: The cone of uncertainty

Don’t forget the dreaded cone of uncertainty… estimates done at the beginning of the project are destined to be out. As my organisation adopts OpenUP, we can go back and rework our estimates to reflect reality with each iteration. I hope this will prove most useful… but the jury is out :-)

SpittingCAML

References

  1. Jacobson, Ivar et al., Object-Oriented Software Engineering: A Use Case Driven Approach, revised printing, Addison-Wesley, 1993.
  2. Cockburn, Alistair, Writing Effective Use Cases, Addison-Wesley, 2001.
  3. Collaris, Remi-Armand and Dekker, Eef, Software cost estimation using use case points: Getting use case transactions straight, IBM, 2009.
  4. Cohn, Mike, Estimating With Use Case Points, Mountain Goat Software, 2005 (published in Methods & Tools magazine, a free global software development resource).


