Archive for March, 2009

InfoPath and SharePoint versus ASP.NET and a Traditional Database versus ASP.NET using SharePoint as a database technology

I was recently asked by a colleague

“I’ve got to build a new application to support x (an anonymous set of requirements that I cannot divulge here!), I’ve not got long to do it, and my developer resources are thin on the ground. I’ve heard you talk about SharePoint and InfoPath, and need to call on your experience, do you think I could develop my application using those two technologies? It requires a complex interface layer and needs to be able to provide neat looking reports.”

Okay, I said, I’ll give you my experiences in the form of some potential solutions and their pros and cons. I realise that by posting this I’m likely to anger the gods and provoke some really large debate… but that was my plan all along :-)

 

So your decision is basically between three development strategies/options:

  1. InfoPath and SharePoint 2007 (MOSS)
  2. ASP.NET and MOSS
  3. ASP.NET and SQL Server 2005

This means the first step is to consider the requirements for the interface layer (IL)… ask yourself: will the user want to do anything fancy on the front end? e.g. sorting data grids, combo boxes, or interfacing with an external system. If the answer is yes, then you’ll probably want to consider an ASP.NET front end.

If the user really only requires a simple form, then InfoPath is a good choice for the IL… but to make the waters even murkier you’ll need to consider the storage/reporting requirements, as InfoPath on its own will only offer XML-based storage, either on disk, in email or in a SharePoint forms library. ASP.NET forms are more flexible and enable you to store the data in a SharePoint list, a database or, if you really wanted, an XML file.

InfoPath pros and cons
Pros

  • Forms can be produced by pretty much anyone with no training
  • Simple to build prototypes (quick and cheap)
  • Easy for users to use and understand
  • Allows offline editing (by saving the form to local hard drive)
  • Doesn’t need to be designed in detail before development can be started

Cons

  • Which version of InfoPath does your corporate desktop/laptop build support? InfoPath 2003 is getting a little tired now (this means it’s old, won’t support newer controls, and will limit the ‘code behind’ that you can produce)
  • InfoPath does not allow you to build flexible, custom interfaces
  • Can’t reuse rules from other forms without having to recreate them
  • Rules are difficult to navigate/debug
  • Difficult to migrate (without reworking the forms)
  • If used in conjunction with a SharePoint form library, the coupling is very tight, so if you move or rename the site you might have to alter the form

ASP.NET pros and cons
Pros

  • Can do whatever you like (within reason) as you have access to .NET 3.5. [this includes things like sending email etc.]
  • Can produce flexible interfaces
  • Easy to debug using Visual Studio
  • Can reuse code and layouts using classes and master pages
  • Can interface with SharePoint, SQL Server, Oracle, XML and lots of other ODBC compliant technologies

Cons

  • Requires that the developers have ASP.NET training
  • Prototypes take longer to build than in InfoPath
  • Does not allow offline use without extensive development of a side-by-side offline system
  • Users may require training if something is ‘specialised’
  • You need to design the pages (if you want a sensible solution)

You can also have a read of my blog post: http://blog.mgallen.com/?p=206, where I link to Jason Apergis’ blog; he explains the pros and cons in a workflow context and concludes that InfoPath is better for his organisation.

Now you can compare traditional databases and SharePoint

SharePoint pros and cons
Pros

  • Easy to build sites and site collections (quick and cheapish)
  • Has a plethora of web parts that can be dragged and dropped by novice users to create dynamic content
  • Links well with InfoPath
  • List items can be produced via the MOSS API and Web Services from other technologies such as ASP.NET (see the sketch after this list)
  • Sites can be generated through the MOSS API
  • Does rudimentary version control (albeit not in the best possible way… perhaps this isn’t a pro after all :-))
  • Can create production level sites/storage facilities without a detailed design
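
As an aside, if you go down the API route, here’s a minimal sketch of adding a list item via the WSS 3.0 server object model; note that this must run on a server in the farm (remote clients would use the Lists web service instead), and the site URL and list name are hypothetical:

    using Microsoft.SharePoint;

    class ListItemDemo
    {
        static void Main()
        {
            // Hypothetical site URL and list name, for illustration only.
            using (SPSite site = new SPSite("http://moss-server/sites/demo"))
            using (SPWeb web = site.OpenWeb())
            {
                SPList list = web.Lists["Tasks"];

                // Add a new item and commit it to the content database.
                SPListItem item = list.Items.Add();
                item["Title"] = "Created from ASP.NET";
                item.Update();
            }
        }
    }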

Cons

  • It should not be used like a traditional database (… and can’t really be used like one either as it can’t do joins between lists)
  • Difficult to report from MOSS lists and libraries; although you can use Reporting Services to query lists, it is generally more difficult than writing SQL queries
  • Uses lots of hard drive space (the MOSS database grows quite large)
  • It is not straightforward to migrate from a dev environment to a live environment

Traditional Database (e.g. SQL Server 2005)
Pros

  • Very flexible
  • Can use proper joins, sorts
  • Links very well with Reporting Services to produce powerful outputs
  • Links very well with ASP.NET and other .NET technologies

Cons

  • Requires a detailed design (or not… but don’t do that to yourself!)
  • Can’t be used directly with InfoPath
  • Requires a production and dev server in an ideal world

Okay, so if you read between the lines… I think you should go for options 2 or 3… preferably 3.

The perception is that it’s quick and cheap to use InfoPath and SharePoint… and that perception is right 90% of the way. You’ll find that once you’ve done that 90%, the last 10% will take you an absolute age, and will probably consist of workarounds, squirming out of meeting requirements and swearing at the computer.

The decision is yours, so be pragmatic: assess the requirements in front of you, and ask difficult questions to try to ascertain whether any potential requirements creep puts you in the ASP.NET frame or the InfoPath frame. If reporting is a major player, I would urge you to think about using SQL Server and Reporting Services.

I hope this has helped you a little bit anyway, good luck :-)

SpittingCAML



Use Cases: estimating effort without using a finger in the air

I’ve been writing over the last month or so about design and processes.

It has been a big challenge to find a suitable development process for some of our larger, high-risk projects. After a long discussion with the key stakeholders, it was agreed that it would be a good idea to roll out a process that can be tailored rather than a mandated rigid process. Such a process can then be flexible enough to keep most of the people happy. It can also be adopted at different stages in the process, so even if you are about to start development and haven’t applied it to your requirements elicitation phase, you should be able to make it work.

OpenUP

Surprisingly, the standard/process/convention (for want of a better name) that was decided on was OpenUP… for those that don’t know much about it, please do go and research its background in more detail than I am about to give here. Essentially it is the ‘open source’ version of the Rational Unified Process (RUP).

It is a truly tremendous leap for my organisation to take, as the waterfall model is the only one our ancient organisational processes seem to work with. My team have been using iterative approaches for many years, but it has been largely against the grain… some people still build software in the same way they’d build an oil rig :-)

Figure 1: The OpenUP process in a nutshell

The main reason for utilising this process is its flexibility and its lean approach to documentation, the ethos being: if you think what you are about to document is not going to be read by anyone, don’t write it!

I am also a fan of its ‘code to quality’ ethos. This would certainly have helped us out of many a hole in previous projects where we didn’t apply OpenUP. Coding to quality means that no matter what you build, be it a small scoping study or a prototype, you build it to the same standard you would build the final product. This means that you don’t have to spend a vast amount of time refactoring your starting block. Of course, it will be commented and written to a defined coding standard, so the person who wrote it first won’t become a critical resource on the project.

The more you read about OpenUP, the more you’ll probably say “Don’t we already do iterative processes?”, or “We do that at the moment, it’s just that we don’t document it at all/in the same way”. If you are saying those things, it bodes well for you, as adoption will be easy.

OpenUP also talks about Use Cases. As an organisation we’ve been using Enterprise Architect for many years to design them… but we’ve never ever used them to come up with estimates of how much effort would be required to produce (design, implement, test, maintain) them. This brings me onto the topic of this post (sorry for the long preamble).

Using use cases to inform effort

First things first – this method will only work well if:

  1. Use cases are produced in the requirements capture/design phase of your project (if you don’t have any use cases, you can’t do much with this method :))
  2. You use use cases in the ‘traditional’ way, i.e. you are modelling user goal-level scenarios (e.g. user clicks button, system displays dialog) and not system goal-level scenarios (system processes batch b and passes the result to class a, class a performs operation d, system calculates x). A good example is on Mike Cohn’s use case estimation page [4].

The weight (or complexity) of a use case is determined by the number of different use case transactions in the interaction between the actor and the system to be built.

According to Jacobson’s use case points method [1], the criteria to assign a weight to a use case are*:

  • Simple use case - 1 to 3 transactions, weight = 5
  • Average use case - 4 to 7 transactions, weight = 10
  • Complex use case - more than 7 transactions, weight = 15

The same can be applied to your actors in the system. There is no hard and fast way to assess an actor (that I’m currently aware of) so you need to use judgement.

  • Simple – e.g. another system through an API, weight = 1
  • Average – e.g. another system through a protocol, or a person through a text-based user interface, weight = 2
  • Complex – e.g. a person through a graphical user interface, weight = 3

Figure 2: How use case effort estimation works [1]

*I would argue that the three levels of granularity are not enough if you find that your use cases are too low level. If so, you could use judgement and assign a weight between 1 and 15 based on the number of transactions. This would mean that, for example, you could assign the dead easy ones a weighting of 2, and the medium-complexity ones 13.
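
To make the bands concrete, here’s a small C# sketch of a weight function; Banded follows Jacobson’s published criteria [1], while Interpolated implements my finer-grained suggestion above (the scaling rule of roughly two points per transaction is a hypothetical example, not part of the published method):

    using System;

    static class UseCaseWeights
    {
        // Jacobson's bands [1]: 1-3 transactions = 5, 4-7 = 10, more than 7 = 15.
        public static int Banded(int transactions)
        {
            if (transactions <= 3) return 5;   // simple
            if (transactions <= 7) return 10;  // average
            return 15;                         // complex
        }

        // Finer-grained variant from the footnote above: a judgement-based
        // scale clamped to 1..15. The scaling rule is hypothetical.
        public static int Interpolated(int transactions)
        {
            return Math.Min(15, Math.Max(1, transactions * 2));
        }
    }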

Okay, so what is a transaction in this context?

A use case transaction is a round trip between the actor and the system. The best guide on this is Remi-Armand and Eef’s article [3]. It is important to understand this clearly, as counting transactions consistently underpins the whole estimate.

How does it work?

Once you’ve assessed the complexity of your use cases you end up with your Unadjusted Use Case Weight (UUCW):

Use case complexity    Weight    Number of use cases    Product (Weight * #Use Cases)
Simple                 5         10                     50
Average                10        13                     130
Complex                15        6                      90
                                 TOTAL                  270

Table 1: An example table of use case weights/products/totals

You now need to assess the complexity of your actors in the system. You will then end up with your Unadjusted Actor Weight (UAW):

Actor complexity    Weight    Number of actors    Product (Weight * #Actors)
Simple              1         3                   3
Average             2         2                   4
Complex             3         1                   3
                              TOTAL               10

Table 2: An example table of actor weights/products/totals

Now you can work out the Unadjusted Use Case Points (UUCP) for your project. This is calculated as follows:

  • UUCP = UUCW + UAW
  • 280 = 270 + 10
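
As a quick sanity check, here are the same sums in C# (as they might appear inside any method), with the figures lifted straight from Tables 1 and 2:

    // Unadjusted Use Case Weight: sum of (band weight * number of use cases).
    int uucw = (5 * 10) + (10 * 13) + (15 * 6);   // 50 + 130 + 90 = 270

    // Unadjusted Actor Weight: sum of (actor weight * number of actors).
    int uaw = (1 * 3) + (2 * 2) + (3 * 1);        // 3 + 4 + 3 = 10

    // Unadjusted Use Case Points.
    int uucp = uucw + uaw;                        // 280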

The black art of the metric lies in the next step. The total effort to develop a system is influenced by factors beyond the collection of use cases that describe the functionality of the intended system; it is therefore necessary to adjust the UUCP for technical and environmental complexity. This is essentially how this particular method models the real world.

Technical Complexity

Factor    Description                   Weight
T1        Distributed system            2
T2        Performance objectives        2
T3        End-user efficiency           1
T4        Complex processing            1
T5        Reusable code                 1
T6        Easy to install               0.5
T7        Easy to use                   0.5
T8        Portable                      2
T9        Easy to change                1
T10       Concurrent use                1
T11       Security                      1
T12       Access for third parties      1
T13       Training needs                1

Table 3: Technical Complexity Weights [1]

Each factor is assigned a value between 0 and 5 depending on its assumed influence on the project.

  • 0 means no influence
  • 3 is average influence
  • 5 is large influence

The technical complexity factor (TCF) is calculated by multiplying the value (the influence) of each factor in Table 3 by its weight and then adding all these products to get a sum called the TFactor. Finally, the following formula is applied:

TCF = 0.6 + (0.01 * TFactor)
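
In code this is just a weighted sum. A minimal C# sketch, assuming you supply your own 0 to 5 influence ratings in T1..T13 order:

    static class UcpFactors
    {
        // Table 3 weights for T1..T13, in order.
        public static readonly double[] TWeights =
            { 2, 2, 1, 1, 1, 0.5, 0.5, 2, 1, 1, 1, 1, 1 };

        // TFactor is the sum of (weight * influence rating 0..5) over T1..T13;
        // then TCF = 0.6 + (0.01 * TFactor).
        public static double TechnicalComplexityFactor(int[] ratings)
        {
            double tfactor = 0;
            for (int i = 0; i < TWeights.Length; i++)
                tfactor += TWeights[i] * ratings[i];
            return 0.6 + (0.01 * tfactor);
        }
    }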

Environmental Complexity

Factor    Description                              Weight
E1        Familiar with the development process    1.5
E2        Application experience                   0.5
E3        Object-oriented experience               1
E4        Lead analyst capability                  0.5
E5        Motivation                               1
E6        Stable requirements                      2
E7        Part-time staff                          -1
E8        Difficult programming language           -1

Table 4: Environmental Complexity Weights [1]

Each environmental factor is assigned a value between 0 and 5 depending on its assumed impact on the project.

  • 0 means no impact
  • 3 is average impact
  • 5 is large impact

The environmental factor (EF) is calculated similarly: multiply the value (the impact) of each factor in Table 4 by its weight and add all the products to get a sum called the EFactor. The following formula is applied:

EF = 1.4 - (0.03 * EFactor)
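
The EF calculation follows the same weighted-sum pattern; a matching method that would sit alongside TechnicalComplexityFactor above might look like this (note the subtraction: a strong, favourable environment reduces the adjusted point count):

    // Table 4 weights for E1..E8, in order.
    public static readonly double[] EWeights =
        { 1.5, 0.5, 1, 0.5, 1, 2, -1, -1 };

    // EFactor is the sum of (weight * impact rating 0..5) over E1..E8;
    // then EF = 1.4 - (0.03 * EFactor).
    public static double EnvironmentalFactor(int[] ratings)
    {
        double efactor = 0;
        for (int i = 0; i < EWeights.Length; i++)
            efactor += EWeights[i] * ratings[i];
        return 1.4 - (0.03 * efactor);
    }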

So… what do I get from all that?

You need to calculate the final (adjusted) Use Case Points (UCP) total, which will inform the effort required (with a little tuning).

This is done with the following formula:

  • UCP = UUCP * TCF * EF
  • 378 = 280 * 1.5 * 0.9

NB: I’ve calculated 1.5 for TCF and 0.9 for EF but not shown the workings as part of this article.

Great, so how do I know the effort?

Another piece of black art ensues. You need to put a figure on how many hours your team would take to design, implement and test one use case point. It really is *that* simple. Multiply the Use Case Points by that figure and you have your very rough estimate.

  • Estimated Effort = UCP * #hours to implement one use case point
  • 3780 hours = 378 * 10
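
To round it off, here is the whole calculation in one place using this post’s figures, as it might appear inside any method (remember that 10 hours per point is an example figure, to be tuned from your own past projects):

    // End-to-end worked example using the numbers from this post.
    int uucp = 270 + 10;                       // UUCW + UAW = 280
    double ucp = uucp * 1.5 * 0.9;             // UUCP * TCF * EF = 378
    double hoursPerPoint = 10;                 // team-specific; tune from past projects
    double effortHours = ucp * hoursPerPoint;  // 3780 hours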

If you look at my references, you can do some further reading on how you should tune this ‘number of hours for each use case point’ figure based on past projects.

Figure 3: The cone of uncertainty

Don’t forget the dreaded cone of uncertainty… estimates done at the beginning of the project are destined to be out. As my organisation adopts OpenUP, we can go back and rework our estimates to reflect reality with each iteration. I hope this will prove most useful… but the jury is out :-)

SpittingCAML

References

  1. Jacobson, Ivar et al., Object-Oriented Software Engineering: A Use Case Driven Approach, revised printing, Addison-Wesley, 1993.
  2. Cockburn, Alistair, Writing Effective Use Cases, Addison-Wesley, 2001.
  3. Collaris, Remi-Armand and Dekker, Eef, Software cost estimation using use case points: Getting use case transactions straight, IBM, 2009.
  4. Cohn, Mike, Estimating With Use Case Points, Mountain Goat Software, 2005 (published in Methods & Tools magazine, a free global software development resource).


Chartered IT Professional (CITP)

It was a great pleasure today to receive my first formal qualification since I left university. I am now an official Chartered IT Professional.

[Image: CITP medal]

In order to maintain my CITP status I need to continually develop my skill set, which is good news for my blog… as I will continue to have lots of interesting things to write.

Find out about becoming a CITP yourself on the BCS website.

I would highly recommend it, as it really helps you to understand what you may or may not have achieved so far in your career.

SpittingCAML BSc (hons) CITP MBCS MIET

I promise it’s the last time I EVER sign off like that!



Things you should know before you purchase K2 [blackpoint]

  • Only the enterprise version of K2 [blackpoint] will enable you to work with a distributed SharePoint farm (the SQL Server database can be on another server though)
  • It is not possible to deploy K2 [blackpearl] and K2 [blackpoint] on the same physical server
  • If you have multiple web front ends (WFEs) for SharePoint, you need to buy an Enterprise license for each WFE
  • K2 [blackpearl] 904 will encompass all the new features available in K2 [blackpoint]

These are a few key points from: K2 blackpoint Licensing, Deployment Scenarios, and Support Information

SpittingCAML



