Archive for the 'Visual Studio 2008' Category

Determining Active Directory group membership from SQL Server 2005

Have you ever needed to determine an authenticated user’s Active Directory (AD) group memberships from within SQL Server?

It makes sense if you can use AD to decide a user’s role or permissions in an application. This reduces the time needed to develop a custom permissions model, and allows the helpdesk, maintenance and operations teams to keep track of user privileges in one place.

It seems SQL Server 2005 does not have an out-of-the-box solution for this. Searching the various support forums and search engines led me down the managed .NET assembly route. Microsoft calls this building a database object with common language runtime (CLR) integration.

You may be asking yourselves… why not go down the SQL Server Integration Services route and gather an export from Active Directory? Why query LDAP every time a query in SQL Server is executed? These are important design decisions that really come down to the required functionality.

You can find out another way to achieve this here: How To Get Active Directory Users and Groups Using SQL Server Integration Services SSIS 2005

Ok, so I am now going to build a CLR assembly in Visual Studio 2008 with the target framework set to .NET 3.5. SQL Server 2005 actually targets .NET 2.0, but 3.5 uses the same CLR engine, so we are able to make use of things like LINQ and additional Active Directory components [more on this later].

namespace SpittingCAML.SQLCLR.DatabaseAssemblies
{
    using System;
    using System.Collections.Generic;
    using System.Data.SqlTypes;
    using System.DirectoryServices.AccountManagement;
    using System.Linq;
    using System.Text;
    using Microsoft.SqlServer.Server;
    using System.Data;

    /// <summary>
    /// this class contains the AD Integration methods (Stored Procedures)
    /// </summary>
    public class ADIntegration
    {
        /// <summary>
        /// this method determines if a user is a member of the supplied
        /// active directory group
        /// </summary>
        /// <param name="user">the user to search for</param>
        /// <param name="domain">the domain to search</param>
        /// <param name="groupName">the group to determine membership of</param>
        /// <returns>true if access granted, false if not granted</returns>
        [Microsoft.SqlServer.Server.SqlFunction]
        public static bool IsGroupMember(string user, string domain, string groupName)
        {
            bool isGroupMember = false;

            // can't proceed with the lookup if any supplied parameter is null or empty
            if (string.IsNullOrEmpty(user) || string.IsNullOrEmpty(domain) || string.IsNullOrEmpty(groupName))
            {
                return isGroupMember;
            }

            using (var ctx = new PrincipalContext(ContextType.Domain, domain))
            {
                using (var grp = GroupPrincipal.FindByIdentity(ctx, IdentityType.Name, groupName))
                {
                    isGroupMember = grp != null &&
                        grp
                        .GetMembers(true)
                        .Any(m => m.SamAccountName.ToLower() == user.ToLower());
                }
            }

            return isGroupMember;
        }
    }
}

Code Snippet 1 – the assembly that will be built into a DLL in Visual Studio 2008 (download a copy here: ADIntegration.cs)

The code snippet creates a class ‘ADIntegration’ in the ‘SpittingCAML.SQLCLR.DatabaseAssemblies’ namespace. This is important to remember, as it is required later when registering the assembly.

The functionality of this method is as follows:

  • User calls the method with the user name, domain and user group as parameters
  • System determines if supplied user is a member of the supplied user group for the supplied domain
  • System returns boolean true if user is a member and boolean false if not a member.
  • The method code is shamelessly stolen from StackOverflow, user huseyint. The rest is down to the style conventions of my organisation.

I have tagged the method ‘IsGroupMember’ with the [Microsoft.SqlServer.Server.SqlFunction] attribute to let SQL Server know it is intended to be used in place of a typical SQL Server function.

Once your assembly is built, it is almost ready to be put into a database of your choice as an object. You’ll notice we make use of LINQ and Directory Services. However, since SQL Server 2005 targets .NET 2.0, it is necessary to install some pre-requisites in order to gain access to .NET 3.5 features. It should also be noted that not all .NET 2.0 features are available in SQL Server 2005, as code has to meet certain criteria to be included in the ‘safe’ category.

Read more about safe, unsafe and external access assemblies: here.

The pre-requisites required are:

  • CLR enabled must be set on your database server – this enables CLR assemblies
  • The target database must be set to TRUSTWORTHY to allow UNSAFE assemblies to be run against it.
  • System.Core [.NET 3.5 core components]
  • System.DirectoryServices.AccountManagement [.NET 3.5 Active Directory specific methods]
  • System.DirectoryServices.Protocols [.NET 2.0 dependency of System.DirectoryServices.AccountManagement]

  • System.DirectoryServices [.NET 2.0 dependency of System.DirectoryServices.AccountManagement]

The pre-requisites are typically found in the install folders for .NET 2.0 and .NET 3.5

-- must enable CLR on the database server
EXEC sp_configure 'show advanced options', '1';
GO
RECONFIGURE;
GO
EXEC sp_configure 'clr enabled', '1';
GO
RECONFIGURE;
GO

DECLARE @dbName NVARCHAR(255)
SET @dbName = '<database name here>'
DECLARE @sql NVARCHAR(4000)

-- must make the database trustworthy
-- NB: the USE only applies inside the dynamic batch, so sp_changedbowner
-- is executed in the same batch to make sure it runs against @dbName
SET @sql = 'USE ' + @dbName + '; EXEC sp_changedbowner ''sa'';'
EXEC sp_executesql @sql
SET @sql = 'ALTER DATABASE ' + @dbName + ' SET TRUSTWORTHY ON;'
EXEC sp_executesql @sql

GO

DECLARE @DotNet2Folder NVARCHAR(1000)
DECLARE @DotNet35Folder NVARCHAR(1000)
DECLARE @sql NVARCHAR(4000)

SET @DotNet2Folder = 'C:\Windows\Microsoft.NET\Framework\v2.0.50727'
SET @DotNet35Folder = 'C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5'

--if the [System.Core] assembly exists then drop it
IF EXISTS (SELECT * FROM sys.assemblies WHERE [name] = 'System.Core')
    DROP ASSEMBLY [System.Core]

--if the [System.DirectoryServices.AccountManagement] assembly exists then drop it
IF EXISTS (SELECT * FROM sys.assemblies WHERE [name] = 'System.DirectoryServices.AccountManagement')
    DROP ASSEMBLY [System.DirectoryServices.AccountManagement]

--if the [System.DirectoryServices.Protocols] assembly exists then drop it
IF EXISTS (SELECT * FROM sys.assemblies WHERE [name] = 'System.DirectoryServices.Protocols')
    DROP ASSEMBLY [System.DirectoryServices.Protocols]

--if the [System.DirectoryServices] assembly exists then drop it
IF EXISTS (SELECT * FROM sys.assemblies WHERE [name] = 'System.DirectoryServices')
    DROP ASSEMBLY [System.DirectoryServices]

-- NB: CREATE ASSEMBLY does not accept an expression in its FROM clause,
-- so each statement is built and run as dynamic SQL. The .NET 2.0
-- dependencies are registered before System.DirectoryServices.AccountManagement,
-- which needs them.
SET @sql = 'CREATE ASSEMBLY [System.Core] FROM ''' + @DotNet35Folder + '\System.Core.dll'' WITH PERMISSION_SET = UNSAFE'
EXEC sp_executesql @sql

SET @sql = 'CREATE ASSEMBLY [System.DirectoryServices.Protocols] FROM ''' + @DotNet2Folder + '\System.DirectoryServices.Protocols.dll'' WITH PERMISSION_SET = UNSAFE'
EXEC sp_executesql @sql

SET @sql = 'CREATE ASSEMBLY [System.DirectoryServices] FROM ''' + @DotNet2Folder + '\System.DirectoryServices.dll'' WITH PERMISSION_SET = UNSAFE'
EXEC sp_executesql @sql

SET @sql = 'CREATE ASSEMBLY [System.DirectoryServices.AccountManagement] FROM ''' + @DotNet35Folder + '\System.DirectoryServices.AccountManagement.dll'' WITH PERMISSION_SET = UNSAFE'
EXEC sp_executesql @sql

GO

Code Snippet 2: running the above T-SQL script enables CLR on the database server, ensures the database you are going to be using assemblies in is trustworthy, and registers the pre-requisite components. (download a copy here: ActiveDirectory_PreReqs.sql)

If you make use of the code snippet you’ll need to supply your database name and the locations of the .NET 2.0 and .NET 3.5 DLLs. The pre-requisites are installed in UNSAFE mode since they contain code that is not certified safe. If you try to register the assemblies as SAFE you will be presented with an error showing the areas of the code that are not certified safe.
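As a quick sanity check (not part of the original scripts), you can confirm what was registered, and with which permission set, by querying the system catalog:

```sql
-- list user-registered CLR assemblies and their permission sets
SELECT name, permission_set_desc
FROM sys.assemblies
WHERE is_user_defined = 1;
```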

We are now ready to deploy our assembly that contains the code in Code Snippet 1.

DECLARE @dbName NVARCHAR(255)
SET @dbName = '<database name here>'

DECLARE @assemFolder NVARCHAR(1000)
SET @assemFolder = '<assembly folder here>'

DECLARE @sql NVARCHAR(4000)

-- have to drop the function first because you can't drop the assembly
-- while the function depends on it
IF EXISTS (SELECT *
           FROM   sysobjects
           WHERE  name = N'udf_IsGroupMember')
    DROP FUNCTION dbo.[udf_IsGroupMember]

--if the [SpittingCAML.SQLCLR.DatabaseAssemblies] assembly exists then drop it
IF EXISTS (SELECT * FROM sys.assemblies WHERE [name] = 'SpittingCAML.SQLCLR.DatabaseAssemblies')
    DROP ASSEMBLY [SpittingCAML.SQLCLR.DatabaseAssemblies]

-- NB: CREATE ASSEMBLY does not accept an expression in its FROM clause,
-- so the statement is built and run as dynamic SQL
SET @sql = 'CREATE ASSEMBLY [SpittingCAML.SQLCLR.DatabaseAssemblies] FROM ''' + @assemFolder + '\SpittingCAML.SQLCLR.DatabaseAssemblies.dll'' WITH PERMISSION_SET = UNSAFE'
EXEC sp_executesql @sql

GO

/********************************************************************************
*
* Name: [udf_IsGroupMember]
*
* Description:
*
*       Function determines if the supplied user is in an active directory group
*
*       NB: this is a MANAGED CODE function utilising the
*           SpittingCAML.SQLCLR.DatabaseAssemblies assembly class
*
* Revision History:
*
* Version Date         Author(s)           Description
* 1.0     08 Dec 2009  Martin Allen        Initial Release
*
********************************************************************************/
CREATE FUNCTION [udf_IsGroupMember]
(
    @user nvarchar(1000),
    @domain nvarchar(1000),
    @groupName nvarchar(1000)
)
RETURNS BIT
AS EXTERNAL NAME [SpittingCAML.SQLCLR.DatabaseAssemblies].[SpittingCAML.SQLCLR.DatabaseAssemblies.ADIntegration].[IsGroupMember]

GO

Code Snippet 3: deploying our assembly and creating a stub that allows standard SQL stored procedures and functions to call our assembly method as if it were a standard SQL function. (download a copy here: Deploy_Assembly.sql)

You will need to edit Code Snippet 3 with the database name and the folder you built the assembly [created by Code Snippet 1] into.

Once you have run this script, you are in the position to test it.

DECLARE @user nvarchar(1000)
DECLARE @domain nvarchar(1000)
DECLARE @groupName nvarchar(1000)

-- TODO: Set parameter values here.
SET @user = 'testuser'
SET @domain = 'mydomain'
SET @groupName = 'mygroup'

SELECT [dbo].[udf_IsGroupMember](@user, @domain, @groupName)

Code Snippet 4: testing the deployment was successful. (download a copy here: Test_Assembly.sql)

Set the parameters to something meaningful in your own Active Directory, supply a valid domain, and give it a go.

Now we have a useful function to utilise alongside standard T-SQL functions. I use it to replace legacy function calls to proprietary permission models to reduce maintenance costs!
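For instance (the group, domain and role names here are hypothetical), the function drops straight into ordinary T-SQL expressions:

```sql
-- hypothetical example: derive an application role from AD group membership
DECLARE @user nvarchar(1000)
SET @user = 'testuser'

SELECT CASE
           WHEN [dbo].[udf_IsGroupMember](@user, 'mydomain', 'App Administrators') = 1
           THEN 'Admin'
           ELSE 'Standard user'
       END AS [Role]
```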

Hope this helps you out! Enjoy Smile


Further reading on SQL Server Assemblies

Delivery of an important project, thanks to some clever shortcuts

This blog has been quiet for the last couple of weeks because I’ve been leading a development team in the final stages of delivery of a pretty large asset/stock tracking system.

It’s a long story as to why we decided to do it all ‘in house’ rather than buy a COTS product. Let’s just say we’d already tried buying off the shelf, and that’s the reason we started doing our own.

The budget was very challenging, and the development team grew quickly in the space of a few weeks when we realised that the estimated effort was going to take more than four full-time developers.


With a large development team in disparate locations around the UK, it is important to get the communication channels right. My organisation had rolled out text-based communication similar to Live Messenger (Office Communicator). This proved the first clever shortcut: the tool allowed us to collaborate in real time, using virtual whiteboards.

WAN-based software configuration management. This was very important: it meant we could all see the same page, and nothing was stuck on Fred’s laptop when he went on leave :-)

Hudson – continuous integration testing ensured that we never dirtied our source code, so we didn’t hear any of the usual ‘why does it not build when I check it out?’ crying from the team. I really liked the fact you could schedule builds and output the successful build number to SharePoint using its RSS feeds. This little gem saved our bacon many a time… although, until the developers got used to not treating our source code control system as a backup for their work in progress, the automatic emails telling everyone (including the PM) that the build failed were a little annoying ;-)… I loved it, as there was no excuse for checking in ‘work in progress’ :-D

SharePoint 2007 for project documentation, for much the same reason as the WAN-based software configuration management. SharePoint helps anyone on the internal intranet see what we are up to… and the project manager can stick the Gantt chart and meeting minutes somewhere too :-)


Those of you who have read my blog before will remember my ramblings about which software development methodology you should choose. We chose OpenUP, the open source version of the Rational Unified Process. I was a little sceptical about its adoption in the embryonic stages of the project, but it certainly helped us stay on course. It also ensured we followed a process that industry knows about. The iterative nature of these kinds of processes ensures that higher management get to see what us technical types are up to, which keeps the pressure off, because at the end of each iteration there is a defined output and testing strategy.


We had to keep it all web based, so it could work over the intranet… and we were limited to IE6 because my organisation hates upgrading and is very risk averse. I can hear you all groaning… but what about all the security patches etc. etc… yeah I know… I’ve sat across the table from the infrastructure guys and tried to explain that, but we are only developing for our internal intranet… what could go wrong with that ;-) … basically it is out of my pay bracket to argue with the people that make that kind of decision.

As my department is well versed in C# and ASP.NET (3.5) this was our chosen technology base… we paired this with SQL Server, as we are also well versed in that.

During the design phase it was glaringly obvious that the standard AJAX toolkit combined with the standard ASP.NET controls was not going to cut the mustard with the requirements that had been elicited… and we needed rapid application development, and slick results.

We turned to Telerik’s ASP.NET AJAX control library and it saved us a small fortune. I’m sceptical about most third party products… and I was very sceptical about using it in such a ‘flagship’ project, however my fears were unfounded. It seems you can sell a product with these controls in, and you get all the source code for them… and you can modify the source code for them providing you let Telerik know what you did. They also provide an excellent forum for dealing with problems. If you use a search engine and look up ‘RadGrid’ the Telerik equivalent of a GridView or DataGrid, you still get hundreds of hits :-)

We used the windows style dialog manager (RadWindow and RadAlert), menu control (RadMenu), grid view (RadGrid), combo box (RadComboBox), numeric only text boxes with extras (RadNumericTextBox) and many of their date pickers to name a few. Not only did we use them, we also used them alongside existing AJAX and standard .NET controls with no issue. I think if we did not buy this toolset, we would have spent hundreds of hours doing what they have already done for us, and I bet ours wouldn’t have been as slick, or as tested as theirs. The development team and I weren’t sad that they’d taken the raw control building away from us (and I thought it would incite a riot by the coffee machine!). One of the most pleasing aspects about it is that the customer is happy that we have delivered a sophisticated interface that is user friendly. The window manager allows users to resize, drag and minimise functionality on our web pages much the same as they do on their windows desktop machines.

Our next cost saving came with the ReportViewer control. We’d been bitten badly in previous projects when using Reporting Services. Is it just me, or does it not feel like a finished product? Problems we had in the past were cross-domain access / the classic double hop problem (impersonation) and unstructured website navigation (that one was probably more our fault). The ReportViewer control gives us a little more freedom: we can construct dynamic object data sources in our business layer and create reports on the fly, on the actual page, rather than firing off our requests to the Reporting Services engine running on another site collection in IIS. This eliminates authentication issues and allows you to put the generated report anywhere on your page. Our users like it a lot :-)

SQL Server Integration Services (SSIS) is looking like it will save us money once we’ve worked out how to use it properly ;-). The COTS product that we’re replacing has a database backend that we can connect to using SSIS, and the customer is providing us data in Excel. SSIS is a useful tool to help shape and cleanse the data on its way into the new system.

Final thoughts

User training is now underway, user acceptance testing is around the corner, and I think it will be a big success for the team.


Hudson – Continuous Integration Testing

For a while now we’ve been planning on making use of Hudson to help us maintain working builds in our source code configuration system.

What Hudson does is build your software on a predefined schedule (like a SQL Server Agent job or Windows scheduled task), or when requested by a user, and produce a dashboard showing the status of your software builds. It is highly configurable, and can link to many source code control systems to get the latest version of your code.
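If you use the ‘Build periodically’ trigger, the schedule is given in cron-style syntax (five fields: minute, hour, day of month, month, day of week). As a hypothetical example, this entry would build every weeknight at 02:00:

```
# minute hour day-of-month month day-of-week
0 2 * * 1-5
```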

Even though this product was/is/seems to be (I’m not sure on this) aimed at the Java developers of this world, it can build pretty much any software package.

We intend to make use of it for the building of .NET applications by getting it to call MSBUILD via DOS Batch files (in my example), however I will point out that you can get Hudson to call MSBUILD through an MSBUILD plug-in not covered in this post.

I want to use another plug-in, the ‘Text Finder’ plug in to parse the output of the batch file for StyleCop errors and system build failures.

Please have a read of this: Meet Hudson, so you can get a proper introduction to it.

To be perfectly honest, I’m not a Java fan (as my colleagues will tell you ;-))… and I thought it was a little overkill to have to install a Java Runtime and Apache, and Hudson to perform automatic builds… however, it has been relatively painless… and as my organisation is not exactly liking to splash to cash at the moment, the fact that everything is free helps…

Those who are risk averse, like myself, have to put aside the fact that it’s all open source… and the only support out there is through volunteers :-)… there’s no metaphorical stick to hit someone with if the product doesn’t ‘just work’ out of the open source box.

What I downloaded

  1. Java Runtime Environment (jre-6u14-windows-i586)
  2. Tomcat 5.5 (apache-tomcat-5.5.27)
  3. Hudson (hudson.war v1.314)
  4. Text Finder plug in (text-finder.hpi v1.7)

NB: If you are using IE8/7 to download hudson.war and text-finder.hpi you may find that the extension is changed to .zip. You will need to rename the files back to their original extension to get the installation to work correctly.

Setting up Hudson on Windows 2003/2008 Server alongside IIS

Obviously, since apache tomcat and Hudson are Java based applications, if you don’t have a Java Runtime environment (JRE), you’ll need to install that first. It is relatively painless, simply double click on the EXE installer and bob’s your uncle.

I had my doubts about getting Tomcat 5.5 working alongside IIS, but to my surprise (I haven’t touched Tomcat since my University days back in 2001) the installer worked well, and defaulted to using port 8080, which is nice, since I don’t want to be getting in the way of IIS on port 80. The installer *should* also detect if you’ve got a JRE installed and set the link to it up automatically. If it doesn’t, simply point the installer at the folder that you installed the JRE. You need to set an administrator account up to access the Tomcat manager.

The Tomcat application server should be alive (possibly following a reboot in certain circumstances) after the install, and you should be able to navigate to its front page.

Figure 1: The tomcat front page

Figure 2: Entering tomcat manager

Open the Tomcat manager to instigate the Hudson installation. It will prompt you for the username and password you set up during the Tomcat installation before you can access the page.

Scroll down to the Deploy section, in the ‘WAR file to deploy’ section click browse, and select the hudson.war file we downloaded earlier.

Figure 3: Deploying the Web Archive (WAR) for Hudson

Click the deploy button and the application should be deployed. If you put the WAR file in a part of the file system that gives a long file path, e.g. ‘C:\longfilepath\longfilepath\longfilepath\longfilepath\longfile\hudson.war’ you may have issues with the deployment. I certainly encountered this issue last week. The error message you will get isn’t the most useful, so it’s worth moving it to the root of a drive to see if that solves it.

To confirm successful deployment, look at the application list

Figure 4: the application list, showing Hudson

If you click on the hyperlink ‘/hudson’ it should take you to the front page of the Hudson application.

Figure 5: The Hudson ‘dashboard’

You are now ready to go… as you might have noticed, I’ve already created a job – ‘Test 001’. This is the build I’ve set up to walk through in this post.

As I’m using the ‘Text Finder’ plug-in, you’ll now need to install that if you want to follow my example.

Figure 6: Managing Hudson, and adding a plug-in

Click ‘Manage Hudson’ and then ‘Manage Plugins’. Click the ‘Advanced’ tab and scroll to the bottom of that page so you see the following:

Figure 7: Uploading a plug-in

Now click the upload button; when it has finished, restart the Tomcat service. If you don’t perform a restart, the plug-in won’t be shown as installed.

Figure 8: list of installed plug-ins

Once installed, you should see it in the list of installed plug-ins.

We can now go about creating the job that will build the .NET application.

I’ve got a really simple .NET 3.5 website application (it does nothing other than to display default.aspx) that I’m using for this post.

Figure 9: Visual Studio 2008 Web Application, the working folder on E: drive and the batch file in the root of the web application folder.

The batch file that Hudson will call is very simple, and I suspect it could be done better, however, here it is if you want to make use of it:

echo change directory to visual studio 2008 VC folder (assumes this batch
echo file is started in the Common7\Tools folder of the VS install)
cd ..\..
cd VC
echo set environment variables
call vcvarsall.bat

echo call Test001.csproj (looks in the directory of this batch file for it)
call msbuild %~dp0Test001.csproj

Navigate to the Hudson ‘dashboard’/front page and click ‘New Job’.

Provide Hudson with a name for your job, and select ‘Build a free-style software project’.

Figure 10: Free style software project selection

Figure 11: Adding build steps

Leave everything else as standard for now, and click ‘Add build step’ and select the ‘Execute windows batch command’ option.

Enter the path to the batch file (as shown in Figure 12)

Figure 12: entering the batch file details into build step

The next step is to configure the ‘text finder’ plug-in to look for the token ‘FAIL’, since MSBUILD produces messages with the word ‘FAIL’ in them.

Figure 13: configuring Hudson to look for the token ‘FAIL’ in the console output.

Click the Save button, and your job has been created!

Navigate back to the Hudson dashboard, and click the ‘build’ icon next to Job ‘Test 001’ (as shown in Figure 14)

Figure 14: Instigate a build

If the build was successful, when you refresh the page, you should see this:

Figure 15: The sunny picture indicates a very stable build

To demonstrate how Hudson picks up on failed builds, I’m now going to rename the code behind page for default.aspx from ‘default.aspx.cs’ to  ‘breakbuild.aspx.cs’.

Figure 16: Deliberately breaking the build

Using Hudson, run the job again.

Figure 16: The cloudy picture indicates a failure has occurred

The job has failed; the more the job fails, the worse the weather gets :-)

Run it a few more times to get more bleak weather (unless you like thunderstorms).

Figure 17: thunderstorms indicate that most recent builds have all failed

You can review the console output of all the builds that have taken place to help you diagnose failed builds.

Figure 18: review console of failed builds

As you can imagine, the text finder plug-in and the numerous others available make Hudson a very powerful tool.

I intend to set ours up so it will notify the development team when the latest version of a system checked into our source control system will not build, or contains StyleCop warnings.


InfoPath and SharePoint versus ASP.NET and a Traditional Database versus ASP.NET using SharePoint as a database technology

I was recently asked by a colleague

“I’ve got to build a new application to support x (an anonymous set of requirements that I cannot divulge here!), I’ve not got long to do it, and my developer resources are thin on the ground. I’ve heard you talk about SharePoint and InfoPath, and need to call on your experience, do you think I could develop my application using those two technologies? It requires a complex interface layer and needs to be able to provide neat looking reports.”

Okay I said, I’ll give you my experiences in the form of some potential solutions and potential pros and cons. I realise by posting this I’m likely to anger the gods and provoke some really large debate… but that was my plan all along :-)


So your decision is basically between three development strategies/options:

  1. InfoPath and SharePoint 2007 (MOSS)
  2. ASP.NET and MOSS
  3. ASP.NET and SQL Server 2005

This means the first step is to consider the requirements for the interface layer (IL)… ask yourself: will the user want to do anything fancy on the front end? e.g. sorting data grids, combo boxes, interfacing with external systems. If the answer is yes, then you’ll probably want to consider an ASP.NET front end.

If the user really only requires a simple form, then InfoPath is a good choice for the IL… but to make the waters even more murky, you’ll need to consider the storage/reporting requirements, as InfoPath on its own only offers XML-based storage: on disk, in email or in a SharePoint forms library. ASP.NET forms are more flexible and enable you to store the data in a SharePoint list, a database or, if you really wanted, an XML file.

InfoPath pros and cons

  • Forms can be produced by pretty much anyone with no training
  • Simple to build prototypes (quick and cheap)
  • Easy for users to use and understand
  • Allows offline editing (by saving the form to local hard drive)
  • Doesn’t need to be designed in detail before development can be started


Cons:

  • Which version of InfoPath does your corporate desktop/laptop build support? InfoPath 2003 is getting a little tired now (this means it’s old, won’t support newer controls, and will limit the ‘code behind’ that you can produce)
  • InfoPath does not allow you to build flexible, custom interfaces
  • Can’t reuse rules from other forms without having to recreate them
  • Rules are difficult to navigate/debug
  • Difficult to migrate (without reworking the forms)
  • If used in conjunction with a SharePoint form library, the coupling is very tight, so if you move or rename the site you might have to alter the form

ASP.NET pros and cons

  • Can do whatever you like (within reason) as you have access to .NET 3.5. [this includes things like sending email etc.]
  • Can produce flexible interfaces
  • Easy to debug using Visual Studio
  • Can reuse code and layouts using classes and master pages
  • Can interface with SharePoint, SQL Server, Oracle, XML and lots of other ODBC compliant technologies


Cons:

  • Requires that the developers have ASP.NET training
  • Prototypes take longer to build than in InfoPath
  • Does not allow offline use, without extensive development of a side by side offline system
  • Users may require training if something is ’specialised’
  • You need to design the pages (if you want a sensible solution)

You can also have a read of my blog, where I’ve linked to Jason Apergis’ blog; he explains the pros and cons in a workflow context, but decides that InfoPath is better for his organisation.

Now you can compare traditional databases and SharePoint

SharePoint pros and cons

  • Easy to build sites and site collections (quick and cheapish)
  • Has a plethora of web parts that can be dragged and dropped by novice users to create dynamic content
  • Links well with InfoPath
  • List items can be produced via MOSS API and Web Services from other technologies such as ASP.NET
  • Sites can be generated through the MOSS API
  • Does rudimentary version control (albeit not in the best possible way… perhaps this isn’t a pro after all :-))
  • Can create production level sites/storage facilities without a detailed design


Cons:

  • It should not be used like a traditional database (… and can’t really be used like one either, as it can’t do joins between lists)
  • Difficult to report from MOSS lists and libraries; although you can use Reporting Services to query lists, it is generally more difficult than writing SQL queries
  • Uses lots of hard drive space (the MOSS database grows quite large)
  • It is not straightforward to migrate from a dev environment to a live environment
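To illustrate the object model point above — a minimal sketch of creating a list item through the MOSS API, assuming a hypothetical site URL and list name (and a reference to Microsoft.SharePoint.dll):

```csharp
using Microsoft.SharePoint;

class ListItemDemo
{
    static void Main()
    {
        // hypothetical site and list names, for illustration only
        using (SPSite site = new SPSite("http://intranet/sites/demo"))
        using (SPWeb web = site.OpenWeb())
        {
            SPList list = web.Lists["Project Tasks"];
            SPListItem item = list.Items.Add();
            item["Title"] = "Raise purchase order";
            item.Update(); // commits the new item to the list
        }
    }
}
```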

Traditional Database (e.g. SQL Server 2005)

  • Very flexible
  • Can use proper joins, sorts
  • Links very well with Reporting Services to produce powerful outputs
  • Links very well with ASP.NET and other .NET technologies


Cons:

  • Requires a detailed design (or not… but don’t do that to yourself!)
  • Can’t be used directly with InfoPath
  • Requires a production and dev server in an ideal world

Okay, so if you read between the lines… I think you should go for options 2 or 3… preferably 3.

The perception is that it’s quick and cheap to use InfoPath and SharePoint… and that perception is right about 90% of the way. You’ll find that once you’ve done 90%, the last 10% will take you an absolute age, and will probably consist of workarounds, squirming out of meeting requirements and swearing at the computer.

The decision is yours, so be pragmatic: assess the requirements in front of you, and ask difficult questions to ascertain whether any potential requirements creep puts you in the ASP.NET frame or the InfoPath frame. If reporting is a major player, I would urge you to think about using SQL Server and Reporting Services.

I hope this has helped you a little bit anyway, good luck :-)


SPLookupField…. setting this in code

Many of you may have encountered the same challenge as me today: attempting to set the value of a lookup field in SharePoint through some C# or VB.NET code.

Matt Rantlett has written a great little helper function that will save the day!

Find it here: SPLookupField Helper function
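Before you reach for a helper, it’s worth knowing what’s going on under the hood. A lookup field stores its value as "<id>;#<text>", so the safe way to set one is via SPFieldLookupValue rather than concatenating strings by hand. Here’s a minimal sketch; the site URL, list name ("Orders"), field name ("Customer") and item ID are all made up for illustration:

```csharp
// Minimal sketch: set a lookup field using SPFieldLookupValue.
// All names and IDs below are hypothetical examples.
using (SPSite site = new SPSite("http://moss/sites/demo"))
using (SPWeb web = site.OpenWeb())
{
    SPListItem item = web.Lists["Orders"].Items[0];

    // 5 is the ID of the target item in the lookup's source list;
    // "Contoso Ltd" is its display text.
    item["Customer"] = new SPFieldLookupValue(5, "Contoso Ltd");
    item.Update();
}
```

The helper function linked above wraps up the fiddly part: resolving the ID from the display text so you don’t have to know it in advance.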


Creating your SharePoint sites without having to use the rat*

*Rat - a term used by one of my university lecturers many years ago to refer to the ‘mouse’ peripheral. It made me laugh at the time, and it kinda stuck! The point I’m trying to make is that creating a site with some customisation seems to be an endless loop of point-and-click (in most cases where templates are not appropriate).

Hi there, it’s been a while since my last post, mainly because I seem to be working on so many different technologies; perhaps I am becoming a ‘jack of all trades, master of none’. Anyhow, I quite like the challenge of working with so many different technologies and making them all work together!

So today I’ve been thinking about producing SharePoint sites through code. It’s very easy to do this using K2 [blackpearl], but this project requires the bare minimum of custom installation and set-up.

farhanfaiz’s Weblog provided a good overview of how easy it is to create sites using C#. I’ve blogged on a previous occasion about site creation and maintenance in PowerShell, but as I need to do the creation inside a .NET application, I figured C# would be the best choice, since I know it better than PowerShell, and the rest of my team could maintain it in the future.

Here is a method you could use in your .NET applications:

public static bool CreateSharePointSite(string parentURL,
                    string siteURLnode,
                    string siteTitle,
                    string siteTemplateName)
{
    bool successStatus = false;

    using (SPSite siteCollection = new SPSite(parentURL))
    using (SPWeb parentWeb = siteCollection.OpenWeb())
    {
        // Find the requested template in the site collection's
        // template set (1033 = English LCID)
        SPWebTemplateCollection templates =
            siteCollection.GetWebTemplates(1033);
        SPWebTemplate siteTemplate = templates[siteTemplateName];

        // Create the new sub-site beneath the parent web
        parentWeb.Webs.Add(siteURLnode,
                    siteTitle, "", 1033,
                    siteTemplate, false, false);
        successStatus = true;
    }

    return successStatus;
}
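As a quick sanity check, a call might look like the sketch below. The URLs and URL segment are placeholders; "STS#0" is the name of the standard out-of-the-box Team Site template:

```csharp
// Create a "Projects" sub-site under an example site collection
// using the built-in Team Site template. All values are examples.
bool created = CreateSharePointSite(
    "http://moss/sites/demo",   // parent site collection URL
    "projects",                 // URL segment for the new sub-site
    "Projects",                 // site title
    "STS#0");                   // template name (Team Site)
```

Note that the method needs to run in a context that can load the SharePoint Object Model, i.e. on a machine in the farm, under an account with rights to create sub-sites.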

In fact, if you are going to be using the SharePoint Object Model for a medium to large project, I would really recommend ordering the Mindsharp SharePoint 2007 poster pack. It’s written by many experts and MVPs we know and love! You need to become a registered member of their site before you can order the posters. I’m UK based and it took a few weeks to arrive, but the office is now covered in useful material to help the developers navigate their way through the object model; an example is shown below in Figure 1. These posters are FREE if you live in the US or Canada!


Figure 1: an example Mindsharp poster

I think a good starting point is Microsoft itself, with its example ‘Creating a Custom User Site Provisioning Solution with Office SharePoint Server 2007’.

Good luck with your development, and please let me know if you have any pointers or best practice for me, and the SharePoint community.


Helpful tools for the MOSS Development Community

Thanks to bluephaze (SharePoint blog) I now know that upgrading to Visual Studio 2008 will be a really good idea to help keep our SharePoint customizations coming out as efficiently as possible… now I just need to get a version of K2 [blackpearl] that is supported on VS2008!

Here’s what we can look forward to in VS2008

Visual Studio 2008 Project Templates

  • Web Part
  • Team Site Definition
  • Blank Site Definition
  • List Definition
  • Empty SharePoint Project

Visual Studio 2008 Item Templates (items that can be added into an existing project)

  • Web Part
  • Custom Field
  • List Definition (with optional Event Receiver)
  • Content Type (with optional Event Receiver)
  • Module
  • List Instance
  • List Event Handler
  • Template

Once again, thanks to bluephaze!

