Determining Active Directory group membership from SQL Server 2005

Have you ever needed to determine an authenticated user’s Active Directory (AD) group memberships from within SQL Server?

It makes sense if you are able to utilise AD to decide a user’s role or permissions in an application. This reduces the time required to develop a custom permissions model, and allows the helpdesk, maintenance and operations teams to keep track of user privileges from one place.

It seems SQL Server 2005 does not have an out-of-the-box solution to this. Searching the various support forums and search engines led me down the managed .NET assembly route. Microsoft calls this building a database object with common language runtime (CLR) integration.

You may be asking yourselves… why not go down the SQL Server Integration Services route and gather an export from Active Directory? Why query LDAP each time a query in SQL Server is executed? These are important design decisions that really come down to the required functionality.

You can find out another way to achieve this here: How To Get Active Directory Users and Groups Using SQL Server Integration Services SSIS 2005

OK, so I am now going to build a CLR assembly in Visual Studio 2008 with the target framework set to .NET 3.5. SQL Server 2005 actually targets .NET 2.0, but 3.5 uses the same CLR engine, so we are able to make use of things like LINQ and additional Active Directory components [more on this later].

    namespace SpittingCAML.SQLCLR.DatabaseAssemblies
    {
        using System;
        using System.Collections.Generic;
        using System.Data.SqlTypes;
        using System.DirectoryServices.AccountManagement;
        using System.Linq;
        using System.Text;
        using Microsoft.SqlServer.Server;
        using System.Data;

        /// <summary>
        /// This class contains the AD integration methods (stored procedures).
        /// </summary>
        public class ADIntegration
        {
            /// <summary>
            /// Determines whether a user is a member of the supplied
            /// Active Directory group.
            /// </summary>
            /// <param name="user">the user to search for</param>
            /// <param name="domain">the domain to search</param>
            /// <param name="groupName">the group to check membership of</param>
            /// <returns>true if the user is a member, false if not</returns>
            [Microsoft.SqlServer.Server.SqlFunction]
            public static bool IsGroupMember(string user, string domain, string groupName)
            {
                bool isGroupMember = false;

                // can't proceed with the lookup if any supplied parameter is null or empty
                if (string.IsNullOrEmpty(user) || string.IsNullOrEmpty(domain) || string.IsNullOrEmpty(groupName))
                {
                    return isGroupMember;
                }

                using (var ctx = new PrincipalContext(ContextType.Domain, domain))
                {
                    using (var grp = GroupPrincipal.FindByIdentity(ctx, IdentityType.Name, groupName))
                    {
                        // recursive search: true also matches members of nested groups
                        isGroupMember = grp != null &&
                            grp
                            .GetMembers(true)
                            .Any(m => m.SamAccountName.ToLower() == user.ToLower());
                    }
                }

                return isGroupMember;
            }
        }
    }

Code Snippet 1 – the assembly that will be built into a DLL in Visual Studio 2008 (download a copy here: ADIntegration.cs)

The code snippet creates a class ‘ADIntegration’ in the ‘SpittingCAML.SQLCLR.DatabaseAssemblies’ namespace. This is important to remember, as it is required later when registering the assembly.

The functionality of this method is as follows:

  • User calls the method with the user name, domain and user group as parameters
  • System determines if supplied user is a member of the supplied user group for the supplied domain
  • System returns boolean true if user is a member and boolean false if not a member.
  • The method code is shamelessly stolen from StackOverflow, user huseyint. The rest is down to the style conventions of my organisation.

I have decorated the method ‘IsGroupMember’ with the [Microsoft.SqlServer.Server.SqlFunction] attribute to let SQL Server know it is intended to be used in place of a typical SQL Server function.

Once your assembly is built, it is almost ready to be put into a database of your choice as an object. You’ll notice we make use of LINQ and Directory Services; however, since SQL Server 2005 targets .NET 2.0, it is necessary to install some pre-requisites in order to gain access to .NET 3.5 features. It should also be noted that not all .NET 2.0 features are available in SQL Server 2005, as the code has to meet certain criteria to be included in the ‘safe’ category.

Read more about safe, unsafe and external access assemblies: here.

The pre-requisites required are:

  • CLR enabled must be set on your database server – this enables CLR assemblies
  • The target database must be set to TRUSTWORTHY to allow UNSAFE assemblies to be run against it.
  • System.Core [.NET 3.5 core components]
  • System.DirectoryServices.AccountManagement [.NET 3.5 Active Directory specific methods]
  • System.DirectoryServices.Protocols [.NET 2.0 dependency of System.DirectoryServices.AccountManagement]
  • System.DirectoryServices [.NET 2.0 dependency of System.DirectoryServices.AccountManagement]

The pre-requisites are typically found in the install folders for .NET 2.0 and .NET 3.5.

    -- must enable CLR on the database server
    EXEC sp_configure 'show advanced options' , '1';
    GO
    RECONFIGURE;
    GO
    EXEC sp_configure 'clr enabled' , '1'
    GO
    RECONFIGURE;
    GO

    DECLARE @dbName NVARCHAR(255)
    SET @dbName = '<database name here>'
    DECLARE @sql NVARCHAR(4000)

    SET @sql = 'USE ' + @dbName;

    -- must make the database trustworthy
    -- NB: the USE above only applies within sp_executesql's own batch, so run
    -- this script while connected to the target database
    EXEC sp_executesql @sql
    EXEC sp_changedbowner 'sa'
    SET @sql = 'ALTER DATABASE ' + @dbName + ' SET TRUSTWORTHY ON;'
    EXEC sp_executesql @sql

    GO

    DECLARE @DotNet2Folder NVARCHAR(1000)
    DECLARE @DotNet35Folder NVARCHAR(1000)

    SET @DotNet2Folder = 'C:\Windows\Microsoft.NET\Framework\v2.0.50727'
    SET @DotNet35Folder = 'C:\Program Files\Reference Assemblies\Microsoft\Framework\v3.5'

    -- if the [System.Core] assembly exists then drop it
    IF EXISTS (SELECT * FROM sys.assemblies WHERE [name] = 'System.Core')
        DROP ASSEMBLY [System.Core]

    -- if the [System.DirectoryServices.AccountManagement] assembly exists then drop it
    IF EXISTS (SELECT * FROM sys.assemblies WHERE [name] = 'System.DirectoryServices.AccountManagement')
        DROP ASSEMBLY [System.DirectoryServices.AccountManagement]

    -- if the [System.DirectoryServices.Protocols] assembly exists then drop it
    IF EXISTS (SELECT * FROM sys.assemblies WHERE [name] = 'System.DirectoryServices.Protocols')
        DROP ASSEMBLY [System.DirectoryServices.Protocols]

    -- if the [System.DirectoryServices] assembly exists then drop it
    IF EXISTS (SELECT * FROM sys.assemblies WHERE [name] = 'System.DirectoryServices')
        DROP ASSEMBLY [System.DirectoryServices]

    CREATE ASSEMBLY [System.Core]
    FROM @DotNet35Folder + '\system.core.dll'
    WITH permission_set = UNSAFE

    CREATE ASSEMBLY [System.DirectoryServices.Protocols]
    FROM @DotNet2Folder + '\System.DirectoryServices.Protocols.dll'
    WITH permission_set = UNSAFE

    CREATE ASSEMBLY [System.DirectoryServices.AccountManagement]
    FROM @DotNet35Folder + '\System.DirectoryServices.AccountManagement.dll'
    WITH permission_set = UNSAFE

    CREATE ASSEMBLY [System.DirectoryServices]
    FROM @DotNet2Folder + '\System.DirectoryServices.dll'
    WITH permission_set = UNSAFE

    GO

Code Snippet 2: running the above T-SQL script enables CLR on the database server, ensures the database you are going to be using assemblies in is trustworthy, and registers the pre-requisite components. (download a copy here: ActiveDirectory_PreReqs.sql)

If you make use of the code snippet you’ll need to supply your database name and the locations of the .NET 2.0 and .NET 3.5 DLLs. The pre-requisites are installed in UNSAFE mode since they contain code that is not certified safe; if you try to register the assemblies as SAFE, you will be presented with an error listing the areas of the code that fail verification.
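Once Code Snippet 2 has run, a quick way to confirm what was registered – and with which permission set – is to query the sys.assemblies catalog view (this is just a sanity check, not part of the deployment):

```sql
-- List registered assemblies and their permission sets
SELECT name, permission_set_desc
FROM sys.assemblies
ORDER BY name;
```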

We are now ready to deploy our assembly that contains the code in Code Snippet 1.

    DECLARE @dbName NVARCHAR(255)
    SET @dbName = '<database name here>'

    DECLARE @assemFolder NVARCHAR(1000)
    SET @assemFolder = '<assembly folder here>'

    -- have to drop the function first because you can't delete the assembly
    -- when the function is dependent on it
    IF EXISTS (SELECT *
           FROM   sysobjects
           WHERE  name = N'udf_IsGroupMember')
        DROP FUNCTION dbo.[udf_IsGroupMember]

    -- if the [SpittingCAML.SQLCLR.DatabaseAssemblies] assembly exists then drop it
    IF EXISTS (SELECT * FROM sys.assemblies WHERE [name] = 'SpittingCAML.SQLCLR.DatabaseAssemblies')
        DROP ASSEMBLY [SpittingCAML.SQLCLR.DatabaseAssemblies]

    CREATE ASSEMBLY [SpittingCAML.SQLCLR.DatabaseAssemblies]
    FROM @assemFolder + '\SpittingCAML.SQLCLR.DatabaseAssemblies.dll'
    WITH permission_set = UNSAFE

    GO

    /********************************************************************************
    *
    * Name: [udf_IsGroupMember]
    *
    * Description:
    *
    *       Function determines if the supplied user is in an Active Directory group
    *
    *       NB: this is a MANAGED CODE function utilising the
    *           SpittingCAML.SQLCLR.DatabaseAssemblies assembly class
    *
    * Revision History:
    *
    * Version Date         Author(s)       Description
    * 1.0     08 Dec 2009  Martin Allen    Initial Release
    *
    ********************************************************************************/
    CREATE FUNCTION [udf_IsGroupMember]
    (
     @user nvarchar(1000),
     @domain nvarchar(1000),
     @groupName nvarchar(1000)
    )
    RETURNS BIT
    AS EXTERNAL NAME [SpittingCAML.SQLCLR.DatabaseAssemblies].[SpittingCAML.SQLCLR.DatabaseAssemblies.ADIntegration].[IsGroupMember]

    GO

Code Snippet 3: deploying our assembly and creating a stub that allows standard SQL stored procedures and functions to call our assembly method as if it were a standard SQL function. (download a copy here: Deploy_Assembly.sql)

You will need to edit Code Snippet 3 with the database name and the folder to which you built the assembly [created in Code Snippet 1].

Once you have run this script, you are in a position to test it.

    DECLARE @user nvarchar(1000)
    DECLARE @domain nvarchar(1000)
    DECLARE @groupName nvarchar(1000)

    -- TODO: Set parameter values here.
    SET @user = 'testuser'
    SET @domain = 'mydomain'
    SET @groupName = 'mygroup'

    SELECT [dbo].[udf_IsGroupMember](@user, @domain, @groupName)

Code Snippet 4: testing the deployment was successful. (download a copy here: Test_Assembly.sql)

Set the parameters to something more meaningful to your Active Directory, and supply a valid domain and give it a go.

Now we have a useful function to utilise alongside standard T-SQL functions. I use it to replace legacy function calls to proprietary permission models to reduce maintenance costs!
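For example, a hypothetical query gating rows on AD group membership might look like this (the table and column names are made up purely for illustration):

```sql
-- Illustrative only: dbo.Orders and its columns are hypothetical; the point is
-- that the CLR function slots into ordinary T-SQL like any scalar function
SELECT OrderId, OrderTotal
FROM dbo.Orders
WHERE dbo.udf_IsGroupMember('testuser', 'mydomain', 'FinanceApprovers') = 1;
```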

Hope this helps you out! Enjoy :-)


Further reading on SQL Server Assemblies

InfoPath forms with file attachments: one potential solution, for all InfoPath forms?

InfoPath is a pain in the backside to me and my colleagues. Our users knock up a variety of forms and then ask us to support them when they go wrong, or don’t do what they need. Just to add a little more to that, we only have InfoPath 2003 on the client machines, which means we are working with legacy forms. Currently, our production systems run SharePoint 2007.

We had one such problem arrive on my desk a few weeks back. It seemed that a form was put together by another department that had around 20 (yes, twenty!) file attachment controls on it. Each instance of this form would be a ‘case’ that would be under version control in its destination form library. I was shown some examples, and we had some InfoPath forms reaching upwards of 100 MB in size. We weren’t all that surprised that the more remote locations that my company occupies had issues reading these forms over our aging network infrastructure. They also have thousands of cases, so we would be looking at version controlling large files, and lots of them. We needed to avoid this, just to ensure we are being sensible with our backups!

How to fix this? Well, you could add code to the InfoPath form template that uploads attached documents to a SharePoint document library. An excellent approach is devised here: How to upload a document to SharePoint from an InfoPath form. There are numerous others dotted around the internet similar to this… but what if you want to let the novice users get on with it, without having to train them on how to hack code behind the templates?

I think there may be a solution to this, but believe me, it makes your eyes water when you look at how we have achieved it. It does look like an abomination… but it works, so I thought I’d blog about it and get some opinion from others who may have this issue in their organisation.

Why this solution is different:

  1. No code is required behind the InfoPath template
  2. Works with InfoPath 2003/2007 (untested with 2010)
  3. Works with any InfoPath template with only minor modifications required (as explained in this article)
  4. Uses the HTTP submission method
  5. Destination of file attachments is dynamic, users can alter this using standard InfoPath rules
  6. It requires a custom web handler (deployed to the GAC, deployed as a solution in the SharePoint farm)
  7. Removes attached files and replaces them with a URL file, to allow the user to click on it and open the file from the SharePoint document library
  8. I’ve only tried this in SharePoint 2007, but it should work with SharePoint 2010.
  9. This is a deploy and forget solution – no costly maintenance overhead, just some very simple guidance.

Known limitations:

  1. Only works with HTTP submit, with the built-in submit button (if you know how to remove this limitation, let me know!)
  2. Doesn’t permit folders inside the Form Library [it will always submit to the root folder]
  3. Users are warned that they may be opening a virus when they click on a URL link inside InfoPath – annoying!

Before I begin, acknowledgements must go to:

  • My fat blog – validation of allowed file extensions
  • Adis Jugo – validation of filenames to ensure SharePoint compatibility
  • Microsoft KB892730 – decoding, and encoding InfoPath attachments
  • Cyanwerks – creating URL files using the URL file format
  • … the man with the plan [who came up with this hare-brained scheme in the first place, because it wasn’t me!]

The rest of the code is purely draft, non-production level stuff.

How it works:

  1. InfoPath form is submitted to a web handler [I have coined it the InfoPath file extractor (IFE)] (ashx) that is running in the SharePoint application pool with full trust
  2. InfoPath form uses query string arguments to tell the handler where to look for the required fields.
  3. InfoPath form has a field group and a number of fields to configure the extraction process
  4. Web handler traverses the submitted InfoPath form looking for base64 encoded attachments
  5. Each attachment is decoded and uploaded to SharePoint Document Library (with folder creation) based upon the configuration set in InfoPath
  6. Each attachment is replaced with a URL file linking to the uploaded file
  7. InfoPath form (with attachments replaced with URL files) is then persisted in a form library
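Steps 4 and 5 hinge on the attachment encoding described in Microsoft KB892730. As a rough sketch only (this is not the production code – the variable names are illustrative, and the header offsets should be double-checked against the KB article), decoding a single attachment looks something like this:

```csharp
// Sketch of decoding one InfoPath attachment field, per my reading of KB892730
// (requires System and System.Text). 'base64Value' is the text content of an
// attachment node in the submitted form XML.
byte[] data = Convert.FromBase64String(base64Value);

// The fixed header is 24 bytes; the DWORD at offset 20 holds the file name
// length in characters, including the null terminator.
int fileNameChars = BitConverter.ToInt32(data, 20);
int fileNameBytes = fileNameChars * 2; // file name is UTF-16

// Read the file name (dropping the trailing null), then copy out the payload.
string fileName = Encoding.Unicode.GetString(data, 24, fileNameBytes - 2);
byte[] content = new byte[data.Length - 24 - fileNameBytes];
Buffer.BlockCopy(data, 24 + fileNameBytes, content, 0, content.Length);

// 'content' can now be uploaded to the document library, and the attachment
// node replaced with a generated URL file pointing at the uploaded document.
```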

Running Example [code available to download at the end of this article]:

First, use stsadm to deploy the wsp file (Visual Studio generates this when you build the project – see the post-build events; by default it uses c:\wsp as the destination), as shown in Figure 1.

stsadm -o addsolution -filename c:\wsp\ife.wsp

Figure 1 – STSADM successful deployment

Add the solution to your web farm using SharePoint Central Administration. When you open Solution Management, it will be shown as not deployed. See Figure 2.

Figure 2 – Solution not yet deployed

Click on ife.wsp, and then deploy it to all the content web applications (or perhaps you wish to be more selective?)

Figure 3 – Deploy the solution

The web handler has now been deployed to your web farm. The code will now be in the GAC, and the web handler access will be from the 12 hive: TEMPLATE\LAYOUTS\SC\DEMO\SERVICES\IFE.ashx for all websites.

An example URL would be: http://moss/_LAYOUTS/SC/DEMO/SERVICES/IFE.ashx

Create a demo SharePoint site with one document library and one form library. I used a blank site as the template for this example and did the rest by hand.

My example demo site looks like:

Figure 4 – Demo site

I need to reference these URLs later in this working example:

The site URL is:  http://moss/ifetest
Form library URL is: http://moss/ifetest/FormLibrary
Document library URL is: http://moss/ifetest/ExtractedFiles

Now create an InfoPath form…

I used the Asset Tracking example form (as supplied with InfoPath 2007) and slightly modified it to include a file attachment control in the repeating group.

Figure 5 – updated the Asset Tracking example form to include file attachment

Now modify the submit options for the form by selecting submit options from the tools menu (as shown in Figure 6)

Figure 6 – Updating submit options

The IFE handler requires two query string arguments:

  • RootGroup – the root field in the form. For the asset tracking example, this is ‘assetTracking’
  • IFEGroup – the root field for the IFE configuration fields. In this worked example, I am going to call this root field ‘IFE’.

This means the HTTP submission will look like: http://moss/_LAYOUTS/SC/DEMO/SERVICES/IFE.ashx?RootGroup=assetTracking&IFEGroup=IFE

Now we need to set up the configuration fields in the InfoPath form. Create a new group, as shown in Figure 7. We call this IFE, as this is how we defined it in the query string.

Figure 7 – Creating the IFE group of fields for configuration of the web handler.

In this group we now need to set up a number of fields. These fields should match those in your configuration. The ones here are only set to match my server.

  • SiteURL – SharePoint Site URL
  • FormLibraryURL – InfoPath form library URL
  • DocumentLibraryURL – SharePoint Document library URL
  • DocumentLibraryFolder – the folder inside the document library where extracted files will go (I’ve concatenated two fields together from the Asset Tracking form)
  • InfoPathSaveAsFileName – the name that the form will be saved as (I’ve concatenated two fields together, and added ‘.xml’ to the end for it to be a valid InfoPath filename)
  • OverwriteFiles – whether the handler should overwrite existing files if they have the same name. This is a boolean string {“True” or “False”}.

You then set each value via the ‘default value’ property of the InfoPath fields. Fields such as DocumentLibraryFolder and InfoPathSaveAsFileName should be dynamically driven by content on the InfoPath form, as shown above.
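For illustration, the IFE group in the form’s underlying XML might end up looking something like this (the shape is a hypothetical sketch; the URLs are the ones from this worked example and the remaining values are made up):

```xml
<!-- Hypothetical sketch of the IFE configuration group; field names match
     the list above, values are illustrative -->
<IFE>
  <SiteURL>http://moss/ifetest</SiteURL>
  <FormLibraryURL>http://moss/ifetest/FormLibrary</FormLibraryURL>
  <DocumentLibraryURL>http://moss/ifetest/ExtractedFiles</DocumentLibraryURL>
  <DocumentLibraryFolder>Smith-Accounts</DocumentLibraryFolder>
  <InfoPathSaveAsFileName>Smith-Accounts.xml</InfoPathSaveAsFileName>
  <OverwriteFiles>True</OverwriteFiles>
</IFE>
```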

You should now have an InfoPath data source like this:
Figure 8 – The asset tracking Form, with the configuration fields in place

You can now demo this setup by previewing the form.

Figure 9 – Preview the form

Enter a name and department (since these are now required to produce the folder and the InfoPath save filename). Add an asset with a file attachment, and hit the submit button, as shown in Figure 10.

Figure 10 – form filled in

When submit is hit, you may get a security warning. Hit ‘Yes’ to continue if you do.

Figure 11 – Ignore the security warning

When the form is submitted, you should see a dialog like the one in Figure 12.

Figure 12 – successful submission dialog

If you select the ‘Show Details’ button, you will see feedback messages from the web handler. As shown in Figure 13.

Figure 13 - Feedback window

Now let’s see if it worked!

The form library should now contain the new form, and the document library should have a new folder and an extracted file!


Figure 14 – Form library with our new form!

Figure 15 – Document library with new folder

Figure 16 – Extracted document now exists in the folder

In order for forms to be updated from the form library, we now need to publish the form template to the Form Library in the standard way.

Figure 17 – Publish the form to the Form Library

We can now click on the form we created earlier to reveal the attachments have been replaced with URL files.

Figure 18 – click on form in form library

Figure 19 – Document has been replaced with a link to the document in the Document Library!!!

The web handler handles updates to this form by ignoring the already-converted attachments, meaning we have created a self-managing solution that stops InfoPath forms from becoming an unmanageable size – and we don’t even have to train the users to write code behind their forms!

Let’s update the form with a few more different attached files.

Figure 20 - Adding more assets to the asset tracking form means more opportunities to add file attachments

And the Document Library now looks like:

Figure 21 – more extracted files

OK, so you now want to give it a go for yourself:

Download the source code: here.

If you do use this in your production environment, or make improvements/modifications to it, I’d like to hear from you! Also, please credit my involvement if it has saved you time, money or effort :-)

I make it available free of charge, with no warranty provided or implied. If you can’t work out how to build/deploy it, please ask on the various forums around the internet – I am not a SharePoint complete beginner support service :-D



Counter the counterfeit flash drive

I’ve only just discovered this useful utility called H2testw 1.4.

So if you’ve bought a flash drive from eBay that looks a little dodgy, check it out.

A quote from the website:

We give H2testw 1.4 the highest rating and recommend it for testing counterfeit USB Flash Drives for the following reasons:

  1. Easy to Use
  2. While written in German, an English-language mode is available
  3. Stand alone executable file – no installation required to run it
  4. You do not need administrator privileges on a computer to run it
  5. Tested to work on 1.1 and 2.0 USB drives
  6. Tested to work on 1.1 and 2.0 USB Ports
  7. Tested to work to analyze drives advertised as 4GB, 8GB, 16GB, 32GB and yes …64 GB capacity.
  8. Reports the seen capacity – what the operating system reports the size to be
  9. Will write 1 GB files up to the reported size – requiring no work on your part except patience if it is a large drive and a slow computer
  10. Will read all the files it wrote and verify them
  11. Will produce a report: a) short if all is well, b) detailed if issues are found
  12. The program is offered free

Visit: here


O2 Joggler

I’ve just taken delivery of an O2 Joggler. They are reduced to £49.99 (from £99.99) till the end of this month, but it seems I got one of the last ones as they are now out of stock!

Figure 1 – the Joggler running the default OS

Figure 2 – shows the fixed stand (it can’t be adjusted)

Like many people, I was originally very confused by the marketing slant put on it by O2 when it was first released last year. I certainly was not prepared to be an early adopter when it was priced at £149.99, since it was a solution looking for a problem that never really existed. At £50, it is a bargain, since you can install netbook flavours of Ubuntu, and the development community are working on Android installations.

First and foremost, this device does NOT have a battery, so it is fixed in place once you plug it in. It is also NOT a phone; it does not have a SIM card slot. This device relies on you having your own internet provider in your house. Connectivity is either Ethernet (it comes with an Ethernet cable) or Wi-Fi.

Even with this in mind the specification is rather good for an item that looks so refined. It reminds me of an Apple product when I look at it.

  • Connectivity: Wi-Fi
  • Screen: 7 inch Touchscreen
  • Resolution: 800 x 480 pixels
  • Memory: 1 GB onboard, expandable via USB.
  • Power: Mains only.
  • Video formats: MPEG-2, MPEG-4 part 2, MPEG-4 part 10 (H.264), WMV 9, DivX, Vc1, FLV
  • Audio formats: MP3, WAV, WMA
  • Size: 180 x 130 x 115mm.
  • Weight: 700 grams

There is an O2 app store where you can download new apps direct to your joggler, however there aren’t that many yet!

I think the stock OS is really pretty! It is also surprisingly usable. The on screen keyboard is also great compared to the abominations I’ve had to endure on various touch screen phones.

The stock OS is Flash based. All developed apps that run on the stock OS are Flash applications. This has advantages and disadvantages. The advantage is that pretty much anyone can create something. The main disadvantage is that we are not able to develop internet browsers, since Flash does not support that.

However, if you really must have all that….

Custom OS installations are performed via a USB stick with a flavour of Linux on it, and it is possible to use a keyboard if you connect it to the Joggler using a USB hub.

I can understand why this device never really took off, but at bargain prices it will be snapped up for its potential!

Useful links:

Development / Information Forum (not official O2)
Joggler help (official O2)
Pocket-lint Joggler review
Hack the Joggler

I look forward to using my Joggler to keep track of my ever changing calendar (oh, and SWMBO’s calendar too :-))… well, at least until I get bored of it, then I can have a play with some custom apps and maybe even try using it with Linux.


Importance of web page look ‘n’ feel

The look ‘n’ feel of your website is important. BUT, it is less important than the text-based content. In most commercial websites, the role of the traditional graphic designer is relatively minor. The role of the information architect is central.

This article focuses on look and feel.

  • “To look good is to be good - that’s the primary test when people assess a Web site’s credibility” B.J. Fogg, Ph.D (Stanford University 2002) [link]
  • “Uniformity is an inherent part of a usable web site design” – Sigma Infotech [link]
  • “Complex and beautiful may win awards, but ugly and simple might just win the marathon.” – Gerry McGovern [link]
  • “Consistency is one of the most powerful usability principles”, “users spend most of their time on other websites.” – Jakob Nielsen [link]
Figure 1 – Scott Adams ‘Dilbert’ on web design (lifted from here)


Do:

  • Ensure page layout and content style is part of the design
  • Decide on tone, phrasing and naming conventions for all language used on the site  
  • Decide on the page flow and use the same flow for all pages
  • Template as much of the layout as possible (e.g. Master page)
  • Use cascading style sheets (CSS)
  • Create reusable page components (e.g. User Controls / Server Controls)
  • Seek the advice of an imagery expert when using graphics / icons


Don’t:

  • Design as you go
  • Implement each page with no regard to how other aspects of the application work
  • Recreate components that have already been written for other parts of the application
  • Use inline styles, unless there is a good reason
  • Confuse the user with poor use of language / symbols
  • Resize, stretch, crop or distort images when displaying them as part of your application (unless this is the purpose of the application)

There are several other key elements that shouldn’t be neglected in the design phase of a project.

  • Ensure consistent feedback is given to the user (in terms of error, success messages)
  • Adopt the keep it simple stupid (KISS) approach to design
  • Ask non developers to test your application – usable web pages don’t require a manual to operate them
  • If you need to use a picture, get it sized and formatted for web site usage

Further reading:
9 Essential Principles for Good Web Design


Report Viewer Control fails to render correctly in IE8

One of our more widely used applications uses the Report Viewer Control to render SQL Server Reporting Services 2005 reports.

We are migrating to IE8 in the near future, so it was up to the various development teams to ensure compatibility. It should have been oh so easy :-)

Figure 1 shows the Report Viewer Control rendering correctly: a twenty-page report is produced, and scrolling to the bottom of the control works as expected.

Figure 1- Application renders correctly in IE6

Figure 2 shows the same page and content in IE8. All looks good until you scroll to the bottom of the report (shown in Figure 3).

Figure 2 - All looks okay in IE8…. however….

Figure 3 - Scroll bar not rendered correctly [circled in red], page number footer missing (it is visible in Figure 1 when rendered in IE6)

There is a workaround for this particular issue. It involves the DOCTYPE markup that you may have in your master page or each individual page.

You can read more information about DOCTYPES at Holly Bergevin’s page: DOCTYPE at Community MX

Removing it completely from your page will make the browser work in ‘quirks’ mode. You’ll find that enabling ‘quirks’ mode will ensure the report is rendered correctly. See MSDN social.

If, like me, you’ve got your DOCTYPE in your master page, and you’d really like to keep your XHTML-compliant markup in place for the majority of your pages, it is a little more tricky. Either code the DOCTYPE in each page (removing it from your master page! yuck!)… or find another workaround.
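For clarity, the DOCTYPE in question is the declaration at the very top of the page or master page – deleting a declaration like the one below (a typical XHTML 1.0 Transitional DOCTYPE; yours may differ) is what drops IE into quirks mode:

```html
<!-- Removing a declaration like this one forces the browser into quirks mode -->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
```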

I have searched in vain for a proper solution. If you can help, please comment on this article. It seems to be a known bug!


Massive SharePoint config database log file

It’s a common problem that many of us have. You set up your SharePoint farm, and it’s been running sweet as a nut, until you get asked one difficult question by the server team: why is the SharePoint configuration database log file so large?

SharePoint 2007 creates its databases in FULL RECOVERY mode. This is presumably because the SharePoint central administration pages were going to offer a more comprehensive recovery option; however, I’ve only ever seen a full database restore. Even though the built-in GUI doesn’t make use of the FULL RECOVERY model, I don’t think we should discount it.

There are two (nice) ways of coping with the log growth issue;

  1. You use your own backup strategy [NB this means you can’t use the built-in restore option anymore], e.g. a scheduled SQL job to back up the databases and logs yourself. When you do this, you have the option to use WITH TRUNCATE_ONLY, or…
  2. Use the built-in SharePoint backup/restore service from Central Admin, and schedule log truncations at defined intervals (see the snippet of code below, shamelessly stolen from the Microsoft DBCC SHRINKFILE tutorial)

-- Example: truncating a config db log file to 70 MB
BACKUP LOG [SharePoint_Config] TO DISK = 'drive:\path\Yourbackupfile.bak'
USE [SharePoint_Config]
DBCC SHRINKFILE (N'SharePoint_Config_log', 70)

-- Note: the 70 is the target size in MB.
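Option 1 above, as a scheduled job step, might look something like this. This is only a sketch: the backup paths are placeholders for your own, and you would normally wrap this in a SQL Server Agent job with your own schedule and retention policy.

```sql
-- Sketch of a scheduled job step implementing option 1 above.
-- Backup paths are placeholders; substitute your own locations.

-- Periodic full backup of the configuration database.
BACKUP DATABASE [SharePoint_Config]
    TO DISK = 'D:\Backups\SharePoint_Config_full.bak'
    WITH INIT;

-- More frequent log backups: backing up the log is what allows
-- SQL Server to reuse (and so stop growing) the log file.
BACKUP LOG [SharePoint_Config]
    TO DISK = 'D:\Backups\SharePoint_Config_log.trn';
```

The key point is that in FULL recovery mode the log only stops growing once it is backed up; shrinking without a log backup strategy just postpones the problem.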

There is an excellent article on the Death By Patterns website about log file management. It gives you an overview of why logs grow, and how they work in practice.

It should be noted that in more than a few places on the net, people recommend changing the database recovery model from FULL to SIMPLE in order to perform a DBCC SHRINKFILE. Whilst this is a simple and effective solution, it takes away a crucial capability from administrators of a live SharePoint farm: if something goes wrong during the day, before a backup takes place, the FULL model gives you the ability to perform a point-in-time restore from the transaction log.
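For reference, the FULL-to-SIMPLE approach people recommend typically looks something like the sketch below (database and file names are the SharePoint defaults; adjust for your farm). The comments spell out exactly what you give up.

```sql
-- The commonly recommended (but lossy) approach: switching to SIMPLE
-- breaks the log backup chain, so you lose point-in-time restore
-- until the next full backup is taken.
ALTER DATABASE [SharePoint_Config] SET RECOVERY SIMPLE;

USE [SharePoint_Config]
DBCC SHRINKFILE (N'SharePoint_Config_log', 70);

ALTER DATABASE [SharePoint_Config] SET RECOVERY FULL;
-- Take a full backup immediately afterwards to restart the log chain.
```

If you do go this route, the final full backup is not optional: without it, subsequent log backups will fail and you have no recovery point.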

See Sherin George’s blog for an excellent overview of the recovery models available in SQL Server.

See Server Fault discussion on SharePoint recovery models.


Bug introduced in Windows NT 3.1 (1993) still affecting all subsequent releases of Windows!

Since NT was introduced, pure command-line DOS was replaced with the Virtual DOS Machine (VDM), which allows legacy DOS and 16-bit Windows applications to run on top of NT (all variants), XP, Vista and Windows 7.

It would seem the VDM engine has a major flaw!

Read more at Packet Storm Security.

A summary of the issue is that it is possible for a limited user (i.e. a non-administrator user) to gain administrative privileges via the VDM.

A workaround is to disable 16-bit applications as part of the Active Directory policy for your domain.


IT in 2010… is it going to be as we predicted it?

With the new year almost upon us, I’ve been trying to theorise what skills will be required of developers in the next four years.

It is important to take stock at the end of each year so you can offer training and support to the development team, helping them stay productive with new technologies and tools.

As a developer myself, do I need to worry about keeping my job… ?

Ellen Fanning from Computer World, back in 2006, predicted that outsourcing and the need to be business savvy were major threats to IT workers.

John Newton from the CIO weblog (also from his own blog), back in 2007, predicted that content management would be improved and delivered in more human friendly ways. Business computing would shift to Blackberry type devices. User Interface design would be improved and take ideas from the gaming market.

There are a number of existing/emerging technologies that will impact my organisation in the near future (… I’m well aware that we are behind the curve on most of these, but give us a chance, and we’ll try to keep up :-)). They are:

  1. SharePoint 2010
  2. .NET 4.0
  3. ASP.NET Model View Controller 2
  4. jQuery
  5. Silverlight
  6. Windows Presentation Foundation

I’m sure there are many others.

We’ll also be trying to maximise productivity with our existing tools, such as the K2 BlackPearl / BlackPoint workflow suites.

It has been a difficult time for many IT workers: when a business looks at what it can cut out of the budget, it usually means laying off staff, or reducing investment in their IT systems. Hopefully we can take heed of an article from Judi Hasson, Fierce CIO, who writes that IT is the key to recession recovery. Let’s hope so!

Merry Christmas and a happy new year to all :-D


Why should the FQDN make a difference when using Integrated Security authentication?

We run a few internal applications that are addressed using an FQDN:

The applications are also available through the server name:


We’ve had a strange issue with one of our applications that requires ‘Integrated Security’ authentication.

A few of our users, who run IE6 (because that’s what they are forced to use) get prompted for credentials.

You’d assume that since IE knows who the user is, it would simply provide those credentials to the application, and it would allow access.

I’ve done a little digging and it would seem we are not the only people with this issue.

It would seem that this is a browser issue, rather than an application issue:

Best explanation award goes to Windows IT Pro:

It would seem the only short-term solution is to provide the fix to the user community before we can update group policy (if this is possible!).