Automating SharePoint Build and Deployment–Part 3

22 November 2012

This is the third post in a multipart series about automating the build and deployment of SharePoint solutions.  The other posts are:

  1. Introduction
  2. The Deployment Package

In this post we will look at the build itself.  Again, as in the rest of this series, this post is going to focus on the principles behind this task and how we went about it.  It is not going to give our actual code, though a few illustrative sketches are included.

Why?

If we are building SharePoint solutions, surely a quick build from the Visual Studio environment, and a package from there, will produce the correct output for us.  Why should we go through the pain of automating the build process?

That is a very good question.  Here are a few reasons:

  • Ensure that the code can be built consistently from any development machine.
  • Ensure that everything required for the build is in the source control repository.
  • Ensure that what is in source control builds.
  • Ensure regular testing of the build process.
  • Ensure rapid resolution of a broken build.

But of course it comes back down to the big three we mentioned in a previous post:  Simplify, Repeatable and Reliable.

Simplify – Because in this case we are simplifying the work we need to do, and the associated documentation required.

Repeatable – Because we need to be able to repeat the process.  Maybe the solution won’t be built again for a year, but we need to ensure that when that time comes we can do it.

Reliable – Because as long as it builds, the output is known to be built the same way, and in the same order.

But the most important reason of all: it is not hard to set up.

What?

The build process only needs to do one thing: turn the source code into the deployment package we talked about in the previous post.

One of the advantages of automating everything is that we can ensure that all parts in the process are tagged and labelled so that we can find the exact code that was used to generate the components running in production. 

In order to do that we are going to need the following actions performed:

  1. Automatically increment a version number.
  2. Label the source code with that version number.
  3. Stamp all components with the build name and version number.
  4. Build the source code.
  5. Package the built binaries into WSP packages for SharePoint.
  6. Package the WSPs and our deployment scripts into the deployment package.

That seems like a lot, but as you will see most of that can be performed by the Team Foundation Build services.

How? – Building the solution

Team Foundation Server (and the service offering) makes this a really easy task.  From Team Explorer just expand the build node, right-click, select “create a new build definition…” and follow the wizard.

There are a few extra things that you will need to do to build SharePoint projects though:

  1. SharePoint assemblies need to be on the build agent.  Chris O’Brien has this covered in his post Creating your first TFS Build Process for SharePoint projects.  While it is not recommended to install SharePoint on the build server, we have installed the binaries on the server but NOT configured it as part of a farm, which would kill the performance of the server.  We also install Visual Studio, but again this is not necessary or advisable.  I suggest that you follow How to Build SharePoint Projects with TFS Team Build on MSDN, as that provides good alternatives and clearly lays out the process.
  2. /p:IsPackaging=True needs to be added to the MSBuild arguments parameter so that TFS will tell MSBuild to also perform the SharePoint packaging tasks and create the WSPs.
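
For illustration, the equivalent command-line invocation looks something like this (the solution name and MSBuild path are illustrative):

```powershell
# Build the solution and produce the WSPs in one step.
# /p:IsPackaging=True triggers the SharePoint packaging targets.
& "$env:WINDIR\Microsoft.NET\Framework64\v4.0.30319\MSBuild.exe" `
    .\MySolution.sln /p:Configuration=Release /p:IsPackaging=True
```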

How? – Versioning

Every .NET assembly contains two version numbers: the assembly version, which is used in the strong name and therefore ends up bound into a lot of the element.xml files; and the file version number, which is just that, a number.

Therefore for our purposes the file version number is adequate to track the running components back to the source code, not to mention less complex to implement.

We will also need to use the version number as part, or all, of the build number and ensure that the source code repository is labelled with the build number as well.  Fortunately TFS Build already performs these actions for us.

Chris O’Brien has a simple workflow extension, and instructions, based on another derivative work.  For further information and for how to extend the TFS build template to include this versioning see part 6 of his continuous integration series.
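
As an illustration of what such a versioning step boils down to, here is a minimal sketch (not Chris O’Brien’s actual workflow activity; the build number format is an assumption):

```powershell
# Stamp the version from a TFS build number like "MyProduct_1.0.0.42"
# into every AssemblyInfo.cs before compilation.
param([string]$BuildNumber, [string]$SourcesDirectory)

$version = ($BuildNumber -split '_')[-1]   # assumed format: Name_Major.Minor.Build.Revision

Get-ChildItem $SourcesDirectory -Recurse -Filter AssemblyInfo.cs | ForEach-Object {
    (Get-Content $_.FullName) `
        -replace 'AssemblyFileVersion\("[^"]*"\)', "AssemblyFileVersion(""$version"")" |
        Set-Content $_.FullName
}
```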

How? – Packaging

Once the build has done the work of building the assemblies and the SharePoint packages the next step is to package these artefacts into the deployment package we mentioned in the previous post.

The team I was part of did this in the old TFS 2005 format (which used MSBuild) by extending the PackageBinaries target.  In addition we were able to separate the definition of the package from the creation of the package by using MSBuild include files.  This made the solution incredibly easy to implement and highly reusable, even though it is in the MSBuild format.

To integrate this with the newer TFS build workflows we just need to modify the build workflow to call this target at the appropriate place, after the components have been built.

The process for packaging is really quite simple:

  1. Build the package in a staging directory structure by copying the output files from the appropriate places.
  2. Zip up the staging directory.
  3. Ensure that the zip file is included as a build output.
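
A minimal sketch of those three steps ($binDir, $srcDir, $dropDir and $version are hypothetical variables the build workflow would supply):

```powershell
# 1. Build the package in a staging directory from the build outputs.
$staging = Join-Path $env:TEMP 'PackageStaging'
New-Item "$staging\WSPs" -ItemType Directory -Force | Out-Null
Copy-Item "$binDir\*.wsp" "$staging\WSPs"
Copy-Item "$srcDir\Deployment\*" $staging -Recurse -Force

# 2. Zip up the staging directory (ZipFile needs .NET 4.5; an older
#    server would shell out to a zip tool instead).
Add-Type -AssemblyName System.IO.Compression.FileSystem
[IO.Compression.ZipFile]::CreateFromDirectory($staging, "$dropDir\Deploy_$version.zip")

# 3. Anything written to the drop location is picked up as a build output.
```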


Last words

So now we have talked about the overall strategy, the deployment package and how we create the deployment package.  In the next post we will tie all the parts together to show how we can get all the way from the code to the running SharePoint server, every, or almost every, time that someone checks in some code.


Automating SharePoint Build and Deployment – Part 2: Deployment

1 November 2012

In the first part of this series we introduced you to the concept of automated deployment and the three parts of our build and deployment framework: Build, Package and Deploy.

This post is about the Deploy process, what it looks like and what we learnt.

Advantages

While SharePoint has a mechanism for deploying solutions and features onto the platform, there are many advantages to automating the deployment of solutions into SharePoint.  Let’s have a quick look at what these advantages are:

1. Removal of manual steps

Installing a solution into SharePoint requires:

  • The installation of the solution (WSP) itself, from a command line using either STSADM or PowerShell.
  • The deployment of the solution, from PowerShell or Central Administration.
  • The activation of the features, at farm, web application, site collection or site level, using a combination of site administration, Central Administration or PowerShell.

Thus a standard deployment can have pages of manual steps to be followed.  But as we can perform all of these tasks using PowerShell, we should be able to build a script which can be used for the complete deployment of the solution, as the sketch below illustrates.
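
For example, the three manual steps above collapse into a handful of cmdlet calls (a minimal sketch; the URLs, file and feature names are illustrative):

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Install the solution into the farm's solution store.
Add-SPSolution -LiteralPath 'C:\Deploy\WSPs\MySolution.wsp'

# Deploy it to a web application.  (This runs via a timer job, so a
# real script waits for the deployment job to finish before continuing.)
Install-SPSolution -Identity 'MySolution.wsp' -WebApplication 'http://intranet' -GACDeployment

# Activate a feature at site collection level.
Enable-SPFeature -Identity 'MySolution_MainFeature' -Url 'http://intranet'
```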

2. Simplify documentation

This leads on from the first point.  With the removal of all the manual steps we can now produce less documentation.

3. More reliable deployment

Also following on from the first point: with the removal of the manual steps we get, by default, a more consistent and therefore more reliable deployment.

Principles

There are a number of principles we want to encapsulate in the deployment package.

  • Simplicity – For the people using the package it should be as simple as unzipping it and double-clicking an icon.
  • Agnostic – The deployment package should be able to be used in multiple environments without modification.
  • Self Aware – The deployment package should be able to detect what has previously been deployed and take appropriate action.  i.e. upgrade vs fresh install.
  • Reusable – The package should be able to be reused in multiple mechanisms, i.e. manually triggered installation versus automatically triggered.

In addition, we would like to be able to take any framework we put together to help with the deployment from project to project with only configuration changes.

The Package

The build and deploy framework we use provides the ability for us to build any type of package we need to deploy the project: MSIs, MSBuild scripts, PowerShell, batch files, anything, as long as it is all self-contained or relies only on built-in commands that are already installed.

The framework also delivers pre-built templates for BizTalk, databases, IIS, COM+, SSRS and Windows services, any of which can be combined and utilised together.  However, there was no reusable template for the deployment of SharePoint solutions, so we had to create one to add to the framework.  We decided to use PowerShell for the installation script as we could leverage not only the SharePoint cmdlets but also the standard SharePoint .NET components.  As an aside, a lot of the other templates heavily utilise a custom task for executing PowerShell code from the MSBuild script, so we thought it was time we challenged that approach.

The first thing we want to do is decide on a structure for the package.  In order to keep with the simplicity principle, and to keep it in line with the patterns already in the framework, it is preferable to have the root of the package uncluttered.  (I should note here that while I was involved with the building of the framework, the result was the amalgamation of work and ideas from multiple people and sources; I’m not attempting to take the credit from these people, even though I can’t remember their names.)

[Image: the deployment package folder structure]

Above is the structure that we settled on.

  • Configuration contains the definitions and configuration for the package.
  • Scripts contains the PowerShell (and any other script support) required to execute the configuration instructions.
  • WSPs contains the SharePoint solution packages.
  • Content contains additional content that needs to be loaded into SharePoint during the installation, and which would otherwise have been post-implementation manual steps before the site would work.

In the root of the structure are just two files: deploy.cmd, for launching the installation, and version.targets.  Version.targets is an artefact created by our build process; it has two purposes:

  1. It can be referred to during deployment.
  2. If the zip file is renamed, we can still determine the version without digging down to the assemblies in the package.

Need good bootstrapping (Simplicity)

The Deploy.cmd file calls a deploy.ps1 file, which then calls an install.ps1 file.  This long chain was necessary, and each file has its purpose.

Deploy.cmd performs three functions:

  1. Checks for the existence of PowerShell v2.0
  2. Ensures that the appropriate execution policy is set
  3. Launches deploy.ps1
    Originally this script also prompted the user for the action they wanted to take.  But we found that the repeated prompts for the web app that the solution was to be deployed to became a bit cumbersome, so we moved the menu into deploy.ps1.

Deploy.cmd means that the person doing the install can just right-click and select run as administrator.  There is no need for them to open up a particular PowerShell window, navigate to the right path, type the right command etc.  It is just a right-click.

Deploy.ps1 performs four key tasks which were easier to do with PowerShell than from the command prompt:

  1. Check that the farm version is at or higher than the version we have built for.
  2. Prompt the user for the Web Application to deploy to, based on the web applications available.
  3. Prompt the user for what action to perform.
  4. Call the appropriate scripts to perform the requested action.
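
A sketch of the first two tasks (the minimum farm version shown is illustrative):

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# 1. Refuse to run on a farm older than the version we built against.
$minVersion = [Version]'14.0.4762.1000'   # illustrative minimum
$farm = Get-SPFarm
if ($farm.BuildVersion -lt $minVersion) {
    throw "Farm version $($farm.BuildVersion) is below the required $minVersion."
}

# 2. Offer the available web applications as a numbered menu.
$webApps = @(Get-SPWebApplication)
for ($i = 0; $i -lt $webApps.Count; $i++) {
    Write-Host "[$i] $($webApps[$i].Url)"
}
$choice = [int](Read-Host 'Select the web application to deploy to')
$target = $webApps[$choice]
```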

Install.ps1 performs the main deployment operations.  It is therefore at this point that we encountered most of our problems.  We also use this script as our entry point for automatically triggered deployments as this script does not, if all parameters are supplied, prompt the user for any information.


PowerShell runs the Deactivate code (Simplicity)

When PowerShell is used to deactivate a feature, any custom deactivation code is run in the PowerShell instance, not in a timer job.  This means that the assembly holding the custom code is loaded into that instance, and therein lies the problem: once an assembly has been loaded into a PowerShell instance it isn’t possible to remove the assembly without unloading the whole PowerShell runtime.

To get around this we separated Uninstall from Install and call install.ps1 twice – once for each action – from the deploy.ps1 script.
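
A sketch of the workaround, assuming each action is launched in its own powershell.exe process so that assemblies loaded during the uninstall are gone before the install runs (the -Action and -WebApp parameters are hypothetical):

```powershell
# Each action gets a fresh PowerShell runtime, so assemblies loaded by
# feature deactivation code do not linger into the install.
& powershell.exe -NoProfile -File .\Scripts\install.ps1 -Action Uninstall -WebApp $targetUrl
& powershell.exe -NoProfile -File .\Scripts\install.ps1 -Action Install -WebApp $targetUrl
```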

There may be other ways to get around this:

  • Version the assemblies for every release.  This seems excessive in a SharePoint environment where the assembly version needs to be coded in a number of places.  Sure, there are ways to tokenise this, but it hasn’t been implemented universally in Visual Studio, so it is still awkward to do.
  • Use the upgrade mechanism instead.  This would work, but unless you are versioning your assemblies it seems difficult, and in a CI-type environment determining which version you are upgrading from isn’t always straightforward.  Again, SharePoint hasn’t made this easy.

Log everything

Frequently the errors reported during an installation are the result of an unreported error further up, or of incorrect choices made by the person installing.  The human errors are sometimes unavoidable, but the rest could point to corrections needing to be made in our scripts.

To help determine which case an error falls into, if you can log everything then you are a long way toward diagnosing the fault.  PowerShell helps out here with some useful built-in transcription commands.  We use these at the start of every one of our scripts so that all output is recorded for us in RTF files.

In addition to the standard output from the script we also output some other useful information, like the contents of all the parameters that were used to invoke the script.  It helps.
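
A minimal sketch of the pattern (file naming is illustrative; the built-in transcript is plain text regardless of the extension you give it):

```powershell
# Record all script output, plus the parameters used to invoke the script.
$scriptDir = Split-Path -Parent $MyInvocation.MyCommand.Path
$logFile = Join-Path $scriptDir "Deploy_$(Get-Date -Format 'yyyyMMdd_HHmmss').rtf"
Start-Transcript -Path $logFile

Write-Host 'Invoked with parameters:'
$PSBoundParameters.GetEnumerator() | ForEach-Object { Write-Host "  $($_.Key) = $($_.Value)" }

try {
    # ... deployment work goes here ...
}
finally {
    Stop-Transcript
}
```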

We found these logs were unnecessary when created from an automatically triggered build, as the output was packaged and sent via email by the tools we were using, so we added an extra parameter to the scripts to suppress logging in these scenarios.  We obviously don’t set that parameter for manually triggered deployments.


Test before execution (Self aware)

In the scripts there are multiple steps that can cause errors, such as removing a solution that has not been retracted, or deactivating an already deactivated feature.  Usually these conditions are testable before execution and can be worked around if we are aware of them, i.e. don’t deactivate the feature if it is already deactivated; retract the solution if it isn’t already retracted.  Thus each part of our script generally tests before executing, rather than trying to trap the error and report it erroneously.
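
A minimal sketch of the test-before-execute pattern (the solution and feature names are illustrative):

```powershell
# Only retract the solution if it is actually deployed.
$solution = Get-SPSolution -Identity 'MySolution.wsp' -ErrorAction SilentlyContinue
if ($solution -and $solution.Deployed) {
    Uninstall-SPSolution -Identity $solution -Confirm:$false
}

# Only deactivate the feature where it is actually active.
$feature = Get-SPFeature -Site 'http://intranet' -Identity 'MySolution_MainFeature' -ErrorAction SilentlyContinue
if ($feature) {
    Disable-SPFeature -Identity 'MySolution_MainFeature' -Url 'http://intranet' -Confirm:$false
}
```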


Separate Configuration from Scripts (Reusable)

To make the scripts reusable across solutions and customers, the scripts perform actions based on XML configuration files in the Configuration folder.  This means that we can take the same scripting process and apply it to the next set of solutions that we do.  This is far easier than modifying a large script of utilities every time we need a slightly different but similar deployment.
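
As an illustration of the idea (the XML shape shown is hypothetical, not our actual schema; $targetUrl comes from the earlier prompt):

```powershell
# Configuration\solutions.xml might contain, for example:
#   <Solutions>
#     <Solution Name="MySolution.wsp" GacDeployment="true" />
#   </Solutions>
[xml]$config = Get-Content '.\Configuration\solutions.xml'
foreach ($s in $config.Solutions.Solution) {
    Add-SPSolution -LiteralPath (Resolve-Path ".\WSPs\$($s.Name)").Path
    Install-SPSolution -Identity $s.Name -WebApplication $targetUrl `
        -GACDeployment:([bool]::Parse($s.GacDeployment))
}
```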


Farm Aware (Agnostic)

There are a number of instances when doing SharePoint installs that IIS, the SharePoint timer or the admin services need to be restarted.  In a multi-server farm this needs to be done on all servers, not just the one you are executing the scripts on.  This means that your scripts need to detect the servers in the farm and perform the reset on all of them, where appropriate.
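
A sketch of what farm-aware means in practice ('SPTimerV4' is the SharePoint 2010 timer service; the role filter is simplified):

```powershell
# Restart the SharePoint timer service on every SharePoint server in the farm.
$farm = Get-SPFarm
$spServers = $farm.Servers | Where-Object { $_.Role -ne 'Invalid' }   # skip e.g. the database server
foreach ($server in $spServers) {
    Get-Service -Name 'SPTimerV4' -ComputerName $server.Address | Restart-Service
}
```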


Last words

All the lessons above actually push us towards the principles listed earlier; I’ve marked each lesson with the principle it is associated with.

For those that are disappointed because I did not show any of the actual scripts we used: tough!  The post was getting too long without showing the code.  What you can do though is keep your eyes on this blog, as I’ll post some of these tricks at a later stage.  Most of them you can find online if you look anyway, so you aren’t missing anything big.

The next post will talk about how we get the build process to build this package for us.