31 January 2013

So a while ago I mentioned automated versioning of components during a TFS Build.  At the time I suggested Chris O’Brien’s approach, which is a good, simple approach.  However, if you want something a bit more powerful take a look at TFSVersioning.

TFSVersioning is a codeplex project that has heaps of versioning goodness and good documentation to match.

It is really easy to add to the build process by following the instructions they have given.

When using this with SharePoint projects the only limitation I found was that you had to apply the Assembly Version Pattern, which you obviously don’t want to do with SharePoint (the assembly version ends up bound into a number of SharePoint artefacts).  This appears to be a limitation in one of the embedded XAML files.

If you want to remove this limitation then just do the following.  Download the source code (I’m using the version current at the time of writing) and open the solution in Visual Studio.

Find the VersionAssemblyInfoFiles.xaml file and open up the designer.

Find the task that says “Validate then process Build Definition Values”.  Change the “Or” in the condition to an “And” so it reads “String.IsNullOrWhiteSpace(AssemblyVersionPattern) And String.IsNullOrWhiteSpace(AssemblyFileVersionPattern)”.


Compile the project and use the DLL instead of the one included in the package.

The other XAML workflows already allow you to specify either the AssemblyFileVersionPattern or the AssemblyVersionPattern on its own; it is just this flow that requires both.

Note:  Version 2 has been released for TFS 2012; I do not know whether that version has this limitation or not.

Categories: Programming, SharePoint

Automating SharePoint build and deployment–Part 4 : Putting it together

6 December 2012

This is the fourth, and probably final, in a multipart post about automating the build and deployment of SharePoint solutions.  The other blog posts are:

In this post we will put all these bits together to form one process from check-in to deployed solution.  Again, as in the rest of this series, this post is going to focus on the principles behind this task and how we went about it.  It is not going to give any code.

The parts

So far we have built an automatic deployment script and set up automated builds that produce the package for deployment.  Now we need the automated build to trigger a deployment into a staging or test environment.

We could use the lab management features of TFS Team Build to handle this, and that would be a good way to go.  But the approach we decided on was chosen because its only dependent tool is PowerShell; the other active components can be replaced with anything that will do that task, and there are a few to choose from.

For our team, in order to leverage work already done by other teams just before us, we decided to use TFS Deployer to launch the deployment based on changes to the build quality.

PowerShell Remoting

In order to allow TFS Deployer to do its job we need to enable PowerShell remoting between the build server (as the client) and our target SharePoint server (as the server).

To setup the SharePoint server open an elevated PowerShell prompt and enter the following commands:

Enable-PSRemoting -force
Set-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB 1000
Enable-WSManCredSSP -Role Server

This script enables remoting, increases the memory available to remote shells and then allows the server to accept delegated credentials.  This enables our deployment script to issue commands on other parts of the farm if required.

The build server is our client so there is a little more work to do from the client.

# Enable PowerShell remoting on the build server (the client)
$SharePointServer = "SharePointServer"
Enable-PSRemoting -force
Set-Item -force WSMan:\localhost\Client\TrustedHosts $SharePointServer
Restart-Service winrm

# Enable the client to delegate credentials to the server
Enable-WSManCredSSP –Role Client –DelegateComputer $SharePointServer

#Also need to edit Group Policy to enable Credential Delegation
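Before wiring up TFS Deployer it is worth checking the plumbing.  A quick smoke test sketch (server name and credentials are illustrative):

```powershell
# Hypothetical smoke test: run a trivial command on the SharePoint server
# over CredSSP to prove both remoting and credential delegation work.
$SharePointServer = "SharePointServer"
$cred = Get-Credential   # an account with rights on the target server

Invoke-Command -ComputerName $SharePointServer `
               -Authentication CredSSP `
               -Credential $cred `
               -ScriptBlock { hostname }
```

If this returns the remote machine name, the deployment scripts should be able to do their work.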

Further information on this entire process can be found at Keith Hill’s blog.

TFS Deployer – The agent in the middle

TFS Deployer is the tool we have chosen (it is not the only option) to detect the change in the Build Quality and launch the scripted deployment on the chosen environment.

TFS Deployer has a good set of instructions on its project site.  To simplify our environments we run TFS Deployer on the build server.

In order to simplify the process for our many clients and projects we have TFS Deployer call a generic script that first copies the project-specific deployment script, after interrogating the $TfsDeployerBuildDetail object for the location of the files.  It then calls that deployment script to execute the deployment.

The project-specific script file is also reasonably generic.  It first copies the packaged output (see previous posts) to the target server.  It then remotely extracts the files and executes the deployment scripts from inside that package.  Of course, in this scenario we know all the parameters that need to be passed, so we can conveniently bypass any prompting by supplying them all.
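As a rough sketch of what that pair of scripts looks like (paths, share names, parameters and the use of Expand-Archive are my own illustrative assumptions, not the actual framework code):

```powershell
# Generic TFS Deployer script: locate the build drop, fetch the
# project-specific deployment script from it, then hand over.
$dropLocation = $TfsDeployerBuildDetail.DropLocation
Copy-Item (Join-Path $dropLocation "Deploy\ProjectDeploy.ps1") $env:TEMP
& (Join-Path $env:TEMP "ProjectDeploy.ps1") -DropLocation $dropLocation

# Project-specific script: copy the package to the target server, then
# remotely extract it and run its install script with every parameter
# supplied so nothing prompts.
$target = "SharePointServer"
Copy-Item (Join-Path $dropLocation "Package.zip") "\\$target\c$\Deploy"
Invoke-Command -ComputerName $target -Authentication CredSSP -Credential $cred -ScriptBlock {
    Expand-Archive C:\Deploy\Package.zip -DestinationPath C:\Deploy\Package -Force
    & C:\Deploy\Package\Scripts\install.ps1 -WebApplication "http://portal" -Action Install
}
```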


That ends the series.  Unfortunately it has been code light, as I was working through the process and trying to capture the philosophies rather than the actual mechanism.  Hopefully you have learnt something from reading these posts and picked up a few tips along the way. 

As stated in the first post the purpose of this series was to put together and understand our build framework and requirements so that I could build a new version using the TFS Build workflow mechanism.  I believe I now have that understanding, so I can start that work.  Perhaps doing that work will lead to a few more blog posts in the future.

Automating SharePoint Build and Deployment–Part 3

22 November 2012

This is the third in a multipart post about automating the build and deployment of SharePoint solutions.  The other blog posts are:

  1. Introduction
  2. The Deployment Package

In this post we will look at the build itself.  Again, as in the rest of this series, this post is going to focus on the principles behind this task and how we went about it.  It is not going to give any code.


If we are doing SharePoint solutions, surely a quick build from the Visual Studio environment, and a package from there, will produce the correct output for us.  Why should we go through the pain of automating the build process?

That is a very good question.  Here are a few reasons:

  • Ensure that the code can be built consistently from any development machine
  • Ensure that everything required for the build is in the source control repository.
  • Ensure that what is in source control builds.
  • Ensure regular testing of the build process.
  • Ensure rapid resolution of a broken build.

But of course it comes back down to the big three we mentioned in a previous post:  Simplify, Repeatable and Reliable.

Simplify – Because in this case we are simplifying the work we need to do, and the associated documentation required.

Repeatable – Because we need to be able to repeat the process.  Maybe the solution won’t be built again for a year, but we need to ensure that when that time comes we can do it.

Reliable – Because as long as it builds the output is known to be built the same way, and in the same order.

But the most important reason of all: it is not hard to set up.


The build process only needs to do one thing: turn the source code into the deployment package we talked about in the previous post.

One of the advantages of automating everything is that we can ensure that all parts in the process are tagged and labelled so that we can find the exact code that was used to generate the components running in production. 

In order to do that we are going to need the following actions performed:

  1. Automatic increment of a version number.
  2. Labelling the source code with the above version number.
  3. Stamping of all components with the build name, and version number.
  4. Build of the source code.
  5. Package the built binaries into WSP packages, for SharePoint.
  6. Package of the WSPs, and our Deployment scripts into the package.

That seems like a lot, but as you will see most of that can be performed by the Team Foundation Build services.

How? – Building the solution

Team Foundation Server (and the service offering) makes this a really easy task.  From Team Explorer just expand the build node, right click, select “create a new build definition…” and follow the wizard.

There are a few extra things that you will need to do to build SharePoint projects though:

  1. SharePoint assemblies need to be on the build agent.  Chris O’Brien has this covered in his post Creating your first TFS Build Process for SharePoint projects.  While it is not recommended to install SharePoint on the build server, we have installed the binaries on the server but NOT configured it as a farm, which would kill the performance of the server.  We also install Visual Studio, but this again is not necessary or advisable.  I suggest that you follow How to Build SharePoint Projects with TFS Team Build on MSDN as that provides good alternatives and clearly lays out the process.
  2. /p:IsPackaging=True needs to be added to the MSBuild arguments parameter so that TFS will tell MSBuild to also perform the SharePoint packaging tasks and create the WSPs.

How? – Versioning

Every .NET assembly contains two version numbers: the assembly version, which is used in the strong name and thus ends up bound into a lot of the element.xml files; and the file version, which is just that: a number.

Therefore, for our purposes, the file version number is adequate (not to mention less complex to implement) for tracking the running components back to the source code.

We will also need to use the version number as part, or all, of the build number and ensure that the source code repository is labelled with the build number as well.  Fortunately TFS Build already performs these actions for us.

Chris O’Brien has a simple workflow extension, and instructions, based on another derivative work.  For further information and for how to extend the TFS build template to include this versioning see part 6 of his continuous integration series.
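The core of any such versioning activity is a find-and-replace over the AssemblyInfo files before compilation.  A minimal sketch (this is my own illustration, not Chris O’Brien’s extension; parameter names are assumptions):

```powershell
# Stamp AssemblyFileVersion in every AssemblyInfo.cs under the sources
# directory, deliberately leaving AssemblyVersion alone so the strong
# names bound into the SharePoint artefacts do not change.
param(
    [string]$SourcesDirectory = ".",
    [string]$Version = "1.0.0.0"
)

Get-ChildItem $SourcesDirectory -Recurse -Filter "AssemblyInfo.cs" | ForEach-Object {
    $content = Get-Content $_.FullName
    $content = $content -replace 'AssemblyFileVersion\("[\d\.]+"\)', "AssemblyFileVersion(""$Version"")"
    Set-Content $_.FullName $content
}
```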

How? – Packaging

Once the build has done the work of building the assemblies and the SharePoint packages the next step is to package these artefacts into the deployment package we mentioned in the previous post.

The team I was part of did this in the old TFS 2005 format (which used MSBuild) by extending the PackageBinaries target.  In addition, we were able to separate the definition of the package from its creation by using MSBuild include files.  This has made the solution incredibly easy to implement and highly reusable, even though it is in the MSBuild format.

To integrate this with the newer TFS build workflows we just need to modify the build workflow to call this target at the appropriate place, after the components have been built.

The process for packaging is really quite simple:

1. Build the package in a staging directory structure by copying the output files from the appropriate place.

2. Zip up the staging directory.

3. Ensure that the zip file is included as a build output.
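The three steps above can be sketched in PowerShell (variable names and the use of the .NET 4.5 ZipFile class are illustrative assumptions; our actual implementation lives in MSBuild targets):

```powershell
# 1. Build the package in a staging directory from the build outputs.
$staging = Join-Path $env:TEMP "DeploymentPackage"
New-Item $staging -ItemType Directory -Force | Out-Null
Copy-Item "$BinariesDirectory\*.wsp" (Join-Path $staging "WSPs") -Recurse
Copy-Item "$SourcesDirectory\Deployment\Scripts" $staging -Recurse

# 2. Zip up the staging directory.
Add-Type -AssemblyName System.IO.Compression.FileSystem
[IO.Compression.ZipFile]::CreateFromDirectory($staging, "$staging.zip")

# 3. Copy the zip to the drop location so it becomes a build output.
Copy-Item "$staging.zip" $DropLocation
```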


Last words

So now we have talked about the overall strategy, the deployment package and how we create the deployment package.  In the next post we will tie all the parts together to show how we can get all the way from the code to the running SharePoint server, every, or almost every, time that someone checks in some code.

Automating SharePoint Build and Deployment – Part 2: Deployment

1 November 2012

In the first part of this series we introduced you to the concept of automated deployment and the three parts of our build and deployment framework: Build, Package and Deploy.

This post is about the Deploy process: what it looks like and what we learnt.


While SharePoint has a mechanism for the deployment of solutions and features onto the platform, there are many advantages to automating the deployment of solutions into SharePoint.  Let’s have a quick look at what these advantages are:

1. Removal of manual steps

Installing a solution into SharePoint requires:

  • The installation of the solution (WSP) itself, from a command line using either STSADM or PowerShell. 
  • The Deployment of the solution, from PowerShell or Central Admin.
  • Activation of the features, at Farm, Web App, Site Collection or Site level, using a combination of site administration, central administration or PowerShell.

Thus a standard deployment can have pages of manual steps to be followed.  But as we can perform all of these tasks using PowerShell we should be able to build a script which can be used for the complete deployment of the solution.
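The manual steps listed above map directly onto the SharePoint 2010 cmdlets.  A minimal sketch (solution, feature and URL names are illustrative):

```powershell
# Load the SharePoint cmdlets if running from a plain PowerShell prompt
Add-PSSnapin Microsoft.SharePoint.PowerShell

# 1. Install (add) the solution to the farm's solution store
Add-SPSolution -LiteralPath "C:\Deploy\WSPs\MySolution.wsp"

# 2. Deploy the solution to a web application
Install-SPSolution -Identity "MySolution.wsp" `
                   -WebApplication "http://portal" `
                   -GACDeployment

# 3. Activate a feature, here at the site collection level
Enable-SPFeature -Identity "MySolution_MyFeature" -Url "http://portal"
```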

2. Simplify documentation

This leads on from the first point.  With the removal of all the manual steps we can now produce less documentation.

3. More reliable deployment

Also following on from the first point: with the removal of the manual steps we get, by default, a more consistent and therefore more reliable deployment.


There are a number of principles we want to encapsulate in the deployment package.

  • Simplicity – For the people using the package it should be as simple as unzipping it and double clicking an icon.
  • Agnostic – The deployment package should be able to be used in multiple environments without modification.
  • Self Aware – The deployment package should be able to detect what has previously been deployed and take appropriate action.  i.e. upgrade vs fresh install.
  • Reusable – The package should be able to be reused in multiple mechanisms, i.e. manually triggered installation versus automatically triggered.

In addition any framework we put together to help with the deployment we would like to be able to take from project to project with only configuration changes.

The Package

The build and deploy framework we use lets us build any type of package we need to deploy the project (MSIs, MSBuild scripts, PowerShell, batch files, anything) as long as it is self contained or relies only on commands that are already installed.

The framework also delivers pre-built templates for BizTalk, databases, IIS, COM+, SSRS and Windows services, any of which can be combined and utilised together.  However, there was no reusable template for the deployment of SharePoint solutions, so we had to create one to add to the framework.  We decided to use PowerShell for the installation script as we could leverage not only the SharePoint cmdlets but also the standard SharePoint .NET components.  As an aside, a lot of the other templates heavily utilise a custom task for executing PowerShell code from the MSBuild script, so we thought it was time we challenged that approach.

The first thing we want to do is decide on a structure for the package.  In order to keep with the simplicity principle, and to keep it in line with the patterns already in the framework, it is preferable to have the root of the package uncluttered.  (I should note here that while I was involved with the building of the framework, the result was an amalgamation of work and ideas from multiple people and sources; I’m not attempting to take the credit from these people, even though I can’t remember their names.)


Below is the structure that we settled on.

  • Configuration contains the definitions and configuration for the package.
  • Scripts contains the PowerShell (and any other script support) required to execute the configuration instructions.
  • WSPs contains the SharePoint solution packages.
  • Content contains any additional content required to be loaded into SharePoint during the installation, which would otherwise have been post-implementation manual steps before the site would work.

In the root of the structure are just two files: deploy.cmd, for launching the installation, and version.targets.  Version.targets is an artefact created by our build process and has two purposes:

  1. It can be referred to during deployment.
  2. If the zip file is renamed, we can still determine the version without digging down to the assemblies in the package.
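For reference, a version.targets file would look something like this (the property names here are my own assumptions, not the framework’s actual schema):

```xml
<!-- Illustrative version.targets: created by the build, read by the
     deployment scripts, and human-readable if the zip gets renamed. -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <BuildNumber>MyProject_20121101.3</BuildNumber>
    <PackageVersion>1.0.1101.3</PackageVersion>
  </PropertyGroup>
</Project>
```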

Need good boot strapping (Simplicity)

The Deploy.cmd file calls a deploy.ps1 file, which then calls an install.ps1 file.  This long chain was necessary, and each file has its purpose.

Deploy.cmd performs three functions:

  1. Check for the existence of PowerShell v2.0
  2. Ensures that the appropriate Execution policy is set
  3. Launches deploy.ps1
    Originally this script also prompted the user for the action they wanted to take.  But we found that the repeated prompts for the web app that the solution was to be deployed to became a bit cumbersome, so we moved the menu down into deploy.ps1.

Deploy.cmd means that the person doing the install can just right-click and select Run as administrator.  There is no need for them to open a particular PowerShell window, navigate to the right path, type the right command and so on.  It is just a right-click.
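An illustrative deploy.cmd covering those three functions might look like this (not the framework’s actual file; the PowerShell detection is deliberately crude):

```bat
@echo off
rem 1. Check that PowerShell is available at all.
powershell.exe -NoProfile -Command "exit 0" >nul 2>&1
if errorlevel 1 (
    echo PowerShell 2.0 or later is required.
    exit /b 1
)

rem 2 and 3. Set the execution policy for this invocation only, then
rem launch deploy.ps1 from the package root, passing any arguments on.
powershell.exe -NoProfile -ExecutionPolicy Unrestricted -File "%~dp0deploy.ps1" %*
```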

Deploy.ps1 performs four key tasks which were easier to do in PowerShell than in the command prompt:

  1. Check the farm version is at or higher than what we have built for.
  2. Prompt the user for the Web Application to deploy to, based on the web applications available.
  3. Prompt the user for what action to perform.
  4. Call the appropriate scripts to perform the requested action.

Install.ps1 performs the main deployment operations, and it is therefore at this point that we encountered most of our problems.  We also use this script as the entry point for automatically triggered deployments, as this script does not prompt the user for any information if all parameters are supplied.
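The prompt-only-when-missing behaviour can be sketched like this (parameter names are assumptions):

```powershell
# Sketch of the install.ps1 entry point: every value can be supplied by
# the caller, so an automated deployment that passes them all is never
# prompted; a human running it interactively gets asked for the gaps.
param(
    [string]$WebApplication,
    [ValidateSet("Install","Uninstall","Upgrade")]
    [string]$Action
)

if (-not $WebApplication) { $WebApplication = Read-Host "Web application URL" }
if (-not $Action)         { $Action         = Read-Host "Action to perform" }
```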


PowerShell runs the Deactivate code (Simplicity)

When PowerShell is used to deactivate a feature, any custom deactivation code is run in the PowerShell instance, not in a timer job.  This means that the assembly holding the custom code is loaded into that instance, and there is the problem: once an assembly has been loaded into a PowerShell instance it isn’t possible to remove it without unloading the whole PowerShell runtime.

To get around this we separated Uninstall from Install and call the Install.ps1 twice – once for each action – from the deploy.ps1 script.
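In sketch form (script and parameter names are illustrative), deploy.ps1 launches each action in its own process:

```powershell
# Run each action in a fresh powershell.exe process so assemblies loaded
# while running Deactivate code die with that process, and the Install
# pass starts with a clean runtime.
$scriptDir = Split-Path -Parent $MyInvocation.MyCommand.Path

& powershell.exe -NoProfile -File "$scriptDir\install.ps1" -Action Uninstall -WebApplication $webApp
& powershell.exe -NoProfile -File "$scriptDir\install.ps1" -Action Install   -WebApplication $webApp
```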

There may be other ways to get around this:

  • Version the assemblies for every release.  This seems excessive in a SharePoint environment, where the assembly version needs to be coded in a number of places.  Sure, there are ways to tokenise this, but it hasn’t been implemented universally in Visual Studio so it is still awkward to do.
  • Use the feature upgrade mechanism instead.  This would work, but unless you are versioning your assemblies this seems difficult, and in a CI-type environment determining which version you are upgrading from isn’t always straightforward.  Again, SharePoint hasn’t made this easy.

Log everything

Frequently the errors reported during an installation are the result of an unreported error further up, or of incorrect choices made by the person doing the install.  The latter are sometimes avoidable when a human is involved, but errors may also point to corrections needed in our scripts when humans aren’t the cause.

To help determine which case an error falls into, if you log everything then you are a long way toward diagnosing the fault.  PowerShell helps out here with some useful built-in transcription commands.  We use these at the start of every one of our scripts so that all output is recorded for us in RTF files.

In addition to the standard output from the script, we also output some other useful information, like the contents of all the parameters that were used to invoke the script.  It helps.
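A minimal sketch of that pattern (file naming is illustrative):

```powershell
# Transcribe everything this script does, and record the parameters it
# was invoked with, so a failed run can be diagnosed after the fact.
$logName = "install_{0:yyyyMMdd_HHmmss}.rtf" -f (Get-Date)
Start-Transcript -Path (Join-Path (Get-Location) $logName)

Write-Output "Script invoked with parameters:"
Write-Output ($PSBoundParameters | Out-String)

# ... the actual deployment work goes here ...

Stop-Transcript
```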

We found these logs were unnecessary when created from an automatically triggered build, as the output from those runs was packaged and sent via email by the tools we were using, so we added an extra parameter to the scripts to suppress logging in these scenarios.  We obviously don’t set that parameter for manually triggered deployments.


Test before execution (Self aware)

In the scripts there are multiple steps that can cause errors, such as removal of a solution that has not been retracted, or deactivation of an already deactivated feature.  Usually these conditions are testable before execution and can be worked around if we are aware of them, i.e. don’t deactivate the feature if it is already deactivated; retract the solution if it isn’t already.  Thus each part of our script generally tests before executing, rather than trying to trap the error and report it erroneously.
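The retraction example in sketch form (solution name is illustrative, and a real script would also wait for the retraction timer job to finish before removing):

```powershell
# Test-before-execute: check the farm's current state instead of trapping
# (and misreporting) the error that acting blindly would cause.
$solution = Get-SPSolution -Identity "MySolution.wsp" -ErrorAction SilentlyContinue

if ($solution -and $solution.Deployed) {
    # Retract first; removing a still-deployed solution fails.
    Uninstall-SPSolution -Identity $solution -Confirm:$false
}
if ($solution) {
    Remove-SPSolution -Identity $solution -Confirm:$false
}
```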


Separate Configuration from Scripts (Reusable)

To make the scripts reusable across solutions and customers, the scripts perform actions based on XML configuration files in the Configuration folder.  This means that we can take the same scripting process and apply it to the next set of solutions that we do.  This is by far easier than modifying a large script of utilities every time we need a slightly different but similar deployment.


Farm Aware (Agnostic)

There are a number of instances when doing SharePoint installs where IIS, the SharePoint timer service or the admin service needs to be restarted.  In a multi-server farm this action needs to be performed on all servers, not just the one you are executing the scripts on.  This means that your scripts need to detect the servers in the farm and perform the reset on all of them, where appropriate.
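A sketch of the timer service case (the role filter and remoting approach are simplifications; a real script would also handle servers where the service is not provisioned):

```powershell
# Restart the SharePoint timer service on every server in the farm,
# not just the local one, using the farm topology to find them.
$farm = Get-SPFarm
$farm.Servers |
    Where-Object { $_.Role -ne "Invalid" } |   # skip non-SharePoint servers
    ForEach-Object {
        Invoke-Command -ComputerName $_.Address -ScriptBlock {
            Restart-Service SPTimerV4
        }
    }
```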


Last words

All the lessons above actually push us towards the principles listed earlier; I’ve marked each lesson with the principle it is associated with.

For those that are disappointed because I did not show any of the scripts we used: tough!  The post was getting too long without showing the code.  What you can do, though, is keep your eyes on this blog, as I’ll post some of these tricks at a later stage.  Most of them you can find online if you look anyway, so you aren’t missing anything big.

The next post will talk about how we get the build process to build this package for us.

Automating SharePoint Build and Deployment–part 1

25 October 2012

Over the last twelve years of my career I’ve been part of a service delivery company that delivers software, both bespoke and packaged, to hundreds of customers every year.  Across these projects there are common tasks, and we have created frameworks to build and deploy our products.  Doing so has improved the quality of the products we deliver, so I think it is worth sharing my experiences of building them.

This article, and the ones following, aren’t about those frameworks.  I’m not going to tell you how they work, and I’m not going to give you the code (so don’t ask).  This is because of a number of factors, but the key one is that the framework is out of date and uses deprecated features of the current versions of Microsoft Team Foundation Server (TFS) and other tools.

This series is going to be about the guiding principles and techniques we have used to extend this framework so that we can do automated builds, and deployments, with SharePoint.  I’m hoping that: 1. we can understand what we have, so we can update it; and 2. you, the reader, can improve your environment using these thoughts.

The toolset

Before I continue it is probably worth mentioning the tools we have used to build our framework, even though the principles in this series can and should be applied to any other tools that perform a similar function.

  • Microsoft Team Foundation Server – this is used primarily as a source control repository, but the TFSBuild components are critical in the build stage, which I’ll leave for the subject of another post.
  • TFS Deployer – this is a codeplex project that monitors the builds happening in TFS and triggers scripts based on changes to the build quality, result, etc.
  • MSBuild – Visual Studio, and obviously TFSBuild, use this behind the scenes to do the actual compilation of the code.  We use this to package the build output and sometimes as the scripting language for the deployment, again I’ll leave that subject for another post.
  • PowerShell – The ultimate scripting language for managing Microsoft technologies and products.
  • SharePoint 2010 – OK, so I almost forgot that one.  The fact that we deploy against this does not mean that we cannot use a different version.  It is just that from 2010 SharePoint has had really useful PowerShell cmdlets to help.  SP 2013 will work just as well using this broad approach; MOSS 2007 will require some changes to the deployment step.
    You probably noticed that these are all either Microsoft technologies or built on Microsoft technologies.  That is because I work in the Microsoft Solutions team, and also because the point of this series is deploying to SharePoint (which is itself a Microsoft product).  This doesn’t mean you can’t substitute other tools but we have chosen to use this set.

Why automate?

It is important to define the goals behind this automated process.  Otherwise we could end up automating everything but achieving nothing, instead of focusing on the important areas to automate first.

The goals are simple:

  1. Save time – No one likes doing the same thing over and over.
  2. Reduce errors – If it is automated there is less chance of forgetting something or doing it wrong.
  3. Save cost – By saving time and reducing the rework from errors we’ll automatically save costs.  In addition, reusing the framework on the next project saves the time it would take to build again, which is a saving that can be passed directly on to the customers.

How and what are we automating?

The short version is that we are automating everything to do with building and releasing the software.  As soon as a developer checks in some code we want to be able to have that built, deployed onto a SharePoint farm and tested, before we tell the tester that they can have a look. 

At least that is the dream.  To realise this dream (and I haven’t quite got there yet) we have to break the problem down into small tasks that can each be automated.  Automating the entire process is then accomplished by building these smaller automated pieces and automating the running of them.

So let’s break it down a little, which will also guide the structure for the rest of this series.

After many attempts at defining the parts of the framework we have settled on three broad areas for automation; these are:

  1. Build – The building of the source code, versioning the assembly outputs, signing etc.
  2. Package – The packaging of the build outputs into something that can be deployed.
  3. Deploy – The deployment of the package created in the above step.
    The main objective is to automate the deployment post shipment, thereby reducing the length of time it takes to write installation instructions, reducing the errors that invariably occur when the infrastructure people attempt to install our software, and, if an error does occur, reducing the time it takes to determine where it went wrong.
    Therefore all the steps work toward meeting this objective: the first two steps build and create the package, and the third tests it before shipment and controls the release post shipment.

Since, in order to define the package, we need to know what the deployment looks like, we will start by investigating what we need to deploy SharePoint solutions.  Then, as we progress through the series, we will work backwards through the package and build steps.

There are of course other approaches to doing this; in fact, when we come to update our framework we will probably lean heavily upon them.  The most notable is Chris O’Brien’s series on SharePoint 2010 Continuous Integration; however, our framework does some things differently, so we will need to make some changes to that approach.

But that is enough of an introduction, in the next post we will look at the structure of the deployment package.

AD FS next steps…

Once you have AD FS set up with SharePoint there are some other things you may want to consider.  These are the ones that came up when I installed it for a client recently.


When the user logs out from the SharePoint site the user is not logged out of AD FS.  This may or may not be a problem but needs to be considered.  There are two options:

1. Disable Single Sign-On in AD FS.  To do this you will need to modify the web.config of the AD FS installation (see SharePoint and AD FS Part 2) and look for the microsoft.identityServer.web node.  In there you will find a singleSignOn property; change this value to false.  This method has the disadvantage that users then cannot sign on once to the organisation when there are multiple web sites they can browse.  This may or may not be an issue.
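For reference, the node in question looks something like this (from my own notes; verify the exact attribute names against your AD FS installation’s web.config):

```xml
<microsoft.identityServer.web>
  <!-- set enabled="false" to disable single sign-on -->
  <singleSignOn enabled="false" />
</microsoft.identityServer.web>
```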

2. Modify the logout in SharePoint so it logs out of AD FS.  Shailen Sukul has an excellent example of this method here.

Adding another Web application

The point will invariably come when another web application from the SharePoint farm will need to use the same AD FS instance.  Steve Peschka has an excellent blog post “How to Create Multiple Claims Auth Web Apps in a Single SharePoint 2010 Farm” explaining how to do this.  The only thing that isn’t clear in his post is how to get the $ap variable populated if you already have it registered.  This is simple though, if you have only one token issuer registered then the following line will get it for you.

$ap = Get-SPTrustedIdentityTokenIssuer

If you have more than one you can use

$ap = Get-SPTrustedIdentityTokenIssuer –Identity "name of issuer"

to resolve the one you want.

Categories: SharePoint

SharePoint 2010 and AD FS – Part 3

In the first part of this series we looked at the problem we were trying to solve by using these products together. In the second part we installed the AD FS server.  Now we need to register our AD FS as a token issuer in SharePoint and configure SharePoint as a relying party in AD FS.

We will start on the AD FS Server creating the Relying party since if you are following this series in sequence you will already be on the AD FS Server.

Relying party trust

1. From the AD FS 2.0 Management application expand the Trust Relationships node and click on the Relying Party Trusts node.  Then click the Add Relying Party Trust link to start the wizard.


2. Press start to start the wizard.

3. Select “Enter data about the relying party manually”, and click “Next”

4. Enter a display name and a description if you want to, and press the “Next” button.

5. Select the option to use the AD FS 2.0 profile and click the “Next” button.

6. We won’t use a certificate to encrypt the SAML token so leave this step as is and press the “Next” button.

7. On the Configure URL screen check “Enable support for the WS-Federation Passive protocol” and enter the URL to the SharePoint web application’s root site, including the “_trust” subdirectory.

8. After entering the URL press the “Next” button.

9. Under Configure Identities enter a realm in the form of a URN.  This is generally created in the format urn:foo:bar.  We will associate this realm with a web application in SharePoint, which means that SharePoint can tell AD FS which relying party to use.  I’m going to use urn:seo:portal for my application.  Then click the “Next” button.

10. Permit all users to access this relying party, and then press the Next button.

11. Click the Next button again and then click Close.

This will close the wizard and open the Edit Claim Rules dialog box.


12. Click the “Add Rule…” button.

13. Select “Send LDAP Attributes as Claims” and click the Next button.

14. Start by giving the rule a name, it can be anything you like.  Then select Active Directory from the Attribute store drop down.

15. I am going to use the email address as the identifier for the person and pass through all the groups a user belongs to as Role claims.  To create this mapping, select the LDAP attribute from the drop-down on the left-hand side and the corresponding outgoing claim type from the drop-down on the right-hand side.

This is what you should end up with.


16. Click Finish, and then OK.

You have now configured the Relying party for SharePoint.  We now need to grab the token signing certificates from the AD FS manager and configure AD FS as a Token Issuer in SharePoint.
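For those who prefer scripting, the relying party and claim rules above can also be created with the AD FS 2.0 PowerShell snap-in.  This is only a sketch of the same configuration — the display name, realm and URL are the example values used above, so substitute your own:

  Add-PSSnapin Microsoft.Adfs.PowerShell

  # One rule that issues the email address and group memberships,
  # equivalent to the "Send LDAP Attributes as Claims" rule above
  $rules = @'
  @RuleName = "SharePoint claims"
  c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
   => issue(store = "Active Directory",
            types = ("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress",
                     "http://schemas.microsoft.com/ws/2008/06/identity/claims/role"),
            query = ";mail,tokenGroups;{0}", param = c.Value);
  '@

  Add-ADFSRelyingPartyTrust -Name "SEO Portal" -Identifier "urn:seo:portal" `
                            -WSFedEndpoint "https://portal.example.com/_trust/" `
                            -IssuanceTransformRules $rules `
                            -IssuanceAuthorizationRules '=> issue(Type = "http://schemas.microsoft.com/authorization/claims/permit", Value = "true");'

The last parameter is the scripted equivalent of the “Permit all users” rule from the wizard.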

Copying certificates

Extract from AD FS

AD FS uses a certificate to sign the tokens it sends out. This assures the consumer of the token that it has not been tampered with since it was created. To configure SharePoint we need a copy of this certificate. To get the token signing certificate from AD FS, expand the Service node and click on the Certificates node.


There is a section there for Token-signing certificates. You may have one or many token-signing certificates, but there will always be only one Primary token-signing certificate. Click on that certificate, and then click on the View Certificate link in the right pane.


Now that you are viewing the certificate, click on the Details tab at the top of the dialog.


Click on the Copy to File… button. That will start a wizard to save a copy of the certificate to disk.

Click the Next button to continue.

You don’t need the private key, so accept the default settings and click the Next button.


The default format is fine so click the Next button to continue.

Pick a location to save the certificate and click the Next button. Make sure you remember this location because you will need to copy the certificate from where you save it over to the SharePoint server.

All the information needed to copy the certificate locally has been captured now so click the Finish button to complete the wizard and save the certificate to a local file.

Copy this file to the SharePoint server and then we are finished with the AD FS server.

These steps should be completed for all certificates in the chain.
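As an alternative to the export wizard, the primary token-signing certificate can be pulled out with the AD FS 2.0 PowerShell snap-in.  A minimal sketch — the file path here is just an example:

  Add-PSSnapin Microsoft.Adfs.PowerShell

  # Grab the primary token-signing certificate and save it DER encoded (no private key)
  $signing = Get-ADFSCertificate -CertificateType "Token-Signing" | Where-Object { $_.IsPrimary }
  $bytes = $signing.Certificate.Export([System.Security.Cryptography.X509Certificates.X509ContentType]::Cert)
  [System.IO.File]::WriteAllBytes("C:\TokenSigning.cer", $bytes)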

Register with SharePoint

On the SharePoint server, run the following PowerShell script for each certificate exported above, changing the Name as required.

  $cerFile = "C:\ADFSParent.cer"
  $X509 = [System.Security.Cryptography.X509Certificates.X509Certificate2]
  $root = New-Object $X509($cerFile)
  New-SPTrustedRootAuthority -Name "Token Signing Cert Parent" -Certificate $root

Register the provider in SharePoint

Now we have the token signing certificates on the SharePoint server we need to create a Trusted Identity Token Issuer for AD FS and register in SharePoint.

To do this we will need some more PowerShell.  Here is the PowerShell, I’ll explain it below.

  $cerFile = "c:\TokenSigning.cer"
  $X509 = [System.Security.Cryptography.X509Certificates.X509Certificate2]
  $cert = New-Object $X509($cerFile)
  $map = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" `
                                -IncomingClaimTypeDisplayName "EmailAddress" -SameAsIncoming
  $map2 = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.microsoft.com/ws/2008/06/identity/claims/role" `
                                 -IncomingClaimTypeDisplayName "Role" -SameAsIncoming

  $realm = "urn:seo:portal"

  $ap = New-SPTrustedIdentityTokenIssuer -Name "AD FS Login Provider" -Description "SharePoint secured by AD FS" -Realm $realm `
                                         -ImportTrustCertificate $cert -ClaimsMappings $map,$map2 `
                                         -SignInUrl "https://logon.example.com/adfs/ls" `
                                         -IdentifierClaim $map.InputClaimType


The first few lines load the token signing certificate from wherever you saved it.  Make sure you update the file path.

The two New-SPClaimTypeMapping calls create claim mappings equivalent to the rules we configured above when creating the Relying party.

Next the Realm is defined, which we entered above for the Relying party.

The last line creates the Trusted Identity Token Issuer in SharePoint.  The Name is shown to your users if they need to select an authentication provider, so don’t make it too cryptic.  The SignInUrl is the location of the AD FS server’s sign-in page.  The last attribute, IdentifierClaim, is the claim we are using to identify users; this will show as their display name.

At this point you should be able to select this provider as a token issuer in the web application’s authentication options.
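If you would rather do that last step in PowerShell as well, something like the following should attach the provider to an existing web application.  This is a sketch only — the web application URL is a placeholder, and Windows authentication is kept alongside AD FS:

  # Hypothetical web application URL — use your own
  $webApp  = Get-SPWebApplication "https://portal.example.com"
  $sts     = Get-SPTrustedIdentityTokenIssuer "AD FS Login Provider"
  $windows = New-SPAuthenticationProvider                             # default Windows (NTLM) claims provider
  $trusted = New-SPAuthenticationProvider -TrustedIdentityTokenIssuer $sts
  Set-SPWebApplication -Identity $webApp -Zone "Default" -AuthenticationProvider $windows, $trusted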


And that is the end of the series.

Categories: SharePoint