I run a scripted deployment process each time new (SharePoint) solutions are ready for QA, or any environment for that matter. We script a series of commands, specifically: Uninstall-SPSolution, Remove-SPSolution, Add-SPSolution and finally Install-SPSolution.
Before running these commands, along with a number of other checks, we perform Get-SPSite to ensure the site is available and fail early if need be. If our custom solution has not been deployed, viewing any page under any of the web application’s site collections fails with an exception caused by a failure to find one of the solution assemblies. This is because we have custom membership and claims providers which are defined in the solution assembly and referenced in the web.config for the web application. Despite this, SPSite and SPWeb objects can be obtained safely via PowerShell, as forms authentication is not taking place.
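In outline, the scripted sequence looks something like this (a simplified sketch; the URL, paths and solution name are placeholders, and in practice each retraction/deployment waits on its timer job):

    $webAppUrl = "http://intranet"
    $wspName   = "MySolution.wsp"
    $wspPath   = "C:\Deploy\MySolution.wsp"

    # Fail early if the site is unavailable.
    Get-SPSite $webAppUrl -ErrorAction Stop | Out-Null

    Uninstall-SPSolution -Identity $wspName -WebApplication $webAppUrl -Confirm:$false
    # (wait for the retraction timer job to finish here)
    Remove-SPSolution -Identity $wspName -Confirm:$false
    Add-SPSolution -LiteralPath $wspPath
    Install-SPSolution -Identity $wspName -WebApplication $webAppUrl -GACDeployment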
So I was surprised to find the Get-SPSite check failing during deployment today, with a failure to locate the assembly containing the membership and claims providers. I never discovered the root cause of why this suddenly occurred, but I will outline what it took to fix it.
In the end I was able to run Install-SPSolution -force to recover from this situation, but not before stopping and starting the SharePoint Administration Service across all web servers in the farm. The service was running on every machine, yet the install job never completed, despite the timer job history in Central Administration stating that the job had been successful. Once this service had been restarted on every server, the Install-SPSolution command completed.
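The recovery can itself be scripted. A sketch, assuming the farm servers are reachable via PowerShell remoting and that the Windows service is named SPAdminV4 (true for SharePoint 2010/2013):

    # Restart the SharePoint Administration service on every farm server
    # (the "Invalid" role filters out the database server).
    Get-SPServer | Where-Object { $_.Role -ne "Invalid" } | ForEach-Object {
        Invoke-Command -ComputerName $_.Address -ScriptBlock { Restart-Service SPAdminV4 }
    }

    # Then force the installation through.
    Install-SPSolution -Identity "MySolution.wsp" -WebApplication "http://intranet" -GACDeployment -Force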
Credit to Andreas’ blog for pointing me in the right direction.
I won’t take this opportunity to evangelise the benefits of scripted deployments; I’m going to assume that you already do it (as you should!) and provide a tiny bit of script that will identify when your deployment doesn’t run quite so smoothly. Before I do, I’ll briefly mention my experience as to why a deployment may fail.
I have found that by far the most common reason a SharePoint full-trust deployment fails is that one or more of the assemblies being deployed is locked in the GAC and cannot be removed/replaced. An IISRESET fixes this in the majority of cases (consider performing a remote IISRESET across all the farm’s SharePoint servers as part of your deployment process. Note to self: future blog topic…) and in the remaining cases stopping related services (v4 Timer, SSRS, etc.) on the affected server will release the assembly. The easiest way to identify the servers at which the deployment failed is via CA:
Central Administration > System Settings > Farm Management :: Manage farm solutions > “Your WSP”
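As for the remote IISRESET mentioned above, a minimal sketch (iisreset accepts a remote computer name, provided you have administrative rights on the target):

    # Reset IIS on every SharePoint server in the farm.
    Get-SPServer | Where-Object { $_.Role -ne "Invalid" } | ForEach-Object {
        iisreset $_.Address /noforce
    }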
WSPs are deployed using a timer job. When performing deployment actions we need to wait on the timer job and, upon it completing, verify that the deployment status of the solution is as we expect. If we don’t wait for the action and ensure that it has run successfully, we run the risk of not detecting a failed deployment until we attempt to access the site. This can be a real time sink if we run lengthy scripts after deployment or are scripting the deployment of a number of solutions in succession. The following snippet shows how easy it is to achieve this in PowerShell.
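Something along these lines (a sketch rather than the exact snippet; the solution name and URL are placeholders):

    function Wait-SPSolutionJob([string]$solutionName)
    {
        # Block until the solution's deployment/retraction timer job has finished.
        while ((Get-SPSolution -Identity $solutionName).JobExists)
        {
            Start-Sleep -Seconds 2
        }
        return (Get-SPSolution -Identity $solutionName)
    }

    Install-SPSolution -Identity "MySolution.wsp" -WebApplication "http://intranet" -GACDeployment
    $solution = Wait-SPSolutionJob "MySolution.wsp"
    if (-not $solution.Deployed)
    {
        throw "Deployment failed: $($solution.LastOperationDetails)"
    }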
May your deployment failures be immediately detected.
Programmatically updating user profile properties (specifically Email) for inactive users may cause you grief.
Background
In SharePoint a user’s email address (along with some other key properties including name, login, etc.) is stored in three distinct locations. These are:
User Store (this could be AD, SQL for FBA etc.)
User Profile Database (specifically the ‘UserProfile_Full’ table)
SharePoint Content Database(s) (specifically the ‘UserInfo’ table)
For the sake of this article I am making the presumption that User Profiles have been configured.
The data stored across these locations needs to be kept synchronised, for obvious reasons, and this is done in different ways depending on the source being updated. For FBA, updating the User Store will almost certainly only occur via User Profiles, if it is allowed at all (depending on what properties you decide to store there), so sync’ing TO the User Profile Database will not be considered. In the case of an update occurring in AD, the User Profile Synchronisation timer job will periodically check for updates and persist them to the User Profile Database. You can find more on this here.
User Profile Synchronisation
A scheduled process will then sync’ the User Profile Database with the SharePoint content database.
Gotcha!
It is this part of the process which has inspired me to write this post; there is an important caveat to the previous sentence. It should read: a scheduled process will then sync’ the User Profile Database with the SharePoint content database for all active users (in this case an active user is one that has saved his/her user profile). The impact of this is that if you programmatically update the email user profile property for an inactive user (e.g. from a@b.com to c@d.com), the property value will not be sync’ed, and an action such as:
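(A PowerShell sketch, since the original snippet is not shown; $mailer stands in for the fictitious Mailer class, and the URL and login are placeholders.)

    # SPUser.Email is read from the content database's UserInfo table,
    # which still holds the stale address for an inactive user.
    $web  = Get-SPWeb "http://intranet"
    $user = $web.EnsureUser("i:0#.f|membership|a@b.com")
    $mailer.Send($user.Email)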
will send an email to a@b.com rather than c@d.com whereas:
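(Again a sketch: reading the address from the User Profile Service Application instead.)

    # The User Profile Database holds the updated value.
    $site    = Get-SPSite "http://intranet"
    $context = Get-SPServiceContext $site
    $upm     = New-Object Microsoft.Office.Server.UserProfiles.UserProfileManager($context)
    $profile = $upm.GetUserProfile("i:0#.f|membership|a@b.com")
    $mailer.Send($profile["WorkEmail"].Value)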
would correctly send the email to c@d.com (Note: Mailer is a fictitious class in these examples).
For FBA scenarios this leads on to what may be a more significant issue. If your users log in using their email address, then most likely the FBA login name includes this email address (e.g. for a user with email a@b.com, their login name is i:0#.f|membership|a@b.com). If your custom-developed components include code that relies on this relationship, such as web.EnsureUser(TOKEN + email), then this code may start failing for these inactive users. It’s unlikely that this will affect the inactive users themselves (as they aren’t using the system), but it may affect other users attempting to interact with accounts that exist in the system before their owners have begun using it.
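To illustrate the failure mode (a hypothetical sketch; the claims token, URL and addresses are placeholders):

    $token = "i:0#.f|membership|"
    $web   = Get-SPWeb "http://intranet"

    # The user's email was updated to c@d.com, but for an inactive user the
    # content database may still only know the a@b.com login, so this can throw.
    $user = $web.EnsureUser($token + "c@d.com")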
Solution
I believe that the goal is to update the tp_Email column in the SharePoint content database UserInfo table, but I have been unable to find an API which does this. Set-SPUser does not achieve it, and nor does SPFarm.MigrateUserAccount. I don’t dare break best practice and update the table directly; in my case, awareness of this issue was enough, because the only time we attempt to update inactive user profiles is when obfuscating production email addresses after restoring production database backups to our test environments. Having said that, I would be very interested to hear of a good solution to this issue if anyone has found one.
Infrastructure contacted me to complain that one of our SharePoint environments was logging too much data (via the ULS) and that it was becoming unmanageable (an operations management tool like SCOM has not been configured). When looking through many gigabytes of text, even with a free tool like ULSViewer, it is difficult to be confident that you are correctly identifying the most common issues; it is an inexact art at best.
That is why I wrote a log analyser as a PowerShell script which will process ULS log files and, using fuzzy comparison, create a report of the most frequently occurring log entries.
I am very well aware that this is not necessarily useful information in many cases (hence I had to write this script myself). Nevertheless I found it useful in my scenario and I hope that some of you may as well.
Just in case you are interested: using this script I was able to declare with certainty that logging by the SPMonitoredScope class made up almost 30% of total log entries. This will be reduced by explicitly setting the log severity to Verbose in the class constructor and maintaining a log level of Medium for the SharePoint Foundation : Monitoring diagnostic logging category.
A few things of note:
You may want to add to or remove from the set of replace statements in order to increase/decrease the ‘fuzziness’ of the comparison. Adding a replace statement for removing URLs may be a good candidate if you wish to increase matches.
The script loads entire files into memory at once. Be aware of this if you have very large log files or not much RAM.
The output file is a CSV; open it with Excel.
By default the script will identify and analyse all *.log files in the current directory.
If you cancel the script during processing (ctrl+c) it will still write all processed data up until the point at which it was cancelled.
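The core of the approach is roughly the following (a condensed sketch rather than the full script; the message column position and the replace patterns are assumptions you should tune for your logs):

    # Pull the Message column out of every ULS log in the current directory.
    $messages = Get-ChildItem *.log | ForEach-Object {
        Get-Content $_.FullName | Select-Object -Skip 1 | ForEach-Object {
            ($_ -split "`t")[7].Trim()   # ULS logs are tab-delimited; Message is the 8th column
        }
    }

    # 'Fuzzy' normalisation: collapse the variable parts so similar entries match.
    $normalised = $messages | ForEach-Object {
        $_ -replace '[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}', '{GUID}' `
           -replace '\d+', '{N}'   # add/remove replace statements to tune fuzziness
    }

    # Report the most frequently occurring entries as a CSV.
    $normalised | Group-Object |
        Sort-Object Count -Descending |
        Select-Object Count, Name |
        Export-Csv .\UlsReport.csv -NoTypeInformation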
I quite enjoy PowerShell-ing so expect to see more utilities in the future.
In an ideal world, downtime (scheduled or otherwise) would be avoided entirely. Unfortunately, there are plenty of reasons why a web site may need to go into a scheduled maintenance mode. It’s important that this is done correctly, and performing such a task across a farm manually can be error prone and tedious.
In my case, I wanted to automate the activation/deactivation of a maintenance page across a SharePoint farm with multiple Web Front End servers. The same script would be run across a number of different environments with differing topology depending on requirements (development, QA, staging, production, etc).
As of .NET 2.0 a very useful feature was introduced which makes putting a single web application (on a single server) into maintenance mode straightforward. If you drop a file named “app_offline.htm” into the IIS web application directory for your web site it will “shut-down the application, unload the application domain from the server, and stop processing any new incoming requests for that application. ASP.NET will also then respond to all requests for dynamic pages in the application by sending back the content of the app_offline.htm file”. [quoted from ScottGu’s Blog].
Leveraging this technique I have written a script to provision (or remove) a maintenance page correctly across all the required servers in a SharePoint farm.
A few things of note:
There are a number of SharePoint-specific PowerShell commands being used so, as it is, this script cannot be used for other, non-SharePoint ASP.NET web sites. I would like to hear from anyone who modifies it for another use.
PowerShell remoting must be enabled on the target servers (Enable-PSRemoting -Force), and the user running the script needs permission to write to and delete from the file system of the other servers.
As it is, the script drops the app_offline.htm file for every web application zone. In my case, I only wanted to block users from accessing the zone configured to use our custom claims provider. I have left in the check for this in case you find yourself in a similar situation.
The maintenance page must be entirely self-contained. By this I mean that all CSS and JS must be embedded, and even images must be referenced using inline base64 representations. See here for an easy way to achieve this.
If you want to perform an IISRESET and stop/start the SharePoint Timer Service at each WFE before taking down the maintenance page, then uncomment the relevant lines.
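The essence of the script is something like this sketch (paths are placeholders; error handling, the zone check and the optional IISRESET/timer-service steps noted above are omitted):

    $pageContent = [System.IO.File]::ReadAllText("C:\Deploy\app_offline.htm")
    $enable = $true   # $false to bring the sites back up

    $servers = Get-SPServer | Where-Object { $_.Role -ne "Invalid" }
    foreach ($webApp in Get-SPWebApplication)
    {
        foreach ($iisSettings in $webApp.IisSettings.Values)
        {
            # The local IIS path for this zone, identical on each WFE.
            $path = Join-Path $iisSettings.Path.FullName "app_offline.htm"
            foreach ($server in $servers)
            {
                Invoke-Command -ComputerName $server.Address -ScriptBlock {
                    param($path, $content, $enable)
                    if ($enable) { Set-Content -Path $path -Value $content }
                    else { Remove-Item $path -ErrorAction SilentlyContinue }
                } -ArgumentList $path, $pageContent, $enable
            }
        }
    }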
If you find this helpful please subscribe to our feed and feel free to leave a comment with your thoughts.
If you are a C# developer who is new to PowerShell, or at least new to writing PowerShell functions with typed parameters, you will likely come across the following error:
Cannot convert the "System.Object[]" value of type "System.Object[]" to type "<type of first argument>"
The reason for this error is most likely because you are passing a comma separated list of arguments as you would in C#:
myfunction arg1, arg2, arg3 or perhaps even myfunction(arg1, arg2, arg3)
In both of these cases you are in fact passing an array of objects as the single and only parameter. Considering this, the error message makes perfect sense. In PowerShell the comma is an array operator (see here) and arguments should be passed to a function delimited only by a space character. Like so:
myfunction arg1 arg2 arg3
This brings up another question though: “how come I wasn’t getting an error when I had untyped parameters (or string type parameters)?”
This is because, under the covers, the arguments list is just an object array and the parameters are positional – they are retrieved by indexing into the arguments array. When you pass a single parameter that is an object array, you are passing in the arguments array itself. When the parameters are typed such that they can be auto-converted from an object type (e.g. string) then it will work as expected.
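A minimal repro of all three behaviours (hypothetical function names):

    function Typed([int]$a, [int]$b, [int]$c) { "$a $b $c" }
    function Untyped($a, $b, $c) { "first param: $a" }

    Typed 1 2 3      # correct: prints "1 2 3"
    Typed 1, 2, 3    # error: cannot convert "System.Object[]" ... to type "System.Int32"
    Untyped 1, 2, 3  # no error: $a silently receives the whole array @(1,2,3)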
A colleague of mine, Ari Bakker, recently posted about the new SharePoint 2013 certification road map. As it represents a significant change, especially for developers, it is definitely worth understanding if you plan on getting certified in the future. Check it out here.