I recently had some trouble trying to update a user’s profile properties via the SharePoint Online Admin console. I would provide values for the fields, but when I pressed the save button either the page would refresh with no validation messages, or I would be redirected as if the save had persisted. However, upon revisiting the user’s profile properties page it was clear that my changes had been discarded.
I was chasing a red herring for a while with the assumption that this issue was related to the Kiosk licence of the user I was attempting to update. The issue is not related to licence type, but appears to be a validation bug with the page.
Although I had provided values for the user profile properties marked with a star, the issue persisted. These properties are not actually required when updating a user profile from the Admin console. The issue in fact lies with the time zone settings.
I found that I could not persist updates while the ‘Always use regional settings defined by site administrators’ radio button was selected.
By changing this to ‘Always use my personal settings’ I was then able to save and persist updates to the user’s profile.
If you need to embed script into a content editable page in SharePoint 2013/Online, you may decide to use the new Script Editor web part. There are often many preferable ways to add script to a page (e.g. via the master page, a custom action, custom control, the ScriptLink property, etc.) however this is an easy option for demo purposes or when deployment activities are out of scope.
CAVEAT: As I stated in the first paragraph, this is often not the best way to add script to a page.
If you have customised the new button order and/or the default content type for a list or library, then expect these changes to be lost if that site is moved. This is a SharePoint bug! You may have made such changes in order to control which content types appear under a list’s new button, or to change the content type that is used by default (i.e. when a user just clicks the new document icon rather than selecting a content type from the drop-down).
As you can see in the image above, I have a library configured with a restricted set of content types available under the new button. After moving the site (using Site Settings > Content and Structure) these customisations are lost. See the next image.
With the luxury of a farm solution this can be fixed using a web event receiver. Using the WebMoving event you can store the list’s new button order information (e.g. in a web property bag), and then in the WebMoved event this information can be read and reapplied. I don’t have a code example of this, as in my situation it was suitable to apply a static new button order to lists based on the site definition and list template.
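For illustration, a rough sketch of that static approach using the list root folder’s UniqueContentTypeOrder property; the site URL, list name and content type names are placeholders for your own:

```powershell
# Sketch: reapply a static new-button content type order to a library
# after a site move. Site URL, list name and content type names are
# illustrative placeholders.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$web = Get-SPWeb "http://server/sites/moved-site"
$list = $web.Lists["Documents"]
$folder = $list.RootFolder

# Keep only the content types we want under the new button, in order.
# The first content type in the order is used by the new document icon.
$wanted = @("Proposal", "Invoice")
$order = New-Object "System.Collections.Generic.List[Microsoft.SharePoint.SPContentType]"
foreach ($name in $wanted) {
    $ct = $list.ContentTypes[$name]
    if ($ct -ne $null) { $order.Add($ct) }
}

$folder.UniqueContentTypeOrder = $order
$folder.Update()
$web.Dispose()
```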
When using Move Site via Site Content and Structure to move a site to another location in the site hierarchy you may find it fails with one of the following errors (depending on where you look):
Operation to Move 'old site URL' to 'new site URL' failed
"MoveWebs.Move catches SPException : The attempted operation is prohibited because it exceeds the list view threshold enforced by the administrator."
"Move Operation under site 'Site Name' failed in the Content and Structure tool. Details in ULS logs"
Increasing the list view threshold alleviates the issue; however, I have not managed to figure out exactly why the limit is reached. I have encountered this when none of the lists in the site being moved, or in any of its child sites, had breached the list view threshold. In fact the largest list was less than 2,000 items, with the list view threshold at the default 5,000.
Perhaps it’s due to the aggregated total of items being moved? This makes some sense, as the list view threshold is in place to prevent the SQL table locks which occur when more than 5,000 rows are queried. As all the lists in a site collection are stored in a single table, it makes sense that the same limitation would apply here.
If moving large sites is something that you need to do, I would suggest doing it out-of-hours if possible as these thresholds are in place for a good reason. Make sure that you utilise the administration list view threshold rather than increasing the one which restricts the majority of users. Performing actions that require an increased list view threshold may cause serious performance issues.
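As a sketch, the administration threshold is the MaxItemsPerThrottledOperationOverride property on the web application, as opposed to MaxItemsPerThrottledOperation which applies to ordinary users; the URL and value here are illustrative:

```powershell
# Sketch: raise only the list view threshold for auditors and
# administrators, leaving the standard user threshold untouched.
# Web application URL and the new limit are illustrative.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$webApp = Get-SPWebApplication "http://server"
$webApp.MaxItemsPerThrottledOperationOverride = 20000
$webApp.Update()
```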
Please comment if you have anything to add to this.
I have been finding that when moving a site (SPWeb) to a different location in the site hierarchy of a site collection, the breadcrumb would often (though not always) be incorrect once the site had been moved.
If you didn’t know, SiteMapProviders are cached in the SharePoint object cache. I’ll put the sporadic nature of the issue down to the natural refresh cycle of the object cache, but honestly I’m not completely sure why it doesn’t go wrong all the time. The important bit is that there is a way to ensure that the breadcrumb is refreshed correctly every time. For the sake of completeness, here is the SiteMapPath control with its SiteMapProvider property set to CurrentNavSiteMapProviderNoEncode from the custom master page (if you want to read more about this you could start here):
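A SiteMapPath declaration along these lines (attributes other than the provider name are illustrative):

```aspx
<asp:SiteMapPath runat="server"
    SiteMapProvider="CurrentNavSiteMapProviderNoEncode"
    RenderCurrentNodeAsLink="false"
    PathSeparator=" &gt; " />
```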
If you are suffering this issue I suspect that mentioning the object cache was enough to put you on the right path but I’ll spell it out just in case.
If you run a few lines of code in a WebMoved event receiver (I blogged briefly about attaching these here) you can force the object cache to be refreshed whenever a site is relocated. Be warned that if you have a site that leverages the object cache (e.g. via the cross-list query object, or the content query web part which utilises it), those operations will need to re-cache and may have some performance impact.
You may need to fetch the SPSite object that is passed into the constructor from within an elevated privileges block, to ensure the current user is allowed to perform this action. Obviously that depends on the expected audience for this action.
The following error occurs when attempting to make a data connection using the PerformancePoint Dashboard Designer and the SQL Server Table data source template:
Before I continue I want to make it clear that if you encounter this issue then you have not performed the initial configuration/installation correctly for your needs, and you should revisit the installation guides. In my situation I’m just hacking together a dev environment which requires the use of PerformancePoint. To reiterate: better solutions to this issue exist, in the form of installation guides. I wrote this because when I googled for a quick-and-dirty resolution to this issue I couldn’t find any references to the error message.
Back to the issue at hand: of course this is a configuration issue. The issue for me was that the PerformancePoint service application was configured to run in a general ‘SPServices’ application pool, where the application pool account (correctly) does not have access to the PerformancePoint database. To get around this I configured the service application to run in its own application pool, where the application pool account is a new domain account granted access to the PerformancePoint database. This meets the principle of least privilege, which we must all strive to uphold! (I actually just reused the ‘SPFarm’ (god) domain account as the application pool account to get it working in a dev environment, but that’s the theory…)
You will most likely want to use a different account when you configure the Unattended Service Account.
Note on Analysis Services
By following the above steps I managed to get the SQL Server Table data source working, but the Analysis Services data source was still throwing up the same error dialog. Upon setting the PerformancePoint service application properties, a dialog prompts you to install the PowerPivot for SharePoint installation package. After doing this, not only was I still getting the same error dialog when attempting to create an Analysis Services data source, but I could no longer create a SQL Server Table data source either. Running the PowerPivot for SharePoint 2013 Configuration tool resolved this issue; obvious, really.
System.Data.SqlClient.SqlException (0x80131904): Cannot open database "WSS_Content" requested by the login. The login failed. Login failed for user 'DOMAIN\apppoolaccount'.
SQL Database 'WSS_Content' on SQL Server instance 'SERVER' not found. Additional error information from SQL Server is included below. Cannot open database "WSS_Content" requested by the login. The login failed. Login failed for user 'DOMAIN\apppoolaccount'.
An unexpected error occurred. Error 7451.
No windows identity for DOMAIN\apppoolaccount.
Unable to load custom data source provider type: Microsoft.PerformancePoint.Scorecards.DataSourceProviders.AdomdDataSourceProvider, Microsoft.PerformancePoint.Scorecards.DataSourceProviders.Standard, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c System.IO.FileNotFoundException: Could not load file or assembly 'Microsoft.AnalysisServices.AdomdClient, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified. File name: 'Microsoft.AnalysisServices.AdomdClient, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91'
Here you will find a script to cancel all list workflows (not site workflows) running in a given site collection in the most efficient manner possible (that I can think of) while still only using the SharePoint API. A better approach still would be to query the SharePoint workflows SQL table to identify the running workflows so as to avoid iterating all site collection webs and lists. Unfortunately there is no ‘GetAllRunningWorkflows(SPSite)’ method available via the API. I imagine that this approach should be satisfactory in the majority of cases though.
There are a number of posts on the web with somewhat similar code, or at least code that aims to achieve the same outcome. All of the posts I found performed this function in a very inefficient manner, iterating the SPListItemCollection for every list in the site collection. This may be fine in many circumstances, but I wanted something that would run faster and with less strain on the server.
I have achieved this by checking for workflow associations on a list before iterating its items, and, where a workflow column exists, by querying the list items to determine whether each item needs to be returned at all. When returning the list items, I query with ViewFieldsOnly so that less data is returned.
This script also accepts an optional parameter specifying which workflow associations should be cancelled, for when you are not looking to cancel all workflow associations but only those with a specific name.
NB: The script contains a reference to a helper function, GetNestedCaml, which I have defined in a separate post which can be found here.
As the script sample is quite large I suggest clicking the ‘view raw’ link at the bottom of the sample to view it.
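In outline, the approach looks something like this. This is a simplified sketch only: it checks workflow associations before touching items, but omits the CAML and ViewFieldsOnly optimisations described above, and the site URL is a placeholder.

```powershell
# Simplified sketch: cancel running list workflows in a site collection,
# skipping any list that has no workflow associations at all.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$site = Get-SPSite "http://server/sites/target"
foreach ($web in $site.AllWebs) {
    foreach ($list in $web.Lists) {
        # Cheap check first: no associations means nothing to cancel.
        if ($list.WorkflowAssociations.Count -eq 0) { continue }

        foreach ($item in $list.Items) {
            foreach ($workflow in @($item.Workflows)) {
                if (-not $workflow.IsCompleted) {
                    [Microsoft.SharePoint.Workflow.SPWorkflowManager]::CancelWorkflow($workflow)
                }
            }
        }
    }
    $web.Dispose()
}
$site.Dispose()
```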
Here is a short PowerShell function that can be used when you need to dynamically generate CAML queries with many logically joined statements. I actually adapted it from a C# implementation I wrote (which is probably more useful…) but as you can rewrite it in C# very easily I won’t bother posting it twice.
As the CAML logical join operators (And, Or) can only compare two statements, when many statements need to be compared you must nest them, which is what this function achieves. The $join parameter should be passed as “And” or “Or”, and the $fragments parameter should be passed as an array of CAML statement strings such as: @("<Eq><FieldRef Name='Title' /><Value Type='Text'>title</Value></Eq>", "<Eq><FieldRef Name='FileLeafRef' /><Value Type='Text'>name.docx</Value></Eq>")
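A minimal implementation along these lines (a sketch matching the parameters described above, rather than the exact original):

```powershell
# Builds a nested CAML join from a flat array of statement strings.
# $join must be "And" or "Or"; $fragments holds CAML statements such
# as "<Eq>...</Eq>". Each join wraps exactly two operands, so the
# function recurses to nest the remainder.
function GetNestedCaml {
    param(
        [ValidateSet("And", "Or")]
        [string]$join,
        [string[]]$fragments
    )

    if ($fragments.Count -eq 0) { return "" }
    if ($fragments.Count -eq 1) { return $fragments[0] }

    # Pair the first fragment with the nested join of the rest.
    $rest = GetNestedCaml $join $fragments[1..($fragments.Count - 1)]
    return "<$join>$($fragments[0])$rest</$join>"
}

# Example: three statements joined with Or.
GetNestedCaml "Or" @("<A/>", "<B/>", "<C/>")
# -> <Or><A/><Or><B/><C/></Or></Or>
```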
I run a scripted deployment process each time new (SharePoint) solutions are ready for QA, or any environment for that matter. We script a series of commands, specifically: Uninstall-SPSolution, Remove-SPSolution, Add-SPSolution and finally Install-SPSolution.
Before running these commands, along with a number of other checks, we run Get-SPSite to ensure the site is available, and fail early if need be. If our custom solution has not been deployed, viewing any page under any of the web application’s site collections fails with an exception due to a failure to find one of the solution assemblies. This is because we have custom membership and claims providers which are defined in the solution assembly and referenced in the web.config for the web application. Despite this, SPSite and SPWeb objects can normally still be obtained safely via PowerShell, as forms authentication is not taking place.
So I was surprised to find the Get-SPSite check failing during deployment today, with a failure to locate the assembly containing the membership and claims providers. I did not discover the root cause of why this suddenly occurred, but I will outline what it took to fix it.
In the end I was able to run Install-SPSolution -Force to recover from this situation, but not before stopping and starting the SharePoint Administration service across all web servers in the farm. The service was not stopped on any of the machines; however, the install job was never completing, despite the timer job history in Central Administration stating that the job had been successful. Upon restarting this service on every server, the Install-SPSolution command would then complete.
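As a sketch of that recovery step (server filtering, the URL and the WSP name are placeholders, and Invoke-Command requires PowerShell remoting to be enabled on the target servers):

```powershell
# Sketch: restart the SharePoint Administration service (SPAdminV4)
# on every SharePoint server in the farm, then force the install.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$servers = Get-SPServer |
    Where-Object { $_.Role -ne "Invalid" } |  # exclude non-SharePoint servers (e.g. SQL)
    Select-Object -ExpandProperty Address

Invoke-Command -ComputerName $servers -ScriptBlock {
    Restart-Service -Name "SPAdminV4"
}

Install-SPSolution -Identity "solution.wsp" -GACDeployment -WebApplication "http://server" -Force
```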
I won’t take this opportunity to evangelise the benefits of scripted deployments; I’m going to assume that you already use them (as you should!) and simply provide a tiny bit of script that will identify when your deployment doesn’t run quite so smoothly. Before I do, I’ll briefly mention my experience of why a deployment may fail.
I have found that by far the most common reason a SharePoint full-trust deployment fails is that one or more of the assemblies being deployed is locked in the GAC and cannot be removed or replaced. An IISRESET fixes this in the majority of cases (consider performing a remote IISRESET across all the SharePoint servers in the farm as part of your deployment process; note to self: future blog topic…), and in the remaining cases stopping related services (the v4 Timer service, SSRS, etc.) on the affected server will release the assembly. The easiest way to identify the servers on which the deployment failed is via Central Administration:
WSPs are deployed using a timer job. When performing deployment actions we need to wait on that timer job and, once it completes, verify that the deployment status of the solution is as expected. If we don’t wait for the action and ensure that it ran successfully, we risk not detecting a failed deployment until we attempt to access the site. This can be a real time sink if we run lengthy scripts after deployment, or are scripting the deployment of a number of solutions in succession. The following snippet shows how easy it is to achieve this in PowerShell.
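A minimal version of that wait-and-verify step, assuming a hypothetical WSP name and web application URL:

```powershell
# Sketch: kick off the deployment, block until the solution's timer
# job has gone, then fail loudly if the solution is not deployed.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

Install-SPSolution -Identity "solution.wsp" -GACDeployment -WebApplication "http://server"

$solution = Get-SPSolution -Identity "solution.wsp"
while ($solution.JobExists) {
    Start-Sleep -Seconds 5
}
if (-not $solution.Deployed) {
    throw ("Deployment failed: " + $solution.LastOperationDetails)
}
```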
May your deployment failures be immediately detected.