SPFx packaging: Sharing code between many web parts or extensions

When developing SharePoint Framework components (web parts and extensions), you may release a single one to an environment and be done with it. More likely, though, you’ll be creating multiple web parts and extensions and will need to decide how to approach SPFx packaging.

Things to consider when packaging SPFx components

  • How do I share my code between components?
  • How do I share library code between components?
  • How do I ensure that components use the same version of the SharePoint Framework?
  • How do I version dependencies of the components?
  • How do I upgrade components?
  • What is the development/test environment like?

All these questions really boil down to a single question:

Do you create a multi-component SPFx project or a project for each component?

* SPOILER *

I have concluded that the default approach to SPFx packaging should be to include all components in a single multi-component project. Avoid creating multiple packages where possible. Depending on the scale of the development team there may be some scenarios where this is not appropriate, in which case create as few multi-component projects as are necessary. Furthermore, I recommend creating a single multi-component JS bundle file for all web parts in the package (a multi-component bundle), rather than the default approach of having a JS file for each component.

(Screenshot: SPFx config.json showing multiple web parts packaged as a single bundle as part of a single package.)

Terminology

Multi-component project/package: SPFx packaging such that a single sppkg file is produced which deploys multiple SPFx web parts or extensions.
Multi-component bundle: Only available within the context of a multi-component project, a multi-component bundle includes the JS required for all components as a single file rather than a file for each component.

Just do it

So take my word for it: add all your SPFx components to a single package and create multi-component bundles. To add additional web parts to an existing project, just run the Yeoman generator again in the same folder location. Elio Struyf has a post about multi-component bundles and I’m sure that there will be official guidance released very shortly.
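
For example, re-running the generator in an existing project (an illustrative sketch; the folder name is a placeholder):

cd .\my-spfx-solution        # the root folder of your existing SPFx project
yo @microsoft/sharepoint     # re-run the generator in the same folder

The generator detects the existing solution, prompts for the type and name of the new component, and adds it to the existing package rather than scaffolding a new project.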

Or perhaps you’d like me to explain my rationale, the benefits, and where this approach may not meet your needs. In which case, please read on…

What you sacrifice by having a single project

Firstly, let’s discuss what is sacrificed by packaging all of your SPFx components into a single sppkg package.

  1. You can no longer install or upgrade components individually. (However, the new site collection scoped app catalog may assist with this.)
  2. Depending on your development environment, separate projects can make it easier to govern source code and DevOps processes during development and test, for example if you have different teams working on different web parts.
  3. With separate projects you can build and deploy individual components during development, which may save time if/when the volume of components becomes large. (In these cases the gulp tool chain could be modified to meet requirements.)

What you gain by having a single project

If the sacrifices above aren’t deal breakers then there are many benefits to be had by taking this approach.

  • Sharing code between SPFx components is trivial. Sharing code between packages is hard.
  • Deployment and upgrade is trivial especially with tenant-scoped deployment – just deploy a single package to the app catalogue and you are done.
  • The risk of having multiple web parts using different versions of the SharePoint Framework is avoided.
  • The risk of having multiple versions of third-party libraries loaded is greatly reduced.
  • Total payload of components will be much smaller due to reduced duplication of shared code, library code (especially Office UI Fabric), and the framework itself due to multi-component bundling. Sub-optimal usage of external references, static vs dynamic import statements, and the bloat that some recommended frameworks currently inflict (Office UI Fabric React…) can lead to very substantial page weight increases. By using a multi-component bundle the worst case scenarios are avoided as in most cases these issues will impact a solution once for each bundle.
  • Versioning of shared code is trivial because you don’t have to do it. Internal dependencies are included in the bundle and external dependencies are referenced only once. The framework itself handles the component versioning for you.

And finally…

I’d be particularly interested to hear from people who have found strong reasons to package components individually, because currently I believe that the benefits of multi-component SPFx packaging outweigh the drawbacks in nearly all scenarios.

Paul.

Deploying/provisioning difficult web parts (like XsltListViewWebPart)

Deploying the XsltListViewWebPart to a page as part of a packaged solution can be a challenge (as can some other web parts). When these web parts are exported they refer to the underlying list or library via its GUID rather than by URL or name. This means that the web part cannot be imported using this same XML into another site, as it will fail to find a list with that GUID.

As Microsoft’s guidance is now to avoid using the server-side object model, the previously most common way to resolve this issue is no longer available. Web parts could be provisioned by JSOM code on a page, however that really feels messy and there are declarative alternatives.


Replacing the AllUsersWebPart element with the View element

Specifically for provisioning an XsltListViewWebPart, rather than using the AllUsersWebPart element to provision a web part to a page you can use a View element instead. The View element allows you to specify a list via URL; when the list reference is omitted in the child web part element, this URL will be used to identify the list or library. This is my recommended approach. See here for the View element schema definition.

 

Using the listId token for document libraries

There is a token that can be used to replace the library GUIDs in the web part XML. I say library specifically as I can’t seem to get this approach to work with lists. The properties that need to be replaced with the token are as follows (in this example we are referencing the ‘Documents’ library):

<property name="ListName" type="string">{$ListId:Documents;}</property>

<property name="ListId" type="System.Guid, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">$ListId:Documents;</property>

An example of the full XML using this approach can be found here.

This method can be used for other web parts as well such as specifying a specific list when deploying a ContentQueryWebPart.

Using BinarySerializedWebPart for lists or libraries

The other option is a very static and rather awkward approach to solving this issue – but at least it works. The BinarySerializedWebPart allows any web part to be imported in a binary format and provides a mechanism for mapping GUID to URL. The major downside is that if you want to make a minor change it will require you to re-create the binary representation of the web part.

In order to find this XML you will need to configure a web part as desired on a site that has never had publishing features enabled and then save the site as a site template. (Publishing features disable the save-site-as-template functionality, and although you can get around this, it can cause very strange side-effects, so I recommend avoiding it. For example, I once found that all of my Script Editor web parts were deleted from a site after performing the operation with publishing features enabled.) By downloading the site template package and renaming it to .cab you will be able to find an Elements.xml that contains all the XML for deploying web parts, including yours as a BinarySerializedWebPart if it can’t be otherwise imported.

For more information on this see here.

Good luck!

Key difficulties deploying a SharePoint Online solution using CSOM

I have been developing a console app that utilises the SharePoint C# CSOM to deploy a solution to SharePoint Online (a.k.a. Office 365 SharePoint). The solution involves more than just a wsp (although it has one of those too). I have encountered a few difficulties during this process and this post discusses them:

  • (Re)creating a site collection
  • Importing a large-ish taxonomy
  • Uploading and installing a sandboxed solution (that contains only declarative elements)
  • Hooking up of taxonomy and (root site) lookup columns
  • Pre-creating a number of sites with specific features enabled (including the root site)

Before I go any further, for those of you reading this before doing something similar yourselves, please be aware of a few constraints which caught me by surprise:

  • You can’t leverage the same import taxonomy function that is available in Term store management. If you already have files in that format you will need some custom code (I have an example later on) or you may want to import from a more robust XML formatted document.
  • The CSOM does not support uploading or activating sandboxed solutions! However, there is a CodePlex project that assists with this. I also include the dll later in the post that I have rebuilt with references to the latest v16 Microsoft.SharePoint.Client dlls.
  • The CSOM does not support activating web scoped features! You can activate site scoped features but not web scoped ones. You need to use web templates to achieve this. Again, I will cover this in some more depth later on.


Deleting and recreating a site collection

The initial step of the deployment process involves creating a new site collection (having deleted it first as required). In order to perform actions at this scope (tenant) you cannot create your client context in the usual manner, with a reference to the site collection: it does not exist yet, and the delete site collection and empty recycle bin operations cannot use it either. Instead you must create the client context passing in the tenant admin site URL.
This is the one that looks like this: https://<tenant>-admin.sharepoint.com

You can then create a Microsoft.Online.SharePoint.Tenant object by passing the ‘admin’ client context to its constructor. This object requires a reference to the Microsoft.Online.SharePoint.Client.Tenant assembly which is available by downloading and installing the SharePoint Server 2013 Client Components SDK. The assembly can then be found here: C:\Program Files\SharePoint Client Components\16.0\Assemblies

The tenant object provides the methods required to perform the create and delete site collection actions. This process involves a lot of waiting about for deletion to complete, and then provisioning to complete. Unfortunately you can’t continue with other actions until this has occurred. I found this to take upwards of three minutes.

A link to the relevant code that I used to achieve this can be found here: https://gist.github.com/paulryan/cbfaa966571d6a9cdb8b
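
For illustration only, here is a rough PowerShell sketch of the same tenant-scoped CSOM calls (my actual implementation is the C# in the gist above; the URLs, account, and property values below are placeholders):

Add-Type -Path "C:\Program Files\SharePoint Client Components\16.0\Assemblies\Microsoft.SharePoint.Client.dll"
Add-Type -Path "C:\Program Files\SharePoint Client Components\16.0\Assemblies\Microsoft.SharePoint.Client.Runtime.dll"
Add-Type -Path "C:\Program Files\SharePoint Client Components\16.0\Assemblies\Microsoft.Online.SharePoint.Client.Tenant.dll"

function Wait-SpoOperation($ctx, $op)
{
    # Poll a long-running tenant operation (site deletion/creation) until it completes
    $ctx.Load($op)
    $ctx.ExecuteQuery()
    while (-not $op.IsComplete)
    {
        Start-Sleep -Seconds 15
        $op.RefreshLoad()
        $ctx.ExecuteQuery()
    }
}

$adminUrl = "https://tenant-admin.sharepoint.com"
$siteUrl  = "https://tenant.sharepoint.com/sites/target"
$owner    = "admin@tenant.onmicrosoft.com"
$password = Read-Host "Password" -AsSecureString

# The client context is created against the tenant admin site, not the (yet to exist) site collection
$ctx = New-Object Microsoft.SharePoint.Client.ClientContext($adminUrl)
$ctx.Credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($owner, $password)
$tenant = New-Object Microsoft.Online.SharePoint.TenantAdministration.Tenant($ctx)

# Delete the existing site collection and purge it from the recycle bin
Wait-SpoOperation $ctx $tenant.RemoveSite($siteUrl)
Wait-SpoOperation $ctx $tenant.RemoveDeletedSite($siteUrl)

# Recreate the site collection and wait for provisioning to complete
$props = New-Object Microsoft.Online.SharePoint.TenantAdministration.SiteCreationProperties
$props.Url = $siteUrl
$props.Owner = $owner
$props.Template = "STS#0"
$props.StorageMaximumLevel = 1000
Wait-SpoOperation $ctx $tenant.CreateSite($props)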

Importing taxonomy

As mentioned above, you can’t pass those CSV files directly to the CSOM and have it import them all for you. In my scenario we had already developed a lot (dozens) of term sets in the form of these CSV files during a discovery phase, so it was important that I could support the import of taxonomy in this form. I wrote code to support the import of these files, but only to the point that it meets my immediate requirements. Please use the following as a rough guide only as it is not fully featured (or tested beyond the happy path).

The code I wrote to support this can be found here: https://gist.github.com/paulryan/e461f8bac28336b05109#file-importtaxonomycsom-cs

Uploading and activating a sandboxed solution

There is a CodePlex project that provides this functionality (as well as some authentication utilities) that I mentioned above. It performs web requests against the UI and I am very glad someone else has already done this before me! It was originally created when SharePoint 2010 was present in the cloud and references the v14 Microsoft.SharePoint.Client assemblies accordingly. If you don’t mind maintaining references to both v14 and v16 assemblies then this might be fine. I have instead rebuilt the source having replaced the references with the v16 equivalents.

You can download it here: SharePointOnline.Helper.dll

FYI: v14 is SharePoint 2010, v15 is SharePoint 2013, v16 is specific to SharePoint Online

Activating web features

Actually there isn’t a lot more to say here other than you must use web templates if you need to create sites with features enabled as part of the deployment process, as it can’t (currently) be done using the CSOM. I would recommend using the web template for nothing other than activating features and putting all other declarative elements in features. This will provide the best upgrade experience in the future.

Hooking up taxonomy columns

The best place to start is almost certainly Chris O’Brien’s blog on this here. As I have the luxury of being able to run further deployment code after uploading/activating the sandboxed solution, I opted to avoid having to rebuild the solution for various environments and instead hook up the columns using the CSOM and a mapping. There is a catch with this though.

If your list instance is built from a list template which defines the managed metadata columns then updating the site column via the CSOM fails to push down the new SspID. To get around this, DO NOT include managed metadata column definitions as part of the list definition (in the fields element). When you run the CSOM to update the site columns it will update the content type and add the column to the list instance with the correct SspID.

Good luck building your SharePoint Online CSOM deployment framework!

Deployment failure causing Get-SPSite to fail

I run a scripted deployment process each time new (SharePoint) solutions are ready for QA, or any environment for that matter. We script a series of commands, specifically:
Uninstall-SPSolution, Remove-SPSolution, Add-SPSolution and finally Install-SPSolution.
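
For illustration, the core of that sequence looks something like the following (the solution name, path, and web application URL are placeholders):

$wspName   = "oursolution.wsp"
$wspPath   = "C:\Deploy\oursolution.wsp"
$webAppUrl = "https://webapp.contoso.local"

# Retract and remove the existing solution (waiting on the retraction timer job is omitted here)
Uninstall-SPSolution -Identity $wspName -WebApplication $webAppUrl -Confirm:$false
Remove-SPSolution -Identity $wspName -Confirm:$false

# Add and deploy the new build
Add-SPSolution -LiteralPath $wspPath
Install-SPSolution -Identity $wspName -WebApplication $webAppUrl -GACDeployment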

Before running these commands, along with a number of other checks, we perform Get-SPSite to ensure the site is available and fail early if need be. If our custom solution has not been deployed then viewing any page under any of the web application’s site collections fails with an exception due to a failure to find one of the solution assemblies. This is because we have custom membership and claims providers which are defined in the solution assembly and are referenced in the web.config for the web application. Despite this, any SPSite and SPWeb objects can be obtained safely via PowerShell as forms authentication is not taking place.

So I was surprised to find the Get-SPSite check failing during deployment today with a failure to locate the assembly containing the membership and claims providers. I did not discover the root cause as to why this suddenly occurred but will outline what it took to fix it.

In the end I was able to run Install-SPSolution -Force to recover from this situation, but not before stopping and starting the SharePoint Administration service across all web servers in the farm. The service was not stopped on any of the machines, yet the install job was never completing despite the timer job history in Central Administration stating that the job had been successful. After restarting the service on every server, the Install-SPSolution command would then complete.
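
For anyone in the same situation, the recovery boiled down to something like this (run the service restart on every web server; the solution name and URL are placeholders):

# Restart the SharePoint Administration service (SPAdminV4) on each web server
Restart-Service -Name SPAdminV4

# Then force the deployment again
Install-SPSolution -Identity "oursolution.wsp" -WebApplication "https://webapp.contoso.local" -GACDeployment -Force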

Credit to Andreas’ blog for pointing me in the right direction.

Scripting WSP deployment

I won’t take this opportunity to evangelise the benefits of scripted deployments; I’m going to assume that you already do them (as you should!) and provide a tiny bit of script that will identify when your deployment doesn’t run quite so smoothly. Before I do, I’ll briefly mention my experience as to why a deployment may fail.

I have found that by far the most common reason a SharePoint full-trust deployment fails is that one or more of the assemblies being deployed is locked in the GAC and cannot be removed/replaced. An IISRESET fixes this in the majority of cases (consider performing a remote IISRESET across all the SharePoint servers in the farm as part of your deployment process. Note to self: future blog topic…) and in the remaining cases stopping related services (v4 Timer, SSRS etc) on the affected server will release the assembly. The easiest way to identify the servers at which the deployment failed is via CA:

Central Administration > System Settings > Farm Management :: Manage farm solutions > "Your WSP"
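
As an aside, that remote IISRESET can itself be scripted; a rough sketch, assuming you have the rights to reset IIS on each server remotely:

# Reset IIS on every SharePoint server in the farm (Role 'Invalid' indicates a non-SharePoint server such as SQL)
Get-SPServer | Where-Object { $_.Role -ne "Invalid" } | ForEach-Object {
    iisreset $_.Address /noforce
}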

WSPs are deployed using a timer job. When performing deployment actions we need to wait on the timer job and, upon it completing, verify that the deployment status of the solution is as we expect. If we don’t wait for the action and ensure that it has run successfully, we run the risk of not detecting a failed deployment until we attempt to access the site. This can be a real time sink if we run lengthy scripts after deployment or are scripting the deployment of a number of solutions in succession. The following snippet shows how easy it is to achieve this in PowerShell (a sketch; the solution name and web application URL are placeholders):
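
function Wait-SPSolutionDeployment
{
    param([string]$SolutionName)

    # Wait for the deployment/retraction timer job to finish
    $solution = Get-SPSolution -Identity $SolutionName
    while ($solution.JobExists)
    {
        Start-Sleep -Seconds 5
        $solution = Get-SPSolution -Identity $SolutionName
    }

    # Then verify that the end state is what we expect
    if (-not $solution.Deployed)
    {
        throw "Solution '$SolutionName' did not deploy successfully. Check the farm solution status in Central Administration."
    }
}

Install-SPSolution -Identity "oursolution.wsp" -WebApplication "https://webapp.contoso.local" -GACDeployment
Wait-SPSolutionDeployment -SolutionName "oursolution.wsp"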

May your deployment failures be immediately detected.

SharePoint Maintenance Mode Automation with PowerShell

In an ideal world, downtime (scheduled or otherwise) would be avoided entirely. Unfortunately, there are plenty of reasons why a web site may need to go into a scheduled maintenance mode. It’s important that this is done correctly, and performing such a task across a farm manually can be error prone and tedious.

 


In my case, I wanted to automate the activation/deactivation of a maintenance page across a SharePoint farm with multiple Web Front End servers. The same script would be run across a number of different environments with differing topology depending on requirements (development, QA, staging, production, etc).

 

As of .NET 2.0 a very useful feature was introduced which makes putting a single web application (on a single server) into maintenance mode straightforward. If you drop a file named “app_offline.htm” into the IIS web application directory for your web site it will “shut-down the application, unload the application domain from the server, and stop processing any new incoming requests for that application. ASP.NET will also then respond to all requests for dynamic pages in the application by sending back the content of the app_offline.htm file”. [quoted from ScottGu’s Blog].

 

Leveraging this technique I have written a script to provision (or remove) a maintenance page correctly across all the required servers in a SharePoint farm.

 

A few things of note:

  • There are a number of SharePoint specific PowerShell commands being used so, as it is, this script cannot be used for other, non SharePoint, ASP.NET web sites. I would like to hear from anyone who modifies it for another use.
  • PowerShell remoting must be enabled on the target servers (Enable-PSRemoting -Force), and the user running the script will need permission to write to and delete from the file system of the other servers.
  • As it is, the script drops the app_offline.htm file for every web application zone. In my case, I only wanted to block users from accessing the zone configured to use our custom claims provider. I have left in the check for this in case you find yourself in a similar situation.
  • The maintenance page must be entirely self-contained. By this I mean that all CSS and JS must be embedded and even images must referenced using inline base64 representations. See here for an easy way to achieve this.
  • If you want to perform an IISRESET and stop/start the SharePoint Timer Service at each WFE before taking down the maintenance page then uncomment the relevant lines.
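
To illustrate the core of the technique, here is a heavily simplified sketch (it is not the full script; the URL and file path are placeholders, it only targets the default zone, and error handling is omitted):

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$webAppUrl   = "https://intranet.contoso.local"
$pageContent = Get-Content -Path "C:\Deploy\app_offline.htm" -Raw    # the fully self-contained maintenance page

# Physical path of the IIS web site for the web application's default zone
$webApp  = Get-SPWebApplication $webAppUrl
$iisPath = $webApp.IisSettings[[Microsoft.SharePoint.Administration.SPUrlZone]::Default].Path.FullName

# Every farm server running the web application service is a WFE we need to touch
$wfeServers = Get-SPServer | Where-Object {
    $_.ServiceInstances | Where-Object {
        $_.TypeName -eq "Microsoft SharePoint Foundation Web Application" -and $_.Status -eq "Online"
    }
}

foreach ($server in $wfeServers)
{
    Invoke-Command -ComputerName $server.Address -ScriptBlock {
        param($path, $content)
        # Drop the page to take the site offline; remove the file instead to bring it back
        Set-Content -Path (Join-Path $path "app_offline.htm") -Value $content
    } -ArgumentList $iisPath, $pageContent
}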

If you find this helpful please subscribe to our feed and feel free to leave a comment with your thoughts.