I recently wrote a post for my employer about the history of SharePoint extensibility models. It also touches on how we as a company settled on the model with which we are currently delivering our Intranet/Digital Workplace solution. I discuss the Feature Framework, Farm and Sandboxed Solutions, the SharePoint Add-in Model, the SharePoint Framework, Remote Provisioning, and more.
The Office 365 CDN (Content Delivery Network) may be activated to host SharePoint Online files in a more globally accessible manner. The general premise is that static assets can be served to users from a location more local to them than the data centre in which the Office 365 tenant is located.
I won’t go into the real benefits of this beyond saying that my limited testing at this point leads me to believe that the performance impact of using a CDN will be negligible for the vast majority of users/organisations. This is because the volume of data which can be served via the CDN is not a significant proportion of the data impacting page load speed.
Regardless, the documentation around how to get started with the Office 365 CDN is decent; Microsoft’s general availability announcement (linked below) is a good place to start.
Private CDN with auto-rewrite. Image credit to Microsoft (https://dev.office.com/blogs/general-availability-of-office-365-cdn)
A few gotchas I’ve noticed
Fetching an image rendition using the width query string parameter does NOT correctly return the image rendition as configured. It simply scales the image to the specified width (i.e. no cropping or positioning is performed).
If all users are located in the same region as the Office 365 tenant, turning on the CDN may reduce performance due to CDN priming (replication of files to the CDN) and will complicate updates to files which are replicated (e.g. JavaScript in the Style Library).
Search web parts must be configured with ‘Loading Behaviour’ set to ‘Sync option: Issue query from the server’ in order for the auto-rewrite of CDN-hosted files to occur. This is true for display templates as well as for the value of the PublishingImage managed property.
Office 365 CDN PowerShell Samples
I’ve got some sample PowerShell below showing how to activate the Office 365 CDN (there are private and public options; you can use either or both) and associate origins with it (an origin is a document library which will be replicated to the CDN).
I’ve also got a simple sample of how to remove all origins, as there is no single cmdlet for this. It is worth noting that although an enabled CDN with no origins is functionally identical to a disabled CDN (i.e. no files are being replicated), they are not the same from a configuration perspective.
Please note that these are just sample scripts and have not been parameterised as you may require.
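Something along these lines; a minimal sketch assuming the SharePoint Online Management Shell is installed (‘yourtenant’ and the origin URL are placeholders):

# Connect to the SharePoint Online admin centre
Connect-SPOService -Url "https://yourtenant-admin.sharepoint.com"

# Activate the CDN (private shown here; repeat with -CdnType Public for the public CDN)
Set-SPOTenantCdnEnabled -CdnType Private -Enable $true

# Associate an origin (a document library which will be replicated to the CDN)
Add-SPOTenantCdnOrigin -CdnType Private -OriginUrl "sites/intranet/style library"

# Review the configured origins
Get-SPOTenantCdnOrigins -CdnType Private

# Remove all origins (there is no single cmdlet for this)
Get-SPOTenantCdnOrigins -CdnType Private | ForEach-Object {
    # Origins which are still provisioning are suffixed with "(configuration pending)"
    $originUrl = ($_ -replace '\(configuration pending\)', '').Trim()
    Remove-SPOTenantCdnOrigin -CdnType Private -OriginUrl $originUrl
}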
As you may be aware, Yammer is ‘on by default’ now as part of Office 365. This means you *may* get a Yammer network provisioned at the *.onmicrosoft.com domain associated with your tenancy.
Yammer network using a *.onmicrosoft.com domain
Here’s a tip
If you want to take advantage of this, make sure that you create the tenancy as a Trial and then buy licences later. Taking this approach means you get the Yammer ‘on by default’ experience more or less as you would expect: it will be provisioned within an hour or so (maybe sooner).
If you instead purchase a tenant right away, they apparently wait a while, as the expectation is that you’ll want to use a vanity domain for your Yammer network… “But I want Yammer ‘on by default’ with my paid tenancy!” You can create a Yammer network on your default Office 365 domain just by logging into Yammer with your *.onmicrosoft.com account, BUT you don’t get the integrations with Office 365: the app launcher tile and the link to Yammer admin from the admin centre. Based on my experience, you will get these integrations eventually – I saw them appear more than 6 weeks later!
I had a long call with Microsoft support regarding this and they weren’t able to provide any solid explanation or reasoning; the information contained in this post is based on my experience rather than official guidance.
Summary
Create new (dev/test) Office 365 tenancies using a Trial subscription and buy licences later if you want to use Yammer without a vanity domain.
The user photo story in Office 365 is not so straightforward. Photos are stored in Active Directory (AD) on-premises, Azure Active Directory (AAD), Exchange Online (EXO), SharePoint Online (SPO), and at first appearances possibly elsewhere as well (where does my Delve profile picture live? What about my Skype for Business (SfB) avatar?).
I have put together a flow diagram to represent how this actually works. It aims to demonstrate where user photos are stored and where different applications fetch user photos from (if they don’t store the images), and leads to some recommendations about user photo synchronisation.
Please note the date of this article (August 2016) and be conscious that Office 365 is changing rapidly and the following recommendations may have changed (e.g. prior to the Delve user profile page, the SharePoint user profile page referenced images stored in SharePoint rather than Exchange; changes such as these will continue to occur).
User photos: the diagram
User photos flow in Office 365
Where applications store and fetch user photos
| Photo Location | Comments | Size | Is source? |
| --- | --- | --- | --- |
| On-premises AD DS, in the thumbnailPhoto attribute | | 100KB maximum; recommended to be 96×96 or 48×48 | Yes |
| Azure AD, in the thumbnailPhoto attribute | Usually synced from AD DS via Azure AD Connect | 100KB maximum; recommended to be 96×96 or 48×48 | No; synced from AD |
| Exchange Online, as a property of the mailbox | Provided manually by users, or a bulk import can be scripted if source photos can be located and named appropriately. If not provided, Exchange will reference the AAD thumbnailPhoto in some instances, but only if the thumbnailPhoto is less than 10KB. Does not sync back to AD. | 500KB maximum; recommended to be 648×648 | Yes |
| SharePoint Online ‘User Photos’ library | Three renditions of the EXO photo are automatically created in SharePoint after upload to EXO. It generally takes up to 72 hours to see changes to the EXO photo here; sometimes we see that a user must ‘touch’ their profile before the sync will be performed. NOTE: updating the user profile photo via the Delve profile actually updates the EXO profile photo and performs no actions directly in SharePoint Online. | Small is 48×48, medium is 72×72; large changes depending on the source image but is always square (I have seen as small as 120×120 and as large as 300×300; the PnP image upload solution uploads these as 200×200) | No; synced from EXO |
| Skype for Business | Does not store any images; uses the high resolution Exchange image if available, otherwise uses the AD thumbnailPhoto | EXO image or AD thumbnailPhoto | No; read from EXO |
| Delve user profile | Does not store any images; uses the high resolution Exchange image if available, otherwise uses the AD thumbnailPhoto | EXO image or AD thumbnailPhoto | No; read from EXO |
| Yammer | Also stores its own photo; out of scope of this discussion for now | | Yes |
Likely issues and resolutions
| Issue | Resolution |
| --- | --- |
| Exchange Online user photo is low quality (and in turn so are the SPO and SfB photos) | The source image coming from AD was/is low quality. EXO user photos can be updated by users individually, or if high resolution source photos are available the import can be scripted. Source images should be JPGs of 648×648 (resizing and compression can also be scripted). |
| Exchange Online user photo is high quality but the SfB photo is low quality | High resolution photos from Exchange will be used as long as both Exchange and SfB/Lync are of new enough versions (2013 or greater) and SfB is configured to allow all photos (not just those from AD). NB: if a user doesn’t have a mailbox (e.g. not licenced) then they will be displayed using the AD photo. |
| There is no Exchange Online user photo (and in turn there is no SPO or SfB photo) | A photo has not been imported to the user’s EXO mailbox, and the AAD thumbnailPhoto either does not contain an image or that image is greater than 10KB. Import of photos up to 500KB to the EXO mailbox can be scripted (the source images could be on a file share, or in AAD). |
| Changes to user photos are reflected quickly in Exchange and Skype but take days to replicate to SPO | Exchange to SPO synchronisation is a periodic process and can take up to 72 hours. A custom solution could perform this replication on demand (e.g. at the same time the EXO user photos are set). |
| User photos changed in other systems which update AD are not reflected in EXO, SPO, or SfB (e.g. a user in an on-premises SharePoint farm updates their user photo) | When AD is updated, it is synchronised with AAD, but that is as far as it gets: the “sync” from AAD to EXO is a one-off import rather than a true sync. It is unlikely to be desirable to create a custom sync relationship here, as users will want to be able to update EXO directly and won’t want their photos overwritten. |
| User photos updated in EXO aren’t replicated to other systems which share an AD (e.g. an on-premises SharePoint farm) | The user photo in EXO is not synched back to AD; it can’t be consistently, as the AD thumbnailPhoto attribute only supports photos up to 100KB whereas EXO supports larger images. There is potential for a custom solution to sync images back to AD after resizing/compressing them to under 100KB; however, the general recommendation is that the optimal AD thumbnailPhoto size is 10KB and 96×96. |
Recommendations
Use Exchange user photos as the master. Allow users to update their user photos, but pre-populate their photos where possible, before end users are given any access to the system.
If high resolution photos are available, script the import of high resolution photos (648×648) to Exchange Online (see the Set-UserPhoto cmdlet and the sample script below). These will then be visible in Exchange, in Skype, and, once processed, in SharePoint Online. In a dispersed environment this may have to be managed by many teams rather than trying to compile a single list of all user photos.
Users may then update their user profile photo directly via Outlook or indirectly via their Delve profile.
If synchronisation back to AD is required in order to serve other applications (e.g. an on-premises SharePoint farm) then a custom solution could provide synchronisation from EXO to AD, but this process should compress and shrink images, as the recommended size of thumbnailPhoto images is only 96×96 and 10KB.
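As a minimal sketch of the scripted import mentioned above, assuming a folder of 648×648 JPGs named by user principal name (the folder path and account names are placeholders):

# Connect to Exchange Online via remote PowerShell
$credential = Get-Credential
$session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri "https://outlook.office365.com/powershell-liveid/" -Credential $credential -Authentication Basic -AllowRedirection
Import-PSSession $session

# Upload each photo to the matching mailbox (Set-UserPhoto accepts images up to 500KB)
Get-ChildItem -Path "C:\UserPhotos" -Filter "*.jpg" | ForEach-Object {
    $upn = $_.BaseName   # assumes files are named e.g. jane.doe@yourtenant.com.jpg
    Set-UserPhoto -Identity $upn -PictureData ([System.IO.File]::ReadAllBytes($_.FullName)) -Confirm:$false
}

Remove-PSSession $session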
Azure AD apps (a.k.a Azure Active Directory apps, a.k.a AAD apps) are an essential component when interacting with Office 365 data outside of SharePoint – Mail, Calendar, Groups, etc.
As an O365 developer I have found myself writing JavaScript code against AAD apps (using ADAL.js) and often, especially during development, found myself entering a long list of Reply URLs. Reply URLs must be specified for any location from which authentication to AAD occurs. From a practical standpoint, this results in someone (an Azure administrator) having to update the list of Reply URLs every time a web part is inserted into a page or a new site is provisioned which relies on an Azure AD app.
If this is not done, the user is redirected to an Azure login failure page with the message ‘The reply address … does not match the reply addresses configured for the application’.
Error when Reply URL is not correctly specified
Perhaps the following is documented elsewhere but I have not come across it – a Reply URL can be specified using wildcards!
Using wildcard Reply URLs when configuring an AAD app
Probably the most common use for this is to end a Reply URL with an asterisk (wildcard) which will permit any URL which begins with the characters preceding it.
e.g. https://tenant.sharepoint.com/*
This example would support any URL coming from any page in SharePoint Online from within the named tenant.
It is also possible to use the wildcard character elsewhere in the Reply URL string.
e.g. https://*.sharepoint.com/*
This example would support any URL coming from any page in SharePoint Online from within *any* tenant.
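Reply URLs can be maintained in the Azure portal as shown above, or scripted. As a rough sketch using the Azure AD PowerShell module (the application object ID and tenant name are placeholders):

# Requires the AzureAD PowerShell module
Connect-AzureAD

# Overwrite the app's Reply URLs with a single wildcard entry
Set-AzureADApplication -ObjectId "00000000-0000-0000-0000-000000000000" -ReplyUrls @("https://tenant.sharepoint.com/*")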
Armed with this knowledge, be responsible and strictly limit how it is utilised. The Reply URL check is a security feature, and it is important that only trusted locations are allowed to interact with your app. I recommend only using wildcard Reply URLs in development environments.
Delve, as part of the Office 365 suite, provides a number of useful pages for finding content or people that are trending around you or that you recently interacted with. As a developer, you will often find these pages are the perfect target for “See More” links as part of customisations written using the Office Graph. Or perhaps as an administrator you would like to configure a promoted link on a team site home page to navigate to a user’s ‘Your Recent Documents’ page in Delve, for example.
The Delve Recent Documents page. Note that the URL contains the user’s AAD object ID.
Delve Links – a minor problem
When you visit pages that show content relevant to a specific user (such as Your Recent Documents, or the Recent Documents page for another user), the URL of that page contains a query string variable ‘u’, with the value of this variable equal to the Azure Active Directory (AAD) object ID of the user. Azure Active Directory is the identity provider that backs Office 365 and is out of the scope of this post. If this parameter is not provided then Delve falls back to the Delve homepage. I would have preferred it to just use the current user when the parameter is not present, but no, this is how it works.
The ‘u’ query string parameter can be replaced with the ‘p’ query string parameter, where the value of ‘p’ is the user’s account name – the email address they use to log in.
This value is present on any SharePoint 2013+ page via the JavaScript variable: _spPageContextInfo.userLoginName
This can be utilised as follows:
var mySiteHostUrl = "https://yourtenant-my.sharepoint.com"; // 'yourtenant' is a placeholder for your tenant name
var pageKey = "liveprofilemodified"; // liveprofilemodified = 'Recent Documents', liveprofileworkingwith = 'People page'
var delveUrl = mySiteHostUrl + "/_layouts/15/me.aspx" + "?v=" + pageKey + "&p=" + _spPageContextInfo.userLoginName;
Delve Links – side note
The account name is available as the Office Graph property AccountName.
The AAD object ID is available as the Office Graph property AadObjectId.
If you create a SharePoint site column (a note field in this case), associate it with a site content type, and then associate that content type with a list in a sub site, the site column will be available on that library. Obvious, right?
However, when you update the site column (and push all changes to lists and libraries) not *all* of the changes you make are in fact pushed down. An example of this is the setting that dictates whether a note field should allow rich text or enforce plain text. If you change this setting at the site column level it will *not* propagate to libraries which already exist. New instances of the column (say, if you associated the content type with a list for the first time) will be configured correctly, but existing list-level instances are not updated. NOTE: this is only true for properties specific to a particular column type; common properties such as ‘required’ will be pushed down to existing instances of the column at the list level.
Configuring a SharePoint note field
So you want to change a list-level instance of a plain text note column to a rich text note column (or vice versa, or otherwise change column-specific properties of another field type)? You need to do it for every list where the column is in use. That would be very tedious to do via the SharePoint UI, but you can’t anyway: the UI only supports changing the set of common field properties (type, required, hidden, etc).
In comes PowerShell. Below you will find a script which updates a plain text note column to be a rich text note column. It is important to note that this script only updates the list-level columns and not the site column. This means that after running the script, new instances will continue to inherit the site column configuration.
The script is written for SharePoint Online (and assumes that the SharePoint Online Client Components SDK is installed) but for this to work on-premises you would only need to update the referenced assemblies (v15 for 2013) and modify the code which passes the credentials.
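A minimal sketch of such a script follows; the URL, list title, field name, and credentials are placeholders, and it assumes the plain text field’s schema carries an explicit RichText="FALSE" attribute:

# Load the SharePoint Online CSOM assemblies (paths assume the v16 SDK)
Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.dll"
Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"

$webUrl    = "https://yourtenant.sharepoint.com/sites/teamsite"
$listTitle = "Documents"
$fieldName = "MyNoteField"

$ctx = New-Object Microsoft.SharePoint.Client.ClientContext($webUrl)
$password = Read-Host -Prompt "Password for admin@yourtenant.com" -AsSecureString
$ctx.Credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials("admin@yourtenant.com", $password)

# Fetch the list-level instance of the column (not the site column)
$field = $ctx.Web.Lists.GetByTitle($listTitle).Fields.GetByInternalNameOrTitle($fieldName)
$ctx.Load($field)
$ctx.ExecuteQuery()

# Flip the RichText attribute in the field's schema XML
$field.SchemaXml = $field.SchemaXml -replace 'RichText="FALSE"', 'RichText="TRUE" RichTextMode="FullHtml"'
$field.Update()   # updates the list-level field only; the site column is untouched
$ctx.ExecuteQuery()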
After recently implementing an Azure-based solution to mitigate SharePoint Online’s poor image rendition performance by utilising Azure CDN (see Chris O’Brien’s post on this issue, see Fran R’s post on other Image Rendition issues) I’ve reached a few conclusions regarding setting appropriate cache control headers. It is important to reach a practical balance between performance and receiving updates to files.
Before continuing it is important to understand the fundamental building blocks when using a CDN. At any time a file can be present in three location types: the blob or source file, the CDN endpoint(s), and users’ browser caches. In the case of Azure CDN, the source file must be a blob in Azure Blob Storage. Depending on the CDN/configuration it is likely that the file may be cached at many (dozens) of CDN endpoints dispersed around the globe. Without a CDN the only consideration is the cache timeout for files stored at the user’s browser cache. When considering a CDN we must also consider the cache timeout between the CDN endpoint and the source file.
Another important point to call out is that CDNs generally only push content to an endpoint when it is first requested: on-demand. This will incur a delay for the first user to request that asset from a given endpoint, while the source blob is transferred to the endpoint. The impact of this will differ depending on the distance between the source blob and the CDN endpoint, and on the file size. It is this process that increasing the s-maxage header prevents (discussed below).
Relevant cache control headers
Definitions
max-age : Defines the period during which the client will use the cached file without contacting the server. ‘Client’ refers to a user’s browser cache as well as a CDN.
s-maxage : If provided, overrides max-age for CDNs only
public : Explicitly marks the file as not user specific
no-transform : Proxy servers may compress or encode images to improve performance or reduce bandwidth. This header prevents this from occurring. It is preferable to omit this header, assuming you can spare the effort to ensure the files being served are not adversely affected.
A good summary of the many remaining cache control headers that I didn’t feel were relevant to this post can be found here: A beginners guide to HTTP cache headers
In practice
For an image that has been previously requested:
When s-maxage has not expired and max-age has not expired, server responds with 200 (OK), the file is not downloaded again [0ms]
When s-maxage has not expired but max-age has expired, server responds with 304 (not modified), the file is not downloaded again [<100ms]
When s-maxage has expired but max-age has not expired, server responds with 200 (OK), the file is not downloaded again [0ms]
When s-maxage has expired and max-age has expired and the blob has not changed, server responds with 304 (not modified), the file is not downloaded again [<100ms]
When s-maxage has expired and max-age has expired and the blob has changed, server responds with 200 (OK), the file is downloaded again [download image]
A request for an image will return 200 (OK) until max-age has expired, and then 304 (not modified) for every subsequent request until the blob is updated. Once updated, this process repeats.
If an existing image is updated, the longest a user can wait to see the updated image is:
Without clearing browser cache: max-age + s-maxage
With clearing browser cache: s-maxage
If a user views an image from the CDN for the first time, it is only guaranteed to be the latest version of that image if the blob hasn’t been updated in the last s-maxage.
SharePoint library images are served with a max-age of 24 hours
As SharePoint library images are not served via a CDN they have an effective s-maxage of 0
My recommendations
Keeping all of the above in mind, I feel that the most important factor is to replicate the experience that users expect from images being served from the SharePoint environment. This can be presented as a couple of simple rules:
max-age + s-maxage = 24 hours = 86400 seconds
s-maxage is as low as possible whilst satisfying bandwidth and performance targets (especially for locations most distant to the source blob)
For a recent SharePoint/CDN solution, I used the following cache control headers:
max-age: 23 hours
s-maxage: 1 hour
public
no-transform
Which looks like this: no-transform,public,max-age=82800,s-maxage=3600
Setting the cache headers served by Azure CDN and Azure Blob Storage
When working with cache control headers in Azure, they are set on the blob itself. It is not a CDN configuration setting.
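As a rough sketch of doing this with the 2016-era Azure PowerShell storage cmdlets (the account name, key, container, and endpoint URL are placeholders):

# Build a storage context for the account hosting the blobs
$storageContext = New-AzureStorageContext -StorageAccountName "yourstorageaccount" -StorageAccountKey "<storage-account-key>"

# Apply the Cache-Control header to every blob in the container
$cacheControl = "no-transform,public,max-age=82800,s-maxage=3600"
Get-AzureStorageBlob -Container "images" -Context $storageContext | ForEach-Object {
    $_.ICloudBlob.Properties.CacheControl = $cacheControl
    $_.ICloudBlob.SetProperties()   # persists the new header on the blob itself
}

# Spot-check the header being served via the CDN endpoint
(Invoke-WebRequest -Uri "https://yourcdnendpoint.azureedge.net/images/example.jpg" -Method Head).Headers["Cache-Control"]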
For solutions that are contained in a single site collection, or span a small number of site collections, or are in a tenant where the other solutions are not trusted or are unknown, I have a strong preference for site collection scoped search schema rather than tenant scoped.
Side note: I am yet to come across a situation where I would use site scoped search schema. In my mind, the existence of search schema at this level only serves to confuse.
The search schema hierarchy in SharePoint Online. There is also site scoped search schema, at the lowest level, which is not shown here.
For those that aren’t fully aware, search schema (the set of managed properties that are accessible via the search framework) can be provisioned at the tenant, site collection, or site scope. These scopes are hierarchical such that managed properties are inherited from the tenant scope down to the site scope but can be overridden along the way. There are some good articles that delve into this in more detail.
By provisioning search schema at the site collection level you are mitigating the risks of errors related to other solutions changing the properties which your solution relies upon. This is especially relevant in SharePoint Online where all solutions in the tenant have to share a common set of RefinableTypeXX managed properties.
There are some important exceptions, of course.
People Search, a.k.a User Profile Search, a.k.a Local People Results
In SharePoint Online, people properties are indexed on a very slow schedule. We requested more information from Microsoft regarding this and were told that this schedule is ‘confidential’. I have found that when using site-collection scoped managed properties it can take *weeks* for them to get populated. I have found much better (although still poor) performance using tenant scoped properties (usually within a few days). Assuming you do require custom search schema for people properties I would still recommend provisioning all remaining managed properties (all those not mapped to people properties) at the site collection level.
Many site collections
Of course, having many site collections which require the same search schema is a valid reason to go tenant scoped. This is purely due to the management of the properties going forwards. A solid scripted deployment procedure should not care whether you are provisioning search schema to 1 or 50 site collections – but anyone maintaining the solution will definitely care if they have to update 50 schemas manually, or are suddenly required to script something which they feel should be *easy*. Even in this scenario you should still weigh how much you trust other solutions in the tenant against the impact of finding out one day that your managed properties are mapped incorrectly. Depending on your solution this could lead to errors that are left undetected, or conversely could obviously break your home page.
There is a somewhat confusing logic behind when the FOLLOW button is displayed on the search results hover panel (a.k.a document preview).
A document hover panel with both the POST and FOLLOW buttons present
What am I talking about?
If you are building a solution that relies on the following of documents but you are using Yammer rather than the SharePoint social feed then you may be wondering why, from the search results hover panel, you can follow pages, users, sites, but not most document types.
NB. If you are finding that you can’t follow anything, check that the web scoped feature ‘Follow Content’ has been activated on each site which contains content you wish to be able to follow.
NB. You can still follow the document types in question by clicking ‘view in library’ and using the library item menu to follow.
In many cases, wanting both POST and FOLLOW doesn’t make a lot of sense, as a primary reason for following documents is to populate the activity feed, which is not available when Yammer is being used as the enterprise social experience. As such, please consider whether you want this behaviour at all. In my scenario the user’s list of followed documents is promoted to the home page, and bookmarking documents is a key user story.
What is going on?
The search results hover panel is built from a number of display templates which you can read about in more depth here (TechNet) or here (Chris O’Brien) or many other places.
Importantly, there is a display template which defines the common actions (buttons) across the bottom of the hover panel and when to display them. The display template is called Item_CommonHoverPanel_Actions and can be found here:
Site Settings > Master Pages and Page Layouts > Display Templates > Search > Item_CommonHoverPanel_Actions.html
If you inspect this display template you will find an if else block around the rendering of the POST and FOLLOW buttons. The logic can be summarised as: The POST button is visible if Yammer is enabled and the result type supports it, otherwise the FOLLOW button is visible if the result type supports it, at no time will both buttons be visible.
If you download a copy of the display template HTML file, update it to remove the ‘else’ (so that the POST and FOLLOW rendering conditions are evaluated independently), and then upload it again, you will find that both the POST and FOLLOW buttons are displayed in the search hover panel when supported. Success!
But is it okay to update that file?
The short answer is yes. Take care as this file is used by every hover panel in SharePoint (to my knowledge, there may be some completely unique ones) and so changes could break something that isn’t obvious.
The major risk is that Microsoft may decide to update the hover panel in a way which requires them to produce a new version of the display template file (they have done this previously when introducing the POST button). If you have modified this file, your changes will be lost. This can happen without warning (unless you have a second tenant on first release to catch these issues before they hit production – you should be doing this!).
For very minor updates such as this, and to support non-critical functionality, it may be okay to make these changes and be prepared to re-implement them should Microsoft issue an update.
The alternative is to make a copy of the display template with a new name. This approach means that your changes will not get overwritten, but it also means that your solution will not get the updates that would otherwise be pushed to this file. We call this ‘customisation tax’ and it is a trade-off as to which way you’d rather push changes.
In this particular scenario this latter approach is not very practical as every result type references the existing display template. You would be required to make copies of all the result type display templates that are applicable (possibly a dozen or more), and update the result types themselves to use your new templates. Unless you are bypassing result types and using a single display template for all results, this feels overly complex for such a minor change, but major changes will necessitate the effort.
EDIT: A colleague of mine, Luis Manez, pointed out that with a little JS you can force a custom hover panel to be rendered for all result types. You can read about it (approach one) and some other approaches to associating custom hover panels here (Elio Struyf).