However, when doing this you may have encountered the following error:
Could not install package ‘Microsoft.IdentityModel.Clients.ActiveDirectory 3.13.9’. You are trying to install this package into a project that targets ‘.NETPortable,Version=v4.5,Profile=Profile259’, but the package does not contain any assembly references or content files that are compatible with that framework. For more information, contact the package author.
I’m using Visual Studio 2017 (VS2017) with Xamarin for Visual Studio 4.5. The Cross Platform App project template has its Target Framework Profile configured to Profile259, as stated in the error. A profile is an alias for a set of supported target frameworks. Profile259 doesn’t include the Windows 10 Universal Windows Platform (UWP) but does include Windows Phone 8 and 8.1, which the ADAL package does not support. Profile7 represents a set of target platforms which is supported. The TargetFrameworkProfile can be seen by viewing the csproj file in a text editor.
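For reference, the profile appears in the csproj like this (abridged; the surrounding properties will vary by project):

```xml
<PropertyGroup>
  <TargetFrameworkVersion>v4.5</TargetFrameworkVersion>
  <TargetFrameworkProfile>Profile259</TargetFrameworkProfile>
</PropertyGroup>
```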
Changing the target frameworks
In order to change the set of target frameworks, right-click the project, click Properties, then click Change... under Targeting.
Add Windows Universal 10.0. Adding support for UWP implicitly removes support for Windows Phone 8 and 8.1. If you don’t see Windows Universal 10.0, it may be because you need to have Windows 10 installed.
Click OK. And you’ll see another error:
In order to get around this, go back to the NuGet Package Manager and uninstall all packages installed against the PCL project. In my scenario, this is only the Xamarin.Forms package. If you’ve got others because you are working from another template, uninstall those too, but remember to note them down as you’ll likely want to re-install them once the target framework profile has been changed.
You can now go back to the project properties and successfully update the target framework profile. Note that after adding Windows Universal 10.0 it may look as though it has not been added. This appears to be a UI bug, but as long as there are no errors it will have worked correctly. You can confirm this by checking the TargetFrameworkProfile in the csproj file.
Now that the target framework profile has been updated, the NuGet packages which were uninstalled can be re-installed, and the ADAL library should now install successfully as well.
The ADAL.js library exists as a solution to all five of the OAuth grants when working against AAD as the identity provider. Unfortunately, it is currently not well maintained and is overcomplicated. From a user experience perspective, the implementation discussed in this post avoids the need to redirect in order to authenticate; it happens seamlessly in the background via a hidden iframe.
A great article on the OAuth grants, agnostic of implementation, can be found here.
Thanks to my colleague Paul Lawrence for writing the first iteration of this code.
This code has a dependency on jQuery, mostly just for promises. I know, old school. I expect I’ll write an es6/2016 version of this soon enough but it shouldn’t be a challenge to convert this code yourself.
As I know I’ll get comments about it if I don’t mention it, this code doesn’t send and verify a state token as part of the grant flow. This is optional as far as the OAuth specification is concerned but it should be done as an additional security measure.
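If you do want to add that yourself, a minimal sketch might look like the following (the function names are mine and sessionStorage is an assumption; wire these around the authorize request and the token response):

```javascript
// Generate an unguessable state value, remember it, and send it with the
// authorize request; verify it when the token response comes back.
function createStateToken() {
    var buffer = new Uint32Array(4);
    window.crypto.getRandomValues(buffer); // cryptographically random
    var state = Array.prototype.join.call(buffer, '-');
    sessionStorage.setItem('oauthState', state);
    return state;
}

function verifyStateToken(returnedState) {
    var expected = sessionStorage.getItem('oauthState');
    sessionStorage.removeItem('oauthState'); // single use
    return !!expected && expected === returnedState;
}
```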
Although I’m a Microsoft stack developer and have only tested this with AAD as the identity provider, I believe that it should work for any identity provider that adheres to the OAuth specification for authentication. You would need to play around with the authorisation server URL, as login.microsoftonline.com is specifically for authenticating to AAD. I’d love feedback on this.
By definition, the OAuth implicit flow grant does not return a refresh token. Furthermore, the access token has a short lifetime, an hour I believe, and credentials must be re-entered before additional access tokens can be obtained via the implicit flow grant. The code provided in this post handles this by returning a URL which can be used to re-authenticate when a request fails. This URL can be used behind a link or redirection could be forced to occur automatically.
The following code snippet is an example of using this implicit flow library to call into the Microsoft Graph from within the context of a SharePoint Online page.
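The snippet itself is embedded in the post; purely as an illustration of the shape of a call, usage might look something like this (implicitFlow.getAccessToken and its parameters are hypothetical stand-ins, not the library’s actual API):

```javascript
// Hypothetical usage: acquire a token for the Microsoft Graph, then call
// the Graph with it. On failure the library hands back a URL which can be
// used to re-authenticate (as discussed above).
implicitFlow.getAccessToken({
    clientId: '00000000-0000-0000-0000-000000000000', // your AAD app ID
    resource: 'https://graph.microsoft.com'
}).then(function (accessToken) {
    return $.ajax({
        url: 'https://graph.microsoft.com/v1.0/me',
        headers: { 'Authorization': 'Bearer ' + accessToken }
    });
}).then(function (me) {
    console.log(me.displayName);
}, function (reAuthUrl) {
    window.location.href = reAuthUrl; // or render this behind a link
});
```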
You will need to provide the app ID of an appropriately configured AAD app. And don’t forget that you need to enable implicit flow via the app manifest and associate the correct delegated permissions.
This code should work not only with the Microsoft Graph but also against SharePoint Online endpoints, other AAD-secured resources such as Azure services, or your own AAD-secured and CORS-enabled web API.
[See note above about identity providers other than AAD]
Here is the implicit flow library code itself.
And here is the definition of the cache functions used above. Nothing special here, this could be swapped out with any cache implementation or removed altogether if caching is truly unnecessary or a security concern.
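As an illustration, a sessionStorage-based version might look like this (the function names are assumptions):

```javascript
// A simple expiring cache over sessionStorage. Swap for another store, or
// drop caching entirely if it is a security concern.
function cacheGet(key) {
    var raw = sessionStorage.getItem(key);
    if (!raw) { return null; }
    var entry = JSON.parse(raw);
    if (entry.expires < Date.now()) {
        sessionStorage.removeItem(key); // expired: treat as a miss
        return null;
    }
    return entry.value;
}

function cacheSet(key, value, expiresInSeconds) {
    sessionStorage.setItem(key, JSON.stringify({
        value: value,
        expires: Date.now() + (expiresInSeconds * 1000)
    }));
}
```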
I welcome your comments, especially from anyone who gives this a go outside of Office 365 and the Microsoft stack.
As you may be aware, Yammer is now ‘on by default’ as part of Office 365. This means you *may* get a Yammer network provisioned at the *.onmicrosoft.com domain associated with your tenancy.
Here’s a tip
If you want to take advantage of this, make sure that you create the tenancy as a Trial and then buy licences later. Taking this approach means you get the Yammer ‘on by default’ experience more or less as you would expect: it will be provisioned within an hour or so (maybe sooner).
If you instead purchase a tenant right away, Microsoft apparently waits a while, as the expectation is that you’ll want to use a vanity domain for your Yammer network… “But I want Yammer ‘on by default’ with my paid tenancy”! You can create a Yammer network on your default Office 365 domain just by logging into Yammer with your *.onmicrosoft.com account, BUT you don’t get the integrations with Office 365: the app launcher tile and the link to Yammer admin from the admin centre. Based on my experience, you will get these integrations eventually; I saw them appear more than 6 weeks later!
I had a long call with Microsoft support regarding this and they weren’t able to provide any solid explanation or reasoning; the information contained in this post is based on my experience rather than official guidance.
Create new (dev/test) Office 365 tenancies using a Trial subscription and buy licences later if you want to use Yammer without a vanity domain.
The user photo story in Office 365 is not so straightforward. Photos are stored in Active Directory (AD) on-premises, Azure Active Directory (AAD), Exchange Online (EXO), SharePoint Online (SPO), and at first appearances possibly elsewhere as well (where does my Delve profile picture live? what about my Skype for Business (SfB) avatar?).
I have put together a flow diagram to represent how this actually works. It aims to demonstrate where user photos are stored and where different applications fetch user photos from (if they don’t store the images), and leads to some recommendations about user photo synchronisation.
Please note the date of this article (August 2016) and be conscious that Office 365 is changing rapidly and the following recommendations may have changed (e.g. Prior to the Delve user profile page, the SharePoint user profile page referenced images stored in SharePoint rather than Exchange. Changes such as these will continue to evolve).
User photos: the diagram
Where applications store and fetch user photos
- On-premises AD DS: stores the photo in the thumbnailPhoto attribute. Recommended to be 96×96 or 48×48.
- Azure AD: stores the photo in the thumbnailPhoto attribute, usually synced from on-premises AD DS via Azure AD Connect. Recommended to be 96×96 or 48×48.
- Exchange Online: stores the photo as a property of the mailbox. Provided manually by users, or a bulk import can be scripted if source photos can be located and named appropriately. If not provided, Exchange will reference the AAD thumbnailPhoto in some instances, but only if the thumbnailPhoto is less than 10Kb. Does not sync back to AD. Recommended to be 648×648.
- SharePoint Online ‘User Photos’ library: three renditions of the EXO photo are automatically created in SharePoint after upload to EXO. Small is 48×48, Medium is 72×72, and Large changes depending on the source image but is always square; I have seen as small as 120×120 and as large as 300×300 (the PnP image upload solution uploads these as 200×200). It generally takes up to 72 hours for changes to the EXO photo to appear here, and sometimes a user must ‘touch’ their profile before the sync will be performed. NOTE: updating the user profile photo via the Delve profile actually updates the EXO profile photo; it does not act directly on SharePoint Online.
- Skype for Business: does not store any images. Uses the high resolution Exchange image (read from EXO) if available, otherwise uses the AD thumbnailPhoto.
- Delve user profile: does not store any images for this purpose. Uses the high resolution Exchange image (read from EXO) if available, otherwise uses the AD thumbnailPhoto. Also stores its own photo; out of scope of this discussion for now.
Likely issues and resolutions
- Issue: the Exchange Online user photo is low quality (and in turn so are the SPO and SfB photos).
  Cause: the source image coming from AD was/is low quality.
  Resolution: EXO user photos can be updated by users individually, or, if high res source photos are available, the import can be scripted. Source images should be jpgs of 648×648 (resizing and compression can also be scripted).
- Issue: the Exchange Online user photo is high quality but the SfB photo is low quality.
  Resolution: high resolution photos from Exchange will be used as long as both Exchange and SfB/Lync are of new enough versions (2013 or greater) and SfB is configured to allow all photos (not just those from AD). NB: if a user doesn’t have a mailbox (e.g. not licenced) then they will be displayed using the AD photo.
- Issue: there is no Exchange Online user photo (and in turn there is no SPO photo or SfB photo).
  Cause: a photo has not been imported to the user’s EXO mailbox, and the AAD thumbnailPhoto either doesn’t contain an image or that image is greater than 10Kb.
  Resolution: import of photos up to 500Kb to the EXO mailbox can be scripted (the source images could be on a file share, or in AAD).
- Issue: changes to user photos are reflected quickly in Exchange and Skype but take days to replicate to SPO.
  Cause: Exchange to SPO synchronisation is a periodic process and can take up to 72 hours.
  Resolution: a custom solution can perform this replication on demand (e.g. at the same time the EXO user photos are set).
- Issue: user photos changed in other systems which update AD are not reflected in EXO, SPO, or SfB (e.g. a user in an on-premises SharePoint farm updates their user photo).
  Cause: when AD is updated it is synchronised with AAD, but that is as far as it gets; the “sync” from AAD to EXO is a one-off import rather than a true sync.
  Resolution: it is unlikely to be desirable to create a custom sync relationship here, as users will want to be able to update EXO directly and won’t want their photos overwritten.
- Issue: user photos updated in EXO aren’t replicated to other systems which share an AD (e.g. an on-premises SharePoint farm).
  Cause: the user photo in EXO is not synched back to AD; it can’t be, consistently, as the AD thumbnailPhoto attribute only supports photos up to 100Kb whereas EXO supports larger images.
  Resolution: there is potential for a custom solution to sync images back to AD after having resized/compressed them to under 100Kb; however, the general recommendation is that the optimal AD thumbnailPhoto size is 10Kb and 96×96.
Recommendations
Use Exchange user photos as the master. Allow users to update their user photos, but pre-populate their user photo if possible, before end users are provided any access to the system.
If high resolution photos are available, script the import of high resolution photos (648×648) to Exchange Online (see Set-UserPhoto and this). These will then be visible in Exchange, in Skype, and, once processed, in SharePoint Online. In a dispersed environment this may have to be managed by many teams rather than trying to compile a single list of all user photos.
Users may then update their user profile photo directly via Outlook or indirectly via their Delve profile.
If synchronisation back to AD is required in order to serve other applications (e.g. an on-premises SharePoint farm), then a custom solution could provide synchronisation from EXO to AD, but this process should compress and shrink images, as the recommended size of thumbnailPhoto images is only 96×96 and 10Kb.
Before continuing it is important to understand the fundamental building blocks when using a CDN. At any time a file can be present in three location types: the blob or source file, the CDN endpoint(s), and users’ browser caches. In the case of Azure CDN, the source file must be a blob in Azure Blob Storage. Depending on the CDN/configuration it is likely that the file may be cached at many (dozens) of CDN endpoints dispersed around the globe. Without a CDN the only consideration is the cache timeout for files stored at the user’s browser cache. When considering a CDN we must also consider the cache timeout between the CDN endpoint and the source file.
Another important point to call out is that CDNs generally only push content to an endpoint when it is first requested: on-demand. This incurs a delay for the first user to request that asset from a given endpoint, while the source blob is transferred to the endpoint. The impact of this will differ depending on the distance between the source blob and the CDN endpoint, and on the file size. It is this process that increasing the s-maxage header prevents (discussed below).
Relevant cache control headers
max-age : Defines the period during which the client will use the cached file without contacting the server. ‘Client’ refers to a user’s browser cache as well as a CDN.
s-maxage : If provided, overrides max-age for CDNs only
public : Explicitly marks the file as not user specific
no-transform : Proxy servers may compress or encode images to improve performance or reduce bandwidth. This header prevents that from occurring. It is preferable to avoid this header, assuming you can spare the effort to ensure the files being served are not affected adversely.
When s-maxage has not expired and max-age has not expired, server responds with 200 (OK), the file is not downloaded again [0ms]
When s-maxage has not expired but max-age has expired, server responds with 304 (not modified), the file is not downloaded again [<100ms]
When s-maxage has expired but max-age has not expired, server responds with 200 (OK), the file is not downloaded again [0ms]
When s-maxage has expired and max-age has expired and the blob has not changed, server responds with 304 (not modified), the file is not downloaded again [<100ms]
When s-maxage has expired and max-age has expired and the blob has changed, server responds with 200 (OK), the file is downloaded again [download image]
A request for an image will return 200 (OK) until max-age has expired, and then 304 (not modified) for every subsequent request until the blob is updated. Once updated, this process repeats.
If an existing image is updated, the longest a user can wait to see the updated image is
Without clearing browser cache: max-age + s-maxage
With clearing browser cache: s-maxage
If a user views an image from the CDN for the first time, it is only guaranteed to be the latest version of that image if the blob hasn’t been updated within the last s-maxage.
SharePoint library images are served with a max-age of 24 hours
As SharePoint library images are not served via a CDN they have an effective s-maxage of 0
Keeping all of the above in mind, I feel that the most important factor is to replicate the experience that users expect from images being served from the SharePoint environment. This can be presented as a couple of simple rules:
max-age + s-maxage = 24 hours = 86400 seconds
s-maxage is as low as possible whilst satisfying bandwidth and performance targets (especially for locations most distant to the source blob)
For a recent SharePoint/CDN project, I used the following cache control headers:
max-age: 23 hours
s-maxage: 1 hour
Which looks like this: no-transform,public,max-age=82800,s-maxage=3600
Setting the cache headers served by Azure CDN and Azure Blob Storage
When working with cache control headers in Azure, they are set on the blob itself. It is not a CDN configuration setting.
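As a sketch, here is how that property could be set with the @azure/storage-blob JavaScript SDK (the connection string, container, and blob names are placeholders; PowerShell and the other SDKs expose the same blob property):

```javascript
// Set the Cache-Control header on a blob; the CDN endpoint will honour it.
const { BlobServiceClient } = require("@azure/storage-blob");

async function setCacheControl() {
    const service = BlobServiceClient.fromConnectionString(
        process.env.AZURE_STORAGE_CONNECTION_STRING);
    const blob = service.getContainerClient("assets")
        .getBlobClient("images/logo.png");
    // setHTTPHeaders replaces all of the blob's HTTP headers, so re-set
    // the content type alongside the cache control value.
    await blob.setHTTPHeaders({
        blobCacheControl: "no-transform,public,max-age=82800,s-maxage=3600",
        blobContentType: "image/png"
    });
}

setCacheControl().catch(console.error);
```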
The Amazon Product Advertising API documentation provides some code samples for its use, but none using ASP.NET. A personal interest brought me to play with it, and as it wasn’t entirely trivial to create a signed request as required for authentication, I thought I’d share some working code samples.
The API does surface a WSDL file and as such a Web Reference could be used to generate classes to interact with the API. The sample I am providing here does not take advantage of this and is instead submitting raw REST requests.
I see the most valuable part of this sample as the request signing piece. This sample should not be seen as a best practice for interacting with the API but rather as a utility for request signing.
The order of the query string parameters that are included in the signed string is crucial. They must be ordered by character code (in practice this equates to alphabetically, but with all upper case letters coming before any lower case letters). The API documentation suggests string splitting, sorting, and string joining. This is definitely the approach I would take if you find yourself building queries with dynamic parameters, but I struggle to see the use-case. This sample just uses a hard-coded string with the relevant parameters in the correct order.
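To illustrate the ordering and signing (in Node.js rather than the C# of the sample below; the string-to-sign structure follows the API documentation, but treat this as a sketch):

```javascript
// Build the canonical query string by ordinal (character code) sort, then
// sign it with HMAC-SHA256 as the Product Advertising API requires.
const crypto = require("crypto");

function signRequest(host, path, params, secretKey) {
    const canonicalQuery = Object.keys(params)
        // encodeURIComponent approximates the RFC 3986 encoding AWS
        // expects; a production version should also encode ! * ' ( )
        .map(k => encodeURIComponent(k) + "=" + encodeURIComponent(params[k]))
        .sort() // default sort compares character codes: ordinal order
        .join("&");
    const stringToSign = ["GET", host, path, canonicalQuery].join("\n");
    const signature = crypto.createHmac("sha256", secretKey)
        .update(stringToSign)
        .digest("base64");
    return canonicalQuery + "&Signature=" + encodeURIComponent(signature);
}
```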
Although I haven’t looked in detail yet, the approach taken to sign requests here appears very similar if not identical to that required by the Instagram API, and I am sure many other (social media) APIs.
Below you will find a class called AmazonApiHelper. Further below is an ashx HttpHandler as an example of calling the utility functions provided by the helper class. You’ll need to provide your own values for the following constants:

```csharp
private const string awsSecretKey = "Your secret key goes here";
private const string awsAccessKeyId = "Your access key Id goes here";
private const string associateTag = "Your associate tag goes here";
```
If you use jsLink to override the rendering of list views then you may have noticed that your custom jsLink no longer renders a message when there are no items returned in the view. I am going to discuss with code samples how to display a ‘no items’ message – or at least help you stop overriding it.
If, alternatively, you have a ‘no items’ message being displayed and just want to modify the text, try this link.
If you don’t know what jsLink is then it is worth learning about it. Try this link.
What am I doing wrong?
Chances are you are making the same mistake that many people make; a mistake that has been replicated again and again online, and which doesn’t break anything but does prevent the display of the ‘no items’ message and the paging control. When you override Templates.Header you DO NOT need to override Templates.Footer in order to close tags which you opened in the header.
Although doing so seems to make sense, you can rest assured knowing that tags you open in the header will be closed auto-magically after the item templates have completed rendering. In fact, the footer template is rendered in a different table cell to the header and item templates when this all hits the page. Think of the footer template as a distinct block that is rendered after everything else rather than the end of the same block.
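For example, a header-only override like this minimal sketch keeps the default footer, and with it the ‘no items’ message and paging control:

```javascript
// jsLink override: open a wrapper in the header and let SharePoint close
// it and render the default footer after the items.
(function () {
    SPClientTemplates.TemplateManager.RegisterTemplateOverrides({
        Templates: {
            Header: "<div class='custom-list'>",
            Item: function (renderCtx) {
                return "<div>" + renderCtx.CurrentItem.Title + "</div>";
            }
            // No Footer override: the default footer (no items message,
            // paging control) still renders, and the header's div is
            // closed for us.
        }
    });
})();
```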
By overriding the footer template you are also inadvertently overriding the ‘no items’ message and the list view paging control. You can see exactly what you are overriding by inspecting the default values for the templates. Below is a snippet from clientrenderer.js which shows the default footer template.
So what should you do?
If you just want the default no items message and can get away with not overriding the footer template (as in the first code snippet), then great – you are all done.
If you want a custom message then check out the link at the very top of the article (in summary: renderCtx.ListSchema.NoListItem = "Nada, nothing, zilch";).
If you want to override the footer template or perhaps you want the message to appear within a wrapper tag defined in the header or you want some custom logic behind which message to display then you can do that too – keep reading.
Doing it yourself
I’ve written a utility function, based on the logic in the OOTB footer template, that makes it easier to manage the ‘no items’ text. This function does NOT replicate the paging functionality. If you need paging and are overriding the footer template then you will need to replicate the paging functionality as well; look into clientrenderer.js to find out how MSFT do this.
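The utility itself is embedded in the post; a simplified sketch of its shape follows (property names come from the list view render context; the real OOTB logic handles more cases):

```javascript
// Pick a 'no items' message when the view returned no rows. Does not
// replicate paging.
function getNoItemsMessage(renderCtx, searchTerm) {
    if (renderCtx.ListData.Row.length > 0) {
        return ''; // items were returned; render no message
    }
    var message;
    if (searchTerm) {
        message = 'Your search returned no results';
    } else if (renderCtx.ListTemplateType === 101) { // document library
        message = 'There are no documents to show in this view';
    } else {
        message = renderCtx.ListSchema.NoListItem ||
            'There are no items to show in this view of the list';
    }
    return '<div class="ms-textLarge">' + message + '</div>';
}
```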
Looking at this snippet you can see the if-else block where you can define custom messages for different list templates, or for when the lack of results has occurred only after a search term was provided. This sample should not be considered the superlative version; it just does a basic job in line with what happens by default.
Below are two examples of how you may want to use this. The first overrides the footer template, and the second overrides the header template. The advantage of sticking this code into the header template is that it allows you to wrap the ‘no items’ message in the same wrapper tags that you defined for the main content.
For aiding findability:
There are no items to show in this view of the list
Your search returned no results
Some items might be hidden. Include these in your search
Still didn’t find it? Try searching the entire site.
It would be worth reading the intro of my earlier article to get a better understanding of what is happening in the snippets provided in this post.
As the most common usage will surely be to produce search result page URLs that are refined on a single value, I have written an ‘overload’ function that simplifies calling the method in this scenario (a sketch of it follows the notes below).
The ‘search page URL’ can be provided to the functions in a number of ways including:
“/search” : a URL to the web; the default page for that web will be used. In the case of an Enterprise Search Centre this will be the ‘Everything’ search results page
“/search/Pages/peopleresults.aspx” : a URL to a specific page
Use an absolute URL if you are out of the context of the SharePoint Online tenant in which the search page resides. This will be true for provider hosted add-ins (apps)
If you are writing your own refiner, then pass an empty string and set window.location.hash to the result of the function
This script has no dependencies on other libraries (jQuery, SP.js, etc)
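A sketch of the single-value ‘overload’ (the ‘#Default’ hash format and the ‘ǂǂ’ + hex token encoding follow SharePoint’s refinement URL conventions; verify both against your own search centre, and note the hex encoding here assumes ASCII values):

```javascript
// Build a search results page URL refined on one managed property value.
function getRefinedSearchUrl(searchPageUrl, managedProperty, value) {
    var hex = '';
    for (var i = 0; i < value.length; i++) {
        // Two hex digits per character; multi-byte characters need more care.
        hex += ('0' + value.charCodeAt(i).toString(16)).slice(-2);
    }
    var refiner = {
        n: managedProperty,
        t: ['"\u01C2\u01C2' + hex + '"'], // the ǂǂ prefix marks a hex-encoded token
        o: 'and',
        k: false,
        m: null
    };
    var hash = '#Default=' + encodeURIComponent(JSON.stringify({ k: '', r: [refiner] }));
    return searchPageUrl + hash;
}

// e.g. window.location.href = getRefinedSearchUrl('/search', 'RefinableString00', 'Policy');
```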
The ADAL library simplifies the process of obtaining and caching the authentication tokens required to retrieve data from Office 365. It is possible to avoid the ADAL library and handle this yourself, although I would recommend doing so as a learning exercise only.
I failed to find a simple example of how to achieve this; my search results were often filled with examples of calling the APIs from server-side code, or else utilising the Angular.js framework. This example is based on a more complex example.
The snippet below (after the setup steps) will log to the browser console the results of a call to the files endpoint of the Office 365 unified API, which will return a JSON object containing information about the files in the current user’s OD4B.
Register an Azure Active Directory app. Note that *every* Office 365 subscription comes with AAD and supports the creation of an app
Associate the required ‘permissions to other services’, in this case ‘Read users files’ via the Office 365 Unified API
Allow implicit flow
Not covered explicitly in the above article but also critical are the following steps:
Get the App’s Client ID and copy it into the snippet
Get the Azure Active Directory subscription ID and copy it into the snippet
Once the above steps have been completed, you can try out the snippet by embedding in a Script Editor web part, or you can run it externally to SharePoint as part of, say, a provider hosted app.
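The embedded snippet is not reproduced here; a minimal sketch of the approach, assuming the app ID and tenant values gathered above and ignoring error handling, might look like this:

```javascript
// Implicit flow without ADAL.js: redirect to the AAD authorize endpoint,
// then read the access token from the URL fragment and call the unified API.
var CLIENT_ID = '00000000-0000-0000-0000-000000000000'; // your AAD app ID
var TENANT = 'contoso.onmicrosoft.com';

var match = window.location.hash.match(/access_token=([^&]+)/);
if (!match) {
    // No token yet: request one via the implicit flow.
    window.location.href = 'https://login.microsoftonline.com/' + TENANT +
        '/oauth2/authorize?response_type=token' +
        '&client_id=' + encodeURIComponent(CLIENT_ID) +
        '&resource=' + encodeURIComponent('https://graph.microsoft.com') +
        '&redirect_uri=' + encodeURIComponent(window.location.href.split('#')[0]);
} else {
    // Call the files endpoint and log the JSON response to the console.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'https://graph.microsoft.com/beta/me/files');
    xhr.setRequestHeader('Authorization', 'Bearer ' + decodeURIComponent(match[1]));
    xhr.onload = function () { console.log(JSON.parse(xhr.responseText)); };
    xhr.send();
}
```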
NOTE: I found that the call to the files endpoint fails for certain users. I am still unsure whether this is due to external vs internal users (it is working for internal [.onmicrosoft.com] users) or whether it could be a licencing issue. The /beta/me endpoint is working in all cases.
CORS: Cross-Origin Resource Sharing
ADAL: Active Directory Authentication Library
OD4B: OneDrive for Business
For solutions that are contained in a single site collection, span a small number of site collections, or are in a tenant where the other solutions are not trusted or are unknown, I have a strong preference for site collection scoped search schema rather than tenant scoped.
Side note: I am yet to come across a situation where I would use site scoped search schema. In my mind, the existence of search schema at this level only serves to confuse.
For those that aren’t fully aware, search schema (the set of managed properties that are accessible via the search framework) can be provisioned at the tenant, site collection, or site scope. These scopes are hierarchical such that managed properties are inherited from the tenant scope down to the site scope but can be overridden along the way. There are some good articles that delve into this in more detail.
By provisioning search schema at the site collection level you are mitigating the risks of errors related to other solutions changing the properties which your solution relies upon. This is especially relevant in SharePoint Online where all solutions in the tenant have to share a common set of RefinableTypeXX managed properties.
There are some important exceptions, of course.
People Search, a.k.a User Profile Search, a.k.a Local People Results
In SharePoint Online, people properties are indexed on a very slow schedule. We requested more information from Microsoft regarding this and were told that this schedule is ‘confidential’. I have found that when using site-collection scoped managed properties it can take *weeks* for them to get populated. I have found much better (although still poor) performance using tenant scoped properties (usually within a few days). Assuming you do require custom search schema for people properties I would still recommend provisioning all remaining managed properties (all those not mapped to people properties) at the site collection level.
Many site collections
Of course, having many site collections which require the same search schema is a valid reason to go tenant scoped, purely due to the ongoing management of the properties. A solid scripted deployment procedure should not care whether you are provisioning search schema to 1 or 50 site collections, but anyone maintaining the solution will definitely care if they have to update 50 schemas manually, or are suddenly required to script something which they feel should be *easy*. Even in this scenario you should still weigh how much you trust other solutions in the tenant against the impact of finding out one day that your managed properties are mapped incorrectly. Depending on your solution this could lead to errors that are left undetected, or conversely could obviously break your home page.