
OAuth On-Behalf-Of Flow: Getting a user access token without user interaction. Includes ADAL sample.

OAuth 2.0 (and hence Azure Active Directory) provides the On-Behalf-Of flow to support obtaining a user access token for a resource with only a user access token for a different resource – and without user interaction.
This supports the scenario where a secured Web API acts as an interface to other resources (a.k.a. endpoints) which are secured by the same identity provider and require user context. As a practical example, a mobile client accesses some resources via a middle tier API which provides services such as data processing, caching, API simplification/optimisation, joining of datasets, etc.

The OAuth flow that achieves this is called the On-Behalf-Of flow; this makes sense as we’re facilitating the middle tier to act on behalf of the client when it accesses the resources farther down.

Using the on-behalf-of flow to access a resource via a middle tier API

Some background

Authentication with an OAuth 2.0 identity provider (such as AAD) produces JWT tokens. These tokens include information such as which claims (permissions) the user should be granted and the particular resource for which the token is valid (such as graph.microsoft.com). The OAuth 2.0 framework is specified such that a given token can only ever be valid for a single resource. This means that the token received by an endpoint (such as an Azure App Service Web API) cannot be used to directly authenticate to ‘another resource’, because the token’s resource will be that of the Web API and not the ‘other resource’. To see this I recommend checking out jwt.io and cracking open some tokens.
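For illustration, here is a hypothetical (truncated) decoded token payload; the aud (audience) claim names the one resource the token is valid for:

{
  "aud": "https://yourtenant.onmicrosoft.com/your-web-api",
  "iss": "https://sts.windows.net/your-tenant-id/",
  "scp": "user_impersonation",
  "upn": "user@yourtenant.onmicrosoft.com"
}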

For completeness: the ‘other resource’ could be accessed using app-only authentication if it supports it and if user context is not required (i.e. the return value will be the same regardless of the user), although this may greatly increase complexity in a multi-tenant scenario.

Configuring AAD for on-behalf-of

Before we get to the code, the first hurdle is configuring AAD app registrations correctly. Initially it may be tempting to consider having both the Client and Web API layers utilise a single AAD app registration. After all, they are the same holistic ‘app’, and how else can we get a user to consent to the permissions required by the Web API app when there is no interactive interface at that point? The latter point is resolved by explicitly binding the app registrations so that both are consented to as one; I mention how this is done below. Having two app registrations improves the flexibility of configuration: we can have a Native app registration for the client and a Web API app registration for the Web API, we can have implicit flow configured for one app and not the other, and generally have granular control over configuration. Most vitally, an app registration can’t issue tokens valid for its own resource, so two app registrations are a requirement.

I’ll avoid stepping through the configuration of the app registrations here as this is available elsewhere, including this GitHub project. I will give a high level overview of what needs to happen.

    1. Create app registration for the Web API
      • Assign permissions to the downstream resources (e.g. Microsoft Graph, a custom Web API, etc)
      • If supporting multi-tenant authentication ensure availableToOtherTenants is set to true in the manifest
    2. Create app registration for the Client
      • Assign permissions to the app registration created above for the Web API
      • If supporting multi-tenant authentication ensure availableToOtherTenants is set to true in the manifest
    3. Associate app registrations
      • In the manifest for Web API app registration, configure knownClientApplications to reference the App ID for the app registration created for the Client. E.g. "knownClientApplications": ["9XXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXc"]
        This binds the app registrations such that the Web API app registration is consented to as part of a single consent dialog displayed to a user when they authenticate to the Client app registration.
Before and after the app registrations are associated. Note how ‘Access Mobile App Backend’ is no longer present and instead is expanded to show the individual permissions required by that app.

Access Token Broker code

The following code is a .NET example of how to use the Active Directory Authentication Library (ADAL) to achieve the On-Behalf-Of flow.
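A minimal sketch of such a broker using ADAL v3 (Microsoft.IdentityModel.Clients.ActiveDirectory) follows; the clientId, appKey and authority values are placeholders for your Web API app registration’s settings, and error handling plus a persistent token cache (e.g. the Redis example linked below) are omitted:

using System.Threading.Tasks;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

public class AccessTokenBroker
{
    // Placeholders: the Web API app registration's ID and key, and your AAD authority
    private const string clientId = "web-api-app-registration-client-id";
    private const string appKey = "web-api-app-registration-key";
    private const string authority = "https://login.microsoftonline.com/your-tenant-id";

    // Exchanges the user access token presented to this Web API for a token
    // valid at a downstream resource, e.g. https://graph.microsoft.com
    public async Task<string> GetOnBehalfOfAccessTokenAsync(string userAccessToken, string resource)
    {
        var authContext = new AuthenticationContext(authority);
        var clientCredential = new ClientCredential(clientId, appKey);
        var userAssertion = new UserAssertion(
            userAccessToken, "urn:ietf:params:oauth:grant-type:jwt-bearer");

        AuthenticationResult result = await authContext.AcquireTokenAsync(
            resource, clientCredential, userAssertion);
        return result.AccessToken;
    }
}

ADAL caches tokens in memory by default, so repeated calls for the same user and resource are cheap; a scaled-out Web API should swap in a distributed token cache.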

Credits and further reading

This GitHub project was a very useful resource and a recommended starting point:
https://github.com/Azure-Samples/active-directory-dotnet-webapi-onbehalfof-ca

Microsoft docs OAuth 2.0 On-Behalf-Of flow
https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-v2-protocols-oauth-on-behalf-of

OAuth 2.0 specification
https://tools.ietf.org/html/rfc6749

Redis Token Cache example
https://blogs.msdn.microsoft.com/mrochon/2016/09/19/using-redis-as-adal-token-cache/

Vardhaman Deshpande – always helpful
http://www.vrdmn.com/

I was inspired to write this post not because this information isn’t available but because the information is hard to find if you aren’t familiar with the term “On-Behalf-Of”. Hopefully this post will be found by those of you searching for terms like “trade access token for new resource”, “change token resource”, “use access token with multiple resources/endpoints”, “access Microsoft Graph via Web API”, etc.

Paul.

Xamarin and ADAL: Could not install package Microsoft.IdentityModel.Clients.ActiveDirectory

When starting a new Cross Platform App with Xamarin.Forms you may find yourself needing to integrate the Active Directory Authentication Library (ADAL) with your Portable Class Library (PCL). This is most easily done using nuget.

Using nuget to add the ADAL package

The error

However, when doing this you may have encountered the following error:

Failing to add the ADAL package

Could not install package ‘Microsoft.IdentityModel.Clients.ActiveDirectory 3.13.9’. You are trying to install this package into a project that targets ‘.NETPortable,Version=v4.5,Profile=Profile259’, but the package does not contain any assembly references or content files that are compatible with that framework. For more information, contact the package author.

I’m using Visual Studio 2017 (VS2017) with Xamarin for Visual Studio 4.5. The Cross Platform App project template has its Target Framework Profile configured to Profile259, as stated in the error. A profile is an alias for a set of supported target frameworks: Profile259 doesn’t include Windows 10 Universal Windows Platform (UWP) but does include Windows Phone 8 and 8.1, which ADAL does not support. Profile7 represents a set of target platforms which is supported. The TargetFrameworkProfile can be seen by viewing the csproj file in a text editor.
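For reference, the relevant csproj elements look something like this (surrounding elements trimmed):

<TargetFrameworkVersion>v4.5</TargetFrameworkVersion>
<TargetFrameworkProfile>Profile259</TargetFrameworkProfile>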

Changing the target frameworks

In order to change the set of target frameworks, right click the project, click Properties, then click Change... under Targeting.

Changing the target frameworks for a PCL

Add Windows Universal 10.0. Adding support for UWP implicitly removes support for Windows Phone 8 and 8.1. If you don’t see Windows Universal 10.0 it may be because you need to have Windows 10 installed.
Click OK, and you’ll see another error:

Existing nuget packages prevent changing the target framework profile

The resolution

In order to get around this, go back to the nuget package manager and uninstall all packages installed against the PCL project. In my scenario, this is only the Xamarin.Forms package. If you’ve got others because you are working from another template, uninstall those too, but remember to note them down as you’ll likely want to re-install them once the target framework profile has been changed.

Uninstall any nuget packages on the PCL

You can now go back to the project properties and successfully update the target framework profile. Note that after adding Windows Universal 10.0 it may look as though it has not been added. This appears to be a UI bug, but as long as there are no errors it will have worked correctly. You can confirm this by checking the TargetFrameworkProfile in the csproj file.

Now that the target framework profile has been updated, re-install the nuget packages which were uninstalled earlier; the ADAL library should now install successfully as well.

ADAL Package has installed successfully!

Paul.

OAuth Implicit Flow in JS without ADAL.js

This post provides a lightweight implementation of the OAuth implicit flow grant for obtaining an access token. Implicit flow is appropriate when the current user is authenticated to a common identity provider (e.g. Azure Active Directory a.k.a AAD) and the client (the environment requesting the token) is not secure. A great example of this is making a call to the Microsoft Graph from a page in SharePoint Online using only JavaScript.

The ADAL.js library exists as a solution to all five of the OAuth grants specifically when working against AAD as the identity provider. Unfortunately, it is currently not well maintained and is overcomplicated. From a user experience perspective, the implementation discussed in this post avoids the need to redirect in order to authenticate; it happens seamlessly in the background via a hidden iframe.

Azure Active Directory
  • A great article on the OAuth grants, agnostic of implementation, can be found here.
  • Thanks to my colleague Paul Lawrence for writing the first iteration of this code.
  • This code has a dependency on jQuery, mostly just for promises. I know, old school. I expect I’ll write an es6/2016 version of this soon enough but it shouldn’t be a challenge to convert this code yourself.
  • As I know I’ll get comments about it if I don’t mention it: this code doesn’t send and verify a state token as part of the grant flow. This is optional as far as the OAuth specification is concerned, but it should be done as an additional security measure (see the sketch after this list).
  • Although I’m a Microsoft stack developer and have only tested this with AAD as the identity provider, I believe that it should work for any identity provider that adheres to the OAuth specification for authentication. You would need to play around with the authorisation server URL as login.microsoftonline.com is specifically for authenticating to AAD. I’d love feedback on this.
  • By definition, the OAuth implicit flow grant does not return a refresh token. Furthermore, the access token has a short lifetime, an hour I believe, and credentials must be re-entered before additional access tokens can be obtained via the implicit flow grant. The code provided in this post handles this by returning a URL which can be used to re-authenticate when a request fails. This URL can be used behind a link or redirection could be forced to occur automatically.
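As a minimal sketch, supporting a state token could look like the following (the cache key name is illustrative): generate and cache a nonce, append it to the authorize URL, and reject the response if the returned value doesn’t match.

var createStateToken = function () {
    // a random nonce, cached so it can be verified when the response arrives
    var state = Math.random().toString(36).substring(2);
    window.sessionStorage["candc_adal_state"] = state;
    return state;
};
// append "&state=" + encodeURIComponent(createStateToken()) to the authorize URL,
// then when parsing the iframe href compare the returned state to the cached value:
var isStateValid = function (returnedState) {
    return returnedState === window.sessionStorage["candc_adal_state"];
};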

The following code snippet is an example of using this implicit flow library to call into the Microsoft Graph from within the context of a SharePoint Online page.
You will need to provide an appropriate AAD app ID for your AAD app. And don’t forget that you need to enable implicit flow via the app manifest and associate the correct delegate permissions.
This code should work not only with the Microsoft Graph but also with SharePoint Online endpoints, other AAD secured resources such as Azure services, or your own AAD secured and CORS enabled web API.
[See note above about identity providers other than AAD]

var aadAppClientId = "8BE5AA0E-F900-4BDF-A7CF-71B3CC53B78E";
var resource = "https://graph.microsoft.com";
var query = "/v1.0/me/events";
var tokenFactory = new CC.CORE.Adal.AppTokenFactory(aadAppClientId, resource);
tokenFactory.ExecuteQuery(query)
.done(function (response) {
	// Success!
})
.fail(function (response) {
	// NOTE: Provide a link to renew an expired or yet to be approved session:
	// "Sorry, your session has expired or requires your approval. 
	// <div><a href='" + response.authorizeUrl + "'>Click here to sign in</a></div>";
});

Here is the implicit flow library code itself.

var CC = CC || {};
CC.CORE = CC.CORE || {};

CC.CORE.Log = function (errMsg) {
    // console.log is undefined in IE10 and earlier unless in debug mode, so must check for it
    if (typeof window.console === "object" && typeof console.log === "function") {
        console.log(errMsg);
    }
};

CC.CORE.Adal = (function () {
    "use strict";

    var appTokenFactory = function (aadAppClientId, resource) {
        // redirectUrl is the URL which the iframe will redirect to once auth occurs.
        // we use blank.gif as it is a very low payload
        var redirectUrl = _spPageContextInfo.webAbsoluteUrl + "/_layouts/images/blank.gif";

        // NOTE on security: include the userId in the cache key to prevent the case where a user logs out but
        // leaves the tab open and a new user logs in on the same tab. The first user's calendar
        // would be returned if we didn't associate the cache key with the current user.
        var cacheKey = "candc_cache_adal_" + _spPageContextInfo.userId + "_" + aadAppClientId + "_" + resource;

        this.params = {
            clientId: aadAppClientId,
            redirectUrl: redirectUrl,
            resource: resource,
            cacheKey: cacheKey
        };

        var getAuthorizeUri = function (params, redirectUrl) {
            var authUri = "https://login.microsoftonline.com/common/oauth2/authorize" +
                            "?client_id=" + params.clientId +
                            "&response_type=token" +
                            "&redirect_uri=" + encodeURIComponent(redirectUrl) +
                            "&resource=" + encodeURIComponent(params.resource);
            return authUri;
        };

        var getQueryStringParameterByName = function (name, url) {
            name = name.replace(/[\[\]]/g, "\\$&");
            var regex = new RegExp("[?&#]" + name + "(=([^&#]*)|&|#|$)");
            var results = regex.exec(url);
            if (!results) return null;
            if (!results[2]) return '';
            return decodeURIComponent(results[2].replace(/\+/g, " "));
        };

        // create iframe, set its href, set listener for when loaded
        // to parse the query string. Deferred returns upon parse of query string in iframe.
        var acquirePassiveToken = function (params) {
            var deferred = jQuery.Deferred();

            // create iframe and inject into dom
            var iframe = jQuery("<iframe />").attr({
                width: 1,
                height: 1,
                src: getAuthorizeUri(params, params.redirectUrl)
            });
            jQuery(document.body).append(iframe);

            // bind event handler to iframe for parse query string on load
            iframe.on("load", function (iframeData) {
                parseAccessTokenFromIframe(iframeData, deferred);
            });

            return deferred.promise();
        };

        // handle iframe once it has loaded
        var parseAccessTokenFromIframe = function (iframeData, deferred) {
            // read the iframe href
            var frameHref = "";
            try {
                // this will throw a cross-domain error for any issue other than success,
                // as the iframe will display the error on the login.microsoftonline.com domain
                frameHref = iframeData.currentTarget.contentWindow.location.href;
            }
            catch (error) {
                deferred.reject(error);
                return;
            }

            // parse iframe query string parameters
            var accessToken = getQueryStringParameterByName("access_token", frameHref);
            var expiresInSeconds = getQueryStringParameterByName("expires_in", frameHref);

            // delete the iframe, and event handler.
            var iframe = jQuery(iframeData.currentTarget);
            iframe.remove();

            // resolve promise
            deferred.resolve({
                accessToken: accessToken,
                expiresInSeconds: expiresInSeconds
            });
        };

        // get the most recent token from the cache, or if not available,
        // fetch a new token via iframe
        var getToken = function (params) {
            var deferred = jQuery.Deferred();
            
            // check for cached token
            var tokenFromCache = CC.CORE.Cache.Get(params.cacheKey);
            if (!tokenFromCache) {
                // fetch token via iframe
                acquirePassiveToken(params)
                .done(function (tokenFromIframe) {
                    CC.CORE.Log("ADAL: Fetched token from iframe.");
                    // expire cache a minute before token expires to be safe
                    var cacheTimeout = (tokenFromIframe.expiresInSeconds - 60) * 1000;
                    CC.CORE.Cache.Set(params.cacheKey, tokenFromIframe, cacheTimeout);
                    // resolve the promise
                    deferred.resolve(tokenFromIframe);
                })
                .fail(function (error) {
                    // Logs when rejection is caught
                    deferred.reject(error);
                });
            }
            else {
                CC.CORE.Log("ADAL: Fetched token from cache.");
                // resolve the promise
                deferred.resolve(tokenFromCache);
            }
            return deferred.promise();
        };

        this.ExecuteQuery = function (query, additionalHeaders) {
            var deferred = jQuery.Deferred();
            var params = this.params;
            // get token from cache or via iframe
            getToken(params)
            .done(function (token) {
                // submit request with token in header
                var ajaxHeaders = {
                    'Authorization': 'Bearer ' + token.accessToken
                };
                if (typeof additionalHeaders === "object") {
                    jQuery.extend(ajaxHeaders, additionalHeaders);
                }
                jQuery.ajax({
                    type: "GET",
                    url: params.resource + query,
                    headers: ajaxHeaders
                }).done(function (response) {
                    deferred.resolve(response);
                }).fail(function (error) {
                    deferred.reject({
                        error: error
                    });
                });
            })
            .fail(function (error) {
                CC.CORE.Log('ADAL error occurred: ' + error);
                deferred.reject({
                    error: error,
                    authorizeUrl: getAuthorizeUri(params, window.location.href)
                });
            });
            return deferred.promise();
        };
    };

    return {
        AppTokenFactory: appTokenFactory
    };

})();

And here is the definition of the cache functions used above. Nothing special here; this could be swapped out with any cache implementation, or removed altogether if caching is truly unnecessary or a security concern.

var CC = CC || {};
CC.CORE = CC.CORE || {};

CC.CORE.Cache = (function () {
    var defaultCacheExpiry = 15 * 60 * 1000; // default is 15 minutes
    var aMinuteInMs = (1000 * 60);
    var anHourInMs = aMinuteInMs * 60;

    var getCacheObject = function () {
        // Using session storage rather than local storage as caching benefit
        // is minimal so would rather have an easy way to reset it.
        return window.sessionStorage;
    };

    var isSupportStorage = function () {
        var cacheObj = getCacheObject();
        var supportsStorage = cacheObj && JSON && typeof JSON.parse === "function" && typeof JSON.stringify === "function";
        if (supportsStorage) {
            // Check for dodgy behaviour from iOS Safari in private browsing mode
            try {
                var testKey = "candc-cache-isSupportStorage-testKey";
                cacheObj[testKey] = "1";
                cacheObj.removeItem(testKey);
                return true;
            }
            catch (ex) {
                // Private browsing mode in iOS Safari, or possible full cache
            }
        }
        CC.CORE.Log("Browser does not support caching");
        return false;
    };

    var getExpiryKey = function (key) {
        return key + "_expiry";
    };

    var isCacheExpired = function (key) {
        var cacheExpiryString = getCacheObject()[getExpiryKey(key)];
        if (typeof cacheExpiryString === "string" && cacheExpiryString.length > 0) {
            var cacheExpiryInt = parseInt(cacheExpiryString, 10);
            if (cacheExpiryInt > (new Date()).getTime()) {
                return false;
            }
        }
        return true;
    };

    var get = function (key) {
        if (isSupportStorage()) {
            if (!isCacheExpired(key)) {
                var valueString = getCacheObject()[key];
                if (typeof valueString === "string") {
                    CC.CORE.Log("Got from cache at key: " + key);
                    if (valueString.indexOf("{") === 0 || valueString.indexOf("[") === 0) {
                        var valueObj = JSON.parse(valueString);
                        return valueObj;
                    }
                    else {
                        return valueString;
                    }
                }
            }
            else {
                // remove expired entries?
                // not required as we will almost always be refreshing the cache
                // at this time
            }
        }
        return null;
    };

    var set = function (key, valueObj, validityPeriodMs) {
        var didSetInCache = false;
        if (isSupportStorage()) {
            // Get value as a string
            var cacheValue = undefined;
            if (valueObj === null || valueObj === undefined) {
                cacheValue = null;
            }
            else if (typeof valueObj === "object") {
                cacheValue = JSON.stringify(valueObj);
            }
            else if (typeof valueObj.toString === "function") {
                cacheValue = valueObj.toString();
            }
            else {
                alert("Cannot cache type: " + typeof valueObj);
            }

            // Cache value if it is valid
            if (cacheValue !== undefined) {
                // Cache value
                getCacheObject()[key] = cacheValue;
                // Ensure valid expiry period
                if (typeof validityPeriodMs !== "number" || validityPeriodMs < 1) {
                    validityPeriodMs = defaultCacheExpiry;
                }
                // Cache expiry
                getCacheObject()[getExpiryKey(key)] = ((new Date()).getTime() + validityPeriodMs).toString();
                CC.CORE.Log("Set in cache at key: " + key);
                didSetInCache = true;
            }
        }
        return didSetInCache;
    };

    var clear = function (key) {
        var cache = getCacheObject();
        cache.removeItem(key);
        cache.removeItem(getExpiryKey(key));
    };

    return {
        Get: get,
        Set: set,
        Clear: clear,
        IsSupportStorage: isSupportStorage,
        Timeout: {
            VeryShort: (aMinuteInMs * 1),
            Default: (anHourInMs * 2),
            VeryLong: (anHourInMs * 72),
        }
    };
})();

I welcome your comments, especially from anyone who gives this a go outside of Office 365 and the Microsoft stack.

Paul.

Yammer ‘on by default’ network using default Office 365 domain

As you may be aware, Yammer is now ‘on by default’ as part of Office 365. This means you *may* get a Yammer network provisioned at the *.onmicrosoft.com domain associated with your tenancy.

Yammer network using a *.onmicrosoft.com domain

Here’s a tip

If you want to take advantage of this, make sure that you create the tenancy as a Trial and then buy licences later. Taking this approach means you get the Yammer ‘on by default’ experience more or less as you would expect; it will be provisioned within an hour or so (maybe sooner).


If you instead purchase a tenant right away, they apparently wait a while, as the expectation is that you’ll want to use a vanity domain for your Yammer network… “But I want Yammer ‘on by default’ with my paid tenancy!” You can create a Yammer network on your default Office 365 domain just by logging into Yammer with your *.onmicrosoft.com account, BUT you don’t get the integrations with Office 365 – the app launcher tile and the link to Yammer admin from the admin centre. Based on my experience, you will get these integrations eventually – I saw them appear more than 6 weeks later!

I had a long call with Microsoft support regarding this and they weren’t able to provide any solid explanation or reasoning; the information contained in this post is based on my experience rather than official guidance.

Summary

Create new (dev/test) Office 365 tenancies using a Trial subscription and buy licences later if you want to use Yammer without a vanity domain.

Paul.

User photos in Office 365

The user photo story in Office 365 is not so straightforward. Photos are stored in Active Directory (AD) on-premises, Azure Active Directory (AAD), Exchange Online (EXO), SharePoint Online (SPO), and at first appearance possibly elsewhere as well (where does my Delve profile picture live? what about my Skype for Business (SfB) avatar?).

I have put together a flow diagram to represent how this actually works. It aims to demonstrate where user photos are stored and where different applications fetch user photos from (if they don’t store the images), and leads to some recommendations about user photo synchronisation.

Please note the date of this article (August 2016) and be conscious that Office 365 is changing rapidly and the following recommendations may have changed (e.g. Prior to the Delve user profile page, the SharePoint user profile page referenced images stored in SharePoint rather than Exchange. Changes such as these will continue to evolve).

User photos: the diagram

User photos flow in Office 365

Where applications store and fetch user photos

On-premises AD DS, in the thumbnailPhoto attribute
  • Size: 100Kb maximum; recommended to be 96×96 or 48×48
  • Is source? Yes

Azure AD, in the thumbnailPhoto attribute
  • Comments: usually synced from AD DS via Azure AD Connect
  • Size: 100Kb maximum; recommended to be 96×96 or 48×48
  • Is source? No. Sync from AD

Exchange Online, as a property of the mailbox
  • Comments: provided manually by users, or a bulk import can be scripted if source photos can be located and named appropriately. If not provided, Exchange will reference the AAD thumbnailPhoto in some instances, but only if the thumbnailPhoto is less than 10Kb. Does not sync back to AD.
  • Size: 500Kb maximum; recommended to be 648×648
  • Is source? Yes

SharePoint Online ‘User Photos’ library
  • Comments: three renditions of the EXO photo are automatically created in SharePoint after upload to EXO. It generally takes up to 72 hours to see changes to the EXO photo here; sometimes we see that a user must ‘touch’ their profile before the sync will be performed. NOTE: updating the user profile photo via the Delve profile actually updates the EXO profile photo and does not perform any actions directly in SharePoint Online.
  • Size: Small is 48×48, Medium is 72×72, Large changes depending on the source image but is always square – I have seen as small as 120×120 and as large as 300×300; the PnP image upload solution uploads these as 200×200
  • Is source? No. Sync from EXO

Skype for Business
  • Comments: does not store any images; uses the high resolution Exchange image if available, otherwise uses the AD thumbnailPhoto
  • Size: EXO image or AD thumbnailPhoto
  • Is source? No. Read from EXO

Delve user profile
  • Comments: does not store any images; uses the high resolution Exchange image if available, otherwise uses the AD thumbnailPhoto
  • Size: EXO image or AD thumbnailPhoto
  • Is source? No. Read from EXO

Yammer
  • Comments: also stores its own photo; out of scope of this discussion for now
  • Is source? Yes

Likely issues and resolutions

Issue: Exchange Online user photo is low quality (and in turn so is the SPO photo and SfB photo)
Resolution: The source image coming from AD was/is low quality. EXO user photos can be updated by users individually or, if high res source photos are available, the import can be scripted. Source images should be jpg of 648×648 (resizing and compression can also be scripted).

Issue: Exchange Online user photo is high quality but SfB photo is low quality
Resolution: High resolution photos from Exchange will be used as long as both Exchange and SfB/Lync are of new enough versions (2013 or greater) and SfB is configured to allow all photos (not just those from AD). NB: if a user doesn’t have a mailbox (e.g. not licenced) then they will be displayed using the AD photo.

Issue: There is no Exchange Online user photo (and in turn there is no SPO photo or SfB photo)
Resolution: A photo has not been imported to the user’s EXO mailbox, and the AAD thumbnailPhoto either doesn’t contain an image or that image is greater than 10Kb. Import of photos up to 500Kb to the EXO mailbox can be scripted (the source images could be on a file share, or in AAD).

Issue: Changes to user photos are reflected quickly in Exchange and Skype but take days to replicate to SPO
Resolution: Exchange to SPO synchronisation is a periodic process and can take up to 72 hours. A custom solution can perform this replication on demand (e.g. at the same time EXO user photos are set).

Issue: User photos changed in other systems which update AD are not reflected in EXO, SPO, or SfB (e.g. a user in an on-premises SharePoint farm updates their user photo)
Resolution: When AD is updated, it is synchronised with AAD, but that is as far as it gets: the “sync” from AAD to EXO is a one-off import rather than a true sync. It is unlikely to be desirable to create a custom sync relationship here, as users will want to be able to update EXO directly and won’t want their photos overwritten.

Issue: User photos updated in EXO aren’t replicated to other systems which share an AD (e.g. an on-premises SharePoint farm)
Resolution: The user photo in EXO is not synched back to AD – it can’t be consistently, as the AD thumbnailPhoto attribute only supports photos up to 100Kb whereas EXO supports larger images. There is potential for a custom solution to sync images back to AD after having resized/compressed them to <100Kb; however, the general recommendation is that the optimal AD thumbnailPhoto size is 10Kb and 96×96.

Recommendations

Use Exchange user photos as the master. Allow users to update their user photos but pre-populate their user photo if possible and before end users are provided any access to the system.

If high resolution photos are available, script the import of high resolution photos (648×648) to Exchange Online (see Set-UserPhoto). These will then be visible in Exchange, in Skype, and, once processed, in SharePoint Online. In a dispersed environment this may have to be managed by many teams rather than trying to compile a single list of all user photos.
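As a sketch, a scripted import can use the Exchange Online Set-UserPhoto cmdlet (the identity and file path here are illustrative):

# Illustrative only: upload a 648x648 jpg to a user's EXO mailbox
Set-UserPhoto -Identity "adam.smith@contoso.com" -PictureData ([System.IO.File]::ReadAllBytes("C:\photos\adam.smith.jpg")) -Confirm:$false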

Users may then update their user profile photo directly via Outlook or indirectly via their Delve profile.

If synchronisation back to AD is required in order to serve other applications (e.g. an on-premises SharePoint farm) then a custom solution could provide synchronisation from EXO to AD, but this process should compress and shrink images as the recommended size of thumbnailPhoto images is only 96×96 and 10Kb.

Paul.


Azure CDN integration with SharePoint, cache control headers max-age, s-maxage

After recently implementing an Azure-based solution to mitigate SharePoint Online’s poor image rendition performance by utilising Azure CDN (see Chris O’Brien’s post on this issue, see Fran R’s post on other Image Rendition issues) I’ve reached a few conclusions regarding setting appropriate cache control headers. It is important to reach a practical balance between performance and receiving updates to files.


Before continuing it is important to understand the fundamental building blocks when using a CDN. At any time a file can be present in three location types: the blob or source file, the CDN endpoint(s), and users’ browser caches. In the case of Azure CDN, the source file must be a blob in Azure Blob Storage. Depending on the CDN/configuration it is likely that the file may be cached at many (dozens) of CDN endpoints dispersed around the globe. Without a CDN the only consideration is the cache timeout for files stored at the user’s browser cache. When considering a CDN we must also consider the cache timeout between the CDN endpoint and the source file.

Another important point to call out is that CDNs generally only push content to an endpoint when it is first requested: on-demand. This will incur a delay for the first user to request that asset from a given endpoint, while the source blob is transferred to the endpoint. The impact of this will differ depending on the distance between the source blob and the CDN endpoint, and on the file size. It is this process that increasing the s-maxage header prevents (discussed below).

Relevant cache control headers

Definitions

  • max-age : Defines the period during which the client will use the cached file without contacting the server. ‘Client’ refers to a user’s browser cache as well as a CDN.
  • s-maxage : If provided, overrides max-age for CDNs only
  • public : Explicitly marks the file as not user specific
  • no-transform : Proxy servers may compress or encode images to improve performance or reduce bandwidth traffic. This header prevents this from occurring. It is preferable to avoid this header, assuming that you can spare the effort to ensure the files being served are not affected adversely.

A good summary of the many remaining cache control headers that I didn’t feel were relevant to this post can be found here:
A beginners guide to HTTP cache headers

In practice

  • For an image that has been previously requested:
    • When s-maxage has not expired and max-age has not expired, server responds with 200 (OK), the file is not downloaded again [0ms]
    • When s-maxage has not expired but max-age has expired, server responds with 304 (not modified), the file is not downloaded again [<100ms]
    • When s-maxage has expired but max-age has not expired, server responds with 200 (OK), the file is not downloaded again [0ms]
    • When s-maxage has expired and max-age has expired and the blob has not changed, server responds with 304 (not modified), the file is not downloaded again [<100ms]
    • When s-maxage has expired and max-age has expired and the blob has changed, server responds with 200 (OK), the file is downloaded again [download image]
  • A request for an image will return 200 (OK) until max-age has expired and then 304 (not modified) for every subsequent request until the blob is updated. Once updated, this process repeats
  • If an existing image is updated, the longest a user can wait to see the updated image is
    • Without clearing browser cache: max-age + s-maxage
    • With clearing browser cache: s-maxage
  • If a user views an image from the CDN for the first time, it is only guaranteed to be the latest version of that image if the blob hasn’t been updated in the last s-maxage
  • SharePoint library images are served with a max-age of 24 hours
  • As SharePoint library images are not served via a CDN they have an effective s-maxage of 0

My recommendations

Keeping all of the above in mind, I feel that the most important factor is to replicate the experience that users expect from images served from the SharePoint environment. This can be presented as a couple of simple rules:

  1. max-age + s-maxage = 24 hours = 86400 seconds
  2. s-maxage is as low as possible whilst satisfying bandwidth and performance targets (especially for locations most distant to the source blob)

For a recent SharePoint/CDN solution, I used the following cache control headers:

  • max-age: 23 hours
  • s-maxage: 1 hour
  • public
  • no-transform

Which looks like this:
no-transform,public,max-age=82800,s-maxage=3600

Setting the cache headers served by Azure CDN and Azure Blob Storage

When working with cache control headers in Azure, they are set on the blob itself. It is not a CDN configuration setting.
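As a minimal sketch using the WindowsAzure.Storage .NET SDK (the connection string, container and blob names are placeholders):

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// Placeholders: your storage account connection string and blob path
var account = CloudStorageAccount.Parse("your-storage-connection-string");
var client = account.CreateCloudBlobClient();
var container = client.GetContainerReference("images");
var blob = container.GetBlockBlobReference("renditions/banner.jpg");

// The cache control header is a property of the blob; the CDN endpoint honours it
blob.Properties.CacheControl = "no-transform,public,max-age=82800,s-maxage=3600";
blob.SetProperties();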

Paul.

Request Signing, Amazon Product Advertising API, .NET C#

The Amazon Product Advertising API documentation provides some code samples for its use, but none using ASP.NET. A personal interest brought me to play with it, and as it wasn’t entirely trivial to create a signed request as required for associate authentication, I thought I’d share some working code samples.

Amazon Product Advertising API

Some notes

The API does surface a WSDL file and as such a Web Reference could be used to generate classes to interact with the API. The sample I am providing here does not take advantage of this and is instead submitting raw REST requests.

I see the most valuable part of this sample as the request signing piece. This sample should not be seen as a best practice for interacting with the API but rather as a utility for request signing.

The order of the query string parameters that are included in the signed string is crucial. They must be ordered by character code (in practice this equates to alphabetically, but with all upper case letters coming before any lower case letters). The API documentation suggests string splitting, sorting, and string joining. This is definitely the approach I would take if you find yourself building queries with dynamic parameters, but I struggle to see the use-case. This sample just uses a hard-coded string with the relevant parameters in the correct order.

Although I haven’t looked in detail yet, the approach taken to sign requests here appears very similar if not identical to that required by the Instagram API, and I am sure many other (social media) APIs.

Requests to APIs which require the signing of a secret key cannot be made securely directly from the client (e.g. using JavaScript) as it would require your secret key to be available in plain text on the client. If you want to run ajax commands against the API you need to execute requests via an intermediary service. This is the approach that the sample code below facilitates.

You can read about the Amazon Product Advertising API here: Product Advertising API

The code

Below you will find a class called AmazonApiHelper. Further below is an ashx HttpHandler as an example of calling the utility functions provided by the helper class. You’ll need to provide your own values for the following constants:
private const string awsSecretKey = "Your secret key goes here";
private const string awsAccessKeyId = "Your access key Id goes here";
private const string associateTag = "Your associate tag goes here";

The helper class
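A minimal sketch of the signing logic follows; the operation, search index and keywords are illustrative, while the parameter ordering and the string-to-sign construction follow the API documentation:

using System;
using System.Globalization;
using System.Security.Cryptography;
using System.Text;

public static class AmazonApiHelper
{
    private const string awsSecretKey = "Your secret key goes here";
    private const string awsAccessKeyId = "Your access key Id goes here";
    private const string associateTag = "Your associate tag goes here";
    private const string endpointHost = "webservices.amazon.com";
    private const string requestPath = "/onca/xml";

    // Builds a signed ItemSearch request URL. The query string parameters are
    // hard-coded in character-code order, as discussed above.
    public static string GetSignedRequestUrl(string keywords)
    {
        string timestamp = DateTime.UtcNow.ToString(
            "yyyy-MM-ddTHH:mm:ssZ", CultureInfo.InvariantCulture);
        string query = "AWSAccessKeyId=" + Uri.EscapeDataString(awsAccessKeyId)
            + "&AssociateTag=" + Uri.EscapeDataString(associateTag)
            + "&Keywords=" + Uri.EscapeDataString(keywords)
            + "&Operation=ItemSearch"
            + "&SearchIndex=Books"
            + "&Service=AWSECommerceService"
            + "&Timestamp=" + Uri.EscapeDataString(timestamp);

        // The string to sign: HTTP verb, host, path, then the ordered query string
        string stringToSign = "GET\n" + endpointHost + "\n" + requestPath + "\n" + query;

        string signature;
        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(awsSecretKey)))
        {
            byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign));
            signature = Convert.ToBase64String(hash);
        }

        return "http://" + endpointHost + requestPath + "?" + query
            + "&Signature=" + Uri.EscapeDataString(signature);
    }
}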

 

Calling the helper class from a web handler
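And a hedged sketch of the handler side, which keeps the secret key on the server (the handler name and query string parameter are illustrative):

using System.Net;
using System.Web;

public class AmazonSearchHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // proxy the client's search terms through to a signed API request
        string keywords = context.Request.QueryString["keywords"] ?? "";
        string signedUrl = AmazonApiHelper.GetSignedRequestUrl(keywords);
        using (var client = new WebClient())
        {
            context.Response.ContentType = "text/xml";
            context.Response.Write(client.DownloadString(signedUrl));
        }
    }
}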

Good luck advertising those products!

Paul.

jsLink: how to display a custom ‘no items’ message

If you use jsLink to override the rendering of list views then you may have noticed that your custom jsLink no longer renders a message when there are no items returned in the view. I am going to discuss with code samples how to display a ‘no items’ message – or at least help you stop overriding it.

You have complete control over how list items are rendered using jsLink

If, alternatively, you have a ‘no items’ message being displayed and just want to modify the text, try this link.

If you don’t know what jsLink is then it is worth learning about it. Try this link.

What am I doing wrong?

Chances are you are making the same mistake that many people make; a mistake that has been replicated again and again online and doesn’t break anything, but does prevent the display of the ‘no items’ message and the paging control. When you override Templates.Header you DO NOT need to override Templates.Footer in order to close tags which you opened in the header.

Although doing so seems to make sense, you can rest assured knowing that tags you open in the header will be closed auto-magically after the item templates have completed rendering. In fact, the footer template is rendered in a different table cell to the header and item templates when this all hits the page. Think of the footer template as a distinct block that is rendered after everything else rather than the end of the same block.

By overriding the footer template you are also inadvertently overriding the ‘no items’ message and the list view paging control. You can see exactly what you are overriding by inspecting the default values for the templates, which are defined in clientrenderer.js (the default footer template lives there).

So what should you do?

If you just want the default no items message and can get away with not overriding the footer template (as in the first code snippet), then great – you are all done.

If want a custom message then check out the link at the very top of the article (in summary: renderCtx.ListSchema.NoListItem = "Nada, nothing, zilch";).

If you want to override the footer template or perhaps you want the message to appear within a wrapper tag defined in the header or you want some custom logic behind which message to display then you can do that too – keep reading.

Doing it yourself

I’ve written a utility function, based on the logic in the OOTB footer template, that makes it easier to manage the ‘no items’ text. This function does NOT replicate the paging functionality. If you need paging and are overriding the footer template then you will need to replicate the paging functionality as well; look into clientrenderer.js to find out how Microsoft does this.
In the sketch below you can see the if-else block where you can define custom messages for different list templates; similar logic could distinguish the case where the lack of results occurred only after a search term was provided. This sample should not be considered the superlative version, it just does a basic job in line with what happens by default.
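A minimal sketch of such a function, assuming the standard CSR context properties renderCtx.ListData.Row and renderCtx.ListTemplateType (the wrapper markup and messages are illustrative):

var getNoItemsMessage = function (renderCtx) {
    // only produce a message when the view returned no rows
    if (renderCtx.ListData.Row.length > 0) {
        return "";
    }
    var message = "There are no items to show in this view of the list";
    if (renderCtx.ListTemplateType === 101) {
        // 101 = document library
        message = "There are no documents to show in this view of the library";
    }
    // further branches could detect an active inplace search term
    return "<div class='cc-noitems'>" + message + "</div>";
};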

Below are two examples of how you may want to use this. The first is by overriding the footer template, and the second is by overriding the header template. The advantage of sticking this code into the header template is that it allows you to wrap the no items message in the same wrapper tags that you defined for the main content.
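Hedged sketches of both approaches follow; the override registration shape is standard CSR, while the wrapper markup is illustrative:

// 1) Overriding the footer template
SPClientTemplates.TemplateManager.RegisterTemplateOverrides({
    Templates: {
        Footer: function (renderCtx) {
            return getNoItemsMessage(renderCtx);
        }
    }
});

// 2) Alternatively, overriding the header template so the message sits
// inside the same wrapper tag as the rendered items
SPClientTemplates.TemplateManager.RegisterTemplateOverrides({
    Templates: {
        Header: function (renderCtx) {
            return "<div class='cc-list'>" + getNoItemsMessage(renderCtx);
        },
        Item: function (renderCtx) {
            return "<div class='cc-item'>" + renderCtx.CurrentItem.Title + "</div>";
        }
    }
});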

Paul.

For aiding findability:

  • There are no items to show in this view of the list
  • Your search returned no results
  • Some items might be hidden. Include these in your search
  • Still didn’t find it? Try searching the entire site.

Dynamically generating complex pre-refined search result page URLs

A while ago I blogged about creating a static link to a pre-refined (pre-filtered) search page. This post follows that idea to its natural conclusion by providing a number of JavaScript functions which can dynamically create search result page URLs. These URLs will look something like this:

https://tenant.sharepoint.com/search#Default=%7B%22k%22%3A%22article%22%2C%22r%22%3A%5B%7B%22n%22%3A%22RefinableString20%22%2C%22t%22%3A%5B%22%5C%22%C7%82%C7%824275696c64%5C%22%22%5D%2C%22o%22%3A%22OR%22%2C%22k%22%3Afalse%2C%22m%22%3A%7B%22%5C%22%C7%82%C7%824275696c64%5C%22%22%3A%22Build%22%7D%7D%2C%7B%22n%22%3A%22RefinableString21%22%2C%22t%22%3A%5B%22%5C%22%C7%82%C7%824c6f6e646f6e%5C%22%22%5D%2C%22o%22%3A%22OR%22%2C%22k%22%3Afalse%2C%22m%22%3A%7B%22%5C%22%C7%82%C7%824c6f6e646f6e%5C%22%22%3A%22London%22%7D%7D%5D%7D

The provided scripts support filtering on:

  • a search term
  • multiple refiners
  • multiple values for a refiner, or
  • any combination of the above

It would be worth reading the intro of my earlier article to get a better understanding of what is happening in the snippets provided in this post.

Default Enterprise Search Centre

OF NOTE:

  • As the most common usage will surely be to produce search result page URLs that are refined on a single value, I have written an ‘overload’ function that simplifies calling the method in this scenario
  • The ‘search page URL’ can be provided to the functions in a number of ways including:
    • “/search” : to the web. The default page for that web. In the case of an Enterprise Search Centre this will be the ‘Everything’ search results page
    • “/search/Pages/peopleresults.aspx” : to the page
    • Use an absolute URL if you are out of the context of the SharePoint Online tenant in which the search page resides. This will be true for provider hosted add-ins (apps)
    • If you are writing your own refiner, then pass an empty string and set window.location.hash to the result of the function
  • This script has no dependencies on other libraries (jQuery, SP.js, etc)
  • The hex encoded string must be UTF-8 encoded. JavaScript is natively UTF-16. The particular scenario where this raised an issue for me was the wide-ampersand character which is often used instead of a standard ampersand as it is XML friendly. ‘unescape’ returns a UTF-8 encoded string and is used to force the required encoding. Thanks to ecmanaut for this solution
  • I took inspiration for the stringToHex method from a post by pussard

The functions:

var getPreRefinedSearchPageUrl = function (searchPageUrl, searchTerm, managedPropertyName, managedPropertyValue) {
  return getComplexPreRefinedSearchPageUrl({
    searchPageUrl: searchPageUrl,
    searchTerm: searchTerm,
    refiners: [
      {
        managedPropertyName: managedPropertyName,
        managedPropertyValues: [
          managedPropertyValue
        ]
      }
    ]
  });
};

// input:
// {
//   searchPageUrl: "/search/Pages/results.aspx",
//   searchTerm: "",
//   refiners: [
//     {
//       managedPropertyName: "RefinableString08",
//       managedPropertyValues: [
//         "Human Resources"
//       ]
//     }
//   ]
// }
var getComplexPreRefinedSearchPageUrl = function (data) {
  var searchObj = {
    "k": data.searchTerm,
    "r": []
  };
  for (var i = 0; i < data.refiners.length; i++) {
    var refiner = data.refiners[i];
    var searchObjRefiner = {
      "n": refiner.managedPropertyName,
      "t": [],
      "o": "OR",
      "k": false,
      "m": {}
    };
    for (var j = 0; j < refiner.managedPropertyValues.length; j++) {
      var refinerValue = refiner.managedPropertyValues[j];
      // Force UTF8 encoding to handle special characters, specifically full-width ampersand
      var managedPropertyValueUTF8 = unescape(encodeURIComponent(refinerValue));
      var managedPropertyValueHex = stringToHex(managedPropertyValueUTF8);
      var managedPropertyValueHexToken = "\"ǂǂ" + managedPropertyValueHex + "\"";
      searchObjRefiner.t.push(managedPropertyValueHexToken);
      searchObjRefiner.m[managedPropertyValueHexToken] = refinerValue;
    }
    // push the refiner once, after all of its values have been added
    searchObj.r.push(searchObjRefiner);
  }
  var searchObjString = JSON.stringify(searchObj);
  var searchObjEncodedString = encodeURIComponent(searchObjString);
  var url = data.searchPageUrl + "#Default=" + searchObjEncodedString;
  return url;
};

var stringToHex = function (tmp) {
  var d2h = function (d) {
    return d.toString(16);
  };
  var str = '',
    i = 0,
    tmp_len = tmp.length,
    c;
  for (; i < tmp_len; i += 1) {
    c = tmp.charCodeAt(i);
    str += d2h(c);
  }
  return str;
};

These are examples of how to call the functions that are defined above.

var complexUrl = getComplexPreRefinedSearchPageUrl({
  searchPageUrl: "/search/Pages/results.aspx",
  searchTerm: "article",
  refiners: [
    {
      managedPropertyName: "RefinableString20",
      managedPropertyValues: [
        "Build", "Land"
      ]
    },
    {
      managedPropertyName: "RefinableString21",
      managedPropertyValues: [
        "London"
      ]
    }
  ]
});
var basicUrl = getPreRefinedSearchPageUrl("/search/Pages/results.aspx", "", "RefinableString20", "Build");

Paul.

Calling the Office 365 Unified API from JavaScript using ADAL.js

The goal of this post is to provide very basic ‘hello world’ example of how to call the Office 365 Unified API (aka Graph API) using JavaScript and the ADAL.js library. This has recently become possible (May 2015) now that CORS is supported by the Office 365 APIs (most of the individual endpoints support it as well as the unified API).

The ADAL library simplifies the process of obtaining and caching the authentication tokens required to retrieve data from Office 365. It is possible to avoid the ADAL library and handle this yourself, although I would recommend doing so as a learning exercise only.

I failed to find a simple example of how to achieve this; my search results were often filled with examples of calling the APIs from server-side code or else utilising the Angular.js framework. This example is based on a more complex example.

The following snippet will log to the browser console the results of a call to the files endpoint of the Office 365 unified API, which will return a JSON object containing information about the files in the current user’s OD4B.

Before it will work you must complete the following steps (as described in detail here):

  1. Register an Azure Active Directory App. Note that *every* Office 365 subscription comes with AAD and supports the creation of an app
  2. Associate the required ‘permissions to other services’, in this case ‘Read users files’ via the Office 365 Unified API
  3. Allow implicit flow

Not covered explicitly in the above article but also critical are the following steps:

  • Get the App’s Client ID and copy it into the snippet
The App Client ID
  • Get the Azure Active Directory subscription ID and copy it into the snippet
The subscription ID in Azure Active Directory

Once the above steps have been completed, you can try out the snippet by embedding it in a Script Editor web part, or you can run it externally to SharePoint as part of, say, a provider hosted app.
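A minimal sketch of such a snippet, assuming ADAL.js is already loaded on the page (the clientId and tenant values are placeholders for the IDs gathered above):

(function () {
    var config = {
        clientId: "your-aad-app-client-id",
        tenant: "your-aad-tenant-id",
        cacheLocation: "localStorage"
    };
    var authContext = new AuthenticationContext(config);

    // handle the redirect back from AAD (ADAL parses the token from the URL hash)
    if (authContext.isCallback(window.location.hash)) {
        authContext.handleWindowCallback();
        return;
    }

    if (!authContext.getCachedUser()) {
        authContext.login();
        return;
    }

    // get a token for the unified API, then call the (preview era) files endpoint
    var resource = "https://graph.microsoft.com";
    authContext.acquireToken(resource, function (error, token) {
        if (error || !token) {
            console.log("ADAL error: " + error);
            return;
        }
        var xhr = new XMLHttpRequest();
        xhr.open("GET", resource + "/beta/me/files", true);
        xhr.setRequestHeader("Authorization", "Bearer " + token);
        xhr.onload = function () {
            console.log(JSON.parse(xhr.responseText));
        };
        xhr.send();
    });
})();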

NOTE: I found that the call to the files endpoint is failing for certain users. I am still unsure whether this is due to external vs internal users (it is working for internal [.onmicrosoft.com] users) or whether it could be a licencing issue. The /beta/me endpoint is working in all cases.

Paul.

GLOSSARY

CORS: Cross-Origin Resource Sharing
ADAL: Active Directory Authentication Library
OD4B: OneDrive for Business