As you may be aware, Yammer is now ‘on by default’ as part of Office 365. This means you *may* get a Yammer network provisioned at the *.onmicrosoft.com domain associated with your tenancy.
Yammer network using a *.onmicrosoft.com domain
Here’s a tip
If you want to take advantage of this, make sure that you create the tenancy as a Trial and then buy licences later. Taking this approach means you get the Yammer ‘on by default’ experience more or less as you would expect: the network will be provisioned within an hour or so (maybe sooner).
If you instead purchase a tenant right away, provisioning is apparently delayed, the expectation being that you’ll want to use a vanity domain for your Yammer network… “But I want Yammer ‘on by default’ with my paid tenancy!” You can create a Yammer network on your default Office 365 domain just by logging into Yammer with your *.onmicrosoft.com account, BUT you don’t get the integrations with Office 365 – the app launcher tile or the link to Yammer admin from the admin centre. Based on my experience, you will get these integrations eventually – I saw them appear more than six weeks later!
I had a long call with Microsoft support regarding this and they weren’t able to provide any solid explanation or reasoning; the information contained in this post is based on my experience rather than official guidance.
Summary
Create new (dev/test) Office 365 tenancies using a Trial subscription and buy licences later if you want to use Yammer without a vanity domain.
The user photo story in Office 365 is not so straightforward. Photos are stored in Active Directory (AD) on-premises, Azure Active Directory (AAD), Exchange Online (EXO), SharePoint Online (SPO), and at first glance possibly elsewhere as well (where does my Delve profile picture live? What about my Skype for Business (SfB) avatar?).
I have put together a flow diagram to represent how this actually works. It aims to demonstrate where user photos are stored and where different applications fetch user photos from (if they don’t store the images), and leads to some recommendations about user photo synchronisation.
Please note the date of this article (August 2016) and be conscious that Office 365 is changing rapidly, so the following recommendations may no longer apply (e.g. prior to the Delve user profile page, the SharePoint user profile page referenced images stored in SharePoint rather than Exchange; changes such as these will continue).
User photos: the diagram
User photos flow in Office 365
Where applications store and fetch user photos
For each photo location, the notes below cover relevant comments, size constraints, and whether the location acts as a source (master) of photos for other systems.

On-premises AD DS (thumbnailPhoto attribute)
Size: 100Kb maximum; recommended to be 96×96 or 48×48.
Is source? Yes.

Azure AD (thumbnailPhoto attribute)
Comments: Usually synced from AD DS via Azure AD Connect.
Size: 100Kb maximum; recommended to be 96×96 or 48×48.
Is source? No – synced from AD.

Exchange Online (stored as a property of the mailbox)
Comments: Provided manually by users, or a bulk import can be scripted if source photos can be located and named appropriately. If not provided, Exchange will reference the AAD thumbnailPhoto in some instances, but only if the thumbnailPhoto is less than 10Kb. Does not sync back to AD.
Size: 500Kb maximum; recommended to be 648×648.
Is source? Yes.

SharePoint Online (‘User Photos’ library)
Comments: Three renditions of the EXO photo are automatically created in SharePoint after upload to EXO. It generally takes up to 72 hours for changes to the EXO photo to appear here; sometimes a user must ‘touch’ their profile before the sync is performed. NOTE: Updating the user profile photo via the Delve profile actually updates the EXO profile photo and does not act directly on SharePoint Online.
Size: Small is 48×48, Medium is 72×72, Large varies with the source image but is always square – I have seen as small as 120×120 and as large as 300×300; the PnP image upload solution uploads these as 200×200.
Is source? No – synced from EXO.

Skype for Business
Comments: Does not store any images. Uses the high resolution Exchange image if available, otherwise uses the AD thumbnailPhoto.
Size: EXO image or AD thumbnailPhoto.
Is source? No – read from EXO.

Delve user profile
Comments: Does not store any images. Uses the high resolution Exchange image if available, otherwise uses the AD thumbnailPhoto.
Size: EXO image or AD thumbnailPhoto.
Is source? No – read from EXO.

Yammer
Comments: Also stores its own photo; out of scope of this discussion for now.
Is source? Yes.
Likely issues and resolutions
Issue: Exchange Online user photo is low quality (and in turn so is the SPO photo and the SfB photo).
Resolution: The source image coming from AD was/is low quality. EXO user photos can be updated by users individually, or, if high resolution source photos are available, the import can be scripted. Source images should be JPGs of 648×648 (resizing and compression can also be scripted).

Issue: Exchange Online user photo is high quality but the SfB photo is low quality.
Resolution: High resolution photos from Exchange will be used as long as both Exchange and SfB/Lync are of new enough versions (2013 or greater) and SfB is configured to allow all photos (not just those from AD). NB: if a user doesn’t have a mailbox (e.g. not licensed) then they will be displayed using the AD photo.

Issue: There is no Exchange Online user photo (and in turn there is no SPO photo or SfB photo).
Resolution: A photo has not been imported to the user’s EXO mailbox and the AAD thumbnailPhoto either does not contain an image or that image is greater than 10Kb. Import of photos up to 500Kb into EXO mailboxes can be scripted (the source images could be on a file share, or in AAD).

Issue: Changes to user photos are reflected quickly in Exchange and Skype but take days to replicate to SPO.
Resolution: Exchange to SPO synchronisation is a periodic process and can take up to 72 hours. A custom solution can perform this replication on demand (e.g. at the same time the EXO user photos are set).

Issue: User photos changed in other systems which update AD are not reflected in EXO, SPO, or SfB (e.g. a user in an on-premises SharePoint farm updates their user photo).
Resolution: When AD is updated it is synchronised to AAD, but that is as far as it gets – the “sync” from AAD to EXO is a one-off import rather than a true sync. It is unlikely to be desirable to create a custom sync relationship here, as users will want to be able to update EXO directly and won’t want their photos overwritten.

Issue: User photos updated in EXO aren’t replicated to other systems which share an AD (e.g. an on-premises SharePoint farm).
Resolution: The user photo in EXO is not synced back to AD – it can’t be done consistently, as the AD thumbnailPhoto attribute only supports photos up to 100Kb whereas EXO supports larger images. There is potential for a custom solution to sync images back to AD after resizing/compressing them to under 100Kb; however, the general recommendation is that the AD thumbnailPhoto is optimally 10Kb and 96×96.
Recommendations
Use Exchange user photos as the master. Allow users to update their user photos, but pre-populate their photos if possible, ideally before end users are given any access to the system.
If high resolution photos are available, script the import of high resolution photos (648×648) to Exchange Online (see the Set-UserPhoto cmdlet and the sample script below). These will then be visible in Exchange, in Skype, and, once processed, in SharePoint Online. In a dispersed environment this may have to be managed by many teams rather than trying to compile a single list of all user photos.
Users may then update their user profile photo directly via Outlook or indirectly via their Delve profile.
If synchronisation back to AD is required in order to serve other applications (e.g. an on-premises SharePoint farm), then a custom solution could provide synchronisation from EXO to AD, but this process should compress and shrink images, as the recommended size for thumbnailPhoto images is only 96×96 and 10Kb.
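As a starting point, here is a minimal sketch of such an import (not a production script). It assumes the high resolution photos are stored locally as UPN-named JPGs (e.g. jane.doe@contoso.com.jpg), that you can connect to Exchange Online remote PowerShell, and that the folder path is a placeholder for your own location.

# Connect to Exchange Online remote PowerShell (classic remoting, era-appropriate)
$cred = Get-Credential
$session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri "https://outlook.office365.com/powershell-liveid/" `
    -Credential $cred -Authentication Basic -AllowRedirection
Import-PSSession $session -DisableNameChecking | Out-Null

# Import each 648x648 JPG into the mailbox matching the file name (the UPN)
Get-ChildItem "C:\UserPhotos" -Filter *.jpg | ForEach-Object {
    Set-UserPhoto -Identity $_.BaseName -PictureData ([System.IO.File]::ReadAllBytes($_.FullName)) -Confirm:$false
}

Remove-PSSession $session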
Azure AD apps (a.k.a Azure Active Directory apps, a.k.a AAD apps) are an essential component when interacting with Office 365 data outside of SharePoint – Mail, Calendar, Groups, etc.
As an O365 developer I have found myself writing JavaScript code against AAD apps (using ADAL.js) and often, especially during development, found myself entering a long list of Reply URLs. A Reply URL must be specified for any location from which authentication to AAD occurs. From a practical standpoint this results in someone (an Azure administrator) having to update the list of Reply URLs every time a web part which relies on an Azure AD app is inserted into a page, or a new site is provisioned.
If this is not done, the user is redirected to an Azure login failure page with ‘The reply address … does not match the reply addresses configured for the application’.
Error when Reply URL is not correctly specified
Perhaps the following is documented elsewhere, but I have not come across it: a Reply URL can be specified using wildcards!
Using wildcard Reply URLs when configuring an AAD app
Probably the most common use for this is to end a Reply URL with an asterisk (wildcard), which permits any URL beginning with the characters preceding it.
e.g. https://tenant.sharepoint.com/*
This example would support any URL coming from any page in SharePoint Online from within the named tenant.
It is also possible to use the wildcard character elsewhere in the Reply URL string.
e.g. https://*.sharepoint.com/*
This example would support any URL coming from any page in SharePoint Online from within *any* tenant.
Armed with this knowledge, be responsible and strictly limit how it is used. The Reply URL check is a security feature, and it is important that only trusted locations are allowed to interact with your app. I recommend only using wildcard Reply URLs in development environments.
Delve, as part of the Office 365 suite, provides a number of useful pages for finding content or people that are trending around you or that you recently interacted with. As a developer, these pages are often the perfect target for “See More” links in customisations written using the Office Graph. Or perhaps, as an administrator, you would like to configure a promoted link on a team site home page that navigates to a user’s ‘Your Recent Documents’ page in Delve, for example.
The Delve Recent Documents page. Note that the URL contains the user’s AAD object ID.
Delve Links – a minor problem
When you visit pages that show content relevant to a specific user (such as Your Recent Documents, or the Recent Documents page for another user), the URL of that page contains a query string parameter ‘u’, with the value of this parameter equal to the Azure Active Directory (AAD) object ID of the user. Azure Active Directory is the identity provider that backs Office 365 and is out of the scope of this post. If this parameter is not provided then Delve falls back to the Delve homepage. I would have preferred it to just use the current user when the parameter is not present, but no, this is how it works.
The ‘u’ query string parameter can be replaced with the ‘p’ query string parameter, where the value of ‘p’ is the user’s account name – the email address they use to log in.
This value is present on any SharePoint 2013+ page via the JavaScript variable: _spPageContextInfo.userLoginName
This can be utilised as follows:
var mySiteHostUrl = "https://{tenant}-my.sharepoint.com"; // substitute your tenant name
var pageKey = "liveprofilemodified"; // liveprofilemodified='Recent Documents', liveprofileworkingwith='People page'
var delveUrl = mySiteHostUrl + "/_layouts/15/me.aspx" + "?v=" + pageKey + "&p=" + _spPageContextInfo.userLoginName;
Delve Links – side note
The account name is available as an Office Graph property: AccountName
The AAD object ID is available as an Office Graph property: AadObjectId
If you create a SharePoint site column (a note field in this case), associate it with a site content type, and then associate that content type with a list in a sub site, the site column will be available on that list. Obviously, right?
However, when you update the site column (and push all changes to lists and libraries), not *all* of the changes you make are in fact pushed down. An example of this is the setting that dictates whether a note field should allow rich text or enforce plain text. If you change this setting at the site column level it will *not* propagate to lists which already exist. New instances of the column (say, if you associated the content type with a list for the first time) will be configured correctly, but existing list-level instances are not updated. NOTE: this is only true for properties specific to a particular column type; common properties such as ‘required’ will be pushed down to existing instances of the column at the list level.
Configuring a SharePoint note field
So you want to change a list-level instance of a plain text note column to a rich text note column (or vice-versa, or otherwise change column-specific properties of another field type)? You need to do it for every list where the column is in use. That would be very tedious to do via the SharePoint UI – but you can’t do it there anyway: the UI only supports changing the set of common field properties (type, required, hidden, etc).
In comes PowerShell. Below you will find a script which updates a plain text note column to be a rich text note column. It is important to note that this script only updates the list-level columns and not the site column. This means that after running the script, new instances will continue to inherit the site column configuration.
The script is written for SharePoint Online (and assumes that the SharePoint Online Client Components SDK is installed) but for this to work on-premises you would only need to update the referenced assemblies (v15 for 2013) and modify the code which passes the credentials.
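As a rough guide (a cut-down sketch rather than the full script), the core CSOM calls for a single list look like the following; the site URL, credentials, list title and field internal name are placeholders, and extending it to every list where the column is in use is a matter of looping over webs and lists.

Add-Type -Path "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.dll"
Add-Type -Path "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"

$securePassword = ConvertTo-SecureString $password -AsPlainText -Force
$ctx = New-Object Microsoft.SharePoint.Client.ClientContext($siteUrl)
$ctx.Credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($userName, $securePassword)

# Fetch the list-level instance of the note column
$list = $ctx.Web.Lists.GetByTitle($listTitle)
$field = $list.Fields.GetByInternalNameOrTitle($fieldInternalName)
$ctx.Load($field)
$ctx.ExecuteQuery()

# Column-specific settings live in the field schema; flip the note field to rich text
$schema = [xml]$field.SchemaXml
$schema.DocumentElement.SetAttribute("RichText", "TRUE")
$schema.DocumentElement.SetAttribute("RichTextMode", "FullHtml")
$field.SchemaXml = $schema.OuterXml
$field.Update()
$ctx.ExecuteQuery()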
If you call the SharePoint 2013 REST API in your applications, ensure that any requests originating from the client are sent using the current web’s base URL to avoid a SafeQueryPropertiesTemplateUrl error.
If the current site is https://tenant.sharepoint.com/sites/mysitecollection/subsite1/subsite2 then it is very important that you submit API requests as https://tenant.sharepoint.com/sites/mysitecollection/subsite1/subsite2/_api
and NOT as any of:
https://tenant.sharepoint.com/_api or
https://tenant.sharepoint.com/sites/mysitecollection/_api or even
https://tenant.sharepoint.com/sites/mysitecollection/subsite1/_api
The reason for this is that the current user must have access to the site addressed by the base URL of the API request (the bit before ‘_api’). If the user cannot access that site then the request will fail. Unfortunately it doesn’t fail in the manner you might expect (i.e. a 401 access denied); a request that fails in this manner returns a 500 error. The specific exception details are as follows:
I was recently told that a web app I had developed was returning an HTTP 405 error when freshly deployed. It took me way too long to realise that the cause of the issue came down to missing files. Specifically, the complete folder structure had been deployed, however the files at the top-level web root were missing. These files are rather critical: the web.config and global.asax.
If you are seeing this error, ensure these files have been deployed correctly and aren’t corrupt as a first port of call.
Receiving a 405 in IE11
For SEO, here is the HTTP 405 error as reported by each browser:
Chrome: The page you are looking for cannot be displayed because an invalid method (HTTP verb) is being used.
IE: HTTP 405 The website has a programming error. This error (HTTP 405 Method Not Allowed) means that Internet Explorer was able to connect to the website, but the site had a programming error.
Edge: HTTP 405 error That’s odd… Microsoft Edge can’t find this page
After recently implementing an Azure-based solution to mitigate SharePoint Online’s poor image rendition performance by utilising Azure CDN (see Chris O’Brien’s post on this issue, see Fran R’s post on other Image Rendition issues) I’ve reached a few conclusions regarding setting appropriate cache control headers. It is important to reach a practical balance between performance and receiving updates to files.
Before continuing it is important to understand the fundamental building blocks when using a CDN. At any time a file can be present in three location types: the blob or source file, the CDN endpoint(s), and users’ browser caches. In the case of Azure CDN, the source file must be a blob in Azure Blob Storage. Depending on the CDN/configuration it is likely that the file may be cached at many (dozens) of CDN endpoints dispersed around the globe. Without a CDN the only consideration is the cache timeout for files stored at the user’s browser cache. When considering a CDN we must also consider the cache timeout between the CDN endpoint and the source file.
Another important point to call out is that CDNs generally only push content to an endpoint when it is first requested: on demand. This incurs a delay for the first user to request that asset from a given endpoint, while the source blob is transferred to the endpoint. The impact of this will differ depending on the distance between the source blob and the CDN endpoint, and on the file size. It is this process that increasing the s-maxage header prevents (discussed below).
Relevant cache control headers
Definitions
max-age : Defines the period during which the client will use the cached file without contacting the server. ‘Client’ refers to a user’s browser cache as well as a CDN.
s-maxage : If provided, overrides max-age for CDNs only
public : Explicitly marks the file as not user specific
no-transform : Proxy servers may compress or encode images to improve performance or reduce bandwidth traffic. This header prevents that from occurring. It is preferable to avoid this header, assuming you can spare the effort to ensure the files being served are not adversely affected.
A good summary of the many remaining cache control headers that I didn’t feel were relevant to this post can be found here: A beginners guide to HTTP cache headers
In practice
For an image that has been previously requested:
When s-maxage has not expired and max-age has not expired, server responds with 200 (OK), the file is not downloaded again [0ms]
When s-maxage has not expired but max-age has expired, server responds with 304 (not modified), the file is not downloaded again [<100ms]
When s-maxage has expired but max-age has not expired, server responds with 200 (OK), the file is not downloaded again [0ms]
When s-maxage has expired and max-age has expired and the blob has not changed, server responds with 304 (not modified), the file is not downloaded again [<100ms]
When s-maxage has expired and max-age has expired and the blob has changed, server responds with 200 (OK), the file is downloaded again [download image]
A request for an image will return 200 (OK) until max-age has expired and then 304 (not modified) for every subsequent request until the blob is updated. Once updated, this process repeats
If an existing image is updated, the longest a user can wait to see the updated image is
Without clearing browser cache: max-age + s-maxage
With clearing browser cache: s-maxage
If a user views an image from the CDN for the first time, it is only guaranteed to be the latest version of that image if the blob hasn’t been updated within the last s-maxage period
SharePoint library images are served with a max-age of 24 hours
As SharePoint library images are not served via a CDN they have an effective s-maxage of 0
My recommendations
Keeping all of the above in mind, I feel that the most important factor is to replicate the experience that users expect from images served from the SharePoint environment. This can be presented as a couple of simple rules:
max-age + s-maxage = 24 hours = 86400 seconds
s-maxage is as low as possible whilst satisfying bandwidth and performance targets (especially for locations most distant to the source blob)
For a recent SharePoint/CDN implementation, I used the following cache control headers:
max-age: 23 hours
s-maxage: 1 hour
public
no-transform
Which looks like this: no-transform,public,max-age=82800,s-maxage=3600
Setting the cache headers served by Azure CDN and Azure Blob Storage
When working with cache control headers in Azure, they are set on the blob itself. It is not a CDN configuration setting.
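For illustration, here is a sketch using the classic Azure PowerShell storage cmdlets; the storage account name, key and container name are placeholders, and the same CacheControl value shown above is applied to every blob in the container.

# Set the CacheControl property on each blob in the container
$storageContext = New-AzureStorageContext -StorageAccountName "mystorageaccount" -StorageAccountKey $storageKey
$blobs = Get-AzureStorageBlob -Container "images" -Context $storageContext

foreach ($blob in $blobs) {
    # The cache control header is a property of the blob itself, not of the CDN endpoint
    $blob.ICloudBlob.Properties.CacheControl = "no-transform,public,max-age=82800,s-maxage=3600"
    $blob.ICloudBlob.SetProperties()
}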
Do not be confused! The Azure Service Management REST API and the Azure API Management REST API are completely different. Yes, they have confusingly similar names, but they serve completely different purposes, support different authentication protocols, and are surfaced via different endpoint domains.
The Azure Service Management REST API
What can I do with it?
This service supports actions for managing Azure resources such as web apps or storage accounts. Think of it as an endpoint for the actions you might otherwise perform manually via the (Classic or New) Azure Portal.
What do the endpoints look like?
Service request URIs will be of the form: https://management.azure.com/subscriptions/…
How does authentication work?
Service authentication is achieved using OAuth via the use of a Bearer access token in the Authorization header. The app principal is an Azure Active Directory application. The AAD app must be given ‘permissions to other applications’ for ‘Windows Azure Service Management API’. As the only grant-able permissions are ‘delegated permissions’ (App+User) rather than ‘application permissions’ (App-only), this API can only be called from within a user context and not, for example, from the context of a web job.
Configuring AAD App Permissions
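As an illustration only, once a delegated access token has been acquired (e.g. via ADAL on behalf of the signed-in user), a request looks something like the following; $accessToken and the api-version value are assumptions for the sketch.

# Call the Service Management (ARM) API with a Bearer token
$headers = @{ Authorization = "Bearer $accessToken" }
# List the subscriptions visible to the signed-in user
Invoke-RestMethod -Uri "https://management.azure.com/subscriptions?api-version=2015-01-01" -Headers $headers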
The Azure API Management REST API
What can I do with it?
The API Management Service supports publishing APIs to consumers by providing an ID and secret key ‘shared signature’ authentication mechanism, very similar to that used by Amazon or Instagram for their (public, pending approval) APIs. An API Management Service instance provides benefits like management of users, groups, products (endpoints), and subscriptions. There is then a REST API for managing the users, groups, products, and subscriptions that the API Management Service provides – this is referred to as the API Management REST API.
What do the endpoints look like?
Service request URIs will be of the form: https://{servicename}.management.azure-api.net/…
How does authentication work?
Service authentication is achieved via the use of a Shared Access Signature access token in the Authorization header. The identifier and secret key required to generate a request signature are available via the API Management Service instance. Access to the API must be explicitly enabled by checking ‘Enable API Management REST API’ in the API Management Service publisher portal.
Enable API Management REST API (credit: Microsoft Azure documentation)
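To make the mechanism concrete, here is a sketch of building and sending the token in PowerShell; the identifier, key, service name and api-version are placeholders – take the real values from your publisher portal and the current API Management documentation.

# Build a SharedAccessSignature token: sign "{id}\n{expiry}" with HMAC-SHA512 using the secret key
$id     = "{identifier}"
$key    = "{primary or secondary key}"
$expiry = (Get-Date).ToUniversalTime().AddDays(1).ToString("O")

$hmac = New-Object System.Security.Cryptography.HMACSHA512
$hmac.Key = [System.Text.Encoding]::UTF8.GetBytes($key)
$signature = [Convert]::ToBase64String($hmac.ComputeHash([System.Text.Encoding]::UTF8.GetBytes("$id`n$expiry")))

# Use the token in the Authorization header (api-version is an assumption; check the docs for your service)
$headers = @{ Authorization = "SharedAccessSignature uid=$id&ex=$expiry&sn=$signature" }
Invoke-RestMethod -Uri "https://{servicename}.management.azure-api.net/users?api-version=2014-02-14-preview" -Headers $headers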
Read more
OK, so just reading the above really won’t be enough to get you firing off requests, but hopefully it provides enough clarity that you understand which API does what and how to interact with it.
I will post about using the Service Management API, along with app configuration and full code samples for authentication, in the near future. I will link to that post from here.
There are many ways to iterate a collection in PowerShell. I just really like using delegate functions. This approach is not native PowerShell but utilises the .NET Action class as a function parameter. Using a delegate function approach, it is possible to create a recursive loop that can be very easily reused in the future just by providing an alternative Action.
The example code I provide below demonstrates how to create a delegate function in PowerShell, how to write a function that accepts one as a parameter, and provides some ready made samples for iterating SharePoint objects, specifically all webs or all lists. I am using some specific SharePoint objects in these samples, however the fundamental pattern can be used to effectively iterate any recursive structure.
foreachDecendentWeb : perform an action on every web below the provided web
foreachListInWeb : perform an action on every list in the provided web
foreachListInWebAndAllDecendentWebs : perform an action on every list in the current and all descendant webs
Some notes
The below script references ‘TopOfScript.ps1’, which is specifically related to calling the SharePoint CSOM from PowerShell. Read about it here on sharepointnutsandbolts.
Making the call, providing the delegate
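Something along these lines (a sketch – the parameter names are assumptions, and $ctx is assumed to be an already-authenticated ClientContext set up by TopOfScript.ps1):

# A delegate that simply prints each list's title; any logic taking a single List parameter will do
$printListTitle = [System.Action[Microsoft.SharePoint.Client.List]] {
    param($list)
    Write-Host $list.Title
}

# Run the action against every list in the current web and all webs beneath it
foreachListInWebAndAllDecendentWebs -ctx $ctx -web $ctx.Web -action $printListTitle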
The utility scripts: recursive functions accepting delegate parameters
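A simplified sketch of what these utility functions can look like (error handling omitted; the CSOM assemblies are assumed to be loaded by TopOfScript.ps1):

function foreachListInWeb {
    param(
        [Microsoft.SharePoint.Client.ClientContext]$ctx,
        [Microsoft.SharePoint.Client.Web]$web,
        [System.Action[Microsoft.SharePoint.Client.List]]$action
    )
    # Load the lists of this web, then invoke the delegate for each one
    $ctx.Load($web.Lists)
    $ctx.ExecuteQuery()
    foreach ($list in $web.Lists) {
        $action.Invoke($list)
    }
}

function foreachListInWebAndAllDecendentWebs {
    param(
        [Microsoft.SharePoint.Client.ClientContext]$ctx,
        [Microsoft.SharePoint.Client.Web]$web,
        [System.Action[Microsoft.SharePoint.Client.List]]$action
    )
    # Act on this web's lists, then recurse into every sub web
    foreachListInWeb -ctx $ctx -web $web -action $action
    $ctx.Load($web.Webs)
    $ctx.ExecuteQuery()
    foreach ($subWeb in $web.Webs) {
        foreachListInWebAndAllDecendentWebs -ctx $ctx -web $subWeb -action $action
    }
}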