jsLink: how to display a custom ‘no items’ message

If you use jsLink to override the rendering of list views then you may have noticed that your custom jsLink no longer renders a message when there are no items returned in the view. I am going to discuss with code samples how to display a ‘no items’ message – or at least help you stop overriding it.

You have complete control over how list items are rendered using jsLink

If, alternatively, you have a ‘no items’ message being displayed and just want to modify the text, try this link.

If you don’t know what jsLink is then it is worth learning about it. Try this link.

What am I doing wrong?

Chances are you are making the same mistake that many people make – a mistake that has been replicated again and again online. It doesn’t break anything, but it does prevent the display of the ‘no items’ message and the paging control. When you override Templates.Header you DO NOT need to override Templates.Footer in order to close tags which you opened in the header.

Although doing so seems to make sense, you can rest assured knowing that tags you open in the header will be closed auto-magically after the item templates have completed rendering. In fact, the footer template is rendered in a different table cell to the header and item templates when this all hits the page. Think of the footer template as a distinct block that is rendered after everything else rather than the end of the same block.
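For example, a header/item-only override along these lines (a minimal sketch – the wrapper class and the Title field reference are just illustrative) leaves the default footer, and therefore the ‘no items’ message and paging control, untouched:

(function () {
  var overrides = { Templates: {} };

  // Open a wrapper tag in the header; do NOT register a Footer override to close it
  overrides.Templates.Header = "<ul class='my-custom-list'>";

  // Render each item ('Title' is just an example field)
  overrides.Templates.Item = function (ctx) {
    return "<li>" + ctx.CurrentItem.Title + "</li>";
  };

  SPClientTemplates.TemplateManager.RegisterTemplateOverrides(overrides);
})();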

By overriding the footer template you are also inadvertently overriding the ‘no items’ message and the list view paging control. You can see exactly what you are overriding by inspecting the default values for the templates in clientrenderer.js, which contains the default footer template.

So what should you do?

If you just want the default no items message and can get away with not overriding the footer template (as in the first code snippet), then great – you are all done.

If you want a custom message then check out the link at the very top of the article (in summary: renderCtx.ListSchema.NoListItem = "Nada, nothing, zilch";).

If you want to override the footer template or perhaps you want the message to appear within a wrapper tag defined in the header or you want some custom logic behind which message to display then you can do that too – keep reading.

Doing it yourself

I’ve written a utility function, based on the logic in the OOTB footer template, that makes it easier to manage the ‘no items’ text. This function does NOT replicate the paging functionality. If you need paging and are overriding the footer template then you will need to replicate the paging functionality as well. You will need to look into clientrenderer.js to find out how MSFT do this.
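The original snippet isn’t reproduced here, but below is a hedged reconstruction along the same lines (the HasSearched flag, the list template check and the message strings are illustrative placeholders rather than guaranteed CSR context members):

var getNoItemsMessageHtml = function (renderCtx) {
  var rows = renderCtx.ListData ? renderCtx.ListData.Row : null;
  if (rows && rows.length > 0) {
    return ""; // items were returned, so no message is required
  }
  var message;
  if (renderCtx.ListSchema && renderCtx.ListSchema.HasSearched) {
    // illustrative: the empty result set followed a search term
    message = "Your search returned no results";
  } else if (renderCtx.ListTemplateType === 101) {
    // illustrative: a document library specific message (101 = document library)
    message = "There are no documents in this library yet";
  } else {
    message = "There are no items to show in this view of the list";
  }
  return "<div class='ms-textLarge'>" + message + "</div>";
};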
Looking at this snippet you can see the if-else block where you can define custom messages for different list templates, or for when the lack of results has occurred only after a search term was provided. This sample should not be considered the definitive version; it just does a basic job in line with what happens by default.

Below are two examples of how you may want to use this. The first is by overriding the footer template, and the second is by overriding the header template. The advantage of sticking this code into the header template is that it allows you to wrap the no items message in the same wrapper tags that you defined for the main content.
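Both sketches below assume the getNoItemsMessageHtml helper above; they are illustrative rather than production-ready.

// 1. Override the footer template (note: this also drops the OOTB paging control)
SPClientTemplates.TemplateManager.RegisterTemplateOverrides({
  Templates: {
    Footer: function (renderCtx) {
      return getNoItemsMessageHtml(renderCtx);
    }
  }
});

// 2. Override the header template so the message sits inside the same wrapper markup
SPClientTemplates.TemplateManager.RegisterTemplateOverrides({
  Templates: {
    Header: function (renderCtx) {
      return "<div class='my-wrapper'>" + getNoItemsMessageHtml(renderCtx);
    },
    Item: function (ctx) {
      return "<div>" + ctx.CurrentItem.Title + "</div>";
    }
  }
});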

Paul.

For aiding findability:

  • There are no items to show in this view of the list
  • Your search returned no results
  • Some items might be hidden. Include these in your search
  • Still didn’t find it? Try searching the entire site.

Dynamically generating complex pre-refined search result page URLs

A while ago I blogged about creating a static link to a pre-refined (pre-filtered) search page. This post follows that idea to its natural conclusion by providing a number of JavaScript functions which can dynamically create search result page URLs. These URLs will look something like this:

https://tenant.sharepoint.com/search#Default=%7B%22k%22%3A%22article%22%2C%22r%22%3A%5B%7B%22n%22%3A%22RefinableString20%22%2C%22t%22%3A%5B%22%5C%22%C7%82%C7%824275696c64%5C%22%22%5D%2C%22o%22%3A%22OR%22%2C%22k%22%3Afalse%2C%22m%22%3A%7B%22%5C%22%C7%82%C7%824275696c64%5C%22%22%3A%22Build%22%7D%7D%2C%7B%22n%22%3A%22RefinableString21%22%2C%22t%22%3A%5B%22%5C%22%C7%82%C7%824c6f6e646f6e%5C%22%22%5D%2C%22o%22%3A%22OR%22%2C%22k%22%3Afalse%2C%22m%22%3A%7B%22%5C%22%C7%82%C7%824c6f6e646f6e%5C%22%22%3A%22London%22%7D%7D%5D%7D

The provided scripts support filtering on:

  • a search term
  • multiple refiners
  • multiple values for a refiner, or
  • any combination of the above

It would be worth reading the intro of my earlier article to get a better understanding of what is happening in the snippets provided in this post.

Default Enterprise Search Centre

OF NOTE:

  • As the most common usage will surely be to produce search result page URLs that are refined on a single value, I have written an ‘overload’ function that simplifies calling the method in this scenario
  • The ‘search page URL’ can be provided to the functions in a number of ways including:
    • “/search” : relative to the web; the default search results page for that web will be used. In the case of an Enterprise Search Centre this will be the ‘Everything’ search results page
    • “/search/Pages/peopleresults.aspx” : relative to a specific results page
    • Use an absolute URL if you are out of the context of the SharePoint Online tenant in which the search page resides. This will be true for provider hosted add-ins (apps)
    • If you are writing your own refiner, then pass an empty string and set window.location.hash to the result of the function
  • This script has no dependencies on other libraries (jQuery, SP.js, etc)
  • The hex encoded string must be UTF-8 encoded, but JavaScript strings are natively UTF-16. The particular scenario where this raised an issue for me was the full-width ampersand character, which is often used instead of a standard ampersand as it is XML friendly. The unescape(encodeURIComponent()) combination returns a UTF-8 encoded string and is used to force the required encoding. Thanks to ecmanaut for this solution
  • I took inspiration for the stringToHex method from a post by pussard

The functions:

var getPreRefinedSearchPageUrl = function (searchPageUrl, searchTerm, managedPropertyName, managedPropertyValue) {
  return getComplexPreRefinedSearchPageUrl({
    searchPageUrl: searchPageUrl,
    searchTerm: searchTerm,
    refiners: [
      {
        managedPropertyName: managedPropertyName,
        managedPropertyValues: [
          managedPropertyValue
        ]
      }
    ]
  });
};

// input:
// {
//   searchPageUrl: "/search/Pages/results.aspx",
//   searchTerm: "",
//   refiners: [
//     {
//       managedPropertyName: "RefinableString08",
//       managedPropertyValues: [
//         "Human Resources"
//       ]
//     }
//   ]
// }
var getComplexPreRefinedSearchPageUrl = function (data) {
  var searchObj = {
    "k": data.searchTerm,
    "r": []
  };
  for (var i = 0; i < data.refiners.length; i++) {
    var refiner = data.refiners[i];
    var searchObjRefiner = {
      "n": refiner.managedPropertyName,
      "t": [],
      "o": "OR",
      "k": false,
      "m": {}
    };
    for (var j = 0; j < refiner.managedPropertyValues.length; j++) {
      var refinerValue = refiner.managedPropertyValues[j];
      // Force UTF8 encoding to handle special characters, specifically full-width ampersand
      var managedPropertyValueUTF8 = unescape(encodeURIComponent(refinerValue)); 
      var managedPropertyValueHex = stringToHex(managedPropertyValueUTF8);
      var managedPropertyValueHexToken = "\"ǂǂ" + managedPropertyValueHex + "\"";
      searchObjRefiner.t.push(managedPropertyValueHexToken);
      searchObjRefiner.m[managedPropertyValueHexToken] = refinerValue;
    }
    // Push the refiner once, after all of its values have been added
    searchObj.r.push(searchObjRefiner);
  }
  var searchObjString = JSON.stringify(searchObj);
  var searchObjEncodedString = encodeURIComponent(searchObjString);
  var url = data.searchPageUrl + "#Default=" + searchObjEncodedString;
  return url;
};

var stringToHex = function (tmp) {
  var d2h = function (d) {
    return d.toString(16);
  };
  var str = '',
    i = 0,
    tmp_len = tmp.length,
    c;
  for (; i < tmp_len; i += 1) {
    c = tmp.charCodeAt(i);
    str += d2h(c);
  }
  return str;
};

These are examples of how to call the functions defined above.

var complexUrl = getComplexPreRefinedSearchPageUrl({
  searchPageUrl: "/search/Pages/results.aspx",
  searchTerm: "article",
  refiners: [
    {
      managedPropertyName: "RefinableString20",
      managedPropertyValues: [
        "Build", "Land"
      ]
    },
    {
      managedPropertyName: "RefinableString21",
      managedPropertyValues: [
        "London"
      ]
    }
  ]
});
var basicUrl = getPreRefinedSearchPageUrl("/search/Pages/results.aspx", "", "RefinableString20", "Build");

Paul.

Calling the Office 365 Unified API from JavaScript using ADAL.js

The goal of this post is to provide very basic ‘hello world’ example of how to call the Office 365 Unified API (aka Graph API) using JavaScript and the ADAL.js library. This has recently become possible (May 2015) now that CORS is supported by the Office 365 APIs (most of the individual endpoints support it as well as the unified API).

The ADAL library simplifies the process of obtaining and caching the authentication tokens required to retrieve data from Office 365. It is possible to avoid the ADAL library and handle this yourself, although I would recommend doing so as a learning exercise only.

I failed to find a simple example of how to achieve this; my search results were often filled with examples of calling the APIs from server-side code, or else utilising the Angular.js framework. This example is based on a more complex example.

The following snippet will log to the browser console the results of a call to the files endpoint of the Office 365 unified API, which will return a JSON object containing information about the files in the current user’s OD4B.

Before it will work you must complete the following steps (as described in detail here):

  1. Register an Azure Active Directory App. Note that *every* Office 365 subscription comes with AAD and supports the creation of an app
  2. Associate the required ‘permissions to other services’, in this case ‘Read users files’ via the Office 365 Unified API
  3. Allow implicit flow

Not covered explicitly in the above article but also critical are the following steps:

  • Get the App’s Client ID and copy it into the snippet
The App Client ID
  • Get the Azure Active Directory subscription ID and copy it into the snippet
The subscription ID in Azure Active Directory

Once the above steps have been completed, you can try out the snippet by embedding it in a Script Editor web part, or you can run it externally to SharePoint as part of, say, a provider hosted app.
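Below is a minimal sketch of the kind of snippet described above. It assumes adal.js is already loaded on the page; the tenant and clientId values are placeholders, and error handling is kept to a bare minimum.

// Placeholder values - replace with your own tenant and app registration details
var config = {
  tenant: 'yourtenant.onmicrosoft.com',
  clientId: '00000000-0000-0000-0000-000000000000',
  cacheLocation: 'localStorage'
};
var authContext = new AuthenticationContext(config);

// Complete the implicit flow redirect if we are returning from a sign-in
if (authContext.isCallback(window.location.hash)) {
  authContext.handleWindowCallback();
}

var user = authContext.getCachedUser();
if (!user) {
  authContext.login();
} else {
  // Acquire a token for the unified API and call the files endpoint
  authContext.acquireToken('https://graph.microsoft.com', function (errorDesc, token) {
    if (errorDesc || !token) {
      console.log('Failed to acquire token: ' + errorDesc);
      return;
    }
    var req = new XMLHttpRequest();
    req.open('GET', 'https://graph.microsoft.com/beta/me/files', true);
    req.setRequestHeader('Authorization', 'Bearer ' + token);
    req.onload = function () {
      console.log(JSON.parse(req.responseText));
    };
    req.send();
  });
}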

NOTE: I found that the call to the files endpoint is failing for certain users. I am still unsure whether this is due to external vs internal users (it is working for internal [.onmicrosoft.com] users) or whether it could be a licensing issue. The /beta/me endpoint is working in all cases.

Paul.

GLOSSARY

CORS: Cross-Origin Resource Sharing
ADAL: Active Directory Authentication Library
OD4B: OneDrive for Business

Search Schema Scoping in SharePoint Online

For solutions that are contained in a single site collection, span a small number of site collections, or sit in a tenant where the other solutions are not trusted or are unknown, I have a strong preference for using site collection scoped search schema rather than tenant scoped.

Side note: I am yet to come across a situation where I would use site scoped search schema. In my mind, the existence of search schema at this level only serves to confuse.

Search Schema hierarchy in SharePoint Online. There is also site scoped search schema at the lowest level, which is not pictured here.

For those that aren’t fully aware, search schema (the set of managed properties that are accessible via the search framework) can be provisioned at the tenant, site collection, or site scope. These scopes are hierarchical such that managed properties are inherited from the tenant scope down to the site scope but can be overridden along the way. There are some good articles that delve into this in more detail.

By provisioning search schema at the site collection level you are mitigating the risks of errors related to other solutions changing the properties which your solution relies upon. This is especially relevant in SharePoint Online where all solutions in the tenant have to share a common set of RefinableTypeXX managed properties.

There are some important exceptions, of course.

  1. People Search, a.k.a User Profile Search, a.k.a Local People Results
    In SharePoint Online, people properties are indexed on a very slow schedule. We requested more information from Microsoft regarding this and were told that this schedule is ‘confidential’. I have found that when using site-collection scoped managed properties it can take *weeks* for them to get populated. I have found much better (although still poor) performance using tenant scoped properties (usually within a few days). Assuming you do require custom search schema for people properties I would still recommend provisioning all remaining managed properties (all those not mapped to people properties) at the site collection level.
  2. Many site collections
    Of course, having many site collections which require the same search schema is a valid reason to go tenant scoped. This is purely about the ongoing management of the properties. A solid scripted deployment procedure should not care whether you are provisioning search schema to 1 or 50 site collections – but anyone maintaining the solution will definitely care if they have to update 50 schemas manually, or are suddenly required to script something which they feel should be *easy*. Even in this scenario you should still weigh how much you trust the other solutions in the tenant against the impact of finding out one day that your managed properties are mapped incorrectly. Depending on your solution this could lead to errors that are left undetected, or conversely could obviously break your home page.

Paul.

SharePoint Online remote authentication (and Doc upload)

The SharePoint REST API is touted as being the tool to provide inter-platform integration with SharePoint Online. However, outside of .NET the authentication piece is not so straightforward. App authentication solves this issue for registered apps but I want to show how remote user authentication can be achieved, regardless of platform.

In a .NET environment please refer to the ADAL library for authentication rather than writing it yourself.

The goal of this post is to provide examples of the HTTP requests which need to be made in order to authenticate with SharePoint Online. It then provides an example of using the same technique to upload a document and update metadata, just to prove it all works 🙂

The types of applications where this kind of approach may be necessary include: a Java application, a PHP application, or a JavaScript application where there is otherwise no SharePoint Online authentication context and the decision has been made (for whatever reason) that user authentication is most appropriate (as opposed to app authentication).

Edit: This approach will not work in a JavaScript environment due to cross-domain restrictions enforced by browsers (unless of course you are on the same domain, in which case you don’t need to worry about any of this anyway). The ADAL.js library is available for the cross-domain JS scenario. I have posted an example here: https://paulryan.com.au/2015/unified-api-adal/

I wrote about using the SharePoint REST API here (and background here, and here).

I will be providing examples of the requests using the ‘Advanced REST Client’ Google Chrome extension.

Authenticate

The authentication piece comes in a few steps:

  • Get the security token
  • Get the access token
  • Get the request digest

Get the security token

First we must provide the username and password of a user with Contribute access to the target library (the Roster Data library in my case), along with the URL at which we want access (the [endpoint] value in the XML below), to the SharePoint Online Security Token Service.

This is done by POSTing the following XML as the request body to:
https://login.microsoftonline.com/extSTS.srf

<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
      xmlns:a="http://www.w3.org/2005/08/addressing"
      xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
  <s:Header>
    <a:Action s:mustUnderstand="1">http://schemas.xmlsoap.org/ws/2005/02/trust/RST/Issue</a:Action>
    <a:ReplyTo>
      <a:Address>http://www.w3.org/2005/08/addressing/anonymous</a:Address>
    </a:ReplyTo>
    <a:To s:mustUnderstand="1">https://login.microsoftonline.com/extSTS.srf</a:To>
    <o:Security s:mustUnderstand="1"
       xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
      <o:UsernameToken>
        <o:Username>[username]</o:Username>
        <o:Password>[password]</o:Password>
      </o:UsernameToken>
    </o:Security>
  </s:Header>
  <s:Body>
    <t:RequestSecurityToken xmlns:t="http://schemas.xmlsoap.org/ws/2005/02/trust">
      <wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
        <a:EndpointReference>
          <a:Address>[endpoint]</a:Address>
        </a:EndpointReference>
      </wsp:AppliesTo>
      <t:KeyType>http://schemas.xmlsoap.org/ws/2005/05/identity/NoProofKey</t:KeyType>
      <t:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</t:RequestType>
      <t:TokenType>urn:oasis:names:tc:SAML:1.0:assertion</t:TokenType>
    </t:RequestSecurityToken>
  </s:Body>
</s:Envelope>

Requesting the security token

The response from the request includes the security token needed to get the access token. It is the value which has been stricken out in orange in the image below.

Response including the security token

Get the access token

Once the security token has been retrieved it must be used to fetch the access token. This can be done by POSTing to the following URL with the security token as the request body:
https://yourdomain.sharepoint.com/_forms/default.aspx?wa=wsignin1.0

Request to fetch the access token, passing the security token
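For reference, a hedged sketch of the raw request (the token value is elided; it is posted exactly as returned in the previous response, as the entire request body):

POST /_forms/default.aspx?wa=wsignin1.0 HTTP/1.1
Host: yourdomain.sharepoint.com

t=EwB...   (the security token from the extSTS.srf response)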

The response from this request includes a couple of cookies which must be passed as headers with all future requests. They are marked with the ‘Set-Cookie’ header. We need the ones beginning with rtFa= and FedAuth=. They can be seen in the image of the response headers below.

Response includes the access token cookies

Get the request digest

The request digest is a .NET security feature that ensures any update requests are coming from a single session. It must also be included with any POST requests.

The request digest is fetched by POSTing to: https://yourdomain.sharepoint.com/_api/contextinfo
The access token cookies must be included as Cookie headers with the request as shown in the image below.

Request to fetch the request digest, passing access tokens
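For reference, a hedged sketch of that request (cookie values elided):

POST /_api/contextinfo HTTP/1.1
Host: yourdomain.sharepoint.com
Cookie: rtFa=...; FedAuth=...
Content-Length: 0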

The response from the request will include the request digest in the XML response as in the image below. The entire contents of the FormDigestValue tag will be required, including the date time portion and timezone offset (-0000).

Response containing the request digest value

Upload a document with metadata

Upload the document

Now that we have all the authentication headers we can make update calls into SharePoint Online as the user whose credentials we originally supplied when fetching the security token.

In order to upload a document perform the following POST request:
https://yourdomain.sharepoint.com/subweb/_api/web/lists/getbytitle('list name')
/rootfolder/files/add(url='filename.csv',overwrite=true)

A number of headers must be sent with the request including the access token cookies, the request digest (X-RequestDigest) and the accept header, as shown in the image below. The body of the request must contain the content of the document being uploaded.

Request to upload a document to SharePoint Online
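For reference, a hedged sketch of the upload request (digest and cookie values elided, CSV content truncated):

POST /subweb/_api/web/lists/getbytitle('list name')/rootfolder/files/add(url='filename.csv',overwrite=true) HTTP/1.1
Host: yourdomain.sharepoint.com
Accept: application/json;odata=verbose
X-RequestDigest: 0x1B2C...,06 May 2015 03:25:18 -0000
Cookie: rtFa=...; FedAuth=...

Column1,Column2
Value1,Value2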

The response of this request contains some minimal metadata about the file and can be safely ignored. However, for completeness here it is.

Response JSON from the document upload request

The unique ID property could be used to fetch the document in order to perform metadata updates, rather than by URL as is done in the following example.

Update document metadata

The final step which needs to take place is to update the document in SharePoint with the relevant metadata.

This can be done with yet another POST request. This time to the following URI:
https://yourdomain.sharepoint.com/subweb/_api/web/lists/getbytitle('listTitle')
/rootfolder/files/getbyurl(url='serverRelFileUrl')/listitemallfields/validateupdatelistitem

All the headers sent with the previous request must be sent with this request as well. The request body is a JSON object which defines the metadata fields to be updated. The FieldName and FieldValue properties must be updated as required. Note that the FieldName property must be equal to the field internal name, not the field display name. An example of this is in the image below.

Request to set metadata on a document in SharePoint
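For reference, the request body takes roughly this shape (the field names and values below are illustrative):

{
  "formValues": [
    { "FieldName": "Title", "FieldValue": "Roster import May 2015" },
    { "FieldName": "MyInternalFieldName", "FieldValue": "Some value" }
  ],
  "bNewDocumentUpdate": true
}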

The response from this request provides success notification for each individual field update as shown below.

Response from the document metadata update request

So, this should now be enough to write an application in any server-side language which supports web requests and have it work against SharePoint Online. I’d love to see some implementations of this, please comment if you’ve done it.

I’d like to acknowledge the following posts as they were invaluable references:

 

Key difficulties deploying a SharePoint Online solution using CSOM

I have been developing a console app that utilises the SharePoint C# CSOM to deploy a solution to SharePoint Online (a.k.a Office 365 SharePoint). The solution involves more than just a wsp (although it has one of those too). I have encountered a few difficulties during this process and this blog will discuss those:

  • (Re)creating a site collection
  • Importing a large-ish taxonomy
  • Uploading and installing a sandboxed solution (that contains only declarative elements)
  • Hooking up of taxonomy and (root site) lookup columns
  • Pre-creating a number of sites with specific features enabled (including the root site)

Before I go any further, for those of you reading this before doing something similar yourselves, please be aware of a few constraints which caught me by surprise:

  • You can’t leverage the same import taxonomy function that is available in Term store management. If you already have files in that format you will need some custom code (I have an example later on) or you may want to import from a more robust XML formatted document
  • The CSOM does not support uploading or activating sandboxed solutions! However, there is a CodePlex project that assists with this. I also include the dll later in the post that I have rebuilt with references to the latest v16 Microsoft.SharePoint.Client dlls.
  • The CSOM does not support activating web scoped features! You can active site scoped but not web scoped. You need to use web templates to achieve this. Again, I will cover this in some more depth later on.


Deleting and recreating a site collection

The initial step of the deployment process involves creating a new site collection (having deleted it first as required). In order to perform actions at this scope (tenant) you cannot create your client context in the usual manner with a reference to the site collection – it is yet to exist, and the delete site collection and empty recycle bin operations need the tenant scope too. Instead you must create the client context passing in the tenant admin site URL.
This is the one that looks like this: https://<tenant>-admin.sharepoint.com

You can then create a Microsoft.Online.SharePoint.Tenant object by passing the ‘admin’ client context to its constructor. This object requires a reference to the Microsoft.Online.SharePoint.Client.Tenant assembly which is available by downloading and installing the SharePoint Server 2013 Client Components SDK. The assembly can then be found here: C:\Program Files\SharePoint Client Components\16.0\Assemblies

The tenant object provides the methods required to perform the create and delete site collection actions. This process involves a lot of waiting about for deletion to complete, and then provisioning to complete. Unfortunately you can’t continue with other actions until this has occurred. I found this to take upwards of three minutes.

A link to the relevant code that I used to achieve this can be found here: https://gist.github.com/paulryan/cbfaa966571d6a9cdb8b

Importing taxonomy

As mentioned above, you can’t pass those CSV files directly to the CSOM and have it import it all for you. In my scenario we had already developed dozens of term sets in the form of these CSV files (they had been imported during a discovery phase), so it was important that I could support the import of taxonomy in this form. I wrote code to support the import of these files, but only to the point that it meets my immediate requirements. Please use the following as a rough guide only as it is not fully featured (or tested beyond the happy path).

The code I wrote to support this can be found here: https://gist.github.com/paulryan/e461f8bac28336b05109#file-importtaxonomycsom-cs

Uploading and activating a sandboxed solution

There is a CodePlex project that provides this functionality (as well as some authentication utilities) that I mentioned above. It performs web requests against the UI, and I am very glad someone else has already done this before me! It was originally created when SharePoint 2010 was present in the cloud and references the v14 Microsoft.SharePoint.Client assemblies accordingly. If you don’t mind maintaining references to both v14 and v16 assemblies then this might be fine. I have instead rebuilt the source having replaced the references with the v16 equivalents.

You can download it here: SharePointOnline.Helper.dll

FYI: v14 is SharePoint 2010, v15 is SharePoint 2013, v16 is SharePoint 2013 Online specific

Activating web features

Actually there isn’t a lot more to say here other than you must use web templates if you need to create sites with features enabled as part of the deployment process, as it can’t (currently) be done using the CSOM. I would recommend using the web template for nothing other than activating features, and putting all other declarative elements in a feature. This will provide the best upgrade experience in the future.

Hooking up taxonomy columns

The best place to start is almost certainly a reference to Chris O’Brien’s blog on this here. As I have the luxury of being able to run further deployment code after uploading/activating the sandboxed solution, I opted to avoid having to rebuild the solution for various environments and instead hook up the columns using the CSOM and a mapping. There is a catch with this though.

If your list instance is built from a list template which defines the managed metadata columns then updating the site column via the CSOM fails to push down the new SspID. To get around this, DO NOT include managed metadata column definitions as part of the list definition (in the fields element). When you run the CSOM to update the site columns it will update the content type and add the column to the list instance with the correct SspID.

Good luck building your SharePoint Online CSOM deployment framework!

Running SP.UI.Status.addStatus on page load

This post is about a specific issue when running the SP.UI command ‘addStatus’ on page load as well as short discussion on the JavaScript page life cycle.

Initial Problem
When running the SP.UI.Status.addStatus command upon page load, the status message was being hidden almost immediately in Chrome but worked as expected in IE.

Scenario
We have a concept of ‘archived’ sites. A control is embedded on the page layout for site home pages (default.aspx) which renders javascript which should display a ‘this site is archived’ message. The message should be displayed using SP.UI.Status.addStatus and was initially implemented as follows (the script is being rendered dynamically using an ASP literal control, I’ll just be considering the final output):

// This is an example of what NOT to do
ExecuteOrDelayUntilScriptLoaded(function () {
    SP.UI.Status.addStatus("This is an archived site", "removed for brevity");
}, 'sp.js');

This worked as expected in IE, however the status message would appear for a split second and then disappear in Chrome.

Investigation

The first piece of the puzzle: What is happening?
After getting deep with Chrome DevTools I found the script responsible for hiding the status message. SharePoint utilises the document.onreadystatechange handler to run a function called fnRemoveAllStatus. I think you can guess what it achieves. Why this is being run at this point is beyond me. Importantly, I don’t want to prevent it running in case it serves a purpose that I’m unaware of.

The second piece of the puzzle: How’s that work?
If a function is assigned to document.onreadystatechange it will be run as many as four times (depending on when in the cycle the assignment occurs), once for each transition between the following sequence of states:

  1. ‘uninitialized’
  2. ‘loading’
  3. ‘interactive’
  4. ‘complete’

Good practice would have the function check for the current state and only act once, when in the correct state. Naturally this logic is absent from the fnRemoveAllStatus function.
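For example, a well behaved handler would look something like this (a minimal sketch):

document.onreadystatechange = function () {
  // Only act once, when the document reaches the state we care about
  if (document.readyState === 'complete') {
    // safe to run logic that must happen after fnRemoveAllStatus has fired
  }
};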

The third piece of the puzzle: What is running when?
$(document).ready vs $(window).load vs ExecuteOrDelayUntilScriptLoaded
The difference between these options in regards to what we have just discussed is that $(document).ready and ExecuteOrDelayUntilScriptLoaded run when document.readyState is ‘interactive’ whereas $(window).load runs when document.readyState is ‘complete’.

Laying the final puzzle piece: Why’s it working in IE but not Chrome?
When running the code in IE, rather than executing the script block during the ‘interactive’ readyState it was being executed after transitioning to the ‘complete’ readyState, and this meant that it was running after the fnRemoveAllStatus call as we desire. I believe this happens because the sp.js file is being added via a script link control with ‘LoadAfterUI’ set to true, which is only understood by IE. I haven’t investigated this last point; if I am wrong please leave a comment about it below.

Solution
So, the solution is rather simple once you have this understanding. Wrap the command in $(window).load to ensure it occurs after the fnRemoveAllStatus method is called during the transition into the ‘complete’ state. Like this:

$(window).load(function() {
    ExecuteOrDelayUntilScriptLoaded(function () {
        SP.UI.Status.addStatus("This is an archived site", "removed for brevity");
    }, 'sp.js');
});

NB: ExecuteOrDelayUntilScriptLoaded will also load scripts which are marked to load on-demand. If you are testing in Chrome you may begin to believe it unnecessary to use ExecuteOrDelayUntilScriptLoaded when using $(window).load as all scripts have loaded by then. This is true for browsers other than IE. In IE we must use this function to ensure that the script is loaded at all.

As a final note I’d like to add that apart from this specific case I would suggest using $(document).ready rather than $(window).load as it will mean that the page loads faster (unless of course your script requires all resources to be loaded before acting. e.g. you are working with images of undefined sizes).

Excel data connection with Access 2013 App

I have had a hard time creating data connections with an Access 2013 App database. After a good few hours spent scouring the internet for a solution, and a good few more hours uncovering a “solution” that is underwhelming at best, I am happy to share with you my findings. I really hope that someone will leave a comment with a better solution at some point in the future.

This blog post will provide a step-by-step guide on how to achieve a data connection from an Excel workbook (which can be hosted in SharePoint) to the SQL database behind an Access 2013 App. Once this is achieved, a good BI developer should have no trouble visualising the data captured via the Access App with the help of pivot tables, slicing and graphing.

The first step is to identify the server address and database to connect to along with the credentials required to authenticate.

  1. This can be done by navigating to the Access App, clicking the ‘settings’ icon, then clicking ‘Customize in Access’

    Launching the app database in Access
  2. Download the .accdw file and open it to launch Access
  3. Click ‘FILE’ in the ribbon
  4. Click ‘Manage’
  5. In the drop-down menu ensure that ‘From Any Location’ and ‘Enable Read-Only Connection’ are highlighted with pink squares. If not, click them

    Determining the Access database location and credentials
  6. Click ‘View Read-Only Connection Information’
  7. Take note of Server, Database, UserName, and Password from this dialog as you will need them all later

    Access connection information dialog

Next we use this information to create the data connection.

  1. Launch Excel
  2. Create a new external data connection ‘From Data Connection Wizard’

    Launching the Excel data connection wizard
  3. Click ‘Other/Advanced’, then ‘Next’
  4. Click ‘SQL Server Native Client 11.0’, then ‘Next’
  5. On the ‘Data Link Properties’ dialog, uncheck the ‘Blank Password’ box and check the ‘Allow saving password’ box, then input the server name, user name, password, and database

    Configuring the data connection. Ensure you provide the database
  6. Test the connection, you should see a dialog box with ‘Test Connection Succeeded’
  7. Note that it is when you attempt to make a data connection without providing the database that you get the following error, which I bet led you to this post:

    Failure to connect to the Access 2013 App's SQL database
    Cannot open server ‘xxxxxxxxxx’ requested by the login. Client with IP address ‘00.000.000.000’ is not allowed to access the server. To enable access, use the Windows Azure Management Portal or run sp_set_firewall_rule on the master database to create a firewall rule for this IP address or address range. It may take up to five minutes for this change to take effect.
  8. You can now click ‘Ok’
  9. Uncheck the ‘Use Trusted Connection’ checkbox and replace the existing password with the correct one. Click ‘Ok’
  10. Select a table and click ‘Next’. You can get fancy here later, let’s just get it working first.
  11. Click ‘Finish’
  12. Click ‘Ok’
  13. The data connection will fail with the following error:

    Connection error
    Initialization of the data source failed. Check the database server or contact your administrator. Make sure the external database is available, and then try the operation again. If you see this message again, create a new data source to connect to the database.

The final frustration!

On the next dialog, uncheck the ‘Use Trusted Connection’ checkbox and replace the existing password with the correct one. Click ‘Ok’.
The second time it works. This process of providing the connection credentials twice is required not only upon the creation of the connection but also every time the data needs to be refreshed. It makes for a rather poor UX and it is a pretty awful scenario to have to explain to a client.

I really want to believe that there is a setting (most probably under the ‘All’ tab on the ‘Data Link Properties’ dialog) that will work around this issue, however I am yet to find it. Please leave a comment if you find a solution to this issue.

 

Reading meeting invites from SPEmailMessage

SharePoint allows developers to create receivers for the EmailReceived event which occurs when a list receives an email. I have a use case which requires me to leverage this event in order to forward incoming email to a set of users according to a number of business specific rules. To achieve this I must create a custom email message object (we are using aspNetEmail to send email) from the message object received in the event receiver. I need to be able to extract all of the parts from the SPEmailMessage to create this new object. The SPEmailMessage object is pretty easy to work with; the attachments are in the attachments collection, the subject is in the subject property – you get the idea. However, there is one ‘property’ that isn’t as trivial to extract from the object: a meeting invite.


I will explain how to extract the meeting invite below, but first let me provide a basic overview of how an email is stored in eml format. The eml format is relevant because the SPEmailMessage can be constructed from an eml stream, and also because when SharePoint is configured to attach incoming emails to discussion items it does so using eml. The first lines of an email in eml format are the email headers (think properties). These are simply key value pairs and include things like ‘to’, ‘from’, ‘date’, ‘subject’ and many other less obvious properties including threading info. Then come the MIME body parts. These should represent the ‘same’ content in different formats (mime types). Typically this includes a text/plain block and a text/html block. A client which supports HTML will render the latter body part where otherwise it might render the plain text body part as the email content. Finally, attachments are listed out with their own set of headers and the binary content (commonly represented as a base64 encoded string).
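A heavily simplified illustration of that structure (the headers, two alternative body parts and a text/calendar part; all values are made up):

To: roster@contoso.sharepoint.com
From: someone@contoso.com
Subject: Project kick-off
MIME-Version: 1.0
Content-Type: multipart/alternative; boundary="boundary-1"

--boundary-1
Content-Type: text/plain; charset="utf-8"

Plain text version of the invite body
--boundary-1
Content-Type: text/html; charset="utf-8"

<html><body>HTML version of the invite body</body></html>
--boundary-1
Content-Type: text/calendar; method=REQUEST

BEGIN:VCALENDAR
BEGIN:VEVENT
UID:040000008200E00074C5B7101A82E008
DTSTART:20150506T090000Z
DTEND:20150506T100000Z
SUMMARY:Project kick-off
LOCATION:Meeting room 1
END:VEVENT
END:VCALENDAR
--boundary-1--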

When a meeting invite is sent to a SharePoint list without attachments, the meeting invite itself can be found in the attachments collection of the SPEmailMessage object. But don’t be fooled. Although it is present in this situation, if you send the same meeting invite with an attached document then – sad face – the meeting invite is not in the attachment collection (the attached documents will be). Nor can the invite be found in any of the public properties on the email object. It’s not that strange that the meeting invite isn’t present in the attachments collection; it is strange that it can ever be found there. I say this because if we consider the eml format, a meeting invite is stored as another MIME body part (of type text/calendar) and not as an attachment at all.

Eventually, after much investigation and reflection, we discovered a way to read the mime body parts directly from the email using only Microsoft libraries with the help of reflection. Once we have the meeting invite as a memory stream we parse it into a dictionary of string properties. The dictionary contains keys such as “LOCATION”, “SUMMARY”, “DESCRIPTION”, “DTSTART”, “DTEND” and “UID”, along with any other data stored as part of the invite. See the example code below:

Finally, I’d like to note that we only have requirements to support Outlook clients at this point so please consider that your mileage may vary when you get it out into the real world. Good luck.