If you use jsLink to override the rendering of list views then you may have noticed that your custom jsLink no longer renders a message when there are no items returned in the view. I am going to discuss with code samples how to display a ‘no items’ message – or at least help you stop overriding it.
If, alternatively, you have a ‘no items’ message being displayed and just want to modify the text, try this link.
If you don’t know what jsLink is then it is worth learning about it. Try this link.
What am I doing wrong?
Chances are you are making the same mistake that many people make. It is a mistake that has been replicated again and again online and doesn’t break anything, but it does prevent the display of the ‘no items’ message and the paging control. When you override Templates.Header you DO NOT need to override Templates.Footer in order to close tags which you opened in the header.
Although doing so seems to make sense, you can rest assured knowing that tags you open in the header will be closed auto-magically after the item templates have completed rendering. In fact, the footer template is rendered in a different table cell to the header and item templates when this all hits the page. Think of the footer template as a distinct block that is rendered after everything else rather than the end of the same block.
By overriding the footer template you are also inadvertently overriding the ‘no items’ message and the list view paging control. You can see exactly what you are overriding by inspecting the default values for the templates. Below is a snippet from clientrenderer.js which shows the default footer template.
So what should you do?
If you just want the default no items message and can get away with not overriding the footer template (a header-only override, as sketched below), then great – you are all done.
If you want a custom message then check out the link at the very top of the article (in summary: renderCtx.ListSchema.NoListItem = "Nada, nothing, zilch";).
If you want to override the footer template or perhaps you want the message to appear within a wrapper tag defined in the header or you want some custom logic behind which message to display then you can do that too – keep reading.
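To illustrate the first option, here is a minimal sketch of a header-only override; the markup, class names, and reliance on a Title column are illustrative:

```javascript
// jsLink override: customise the header and item templates only.
// Deliberately NO Templates.Footer override, so the default footer
// (the 'no items' message and the paging control) still renders.
(function () {
    var overrides = { Templates: {} };

    overrides.Templates.Header = "<ul class='my-list'>";
    overrides.Templates.Item = function (renderCtx) {
        return '<li>' + renderCtx.CurrentItem.Title + '</li>';
    };
    // The <ul> opened in the header is closed automatically after the
    // item templates have rendered; no closing tag is needed here.

    SPClientTemplates.TemplateManager.RegisterTemplateOverrides(overrides);
})();
```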
Doing it yourself
I’ve written a utility function, based on the logic in the OOTB footer template, that makes it easier to manage the ‘no items’ text. This function does NOT replicate the paging functionality. If you need paging and are overriding the footer template then you will need to replicate the paging functionality as well. You will need to look into clientrenderer.js to find out how MSFT do this.
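A sketch of such a utility is below. It is hedged: the renderCtx property names match the client-side rendering context, but the InplaceSearchQuery check and the message wording are assumptions you should adapt:

```javascript
// Returns the 'no items' message appropriate to the current render,
// or an empty string when items were returned.
function getNoItemsMessage(renderCtx) {
    if (renderCtx.ListData.Row && renderCtx.ListData.Row.length > 0) {
        return ''; // items were returned; no message required
    }
    // Assumption: an in-place search term surfaces in the URL as InplaceSearchQuery
    if (/InplaceSearchQuery=[^&#]+/.test(window.location.href)) {
        return 'Your search returned no results';
    }
    // Branch per list template as required; 109 is the picture library template
    if (renderCtx.ListTemplateType === 109) {
        return 'There are no pictures to show in this view';
    }
    return 'There are no items to show in this view of the list';
}
```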
Looking at this snippet you can see the if-else block where you can define custom messages for different list templates, or for when the lack of results has occurred only after a search term was provided. This sample should not be considered the definitive version; it just does a basic job in line with what happens by default.
Below are two examples of how you may want to use this. The first is by overriding the footer template, and the second is by overriding the header template. The advantage of sticking this code into the header template is that it allows you to wrap the no items message in the same wrapper tags that you defined for the main content.
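Hedged sketches of both usages follow; `overrides` is a template-overrides object like the one registered earlier, and the wrapper markup is illustrative:

```javascript
// Usage 1: via the footer template (remember this replaces the default
// footer, so paging must be re-implemented if you need it)
overrides.Templates.Footer = function (renderCtx) {
    return "<div class='no-items'>" + getNoItemsMessage(renderCtx) + '</div>';
};

// Usage 2: via the header template, so the message sits inside the same
// wrapper tags as the main content (they are closed automatically)
overrides.Templates.Header = function (renderCtx) {
    return "<ul class='my-list'><li class='no-items'>" +
        getNoItemsMessage(renderCtx) + '</li>';
};
```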
For aiding findability:
There are no items to show in this view of the list
Your search returned no results
Some items might be hidden. Include these in your search
Still didn’t find it? Try searching the entire site.
It would be worth reading the intro of my earlier article to get a better understanding of what is happening in the snippets provided in this post.
As the most common usage will surely be to produce search result page URLs that are refined on a single value, I have written an ‘overload’ function that simplifies calling the method in this scenario; a sketch of both functions follows the notes below.
The ‘search page URL’ can be provided to the functions in a number of ways including:
“/search” : a server-relative URL to the web. The default search page for that web will be used; in the case of an Enterprise Search Centre this will be the ‘Everything’ search results page
“/search/Pages/peopleresults.aspx” : a server-relative URL directly to a specific search results page
Use an absolute URL if you are out of the context of the SharePoint Online tenant in which the search page resides. This will be true for provider hosted add-ins (apps)
If you are writing your own refiner, then pass an empty string and set window.location.hash to the result of the function
This script has no dependencies on other libraries (jQuery, SP.js, etc)
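The original functions are not reproduced here, but below is a hedged sketch of the idea. The #Default={…} hash is the format SharePoint search results pages parse; the function names, and the hex-encoded ‘ǂǂ’ refinement token used for string properties, are assumptions to verify against your own pages:

```javascript
// Hex-encode a string refiner value (sufficient for ASCII values)
function toHex(value) {
    var hex = '';
    for (var i = 0; i < value.length; i++) {
        hex += value.charCodeAt(i).toString(16);
    }
    return hex;
}

// Build a search results URL refined on managed property values
function buildRefinedSearchUrl(searchPageUrl, queryText, refinerName, refinerValues) {
    var state = {
        k: queryText,
        r: [{
            n: refinerName,
            t: refinerValues.map(function (v) { return '"ǂǂ' + toHex(v) + '"'; }),
            o: 'and',
            k: false,
            m: null
        }]
    };
    return searchPageUrl + '#Default=' + encodeURIComponent(JSON.stringify(state));
}

// 'Overload' for the common case: refine on a single value
function buildSearchUrlForValue(searchPageUrl, refinerName, refinerValue) {
    return buildRefinedSearchUrl(searchPageUrl, '', refinerName, [refinerValue]);
}

// e.g. buildSearchUrlForValue('/search', 'RefinableString00', 'Project X');
```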
The ADAL library simplifies the process of obtaining and caching the authentication tokens required to retrieve data from Office 365. It is possible to avoid the ADAL library and handle this yourself, although I would recommend doing so as a learning exercise only.
I failed to find a simple example of how to achieve this; my search results were often filled with examples of calling the APIs from server-side code, or else utilising the Angular.js framework. This example is based on a more complex one.
The following snippet will log to the browser console the results of a call to the files endpoint of the Office 365 unified API, which will return a JSON object containing information about the files in the current user’s OD4B.
Register an Azure Active Directory App. Note that *every* Office 365 subscription comes with AAD and supports the creation of an app
Associate the required ‘permissions to other services’, in this case ‘Read users files’ via the Office 365 Unified API
Allow implicit flow
Not covered explicitly in the above article but also critical are the following steps:
Get the App’s Client ID and copy it into the snippet
Get the Azure Active Directory subscription ID and copy it into the snippet
Once the above steps have been completed, you can try out the snippet by embedding it in a Script Editor web part, or you can run it externally to SharePoint as part of, say, a provider hosted app.
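A hedged reconstruction of such a snippet is below. It assumes adal.js is already loaded on the page; the client ID and tenant values are placeholders, and /beta/me/files is the unified API files endpoint as it existed at the time of writing:

```javascript
var authContext = new AuthenticationContext({
    clientId: '00000000-0000-0000-0000-000000000000', // your AAD app's client ID
    tenant: '00000000-0000-0000-0000-000000000000'    // your AAD (directory) ID
});

if (authContext.isCallback(window.location.hash)) {
    // This load is the redirect back from AAD; cache the tokens
    authContext.handleWindowCallback();
} else if (!authContext.getCachedUser()) {
    authContext.login(); // redirects to AAD for sign-in
} else {
    authContext.acquireToken('https://graph.microsoft.com', function (error, token) {
        if (error || !token) {
            console.log('Token acquisition failed: ' + error);
            return;
        }
        var request = new XMLHttpRequest();
        request.open('GET', 'https://graph.microsoft.com/beta/me/files', true);
        request.setRequestHeader('Authorization', 'Bearer ' + token);
        request.onload = function () {
            console.log(JSON.parse(request.responseText)); // the user's OD4B files
        };
        request.send();
    });
}
```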
NOTE: I found that the call to the files endpoint fails for certain users. I am still unsure whether this is due to external vs internal users (it is working for internal [.onmicrosoft.com] users) or whether it could be a licensing issue. The /beta/me endpoint is working in all cases.
CORS: Cross-Origin Resource Sharing
ADAL: Active Directory Authentication Library
OD4B: OneDrive for Business
For solutions that are contained in a single site collection, span a small number of site collections, or live in a tenant where the other solutions are untrusted or unknown, I have a strong preference for site collection scoped search schema rather than tenant scoped.
Side note: I am yet to come across a situation where I would use site scoped search schema. In my mind, the existence of search schema at this level only serves to confuse.
For those that aren’t fully aware, search schema (the set of managed properties that are accessible via the search framework) can be provisioned at the tenant, site collection, or site scope. These scopes are hierarchical such that managed properties are inherited from the tenant scope down to the site scope but can be overridden along the way. There are some good articles that delve into this in more detail.
By provisioning search schema at the site collection level you are mitigating the risks of errors related to other solutions changing the properties which your solution relies upon. This is especially relevant in SharePoint Online where all solutions in the tenant have to share a common set of RefinableTypeXX managed properties.
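As a concrete sketch (hedged: it assumes you have exported a search configuration XML file from Site Settings, and the URL and credentials are placeholders), site collection scoped schema can be provisioned via the CSOM like this:

```csharp
using System.IO;
using System.Security;
using Microsoft.SharePoint.Client;
using Microsoft.SharePoint.Client.Search.Administration;
using Microsoft.SharePoint.Client.Search.Portability;

using (var ctx = new ClientContext("https://contoso.sharepoint.com/sites/mysolution"))
{
    var password = new SecureString();
    foreach (var c in "password") password.AppendChar(c); // demo only
    ctx.Credentials = new SharePointOnlineCredentials("admin@contoso.onmicrosoft.com", password);

    // SPSite = site collection scope (SPWeb and SPSiteSubscription also exist)
    var owner = new SearchObjectOwner(ctx, SearchObjectLevel.SPSite);
    new SearchConfigurationPortability(ctx)
        .ImportSearchConfiguration(owner, File.ReadAllText("SearchConfiguration.xml"));
    ctx.ExecuteQuery();
}
```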
There are some important exceptions, of course.
People Search, a.k.a User Profile Search, a.k.a Local People Results
In SharePoint Online, people properties are indexed on a very slow schedule. We requested more information from Microsoft regarding this and were told that this schedule is ‘confidential’. I have found that when using site-collection scoped managed properties it can take *weeks* for them to get populated. I have found much better (although still poor) performance using tenant scoped properties (usually within a few days). Assuming you do require custom search schema for people properties I would still recommend provisioning all remaining managed properties (all those not mapped to people properties) at the site collection level.
Many site collections
Of course, having many site collections which require the same search schema is a valid reason to go tenant scoped. This is purely due to the ongoing management of the properties. A solid scripted deployment procedure should not care whether you are provisioning search schema to 1 or 50 site collections – but anyone maintaining the solution will definitely care if they have to update 50 schemas manually, or are suddenly required to script something which they feel should be *easy*. Even in this scenario you should still weigh how much you trust other solutions in the tenant against the impact of finding out one day that your managed properties are mapped incorrectly. Depending on your solution this could lead to errors that are left undetected, or conversely could obviously break your home page.
Yammer and SharePoint Online are becoming more and more integrated. Recently, with the Yammer Embed widget supporting SSO from Office 365 to Yammer, we are in a situation where we can perform actions against Yammer from SharePoint Online without requiring further authentication.
This opens up opportunities for utilising the Yammer SDK and Yammer REST API to build all kinds of Yammer interactions directly into your SharePoint pages. It also allows us to start implementing some of those anti-patterns that customers want but Yammer doesn’t want to support as they’re against ‘social freedom’. A prime example of this is forcing users into groups. In some scenarios it may be rather practical. I won’t discuss the pros and cons of this further but do consider that Microsoft would rather you coerce users socially to make their own decision to join the ‘correct’ groups rather than programmatically deciding for them.
Regardless of that, I’m going to provide an example which, upon page load, joins the current O365 user’s Yammer identity to a Yammer group based upon their SharePoint user profile. I’d like to point out that if Yammer Embed is present on the page and is enabled with Single Sign On, then the authentication piece can be hidden entirely from the user. I am currently unaware of how to achieve SSO with Yammer from SharePoint Online without piggy-backing on Yammer Embed, although I haven’t looked in earnest so I suspect it’s achievable without too much effort.
A few notes on the code:
Add the first code snippet to a page with a Script Editor web part. It calls the initialisation code; any configuration can be provided here and modified once live
The second code snippet contains all the logic. This can be included in the page in any manner you wish, but you must ensure it has loaded prior to running the init function
In order to use the Yammer SDK you must register a Yammer app on the target network and provide the client ID as the data-app-id attribute on the script element which includes it
Each user must authorise the Yammer app, just once, before it can act on their behalf. I have implemented this as a status message, an example of which can be seen in the image below
The code references a ‘hut Id’, which is just a value stored in the user’s profile and which is used to map a user to a Yammer group
I use local storage to prevent the code executing more often than every 24 hours. This has been commented out for clarity; however, I would recommend re-instating functionality such as this
The experience of signing into Yammer from SharePoint is different depending on whether SharePoint is hosted on-premises or online. Only when online is the same identity used and an SSO experience achievable. In contrast, on-prem, the disconnect between O365 and Yammer credentials allows users to provide credentials for any Yammer user in any Yammer network rather than being restricted to the associated identity
And finally, the code:
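The full scripts have not survived extraction, but the following hedged sketch shows the core of the logic. It assumes the Yammer SDK is on the page with your app’s client ID in its data-app-id attribute, and getHutGroupId() stands in for whatever maps the profile value to a Yammer group ID:

```javascript
function joinMappedYammerGroup() {
    yam.getLoginStatus(function (response) {
        if (!response.authResponse) {
            // Not yet authorised: surface the status message described above,
            // from which the user can trigger yam.platform.login()
            return;
        }
        yam.platform.request({
            url: 'group_memberships.json', // POSTing here joins the current user to the group
            method: 'POST',
            data: { group_id: getHutGroupId() },
            success: function () { console.log('Joined the mapped Yammer group'); },
            error: function (error) { console.log('Group join failed', error); }
        });
    });
}
```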
Finally, for completeness, here is the settings object which I pass to Yammer Embed to achieve SSO with Yammer from SharePoint Online. I find that in practice, anywhere I would want to run the above code I also have a feed of some sort that is appropriate to display. If this is not the case for you, hiding the feed with display:none will achieve the same result as long as the width of the Yammer Embed is equal to or greater than 400px. Note that this is *not* required; however, without it the user may be prompted to provide their Yammer credentials.
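A hedged example of such a settings object (the network, feed ID and container values are placeholders):

```javascript
yam.connect.embedFeed({
    container: '#embedded-feed',          // an element at least 400px wide, per the note above
    network: 'contoso.onmicrosoft.com',   // your Yammer network permalink
    feedType: 'group',
    feedId: '1234567',
    config: {
        use_sso: true,  // the important part: single sign-on from Office 365
        header: false,
        footer: false
    }
});
```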
The SharePoint REST API is touted as being the tool to provide inter-platform integration with SharePoint Online. However, outside of .NET the authentication piece is not so straightforward. App authentication solves this issue for registered apps but I want to show how remote user authentication can be achieved, regardless of platform.
In a .NET environment please refer to the ADAL library for authentication rather than writing it yourself.
The goal of this post is to provide examples of the HTTP requests which need to be made in order to authenticate with SharePoint Online. It then provides an example of using the same technique to upload a document and update metadata, just to prove it all works 🙂
I wrote about using the SharePoint REST API here (and background here, and here).
First we must provide the SharePoint Online Security Token Service with the username and password of a user with Contribute access to the Roster Data library, along with the URL at which we want access.
This is done by POSTing the following XML as the request body to: https://login.microsoftonline.com/extSTS.srf
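The image of the request body has not survived, so here is a reconstruction based on the widely published remote-authentication envelope (substitute the username, password and your SharePoint URL):

```xml
<s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
            xmlns:a="http://www.w3.org/2005/08/addressing"
            xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
  <s:Header>
    <a:Action s:mustUnderstand="1">http://schemas.xmlsoap.org/ws/2005/02/trust/RST/Issue</a:Action>
    <a:ReplyTo>
      <a:Address>http://www.w3.org/2005/08/addressing/anonymous</a:Address>
    </a:ReplyTo>
    <a:To s:mustUnderstand="1">https://login.microsoftonline.com/extSTS.srf</a:To>
    <o:Security s:mustUnderstand="1"
                xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
      <o:UsernameToken>
        <o:Username>[username]</o:Username>
        <o:Password>[password]</o:Password>
      </o:UsernameToken>
    </o:Security>
  </s:Header>
  <s:Body>
    <t:RequestSecurityToken xmlns:t="http://schemas.xmlsoap.org/ws/2005/02/trust">
      <wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
        <a:EndpointReference>
          <a:Address>https://yourdomain.sharepoint.com</a:Address>
        </a:EndpointReference>
      </wsp:AppliesTo>
      <t:KeyType>http://schemas.xmlsoap.org/ws/2005/05/identity/NoProofKey</t:KeyType>
      <t:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</t:RequestType>
      <t:TokenType>urn:oasis:names:tc:SAML:1.0:assertion</t:TokenType>
    </t:RequestSecurityToken>
  </s:Body>
</s:Envelope>
```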
The response from the request includes the security token needed to get the access token. It is the value which has been struck out in orange in the image below.
Get the access token
Once the security token has been retrieved it must be used to fetch the access token. This can be done by POSTing to the following URL with the security token as the request body: https://yourdomain.sharepoint.com/_forms/default.aspx?wa=wsignin1.0
The response from this request includes a couple of cookies which must be passed as headers with all future requests. They are marked with the ‘Set-Cookie’ header. We need the ones beginning with rtFa= and FedAuth=. They can be seen in the below image of the response headers.
Get the request digest
The request digest is a SharePoint security feature that ensures any update requests are coming from a single session. It must also be included with any POST requests.
The request digest is fetched by POSTing to: https://yourdomain.sharepoint.com/_api/contextinfo
The access token cookies must be included as Cookie headers with the request as shown in the image below.
The response from the request will include the request digest in the XML response, as in the image below. The entire contents of the FormDigestValue tag will be required, including the date time portion and timezone offset (-0000).
Upload a document with metadata
Upload the document
Now that we have all the authentication headers we can make update calls into SharePoint Online as the user whose credentials we originally supplied when fetching the security token.
In order to upload a document, perform a POST request against the Files collection of the target library’s root folder: https://yourdomain.sharepoint.com/subweb/_api/web/lists/getbytitle('list name')/RootFolder/Files/Add(url='filename.ext',overwrite=true)
A number of headers must be sent with the request, including the access token cookies, the request digest (X-RequestDigest) and the accept header, as shown in the image below. The body of the request must contain the content of the document being uploaded.
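In textual form the request looks something like this (all values are placeholders; ‘Roster Data’ is the example library used throughout):

```http
POST https://yourdomain.sharepoint.com/subweb/_api/web/lists/getbytitle('Roster Data')/RootFolder/Files/Add(url='roster.xlsx',overwrite=true) HTTP/1.1
Cookie: rtFa=<rtFa value>; FedAuth=<FedAuth value>
X-RequestDigest: <FormDigestValue, including the timestamp and timezone offset>
Accept: application/json;odata=verbose

<binary content of the document>
```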
The response of this request contains some minimal metadata about the file and can be safely ignored. However, for completeness here it is.
The unique ID property could be used to fetch the document in order to perform metadata updates, rather than the URL as is done in the following example.
Update document metadata
The final step which needs to take place is to update the document in SharePoint with the relevant metadata.
This can be done with yet another POST request, this time to a URI like the following (addressing the file’s list item by URL beneath the list’s root folder): https://yourdomain.sharepoint.com/subweb/_api/web/lists/getbytitle('listTitle')/RootFolder/Files('filename.ext')/ListItemAllFields
All the headers sent with the previous request must be sent with this request as well. The request body is a JSON object which defines the metadata fields to be updated. The fieldname and fieldValue properties must be updated as required. Note that the fieldname property must be equal to the field’s internal name, not its display name. An example of this is in the image below.
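In textual form, a hedged example (the __metadata type name is generated from your list’s name, and FieldInternalName stands in for the internal name of the field being updated):

```http
POST <the URI above> HTTP/1.1
Cookie: rtFa=<rtFa value>; FedAuth=<FedAuth value>
X-RequestDigest: <FormDigestValue>
Accept: application/json;odata=verbose
Content-Type: application/json;odata=verbose
X-HTTP-Method: MERGE
IF-MATCH: *

{ "__metadata": { "type": "SP.Data.Roster_x0020_DataItem" }, "FieldInternalName": "field value" }
```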
The response from this request provides success notification for each individual field update as shown below.
So, this should now be enough to write an application in any server-side language which supports web requests and have it work against SharePoint Online. I’d love to see some implementations of this, please comment if you’ve done it.
I’d like to acknowledge the following posts as they were invaluable references:
I have been developing a console app that utilises the SharePoint C# CSOM to deploy a solution to SharePoint Online (a.k.a Office 365 SharePoint). The solution involves more than just a wsp (although it has one of those too). I have encountered a few difficulties during this process and this post will discuss them:
(Re)creating a site collection
Importing a large-ish taxonomy
Uploading and installing a sandboxed solution (that contain only declarative elements)
Hooking up of taxonomy and (root site) lookup columns
Pre-creating a number of sites with specific features enabled (including the root site)
Before I go any further, for those of you reading this before doing something similar yourselves, please be aware of three constraints which caught me by surprise:
You can’t leverage the same import taxonomy function that is available in Term store management. If you already have CSV files in that format you will need some custom code (I have an example later on) or you may want to import from a more robust XML formatted document
The CSOM does not support uploading or activating sandboxed solutions! However, there is a CodePlex project that assists with this. I also include the dll later in the post that I have rebuilt with references to the latest v16 Microsoft.SharePoint.Client dlls.
The CSOM does not support activating web scoped features! You can activate site scoped features but not web scoped. You need to use web templates to achieve this. Again, I will cover this in some more depth later on.
Deleting and recreating a site collection
The initial step of the deployment process involves creating a new site collection (having deleted it first as required). In order to perform actions at this (tenant) scope you cannot create your client context in the usual manner, with a reference to the site collection: it is yet to exist, and the site collection delete and recycle bin operations require the tenant context too. Instead you must create the client context passing in the tenant admin site URL.
This is the one that looks like this: https://<tenant>-admin.sharepoint.com
You can then create a Microsoft.Online.SharePoint.TenantAdministration.Tenant object by passing the ‘admin’ client context to its constructor. This object requires a reference to the Microsoft.Online.SharePoint.Client.Tenant assembly which is available by downloading and installing the SharePoint Server 2013 Client Components SDK. The assembly can then be found here: C:\Program Files\SharePoint Client Components\16.0\Assemblies
The tenant object provides the methods required to perform the create and delete site collection actions. This process involves a lot of waiting about for deletion to complete, and then provisioning to complete. Unfortunately you can’t continue with other actions until this has occurred. I found this to take upwards of three minutes.
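A hedged sketch of the create side of this (the URL, credentials and site properties are placeholders; deletion via tenant.RemoveSite and tenant.RemoveDeletedSite follows the same load-and-poll pattern):

```csharp
using System.Security;
using Microsoft.Online.SharePoint.TenantAdministration;
using Microsoft.SharePoint.Client;

using (var adminCtx = new ClientContext("https://contoso-admin.sharepoint.com"))
{
    var password = new SecureString();
    foreach (var c in "password") password.AppendChar(c); // demo only
    adminCtx.Credentials = new SharePointOnlineCredentials("admin@contoso.onmicrosoft.com", password);

    var tenant = new Tenant(adminCtx);
    var operation = tenant.CreateSite(new SiteCreationProperties
    {
        Url = "https://contoso.sharepoint.com/sites/mysolution",
        Title = "My Solution",
        Owner = "admin@contoso.onmicrosoft.com",
        Template = "STS#0",
        StorageMaximumLevel = 1000,
        UserCodeMaximumLevel = 300 // resource quota for the sandboxed solution
    });
    adminCtx.Load(operation, o => o.IsComplete);
    adminCtx.ExecuteQuery();

    // Poll until provisioning completes; expect to wait several minutes
    while (!operation.IsComplete)
    {
        System.Threading.Thread.Sleep(30000);
        operation.RefreshLoad();
        adminCtx.ExecuteQuery();
    }
}
```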
Importing taxonomy
As mentioned above, you can’t pass the CSV files used by Term store management directly to the CSOM and have it import them all for you. In my scenario we had already developed a lot (dozens) of term sets in the form of these CSV files during a discovery phase, so it was important that I could support the import of taxonomy in this form. I wrote code to support the import of these files, but only to the point that it meets my immediate requirements. Please use the following as a rough guide only, as it is not fully featured (or tested beyond the happy path).
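Here is a trimmed-down sketch of that code. It is hedged: it supports only the happy path of the Term Store Manager CSV format (the naive comma split will break on quoted commas), and the LCID is fixed at 1033:

```csharp
using System;
using System.Linq;
using Microsoft.SharePoint.Client;
using Microsoft.SharePoint.Client.Taxonomy;

public static void ImportTermSetCsv(ClientContext ctx, TermGroup group, string csvPath)
{
    TermSet termSet = null;
    var parents = new Term[7];       // last created term at each level
    var parentNames = new string[7]; // and its name, to avoid duplicates

    foreach (var line in System.IO.File.ReadLines(csvPath).Skip(1)) // skip the header row
    {
        var cols = line.Split(',').Select(c => c.Trim('"')).ToArray();
        if (termSet == null)
        {
            // the term set name appears only on the first data row
            termSet = group.CreateTermSet(cols[0], Guid.NewGuid(), 1033);
            ctx.ExecuteQuery();
        }
        // columns 5..11 hold the Level 1..7 term path for this row
        for (var level = 0; level < 7; level++)
        {
            var name = cols.Length > 5 + level ? cols[5 + level] : string.Empty;
            if (string.IsNullOrEmpty(name)) break;
            if (name == parentNames[level]) continue; // ancestor created by a previous row

            var parent = level == 0 ? (TermSetItem)termSet : parents[level - 1];
            parents[level] = parent.CreateTerm(name, 1033, Guid.NewGuid());
            parentNames[level] = name;
            for (var below = level + 1; below < 7; below++) parentNames[below] = null;
        }
        ctx.ExecuteQuery();
    }
}
```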
Uploading and activating the sandboxed solution
There is a CodePlex project that provides this functionality (as well as some authentication utilities), which I mentioned above. It performs web requests against the UI, and I am very glad someone else has already done this before me! It was originally created when SharePoint 2010 was present in the cloud and accordingly references the v14 Microsoft.SharePoint.Client assemblies. If you don’t mind maintaining references to both the v14 and v16 assemblies then this might be fine. I have instead rebuilt the source, having replaced the references with the v16 equivalents.
FYI: v14 is SharePoint 2010, v15 is SharePoint 2013, v16 is SharePoint 2013 Online specific
Activating web features
Actually, there isn’t a lot more to say here other than that you must use web templates if you need to create sites with features enabled as part of the deployment process, as it can’t (currently) be done using the CSOM. I would recommend using the web template for nothing other than activating features, and putting all other declarative elements in a feature. This will provide the best upgrade experience in the future.
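For illustration, feature activation in a web template boils down to the WebFeatures element of the template’s ONET.xml; the GUIDs below are placeholders apart from the well-known Team Collaboration Lists feature:

```xml
<Configuration ID="0" Name="MySiteTemplate">
  <!-- SiteFeatures apply at the site collection; WebFeatures activate
       on every web created from this template -->
  <WebFeatures>
    <Feature ID="00BFEA71-E717-4E80-AA17-D0C71B360101" /> <!-- Team Collaboration Lists -->
    <Feature ID="xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" /> <!-- your custom web scoped feature -->
  </WebFeatures>
</Configuration>
```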
Hooking up taxonomy columns
The best place to start is almost certainly a reference to Chris O’Brien’s blog on this here. As I have the luxury of being able to run further deployment code after uploading/activating the sandboxed solution, I opted to avoid having to rebuild the solution for various environments and instead hook up the columns using the CSOM and a mapping. There is a catch with this though.
If your list instance is built from a list template which defines the managed metadata columns then updating the site column via the CSOM fails to push down the new SspID. To get around this, DO NOT include managed metadata column definitions as part of the list definition (in the fields element). When you run the CSOM to update the site columns it will update the content type and add the column to the list instance with the correct SspID.
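A hedged sketch of the hook-up code (the field, group and term set names are placeholders):

```csharp
using Microsoft.SharePoint.Client;
using Microsoft.SharePoint.Client.Taxonomy;

var field = ctx.Site.RootWeb.Fields.GetByInternalNameOrTitle("MyTaxonomyField");
var taxonomyField = ctx.CastTo<TaxonomyField>(field);

var session = TaxonomySession.GetTaxonomySession(ctx);
var termStore = session.GetDefaultSiteCollectionTermStore();
var termSet = termStore.Groups.GetByName("My Group").TermSets.GetByName("My Term Set");
ctx.Load(termStore, s => s.Id);
ctx.Load(termSet, t => t.Id);
ctx.ExecuteQuery();

taxonomyField.SspId = termStore.Id;   // the SspID that must reach the list instance
taxonomyField.TermSetId = termSet.Id;
taxonomyField.UpdateAndPushChanges(true); // push down to content types and lists
ctx.ExecuteQuery();
```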
Good luck building your SharePoint Online CSOM deployment framework!
When running the SP.UI.Status.addStatus command upon page load, the status message was being hidden almost immediately in Chrome but worked as expected in IE.
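The offending code looked something like this (a hedged reconstruction; the message text is illustrative):

```javascript
ExecuteOrDelayUntilScriptLoaded(function () {
    var statusId = SP.UI.Status.addStatus('Reminder', 'Please approve the pending items');
    SP.UI.Status.setStatusPriColor(statusId, 'yellow');
}, 'sp.js');
```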
This worked as expected in IE; in Chrome, however, the status message would appear for a split second and then disappear.
The first piece of the puzzle: What is happening?
After getting deep with Chrome DevTools I found the script responsible for hiding the status message. SharePoint utilises the document.onreadystatechange handler to run a function called fnRemoveAllStatus. I think you can guess what it achieves. Why this is being run at this point is beyond me. Importantly, I don’t want to prevent it running in case it serves a purpose that I’m unaware of.
The second piece of the puzzle: How does that work?
If a function is assigned to document.onreadystatechange it will be run as many as four times (depending on when in the cycle the assignment occurs), once for each transition through the sequence of states: uninitialized → loading → loaded → interactive → complete.
Good practice would have the function check for the current state and only act once, when in the correct state. Naturally this logic is absent from the fnRemoveAllStatus function.
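For contrast, the good-practice version looks like this:

```javascript
document.onreadystatechange = function () {
    if (document.readyState !== 'complete') {
        return; // ignore the intermediate transitions
    }
    // act exactly once, in the intended state
};
```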
The third piece of the puzzle: What is running when? $(document).ready vs $(window).load vs ExecuteOrDelayUntilScriptLoaded
The difference between these options in regards to what we have just discussed is that $(document).ready and ExecuteOrDelayUntilScriptLoaded run when document.readyState is ‘interactive’ whereas $(window).load runs when document.readyState is ‘complete’.
Laying the final puzzle piece: Why’s it working in IE but not Chrome?
When running the code in IE, rather than executing the script block during the ‘interactive’ readyState it was being executed after transitioning to the ‘complete’ readyState, which meant that it was running after the fnRemoveAllStatus call, as we desire. I believe this happens because the sp.js file is being added via a script link control with ‘LoadAfterUI’ set to true, which is only understood by IE. I haven’t verified this last point; if I am wrong please leave a comment about it below.
So, the solution is rather simple once you have this understanding. Wrap the command in $(window).load to ensure it occurs after the fnRemoveAllStatus method is called during the transition into the ‘complete’ state. Like this:
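A sketch of the fix, assuming jQuery is available (the message text is illustrative):

```javascript
$(window).load(function () {
    // By now fnRemoveAllStatus has already run, so the message survives in Chrome
    ExecuteOrDelayUntilScriptLoaded(function () {
        var statusId = SP.UI.Status.addStatus('Reminder', 'Please approve the pending items');
        SP.UI.Status.setStatusPriColor(statusId, 'yellow');
    }, 'sp.js');
});
```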
NB: ExecuteOrDelayUntilScriptLoaded will also load scripts which are marked to load on-demand. If you are testing in Chrome you may begin to believe it unnecessary to use ExecuteOrDelayUntilScriptLoaded when using $(window).load, as all scripts have loaded by then. This is true for browsers other than IE. In IE we must use this function to ensure that the script is loaded at all.
As a final note I’d like to add that, apart from this specific case, I would suggest using $(document).ready rather than $(window).load as it will mean that the page loads faster (unless of course your script requires all resources to be loaded before acting, e.g. you are working with images of undefined sizes).
I have had a hard time creating data connections with an Access 2013 App database. After a good few hours spent scouring the internet for a solution, and a good few more hours uncovering a “solution” that is underwhelming at best, I am happy to share with you my findings. I really hope that someone will leave a comment with a better solution at some point in the future.
This blog post will provide a step-by-step guide on how to achieve a data connection from an Excel workbook (which can be hosted in SharePoint) to the SQL database behind an Access 2013 App. Once this is achieved, a good BI developer should have no trouble visualising the data captured via the Access App with the help of pivot tables, slicing and graphing.
The first step is to identify the server address and database to connect to along with the credentials required to authenticate.
This can be done by navigating to the Access App, clicking the ‘settings’ icon, then clicking ‘Customize in Access’
Download the .accdw file and open it to launch Access
Click ‘FILE’ in the ribbon
In the drop-down menu ensure that ‘From Any Location’ and ‘Enable Read-Only Connection’ are highlighted with pink squares. If not, click them
Click ‘View Read-Only Connection Information’
Take note of Server, Database, UserName, and Password from this dialog as you will need them all later
Next we use this information to create the data connection.
Create a new external data connection ‘From Data Connection Wizard’
Click ‘Other/Advanced’, then ‘Next’
Click ‘SQL Server Native Client 11.0’, then ‘Next’
On the ‘Data Link Properties’ dialog, uncheck the ‘Blank Password’ box and check the ‘Allow saving password’ box, then input the server name, user name, password, and database
Test the connection, you should see a dialog box with ‘Test Connection Succeeded’
Note that it is when you attempt to make a data connection without providing the database that you get the following error, which I bet led you to this post:
You can now click ‘Ok’
Uncheck the ‘Use Trusted Connection’ checkbox and replace the existing password with the correct one. Click ‘Ok’
Select a table and click ‘Next’. You can get fancy here later, let’s just get it working first.
The data connection will fail with the following error:
The final frustration!
On the next dialog, uncheck the ‘Use Trusted Connection’ checkbox and replace the existing password with the correct one. Click ‘Ok’.
The second time it works. This process of providing the connection credentials twice is required not only upon the creation of the connection but also every time the data needs to be refreshed. It makes for a rather poor UX and it is a pretty awful scenario to have to explain to a client.
I really want to believe that there is a setting (most probably under the ‘All’ tab on the ‘Data Link Properties’ dialog) that will work around this issue, however I am yet to find it. Please leave a comment if you find a solution to this issue.
SharePoint allows developers to create receivers for the EmailReceived event which occurs when a list receives an email. I have a use case which requires me to leverage this event in order to forward incoming email to a set of users according to a number of business specific rules. To achieve this I must create a custom email message object (we are using aspNetEmail to send email) from the message object received in the event receiver. I need to be able to extract all of the parts from the SPEmailMessage to create this new object. The SPEmailMessage object is pretty easy to work with; the attachments are in the attachments collection, the subject is in the subject property – you get the idea. However, there is one ‘property’ that isn’t as trivial to extract from the object: a meeting invite.
I will explain how to extract the meeting invite below, but first let me provide a basic overview of how an email is stored in eml format. The eml format is relevant because the SPEmailMessage can be constructed from an eml stream, and also because when SharePoint is configured to attach incoming emails to discussion items it does so using eml. The first lines of an email in eml format are the email headers (think properties). These are simply key value pairs and include things like ‘to’, ‘from’, ‘date’, ‘subject’ and many other less obvious properties including threading info. Then come the MIME body parts. These should represent the ‘same’ content in different formats (MIME types). Typically this includes a text/plain block and a text/html block. A client which supports HTML will render the latter body part, where otherwise it might render the plain text body part as the email content. Finally, attachments are listed out with their own set of headers and the binary content (commonly represented as a base64 encoded string).
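A simplified skeleton of such a message follows; real messages carry many more headers, and the parts are usually encoded (base64 or quoted-printable):

```text
To: roster@contoso.com
From: organiser@contoso.com
Subject: Project kick-off
Content-Type: multipart/alternative; boundary="boundary1"

--boundary1
Content-Type: text/plain; charset="utf-8"

Plain text version of the invite body
--boundary1
Content-Type: text/html; charset="utf-8"

<html><body>HTML version of the invite body</body></html>
--boundary1
Content-Type: text/calendar; method=REQUEST

BEGIN:VCALENDAR
BEGIN:VEVENT
SUMMARY:Project kick-off
LOCATION:Meeting Room 1
DTSTART:20150601T090000Z
DTEND:20150601T100000Z
UID:040000008200E00074C5B7101A82E008...
END:VEVENT
END:VCALENDAR
--boundary1--
```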
When a meeting invite is sent to a SharePoint list without attachments, the meeting invite itself can be found in the attachments collection of the SPEmailMessage object. But don’t be fooled. Although it is present in this situation, if you send the same meeting invite with an attached document then – sad face – the meeting invite is not in the attachments collection (the attached documents will be). Nor can the invite be found in any of the public properties on the email object. It’s not that strange that the meeting invite isn’t present in the attachments collection; it is strange that it can ever be found there. I say this because if we consider the eml format, a meeting invite is stored as another MIME body part (of type text/calendar) and not as an attachment at all.
Eventually, after much investigation and reflection, we discovered a way to read the mime body parts directly from the email using only Microsoft libraries with the help of reflection. Once we have the meeting invite as a memory stream we parse it into a dictionary of string properties. The dictionary contains keys such as “LOCATION”, “SUMMARY”, “DESCRIPTION”, “DTSTART”, “DTEND” and “UID”, along with any other data stored as part of the invite. See the example code below:
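The reflection against SharePoint’s internal members is omitted below, as the member names are version specific; the sketch picks up at the point where the text/calendar body part has been obtained as a stream, and parses it into the dictionary described above:

```csharp
using System;
using System.Collections.Generic;
using System.IO;

public static Dictionary<string, string> ParseMeetingInvite(Stream calendarPart)
{
    var properties = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
    using (var reader = new StreamReader(calendarPart))
    {
        string currentKey = null;
        string line;
        while ((line = reader.ReadLine()) != null)
        {
            // iCalendar folds long lines; continuations begin with whitespace
            if (currentKey != null && (line.StartsWith(" ") || line.StartsWith("\t")))
            {
                properties[currentKey] += line.Substring(1);
                continue;
            }
            var separator = line.IndexOf(':');
            if (separator < 0) continue;
            // strip parameters, e.g. DTSTART;TZID=GMT Standard Time:20150601T090000
            currentKey = line.Substring(0, separator).Split(';')[0];
            properties[currentKey] = line.Substring(separator + 1);
        }
    }
    return properties; // keys include LOCATION, SUMMARY, DESCRIPTION, DTSTART, DTEND, UID
}
```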
Finally, I’d like to note that we only have requirements to support Outlook clients at this point so please consider that your mileage may vary when you get it out into the real world. Good luck.