
Optimizing User Experience of Apps for SharePoint 2013


The User Experience Challenge

Just a decade ago iframes were frowned upon on the web. Not all browsers supported them and some organizations even applied browser policies to block iframes. Fast-forward to today and iframes are a staple of web development and the cornerstone of both the Facebook and SharePoint 2013 app architecture. But that doesn't mean iframes don't come with some unique challenges. One challenge I've been particularly concerned about is the user experience (UX) of apps for SharePoint and how apps from the Office Marketplace appear in existing SharePoint sites.

Thankfully, MSDN has fantastic resources and guidance on UX design for apps in SharePoint 2013, which should be considered the definitive guide for designing SharePoint app UX. In this post I will explore some of these resources and provide samples and lessons learned for optimizing the user experience of apps for SharePoint 2013.

SharePoint Apps 101

To understand the UX challenge, it helps to have some basic understanding of SharePoint’s app architecture. Apps for SharePoint deliver additional capabilities, but run outside of the host SharePoint site. These apps can be SharePoint-hosted as HTML/client-side script or cloud-hosted anywhere in the world using any web technology (ex: PHP from a Private Cloud web server on-premises, ASP.NET MVC from Azure, Ruby hosted with a 3rd party ISV, etc). The important point is that in all cases, the app’s code runs outside of the SharePoint site collection that is consuming it. Yes, this is even the case with SharePoint-hosted apps, which get hosted in a different domain in the farm. For more information on app hosting options, see Hosting options for apps for SharePoint.

Apps for SharePoint can be displayed in two basic ways.  The first option is to display app pages in full-screen (full content area of a browser), which is the only required experience an app must provide. In this configuration, apps are launched from links or custom actions in SharePoint, which redirect to the remote app host along with some contextual information requested by the app developer. When this redirection occurs, users are taken completely out of the site and to the app (as the browser URL will indicate).  The second delivery option is to display the app pages in an iframe via SharePoint dialog or the new app part.  SharePoint dialogs are similar to those in SharePoint 2010 and can be leveraged in an app through a custom action.  App parts are very similar to Page Viewer web parts, but have a mechanism to dynamically pass contextual information and app part properties to the page they display. With SharePoint dialogs and app parts, users stay in their SharePoint site, but have iframe windows into app pages.

Regular SharePoint Page
App Page Displayed in Full-Screen
App Page Displayed in a SharePoint Dialog
App Pages Displayed in App Parts

Now back to the UX challenge. Imagine a SharePoint 2013 site with a number of app parts. Without meticulous design, each of these app parts could appear like completely separate experiences. With the introduction of the Office Marketplace, app developers will be developing apps for thousands of tenants, each with their own unique style/branding. So how does an app developer write apps that take on the appearance of a tenant's environment? The answer is a little different for each display option:

Pages Displayed in Full-Screen

For app pages displayed in full-screen, SharePoint redirects users away from the SharePoint site and to the location where the app is hosted (along with some contextual information passed through URL parameters and/or POST messages). When created in Visual Studio, SharePoint-hosted pages reference the host site styles by default, while cloud-hosted pages start relatively un-styled.  Without the developer taking an action, it will seem that SharePoint-hosted pages automatically fit in with the host site while cloud-hosted pages have nothing from the host site to style them. Luckily, app developers for cloud-hosted apps can inherit the appearance of a specified SharePoint site by wrapping app pages in the SharePoint Client Chrome Control. The Chrome Control adds header html (title, breadcrumb, icon, etc.) and injects styles from the referenced SharePoint site for styling other html elements. Here are the high-level steps to implement the Chrome Control:

Step 1: Configure the query string in the AppManifest.xml to load the correct default app page and pass the appropriate contextual information through the URL

Step 2: Add a div placeholder element to the top of all app pages for hosting the SharePoint Client Chrome Control

Step 3: Include script on app pages to reference the SP.UI.Controls.js script from the host site and use it to load the Chrome Control into the placeholder from step #2

Pages Displayed in iframe Elements

App parts or "client web parts" display app pages in iframes on the host SharePoint site. Although similar to a Page Viewer web part, app parts have the added ability for custom web part properties.  These properties get passed to the app part's page via URL parameters (along with additional contextual information the developer requests). Since app parts and pages displayed in the SharePoint dialog do not display full-screen, they should NOT leverage the SharePoint Client Chrome Control like pages displayed in full-screen. Instead, SharePoint styles can be incorporated by referencing style resources from the host SharePoint site. Here are the high-level steps to implement SharePoint style resources in an app part page:

Step 1: Add a Client Web Part to your SharePoint 2013 app project

Step 2: Configure the client web part's Content Src in the Elements.xml to reference the correct page and pass the appropriate contextual information through the URL

Step 3: Include script on app part pages to inject a new style sheet link element into the head of the page referencing the /_layouts/15/defaultcss.ashx resource from the host SharePoint site

Here are the high-level steps to implement SharePoint style resources in a page displayed in the SharePoint dialog:

Step 1: Add a UI Custom Action (Host Web) to your SharePoint 2013 app project

Step 2: Configure the UrlAction in the Elements.xml to reference the correct page, pass the appropriate contextual information through the URL, and set HostWebDialog="true" on the Custom Action (HostWebDialogHeight and HostWebDialogWidth should also be set)

Step 3: Include script on the dialog pages to inject a new style sheet link element into the head of the page referencing the /_layouts/15/defaultcss.ashx resource from the host SharePoint site

So I've Referenced SharePoint Styles…Now What?

Importing SharePoint style resources from the referenced site is only half the battle. App developers need to be aware of SharePoint style classes in order to take on the appearance of the SharePoint site. The MSDN documentation on Apps for SharePoint design guidelines has several comprehensive tables detailing common SharePoint style classes. However, a good browser developer tool (such as Internet Explorer's F12 Developer Tools) can help identify additional classes.

Resizing app parts can be another challenge, particularly in highly dynamic app pages. Resizing is controlled by the app part's height and width web part properties, which are ultimately applied to the height and width of the iframe that gets rendered. These properties can be configured to a default size by the developer in the Elements.xml, but can be changed by a site designer when placed on a page. Unfortunately, app parts do not auto-size as the content contained in them grows. Additionally, an app page cannot walk the DOM hierarchy outside the page to adjust itself (this is blocked across domains). However, SharePoint 2013 implements the HTML5/javascript postMessage framework to achieve resizing from an app part page. window.postMessage enables safe cross-domain communication between a parent page and iframe page, provided they are listening for messages.  Here is SharePoint's implementation of postMessage for app part pages:

//****************   IMPORTANT   ****************
//postMessage resizing has a bug in the SharePoint 2013 Preview
//This is a know bug that will be fixed in the final release
window.parent.postMessage('<message senderId={your ID}>resize(120, 300)</message>', this.location.hostname);

IMPORTANT NOTE: At the time of this post, postMessage resizing had a bug in the SharePoint 2013 Preview.  This is a known bug that will be fixed in the final release.  Later in the article, I will provide additional details of postMessage communication and a sample solution.  Here are some general recommendations for resizing app part pages:

  • Microsoft recommends sizing app parts in increments of 30px.  If all app developers follow this guideline, app parts will dovetail more smoothly into a tenant's page (there is nothing worse than two web parts displayed side-by-side but 2-3px off...increments of 30px greatly help this).  A sketch of this sizing logic follows this list.
  • Width is typically easier to deal with than height, since inadequate width will usually wrap elements and thus increase the height required to display all page content. Height is also challenged by dynamic elements such as Repeaters and GridViews that grow vertically. As such, I recommend avoiding overuse of no-wrap styling.
  • Regardless of resizing capabilities, you should try designing to a predictable size. This isn't a foolproof approach, since styles can have a huge impact on rendering height (ex: padding and font size). However, careful design planning can get you close. I recommend constraining dynamic content, such as by implementing paging on GridView elements. This, combined with the ClientWebPart configuration for the DefaultHeight and DefaultWidth properties, can deliver a great result in most cases.
  • Try to minimize scrolling containers. SharePoint minimally uses these and you should too. They make fixed-size design easy but can quickly make a page look like an iframe mess.
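
To make the 30px guideline concrete, here is a minimal sketch of my own (assuming the Preview-era message format shown above) that rounds a measured content height up to the nearest 30px increment before requesting a resize:

function resizeToContent(senderId) {
    //measure the rendered content height of the app part page
    var contentHeight = document.body.scrollHeight;

    //round up to the nearest 30px increment per the sizing guideline
    var height = Math.ceil(contentHeight / 30) * 30;

    //request the resize from the parent page; the (width, height) argument
    //order is assumed from the sample above, and '*' is used for brevity
    window.parent.postMessage('<message senderId=' + senderId +
        '>resize(510, ' + height + ')</message>', '*');
}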

Getting Started

If you haven’t already figured it out, apps get most contextual information about the host SharePoint site through URL parameters (user information is also available through other methods).  MSDN has some great documentation on URL strings and tokens in apps for SharePoint that details the dynamic URL parameter options.  In addition to these parameters, developers can pass their own.  In my case, I passed a DisplayType parameter so my script could be reused and have a method to determine if the page was being displayed in full-screen or in an iframe.  This could be especially helpful if your app contains a page that is leveraged across both display options.  All the samples below were developed using Autohosted apps.  Autohosted apps are deployed in Azure, which emphasizes the separation between SharePoint and my app (it also enables me to write server-side code).

Step 1: Provision a Developer Site for debugging your app (this must be a site collection using the “Developer Site” template).  I’m debugging my app in Office 365, so I provisioned my Developer Site in the Office 365 Admin Portal:

Step 2: Create new project in Visual Studio 2012 using the App for SharePoint 2013 template:

Step 3: Select the site you created in step 1 for debugging and leave the default hosting option of Autohosted and click Finish.  You may be prompted to authenticate to the SharePoint site:

If you are new to apps for SharePoint 2013, you will notice the solution contains two projects.  The first project is the actual SharePoint app, which will contain nothing but xml manifests/elements to define the contents of the app.  The second project is a web application to host all the markup and code for the app.  When debugging Autohosted apps, SharePoint will direct app requests to this web application running on localhost in IIS Express.  If you deploy the app, it will be provisioned inside Azure.

Solution Explorer of a New App for SharePoint 2013 Project

Creating an App with Full-Screen Display

By default, new SharePoint 2013 app projects already include a page to handle the launch experience, which is always a full page.  This page can be found in the web application project under Pages\Default.aspx.  The new project could immediately be deployed, but let's go a little deeper by adding some additional information and the Chrome Control:

Step 1 – The Manifest: Locate the AppManifest.xml in the SharePoint app project and open it.  Visual Studio provides a great AppManifest editor that launches by default.  Locate the Start page field...this tells SharePoint what the default page is for the app.  The Start page should already point to Default.aspx and have a special {StandardTokens} parameter in the Query string field.  SharePoint will use this to pass contextual information to the app's default page (ex: Site URL, Language, etc.).  Update the Query string field with additional parameters for SPHostTitle={HostTitle}, SPHostLogo={HostLogoUrl}, and DisplayType=FullScreen as seen below:
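
Once updated, the assembled Query string value should look similar to the following (my own reconstruction from the parameters above):

{StandardTokens}&SPHostTitle={HostTitle}&SPHostLogo={HostLogoUrl}&DisplayType=FullScreen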

Step 2 – The Chrome Placeholder: Open the Default.aspx markup page and add a div placeholder for the SharePoint Client Chrome Control.  This should be the first element in the body/form of the html page as seen below:

<body>
   <form id="form1" runat="server">
        <!-- Chrome control placeholder -->
        <div id="divSPChrome"></div>

Step 3 – The Script: Script to load the SharePoint Client Chrome Control can be added directly to the page.  However, I created a script file so I could re-leverage the script on multiple pages.  I also wanted the script to leverage my custom DisplayType URL parameter for reuse across full-screen and iframe page displays.  The script has three basic components:

  1. Document loaded event to check the DisplayType and initialize the appropriate script (/_layouts/15/SP.UI.Controls.js in the case of full-screen pages):

    $(document).ready(
        function () {
            //Get the DisplayType from url parameters
            var displayType = decodeURIComponent(getQueryStringParameter('DisplayType'));

            //Get the URI decoded SharePoint site url from the SPHostUrl parameter.
            var spHostUrl = decodeURIComponent(getQueryStringParameter('SPHostUrl'));

            //Build absolute path to the layouts root with the spHostUrl
            var layoutsRoot = spHostUrl + '/_layouts/15/';

            //Execute the correct script based on the displayType
            if (displayType == 'FullScreen') {
                //Load the SP.UI.Controls.js file to render the App Chrome
               $.getScript(layoutsRoot + 'SP.UI.Controls.js', renderSPChrome);
            }
            else if (displayType == 'iframe') {
                //Create a Link element for the defaultcss.ashx resource
                var linkElement = document.createElement('link');
                linkElement.setAttribute('rel', 'stylesheet');
                linkElement.setAttribute('href', layoutsRoot + 'defaultcss.ashx');

                //Add the linkElement as a child to the head section of the html
                var headElement = document.getElementsByTagName('head');
                headElement[0].appendChild(linkElement);
            }
        });

  2. Callback function for loading the chrome after the appropriate scripts have loaded:

    function renderSPChrome() {
        //Get the host site logo url from the SPHostLogoUrl parameter
        var hostlogourl = decodeURIComponent(getQueryStringParameter('SPHostLogoUrl'));

        //Set the chrome options for launching Help, Account, and Contact pages
        var options = {
            'appIconUrl': hostlogourl,
            'appTitle': document.title,
            'appHelpPageUrl': 'Help.html?' + document.URL.split('?')[1],
            'settingsLinks': [
                {
                    'linkUrl': 'Account.html?' + document.URL.split('?')[1],
                    'displayName': 'Account settings'
                },
                {
                    'linkUrl': 'Contact.html?' + document.URL.split('?')[1],
                    'displayName': 'Contact us'
                }
            ]
        };
       
        //Load the Chrome Control in the divSPChrome element of the page
        var chromeNavigation = new SP.UI.Controls.Navigation('divSPChrome', options);
        chromeNavigation.setVisible(true);
    }

  3. A utility function to pull named parameters from the URL:

    function getQueryStringParameter(urlParameterKey) {
        var params = document.URL.split('?')[1].split('&');
        for (var i = 0; i < params.length; i = i + 1) {
            var singleParam = params[i].split('=');
            if (singleParam[0] == urlParameterKey)
                return singleParam[1];
        }
    }

After referencing the custom UX script, Microsoft AJAX, and jQuery, the head of the markup page should look similar to the following:

<head runat="server">
    <title>My Full-Screen Page</title>
    <script type="text/javascript" src="../Script/MicrosoftAjax.js"></script>
    <script type="text/javascript" src="../Script/jquery-1.7.2.min.js"></script>
    <script type="text/javascript" src="../Script/UXScript.js"></script>
</head>

Step 4 – Testing:  I added text and controls to my page referencing known CSS classes from SharePoint (ex: ms-accentText, ms-listviewtable, etc.).  I then deployed and tested my full-screen app page in three scenarios…with the custom chrome script removed, with the chrome script and the default composed look, and the chrome script with several alternate composed looks.  Here are the results:

Without Chrome Control

Chrome Control with Default Composed Look

Chrome Control with "Sea Monster" Composed Look

Chrome Control with "Nature" Composed Look

Creating an App with App Part Page Display

Step 1 – Adding the App Part: Right-click the SharePoint app project and select Add > New Item to bring up the new item dialog.  Select Client Web Part (Host Web) from the list, name it, and click the Add button:

You should notice that the new client web part added an Elements.xml file to the project but no markup or code.  Remember, the SharePoint app project will contain nothing but xml manifests/elements to define the contents of the app.  All markup and code will live in the web application project that is deployed to the web server (Azure in the case of our Autohosted app).

Step 2 – The Elements.xml: Open the client web part’s Elements.xml file.  This file contains the app part’s configuration for the markup page to render, the default size, and any custom web part properties. I added a Property for useSiteStyles so I can toggle the SharePoint style resource on and off inside my app part.  Next, set the Content Src to the appropriate page from the web application project, including the contextual URL parameters for the useSiteStyles property, {StandardTokens}, and DisplayType=iframe as seen below:

<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
    <ClientWebPart Name="MyClientWebPart"
        Title="My Client WebPart"
        Description="Basic Client WebPart with property to optionally consume styles from the parent SharePoint site"
        DefaultWidth="500"
        DefaultHeight="400">
        <Content Type="html"
            Src="~remoteAppUrl/Pages/MyClientWebPartPage.aspx?useSiteStyles=_useSiteStyles_&amp;{StandardTokens}&amp;DisplayType=iframe" />
        <Properties>
            <Property
                Name="useSiteStyles"
                Type="boolean"
                RequiresDesignerPermission="true"
                DefaultValue="true"
                WebCategory="App Settings"
                WebDisplayName="Import Site Styles">
            </Property>
        </Properties>
    </ClientWebPart>
</Elements>

Step 3 – The Script: If you looked carefully at the full-screen script, you may have noticed a check for DisplayType of “iframe”.  That’s right; the script for importing SharePoint style resources is already in place to support app parts and SharePoint dialogs:


else if (displayType == 'iframe') {
    //Create a Link element for the defaultcss.ashx resource
    var linkElement = document.createElement('link');
    linkElement.setAttribute('rel', 'stylesheet');
    linkElement.setAttribute('href', layoutsRoot + 'defaultcss.ashx');

    //Add the linkElement as a child to the head section of the html
    var headElement = document.getElementsByTagName('head');
    headElement[0].appendChild(linkElement);
}

However, our app part will conditionally use this script based on the useSiteStyles web part property value.  I decided to implement this server-side using the RegisterClientScriptInclude method in my page load event:

//get the useSiteStyles property and register the UXScript.js if the property is true
bool useSiteStyles = Convert.ToBoolean(Page.Request["useSiteStyles"]);
if (useSiteStyles) {
    Page.ClientScript.RegisterClientScriptInclude(typeof(MyClientWebPartPage),
        "MyClientWebPartPageScript", "../Script/UXScript.js");
}

Step 4 – Testing: Like the full-screen app page, I added text and controls to my app part page referencing known CSS classes from SharePoint (ex: ms-accentText, ms-listviewtable, etc.).  Next, I created a two column wiki page and placed my app part in each column side-by-side. The left app part had useSiteStyles set to false and the right app part had this property set to true for a nice side-by-side comparison.  Finally, I tested the app parts in three scenarios…without paging on the GridView, with paging on the GridView, and with several alternate composed looks.  Here are the results:

App Parts without GridView Paging

App Parts with GridView Paging

App Parts with "Sea Monster" Composed Look

App Parts with "Nature" Composed Look

App Part in Older Browser**

**One thing I learned developing this post is that IE now supports transparent windowed elements such as iframes...even cross-domain.  I was startled to see the site's background images showing through the app parts.  To take advantage of this, I explicitly set the background-color of the app part pages to transparent.  Older and non-IE browsers may not have the same results (I tested IE and Chrome and both rendered transparent).  The picture above illustrates the look without a transparent body on the app part pages and what I would expect in older browsers.
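
If you want to replicate this, the app part page's body just needs a transparent background. A one-line sketch in script (equivalent to setting body { background-color: transparent; } in a style sheet):

//let the host page's background show through the app part iframe
document.body.style.backgroundColor = 'transparent';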

Creating an App with SharePoint Dialog Page Display

App pages can also be launched inside of the SharePoint dialog framework using custom actions.  From a UX perspective, the SharePoint dialog is implemented almost identically to an app part.  Both should typically be displayed with SharePoint styles, but without the chrome control.  Functionally, pages launched in the SharePoint dialog tend to differ from app parts in the contextual information passed to the page (ex: selected list or items), how the page is launched (ribbon button, contextual menu, etc.), and how it displays on the screen (in the dialog instead of an element on a SharePoint page).  Here are the steps to deliver a page in the SharePoint dialog: 

Step 1 – Adding the Custom Action: Right-click the SharePoint app project and select Add > New Item to bring up the new item dialog.  Select Custom Action (Host Web) from the list, name it, and click the Add button:

You should notice that the new custom action added an Elements.xml file to the project but no markup or code.  Remember, the SharePoint app project will contain nothing but xml manifests/elements to define the contents of the app.  All markup and code will live in the web application project that is deployed to the web server (Azure in the case of our Autohosted app).

Step 2 – The Element.xml: Open the custom action's Elements.xml file.  This xml is very similar to traditional SharePoint custom actions, but takes advantage of special app URL parameters and has properties for launching the page in the SharePoint dialog (HostWebDialog, HostWebDialogWidth, HostWebDialogHeight).  Also notice my use of DisplayType=iframe so I can reuse my chrome/style script.  If you are new to custom actions, the following adds a "Launch Page" button to the "Manage" group of the "Files" tab in a Document Library:

<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
   
<CustomAction
            Id="LaunchDialogAction"
            RegistrationId="101"
            RegistrationType="List"
            Location="CommandUI.Ribbon"
            Title="Launch Dialog"
            HostWebDialog="true"
            HostWebDialogHeight="420"
            HostWebDialogWidth="510">
        <CommandUIExtension>   
            <CommandUIDefinitions>
                <CommandUIDefinition Location="Ribbon.Documents.Manage.Controls._children">
                    <Button
                        Id="Ribbon.Library.Settings.PropertyViewer"
                        Alt="Launch Dialog"   
                        Sequence="40"
                        Command="Invoke_LaunchDialog"
                        LabelText="Launch Dialog"   
                        TemplateAlias="o1"
                        Image32by32="_layouts/15/images/placeholder32x32.png"
                        Image16by16="_layouts/15/images/placeholder16x16.png"/>
                </CommandUIDefinition>
            </CommandUIDefinitions>
            <CommandUIHandlers>
                <CommandUIHandler
                    Command="Invoke_LaunchDialog"
                    CommandAction="~remoteAppUrl/Pages/MyDialogPage.aspx?useSiteStyles=true&amp;{StandardTokens}&amp;DisplayType=iframe&amp;HostUrl={HostUrl}&amp;Source={Source}&amp;ListURLDir={ListUrlDir}&amp;SelectedListID={SelectedListId}&amp;SelectedItemID={SelectedItemId}"/>
            </CommandUIHandlers>
        </CommandUIExtension>    
    </CustomAction>
</Elements>


Step 3 – The Script: Identical to app parts...just make sure you include the script on the app page.

Step 4 – Testing: As mentioned above, the custom action in this solution adds a button to the "Files" tab for Document Libraries.  The app page it launches will display all the selected files in the Document Library.  This is a little different from other parts of the app, in that the app needs read permissions to list(s) in the site.  The pictures below illustrate this permission setting in the AppManifest, the app permission screen in SharePoint where the app is trusted to read a specific list, and the launched app page in the SharePoint dialog (with consistent styling):

Permissions Settings in AppManifest

App Permissions in SharePoint

App Page Displayed in SharePoint Dialog

postMessage Resizing

As mentioned earlier, the final release of SharePoint 2013 will support app part page resizing through postMessage client-side communication.  In this model, the parent page and iframe page can communicate with each other, provided they are listening for messages.  SharePoint 2013 will listen for resize messages from app part pages and adjust the height and width of the app part based on the size specified in the message.  The MSDN documentation for this will not work in the SharePoint 2013 Preview, but will be available in the final SharePoint 2013 release.  Since postMessage is relatively new to most developers, I have provided a code sample that illustrates this communication in a simple web application (outside of SharePoint).  When SharePoint 2013 is released, look for an update or new post outlining postMessage resizing in SharePoint.  You can download the postMessage solution here:  postMessage.zip.
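
Outside of SharePoint, the pattern itself is straightforward. Here is a simplified sketch of my own (not the downloadable sample) showing a parent page that listens for resize messages and an iframe page that sends them:

//--- parent page: listen for messages and resize the iframe ---
window.addEventListener('message', function (e) {
    //production code should validate e.origin against the expected iframe domain
    var size = JSON.parse(e.data);    //ex: '{"height":420,"width":510}'
    var frame = document.getElementById('appFrame');
    frame.style.height = size.height + 'px';
    frame.style.width = size.width + 'px';
}, false);

//--- iframe page: post the desired size to the parent ---
window.parent.postMessage(
    JSON.stringify({ height: document.body.scrollHeight, width: 510 }), '*');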

Conclusion

The UX challenge with apps for SharePoint 2013 isn't going away.  App developers need to be aware of SharePoint style classes and the methods discussed in this article for achieving the best UX.  Consider the MSDN documentation on UX design for apps in SharePoint 2013 as the definitive source for UX design decisions.  With these resources and good design planning, an app developer can achieve excellent results for almost any tenant!

You can download my sample project here:  OptimizingAppUX.zip

Hope this helps!


Real-world Apps for SharePoint 2013 - Kudos (Part 1)


In this two-part series, I will explore developing real-world solutions via apps for SharePoint 2013.  Part 1 will focus on the basic app delivery without complex tenancy considerations (the solution will store all information in a list on the tenant).  In Part 2, we will explore tenancy considerations in the Office Marketplace for solutions leveraging cloud storage in SQL Azure and other advanced scenarios.

The Solution

Microsoft uses a popular employee recognition application called Kudos.  I can send a kudos to anyone in the organization to recognize them for exceptional work.  When a kudos is submitted, an email is sent to the recipient, the submitter, and both of their managers (if applicable).  Additionally, the kudos is added to social newsfeeds for a wider audience to see.  Statistics are available to see historic kudos activity for employees.  I felt like this was a perfect candidate for a SharePoint 2013 app, given the rich integration with social, reporting, and workflow.  The final solution for Part 1 is detailed throughout this post (including the provided code).

Getting Started

The solutions in this series will be developed as Autohosted solutions, which are ultimately deployed to Azure for hosting.  I should note that the solution in Part 1 could have been developed as a SharePoint-hosted solution, but I chose Autohosted given the direction we will take in Part 2.  Since Autohosted solutions provide the flexibility of server-side code, Part 1 has a really nice mix of CSOM and SSOM.  It also does a good job of illustrating the differences in the three web locations…the host web (consumes the app), the app web (hosts SharePoint-specific components of the solution), and the remote app web (hosts the app logic).  An app will always have a host web, but not always a remote web (SharePoint-hosted apps) or app web (optional in Autohosted and Provider-hosted apps).  The Part 1 solution leverages all three as outlined in the diagram below:

Building the Solution

I started by creating an App for SharePoint 2013 project in Visual Studio leveraging my Office 365 Preview tenant for debugging and configured as Autohosted.  The app will read profiles, write to lists/newsfeed, and use SharePoint utilities for sending email, all of which require specific app permissions.  App permissions are set in the AppManifest.xml.  Here are the permissions I set for the Kudos app:

With app permissions in place, I turned my attention to storage of kudos.  For Part 1, I decided to use a SharePoint list for storage.  In the future I will modify the solution to use SQL Azure for storage, which will be better for capacity/growth and much more efficient for advanced analytics and reporting (not to mention cleaner to work with compared to CAML).  Lists in SharePoint apps get provisioned in the app web, which is on a separate domain from the consuming site.  Below is the simple list structure I came up with for storing kudos.  Notice the manager fields are optional, since not all employees have a manager or have it set on their profile:

Next, I turned my attention to the markup and script that users would interact with to submit kudos.  Apps for SharePoint are required to provide a full-screen user experience, but I also wanted the app to be surfaced in existing SharePoint pages.  To do this, I added a Client Web Part to the SharePoint project and will deliver the same kudos page in both full-screen and in the app part.  For details on designing/styling for both experiences, see my previous post on Optimizing User Experience of Apps for SharePoint 2013.  The Kudos app has two views.  The first view contains an employee look-up form and displays kudos statistics to the user, such as the number of kudos sent/received over time and the user's balance (we will limit the user to 4 submissions/week).  The second view will display details of the kudos recipient (Name, Profile Picture, Title, etc) and provide the form for the kudos text.  Below are pictures of these two views that we will dissect later in this post:

Kudos Stats and Lookup Form
Kudos Entry Form


When the Kudos app is first loaded, it will display kudos statistics for the user (ex: number of kudos sent and received over time).  This is achieved through server-side CAML queries on page load (note: this could also be done client-side):

protected void Page_Load(object sender, EventArgs e)
{
    if (!this.IsPostBack)
    {
        //get context token from the request
        var contextToken = TokenHelper.GetContextTokenFromRequest(Page.Request);
        hdnContextToken.Value = contextToken;
        string hostweburl = Request["SPHostUrl"];
        string appweburl = Request["SPAppWebUrl"];
        using (var clientContext = TokenHelper.GetClientContextWithContextToken(appweburl, contextToken, Request.Url.Authority))
        {
            //load the current user
            Web web = clientContext.Web;
            User currentUser = web.CurrentUser;
            clientContext.Load(currentUser);
            clientContext.ExecuteQuery();

            //get the current users kudos activity
            ListCollection lists = web.Lists;
            List kudosList = clientContext.Web.Lists.GetByTitle("KudosList");
            CamlQuery receivedQuery = new CamlQuery()
            {
                ViewXml = "<View><Query><Where><Eq><FieldRef Name='Recipient' LookupId='TRUE' /><Value Type='User'>" + currentUser.Id + "</Value></Eq></Where></Query><ViewFields><FieldRef Name='Title' /><FieldRef Name='Submitter' /><FieldRef Name='SubmitterManager' /><FieldRef Name='Recipient' /><FieldRef Name='RecipientManager' /></ViewFields></View>"
            };
            var receivedItems = kudosList.GetItems(receivedQuery);
            CamlQuery sentQuery = new CamlQuery()
            {
                ViewXml = "<View><Query><Where><Eq><FieldRef Name='Submitter' LookupId='TRUE' /><Value Type='User'>" + currentUser.Id + "</Value></Eq></Where></Query><ViewFields><FieldRef Name='Title' /><FieldRef Name='Submitter' /><FieldRef Name='SubmitterManager' /><FieldRef Name='Recipient' /><FieldRef Name='RecipientManager' /></ViewFields></View>"
            };
            var sentItems = kudosList.GetItems(sentQuery);

            clientContext.Load(receivedItems, items => items.IncludeWithDefaultProperties(item => item.DisplayName));   
            clientContext.Load(sentItems, items => items.IncludeWithDefaultProperties(item => item.DisplayName));
            clientContext.ExecuteQuery();

            //convert to generics collection
            List<Kudo> receivedKudos = receivedItems.ToKudosList();
            List<Kudo> sentKudos = sentItems.ToKudosList();

            //set statistics
            int availableKudos = 4 - sentKudos.Count(i => i.CreatedDate > DateTime.Now.Subtract(TimeSpan.FromDays(7)));
            hdnSentQuota.Value = availableKudos.ToString();
            lblReceivedThisMonth.Text = String.Format("{0} Kudos received this month", receivedKudos.Count(i => i.CreatedDate > DateTime.Now.Subtract(TimeSpan.FromDays(30))).ToString());
            lblReceivedAllTime.Text = String.Format("{0} received all time!", receivedKudos.Count.ToString());
            lblSentThisMonth.Text = String.Format("{0} Kudos sent this month", sentKudos.Count(i => i.CreatedDate > DateTime.Now.Subtract(TimeSpan.FromDays(30))).ToString());
            lblRemainingThisWeek.Text = String.Format("You can send {0} more this week", availableKudos.ToString());
            lblAfterThisKudos.Text = String.Format("After this kudos, you can send {0} this week", (availableKudos - 1).ToString());   
        }
    }
}


Most of the heavy lifting prior to submitting a kudos will be handled client-side (employee lookups, profile queries, etc).  This requires referencing several script files from the host site including SP.js, SP.Runtime.js, init.js, SP.UserProfiles.js, and SP.RequestExecutor.js.  The last two of these are probably the least familiar but particularly important.  SP.UserProfiles.js provides client-side access to profiles and SP.RequestExecutor.js wraps all of our client-side requests with the appropriate OAuth details.  Here is how I referenced them dynamically from the host site:

//Load the required SharePoint libraries and wire events.
$(document).ready(function () {
    //Get the URI decoded URLs.
    hostweburl = decodeURIComponent(getQueryStringParameter('SPHostUrl'));
    appweburl = decodeURIComponent(getQueryStringParameter('SPAppWebUrl'));
    var scriptbase = hostweburl + '/_layouts/15/';

    //load all appropriate scripts for the page to function
    $.getScript(scriptbase + 'SP.Runtime.js', function () {
        $.getScript(scriptbase + 'SP.js', function () {
            $.getScript(scriptbase + 'SP.RequestExecutor.js', registerContextAndProxy);
            $.getScript(scriptbase + 'init.js', function () {
                $.getScript(scriptbase + 'SP.UserProfiles.js', function () { });
            });
        });
    });
});


The People Pickers in SharePoint 2013 have a great auto-fill capability I wanted to replicate in my employee lookup.  After some hunting and script debugging, I found a fantastic and largely undocumented client-side function for doing user lookups.  SP.UI.ApplicationPages.ClientPeoplePickerWebServiceInterface.clientPeoplePickerSearchUser takes the client context and a ClientPeoplePickerQueryParameters object to find partial matches on a query.  I wired this into the keyup event of my lookup textbox as follows:

//wire keyup on the textbox to do user lookups
$('#txtKudosRecipient').keyup(function (event) {
    var txt = $('#txtKudosRecipient').val();
    if ($('#txtKudosRecipient').hasClass('txtLookupSelected'))
        $('#txtKudosRecipient').removeClass('txtLookupSelected');
    if (txt.length > 0) {
        var query = new SP.UI.ApplicationPages.ClientPeoplePickerQueryParameters();
        query.set_allowMultipleEntities(false);
        query.set_maximumEntitySuggestions(50);
        query.set_principalType(1);
        query.set_principalSource(15);
        query.set_queryString(txt);
        var searchResult = SP.UI.ApplicationPages.ClientPeoplePickerWebServiceInterface.clientPeoplePickerSearchUser(context, query);
        context.executeQueryAsync(function () {
            var results = context.parseObjectFromJsonString(searchResult.get_value());
            var txtResults = '';
            if (results) {
                if (results.length > 0) {
                    for (var i = 0; i < results.length; i++) {
                        var item = results[i];
                        var loginName = item['Key'];
                        var displayName = item['DisplayText'];
                        var title = item['EntityData']['Title'];
                        ...


Once a user is selected, script will query for detailed profile information on the submitter and recipient and toggle to the kudos entry view.  No big surprises here, but a decent amount of CSOM using the new UserProfile scripts:

//function that is fired when a recipient is selected from the suggestions dialog or btnSearch
function recipientSelected(recipientKey, recipientText) {
    $('#txtKudosRecipient').val(recipientText);
    $('#txtKudosRecipient').addClass('txtLookupSelected');
    $('#divUserSearch').css('display', 'none');

    //look up user
    var peopleMgr = new SP.UserProfiles.PeopleManager(context);
    var submitterProfile = peopleMgr.getMyProperties();
    var recipientProfile = peopleMgr.getPropertiesFor(recipientKey);
    context.load(submitterProfile, 'AccountName', 'PictureUrl', 'ExtendedManagers', 'Title', 'Email', 'DisplayName');
    context.load(recipientProfile, 'AccountName', 'PictureUrl', 'ExtendedManagers', 'Title', 'Email', 'DisplayName');
    context.executeQueryAsync(function () {
        var url = recipientProfile.get_pictureUrl();
        var title = recipientProfile.get_title();
        var email = recipientProfile.get_email();

        //set profile image source
        $('#imgRecipient').attr('src', url);
        $('#imgRecipient').attr('alt', recipientText);

        //set label text
        $('#lblRecipient').html(recipientText);
        $('#lblRecipientTitle').html(title);
        $('#lblRecipientEmail').html(email);

        //set hidden fields
        $('#hdnSubmitter').val(submitterProfile.get_accountName());
        $('#hdnSubmitterName').val(submitterProfile.get_displayName());
        $('#hdnRecipient').val(recipientProfile.get_accountName());
        $('#hdnRecipientName').val(recipientProfile.get_displayName());
        var sMgrs = submitterProfile.get_extendedManagers();
        if (sMgrs.length > 0)
            $('#hdnSubmitterManager').val(sMgrs[sMgrs.length - 1]);
        else
            $('#hdnSubmitterManager').val('');
        var rMgrs = recipientProfile.get_extendedManagers();
        if (rMgrs.length > 0)
            $('#hdnRecipientManager').val(rMgrs[rMgrs.length - 1]);
        else
            $('#hdnRecipientManager').val('');
    }, function () {
        alert('Failed to load user profile details');
    });
}


When the user submits a kudos, the kudos form will execute its one and only postback.  In reality, everything the postback does with SSOM could be achieved client-side with CSOM.  However, I'm looking to do some advanced things in Part 2 that I think will be easier server-side (ex: impersonate the social post as a "Kudos" service account).  The postback does three basic things…adds a kudos record to the kudos list on the app web, creates a post on the social feed with a recipient mention, and emails the kudos to the submitter, recipient, and their managers (if applicable).  Here is the postback code:

protected void btnSend_Click(object sender, ImageClickEventArgs e)
{
    //create the new kudo from the form input
    Kudo newKudo = new Kudo();
    newKudo.KudosText = txtMessage.Text;

    //get context token from the request
    string contextToken = hdnContextToken.Value;
    string hostweburl = Request["SPHostUrl"];
    string appweburl = Request["SPAppWebUrl"];
    using (var clientContext = TokenHelper.GetClientContextWithContextToken(appweburl, contextToken, Request.Url.Authority))
    {
        //get the context
        Web web = clientContext.Web;
        ListCollection lists = web.Lists;
        List kudosList = clientContext.Web.Lists.GetByTitle("KudosList");

        //ensure submitter
        newKudo.Submitter = web.EnsureUser(hdnSubmitter.Value);
        clientContext.Load(newKudo.Submitter);

        //ensure recipient
        newKudo.Recipient = web.EnsureUser(hdnRecipient.Value);
        clientContext.Load(newKudo.Recipient);

        //ensure submitter manager (if applicable)
        if (!String.IsNullOrEmpty(hdnSubmitterManager.Value))
        {
            newKudo.SubmitterManager = web.EnsureUser(hdnSubmitterManager.Value);
            clientContext.Load(newKudo.SubmitterManager);
        }

        //ensure recipient manager (if applicable)
        if (!String.IsNullOrEmpty(hdnRecipientManager.Value))
        {
            newKudo.RecipientManager = web.EnsureUser(hdnRecipientManager.Value);
            clientContext.Load(newKudo.RecipientManager);
        }
        clientContext.ExecuteQuery();

        //add the listitem and execute changes to SharePoint
        clientContext.Load(kudosList, list => list.Fields);
        Microsoft.SharePoint.Client.ListItem kudosListItem = newKudo.Add(kudosList);
        clientContext.Load(kudosListItem);
        clientContext.ExecuteQuery();

        //write to social feed
        SocialFeedManager socialMgr = new SocialFeedManager(clientContext);
        var post = new SocialPostCreationData();
        post.ContentText = "Sent @{0} a Kudos for:\n'" + txtMessage.Text + "'";
        post.ContentItems = new[]
        {
            new SocialDataItem
            {
                ItemType = SocialDataItemType.User,
                AccountName = newKudo.Recipient.LoginName
            }
        };
        ClientResult<SocialThread> resultThread = socialMgr.CreatePost("", post);
        clientContext.ExecuteQuery();

        //send email to appropriate parties
        EmailProperties email = new EmailProperties();
        email.To = new List<String>() { newKudo.Recipient.Email };
        email.CC = new List<String>() { newKudo.Submitter.Email };
        if (!String.IsNullOrEmpty(hdnSubmitterManager.Value))
            ((List<String>)email.CC).Add(newKudo.SubmitterManager.Email);
        if (!String.IsNullOrEmpty(hdnRecipientManager.Value))
            ((List<String>)email.CC).Add(newKudo.RecipientManager.Email);
        email.Subject = String.Format("You have received public Kudos from {0}", hdnSubmitterName.Value);
        email.Body = String.Format("<html><body><p>You have received the following public Kudos from {0}:</p><p style=\"font-style: italic\">\"{1}\"</p><p>To recognize this achievement, this Kudos has been shared with both of your managers and will be visible to anyone in your Newsfeed.</p></body></html>", hdnSubmitterName.Value, txtMessage.Text);
        Utility.SendEmail(clientContext, email);
        clientContext.ExecuteQuery();

        //update stats on the landing page
        int availableKudos = Convert.ToInt32(hdnSentQuota.Value) - 1;
        hdnSentQuota.Value = availableKudos.ToString();
        int sentThisMonth = Convert.ToInt32(lblSentThisMonth.Text.Substring(0, lblSentThisMonth.Text.IndexOf(' '))) + 1;
        lblSentThisMonth.Text = String.Format("{0} Kudos sent this month", sentThisMonth.ToString());
        lblRemainingThisWeek.Text = String.Format("You can send {0} more this week", availableKudos.ToString());
        lblAfterThisKudos.Text = String.Format("After this kudos, you can send {0} this week", (availableKudos - 1).ToString());
        txtMessage.Text = "";
    }
}


That's about it!  Here are some screenshots of the Kudos app in action:

Kudos App in Existing WikiPage (during lookup)

Kudos Form with Selected Recipient

Newsfeed with Kudos Activity

Kudos Email with Managers CC'ed

Final Thoughts

Kudos was a really fun app to develop.  It's an app that almost any organization could leverage to further capitalize on the power of social in SharePoint 2013.  Best of all, it runs in Office 365!  Look for Part 2 in the next few weeks, where I will expand upon the solution to leverage SQL Azure storage, workflow, and address multi-tenancy for the marketplace.  I hope this was helpful and gets you excited about building fantastic apps for SharePoint 2013!

Kudos Part 1 Code: KudosPart1.zip


Real-world Apps for SharePoint 2013 - Kudos (Part 2)


Several months ago I wrote a post about building real-world apps for SharePoint 2013.  In that post, I walked through the creation of an employee recognition app for SharePoint 2013 called Kudos.  The original Kudos app leveraged SharePoint lists for storage and wasn’t very configurable.  In this post, we’ll evolve the Kudos app to address several of these limitations, add some additional capabilities, and discuss the tenancy considerations for autohosted apps.

The Solution

The new Kudos app will leverage SQL Azure for data storage, which is much more efficient, scalable, and easier to query in our solution.  The new app will also differentiate between app visitors and app administrators by allowing administrators to configure app-specific settings.  Gamification features will be added to allow configurable “badges” to be included in Kudos submissions.  Finally, the app will be updated to handle direct navigation where context tokens are null.  All of these features are detailed in the sections below.

Autohosted App Tenancy

Before jumping into the intricacies of the new app, I want to discuss the tenancy considerations of autohosted apps.  When an autohosted app is installed into a SharePoint site, all the app components (web/database/workflow) are auto-provisioned in a Microsoft-owned Azure account (o365apps.net).  Not only is provisioning automatic, high availability and disaster recovery are automatic as well (hence the name “autohosted”).  This happens each time the app is installed in a SharePoint site (even within the same Office 365 tenancy).  For example, if I installed my autohosted app into 8 team sites, it would result in 8 separate web sites and databases in Azure.  The diagram below illustrates the tenancy of autohosted apps.  If the isolated tenancy of autohosted apps doesn’t work for you, provider-hosted apps offer more flexibility.  However, with flexibility comes complexity as things are less “automatic”.

Storage with SQL Azure

When I first learned about the ability to provision databases on-demand for autohosted apps, it sounded like some sort of voodoo magic.  How would a newly provisioned app know what connection string to use?  It turns out that Microsoft came up with a very elegant solution to the problem.  In fact, managed CSOM provides APIs to retrieve either a SqlConnection object or the raw connection string using the client context.  Before I jump into these APIs, let me explain the setup to leverage databases in autohosted app solutions.

The new Kudos app will leverage the ADO.NET Entity Framework to talk with SQL Azure.  The Entity Framework supports two models…entities first or database first.  Although it is a matter of preference, I tend to use the database-first approach except in MVC apps.  To do this, we’ll start with a database that we will later use to generate our entity model.  Visual Studio 2012 includes LocalDB, which is essentially an improved on-demand version of SQL Express.  I like to create isolated instances in LocalDB for my apps, which can be done with the SqlLocalDB.exe command line:
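
For example, the following commands create and start a private instance (the instance name here is my own):

SqlLocalDB.exe create "KudosLocalDB"
SqlLocalDB.exe start "KudosLocalDB"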

Next, we need to add a database project to our app solution.  Right-click the solution and select Add > New Project.  In the Add New Project dialog, find the SQL Server Database Project template under Other Languages > SQL Server.  Give the project a name and add it to the solution:

We need to make the app project aware of this new database project (almost like adding a project reference).  To create this reference, set the SQL Database property on the app project to our new database project.  When we set this property, Visual Studio will warn us that the target for the database project will be changed to SQL Azure:

Next, we can add all the appropriate tables and procedures to the database project.  For Kudos, I added tables for Kudos and AppSettings.  I also added a GetKudosHistory stored procedure for pulling historical statistics on a user.  A stored procedure will be more efficient than directly querying the entity model with numerous where clauses.  Finally, I added a post-deployment script to seed the AppSettings table with some default settings:

DECLARE @SettingsCount INT
SELECT @SettingsCount = COUNT(*) FROM dbo.AppSettings

--Seed the AppSettings table with default settings if empty
IF (@SettingsCount < 1)
BEGIN
    INSERT INTO dbo.AppSettings (UseQuota, Quota, QuotaWindow, PostToNewsfeed, EmailRecipient, 
        EmailSubmitter, EmailRecipientManager, EmailSubmitterManager)
    VALUES (1, 4, 2, 1, 1, 1, 1, 1)
END


Before we can generate an entity model, we need to publish the database to the LocalDB instance we created earlier.  Remember, this is automatic once we deploy the autohosted app, but here we need to publish manually so we can generate the initial entity model.  The Publish Database wizard can be launched by right-clicking the database project and selecting Publish:

With the database published in LocalDB we can turn our attention to the entity model, which will be added to the web project.  The ADO.NET Entity Data Model selection can be found under Visual C# > Data in the Add New Item dialog:

Using the Entity Data Model Wizard, we will select “Generate from Database”, provide the connection to our database on LocalDB, and select all the database objects we want represented in our entity model:

Side note: I prefer my entity model to contain a constructor that accepts a connection string.  This constructor will get generated automatically if I change the “Code Generation Strategy” property on the entity model from None to Default.  However, I also need to delete the two .tt files that are nested under the model in solution explorer.  I’ve done it this way, but it is all a matter of preference:

As mentioned earlier, CSOM includes new APIs for working with SQL Azure databases in autohosted apps.  AppInstance.TryGetAppDatabaseConnectionDirect can be used to retrieve a SqlConnection object for the database in SQL Azure.  With this approach, you should configure the connection string in your web.config with the name LocalDBInstanceForDebugging and the API will dynamically use this connection when debugging:

var contextToken = TokenHelper.GetContextTokenFromRequest(Page.Request);
var hostWeb = Page.Request["SPHostUrl"];
using (var clientContext = TokenHelper.GetClientContextWithContextToken(hostWeb, contextToken, Request.Url.Authority))
{
    SqlConnection conn = new SqlConnection();
    bool isReadOnly;
    AppInstance.TryGetAppDatabaseConnectionDirect(clientContext, out conn, out isReadOnly);
    clientContext.ExecuteQuery();

    //Start using SqlConnection


Although powerful for many development scenarios, a SqlConnection object is not the most appropriate for use with the Entity Framework.  Instead, we will leverage AppInstance.RetrieveAppDatabaseConnectionString to retrieve the raw connection string in the Kudos app.  This API will NOT return the raw connection string when debugging, so the code will check for a null connection string and directly leverage the LocalDBInstanceForDebugging connection if appropriate:

var contextToken = TokenHelper.GetContextTokenFromRequest(Page.Request);
var hostWeb = Page.Request["SPHostUrl"];
using (var clientContext = TokenHelper.GetClientContextWithContextToken(hostWeb, contextToken, Request.Url.Authority))
{
    ClientResult<string> connStringResult = AppInstance.RetrieveAppDatabaseConnectionString(clientContext);
    clientContext.ExecuteQuery();
    string connString = connStringResult.Value;

    //connection string will be empty if in debug mode
    if (String.IsNullOrEmpty(connString))
        connString = ConfigurationManager.ConnectionStrings["LocalDBInstanceForDebugging"].ConnectionString;

    //start using connection string (ex: Entity Framework)


Autohosted app deployment automatically handles all the connection string wire-up magic with SQL Azure.  However, it won’t generate an Entity Framework connection string, which requires additional metadata.  Instead, we can manually convert the connection string using the EntityConnectionStringBuilder class.  I did this in a ConnectionUtil class as seen below:

using Microsoft.SharePoint.Client;
using System;
using System.Collections.Generic;
using System.Data.EntityClient;
using System.Linq;
using System.Web;
using System.Web.Configuration;

namespace SharePointKudosSQLWeb.Util
{
    public class ConnectionUtil
    {
        public static string GetEntityConnectionString(ClientContext clientContext)
        {
            //try to get the connection string from the clientContext
            ClientResult<string> result = AppInstance.RetrieveAppDatabaseConnectionString(clientContext);
            clientContext.ExecuteQuery();
            string connString = result.Value;

            //if the connection string is empty, then this is debug mode
            if (String.IsNullOrEmpty(connString))
                connString = WebConfigurationManager.ConnectionStrings["LocalDBInstanceForDebugging"].ConnectionString;

            //build an Entity Framework connection string
            EntityConnectionStringBuilder connBuilder = new EntityConnectionStringBuilder();
            connBuilder.Provider = "System.Data.SqlClient";
            connBuilder.ProviderConnectionString = connString;
            connBuilder.Metadata = "res://*/KudosModel.csdl|res://*/KudosModel.ssdl|res://*/KudosModel.msl";

            //return the formatted connection string
            return connBuilder.ConnectionString;
        }
    }
}
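
Using the utility is then straightforward wherever a client context is available. A hypothetical usage sketch (KudosModelContainer stands in for whatever context class the Entity Data Model Wizard generated in your project):

//hypothetical usage...substitute the context class generated from your model
string entityConnString = ConnectionUtil.GetEntityConnectionString(clientContext);
using (var db = new KudosModelContainer(entityConnString))
{
    //query SQL Azure through the Entity Framework
    var settings = db.AppSettings.FirstOrDefault();
}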


Gamification/Badges

The new employee recognition app includes the ability to send badges to the recipient of a Kudos.  A document library in the app web is the perfect storage location for these badges.  This will allow an administrator to add/remove badges as necessary.  Most of the work in developing the badge functionality is in the BadgePicker control for selecting a badge in the Kudos form:

We also want to include the badge in the social post to the newsfeed:

if (settings.PostToNewsfeed)
{
    SocialFeedManager socialMgr = new SocialFeedManager(clientContext);
    var post = new SocialPostCreationData();
    post.ContentText = "Sent @{0} a Kudo for " + hdnSelectedBadgeTitle.Value + ":\n'" + txtMessage.Text + "'";
    post.ContentItems = new[]
    {
        new SocialDataItem
        {
            ItemType = SocialDataItemType.User,
            AccountName = recipient.LoginName
        }
    };
    post.Attachment = new SocialAttachment()
    {
        AttachmentKind = SocialAttachmentKind.Image,
        Uri = hdnSelectedBadgeUrl.Value
    };
    ClientResult<SocialThread> resultThread = socialMgr.CreatePost(null, post);
    clientContext.ExecuteQuery();
}

 

Distinguishing App Roles

As I started to develop real-world solutions using the app model, I quickly identified the need to distinguish an app administrator from a normal visitor.  For Kudos, the administrator needs the ability to configure quotas, quota durations, notification settings, and badges for the application. 

I see two distinct approaches to application roles…inherit the permissions from the app web or build a completely custom permission model in the app's storage (ex: SQL Azure).  A custom model might be necessary for complex permission needs.  For Kudos, I just need to know if a user is an administrator, and the app web can easily provide that.  You might ask why I'm going to the app web and not the host web for checking permissions.  Checking permissions on the host web would require my app to have full control on the host web, which it would otherwise not require.  It is bad practice for your app to request more permissions than it needs.  This is especially true in our case, since the app is free to check permissions on its own app web (which are inherited from the host web).

I decided to surface my admin screens through the settings menu in the chrome control.  This posed another challenge, since this menu is rendered using client-side script.  I found that checking the effective permissions of the user took too many API calls to do client-side (one call to get the current user and one to get effective permissions for the user).  Instead, I decided to check effective permissions in managed code and conditionally output the appropriate script.  I delivered this through a base page class that all of my pages inherited from:

namespace SharePointKudosSQLWeb.Pages
{
    public class KudosBasePage : System.Web.UI.Page
    {
        protected override void OnLoad(EventArgs e)
        {
            bool hasManageWebPerms = false;

            //get SharePoint context
            var spContext = Util.ContextUtil.Current;
            using (var clientContext = TokenHelper.GetClientContextWithContextToken(spContext.AppWebUrl, spContext.ContextTokenString, Request.Url.Authority))
            {
                //check if the user has ManageWeb permissions from app web
                BasePermissions perms = new BasePermissions();
                perms.Set(PermissionKind.ManageWeb);
                ClientResult<bool> result = clientContext.Web.DoesUserHavePermissions(perms);
                clientContext.ExecuteQuery();
                hasManageWebPerms = result.Value;
            }

            //define initial script
            string script = @"
            function chromeLoaded() {
                $('body').show();
            }
            //function callback to render chrome after SP.UI.Controls.js loads
            function renderSPChrome() {
                //Get the host site logo url from the SPHostLogoUrl parameter
                var hostlogourl = decodeURIComponent(getQueryStringParameter('SPHostLogoUrl'));

                var links = [{
                    'linkUrl': 'mailto:kudos@tenant.onmicrosoft.com',
                    'displayName': 'Contact us'
                }];
                ";

            //add link to settings if the current user has ManageWeb permissions
            if (hasManageWebPerms)
            {
                script += @"links.push({
                        'linkUrl': 'Settings.aspx?" + Request.QueryString + @"',
                        'displayName': 'App Settings'
                    });";
                script += @"links.push({
                        'linkUrl': '" + String.Format("{0}/KudosBadges", spContext.AppWebUrl) + @"',
                        'displayName': 'Badge Mgmt'
                    });";
            }
               
            //add remainder of script
            script += @"
                //Set the chrome options for launching Help, Account, and Contact pages
                var options = {
                    'appIconUrl': hostlogourl,
                    'appTitle': document.title,
                    'settingsLinks': links,
                    'onCssLoaded': 'chromeLoaded()'
                };

                //Load the Chrome Control in the divSPChrome element of the page
                var chromeNavigation = new SP.UI.Controls.Navigation('divSPChrome', options);
                chromeNavigation.setVisible(true);

            }";

            //register script in page
            Page.ClientScript.RegisterClientScriptBlock(typeof(KudosBasePage), "KudosBasePage", script, true);

            //call base onload
            base.OnLoad(e);
        }
    }
}

 

Deciding on client-side or managed CSOM is an important consideration to weigh for all API calls.  Each will likely take the same number of calls, but managed code should have less latency when multiple API calls are required.  This is especially true for autohosted apps, where the app and the SharePoint Online farm might be hosted from the same Microsoft data center.  At worst, the app and the farm would communicate over Microsoft’s massive communication backbone between data centers.
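
As a rough illustration, managed CSOM can queue several operations and send them to the server in a single round-trip.  Here is a minimal sketch (my own, not from the Kudos code) that batches the user load and the permission check from the base page above:

//queue the user load and the permission check before calling the server
BasePermissions perms = new BasePermissions();
perms.Set(PermissionKind.ManageWeb);
clientContext.Load(clientContext.Web.CurrentUser);
ClientResult<bool> permResult = clientContext.Web.DoesUserHavePermissions(perms);

//a single ExecuteQuery round-trip services everything queued above
clientContext.ExecuteQuery();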

Direct Navigation

It is possible (and likely) that some users will try navigating directly to the app instead of through SharePoint.  This "direct navigation" is a special use case to develop around, especially when the app leverages OAuth for communication back to SharePoint (which will be 100% of the time in Office 365).  When a user launches an app from SharePoint, SharePoint posts a context token to the app.  Through this context token, the app can talk to Azure ACS to get an access token it can use for calling SharePoint APIs.  Without a context token, the app will be denied access to the SharePoint APIs.  Luckily, SharePoint 2013 provides an app redirect page specifically to refresh context tokens (/_layouts/15/appredirect.aspx).  TokenHelper.cs includes built-in functions to build the correct redirect URL.  All apps for SharePoint should be written to take advantage of this.  In my case, I built a ContextUtil class to handle the app redirect and cache my tokens for an appropriate length of time.

public static ContextUtil Current
{
    get
    {
        ContextUtil spContext = HttpContext.Current.Session["SharePointContext"] as ContextUtil;
        if (spContext == null || !spContext.IsValid)
            spContext = new ContextUtil(HttpContext.Current.Request);

        if (spContext.IsValid)
            return spContext;
        else
        {
            HttpContext.Current.Response.Redirect(GetRedirectUrl());
            return null;
        }
    }
}

private static string GetRedirectUrl()
{
    string hostWebUrl = HttpContext.Current.Request["SPHostUrl"];
    return TokenHelper.GetAppContextTokenRequestUrl(hostWebUrl,
        HttpContext.Current.Server.UrlEncode(HttpContext.Current.Request.Url.ToString()));
}

 

Style "Toggles"

In Part 1, I developed a reusable script to add the chrome control or host web styles based on a DisplayType URL parameter.  I still think this is a great practice, as it allows the same app pages to be displayed in full-screen, app parts, and dialogs.  However, I've grown tired of the delayed style "toggle" that occurs when the chrome control or host styles finally load into the app page (ugly page…wait for it…YAY, pretty page).  During preview, this sometimes took 4-8 seconds.  In the new Kudos app, I've defaulted the body display style to none on all pages and used callbacks to display the body once the chrome control or host styles have completely loaded.  This provides a much better experience for the user:

<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="SharePointKudosSQLWeb.Pages.Default" %>
<%@ Register Src="~/UserControls/KudosStatsControl.ascx" TagPrefix="uc1" TagName="KudosStatsControl" %>
<%@ Register Src="~/UserControls/KudosControl.ascx" TagPrefix="uc1" TagName="KudosControl" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Kudos Employee Recognition</title>
    <script type="text/javascript" src="../Scripts/MicrosoftAjax.js"></script>
    <script type="text/javascript" src="../Scripts/jquery-1.7.1.min.js"></script>
    <script type="text/javascript" src="../Scripts/KudosScript.js"></script>
</head>
<body style="display: none;">
    <form id="form1" runat="server" autocomplete="off">

 

Final Thoughts

The new Kudos employee recognition app is a great reference for autohosted apps.  It makes good use of new SharePoint APIs, maximizes different UX delivery methods, and follows several patterns that I would consider best practices for developing apps for SharePoint.  Best of all, I’m providing 100% of the code to the community.  So pull down the code, take note of the patterns/intricacies, and go build some great apps for SharePoint! 

Kudos Part 2 Code: KudosAdvanced.zip

SharePoint 2013 App Deployment through "App Stapling"


Feature Stapling is a popular development practice for adding functionality to a specific type of site in SharePoint.  Although it has become the preferred approach over custom site definitions, it still requires the deployment of a farm solution.  Unfortunately, SharePoint Online doesn't support feature stapling or custom site definitions.  So how can an organization deploy capabilities to a group of existing sites or new sites that match specific criteria?  The answer is app deployment through an app catalog…something I like to call "app stapling".  Here is a video that illustrates the concepts outlined in this blog post:

(Please visit the site to view this video)

The App Catalog

The App Catalog is a special site collection aimed at storing, managing, and delivering Office/SharePoint apps to the enterprise.  Administrators of an App Catalog can upload apps and make them available to site owners (they can also flip a kill switch to disable the apps).  However, administrators can also use the catalog to push apps to specific site collections, managed paths, and site templates.  To get started, you should provision an app catalog in your tenancy (on-premise users will need to complete the additional steps outlined in Configure an environment for apps for SharePoint).  This can be done from SharePoint Admin Center in Office 365 by clicking on the Apps link in the side navigation and then selecting App Catalog:

Once the App Catalog has been provisioned, it should look similar to this:

App Deployment

In order to "push" an app out to specific sites, we first need to upload it to the App Catalog.  Navigate to the App Catalog and select the Apps for SharePoint link in the side navigation.  You can simply drag/drop an .app file into this library to make it available to the organization:

Next, we need to install the app into the App Catalog (not to be confused with uploading to the App Catalog).  This seems a little odd, but it is the only way to push it to specific sites in the tenancy.  You can install the app into the App Catalog just like you would from any other SharePoint site.  Select Site Content > add an app > From Your Organization > and select the app to install:

Now that the app is installed into the App Catalog, we can manage the deployment to other sites.  Select Site Content from the App Catalog and find the installed app.  Selecting the ellipsis should display a "Deployment" option that doesn't display in other sites:

This Manage App Deployment screen allows an administrator to deploy/retract an app to/from specific site collections, managed paths, and site templates.  Performing this "app staple" will work on all existing sites and any new sites that meet the criteria.

A few interesting notes about how this functions:

  • Similar to deployment, an App Catalog administrator can retract apps from specific sites (or flip the kill switch to disable it from all sites)
  • Because the app is pushed by an administrator, site owners will not be able to remove the app from a site that meets the deployment criteria.  Not even a site collection administrator can remove the app.
  • These centrally deployed apps also share the same centralized app resources (App Web and Remote Web).  Essentially, the app is deployed, but not installed, in each site.  All sites will leverage the App Web and Remote Web from the instance installed in the App Catalog.  This significantly changes tenancy considerations for SharePoint-hosted and Autohosted apps, which typically get their own dedicated app resources.
  • Because of centralized deployment, remote events such as “Handle App Installed”, “Handle App Uninstalled”, and “Handle App Upgrade” will only fire once (when the app is installed in the App Catalog).  Because of this, I think it will be hard to leverage “App Stapling” for branding purposes.  I’ve successfully developed branding apps that deploy/set a masterpage from the App Web, but the remote events were a key part of that solution.

Final Thoughts

App deployment through the App Catalog provides a great deal of flexibility and governance for SharePoint deployments on-premise and in Office 365.  I can see “App Stapling” being used for a number of scenarios where feature-stapling has been used in the past.

Combining Apps for SharePoint and Apps for Office


One of the interesting capabilities of the new SharePoint/Office app models is combining them to deliver complete real-world solutions.  In fact, an app for SharePoint can contain/embed an App for Office.  Combining apps can lead to some powerful scenarios. 

Imagine an expense report app for Office that interacted with Word to assemble a complete expense report.  Now think about that delivered as the default document of a document library in SharePoint (perhaps even with powerful workflow capabilities).  Hopefully you can start to see this as a powerful way to package capabilities together.  This post will focus on embedding an app for Office in an app for SharePoint and exposing it through a document library.  The video below illustrates the solution outlined in this post:

(Please visit the site to view this video)

To get started, we need an app for SharePoint project.  Any hosting type will work, but I went with SharePoint-hosted for simplicity (use provider-hosted or autohosted if you want to leverage server-side code).  Once the project has been provisioned in Visual Studio 2012, we need to add an app for Office to the project, which should be an option in the Add New Item dialog:

Visual Studio needs to know what type of app for Office we are developing.  The options include content apps for Excel or task pane apps that target Word, Excel, and/or PowerPoint.  I selected a task pane app for Microsoft Word:

Next, we have an option to select an existing Office document to associate the app with.  This is great when an existing document template already exists.  The default option is to start with a blank Office document, which is what I selected:

After completing the wizard, a number of new assets will be added to the project, including presentation elements for delivering the UI of the app for Office (styles, script, and html).  It also added the Office document (via module) and the app manifest for the Office app (seen below as ExpenseReportApp.xml):

The Word document (ExpenseReportApp.docx) should already be associated with our app for Office.  Other than that, it is an empty Word document (remember we could have selected an existing document).  When you open it, don't be concerned that the app for Office doesn't load...we haven't deployed the html page yet:

Next, we need to add a document library to store expense reports and associate with our Office document.  Launch the Add New Item dialog to add a List to the project:

Base the new list on a customizable Document Library and click Next.  Be careful not to click Finish, or we will miss an important step:

The next screen allows us to select an Office document to associate as the document template for the new library.  This is the key step in wiring our app for Office to the new document library.  Had we skipped this screen, the library would use the default blank Word document:

Associating the new library with a document template will automatically generate a custom content type for the library and list template.  To complete the solution, we just need to update the entry point of the SharePoint app to the URL of our document library.  You can look at the ListInstance to find this URL and then set it in the AppManifest.xml of the SharePoint app project:

When we debug or deploy the solution, the app should take us directly to the document library.  The document library should already have the content type association we need to launch our app for Office:

And here is the final result of the app for Office launched from our document library:

I hope this post helped illustrate the interesting capabilities that can be delivered by combining apps for SharePoint and apps for Office.  To make it even more interesting, an app for Office could call back into SharePoint APIs.  I’d love to hear about the creative combinations you come up with, so feel free to post them below!

Leveraging SharePoint dialogs in Apps for SharePoint


One of the creative ways Apps for SharePoint can be exposed in a site is through the SharePoint dialog.  In this way, apps can deliver contextual capabilities to a SharePoint site.  Dialogs will typically be launched from custom actions in the app.  In this post I will discuss how to launch an app through the SharePoint dialog, pass information into the app, and close the dialog from within the app.

Launching the Dialog

Launching an app page in the SharePoint dialog requires the use of a custom action.  Apps for SharePoint support Menu Item Custom Actions and Ribbon Custom Actions.  Both of these can be deployed to the Host Web or the App Web.  The wizard for adding these custom actions makes it very easy to scope custom actions appropriately as is shown in the table below:

Scope             Ribbon Custom Action    Menu Item Custom Action
List Template     X                       X
List Instance     X                       X
Content Type                              X
File Extension                            X

 

As you can see in the table above, Menu Item Custom Actions have additional flexibility to scope against specific Content Types or File Extensions.  Ribbon Custom Actions are unique because they scope to a specific ribbon tab and group, which a wizard will prompt you for when added to an app:

In addition to location, Ribbon Custom Actions typically have icons associated with them.  Ribbon icons are especially challenging when deploying the custom action to the host web.  An App for SharePoint can only deploy images to the app web (via module) or remote web, neither of which will render nicely in the ribbon of the host web.  The solution is to specify the image source as a Base64 encoded image as seen below:

 <CommandUIDefinitions>
    <CommandUIDefinition Location="Ribbon.Documents.New.Controls._children">
        <Button Id="Ribbon.Documents.New.UploadToAzureMediaSvcButton"
            Alt="Upload to Media Services"
            Sequence="1"
            Command="Invoke_UploadToAzureMediaSvcButtonRequest"
            LabelText="Upload to Media Services"
            TemplateAlias="o1"
            Image32by32="data:image/png;base64, iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAABGdBTUEAAK/...
            Image16by16="data:image/png;base64, iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAABGdBTUEAAK/...

 

I created a quick tool to Base64 encode images, which is provided at the end of this post.  Here is the UI of that tool:
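
The core of such a tool is only a few lines.  Here is a minimal sketch of the encoding logic (the method name and the hard-coded png content type are my own assumptions):

//read an image file and format it as a Base64 data URI for the ribbon Image32by32/Image16by16 attributes
public static string GetImageDataUri(string imagePath)
{
    byte[] imageBytes = System.IO.File.ReadAllBytes(imagePath);
    return "data:image/png;base64, " + Convert.ToBase64String(imageBytes);
}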

A custom action can be converted to leverage the SharePoint dialog by adding the HostWebDialog, HostWebDialogHeight, and HostWebDialogWidth attributes to the CustomAction element in the Elements.xml:

<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
    <CustomAction Id="a76f4430-c8b1-4317-b673-260429ca6dc1.UploadToAzureMediaSvc"
        RegistrationType="List"
        RegistrationId="851"
        Location="CommandUI.Ribbon"
        Sequence="10001"
        Title="Upload to Azure Media Services"
        HostWebDialog="true"
        HostWebDialogHeight="420"
        HostWebDialogWidth="510">

 

Passing Context to the Dialog

Apps will generally get contextual information from URL parameters or information posted from SharePoint.  Custom actions have a number of dynamic URL parameters they can leverage for contextual information:

URL Parameter     Action Type    Description
HostUrl           Both           URL of the host web
SiteUrl           Both           URL of the host site collection
Source            Both           The absolute path of the page invoking the custom action
ListUrlDir        Both           Relative URL path for the contextual list
ListId            Menu Item      The ID of the contextual list
SelectedListId    Ribbon         The ID of the contextual list
ItemId            Menu Item      The ID of the contextual item
SelectedItemId    Ribbon         Comma-separated list of IDs for the selected items
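
On the app side, these tokens arrive as ordinary query string parameters.  Here is a quick sketch of reading them in an app page (this assumes the CommandAction URL passes each token under a parameter of the same name, as the ribbon example later in this post does):

//read custom action context from the query string
string hostUrl = Page.Request["HostUrl"];
string listUrlDir = Page.Request["ListUrlDir"];

//ribbon actions pass SelectedListId/SelectedItemId; menu item actions pass ListId/ItemId
Guid selectedListId = new Guid(Page.Request["SelectedListId"]);
string[] selectedItemIds = (Page.Request["SelectedItemId"] ?? "").Split(',');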

 

Closing the Dialog

Autohosted and Provider-hosted Apps for SharePoint are hosted outside of SharePoint, and as such can’t leverage typical SharePoint scripts to close the dialog (ex: SP.UI.ModalDialog.commonModalDialogClose()).  Luckily, I have hunted through many lines of core.js to find you the solution!  Similar to resizing app parts (client web parts), we can use the HTML5 postMessage API to close the dialog from the app (and optionally refresh the parent page).  This shouldn’t come as a surprise, since the postMessage API was meant to provide cross-domain communication and is used in other app scenarios.  CloseCustomActionDialogRefresh and CloseCustomActionDialogNoRefresh are the two messages SharePoint is “listening” for to close the dialog.  I wrapped these in a simple function I can reuse across all my apps:

 function closeParentDialog(refresh) {
    var target = parent.postMessage ? parent : (parent.document.postMessage ? parent.document : undefined);
    if (refresh)
        target.postMessage('CloseCustomActionDialogRefresh', '*');
    else
        target.postMessage('CloseCustomActionDialogNoRefresh', '*');
}

 

Final Thoughts

Leveraging custom actions with the SharePoint dialog makes an app feel even more integrated than with app parts.  They provide contextual integration and are the only mechanism for pushing a change in UI on the host web (remember an app part is optionally added by a user).  Below are links to a code sample and the Image Encoder tool mentioned above:

App for SharePoint with dialogs:  SharePointDialogs.zip

Image Encoder Tool: ImageEncoder.zip

Corporate YouTube and Video Delivery via SharePoint 2013


Want to deliver an internal/corporate “YouTube” for your organization using SharePoint?  Looking to maximize your SharePoint deployment by incorporating video/media delivery?  Worried about the storage/bandwidth implications of allowing anyone in the enterprise to contribute video/media?  Then this post is for you!  I will outline a solution that addresses many of the limitations of native media delivery in SharePoint 2013.  The solution outlined in this post is also illustrated in the following video:

(Please visit the site to view this video)

 

Steve Fox wrote a great post a few months back on SharePoint 2013 and Windows Azure Media Services.  In it, Steve illustrated the use of Azure Media Services to deliver media within an app for SharePoint.  Steve introduced some powerful concepts that I want to take to the next level.  I envision a complete solution that allows users to contribute any media format, from any asset library in SharePoint, and send it through Azure Media Services for encoding and hosting (possibly around the globe using Azure’s Content Delivery Network).  But why Azure Media Services?  To understand why, it’s important to consider the video capabilities native to SharePoint 2013, their limitations, and past case studies on SharePoint as a media platform.

Video and SharePoint

Out of the box, SharePoint provides very basic video delivery capabilities.  Asset libraries allow users to upload videos, which are stored as SQL BLOBs in content databases (just like any other file in SharePoint).  Videos tend to be large, which is both inefficient for BLOB storage and quickly grows content databases to the maximum recommended capacity (200GB).  Videos are also subject to the maximum file upload size in SharePoint, which is 50MB by default with a hard limit of 2GB.  Videos that are uploaded into SharePoint can be consumed by “progressive download” in the exact same format and quality as the contributor uploaded.  This means that a 1080P .wmv uploaded to SharePoint will only be consumable as a 1080P .wmv file.  Progressive download also lacks intelligent fast-forward capabilities.  If a user only cares about the end of a long video, they have to download the entire video to get to the end.  Many enterprises with SharePoint (including Microsoft) have implemented solutions aimed at addressing these challenges, including Remote Blob Storage (RBS), Blob Caching, Branch Caching, Bitrate Throttling, Encoding/Transcoding, DRM, and many more.

Academy is Microsoft’s internal social video platform built on SharePoint.  Academy sets a high standard for media delivery maturity, with customized media uploads, geographically distributed streaming, adjustable quality by device/connection, and advanced encoding workflows to accept almost any media format.  Academy is impressive, but represents years of effort, refinement, and lessons learned from digital asset and web content management.  Most organizations would find it very challenging to close the gap between media delivery native to SharePoint and what Microsoft has built internally with Academy…until now!

Academy (Microsoft's internal social video platform built on SharePoint)

The landscape has changed significantly since Academy was first deployed at Microsoft several years ago.  Now, Windows Azure can provide the platform for media storage (Blob Storage), encoding/streaming (Media Services), and global distribution (Azure CDN) with very little CAPEX.  Additionally, the new SharePoint app model can help deliver these Azure capabilities, providing a highly customized media delivery experience to any SharePoint farm…even in SharePoint Online/Office 365.

NOTE: although advanced capabilities like encoding and remote storage aren’t native to SharePoint 2013, Microsoft did incorporate a number of media delivery patterns from Academy.  For example, Microsoft recognized that media delivery is much more than just a video…it includes social discussions, analytics, and supporting documentation (ex: slide deck, code, etc).  As such, the Document Set or “video set” is the container for videos in SharePoint 2013.  Downloads/Podcasting, thumbnail generation, and embed codes are other capabilities carried forward from Academy.

 

The Solution

Our app will exploit the best of both SharePoint 2013 and Windows Azure for media contribution, consumption, and management.  These capabilities will be delivered through a number of pages in our app and ribbon buttons in the host site.  I have detailed each of these solution components below.  The solution also leverages a SQL Azure database to keep track of videos sent through the app and app preferences/settings.

Upload.aspx

Upload.aspx (Empty)
Upload.aspx (Processing)

 

We want media contributors to have a familiar experience contributing videos through our app.  Since videos are traditionally uploaded through the ribbon on asset libraries, our solution will leverage the same ribbon to launch the upload page for our app.  The asset library ribbon button will launch the upload page inside the SharePoint dialog.  You can see this ribbon button and the upload dialog in the two screenshots above and the custom action xml to make it happen below:

Custom action for adding ribbon button to asset libraries
<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <CustomAction Id="a76f4430-c8b1-4317-b673-260429ca6dc1.UploadToAzureMediaSvc"
                RegistrationType="List"
                RegistrationId="851"
                Location="CommandUI.Ribbon"
                Sequence="10001"
                Title="Upload to Azure Media Services"
                HostWebDialog="true"
                HostWebDialogHeight="420"
                HostWebDialogWidth="510">
    <CommandUIExtension>
      <CommandUIDefinitions>
        <CommandUIDefinition Location="Ribbon.Documents.New.Controls._children">
          <Button Id="Ribbon.Documents.New.UploadToAzureMediaSvcButton"
                  Alt="Upload to Media Services"
                  Sequence="1"
                  Command="Invoke_UploadToAzureMediaSvcButtonRequest"
                  LabelText="Upload to Media Services"
                  TemplateAlias="o1"
                  Image32by32="data:image/png;base64, iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAABGdBTUEAAK..."
                  Image16by16="data:image/png;base64, iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAABGdBTUEAAK..." />
        </CommandUIDefinition>
      </CommandUIDefinitions>
      <CommandUIHandlers>
        <CommandUIHandler Command="Invoke_UploadToAzureMediaSvcButtonRequest"
                          CommandAction="~remoteAppUrl/Pages/Upload.aspx?{StandardTokens}&amp;HostUrl={HostUrl}&amp;SiteUrl={SiteUrl}&amp;Source={Source}&amp;ListURLDir={ListUrlDir}&amp;SelectedListID={SelectedListId}&amp;DisplayType=iframe"/>
      </CommandUIHandlers>
    </CommandUIExtension>
  </CustomAction>
</Elements>

 

The upload page is where most of the magic happens, allowing users to send videos to Azure Media Services for encoding, thumbnail generation, and blob storage.  Uploading large videos, sending them to the cloud, and encoding could take significant time.  Most users won’t want to sit around waiting for this to complete.  For the proof of concept (POC), many of these processes are executed on a separate thread so the upload page can respond once all the data posts.  Ideally, we would leverage an Azure Worker Role for this long-running process (which is similar to a Windows Service).  However, our solution is implemented as an autohosted app for SharePoint for simplicity in deployment/tenancy, and threading was easier to implement in that model for this POC.

Upload event for starting Azure Encoding thread

protected void btnOk_Click(object sender, EventArgs e)
{
    //get file bytes from the hdnImageBytes or the fileselect control
    byte[] mediaBytes = null;
    string fileName = "";
    if (!String.IsNullOrEmpty(Page.Request["hdnImageBytes"]))
    {
        //get media from hdnImageBytes
        var base64MediaString = Page.Request["hdnImageBytes"];
        fileName = Page.Request["hdnFileName"];
        base64MediaString = base64MediaString.Substring(base64MediaString.IndexOf(',') + 1);
        mediaBytes = Convert.FromBase64String(base64MediaString);
    }
    if (mediaBytes != null && mediaBytes.Length > 0)
    {
        //add database record for the media
        var spContext = Util.ContextUtil.Current;
        AzureMediaServicesJob amsJob = new AzureMediaServicesJob(ContextUtil.Current.ContextDetails);
        using (var clientContext = TokenHelper.GetClientContextWithContextToken(spContext.ContextDetails.HostWebUrl, spContext.ContextDetails.ContextTokenString, Request.Url.Authority))
        {
            //get user information
            Web web = clientContext.Web;
            User currentUser = web.CurrentUser;
            clientContext.Load(currentUser);
            clientContext.ExecuteQuery();

            using (AzureMediaModel model = new AzureMediaModel(ConnectionUtil.GetEntityConnectionString(clientContext)))
            {
                //create the record
                Media newMedia = new Media();
                newMedia.Title = txtTitle.Text;
                newMedia.StatusId = 1;
                newMedia.AuthorLoginName = currentUser.LoginName;
                newMedia.AuthorEmail = currentUser.Email;
                model.Media.AddObject(newMedia);
                model.SaveChanges();
                amsJob.ItemID = newMedia.Id;

                //get default settings
                amsJob.ListUrl = Page.Request["ListUrlDir"];
                amsJob.ListID = new Guid(Page.Request["SelectedListId"]);
                amsJob.MediaBytes = mediaBytes;
                amsJob.MediaFileName = fileName;
                amsJob.IOPath = Server.MapPath("~");
            }
        }
               
        //set the itemID back on the form so it can check processing
        ScriptManager.RegisterStartupScript(updatePanel, updatePanel.GetType(), "checkStatus", String.Format("checkMediaStatus({0});", amsJob.ItemID), true);

        //Start a new thread to perform the Media encoding and document set creation
        Thread thread = new Thread(ProcessMediaUtil.UploadMedia);
        thread.Name = String.Format("EncodingTask{0}", amsJob.ItemID.ToString());
        thread.Start(amsJob);
    }
    else
    {
        //TODO: notify user no file provided
    }
}

 

The solution leverages a ProcessMediaUtil class to upload media to Azure Blob Storage, encode the videos to a common format, publish the encoded media, and generate thumbnails.

UploadMedia method on ProcessMediaUtil class

public static void UploadMedia(object azureMediaServicesJob)
{
    AzureMediaServicesJob amsJob = (AzureMediaServicesJob)azureMediaServicesJob;

    //add new listItem
    using (var clientContext = TokenHelper.GetClientContextWithContextToken(amsJob.ContextDetails.HostWebUrl, amsJob.ContextDetails.ContextTokenString, amsJob.ContextDetails.ServerUrl))
    {
        using (AzureMediaModel model = new AzureMediaModel(Util.ConnectionUtil.GetEntityConnectionString(clientContext)))
        {
            try
            {
                //get settings from database
                Settings appSettings = model.Settings.FirstOrDefault();

                // Initialize the Azure account information
                string connString = String.Format("DefaultEndpointsProtocol={0};AccountName={1};AccountKey={2}",
                    "https", appSettings.StorageAccountName, appSettings.StorageAccountKey);
                CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse(connString);
                mediaContext = new CloudMediaContext(appSettings.MediaAccountName, appSettings.MediaAccountKey);
                CloudBlobClient blobClient = cloudStorageAccount.CreateCloudBlobClient();

                //upload the asset to blob storage and publish
                var asset = UploadBlob(blobClient, amsJob.MediaFileName, amsJob.MediaBytes);
                string url = PublishAsset(asset, amsJob.MediaFileName);
                mediaContext = new CloudMediaContext(appSettings.MediaAccountName, appSettings.MediaAccountKey);

                //update the status
                Media mediaItem = model.Media.FirstOrDefault(i => i.Id == amsJob.ItemID);
                mediaItem.StatusId = 2;
                model.SaveChanges();

                //Encode the asset
                IJob job = EncodeAsset(asset, appSettings, amsJob.MediaFileName);

                //refresh the context and publish the encoded asset
                mediaContext = new CloudMediaContext(appSettings.MediaAccountName, appSettings.MediaAccountKey);
                job = mediaContext.Jobs.Where(j => j.Id == job.Id).FirstOrDefault();
                var encodingTask = job.Tasks.Where(t => t.Name == "Encoding").FirstOrDefault();
                var encodedAsset = encodingTask.OutputAssets.FirstOrDefault();
                url = String.Format(PublishAsset(encodedAsset, "SmoothStream-" + asset.Name), amsJob.MediaFileName);

                //update the database record with the correct streaming url and the status
                mediaItem.MediaSvcUrl = url;
                mediaItem.StatusId = 3;
                model.SaveChanges();

                //download the thumbnail bytes and publish video set
                byte[] thumbBytes = GetThumbnailBytes(blobClient, job);
                string targetDocSetUrl = PublishVideoSet(clientContext, amsJob, mediaItem, thumbBytes);

                //send email confirmation
                EmailProperties email = new EmailProperties();
                email.To = new List<String>() { mediaItem.AuthorEmail };
                email.Subject = String.Format("Your video \"{0}\" is ready!", mediaItem.Title);
                email.Body = String.Format("<html><body><p>Your video \"{0}\" has finished processing and can be viewed at the following address:</p><p><a href=\"{1}\">{1}</a></p></body></html>", mediaItem.Title, targetDocSetUrl);
                Utility.SendEmail(clientContext, email);
                clientContext.ExecuteQuery();

                //update status...hard-code aspect ratio for now
                mediaItem.StatusId = 4;
                mediaItem.Width = 700;
                mediaItem.Height = 393;
                mediaItem.SharePointUrl = targetDocSetUrl;
                model.SaveChanges();
            }
            catch (Exception ex)
            {
                //update the status
                Media mediaItem = model.Media.FirstOrDefault(i => i.Id == amsJob.ItemID);
                mediaItem.StatusId = 5;
                mediaItem.ErrorMessage = ex.Message;
                model.SaveChanges();
            }
        }
    }
}

 

Azure utility methods in ProcessMediaUtil class

private static IAsset UploadBlob(CloudBlobClient blobClient, string publishedName, byte[] fileBytes)
{
    var asset = mediaContext.Assets.Create(publishedName, AssetCreationOptions.None);
    var writePolicy = mediaContext.AccessPolicies.Create("policy for copying", TimeSpan.FromMinutes(30), AccessPermissions.Write | AccessPermissions.List);
    var destination = mediaContext.Locators.CreateSasLocator(asset, writePolicy, DateTime.UtcNow.AddMinutes(-5));
    var destContainer = blobClient.GetContainerReference(new Uri(destination.Path).Segments[1]);
    var destBlob = destContainer.GetBlockBlobReference(publishedName);
    destBlob.UploadByteArray(fileBytes);
    destBlob.Properties.ContentType = "video/mp4";
    destBlob.SetProperties();
    return asset;
}

private static string PublishAsset(IAsset asset, string publishedName)
{
    var assetFile = asset.AssetFiles.Create(publishedName);
    assetFile.Update();
    asset = mediaContext.Assets.Where(a => a.Id == asset.Id).FirstOrDefault();
    var readPolicy = mediaContext.AccessPolicies.Create("policy for access", TimeSpan.FromDays(365 * 3), AccessPermissions.Read | AccessPermissions.List);
    var readLocator = mediaContext.Locators.CreateSasLocator(asset, readPolicy, DateTime.UtcNow.AddMinutes(-5));
    string[] parts = readLocator.Path.Split('?');
    return parts[0] + "/{0}?" + parts[1];
}

private static IJob EncodeAsset(IAsset asset, Settings appSettings, string publishedName)
{
    var assetToEncode = mediaContext.Assets.Where(a => a.Id == asset.Id).FirstOrDefault();
    if (assetToEncode == null)
    {
        throw new ArgumentException("Could not find assetId: " + asset.Id);
    }
    IJob job = mediaContext.Jobs.Create("Encoding " + assetToEncode.Name + " to " + appSettings.EncodingOptions.DisplayName);

    //add encoding task
    IMediaProcessor latestWameMediaProcessor = (from p in mediaContext.MediaProcessors where p.Name == "Windows Azure Media Encoder" select p).ToList().OrderBy(wame => new Version(wame.Version)).LastOrDefault();
    ITask encodeTask = job.Tasks.AddNew("Encoding", latestWameMediaProcessor, appSettings.EncodingOptions.EncodingConfiguration, TaskOptions.None);
    encodeTask.InputAssets.Add(assetToEncode);
    encodeTask.OutputAssets.AddNew("SmoothStream-" + publishedName, AssetCreationOptions.None);

    //add thumbnail task
    ITask thumbTask = job.Tasks.AddNew("Generate thumbnail", latestWameMediaProcessor, "Thumbnails", TaskOptions.None);
    thumbTask.InputAssets.Add(assetToEncode);
    thumbTask.OutputAssets.AddNew("Thumb-" + assetToEncode.Name + ".jpg", AssetCreationOptions.None);

    //Submit the job and wait for it to complete
    job.StateChanged += new EventHandler<JobStateChangedEventArgs>((sender, jsc) =>
    {
        //do nothing...could change status here, but we are waiting
    });
    job.Submit();
    job.GetExecutionProgressTask(CancellationToken.None).Wait();
    return job;
}

private static byte[] GetThumbnailBytes(CloudBlobClient blobClient, IJob job)
{
    var thumbTask = job.Tasks.Where(t => t.Name == "Generate thumbnail").FirstOrDefault();
    var thumbAsset = mediaContext.Assets.Where(a => a.Id == thumbTask.OutputAssets[0].Id).FirstOrDefault();
    var thumbFile = thumbAsset.AssetFiles.FirstOrDefault();
    var writePolicy = mediaContext.AccessPolicies.Create("policy for copying", TimeSpan.FromMinutes(30), AccessPermissions.Write | AccessPermissions.List);
    var destination = mediaContext.Locators.CreateSasLocator(thumbAsset, writePolicy, DateTime.UtcNow.AddMinutes(-5));
    var destContainer = blobClient.GetContainerReference(new Uri(destination.Path).Segments[1]);
    var destBlob = destContainer.GetBlockBlobReference(thumbFile.Name);
    return destBlob.DownloadByteArray();
}

 

The ProcessMediaUtil will also create the "video set" in SharePoint once all the Azure jobs are complete.  The new video set will reference the Azure-hosted video via an embed code (which is stored in the hidden VideoSetEmbedCode column of the video set).

PublishVideoSet method on ProcessMediaUtil class

private static string PublishVideoSet(ClientContext clientContext, AzureMediaServicesJob amsJob, Media mediaItem, byte[] thumbBytes)
{
    //get the media list
    List mediaList = clientContext.Web.Lists.GetById(amsJob.ListID);
    clientContext.Load(mediaList, i => i.Fields);
    clientContext.Load(mediaList, i => i.ParentWebUrl);
    clientContext.ExecuteQuery();

    //create the Video document set
    ListItemCreationInformation itemCreateInfo = new ListItemCreationInformation();
    itemCreateInfo.UnderlyingObjectType = FileSystemObjectType.Folder;
    itemCreateInfo.LeafName = mediaItem.Title;
    Microsoft.SharePoint.Client.ListItem newMediaItem = mediaList.AddItem(itemCreateInfo);
    newMediaItem["Title"] = mediaItem.Title;
    newMediaItem["ContentTypeId"] = "0x0120D520A80800E9538ABD5B77E14096B2460EC920FD5E";

    //hard-code the video aspect ratio for now
    newMediaItem["VideoSetEmbedCode"] = String.Format("<iframe width='700' height='400' src='{0}/Pages/Player.aspx?Item={1}&SPHostUrl={2}' frameborder='0' style='overflow: hidden' allowfullscreen></iframe>",
        String.Format("https://{0}", amsJob.ContextDetails.ServerUrl), amsJob.ItemID.ToString(), amsJob.ContextDetails.HostWebUrl);
    newMediaItem.Update();
    clientContext.ExecuteQuery();

    //add subfolders for the app
    string targetDocSetUrl = amsJob.ListUrl + "/" + mediaItem.Title;
    Folder folder = clientContext.Web.GetFolderByServerRelativeUrl(targetDocSetUrl);
    clientContext.Load(folder, f => f.UniqueContentTypeOrder);
    clientContext.ExecuteQuery();

    //add the required subfolders for a Video
    var f1 = folder.Folders.Add(targetDocSetUrl + "/Additional Content");
    var f2 = folder.Folders.Add(targetDocSetUrl + "/Preview Images");
    clientContext.ExecuteQuery();

    //upload thumbnail
    FileCreationInformation fileInfo = new FileCreationInformation();
    fileInfo.Content = thumbBytes;
    fileInfo.Url = amsJob.ListUrl + "/" + mediaItem.Title + "/Preview Images/" + amsJob.MediaFileName + "_thumb.jpg";
    Microsoft.SharePoint.Client.File previewImg = f2.Files.Add(fileInfo);
    clientContext.Load(previewImg, i => i.ServerRelativeUrl);
    clientContext.ExecuteQuery();
    newMediaItem["AlternateThumbnailUrl"] = new Microsoft.SharePoint.Client.FieldUrlValue() { Url = previewImg.ServerRelativeUrl };
    newMediaItem.Update();
    clientContext.ExecuteQuery();

    return amsJob.ContextDetails.HostWebUrl + "/" + targetDocSetUrl;
}

 

Our solution will provide the user with status updates as the Azure job(s) process.  To do this, our app will host a REST/JSON status service.  This service is secured by the same context token our app leverages and can be called periodically from client-side script on our page.

StatusService for checking encoding status via REST/JSON

namespace AzureMediaManagerWeb.Services
{
    [ServiceContract]
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    public class StatusService
    {
        [OperationContract]
        [WebInvoke(Method = "POST", BodyStyle = WebMessageBodyStyle.WrappedRequest, RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
        public int GetStatus(int itemID, string contextToken, string hostWebUrl, string authority)
        {
            int statusID = 0;
            using (var clientContext = TokenHelper.GetClientContextWithContextToken(hostWebUrl, contextToken, authority))
            {
                using (AzureMediaModel model = new AzureMediaModel(Util.ConnectionUtil.GetEntityConnectionString(clientContext)))
                {
                    statusID = model.Media.FirstOrDefault(i => i.Id == itemID).MediaStatus.Id;
                }
            }

            return statusID;
        }
    }
}

 

jQuery script for calling StatusService

function checkMediaStatus(id) {
    var json = JSON.parse(getCookie('SPContext'));
    var data = {
        itemID: id,
        contextToken: json.ContextTokenString,
        hostWebUrl: json.HostWebUrl,
        authority: json.ServerUrl
    };
    $.ajax({
        cache: false,
        url: '../Services/StatusService.svc/GetStatus',
        data: JSON.stringify(data),
        dataType: 'json',
        type: 'POST',
        contentType: 'application/json; charset=utf-8',
        success: function (result) {
            $('#hdnStatus').val(result.d);
            switch (result.d) {
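                //note: the missing breaks below are intentional...each completed status falls through to light up the steps beneath it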
                case 4:
                    $('#imgStatus4').css('display', 'block');
                    $('#divStatus4').find('.divWaitingDots').css('display', 'none');
                    $('#btnOk').removeAttr('disabled');
                    $('#btnOk').click(function () {
                        closeParentDialog(true);
                        return false;
                    });
                case 3:
                    $('#imgStatus3').css('display', 'block');
                    $('#divStatus3').find('.divWaitingDots').css('display', 'none');
                case 2:
                    $('#imgStatus2').css('display', 'block');
                    $('#divStatus2').find('.divWaitingDots').css('display', 'none');
                case 1:
                    $('#imgStatus1').css('display', 'block');
                    $('#divStatus5').css('display', 'block');
                    $('#divStatus1').find('.divWaitingDots').css('display', 'none');
                    break;
            }

            //recursively call self
            if (result.d != 4)
                setTimeout(checkMediaStatus, 10000, id);
        },
        error: function (result) {
            var status = $('#hdnStatus').val();
            for (var i = 4; i > parseInt(status) ; i--) {
                $('#imgStatus' + i).attr('src', '../images/fail.png');
                $('#imgStatus' + i).css('display', 'block');
                $('#divStatus' + i).find('.divWaitingDots').css('display', 'none');
            }
        }
    });
}

 

Player.aspx

Player.aspx embedded in SharePoint video set page

In SharePoint 2013, videos can be contributed as a file, URL, or embed code (IFRAME to video).  You might be curious why the solution needs a player page instead of just providing SharePoint with a URL to the video hosted in Azure Media Services.  Unfortunately, SharePoint will not accept a parameterized video URL (ex: MyVideo.mp4?contextToken=xzy123) such as the one Azure Media Services will provide.  Even if it did, our solution might want to leverage a different media player that supports smooth streaming or advanced video analytics (ex: how long the user watched the video).  Instead, we will leverage a video embed code, which is ultimately an IFRAME pointing to a page hosting the video.  This is the same way we would reference a YouTube video in a SharePoint asset library.  Azure Media Services does not provide a player page, so our app will deliver it.  In the screenshot above, you see the typical video set page displayed in SharePoint…the Player.aspx app page is displayed in an IFRAME that is the same size as the video.  To make the player page dynamic, our embed codes will always pass a video id to the player page via URL parameter.  Our player page will use this video id to look up the streaming URL stored in the app’s SQL Azure database.  We will also pass the host web URL in case the player page needs to get a context token from SharePoint.

Example of YouTube embed code markup
<iframe width="560" height="315" src="http://www.youtube.com/embed/TX1W8bwvVlw" frameborder="0" allowfullscreen></iframe>

 

Example of embed code markup for the solution
<iframe width="700" height="400" src="https://someapp.o365apps.net/Pages/Player.aspx?Item=342&SPHostUrl=http://MyHostWebUrl" frameborder="0" style="overflow: hidden" allowfullscreen></iframe>

 

PageLoad of the Player.aspx to output a video element
public partial class Player : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        if (!String.IsNullOrEmpty(Page.Request["Item"]))
        {
            var spContext = Util.ContextUtil.Current;
            using (var clientContext = TokenHelper.GetClientContextWithContextToken(spContext.ContextDetails.HostWebUrl, spContext.ContextDetails.ContextTokenString, Request.Url.Authority))
            {
                using (AzureMediaModel model = new AzureMediaModel(ConnectionUtil.GetEntityConnectionString(clientContext)))
                {
                    //get item with the specified ID
                    int itemID = Convert.ToInt32(Page.Request["Item"]);
                    Media mediaItem = model.Media.SingleOrDefault(i => i.Id == itemID);
                    if (mediaItem != null)
                    {
                        string mediaMarkup = @"<video id='myVideo' class='pf-video' height='{0}' width='{1}' controls='controls'><source src='{2}' type='video/mp4;codecs=/""avc1.42E01E, mp4a.40.2/""' /></video>";
                        divVideo.Controls.Add(new LiteralControl(String.Format(mediaMarkup, mediaItem.Height.ToString(), mediaItem.Width.ToString(), mediaItem.MediaSvcUrl)));
                    }
                }
            }
        }
    }
}

 

The result is a video set in SharePoint that looks exactly like any other.  With so many moving parts, I’ve provided a diagram to clear any confusion:

Diagram of the player page and embed code logic

Default.aspx

Default.aspx page to display media processed by app

With videos being stored in Azure and the video set living in SharePoint, there exists the potential for orphaned items (“video sets” without corresponding videos and videos without corresponding “video sets”).  The default.aspx page allows users to identify potential orphan items or errors that may have occurred during processing.  It is the default page when a user enters the full-screen view of the app.  Nothing fancy here…just a GridView that displays information from our app database.
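
For reference, here is a rough sketch of what that page-behind could look like, binding a GridView to the Media records (the gridMedia control name is my own assumption; the context plumbing mirrors the other pages):

protected void Page_Load(object sender, EventArgs e)
{
    var spContext = Util.ContextUtil.Current;
    using (var clientContext = TokenHelper.GetClientContextWithContextToken(spContext.ContextDetails.HostWebUrl, spContext.ContextDetails.ContextTokenString, Request.Url.Authority))
    {
        using (AzureMediaModel model = new AzureMediaModel(ConnectionUtil.GetEntityConnectionString(clientContext)))
        {
            //bind all media records (status, URLs, errors) to the grid for review
            gridMedia.DataSource = model.Media.ToList();
            gridMedia.DataBind();
        }
    }
}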

Settings.aspx

Settings.aspx for configuring app settings

In order for our app to integrate with Azure, it will need an account name and access key for Azure Blob Storage and Azure Media Services.  The settings page will allow an app administrator to configure these account settings, which will be stored in the app’s database.  The settings page will also allow an app administrator to specify additional app administrators and select the output format(s) for encoding.  Nothing interesting here, other than the multi-user people picker control.

Multi-user people picker for setting app admins

The settings page is security trimmed to app administrators.  Because we need at least one app administrator, the app will make use of the app installed remote event to capture the user that installs the app.

Handle App Installed in app project properties

App installed remote event to capture installer as administrator

public SPRemoteEventResult ProcessEvent(SPRemoteEventProperties properties)
{
    SPRemoteEventResult result = new SPRemoteEventResult();
    using (ClientContext clientContext = TokenHelper.CreateAppEventClientContext(properties, false))
    {
        if (clientContext != null)
        {
            //get the current user and seed the database with his information
            PeopleManager pm = new PeopleManager(clientContext);
            var props = pm.GetMyProperties();
            clientContext.Load(props);
            clientContext.ExecuteQuery();

            using (AzureMediaModel model = new AzureMediaModel(ConnectionUtil.GetEntityConnectionString(clientContext)))
            {
                Administrators primaryAdmin = new AzureMediaManagerWeb.Administrators();
                primaryAdmin.LoginName = props.AccountName;
                primaryAdmin.DisplayName = props.DisplayName;
                model.Administrators.AddObject(primaryAdmin);
                model.SaveChanges();
            }
        }
    }

    return result;
}

 

Final Thoughts

This solution might seem like a lot of work.  However, it addresses most of the serious limitations in native SharePoint video delivery that will only become more obvious as video contribution increases in a farm.  I have provided the code for the solution in the link below.  This is NOT a production ready solution.  However, it could be the start for taking media to the next level in your organization!

App for SharePoint Solution: AzureMediaManager.zip

NOTE: if you download the solution and try to debug, Internet Explorer will not let you use the drag-drop upload in the debugging browser.  This is because the debug browser runs elevated (due to running Visual Studio as an administrator) while Windows Explorer does NOT run elevated.  IE sees this as a security threat.  The solution is to open a new browser window while debugging for uploads.

Cross-site publishing alternatives in SharePoint Online/Office 365


Cross-site publishing is one of the powerful new capabilities in SharePoint 2013.  It enables the separation of data entry from display and breaks down the container barriers that have traditionally existed in SharePoint (ex: rolling up information across site collections).  Cross-site publishing is delivered through search and a number of new features, including list/library catalogs, catalog connections, and the content search web part.  Unfortunately, SharePoint Online/Office 365 doesn’t currently support these features.  Until they are added to the service (possibly in a quarterly update), customers will be looking for alternatives to close the gap.  In this post, I will outline several alternatives for delivering cross-site and search-driven content in SharePoint Online and how to template these views for reuse.  Here is a video that outlines the solution:

(Please visit the site to view this video)

 

NOTE: I’m a huge proponent of SharePoint Online.  After visiting several Microsoft data centers, I feel confident that Microsoft is better positioned to run SharePoint infrastructure than almost any organization in the world.  SharePoint Online has very close feature parity to SharePoint on-premise, with the primary gaps existing in cross-site publishing and advanced business intelligence.  Although these capabilities have acceptable alternatives in the cloud (as will be outlined in this post), organizations looking to maximize the cloud might consider SharePoint running in IaaS for immediate access to these features.

 

Apps for SharePoint

The new SharePoint app model is fully supported in SharePoint Online and can be used to deliver customizations to SharePoint using any web technology.  New SharePoint APIs can be used with the app model to deliver an experience similar to cross-site publishing.  In fact, the content search web part could be re-written for delivery through the app model as an “App Part” for SharePoint Online. 
Although the app model provides great flexibility and reuse, it does come with some drawbacks.  Because an app part is delivered through a glorified IFRAME, it would be challenging to navigate to a new page from within the app part.  A link within the app would only navigate within the IFRAME (not the parent of the IFRAME).  Secondly, there isn’t a great mechanism for templating a site to automatically leverage an app part on its page(s).  Apps do not work with site templates, so a site that contains an app cannot be saved as a template.  Apps can be “stapled” to sites, but the app installed event (which would be needed to add the app part to a page) only fires when the app is installed into the app catalog.
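That said, a link inside an app part can break out of the IFRAME with a little script.  Below is a minimal sketch (the helper names are mine) that reads the SPHostUrl token off the app part page’s query string and navigates the parent window, which browsers allow even across domains:

Sketch to navigate the parent window from an app part

//get a parameter from the query string (SPHostUrl is passed via {StandardTokens})
function getQueryStringParameter(param) {
    var params = document.URL.split("?")[1].split("&");
    for (var i = 0; i < params.length; i++) {
        var pair = params[i].split("=");
        if (pair[0] === param)
            return decodeURIComponent(pair[1]);
    }
}

//navigate the page hosting the app part instead of the IFRAME
function navigateHost(relativeUrl) {
    window.top.location.href = getQueryStringParameter("SPHostUrl") + relativeUrl;
}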

REST APIs and Script Editor

The script editor web part is a powerful new tool that can help deliver flexible customization into SharePoint Online.  The script editor web part allows a block of client-side script to be added to any wiki or web part page in a site.  Combined with the new SharePoint REST APIs, the script editor web part can deliver mash-ups very similar to cross-site publishing and the content search web part.  Unlike apps for SharePoint, the script editor isn’t constrained by IFRAME containers, app permissions, or templating limitations.  In fact, a well-configured script editor web part could be exported and re-imported into the web part gallery for reuse.

Cross-site publishing leverages “catalogs” for precise querying of specific content.  Any List/Library can be designated as a catalog.  By making this designation, SharePoint will automatically create managed properties for columns of the List/Library and ultimately generate a search result source in sites that consume the catalog.  Although SharePoint Online doesn’t support catalogs, it supports the building blocks, such as managed properties and result sources.  These can be manually configured to provide the same precise querying in SharePoint Online and exploited in the script editor web part for display.

Calling Search REST APIs
<div id="divContentContainer"></div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://tenant.sharepoint.com/sites/somesite/_api/";
        $.ajax({
            url: basePath + "search/query?Querytext='ContentType:News'",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //script to build UI HERE
            },
            error: function (data) {
                //output error HERE
            }
        });
    });
</script>
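The search REST response buries results fairly deep in the verbose OData payload.  Here is a sketch of what the “build UI” placeholder above might do with it:

Sketch to unwrap search results in the success handler

//each row carries an array of {Key, Value} cells for the managed properties
var rows = data.d.query.PrimaryQueryResult.RelevantResults.Table.Rows.results;
for (var i = 0; i < rows.length; i++) {
    var cells = rows[i].Cells.results;
    for (var j = 0; j < cells.length; j++) {
        if (cells[j].Key === "Title")
            $("#divContentContainer").append("<div>" + cells[j].Value + "</div>");
    }
}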

 

An easier approach might be to directly reference a list/library in the REST call of our client-side script.  This wouldn’t require manual search configuration and would provide real-time publishing (no waiting for new items to get indexed).  You could think of this approach as a content by query web part that reaches across site collections (possibly even farms), and the REST API makes it all possible!

List REST APIs
<div id="divContentContainer"></div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://tenant.sharepoint.com/sites/somesite/_api/";
        $.ajax({
            url: basePath + "web/lists/GetByTitle('News')/items/?$select=Title&$filter=Feature eq 0",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //script to build UI HERE
            },
            error: function (data) {
                //output error HERE
            }
        });
    });
</script>

 

The content search web part uses display templates to render search results in different arrangements (ex: list with images, image carousel, etc).  There are two types of display templates the content search web part leverages…the control template, which renders the container around the items, and the item template, which renders each individual item in the search results.  This is very similar to the way a Repeater control works in ASP.NET.  Display templates are authored using HTML, but are converted to client-side script automatically by SharePoint for rendering.  I mention this because our approach is very similar…we will leverage a container and then loop through and render items in script.  In fact, all the examples in this post were converted from display templates in a public site I’m working on. 

Item display template for content search web part

<!--#_
var encodedId = $htmlEncode(ctx.ClientControl.get_nextUniqueId() + "_ImageTitle_");
var rem = index % 3;
var even = true;
if (rem == 1)
    even = false;

var pictureURL = $getItemValue(ctx, "Picture URL");
var pictureId = encodedId + "picture";
var pictureMarkup = Srch.ContentBySearch.getPictureMarkup(pictureURL, 140, 90, ctx.CurrentItem, "mtcImg140", line1, pictureId);
var pictureLinkId = encodedId + "pictureLink";
var pictureContainerId = encodedId + "pictureContainer";
var dataContainerId = encodedId + "dataContainer";
var dataContainerOverlayId = encodedId + "dataContainerOverlay";
var line1LinkId = encodedId + "line1Link";
var line1Id = encodedId + "line1";
 _#-->
<div style="width: 320px; float: left; display: table; margin-bottom: 10px; margin-top: 5px;">
   <a href="_#= linkURL =#_">
      <div style="float: left; width: 140px; padding-right: 10px;">
         <img src="_#= pictureURL =#_" class="mtcImg140" style="width: 140px;" />
      </div>
      <div style="float: left; width: 170px">
         <div class="mtcProfileHeader mtcProfileHeaderP">_#= line1 =#_</div>
      </div>
   </a>
</div>

 

Script equivalent

<div id="divUnfeaturedNews"></div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://richdizzcom.sharepoint.com/sites/dallasmtcauth/_api/";
        $.ajax({
            url: basePath + "web/lists/GetByTitle('News')/items/?$select=Title&$filter=Feature eq 0",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                //get the details for each item
                var listData = data.d.results;
                var itemCount = listData.length;
                var processedCount = 0;
                var ul = $("<ul style='list-style-type: none; padding-left: 0px;' class='cbs-List'>");
                for (i = 0; i < listData.length; i++) {
                    $.ajax({
                        url: listData[i].__metadata["uri"] + "/FieldValuesAsHtml",
                        type: "GET",
                        headers: { "Accept": "application/json;odata=verbose" },
                        success: function (data) {
                            processedCount++;
                            var htmlStr = "<li style='display: inline;'><div style='width: 320px; float: left; display: table; margin-bottom: 10px; margin-top: 5px;'>";
                            htmlStr += "<a href='#'>";
                            htmlStr += "<div style='float: left; width: 140px; padding-right: 10px;'>";
                            htmlStr += setImageWidth(data.d.PublishingRollupImage, '140');
                            htmlStr += "</div>";
                            htmlStr += "<div style='float: left; width: 170px'>";
                            htmlStr += "<div class='mtcProfileHeader mtcProfileHeaderP'>" + data.d.Title + "</div>";
                            htmlStr += "</div></a></div></li>";
                            ul.append($(htmlStr));
                            if (processedCount == itemCount) {
                                $("#divUnfeaturedNews").append(ul);
                            }
                        },
                        error: function (data) {
                            alert(data.statusText);
                        }
                    });
                }
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    });

    function setImageWidth(imgString, width) {
        var img = $(imgString);
        img.css('width', width);
        return img[0].outerHTML;
    }
</script>

 

Even one of the more complex carousel views from my site took less than 30 minutes to convert to the script editor approach.

Advanced carousel script

<div id="divFeaturedNews">
    <div class="mtc-Slideshow" id="divSlideShow" style="width: 610px;">
        <div style="width: 100%; float: left;">
            <div id="divSlideShowSection">
                <div style="width: 100%;">
                    <div class="mtc-SlideshowItems" id="divSlideShowSectionContainer" style="width: 610px; height: 275px; float: left; border-style: none; overflow: hidden; position: relative;">
                        <div id="divFeaturedNewsItemContainer">
                        </div>
                    </div>
                </div>
            </div>
        </div>
    </div>
</div>
<script type="text/javascript">
    $(document).ready(function ($) {
        var basePath = "https://richdizzcom.sharepoint.com/sites/dallasmtcauth/_api/";
        $.ajax({
            url: basePath + "web/lists/GetByTitle('News')/items/?$select=Title&$filter=Feature eq 1&$top=4",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                var listData = data.d.results;
                for (i = 0; i < listData.length; i++) {
                    getItemDetails(listData, i, listData.length);
                }
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    });
    var processCount = 0;
    function getItemDetails(listData, i, count) {
        $.ajax({
            url: listData[i].__metadata["uri"] + "/FieldValuesAsHtml",
            type: "GET",
            headers: { "Accept": "application/json;odata=verbose" },
            success: function (data) {
                processCount++;
                var itemHtml = "<div class='mtcItems' id='divPic_" + i + "' style='width: 610px; height: 275px; float: left; position: absolute; border-bottom: 1px dotted #ababab; z-index: 1; left: 0px;'>"
                itemHtml += "<div id='container_" + i + "' style='width: 610px; height: 275px; float: left;'>";
                itemHtml += "<a href='#' title='" + data.d.Caption_x005f_x0020_x005f_Title + "' style='width: 610px; height: 275px;'>";
                itemHtml += data.d.Feature_x005f_x0020_x005f_Image;
                itemHtml += "</a></div></div>";
                itemHtml += "<div class='titleContainerClass' id='divTitle_" + i + "' data-originalidx='" + i + "' data-currentidx='" + i + "' style='height: 25px; z-index: 2; position: absolute; background-color: rgba(255, 255, 255, 0.8); cursor: pointer; padding-right: 10px; margin: 0px; padding-left: 10px; margin-top: 4px; color: #000; font-size: 18px;' onclick='changeSlide(this);'>";
                itemHtml += data.d.Caption_x005f_x0020_x005f_Title;
                itemHtml += "<span id='currentSpan_" + i + "' style='display: none; font-size: 16px;'>" + data.d.Caption_x005f_x0020_x005f_Body + "</span></div>";
                $('#divFeaturedNewsItemContainer').append(itemHtml);

                if (processCount == count) {
                    allItemsLoaded();
                }
            },
            error: function (data) {
                alert(data.statusText);
            }
        });
    }
    window.mtc_init = function (controlDiv) {
        var slideItems = controlDiv.children;
        for (var i = 0; i < slideItems.length; i++) {
            if (i > 0) {
                slideItems[i].style.left = '610px';
            }
        };
    };

    function allItemsLoaded() {
        var slideshows = document.querySelectorAll(".mtc-SlideshowItems");
        for (var i = 0; i < slideshows.length; i++) {
            mtc_init(slideshows[i].children[0]);
        }

        var div = $('#divTitle_0');
        cssTitle(div, true);
        var top = 160;
        for (i = 1; i < 4; i++) {
            var divx = $('#divTitle_' + i);
            cssTitle(divx, false);
            divx.css('top', top);
            top += 35;
        }
    }

    function cssTitle(div, selected) {
        if (selected) {
            div.css('height', 'auto');
            div.css('width', '300px');
            div.css('top', '10px');
            div.css('left', '0px');
            div.css('font-size', '26px');
            div.css('padding-top', '5px');
            div.css('padding-bottom', '5px');
            div.find('span').css('display', 'block');
        }
        else {
            div.css('height', '25px');
            div.css('width', 'auto');
            div.css('left', '0px');
            div.css('font-size', '18px');
            div.css('padding-top', '0px');
            div.css('padding-bottom', '0px');
            div.find('span').css('display', 'none');
        }
    }

    window.changeSlide = function (item) {
        //get all title containers
        var listItems = document.querySelectorAll('.titleContainerClass');
        var currentIndexVals = { 0: null, 1: null, 2: null, 3: null };
        var newIndexVals = { 0: null, 1: null, 2: null, 3: null };

        for (var i = 0; i < listItems.length; i++) {
            //current Index
            currentIndexVals[i] = parseInt(listItems[i].getAttribute('data-currentidx'));
        }

        var selectedIndex = 0; //selected Index will always be 0
        var leftOffset = '';
        var originalSelectedIndex = '';

        var nextSelected = '';
        var originalNextIndex = '';

        if (item == null) {
            var item0 = document.querySelector('[data-currentidx="' + currentIndexVals[0] + '"]');
            originalSelectedIndex = parseInt(item0.getAttribute('data-originalidx'));
            originalNextIndex = originalSelectedIndex + 1;
            nextSelected = currentIndexVals[0] + 1;
        }
        else {
            nextSelected = item.getAttribute('data-currentidx');
            originalNextIndex = item.getAttribute('data-originalidx');
        }

        if (nextSelected == 0) { return; }

        for (i = 0; i < listItems.length; i++) {
            if (currentIndexVals[i] == selectedIndex) {
                //this is the selected item, so move to bottom and animate
                var div = $('[data-currentidx="0"]');
                cssTitle(div, false);
                div.css('left', '-400px');
                div.css('top', '230px');

                newIndexVals[i] = 3;
                var item0 = document.querySelector('[data-currentidx="0"]');
                originalSelectedIndex = item0.getAttribute('data-originalidx');

                //animate
                div.delay(500).animate(
                    { left: '0px' }, 500, function () {
                    });
            }
            else if (currentIndexVals[i] == nextSelected) {
                //this is the NEW selected item, so resize and slide in as selected
                var div = $('[data-currentidx="' + nextSelected + '"]');
                cssTitle(div, true);
                div.css('left', '-610px');

                newIndexVals[i] = 0;

                //animate
                div.delay(500).animate(
                    { left: '0px' }, 500, function () {
                    });
            }
            else {
                //move up in queue
                var curIdx = currentIndexVals[i];
                var div = $('[data-currentidx="' + curIdx + '"]');

                var topStr = div.css('top');
                var topInt = parseInt(topStr.substring(0, topStr.length - 1));

                if (curIdx != 1 && nextSelected == 1 || curIdx > nextSelected) {
                    topInt = topInt - 35;
                    if (curIdx - 1 == 2) { newIndexVals[i] = 2 };
                    if (curIdx - 1 == 1) { newIndexVals[i] = 1 };
                }

                //move up
                div.animate(
                    { top: topInt }, 500, function () {
                    });
            }
        };

        if (originalNextIndex < 0)
            originalNextIndex = listItems.length - 1; //wrap to the last item

        //adjust pictures
        $('#divPic_' + originalNextIndex).css('left', '610px');
        leftOffset = '-610px';

        $('#divPic_' + originalSelectedIndex).animate(
            { left: leftOffset }, 500, function () {
            });

        $('#divPic_' + originalNextIndex).animate(
            { left: '0px' }, 500, function () {
            });

        var item0 = document.querySelector('[data-currentidx="' + currentIndexVals[0] + '"]');
        var item1 = document.querySelector('[data-currentidx="' + currentIndexVals[1] + '"]');
        var item2 = document.querySelector('[data-currentidx="' + currentIndexVals[2] + '"]');
        var item3 = document.querySelector('[data-currentidx="' + currentIndexVals[3] + '"]');
        if (newIndexVals[0] != null) { item0.setAttribute('data-currentidx', newIndexVals[0]) };
        if (newIndexVals[1] != null) { item1.setAttribute('data-currentidx', newIndexVals[1]) };
        if (newIndexVals[2] != null) { item2.setAttribute('data-currentidx', newIndexVals[2]) };
        if (newIndexVals[3] != null) { item3.setAttribute('data-currentidx', newIndexVals[3]) };
    };
</script>

 

End-result of script editors in SharePoint Online

Separate authoring site collection

Final Thoughts

I hope this post helped illustrate ways to display content across traditional SharePoint boundaries without cross-site publishing and how to template those displays for reuse.  SharePoint Online might eventually get the cross-site publishing feature, but that doesn’t mean you have to wait to achieve the same result.  In fact, the script approach is so similar to display templates that it should be an easy transition to cross-site publishing in the future.  I want to give a shout-out to my colleague Nathan Miller for his assistance with this vision.


Self-Service Site Provisioning using Apps for SharePoint 2013


I’ve always been a big fan of self-service site provisioning in SharePoint.  That process is even better in SharePoint 2013 (including SharePoint Online), with the ability to specify a custom site creation form.  Owning the provisioning process is incredibly powerful for farm/tenant administrators to configure the options and outputs of site provisioning.  By leveraging the new app model, we can customize the provision process in SharePoint Online to activate specific features, capabilities, and branding to our sites.  This post will explore the process for delivering custom site creation using an app for SharePoint.  The video below demonstrates the solution outlined in this post.

(Please visit the site to view this video)

Self-Service Sites in SharePoint 2013

Self-service provisioning is improved in SharePoint 2013 by allowing an administrator to specify a custom creation form.  This form can take over the provisioning process and will be the cornerstone of the solution outlined below.  Without implementing a custom form, the default form only allows a user to specify a title.  The default output will be a Team site created as a sub-site at a specified location.  Again, if we want to capture additional details or provide different options, we must introduce a custom creation form.

Default Create a Site form in SharePoint 2013

The Solution

Because the app needs the ability to create sub-sites and site collections anywhere in the tenancy, it will need FullControl permission on the entire tenancy.  The app will also need to make app-only calls to SharePoint, so it can work with tenant objects or sites outside the current context.  Both of these settings can be configured in the Permissions tab of the AppManifest.xml.

AppManifest Permissions for our App
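For reference, the manifest markup behind those settings looks roughly like this (a sketch; the Permissions tab writes the AppPermissionRequest element, and the app-only checkbox sets AllowAppOnlyPolicy):

AppManifest.xml permission request (sketch)

<AppPermissionRequests AllowAppOnlyPolicy="true">
  <AppPermissionRequest Scope="http://sharepoint/content/tenant" Right="FullControl" />
</AppPermissionRequests>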

 

NOTE: You should typically avoid requesting tenancy permissions in your apps…especially with FullControl.  It is a best practice for apps to request the minimum permissions they need to function.  The “tenancy” permission scope is in place specifically for scenarios like provisioning.  Tenancy-scoped apps will typically be developed in-house, as I would be REALLY surprised if the Office Store accepted an app requesting Tenant FullControl permissions.

 

Our app will enable administrators to dynamically configure site creation settings using a Library in the app web.  I’m using a library instead of a list, so each option can be displayed in the site creation form with an icon.  Each row in the Library will represent a site provisioning option for an end-user.  The columns to support this include the following:

  • Title – the title of the configuration option (ex: Small Team Site)
  • Site Template – the WebTemplate name to be used in provisioning (ex: STS#0 for Team Site)
  • Base Path – the absolute URL where this option will get provisioned (ex: https://tenant/teams/)
  • Site Type – choice of “Subsite” or “Site Collection”
  • MasterPage URL – URL of the master page file to apply (leave blank for no branding)
  • Storage Maximum Limit – the storage quota in MB (only applicable for site collections)
  • UserCode Maximum Limit – the user code quota in points (only applicable for site collections)

Library Configuration in App Project

I leveraged a module in the solution to pre-load a few provisioning options, but an administrator could easily edit these options or add new options through the library.

Module Elements.xml to pre-load provisioning options
<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <Module Name="AppAssets">
    <File Path="AppAssets\Blog.png" Url="SSConfig/Blog.png" ReplaceContent="TRUE">
      <Property Name="Title" Value="Blog" Type="string"></Property>
      <Property Name="SiteTemplate" Value="BLOG#0" Type="string"></Property>
      <Property Name="BasePath" Value="https://richdizzcom.sharepoint.com/sites/Blogs/" Type="string"></Property>
      <Property Name="SiteType" Value="Subsite" Type="string"></Property>
      <Property Name="MasterPageUrl" Value="/sites/Blogs/_catalogs/masterpage/DallasMTC.com.master" Type="string"></Property>
      <Property Name="StorageMaximumLevel" Value="100" Type="string"></Property>
      <Property Name="UserCodeMaximumLevel" Value="300" Type="string"></Property>
    </File>
    <File Path="AppAssets\Community.png" Url="SSConfig/Community.png" ReplaceContent="TRUE">
      <Property Name="Title" Value="Community" Type="string"></Property>
      <Property Name="SiteTemplate" Value="COMMUNITY#0" Type="string"></Property>
      <Property Name="BasePath" Value="https://richdizzcom.sharepoint.com/sites/Communities/" Type="string"></Property>
      <Property Name="SiteType" Value="Subsite" Type="string"></Property>
      <Property Name="MasterPageUrl" Value="/sites/Communities/_catalogs/masterpage/DallasMTC.com.master" Type="string"></Property>
      <Property Name="StorageMaximumLevel" Value="100" Type="string"></Property>
      <Property Name="UserCodeMaximumLevel" Value="300" Type="string"></Property>
    </File>
    <File Path="AppAssets\Project.png" Url="SSConfig/Project.png" ReplaceContent="TRUE">
      <Property Name="Title" Value="Project" Type="string"></Property>
      <Property Name="SiteTemplate" Value="PROJECTSITE#0" Type="string"></Property>
      <Property Name="BasePath" Value="https://richdizzcom.sharepoint.com/teams/" Type="string"></Property>
      <Property Name="SiteType" Value="Site Collection" Type="string"></Property>
      <Property Name="MasterPageUrl" Value="https://richdizzcom.sharepoint.com/_catalogs/masterpage/DallasMTC.com.master" Type="string"></Property>
      <Property Name="StorageMaximumLevel" Value="100" Type="string"></Property>
      <Property Name="UserCodeMaximumLevel" Value="300" Type="string"></Property>
    </File>
    <File Path="AppAssets\Publishing.png" Url="SSConfig/Publishing.png" ReplaceContent="TRUE">
      <Property Name="Title" Value="Publishing" Type="string"></Property>
      <Property Name="SiteTemplate" Value="BLANKINTERNET#0" Type="string"></Property>
      <Property Name="BasePath" Value="https://richdizzcom.sharepoint.com/sites/" Type="string"></Property>
      <Property Name="SiteType" Value="Site Collection" Type="string"></Property>
      <Property Name="MasterPageUrl" Value="https://richdizzcom.sharepoint.com/_catalogs/masterpage/DallasMTC.com.master" Type="string"></Property>
      <Property Name="StorageMaximumLevel" Value="100" Type="string"></Property>
      <Property Name="UserCodeMaximumLevel" Value="300" Type="string"></Property>
    </File>
    <File Path="AppAssets\Team.png" Url="SSConfig/Team.png" ReplaceContent="TRUE">
      <Property Name="Title" Value="Team" Type="string"></Property>
      <Property Name="SiteTemplate" Value="STS#0" Type="string"></Property>
      <Property Name="BasePath" Value="https://richdizzcom.sharepoint.com/teams/" Type="string"></Property>
      <Property Name="SiteType" Value="Site Collection" Type="string"></Property>
      <Property Name="MasterPageUrl" Value="https://richdizzcom.sharepoint.com/_catalogs/masterpage/DallasMTC.com.master" Type="string"></Property>
      <Property Name="StorageMaximumLevel" Value="100" Type="string"></Property>
      <Property Name="UserCodeMaximumLevel" Value="300" Type="string"></Property>
    </File>
  </Module>
</Elements>

 

Our app will deliver a site creation form that will query the library to display site creation options.

PageLoad and btnCreate Events

private List<SSConfig> configList;
private const string SHAREPOINT_PID = "00000003-0000-0ff1-ce00-000000000000";
private const string TENANT_ADMIN_URL = "https://richdizzcom-admin.sharepoint.com";
protected void Page_Load(object sender, EventArgs e)
{
    //get SharePoint context
    var spContext = Util.ContextUtil.Current;
    using (var clientContext = TokenHelper.GetClientContextWithContextToken(spContext.ContextDetails.AppWebUrl, spContext.ContextDetails.ContextTokenString, Request.Url.Authority))
    {
        //populate the badges control
        List list = clientContext.Web.Lists.GetByTitle("SSConfig");
        CamlQuery query = new CamlQuery()
        {
            ViewXml = "<View><ViewFields><FieldRef Name='Title' /><FieldRef Name='SiteTemplate' /><FieldRef Name='BasePath' /><FieldRef Name='SiteType' /><FieldRef Name='MasterPageUrl' /><FieldRef Name='StorageMaximumLevel' /><FieldRef Name='UserCodeMaximumLevel' /></ViewFields></View>"
        };
        var items = list.GetItems(query);
        clientContext.Load(items, i => i.IncludeWithDefaultProperties(j => j.DisplayName));
        clientContext.ExecuteQuery();
        configList = items.ToList(spContext.ContextDetails.AppWebUrl, "SSConfig");
    }

    if (!this.IsPostBack)
    {
        //bind repeater
        repeaterTemplate.DataSource = configList;
        repeaterTemplate.DataBind();

        //configure buttons based on display type
        if (Page.Request["IsDlg"] == "1")
            btnCancel.Attributes.Add("onclick", "javascript:closeDialog();return false;");
        else
            btnCancel.Click += btnCancel_Click;
    }
}

protected void btnCancel_Click(object sender, EventArgs e)
{
    Response.Redirect(Page.Request["SPHostUrl"]);
}

protected void btnCreate_Click(object sender, EventArgs e)
{
    //get the selected config
    SSConfig selectedConfig = configList.FirstOrDefault(i => i.Title.Equals(hdnSelectedTemplate.Value));
    if (selectedConfig != null)
    {
        string webUrl = "";
        if (selectedConfig.SiteType.Equals("Site Collection", StringComparison.CurrentCultureIgnoreCase))
            webUrl = CreateSiteCollection(selectedConfig);
        else
            webUrl = CreateSubsite(selectedConfig);

        //redirect to new site
        ClientScript.RegisterStartupScript(typeof(Default), "RedirectToSite", "navigateParent('" + webUrl + "');", true);
    }
}

 

Once the user submits the site creation form, the app will provision differently based on the site type (Subsite or Site Collection).  One thing common to both methods is the need to execute app-only calls, since we will likely be provisioning in a different context from where the form is hosted (ex: a site collection requires the context of the tenant administration site).  The TokenHelper class contains a GetAppOnlyAccessToken method to get an access token for a specific site that is different from the context of the form.

To provision a sub-site, we need to establish context with the site that will host our new sub-site (i.e., the parent site).  To apply a brand to a sub-site, I’m requiring the master page to exist in the Master Page Gallery of the root web in the site collection.  That way, I can just set the MasterUrl and CustomMasterUrl after the new sub-site is provisioned.

Code to create sub-site

private string CreateSubsite(SSConfig selectedConfig)
{
    string webUrl = selectedConfig.BasePath + txtUrl.Text;

    //create subsite
    var parentSite = new Uri(selectedConfig.BasePath);  //static for my tenant
    var token = TokenHelper.GetAppOnlyAccessToken(SHAREPOINT_PID, parentSite.Authority, null).AccessToken;
    using (var clientContext = TokenHelper.GetClientContextWithAccessToken(parentSite.ToString(), token))
    {
        var properties = new WebCreationInformation()
        {
            Url = txtUrl.Text,
            Title = txtTitle.Text,
            Description = txtDescription.Text,
            WebTemplate = selectedConfig.SiteTemplate,
            UseSamePermissionsAsParentSite = false
        };

        //create and load the new web
        Web newWeb = clientContext.Web.Webs.Add(properties);
        clientContext.Load(newWeb, w => w.Title);
        clientContext.ExecuteQuery();

        //TODO: set additional owners

        //apply the masterpage to the site (if applicable)
        if (!String.IsNullOrEmpty(selectedConfig.MasterUrl))
        {
            newWeb.MasterUrl = selectedConfig.MasterUrl;
            newWeb.CustomMasterUrl = selectedConfig.MasterUrl;
        }

        /**************************************************************************************/
        /*   Placeholder area for updating additional settings and features on the new site   */
        /**************************************************************************************/

        //update the web with the new settings
        newWeb.Update();
        clientContext.ExecuteQuery();
    }

    return webUrl;
}

 

Provisioning a site collection is a little more complex.  First, we need to establish context with the tenant administration site and use a Tenant object for creation.  The Tenant object can be found in the Microsoft.Online.SharePoint.Client.Tenant assembly, which comes with the SharePoint Online Management Shell download.  Provisioning the site collection can take a while to complete, so this occurs asynchronously using the SpoOperation class.  We can leverage this to wait until the creation operation is complete before trying to apply the master page.  Applying a master page to a site collection takes a few extra steps.  The master page needs to live within the site collection, so our solution will download the master page from the location referenced in the configuration and upload it into the Master Page Gallery of the new site collection before setting MasterUrl and CustomMasterUrl on the site.  This requires that scripts and styles in the master page are referenced using absolute paths.  Pulling down and applying an entire design package would likely be more elegant, but that was overkill for this proof of concept.

Code to create site collection

private byte[] GetMasterPageFile(string masterUrl)
{
    byte[] mpBytes = null;

    //get the siteurl of the masterpage
    string siteUrl = masterUrl.Substring(0, masterUrl.IndexOf("/_catalogs"));

    var siteUri = new Uri(siteUrl);  //static for my tenant
    var token = TokenHelper.GetAppOnlyAccessToken(SHAREPOINT_PID, siteUri.Authority, null).AccessToken;
    using (var clientContext = TokenHelper.GetClientContextWithAccessToken(siteUri.ToString(), token))
    {
        string relativeMasterUrl = masterUrl.Substring(8);
        relativeMasterUrl = relativeMasterUrl.Substring(relativeMasterUrl.IndexOf("/"));
        File file = clientContext.Web.GetFileByServerRelativeUrl(relativeMasterUrl);
        var stream = file.OpenBinaryStream();
        clientContext.ExecuteQuery();
        using (stream.Value)
        {
            mpBytes = new Byte[stream.Value.Length];
            stream.Value.Read(mpBytes, 0, mpBytes.Length);
        }
    }

    return mpBytes;
}

private string CreateSiteCollection(SSConfig selectedConfig)
{
    string webUrl = "";

    //create site collection using the Tenant object
    var tenantAdminUri = new Uri(TENANT_ADMIN_URL);  //static for my tenant
    var token = TokenHelper.GetAppOnlyAccessToken(SHAREPOINT_PID, tenantAdminUri.Authority, null).AccessToken;
    using (var clientContext = TokenHelper.GetClientContextWithAccessToken(tenantAdminUri.ToString(), token))
    {
        var tenant = new Tenant(clientContext);
        webUrl = String.Format("{0}{1}", selectedConfig.BasePath, txtUrl.Text);
        var properties = new SiteCreationProperties()
        {
            Url = webUrl,
            Owner = "ridize@richdizzcom.onmicrosoft.com",
            Title = txtTitle.Text,
            Template = selectedConfig.SiteTemplate,
            StorageMaximumLevel = Convert.ToInt32(selectedConfig.StorageMaximumLevel),
            UserCodeMaximumLevel = Convert.ToDouble(selectedConfig.UserCodeMaximumLevel)
        };
        SpoOperation op = tenant.CreateSite(properties);
        clientContext.Load(tenant);
        clientContext.Load(op, i => i.IsComplete);
        clientContext.ExecuteQuery();

        //check if site creation operation is complete
        while (!op.IsComplete)
        {
            //wait 30seconds and try again
            System.Threading.Thread.Sleep(30000);
            op.RefreshLoad();
            clientContext.ExecuteQuery();
        }
    }

    //get the newly created site collection
    var siteUri = new Uri(webUrl);  //static for my tenant
    token = TokenHelper.GetAppOnlyAccessToken(SHAREPOINT_PID, siteUri.Authority, null).AccessToken;
    using (var clientContext = TokenHelper.GetClientContextWithAccessToken(siteUri.ToString(), token))
    {
        var newWeb = clientContext.Web;
        clientContext.Load(newWeb);
        clientContext.ExecuteQuery();

        //update description
        newWeb.Description = txtDescription.Text;

        //TODO: set additional site collection administrators

        //apply the masterpage to the site (if applicable)
        if (!String.IsNullOrEmpty(selectedConfig.MasterUrl))
        {
            //get the master page bytes from its existing location
            byte[] masterBytes = GetMasterPageFile(selectedConfig.MasterUrl);
            string newMasterUrl = String.Format("{0}{1}/_catalogs/masterpage/ssp.master", selectedConfig.BasePath, txtUrl.Text);
                   
            //upload to masterpage gallery of new web and set
            List list = newWeb.Lists.GetByTitle("Master Page Gallery");
            clientContext.Load(list, i => i.RootFolder);
            clientContext.ExecuteQuery();
            FileCreationInformation fileInfo = new FileCreationInformation();
            fileInfo.Content = masterBytes;
            fileInfo.Url = newMasterUrl;
            Microsoft.SharePoint.Client.File masterPage = list.RootFolder.Files.Add(fileInfo);
            string relativeMasterUrl = newMasterUrl.Substring(8);
            relativeMasterUrl = relativeMasterUrl.Substring(relativeMasterUrl.IndexOf("/"));

            //we can finally set the masterurls on the newWeb
            newWeb.MasterUrl = relativeMasterUrl;
            newWeb.CustomMasterUrl = relativeMasterUrl;
        }

        /**************************************************************************************/
        /*   Placeholder area for updating additional settings and features on the new site   */
        /**************************************************************************************/

 

        //update the web with the new settings
        newWeb.Update();
        clientContext.ExecuteQuery();
    }

    return webUrl;
}

 

 

 

Because our site creation form will be launched in the “Start a Site” dialog, our app needs to be able to communicate back with the SharePoint page that hosts the dialog.  This communication is necessary to make the dialog page visible, resize the dialog, close the dialog, and navigate away from the dialog.  This cross-domain communication is achieved using the HTML5 postMessage API, where SharePoint “listens” for MakePageVisible, Resize, CloseDialog, and NavigateParent messages from the page displayed within the dialog IFRAME.  Below are the scripts to implement this messaging, which I hunted down through thousands of lines of JavaScript.

postMessage scripts for interacting with dialog parent

//Makes the page visible in the dialog
function MakeSSCDialogPageVisible() {
    var dlgMadeVisible = false;
    try {
        var dlg = window.top.g_childDialog;
        if (Boolean(window.frameElement) && Boolean(window.frameElement.makeVisible)) {
            window.frameElement.makeVisible();
            dlgMadeVisible = true;
        }
    }
    catch (ex) {
    }
    if (!dlgMadeVisible && Boolean(top) && Boolean(top.postMessage)) {
        var message = "MakePageVisible";
        top.postMessage(message, "*");
    }
}

//Resizes the dialog for the page size
function UpdateSSCDialogPageSize() {
    var dlgResized = false;
    try {
        var dlg = window.top.g_childDialog;
        if (!fIsNullOrUndefined(dlg)) {
            dlg.autoSize();
            dlgResized = true;
        }
    }
    catch (ex) {
    }
    if (!dlgResized && Boolean(top) && Boolean(top.postMessage)) {
        var message = "PageWidth=450;PageHeight=480";
        top.postMessage(message, "*");
    }
}

//postMessage to SharePoint for closing dialog
function closeDialog() {
    var target = parent.postMessage ? parent : (parent.document.postMessage ? parent.document : undefined);
    target.postMessage('CloseDialog', '*');
}

//postMessage to close the dialog and navigate from the page to a specified url
function navigateParent(url) {
    var target = parent.postMessage ? parent : (parent.document.postMessage ? parent.document : undefined);
    target.postMessage('NavigateParent=' + url, '*');
}
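
As a usage sketch, the form page can call the first two helpers as soon as it loads inside the dialog:

//reveal and size the dialog once the form page is ready
$(document).ready(function () {
    MakeSSCDialogPageVisible();
    UpdateSSCDialogPageSize();
});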

 

In order for our site creation form to launch from the “add site” link on the “sites” page, we need to configure it as the “Start a Site” form in the tenant administration site.  This field is limited to 255 characters, so we will likely need to remove some of the standard tokens that are passed to our app.  At minimum, the URL MUST include SPHostUrl and SPAppWebUrl parameters.  Our app will leverage these to get context and access tokens from SharePoint.
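As a hypothetical example (the host name and page are placeholders, and you could spell out the parameter values instead of using the URL tokens Visual Studio generates for app start pages), a trimmed form URL might look like this:

https://provisioningapp.azurewebsites.net/Pages/Default.aspx?SPHostUrl={HostUrl}&SPAppWebUrl={AppWebUrl}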

"Start a Site" configuration in SharePoint admin center

Our app should now be ready for primetime!  Here is a screenshot of our custom app launched from the “Start a Site” dialog and in full-screen mode.

Our custom provisioning app launched from the "Create a Site" link

The fullscreen version of the app

Final Thoughts

I hope this post helped illustrate the power of delivering self-service site creation and how the app model can take it to the next level.  I see this and “App Stapling” as the two primary patterns for pushing capabilities/features into sites and site collections in SharePoint Online.  Please note that the provided code is not production ready and references one of my tenants.

Solution code: http://sdrv.ms/XSBX7n

Advanced Content Enrichment in SharePoint 2013 Search


Microsoft re-engineered the search experience in SharePoint 2013 to take advantage of the best capabilities from FAST, plus many new capabilities built from the ground up.  Although much has been said about the query-side changes of search (result sources, query rules, content by search web part, display templates, etc.), the feed side of search got similar love from Redmond.  In this post I’ll discuss a concept carried over from FAST that allows crawled content to be manually massaged before getting added to the search index.  Several basic examples of this capability exist, so I’ll throw some advanced solution challenges at it.  The solution adds a sentiment analysis score to indexed social activity, as outlined in the video below.

(Please visit the site to view this video)

The content enrichment web service (CEWS) callout is a component of the content processing pipeline that enables organizations to augment content before it is added to the search index.  CEWS can be any external SOAP-based web service that implements the IContentProcessingEnrichmentService interface.  SharePoint can be configured to call CEWS with specific managed properties and (optionally) the raw binary file.  CEWS can update existing managed property values and/or add completely new managed properties.  The outputs of this enrichment service get merged into content before it is added to the search index.  The CEWS callout can be used for numerous data cleansing, entity extraction, classification, and tagging scenarios, such as the following (a minimal service skeleton follows the list):

  • Perform sentiment analysis on social activity and augment activity with a sentiment score
  • Translate a field or folder structure to a taxonomy term in the managed metadata service
  • Derive an item property based on one or more other properties
  • Perform lookups against line of business data and tag items with that data
  • Parse the raw binary file for more advanced entity extraction
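
For reference, the minimum shape of such a service is tiny.  The sketch below compiles against the interface named above and simply echoes items back unchanged (the full service for this solution appears later in the post):

Minimal CEWS skeleton (sketch)

using System.Collections.Generic;
using Microsoft.Office.Server.Search.ContentProcessingEnrichment;
using Microsoft.Office.Server.Search.ContentProcessingEnrichment.PropertyTypes;

public class PassthroughEnrichmentService : IContentProcessingEnrichmentService
{
    //return the item unchanged; a real service adds or updates managed properties here
    public ProcessedItem ProcessItem(Item item)
    {
        return new ProcessedItem()
        {
            ErrorCode = 0,
            ItemProperties = new List<AbstractProperty>()
        };
    }
}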

The content enrichment web service is a synchronous callout in the content processing pipeline.  As such, complex operations in CEWS can have a big impact on crawl durations/performance.  An additional challenge exists in the enrichment of content that hasn’t changed (and thus doesn’t get crawled).  An item only goes through the content processing pipeline during full crawls, or during incremental/continuous crawls after the item is updated/marked dirty.  When only the enriched properties need to change, a full crawl is the only out-of-the-box approach to content enrichment.

The solution outlined in this post addresses both of these challenges.  It will deliver an asynchronous CEWS callout and a process for marking an indexed item as dirty so it can be re-crawled without touching/updating the actual item.  The entire solution has three primary components…a content enrichment web service, a custom SharePoint timer job for marking content in the crawl log for re-crawl, and a database to queue asynchronous results that other components can reference.

High-level Architecture of Async CEWS Solution

Enrichment Queue (SQL Database)

Because of the asynchronous nature of the solution, operations will be running on different threads, some of which could be long running.  In order to persist information between threads, I leveraged a single-table SQL database to queue asynchronously processed items.  Here is the schema and description of that database table.

  • Id – integer identity column that serves as the unique id of the rows in the database
  • ItemPath – the absolute path to the item as provided by the crawler and crawl logs
  • ManagedProperty – the managed property that gets its value from an asynchronous operation
  • DataType – the data type of the managed property so we can cast the value correctly
  • CrawlDate – the date the item was sent through CEWS, which serves as a crawl timestamp
  • Value – the value derived from the asynchronous operation
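
Because the solution uses the Entity Framework, each row surfaces in code as a simple entity.  The class below is a sketch of that shape (the generated model in the solution exposes an equivalent EnrichmentAsyncData type):

EnrichmentAsyncData entity (sketch)

public class EnrichmentAsyncData
{
    public int Id { get; set; }
    public string ItemPath { get; set; }
    public string ManagedProperty { get; set; }
    public string DataType { get; set; }
    public System.DateTime CrawlDate { get; set; }
    public string Value { get; set; }
}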

Content Enrichment Web Service

As mentioned at the beginning of the post, the content enrichment web service callout is implemented by creating a web service that references the IContentProcessingEnrichmentService interface.  There are a number of great examples of this online, including on MSDN.  Instead, this post will focus on calling asynchronous operations from this callout.  The main objective of making the CEWS callout asynchronous is to prevent the negative impact a long-running process could have on crawling content.  The best way to do this in CEWS is to collect all the information we need in the callout, pass the information to a long-running process queue, update any items that have values ready from the queue, and then release the callout thread (before the long-running process completes).

Process Diagram of Async CEWS

Below is the callout code in its entirety.  Note that I leveraged the entity framework for connecting to my enrichment queue database (ContentEnrichmentEntities class below):

Content Enrichment Web Service

using Microsoft.Office.Server.Search.Administration;
using Microsoft.Office.Server.Search.ContentProcessingEnrichment;
using Microsoft.Office.Server.Search.ContentProcessingEnrichment.PropertyTypes;
using Microsoft.SharePoint;
using System;
using System.Collections.Generic;
using System.Configuration;
using System.IO;
using System.Linq;
using System.Net;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.Text;
using System.Threading;

namespace ContentEnrichmentServices
{
    public class Service1 : IContentProcessingEnrichmentService
    {
        private const int UNEXPECTED_ERROR = 2;
        public ProcessedItem ProcessItem(Item item)
        {
            //initialize the processedItem
            processedItem.ErrorCode = 0;
            processedItem.ItemProperties.Clear();
           
            try
            {
                //only process items where ContentType:Item
                var ct = item.ItemProperties.FirstOrDefault(i => i.Name.Equals("ContentType", StringComparison.Ordinal));
                if (ct != null && ct.ObjectValue.ToString().Equals("Item", StringComparison.CurrentCultureIgnoreCase))
                {
                    //get path and use database to process async enrichment data
                    var path = item.ItemProperties.FirstOrDefault(i => i.Name.Equals("Path", StringComparison.Ordinal));
                    var title = item.ItemProperties.FirstOrDefault(i => i.Name.Equals("Title", StringComparison.Ordinal));
                    var sentiment = item.ItemProperties.FirstOrDefault(i => i.Name.Equals("Sentiment", StringComparison.Ordinal));
                    if (path != null && title != null)
                    {
                        using (ContentEnrichmentEntities entities = new ContentEnrichmentEntities(ConfigurationManager.ConnectionStrings["ContentEnrichmentEntities"].ConnectionString))
                        {
                            //try to get the item from the database
                            string pathValue = path.ObjectValue.ToString();
                            var asyncItem = entities.EnrichmentAsyncData.FirstOrDefault(i => i.ItemPath.Equals(pathValue, StringComparison.CurrentCultureIgnoreCase));
                            if (asyncItem != null && !String.IsNullOrEmpty(asyncItem.Value))
                            {
                                //add the property to processedItem
                                Property<decimal> sentimentProperty = new Property<decimal>()
                                {
                                    Name = asyncItem.ManagedProperty,
                                    Value = Convert.ToDecimal(asyncItem.Value)
                                };
                                processedItem.ItemProperties.Add(sentimentProperty);

                                //delete the async item from the database
                                entities.EnrichmentAsyncData.DeleteObject(asyncItem);
                            }
                            else
                            {
                                if (sentiment != null && sentiment.ObjectValue != null)
                                    processedItem.ItemProperties.Add(sentiment);
                                if (asyncItem == null)
                                {
                                    //add to database
                                    EnrichmentAsyncData newAsyncItem = new EnrichmentAsyncData()
                                    {
                                        ManagedProperty = "Sentiment",
                                        DataType = "System.Decimal",
                                        ItemPath = path.ObjectValue.ToString(),
                                        CrawlDate = DateTime.Now.ToUniversalTime()
                                    };
                                    entities.EnrichmentAsyncData.AddObject(newAsyncItem);

                                    //Start a new thread for this async operation
                                    Thread thread = new Thread(GetSentiment);
                                    thread.Name = "Async - " + path;
                                    var data = new AsyncData()
                                    {
                                        Path = path.ObjectValue.ToString(),
                                        Data = title.ObjectValue.ToString()
                                    };
                                    thread.Start(data);
                                }
                            }

                            //save the changes
                            entities.SaveChanges();
                        }
                    }
                }
            }
            catch (Exception)
            {
                processedItem.ErrorCode = UNEXPECTED_ERROR;
            }

            return processedItem;
        }

        /// <summary>
        /// Called on a separate thread to perform sentiment analysis on text
        /// </summary>
        /// <param name="data">object containing the crawl path and text to analyze</param>
        public static void GetSentiment(object data)
        {
            AsyncData asyncData = (AsyncData)data;
            HttpWebRequest myRequest = (HttpWebRequest)HttpWebRequest.Create("http://text-processing.com/api/sentiment/");
            myRequest.Method = "POST";
            string text = "text=" + asyncData.Data;
            byte[] bytes = Encoding.UTF8.GetBytes(text);
            myRequest.ContentLength = bytes.Length;

            using (Stream requestStream = myRequest.GetRequestStream())
            {
                requestStream.Write(bytes, 0, bytes.Length);
                requestStream.Flush();
                requestStream.Close();

                using (WebResponse response = myRequest.GetResponse())
                {
                    using (StreamReader reader = new StreamReader(response.GetResponseStream()))
                    {
                        string result = reader.ReadToEnd();
                        using (ContentEnrichmentEntities entities = new ContentEnrichmentEntities(ConfigurationManager.ConnectionStrings["ContentEnrichmentEntities"].ConnectionString))
                        {
                            //try to get the item from the database
                            var asyncItem = entities.EnrichmentAsyncData.FirstOrDefault(i => i.ItemPath.Equals(asyncData.Path, StringComparison.CurrentCultureIgnoreCase));
                            if (asyncItem != null && String.IsNullOrEmpty(asyncItem.Value))
                            {
                                //calculate sentiment from result
                                string neg = result.Substring(result.IndexOf("\"neg\": ") + 7);
                                neg = neg.Substring(0, neg.IndexOf(','));
                                string pos = result.Substring(result.IndexOf("\"pos\": ") + 7);
                                pos = pos.Substring(0, pos.IndexOf('}'));
                                decimal negD = Convert.ToDecimal(neg);
                                decimal posD = Convert.ToDecimal(pos);
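                                //map the neg/pos probabilities to a 0-10 scale where 5 is neutral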
                                decimal sentiment = 5 + (-5 * negD) + (5 * posD);

                                asyncItem.Value = sentiment.ToString();
                                entities.SaveChanges();
                            }
                        }
                    }
                }
            }
        }

        private readonly ProcessedItem processedItem = new ProcessedItem()
        {
            ItemProperties = new List<AbstractProperty>()
        };
    }

    public class AsyncData
    {
        public string Path { get; set; }
        public string Data { get; set; }
    }
}

 

The content enrichment web service is associated with a search service application using Windows PowerShell.  The configuration of this service has a lot of flexibility around the managed properties going in and out of CEWS and the criteria for triggering the callout.  In my example the trigger is empty, indicating all items going through CEWS:

PowerShell to Register CEWS

$ssa = Get-SPEnterpriseSearchServiceApplication
Remove-SPEnterpriseSearchContentEnrichmentConfiguration -SearchApplication $ssa
$config = New-SPEnterpriseSearchContentEnrichmentConfiguration
$config.Endpoint = "http://localhost:8888/Service1.svc"
$config.DebugMode = $false
$config.SendRawData = $false
$config.InputProperties = "Path", "ContentType", "Title", "Sentiment"
$config.OutputProperties = "Sentiment"
Set-SPEnterpriseSearchContentEnrichmentConfiguration -SearchApplication $ssa -ContentEnrichmentConfiguration $config
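
If you want to confirm the settings stuck, the matching Get- cmdlet echoes the registered configuration back:

PowerShell to Verify CEWS Registration

$ssa = Get-SPEnterpriseSearchServiceApplication
Get-SPEnterpriseSearchContentEnrichmentConfiguration -SearchApplication $ssa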

 

Timer Job (Force Re-Crawl)

The biggest challenge with an asynchronous enrichment approach is updating the index after the CEWS thread is released.  No API exists to directly update items in the search index, so CEWS is the last opportunity to augment an item before it becomes available to users executing queries.  The best we can do is kick off an asynchronous thread that can queue enrichment data for the next crawl.  Marking individual items for re-crawl is a critical component of the solution, because “the next crawl” will only crawl items if a full crawl occurs or if the search connector believes the source items have been updated (which could be never).  The crawl log in Central Administration provides a mechanism to mark individual indexed items for re-crawl.

CrawlLogURLExplorer.aspx option to recrawl

 

I decompiled the CrawlLogURLExplorer.aspx page and was pleased to find it leveraged a Microsoft.Office.Server.Search.Administration.CrawlLog class with a public RecrawlDocument method to re-crawl items by path.  This API basically updates an item in the crawl log so it looks like an error to the crawler, and thus gets picked up in the next incremental/continuous crawl.

So why a custom SharePoint timer job?  An item may not yet be represented in the crawl log when our asynchronous thread completes (especially for new items).  Calling RecrawlDocument on a path that does not exist in the crawl log would do nothing.  The timer job allows us to mark items for re-crawl only if the most recent crawl is complete or has a start date after the crawl timestamp of the item.  In short, it will take a minimum of two incremental crawls for a new item to get enrichment data with this asynchronous approach.

Custom Timer Job

using Microsoft.Office.Server.Search.Administration;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;
using System;
using System.Collections.Generic;
using System.Data.EntityClient;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace ContentEnrichmentTimerJob
{
    public class ContentEnrichmentJob : SPJobDefinition
    {
        public ContentEnrichmentJob() : base() { }
        public ContentEnrichmentJob(string jobName, SPService service, SPServer server, SPJobLockType targetType) : base(jobName, service, server, targetType) { }        
        public ContentEnrichmentJob(string jobName, SPWebApplication webApplication) : base(jobName, webApplication, null, SPJobLockType.ContentDatabase)
        {
            this.Title = "Content Enrichment Timer Job";        
        }

        public override void Execute(Guid targetInstanceId)
        {
            try
            {
                SearchServiceApplication application = SearchService.Service.SearchServiceApplications.FirstOrDefault();
                CrawlLog crawlLog = new CrawlLog(application);
                using (ContentEnrichmentEntities entities = new ContentEnrichmentEntities(GetEntityConnection()))
                {
                    //process all items in the database that were added before the current crawl
                    DateTime start, stop;
                    GetLatestCrawlTimes(WebApplication.Sites[0], out start, out stop); //use the first site collection for context
                    foreach (var item in entities.EnrichmentAsyncData.Where(i => i.CrawlDate < start || stop != DateTime.MaxValue))
                    {
                        crawlLog.RecrawlDocument(item.ItemPath.TrimEnd('/'));
                    }
                }
            }
            catch (Exception)
            {
                //TODO: log error
            }
        }

        private EntityConnection GetEntityConnection()
        {
            //build an Entity Framework connection string in code...too lazy to update the OWSTIMER config
            EntityConnectionStringBuilder connBuilder = new EntityConnectionStringBuilder();
            connBuilder.Provider = "System.Data.SqlClient";
            connBuilder.ProviderConnectionString = "data source=SHPT01;initial catalog=ContentEnrichment;integrated security=True;MultipleActiveResultSets=True;App=EntityFramework";
            connBuilder.Metadata = "res://*/ContentEnrichmentModel.csdl|res://*/ContentEnrichmentModel.ssdl|res://*/ContentEnrichmentModel.msl";

            //return the formatted connection string
            return new EntityConnection(connBuilder.ToString());
        }

        private void GetLatestCrawlTimes(SPSite site, out DateTime start, out DateTime stop)
        {
            //get the start/end times of the most recent crawl of the content source
            SPServiceContext context = SPServiceContext.GetContext(site);
            SearchServiceApplication application = SearchService.Service.SearchServiceApplications.FirstOrDefault();
            Content content = new Content(application);
            ContentSource cs = content.ContentSources["Local SharePoint sites"];
            CrawlLog crawlLog = new CrawlLog(application);
            var history = crawlLog.GetCrawlHistory(1, cs.Id);
            start = Convert.ToDateTime(history.Rows[0]["CrawlStartTime"]);
            stop = Convert.ToDateTime(history.Rows[0]["CrawlEndTime"]);
        }
    }
}

 

With these three solution components in place, we get the following before/after experience in search:

Before / After
 

Final Thoughts

Content enrichment is a mature yet powerful search customization.  I hope this post helped illustrate a creative use of content enrichment that can take your search experience to the next level.

Code for solution: ContentEnrichmentServices.zip

App Approaches to Common SharePoint Customizations


I have spent the last year traveling the world and authoring numerous posts/solutions to evangelize SharePoint development using apps for SharePoint.  Apps for SharePoint are incredibly powerful, but many SharePoint developers still gravitate to farm solutions based on familiarity and misconceptions about the "limitations" of the app model.  I'm confident that almost ANY SharePoint solution can be delivered in the app model with good understanding and a little creativity.  To support my case, I decided to assemble a comprehensive list of common SharePoint customizations, evaluate their support in the new app model, and provide alternate solutions where feasible.  Although it is clear that gaps exist between apps and farm solutions, alternatives are available in most cases.  Feel free to send me a note if you feel I'm missing something from this list.

The list below pairs each customization with comments on its app capability, limitations, and alternatives:

  • Alternate CSS (in hive): Apps cannot deploy files to SharePoint's root files/hive. Instead, css can be deployed to SharePoint using a module or app installed event.
  • Alternate CSS (in module): Apps can use modules to deploy css to SharePoint, but only to the app web…not the host web. Referencing css from an app web can have inconsistent results in a host web depending on the user's browser and browser settings. As an alternative, an app installed event can be used to deploy css to a host web.
  • Alternate CSS (apply): Apps can leverage an app installed event to apply alternate css to a SharePoint site. This is similar to using feature receivers in farm solutions for the same results.
  • App for Office: Deploying an app for Office inside a SharePoint solution is completely new in 2013 and fully supported using apps for SharePoint. This can be a very powerful component to include in solutions, as is outlined in this post.
  • Application Page: Apps cannot deploy files to SharePoint's root files/hive, which includes the layouts folder. As an alternative, web part pages can be deployed through a module or app installed event receiver.
  • Column: Declarative columns can only be deployed to the app web…not the host web. If you need to deploy columns into the host web (ex: a content syndication hub), you can leverage an app installed event to provision columns programmatically in the host web.
  • Content Type: Declarative content types can only be deployed to the app web…not the host web. If you need to deploy content types into the host web (ex: a content syndication hub), you can leverage an app installed event to provision content types programmatically in the host web.
  • Custom Action: Apps support only specific types of custom actions, including ribbon commands and context menus in both the app web and host web. These can be very powerful when combined with dialog boxes, but do not cover the full spectrum of custom actions available to farm solutions (ex: Site Actions menu).
  • Delegate Control: Apps for SharePoint do not support "Control" elements that can be used with delegate controls in a master page. Delegate controls are a common mechanism for swapping out functionality of a site using farm solutions (particularly useful with the AdditionalPageHead delegate). As an alternative, the same result can be achieved through design in a custom master page (ex: place specific html or server controls in the master page).
  • Feature Receiver: Apps have remote events for "App Installed", "App Uninstalling", and "App Upgraded". All of these can be used in a similar way to feature receivers in farm solutions. It should be noted that app remote events do not implement reliable messaging.
  • Feature Stapling: Apps can be "stapled" through the app catalog.  This has more flexible rules than traditional stapling, allowing specific sites, managed paths, or site templates to get apps pushed to them. The app catalog even provides a mechanism for app retraction. However, apps installed through app stapling share a single app web (from the app catalog) and app remote events only fire during installation into the catalog. For more information, see my post a few months back on "app stapling".
  • Field Type: Custom field types require files to be deployed into SharePoint's root files/hive, which is off limits to the app model. As an alternative, apps can easily deploy custom forms that can leverage almost any web/html controls imaginable. Custom field types have been discouraged, so this approach is even better than farm solutions IMO.
  • Image (in hive): Apps cannot deploy files to SharePoint's root files/hive. Instead, images can be deployed to SharePoint using a module or app installed event.
  • List Definition: Apps can deploy custom list definitions, but only to the app web and not the host web.
  • List Event Receiver: Apps for SharePoint can deploy remote event receivers that provide similar functionality to list event receivers in farm solutions. However, they can only be deployed to lists/libraries in the app web, not the host web.
  • List Instance: Declarative list instances can only be deployed to the app web and not the host web. If you need a list provisioned in the host web, you can alternatively look at provisioning the list programmatically into the host web using an app installed event.
  • Master Page (in hive): Apps cannot deploy files to SharePoint's root files/hive. Instead, master pages can be deployed to SharePoint using a module or app installed event.
  • Master Page (module): Apps can use modules to deploy master pages to SharePoint, but only to the app web…not the host web. Referencing a master page from an app web can have inconsistent results in a host web depending on the user's browser and browser settings. As an alternative, an app installed event can be used to deploy a master page to a host web.
  • Master Page (apply): Apps can leverage an app installed remote event to apply a master page to a SharePoint site. This is similar to using feature receivers in farm solutions for the same results.
  • Module: Apps can leverage modules to declaratively deploy files into SharePoint. However, they can only deploy to the app web. As an alternative, an app installed event can be used to deploy files to a host web.
  • Script (in hive): Apps cannot deploy files to SharePoint's root files/hive. Instead, script can be deployed to SharePoint using a module or app installed event.
  • Script (module): Apps can use modules to deploy script to SharePoint, but only to the app web…not the host web. Referencing script from an app web can have inconsistent results in a host web depending on the user's browser and browser settings. This also poses a cross-domain issue for the script. As an alternative, an app installed event can be used to deploy script to a host web.
  • Search Configuration: The query side of search is largely re-architected in SharePoint 2013 and supports the export/import of search configuration. Apps for SharePoint can be used to consistently and efficiently deploy a search configuration to a number of sites!
  • Service Application: Cloud-hosted apps for SharePoint (Provider and Autohosted) can deliver similar functionality to service applications. Apps can have tenancy permissions and be configured to make app-only API calls (ex: outside the context of a user). It all depends on the desired integration and capabilities to determine if an app is an acceptable alternative.
  • Site Definition: Site definitions deploy files into SharePoint's root files/hive, which is not a capability of apps for SharePoint. As an alternative, consider web templates, "app stapling", or controlling the provisioning process.
  • Theme: Although I would question the sanity of someone looking to create a theme (we seem to change themes in each release of SharePoint), they can be deployed through an app module (to app web) or app installed event (to host web). This includes the new "composed looks" in SharePoint 2013 that include several files (master pages, theme color palette, fonts, images, etc).
  • Timer Job: An external process (ex: worker role) can deliver similar functionality to a timer job, given the app has tenancy permissions and the ability to perform app-only calls. It all depends on the desired integration and capabilities to determine if an app is an acceptable alternative. I did this for media encoding in this solution and for site collection provisioning in this solution.
  • User Control (controltemplates): Apps cannot deploy files to SharePoint's root files/hive, which includes the controltemplates folder. As an alternative, user controls can live in the remote web of an app and be used in some ways similar to a user control in controltemplates, but not as delegate controls or page directives in SharePoint pages.
  • Web Service (ISAPI): ISAPI web services (under the _vti_bin url) are deployed to the root files/hive of SharePoint, which is off limits to apps. As an alternative, "Cloud-hosted" apps for SharePoint (Provider and Autohosted) can host custom web services in their remote web.
  • Web Part: Apps can deploy "App Parts" (also called Client Web Parts) to SharePoint, but these can only have certain types of custom properties (string, int, bool, enum) and cannot use native web part connections. As an alternative, App Parts can leverage postMessage APIs to communicate with SharePoint (and possibly with each other)...see the sketch after this list.
  • Web Part Page: A web part page is simply an .aspx file deployed through a module. Apps support modules, but only to the app web and not the host web. An alternative is to use the app installed event to add the web part page to the host web. I give this a 50% because it would be impossible to add web parts declaratively to the web part page using this approach (they would have to be added programmatically or through the UI).
  • Web Template: Web templates live in the solutions catalog of the host web's site collection root. An app installed event can be used to deploy a web template .wsp to the solution catalog, but cannot activate it.
  • Workflow: Workflow is re-architected in 2013 to be declarative and is completely supported in the new app model.
  • Workflow Activity: Workflow activities are also covered by the re-architected declarative workflow in 2013 and are completely supported in the new app model.
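
The postMessage option called out in the Web Part row above deserves a quick illustration. SharePoint's app part host page listens for specially formatted messages from the iframe, which is how an app part can, for example, ask to be resized. Below is a minimal sketch of that handshake...the dimensions are arbitrary and the query string parsing is intentionally bare-bones:

App Part asking the host page for a resize (sketch)

//inside the app part page: read the SenderId that SharePoint passes on the query string
var senderId = '';
var params = window.location.search.substring(1).split('&');
for (var i = 0; i < params.length; i++) {
    if (params[i].indexOf('SenderId=') == 0)
        senderId = decodeURIComponent(params[i].substring(9));
}

//ask the host page to resize this app part's iframe to 600x400
window.parent.postMessage('<message senderId=' + senderId + '>resize(600, 400)</message>', '*');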

Important Note

Apps for SharePoint are designed to leverage the app web for all the SharePoint artifacts they need.  The app web is a hidden sub-site under the host web that provides isolation for the app.  This is why ALL the declarative elements of an app are deployed to the app web and not the host web (ex: modules, columns, content types, etc).  Apps are architected this way so they leave no trace of themselves on the host web when they are uninstalled.  The list above mentions the use of the app installed event to deploy elements to the host web.  This is NOT an appropriate practice for general app development, especially apps that are designed for the Office Store.  The approach is more appropriate for internal development, including customers moving to SharePoint Online or those with extensive farm solutions.

Getting Creative

I have two creative tips that have helped me close the gap between apps and farm solutions:

  1. Control the provisioning process...by doing this, you can tack on just about anything you can imagine using CSOM before you hand the site over to a user.  This includes branding, specialized permissions, feature activation, etc. (see the sketch after this list).  I recently authored a solution that shows how to do this with the app model (even provisioning site collections).  You can find that solution here.
  2. Add App-only Permission to your app...doing this allows your app to call SharePoint APIs outside the context of a user.  This is enormously helpful for long-running processes/jobs or for calls from other apps.  The solution referenced in tip #1 also uses this approach.
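
To give tip #1 some shape, here is a minimal JSOM sketch of the kind of finishing touch you might apply during a controlled provisioning process. The site and master page URLs are placeholders, and the snippet assumes it runs with access to the target site (from an app page, you would pair it with SP.RequestExecutor as shown elsewhere on this blog):

Applying a master page during provisioning (sketch)

//apply a custom master page to a freshly provisioned site before handing it over
//the URLs below are placeholders...swap in your own site and catalog paths
var ctx = new SP.ClientContext('https://tenant.sharepoint.com/sites/newsite');
var web = ctx.get_web();
web.set_masterUrl('/sites/newsite/_catalogs/masterpage/contoso.master');
web.set_customMasterUrl('/sites/newsite/_catalogs/masterpage/contoso.master');
web.update();
ctx.executeQueryAsync(
    function () { console.log('Master page applied'); },
    function (sender, args) { console.log('Failed: ' + args.get_message()); });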

Closing Words

Beyond the list above, it is important to recognize the numerous benefits of the new app model.  Apps for SharePoint do not run server code on the SharePoint farm, making them much safer to run (including in SharePoint Online) and don't impact upgrades/service packs.  They are cloud-ready and cater to the numerous HTML/JS developers in the world, opening the door to numerous solution possibilities and cost efficient developer resources.  One of the biggest advantages I've realized is in my development environment.  In the past, I've carried a huge laptop with multiple cores and 32GB of memory.  This was required because SharePoint assemblies weren't remoteable, so being an effective developer required a full SharePoint server.  Today, I can use almost anything with a browser and a developer site collection (add Visual Studio for cloud-hosted apps).  The picture below shows my new SharePoint development machine next to my old one...ahhh, life is good!

SharePoint Sandbox isn't Dead...UserCode is


With the introduction of apps for SharePoint, many have speculated that sandbox solutions are dead/deprecated.  This is accurate for solutions containing assemblies running on the Sandboxed Code Service (aka - SPUCHostService.exe).  However, declarative solutions are very much still in play and widely used internally by SharePoint (ex: Web Templates and Design Manager).  A declarative .wsp package (one containing no assemblies) is a powerful way to provision elements into the host web.  So don't be afraid to leverage the solutions gallery as long as your .wsp packages don't contain code.

If your solution contains code and a farm solution isn't an option (read: SharePoint Online), you might consider an approach I outlined in my recent post on App Approaches for Common SharePoint Customizations.  In it, I discussed the use of app installed remote events to provision elements in the host web.  This pattern is generally discouraged, as apps for SharePoint should uninstall cleanly (hence the reason for an app web).  However, it is appropriate for internal apps (not marketplace apps) containing programming logic and host web dependencies.  The app model should not be confused with a deployment mechanism, so if your solution doesn't have code...consider the solution gallery.

Apps for SharePoint and Yammer


I have found that almost every app for SharePoint can benefit from leveraging a social platform.  It might be as simple as using a social platform for basic profile information/pictures.  The most obvious choice for many app developers is to leverage SharePoint as the social platform.  After all, the app manifest allows developers to request permissions to the social features of SharePoint.  However, Microsoft offers a more feature-rich social platform in Yammer.  Microsoft's future social investments will be heavily concentrated on Yammer and the Yammer/SharePoint integration.  A glimpse into these investments is detailed in Jared Spataro's post on the enterprise social roadmap.  By targeting both social platforms, SharePoint app developers greatly increase their customer reach and better position their apps for the direction of the SharePoint platform.  In this post, I will outline some patterns and lessons learned from developing Social Nucleus, an app for SharePoint that can leverage SharePoint or Yammer as its social platform.


App Catalogs

Both SharePoint and Yammer have public and private catalogs for apps.  In SharePoint, internal/private apps can be published in an app catalog site collection and made available within an organization.  Public SharePoint apps can be found in the SharePoint Store, a public marketplace for solutions that have been tested and validated by Microsoft.  In Yammer, the only difference between a public and private app is a "Global" flag.  Apps without this global flag are only available to the home network of the app developer.  Apps that are deployed to the Global App Directory go through a vetting process with Yammer and, if approved, are marked "Global" and listed in the directory.  App developers can build one app and have it listed in two app catalogs...very cool!  This is the pattern I took with Social Nucleus, which is listed FREE in both the SharePoint Store and Yammer's Global App Directory.

SharePoint Store / Yammer Global Catalog

Yammer Development

If you are interested in developing apps against Yammer, I highly recommend reviewing their API Documentation.  You will find it thorough, very similar to SharePoint, and easy to use.

Authentication

Both SharePoint and Yammer app models leverage OAuth to allow an app to access platform resources on behalf of the user.  When you register a new Yammer app, it looks VERY similar to the appregnew.aspx page used to register apps for SharePoint:

SharePoint App Registration / Yammer App Registration

Yammer provides both client-side and server-side authentication approaches, but perhaps the easiest is to leverage the Yammer Login Button in the JavaScript SDK.  Referencing this script (with an app client id) will handle the entire OAuth process against Yammer (including API calls).

Yammer Script Include (note the app client id)

<script type='text/javascript' data-app-id='PRO_APP_ID' src='https://assets.yammer.com/platform/yam.js'></script>

 

Building a hybrid social app with SharePoint/Yammer brings some interesting considerations when it comes to authentication.  This is especially true when an app has two entry points…entry from SharePoint and entry from Yammer.  SharePoint OAuth requires us to know specific details about how the app is installed in SharePoint, such as the host web URL and app web URL.  These values are typically passed to the app as URL parameters from SharePoint.  Without these parameters, our app will assume the user navigated from Yammer (or entered the app URL directly) and default to Yammer as the social platform.

Check if user came from SharePoint

//get sharepoint app params
var shpt = decodeURIComponent(getQueryStringParameter('shpt'));
var hostweburl = decodeURIComponent(getQueryStringParameter('SPHostUrl'));
appweburl = decodeURIComponent(getQueryStringParameter('SPAppWebUrl'));
if (appweburl != 'null') {
    //only display SharePoint social option if we came from SharePoint
    $('#divShptButton').css('display', 'block');
    $('#divShptButton').click(function () {
        window.location = hostweburl;
    });
}
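
The getQueryStringParameter function used above is not part of any SDK...it is the small helper most app-for-SharePoint samples carry along. A minimal sketch (note it returns null when the parameter is missing, which is why the code above compares against the string 'null' after decodeURIComponent):

getQueryStringParameter helper (sketch)

//read a named parameter from the current page's query string
function getQueryStringParameter(paramToRetrieve) {
    var query = document.URL.split('?')[1];
    if (!query)
        return null;
    var params = query.split('&');
    for (var i = 0; i < params.length; i++) {
        var pair = params[i].split('=');
        if (pair[0] == paramToRetrieve)
            return pair[1];
    }
    return null;
}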

 

When the app is accessed from SharePoint, we know the user has already authenticated to SharePoint.  However, we don’t know if the user is already authenticated to Yammer (regardless of single sign-on configuration).  As such, our app will check the login status (yam.getLoginStatus) of the user in Yammer and display Yammer’s Login Button accordingly.

Check user's Yammer login status

 

yam.getLoginStatus(function (response) {
    //set the yam auth response so we can later toggle the yammer login button
    if (response.authResponse) {
        yamAuth = true;
    }

    //check if we came from SharePoint
    if (appweburl != 'null') {
        if (!yamAuth) {
            //display the login control
            yam.connect.loginButton('#divYammerLogin', function (resp) {
                if (!auth) //this is a hack for double return from the login button
                {
                    auth = true;
                    if (resp.authResponse) {
                        display_type = 'yam';
                        application_start_yammer();
                    }
                }
            });
        }
    }
});

 

APIs

Yammer’s REST APIs are very similar to the REST APIs available in SharePoint.  Both platforms have basic APIs for getting profile information, following/followers, posts, etc.  To minimize the platform-specific logic, I added a layer of abstraction between the social platforms and the user interface using platform independent functions and objects.  The key was to determine the properties my app needed for each entity and how those properties mapped to the json returned from the APIs.  Below is a sample of how I did this for a user.

Function abstraction for users

//################ Get User ################
function get_user(user_id, callback) {
    if (display_type == 'yam')
        get_yammer_user(user_id, function (user) {
            callback(user);
        });
    else
        get_sharepoint_user(user_id, function (user) {
            callback(user);
        });
}
function get_yammer_user(user_id, callback) {
    yam.request({
        url: 'https://www.yammer.com/api/v1/users/' + user_id + '.json',
        method: 'GET',
        success: function (user) {
            callback(parse_yammer_entity(user));
        },
        error: function (err) {
            callback(null)
        }
    });
}
function get_sharepoint_user(user_id, callback) {
    executor.executeAsync({
        url: appweburl + "/_api/SP.UserProfiles.PeopleManager/GetPropertiesFor(accountName=@v)?@v='" + user_id.replace('#', '%23') + "'",
        method: 'GET',
        headers: { 'Accept': 'application/json; odata=verbose' },
        success: function (data) {
            var user = $.parseJSON(data.body);
            callback(parse_sharepoint_entity(user.d));
        },
        error: function (err) {
            callback(null);
        }
    });
}

 

Parse Yammer and SharePoint data into common entity

function parse_yammer_entity(entity) {
    return {
        'name': entity.full_name,
        'id': entity.id,
        'type': entity.type,
        'pic': entity.mugshot_url_template,
        'privacy': entity.privacy,
        'followers': (entity.stats != null) ? entity.stats.followers : null,
        'followers_loaded': 0,
        'followers_more': false,
        'following': (entity.stats != null) ? entity.stats.following : null,
        'following_loaded': 0,
        'following_more': false,
        'members': (entity.stats != null) ? entity.stats.members : null,
        'members_loaded': 0,
        'members_more': false
    };
}
function parse_sharepoint_entity(entity) {
    return {
        'name': entity.DisplayName,
        'id': entity.AccountName,
        'type': 'user',
        'pic': (entity.PictureUrl == null) ? '/style/images/nopic.png' : entity.PictureUrl,
        'privacy': null,
        'followers': 0,
        'followers_loaded': 0,
        'followers_more': false,
        'following': 0,
        'following_loaded': 0,
        'following_more': false,
        'members': 0,
        'members_loaded': 0,
        'members_more': false
    };
}

 

As you can see in the sample above, both platforms have their own APIs for making cross-domain REST calls.  In SharePoint, we use the executeAsync method of the RequestExecutor class (found in the SP.RequestExecutor.js library).  In Yammer, we use the yam.request method.  Both of these methods do more than just handle cross-domain calls; they also handle all the OAuth/token plumbing so that it is transparent to the developer.

Pitfalls and Lessons Learned

Although most of my development on this hybrid app was smooth sailing, there were a few pitfalls I ran into that you can learn from:

Cross-domain Conflicts
It turns out that SharePoint's RequestExecutor and Yammer's JavaScript SDK conflict with each other over the event listeners used to process responses from cross-domain REST requests.  I spent hours trying to unregister the conflicts, but ultimately found an easier way to address the issue.  I decided to load the Yammer scripts by default (so I can check for Yammer login status) and redirect the user to the same page (but without the Yammer scripts) if they select SharePoint as the social platform.  This means that only one platform's libraries are registered at any given time.

Redirect for SharePoint

//handle SharePoint login click
$('#divShptLogin').click(function () {
    //redirect back to page and allow code-behind to strip out Yammer include
    window.location = window.location + '&shpt=1';
});

 

Throttling
Yammer institutes rate limits on their REST APIs.  These limits vary, but are throttled at 10 requests in 10 seconds for the profile and following APIs I leveraged in Social Nucleus.  This comes into play for APIs that are paged.  For example, popular users or groups could have thousands of followers in Yammer.  The API for followers brings back blocks of 50 at a time.  I originally looked to load all followers recursively, but quickly ran into these throttled limits.  Instead, I notify the user that the first 50 are loaded and allow them to load more.
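
Below is a rough sketch of the on-demand pattern I landed on. Note the endpoint URL and its page parameter are illustrative...check the Yammer REST API documentation for the exact follower resources and their paging behavior:

Loading followers one page at a time (sketch)

//load one block (~50 records) of followers on demand instead of recursively
var followersPage = 1;
function load_more_followers(user_id, callback) {
    yam.request({
        url: 'https://www.yammer.com/api/v1/users/followed_by/' + user_id + '.json?page=' + followersPage,
        method: 'GET',
        success: function (data) {
            followersPage++; //advance so the next "load more" click fetches the next block
            callback(data);
        },
        error: function (err) {
            callback(null); //likely rate limited...prompt the user to try again shortly
        }
    });
}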

Global Catalog
Before an app for Yammer is flagged as "Global", it will only function against the home network of the developer.  Many organizations might have multiple networks (ex: divisions) but still want the app private.  Yammer has the ability to mark apps with a hidden "Global" flag so the app can function against multiple networks, but not display in the global app catalog.  If you have this scenario, I would suggest posting a message to the Yammer Developer Network.  Another related roadblock exists with app submissions to the SharePoint Store.  During the validation process for the SharePoint Store, testers will need to validate the app against a test/demo network in Yammer in order to approve the app.  For this to occur, you will likely need Yammer app validation prior to SharePoint app validation.

Redirects
Since Yammer uses a static URL redirect during OAuth, I originally found it difficult to manage development and production environments.  My solution was to register two Yammer apps…a development app that redirects to localhost and a production app that redirects to my production web application running in Windows Azure.  I used a simple DEBUG check to emit the correct include.  If you look at the code below, you might find it odd that I used LiteralControl objects instead of .NET's ScriptManager.  Yammer requires an additional attribute (data-app-id) on the script include, which forced me to do it this way.

Handle Dev and Prod app ids for Yammer

protected void Page_Load(object sender, EventArgs e)
{
    if (Page.Request["shpt"] == null || Page.Request["shpt"] != "1")
    {
        #if DEBUG
            Page.Header.Controls.Add(new LiteralControl("<script type='text/javascript' data-app-id='DEV_APP_ID' src='https://assets.yammer.com/platform/yam.js'></script>"));
        #else
            Page.Header.Controls.Add(new LiteralControl("<script type='text/javascript' data-app-id='PRO_APP_ID' src='https://assets.yammer.com/platform/yam.js'></script>"));
        #endif
    }
}

 

IE Security Zones
IT and InfoSec Pros might love Internet Explorer's security zones, but I despise them as an app developer.  When SharePoint/Yammer and your app live in different security zones, users can experience inconsistent results (ex: pictures fail to load because they live in another zone).  Even worse…when your app defaults to the "Internet" zone, the entire Yammer OAuth process breaks (popups and cross-window communication are blocked).  The only client-side solution I have found is to put the app in the "trusted sites" security zone.  Chrome and Firefox don't have this security "feature" and thus don't have the issue.  I plan on writing a post on these security zone hardships with apps in the future, but be warned…

Final Thoughts

I hope this post helped illustrate the ease of developing apps for SharePoint AND Yammer.  Yammer is a significant investment for SharePoint and will drive Microsoft's social direction in the platform.  If you develop apps for SharePoint, consider Yammer in those solutions if they have a social play.  Providing Yammer hooks will give the app greater visibility/reach and better position it for the future.

Taxonomy Picker in SharePoint Provider-Hosted App


One of the exciting announcements at the 2014 SharePoint Conference was the general availability of the Office App Model Samples (OfficeAMS) on Codeplex.  OfficeAMS is a grass-roots effort by volunteer Microsoft developers to provide code samples using the Office/SharePoint app model that address common customization scenarios.  The samples in OfficeAMS include many scenarios that the SharePoint community felt were impossible or challenging to deliver without full-trust code.  It includes almost 30 complex scenarios such as site provisioning, branding, templating, profile sync, UX controls, and many more. Vesa Juvonen has a more comprehensive write-up of OfficeAMS on his blog (Vesa was also a significant contributor to the effort).

One of the scenarios that I contributed in OfficeAMS is a Taxonomy Picker control for provider-hosted apps. I used this control in several SPC sessions and was surprised by the number of people interested in using it in their apps. In this post, I will detail this Taxonomy Picker control and how to use it in your apps.  I’ve also included a video that outlines the use of the control.


Setup

The taxonomy picker connects to TermSets in the Managed Metadata Service in SharePoint. To do this from an app, the app needs a minimum of Read permissions to the Taxonomy scope. The app needs Write permissions to the Taxonomy scope if you plan to use “Open” TermSets that users can create Terms in (aka – “Folksonomies”).

Taxonomy scoped Permissions in AppManifest.xml

The app project also needs to include several script and style files from the OfficeAMS download.  These are available in the Contoso.Components.TaxonomyPicker project.  The important files are as follows:

  • Scripts\taxonomypickercontrol.js
  • Scripts\taxonomypickercontrol_resources.en.js
  • Styles\taxonomypickercontrol.css
  • Styles\Images (entire folder of images)
OPTIONAL STEPS: If you plan to use the taxonomy picker in conjunction with ListItems in SharePoint (ex: read/write items from lists with managed metadata columns), you should also add a reference to the Microsoft.SharePoint.Client.Taxonomy.dll assembly, which makes it easy to work with TaxonomyFieldValue and TaxonomyFieldValueCollection values. The OfficeAMS project also has a TaxonomyPickerExtensions.cs file with extension methods for working with these values and the taxonomy picker. Again, the assembly reference and extension class are optional steps for working with ListItems that have managed metadata columns and are explained below.

 

The taxonomy picker is a client-side control, which uses the client-side object model (CSOM) to query the Managed Metadata Service.  To do this, the app needs to reference a number of JavaScript files from SharePoint, such as sp.runtime.js, sp.js, sp.requestexecutor.js, init.js, and sp.taxonomy.js.  The sp.taxonomy.js file allows the app to make calls into the Managed Metadata Service. I like to use Microsoft's AJAX library to load these scripts dynamically using the getScript extension.  This library is automatically added to ASP.NET pages with a ScriptManager control. Below is the complete block of script I use to load the appropriate files for both the taxonomy picker and the SharePoint client chrome control:

Dynamically Loading Scripts

//Wait for the page to load
$(document).ready(function () {

    //Get the URI decoded SharePoint site url from the SPHostUrl parameter.
    var spHostUrl = decodeURIComponent(getQueryStringParameter('SPHostUrl'));
    var appWebUrl = decodeURIComponent(getQueryStringParameter('SPAppWebUrl'));
    var spLanguage = decodeURIComponent(getQueryStringParameter('SPLanguage'));

    //Build absolute path to the layouts root with the spHostUrl
    var layoutsRoot = spHostUrl + '/_layouts/15/';

    //load all appropriate scripts for the page to function
    $.getScript(layoutsRoot + 'SP.Runtime.js',
        function () {
            $.getScript(layoutsRoot + 'SP.js',
                function () {
                    //Load the SP.UI.Controls.js file to render the App Chrome
                    $.getScript(layoutsRoot + 'SP.UI.Controls.js', renderSPChrome);

                    //load scripts for cross-site calls (needed to use the taxonomy picker control in an IFrame)
                    $.getScript(layoutsRoot + 'SP.RequestExecutor.js', function () {
                        context = new SP.ClientContext(appWebUrl);
                        var factory = new SP.ProxyWebRequestExecutorFactory(appWebUrl);
                        context.set_webRequestExecutorFactory(factory);
                    });

                    //load scripts for calling taxonomy APIs
                    $.getScript(layoutsRoot + 'init.js',
                        function () {
                            $.getScript(layoutsRoot + 'sp.taxonomy.js',
                                function () {
                                    //TAXONOMY PICKERS READY TO BIND
                                });
                        });
                });
        });
});

 

Binding the Taxonomy Picker

After the setup steps, the taxonomy picker is extremely easy to implement. All you need is a hidden input control in html and one line of JavaScript to convert the input into a taxonomy picker:

Binding the Taxonomy Picker

<input type="hidden" id="taxPickerKeywords" />


//bind the taxonomy picker to the default keywords termset
$('#taxPickerKeywords').taxpicker({ isMulti: true, allowFillIn: true, useKeywords: true }, context);

 

The taxpicker extension takes two required parameters...the options for the taxonomy picker and the SharePoint client context object (SP.ClientContext).  The taxonomypicker.js version I’ve attached also has a third optional parameter for a value changed callback (not in OfficeAMS yet).  Below is a table that details the options you can pass into the taxpicker extension and some examples of bindings:

Parameters

options: The first parameter of the TaxonomyPicker sets the options for the control.  The properties that can be set include:
  • isMulti– Boolean indicating if the taxonomy picker supports multiple values
  • allowFillIn– Boolean indicating if the control allows fill-ins (Open TermSets only)
  • termSetId– the GUID of the TermSet to bind against (available from Term Mgmt)
  • useHashtags– Boolean indicating if the default hashtags TermSet should be used
  • useKeywords– Boolean indicating if the default keywords TermSet should be used
  • maxSuggestions– integer for the max number of suggestions to list (default is 10)
  • lcid– the locale ID for creating terms (default is 1033)
  • language– the language code for the control (defaults to en-us)
context: The second parameter is an initialized SP.ClientContext object
changed callback: The third parameter is an optional value changed callback delegate that fires when the value of the taxonomy picker changes

 

Examples of Taxonomy Picker Bindings

//bind the taxonomy picker to the default keywords termset
$('#taxPickerKeywords').taxpicker({ isMulti: true, allowFillIn: true, useKeywords: true }, context);

//Single-select open termset field (TODO: change the GUID to a termset id from your deployment)
$('#taxPickerOpenSingle').taxpicker({ isMulti: false, allowFillIn: true, termSetId: 'ac8b3d2f-37e9-4f75-8f67-6fb8f8bfb39b' }, context);

//Multi-select closed termset field (TODO: change the GUID to a termset id from your deployment)
$('#taxPickerClosedMulti').taxpicker({ isMulti: true, allowFillIn: false, termSetId: '1c4da890-60c8-4b91-ad3a-cf79ebe1281a' }, context);

//Use default Hashtags termset and limit the suggestions to 5 and value changed callback
$('#taxPickerHashtags').taxpicker({ isMulti: true, allowFillIn: true, useHashtags: true, maxSuggestions: 5 }, context, function() { alert('My value changed'); });

 

You can find the ID of a TermSet by selecting the TermSet in the Term Store Manager.  The Term Store Manager is exposed to site collection administrators within a site collection or from Tenant/Central Administration.

Getting TermSet ID from Term Store Manager
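
If clicking through the Term Store Manager is inconvenient, the same sp.taxonomy.js library the picker already loads can enumerate TermSet IDs for you. A rough sketch, assuming the scripts from the setup section are loaded and the app has at least Read permission on the Taxonomy scope:

Enumerating TermSet IDs with JSOM (sketch)

//list every TermSet name/ID in the default term store for this context
function listTermSetIds(context) {
    var session = SP.Taxonomy.TaxonomySession.getTaxonomySession(context);
    var termStore = session.getDefaultSiteCollectionTermStore();
    var groups = termStore.get_groups();
    context.load(groups, 'Include(Name, TermSets.Include(Name, Id))');
    context.executeQueryAsync(function () {
        var groupEnum = groups.getEnumerator();
        while (groupEnum.moveNext()) {
            var termSetEnum = groupEnum.get_current().get_termSets().getEnumerator();
            while (termSetEnum.moveNext()) {
                var ts = termSetEnum.get_current();
                console.log(ts.get_name() + ': ' + ts.get_id().toString());
            }
        }
    }, function (sender, args) {
        console.log('Failed to load term sets: ' + args.get_message());
    });
}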

Reading and Writing Values

If you want to work with the taxonomy picker in .NET, you need to follow the optional steps in the setup section above to include the Microsoft.SharePoint.Client.Taxonomy reference and extension class.  You should also change the hidden fields to include the attribute runat=”server”.  The taxonomy picker control can be set by initializing the value of the hidden field with JSON in the following format [{"Id":TermID, "Name": TermLabel}] (ex: [{"Id":"a8ff0f61-2c10-4add-8307-cb1712703887", "Name": "Exchange"}]).  The TaxonomyPickerExtensions.cs extension class includes extension methods for converting TaxonomyFieldValue and TaxonomyFieldValueCollection objects into this JSON format:

Setting Taxonomy Picker Value from .NET

protected void Page_Load(object sender, EventArgs e)
{
    //The following code shows how to set a taxonomy field server-side
    var spContext = SharePointContextProvider.Current.GetSharePointContext(Context);
    using (var clientContext = spContext.CreateUserClientContextForSPHost())
    {
        var list = clientContext.Web.Lists.GetByTitle("TaxTest");
        var listItem = list.GetItemById(1);

        clientContext.Load(listItem);
        clientContext.ExecuteQuery();

        taxPickerOpenSingle.Value = ((TaxonomyFieldValue)listItem["OpenSingle"]).Serialize();
        taxPickerClosedMulti.Value = ((TaxonomyFieldValueCollection)listItem["ClosedMulti"]).Serialize();
    }
}

 

The taxonomy picker will store the selected terms in the hidden field using JSON string format.  These values can be accessed by other client-side scripts or server-side following a post.  The JSON will include the Term Name, Id, and PathOfTerm (ex: World;North America;United States).  JSON.parse can be used client-side to convert the hidden input's value to a typed object, and any number of server-side libraries can be used (ex: JSON.NET).
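
For example, reading the picker's selections client-side is a one-liner with JSON.parse (taxPickerKeywords is the hidden input from the binding example above):

Reading the picker value client-side

//parse the hidden input's JSON value back into an array of term objects
var terms = JSON.parse($('#taxPickerKeywords').val() || '[]');
$.each(terms, function (index, term) {
    console.log(term.Name + ' (' + term.Id + ') path: ' + term.PathOfTerm);
});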

Conclusion

Building in the SharePoint app model does NOT mean you have to compromise on user experience. I hope this taxonomy picker helps illustrate that.  You can get the taxonomy picker and many other code samples in the Office App Model Samples on Codeplex.

Code sample from blog: TaxonomyPickerSample.zip

SharePoint Timer Jobs running as Windows Azure Web Jobs


The Office App Model Samples (Office AMS) contain some great examples of writing console applications using CSOM. A console application is the foundation for building timer jobs against SharePoint Online and offers a safer alternative to full-trust timer jobs written for on-premises SharePoint. In this post, I will show you how incredibly easy it is to build a timer job as a console application, package it, deploy it as a Web Job in Windows Azure, and schedule it to periodically run against SharePoint. Kirk Evans authored a fantastic post last month on Building a SharePoint App as a Timer Job that I highly recommend you read to understand the logistics of running a console application using OAuth and the app model.


OAuth or Service Account

Kirk’s sample leveraged OAuth and App Only permissions to perform operations on the SharePoint tenant. The alternative is to leverage a service account and explicitly authenticate as this account to perform operations.  This can be done using normal Windows Authentication or using the SharePointOnlineCredentials class with SharePoint Online:

Using SharePointOnlineCredentials for ClientContext

//convert the password from configuration into a SecureString
char[] pwdChars = ConfigurationManager.AppSettings["AccountPassword"].ToCharArray();
System.Security.SecureString pwd = new System.Security.SecureString();
for (int i = 0; i < pwdChars.Length; i++)
    pwd.AppendChar(pwdChars[i]);

//authenticate the ClientContext with the service account credentials
ClientContext cc = new ClientContext("https://tenant.sharepoint.com");
cc.AuthenticationMode = ClientAuthenticationMode.Default;
cc.Credentials = new SharePointOnlineCredentials(ConfigurationManager.AppSettings["AccountUsername"], pwd);

 

I prefer the OAuth approach used by Kirk as it is easier to provide tenant-wide permissions and you don’t have to deal with periodic password changes a service account might encounter.  The OAuth approach requires you to deploy a provider-hosted app and then leverage that app’s client id and client secret in a console application. Kirk’s post covers this in great detail.

The example below is a timer job written to leverage CSOM and OAuth in a console application. This specific job copies OneDrive Usage Guidelines to every OneDrive for Business site (aka every user's my site). This is a very common ask from customers and represents a perfect scenario for app model timer jobs.

Console application to upload Usage Guidelines to all OneDrive sites

using Microsoft.SharePoint.Client;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace OneDrivePoliciesJob
{
    class Program
    {
        //this is a hack...real solution would enumerate sites or store them in a database
        private static string[] sites = new string[] {
            "https://rzna-my.sharepoint.com/personal/ridize_rzna_onmicrosoft_com",
            "https://rzna-my.sharepoint.com/personal/alexd_rzna_onmicrosoft_com",
            "https://rzna-my.sharepoint.com/personal/annew_rzna_onmicrosoft_com",
            "https://rzna-my.sharepoint.com/personal/roby_rzna_onmicrosoft_com"};

        /// <summary>
        /// This sample shows how to execute a timer job against SharePoint using CSOM.  Although built as a console app,
        /// the executable will be uploaded to Azure and run as a WebJob in azurewebsites (ie - scheduled task).  This sample
        /// uses OAuth to authenticate with app-only calls.  To do this, you need to first trust the client id and secret with
        /// SharePoint.  You can also use the SharePointOnlineCredentials class to authenticate with credentials instead of OAuth
        /// </summary>
        /// <param name="args"></param>
        static void Main(string[] args)
        {
            foreach (var site in sites)
            {
                //get site
                Uri siteUri = new Uri(site);

                //Get the realm for the URL
                string realm = TokenHelper.GetRealmFromTargetUrl(siteUri);

                //Get the access token for the URL.  Requires this app to be registered with the tenant
                string accessToken = TokenHelper.GetAppOnlyAccessToken(TokenHelper.SharePointPrincipal, siteUri.Authority, realm).AccessToken;

                //Get client context with access token
                using (var clientContext = TokenHelper.GetClientContextWithAccessToken(siteUri.ToString(), accessToken))
                {
                    var folder = clientContext.Web.Lists.GetByTitle("Documents").RootFolder;
                    clientContext.Load(folder);
                    clientContext.ExecuteQuery();

                    //upload the "OneDrive for Business Usage Guidelines.docx"
                    using (var stream = System.IO.File.OpenRead("OneDrive for Business Usage Guidelines.docx"))
                    {
                        Console.WriteLine("Uploading guidelines to " + site);
                        FileCreationInformation fileInfo = new FileCreationInformation();
                        fileInfo.ContentStream = stream;
                        fileInfo.Overwrite = true;
                        fileInfo.Url = "OneDrive for Business Usage Guidelines.docx";
                        folder.Files.Add(fileInfo);
                        clientContext.ExecuteQuery();
                    }
                }
            }
        }
    }
}

 

Packaging

After testing the console application locally, we can turn to packaging it for deployment into Windows Azure. The first step in packaging is to ensure all the referenced assemblies will be available in the cloud. By default, Windows Azure won’t have CSOM assemblies (ex: Microsoft.SharePoint.Client, Microsoft.SharePoint.Client.Runtime). Any assembly reference in question should be configured with “Copy Local” set to True.

After configuring these references, rebuild the project to create a new output. A traditional provider-hosted app packages with an .app extension, which is deployed into an app catalog. The console application will package as a .zip of the applications output directory (ex: <project folder>\bin\Debug or <project folder>\bin\Release). It does not matter what you name this zip file. Notice below that I zipped up the Debug output folder that contains the console application executable and referenced assemblies:

Deploying and Scheduling

Azure Web Sites have a new feature called Web Jobs that can host background operations (ex: console applications) to run on demand, continuously, or on a schedule.  The first step is provisioning an Azure Web Site in the Azure Management Portal (https://manage.windowsazure.com). Select “Web Sites” from the side navigation and use the NEW button to create a new web site. Here is an example of using Quick Create to create a Web Site in the North Central Azure data center:

Once the Web Site finishes provisioning (5-10 seconds), open it by clicking it in the Web Site listing. You will notice a new tab across the top for WEBJOBS (currently in Preview). Click on this link to display the list of Web Jobs (should be empty):

Next, click on the “ADD A JOB” link to launch the New Job dialog. Specify a name for the job, browse to the zip file we created of the console application’s output directory, and set the Web Job to “Run on a schedule” before clicking the next button:

On the next screen, you can specify the recurrence and schedule for the Web Job. The schedule can easily be changed later, and the Web Job can be run on demand (very similar to a typical SharePoint Timer Job):

After configuring and accepting the schedule, the Web Job will deploy into Windows Azure and begin running on the specified recurrence. The Web Jobs listing will show the status of the job, the time and outcome of the last run, and links to change the schedule and view the logs. This screen also provides the ability to run Web Jobs on demand by selecting a job and clicking on the RUN ONCE button in the footer:

Clicking on the schedule link will take you to the schedule dashboard, which provides some high-level metrics of jobs and links to tweak the schedule:

Final Thoughts

Kirk Evans and Office AMS have outlined some great patterns for performing operations against SharePoint using console applications and CSOM. If the operation a timer job performs can be refactored to use CSOM, Windows Azure Web Jobs can replicate the SharePoint Timer Service's scheduling and execution experience. This pattern works for both SharePoint Online and SharePoint On-Premises and will help better prepare you for upgrades and the cloud.

Download the OneDrive Usage Guidelines Timer Job used in this post: http://1drv.ms/1jox3Hq


Yammer Analytics with Excel and Power BI


Congratulations, your organization has rolled out Yammer, the best darn enterprise social platform on the planet! You probably already have some great adoption momentum, exciting new communities of knowledge, and employees/customers collaborating across organization boundaries like never before. But now it’s time to start analyzing the information contained within Yammer, identify key trends/insights, and use those trends/insights to become a more responsive organization. You might even have your boss (or their boss) on your back to start measuring ROI from the Yammer investment. Where to start…

Sure, Yammer provides high-level metrics, exports, and APIs that, together, contain most of the raw data you would use to perform social mining on the enterprise. However, exports and APIs leave most Yammer Administrators feeling like the information is still locked far inside Yammer. They need simple and flexible reporting tools that are familiar and easy to use. Fortunately, the Microsoft BI stack with Microsoft Excel and Power BI is here to the rescue!

In this post, I will outline the steps to take standard data exports from Yammer and convert them into detailed reporting models with rich data visualizations. Other than a few data enhancement utilities (that I'll provide for free), we'll achieve everything using Microsoft Excel and Power BI. The steps outlined in this post are also illustrated in the video below and in a related session I delivered at the 2014 SharePoint Conference titled Yammer mining - dig in and "listen" to what your big *social* data is saying.


Collecting Raw Social Data

We will use a combination of Yammer data exports and APIs to collect the data for our reporting model. Yammer Network Administrators can collect data exports from Yammer’s Network Admin portal.  The data export interface only has a few parameters such as the export start date and checkbox options for attachments and external networks. Anything more granular will need to be achieved through post-export filtering.

  1. Login to Yammer as a network administrator (only available to network admins)
  2. Navigate to the Network Admin portal within Yammer
  3. Select “Export Data” from the “Content and security” section of the side navigation
  4. Select a start date for the export (read: all additional filtering must be completed after the export)
  5. Optionally include attachments and external networks

What You Get

Yammer exports include most of the essential data elements needed to build the baseline reporting model and will serve as the basis for collecting additional data attributes.  Below is a comprehensive list of elements (aka - “dimensions”) included in the export and a diagram of how they relate to each other:

  • Admins
  • Files
  • Groups
  • Messages
  • Networks
  • Pages
  • Topics
  • Users

In the relationship diagram below, notice that Files and Topics do not have a direct relationship with Messages. These dimensions ARE related, but data returned from Yammer does not support the creation of relationships without additional data manipulation. Don’t worry, we’ll investigate data manipulation shortly.

For this post, we will concentrate on building a reporting model with Messages, Users, and Groups. The other dimensions are interesting, but Messages, Users, and Groups are likely the most valuable to start with. The methodology applied to these can be replicated to incorporate the other dimensions for a more comprehensive social reporting model.

What is Missing

Although Messages, Users, and Groups encompass the primary dimensions in our reporting model, some dimensions and attributes aren’t provided in the data exports. I’ve listed some of the major gaps below, but I’m sure you will find others.

  • Detailed Date Dimension– although most of the exports have date/time attributes, date/time values can be challenging to query against. Providing a formal date dimension is much more user friendly.  For example, “Show me Message Counts by Group between 1/1/2014 and 1/31/2014” can be simplified with a date dimension to “Show me Message Counts by Group in January”
  • Mentions– User and Topic mentions are embedded in the body of the exported messages (ex: “I am preparing for my [Tag:3422:SPC14] with my co-presenter [User:773833:nmiller]”). This makes mentions impossible to effectively query. Besides the challenge of being hidden in an unstructured message body, Mentions actually have a 1:many relationship with messages, meaning a single message can (and often does) have numerous mentions. To support this relationship, mentions should be broken out as separate dimension(s)...see the parsing sketch after this list
  • Following– the data exports do not contain any details on who follows who, who follows what group, or even general follower/following counts for a user
  • Likes/Shares– the data exports do not contain any information on the number of likes/shares a message has or who performed the like/share
  • Message Sentiment– one of the hot trends in social mining is to perform sentiment analysis on social activity. Rolled-up sentiment scores can provide a high-level monitor of positive/negative activity in a social network. This is generally a “nice to have” and definitely not included in the standard Yammer data exports
  • Time to Reply– although the messages in the data export are easily grouped by thread, it isn’t easy to calculate the time between messages in a thread. This information can be helpful in comparing response time to traditional email communications or measuring community responsiveness
  • Detailed User Demographics– the data export for Users provides some very basic user demographics (job_title, location, and department). However, I have found the data quality of these attributes to be extremely poor in every network export I’ve worked with. It seems that only a small population of users (10-20%) bothers to populate these fields in their profiles. This might improve once we have a more unified user profile between Yammer, SharePoint, and Active Directory. However, an HRIS system tends to be a more definitive source for user demographic information in an organization. It might make sense to work with Human Resources to get an acceptable export of demographics. Location/Geography is particularly useful, as the Microsoft BI tools have some fabulous location-based data visuals we can apply to it
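
Because mentions are buried in the message body, some parsing is required to break them out into their own dimension. Here is a rough sketch of the kind of token parsing involved, based on the [Tag:id:label] and [User:id:label] format shown above (the function and field names are illustrative, not the export utility's actual code):

Parsing mention tokens out of a message body (sketch)

//split embedded mention tokens (ex: [User:773833:nmiller]) out of a message body
//returns one row per mention so a 1:many Mentions dimension can be built
function extractMentions(messageId, body) {
    var mentions = [];
    var pattern = /\[(Tag|User):(\d+):([^\]]+)\]/g;
    var match;
    while ((match = pattern.exec(body)) !== null) {
        mentions.push({
            messageId: messageId,
            mentionType: match[1],  //'Tag' or 'User'
            mentionId: match[2],
            mentionLabel: match[3]
        });
    }
    return mentions;
}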

Enhanced Exports

When I first set out to document my approach to Yammer analytics, I began to write detailed steps for filling in the gaps outlined in “What is Missing”. This involved calling Yammer REST APIs, using Office Apps, and complex Excel formulas. Ultimately, I felt the effort was getting overly complex for the average Yammer Administrator to accomplish. Instead, I decided to build an export utility hosted in Windows Azure to perform all the export and augmentation for you. For those interested in the details of this utility (ex: for the purpose of adding additional enhancements), I have provided the entire Visual Studio solution HERE for download.

NOTICE: The Yammer Export Utility is a free tool offered warranty-free and without support. It will perform a full Yammer export from the dates specified in the wizard. Although this may contain private messages and messages in private groups, the utility will completely ignore these records if you choose to exclude them. The Yammer Export Utility will not use your data for any reason other than to provide an enhanced export. The Yammer Export Utility will not provide your data to any 3rd party with the exception of an optional sentiment analysis service. Please be aware that Yammer imposes rate limits on API calls (“speed limits” on the information superhighway). These limits can slow export completion to hours or even days depending on the volume of export activity and users to process.

 

The Yammer Export Processor is available at https://yammer.azurewebsites.net. It provides a wizard that will allow a Yammer Network Administrator to configure and perform an enhanced export from a Yammer network (user MUST be an administrator of the network they select for export). The first step will ask you to log into Yammer:

Next, you will be asked to agree to the terms and conditions, which validates that you understand the terms of use outlined both in the utility and above:

After accepting the Terms and Conditions, you must select a network to perform the export on. You MUST be a verified administrator on the Yammer network you select in order to perform the export:

Next, the wizard will ask you to specify a start date for the export. The utility will export everything from this date forward. Be cautious in trying to export too much content at once…longer timeframes can significantly increase processing time:

After specifying an export timeframe, you can customize the enhancement activities performed on the export, including likes, shares, mentions, follows, and more:

If you selected “Process message sentiment” on the export options screen, you will be prompted to provide an API key from Mashape.com, which hosts the sentiment analysis engine for the export processor:

Finally, the Yammer Export Processor will display a summary screen to review before starting the export. This is your last chance to review the details before processing:

Once you start the export, it could take a few minutes to show progress. Complete processing time will vary greatly based on the export timeframe and the volume of content in the network (including users). Keep in mind that large exports could take days to complete. Bookmark the URL and check back later to get a status of the export:

Once the export completes, it will have a link at the top to download the enhanced export files:

If you want to use the pre-built Excel model (explained later in the post), you MUST copy all the export files to C:\Exports. The data connections in the provided Excel model are configured to this specific location:

Modeling Raw Social Data

Great, we have a bunch of raw data…now what? Excel has all the tools we need to import the raw social data, model it with relationships, and build rich/insightful visuals. Rather than building an Excel model from scratch, I’ve provided a pre-built model that is engineered to easily refresh against the output of the Yammer Export Processor. The important pre-requisite is that you have Excel 2013 with Power Pivot enabled and you have copied the data export files to C:\Exports on your local computer.

Download the YammerPowerBI.xlsx workbook to your local machine and open it in Excel 2013. Click on the POWERPIVOT tab in the ribbon (Power Pivot tab is missing? Enable it) and click the Manage button to launch the Power Pivot window:

Next, find the refresh button in the ribbon and click on the down arrow to select Refresh All.

This will launch the Data Refresh dialog, which will refresh the workbook with the data from the Yammer Export Processor that was copied to C:\Exports:

The data refresh could take time to complete depending on the volume of content in the exports. For very large exports, it is recommended you leverage the 64-bit version of Office 2013, which allows Excel to use more local resources to work with the big data in memory. Once the data refresh is complete, you can close the Power Pivot window and experiment with some of the pre-built Power View dashboards (or build your own visuals).

Data Visualizations

The provided YammerPowerBI.xlsx workbook already contains a number of pre-built Power View dashboards. Power View is just one of many visualizations available in Excel and SharePoint. Here is a more comprehensive listing with examples:

Power View – Power View delivers highly interactive dashboards leveraging a number of unique visuals that are automatically connected to each other. Power View dashboards live within the Excel workbook and can be uploaded to SharePoint for online viewing:

Power Maps– if you have (or can get) accurate location information for users, Power Maps provides the premier location-based reporting, with rich visualization layers and time-based animations. Below is a video recording of a Power Map report showing Message Count and Sentiment by Location over Time:

(Please visit the site to view this video)

Power BI for SharePoint Online– SharePoint Online users can license the Power BI app for SharePoint. This provides a number of online BI services, including Q&A, a semantic BI search tool. With Q&A, users can simply ask questions in a search box and Power BI will display the appropriate visualization (ex: “Show me thread count by group for 2013”):

Excel Pivot Tables/Charts– Excel has traditionally provided interactive Pivot Tables/Charts, and Excel 2013 enhances that experience with additional chart visuals and enhanced slicers/filters:

Conclusion

I hope this post and the tools I’ve provided help you realize the social insights you are looking for with Yammer. If you want to better understand how to build some of these exports/models from scratch, I highly encourage you to watch my session at the SharePoint Conference.

Download the pre-built YammerPowerBI.xlsx workbook

Download the code for the Yammer Export Processor as a Visual Studio solution

NOTICE: The Yammer Export Processor has been tested with the best resources I have available to me. That said, I'm not an admin of any large networks, so the utility isn't as well tested on large networks as I'd like. Please reach out to me at richdizz at outlook dot com if you run into any issues running the utility and I'll do my best to debug.

 

Connected SharePoint App Parts


Last year I authored a post on using SharePoint 2013's embed code web part as an alternative to custom app parts. The embed code approach has the advantage of not using IFRAMEs, but isn’t repeatable, relies on the client for all application logic, and eliminates the use of server-side technologies like ASP.NET and MVC. But is the fact that app parts use IFRAMEs really that bad? IFRAMEs provide a nice security boundary/sandbox, and using them doesn't mean the user experience has to be compromised. You see, app parts can reference styling from the host web and can even communicate with the host web for resizing. But can they support complex web part capabilities such as connections with other app parts? In this post, I will outline an approach to communicate between app parts using HTML5 technologies that should work in any modern browser.

(Please visit the site to view this video)

Window/IFRAME Communication

The HTML5 postMessage API enables sending data between windows/IFRAMES, even across domains. This is the same method used by SharePoint for resizing app parts and communication to/from pages displayed in the SharePoint dialog. The postMessage API is a safe approach to cross-domain communication because the window/IFRAME must explicitly "listen" for messages, else they are ignored.

Listening for messages is identical for both a window and an IFRAME and uses window.addEventListener as seen below:

Listening for message
if (typeof window.addEventListener != 'undefined') {
    window.addEventListener('message', processMessage, false);
}
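
The processMessage handler referenced above is where an app part validates and reacts to incoming messages. A minimal sketch might look like the following; the expected origin, the JSON message shape, and the applyFilter function are all assumptions for illustration:

Sample processMessage handler
function processMessage(e) {
    // Only trust messages from the expected sender (hypothetical origin)
    if (e.origin !== 'https://contoso.sharepoint.com')
        return;

    // Messages are assumed to be JSON strings with a type and a value
    var msg;
    try { msg = JSON.parse(e.data); }
    catch (ex) { return; } // ignore messages that aren't our JSON format

    if (msg.type === 'filterChanged')
        applyFilter(msg.value); // hypothetical function that re-renders this app part
}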

 

Posting messages between window/IFRAME is slightly different. From the IFRAME, you call window.parent.postMessage and from the host page you post to the IFRAME by calling the script iframe.contentWindow.postMessage as seen below:

postMessage from IFRAME to parent page
window.parent.postMessage(message, document.referrer);

 

postMessage from parent page to IFRAME
document.getElementById('myIframe').contentWindow.postMessage(data, '*');

 

Applying to App Parts

In order to get app parts communicating with each other, the host page needs script to broker all the communication between app parts. This can be achieved with the new Script Editor web part, but I prefer the Content Editor, as it can be exported with our script and re-imported into the web part gallery for reuse. This allows an average end-user (who has no business messing with script) to place the elements on the page. In this host page script, I use jQuery to find all the app parts to post messages to:

Finding all App Parts on page
$('.ms-WPBody').find('iframe').each(function (i, e) {
    e.contentWindow.postMessage(data, '*');
});
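
Tying it together, the broker on the host page listens for a message posted up from any one app part and simply rebroadcasts it down to all app part IFRAMEs on the page. Here is a minimal sketch of that broker; the check that skips SharePoint's own resize messages is an assumption about their format, so adjust the filter to match your message scheme:

Host page broker script
if (typeof window.addEventListener != 'undefined') {
    window.addEventListener('message', function (e) {
        // Skip messages that look like SharePoint's app part resize protocol (assumed format)
        if (typeof e.data !== 'string' || e.data.indexOf('resize(') > -1)
            return;

        // Rebroadcast the message to every app part IFRAME on the page
        $('.ms-WPBody').find('iframe').each(function (i, el) {
            el.contentWindow.postMessage(e.data, '*');
        });
    }, false);
}

Note that this sketch echoes the message back to the sending app part as well; the processMessage handler in each app part can simply ignore messages it originated.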

 

Conclusion

I’m actually not a big fan of connected web parts as I think they are too complex for the average end-user to configure. However, this post was more about illustrating the flexibility of app parts even though they are loaded in IFRAMEs. The complete solution used in this post (and video) is available HERE.

Autohosted Apps are Dead…Big Deal!


When Microsoft introduced SharePoint’s Application Model, they released a preview hosting option called "autohosting". Autohosted apps were exclusive to SharePoint Online and allowed developers to automatically provision websites and (optional) databases in Windows Azure on-demand. Developers didn’t even need a Windows Azure account...everything was provisioned automatically in a Microsoft-owned Azure tenant. In my early app work, I made heavy use of autohosted apps. I even presented the very first autohosted apps session at the 2012 SharePoint Conference. However, it didn’t take long for me to gravitate to provider-hosted apps and abandon autohosting completely. Last week, Microsoft announced an end to the autohosted app preview. As such, I thought I’d provide my top-10 list for why autohosted apps are inferior to provider-hosted apps:

  1. Preview Feature
    Autohosted apps were always offered as a preview feature for SharePoint Online. Some preview features eventually become "official", but all are subject to change or sunset with little notice. Autohosted apps ultimately didn’t make the cut. After June 30th, developers will not be able to create new autohosted apps in SharePoint. Developers are encouraged to convert existing autohosted apps into provider-hosted apps, which is extremely easy to do.
  2. GUIDs for URLs
    All autohosted apps are deployed to a Microsoft-owned Azure tenant. Each autohosted app is assigned a GUID host-name under the o365apps.net domain. You cannot pick a host-name…you get a GUID. Seriously, who wants an app URL that looks like https://8e3b69b8-afe2-4b9a-a290-9c82cac2a050.o365apps.net???
  3. Tenancy Isolation
    Unless you leverage centralized app deployment, each install of an autohosted app will provision an isolated website and optional database (see figure below). This isolated tenancy might make sense for many apps, but other apps might need data to persist across sites (ex: Helpdesk tickets). Although not impossible in autohosted apps, it can be difficult to achieve given the default tenancy and scalability of autohosting.

    Figure 1: Default Autohosted Tenancy

     

  4. Not Scalable
    Autohosted apps are provisioned as small websites in a Microsoft-owned tenant of Windows Azure. You have NO ability to manually tweak these websites, including scaling them for more users/traffic. As such, Microsoft has been clear that autohosted apps weren't ideal for high-traffic or organization-wide solutions. They were designed to deliver team solutions for up to ~250 users.
  5. Painful Upgrades
    The remote web in an autohosted app is deployed automatically when the app is installed into a SharePoint site. Developers do not have direct access to this Azure website. Upgrading an autohosted app can only be achieved by deploying a new version of the app through the app catalog. Once the user accepts the new app version (required to get the update), a background process will take a full backup of the website (and database) and TRY to apply the update. The lack of upgrade transparency and flexibility is a big pain point of autohosted apps (imagine database schema updates). With a provider-hosted app, I can update my web project in seconds without my tenants having to do anything (they might not even know an upgrade occurred).
  6. Online Only
    Autohosted apps have always been an exclusive hosting option for SharePoint Online. Targeting only SharePoint Online customers greatly limits the reach of an app and neglects numerous potential on-premises customers. With very little effort, the same provider-hosted app can target both SharePoint Online and SharePoint on-premises.
  7. Developer Flexibility
    Autohosted apps offered no flexibility in hosting. You get a small Azure website with no ability to configure or manage it. This greatly limits the flexibility of the developer in using other platforms (ex: PHP, Ruby, etc) or even advanced .NET technologies. For example, Vesa Juvonen recently wrote a great post about delivering connected app parts using SignalR. Unfortunately, SignalR requires a special setting on the Azure website that you cannot configure with autohosted apps.
  8. Heavy Packaging
    An autohosted app package (.app) contains a compressed zip file with the entire web project (including all project assemblies and assembly references set to copy local). SharePoint uses this package to "auto" deploy the web assets to Azure each time the app is installed into a SharePoint site. Although this isn’t horrible, I’m not a big fan of uploading assemblies (read: intellectual property) to an app catalog that the entire organization likely has access to.
  9. Questionable Licensing
    Since autohosted apps have always been in preview, their licensing has always been in question. They are hosted in a Microsoft-owned Azure tenant, so what do they cost me as a SharePoint Online customer? How many autohosted apps do I get in SharePoint Online? Will it be licensed by the app, the app install, or the user? Too many licensing questions for me to be confident doing any significant development with them.
  10. False Sense of Ease
    Many people (including me) were attracted to autohosted apps given their (perceived) ease of hosting. Right-click project > deploy > DONE! You never had to mess with Azure or a hosting web server. However, Azure websites (which were also in preview when autohosted apps were announced but are now "official") are incredibly fast and easy to provision and deploy to. In fact, I can create a new Azure website and publish a provider-hosted app in about 15-30 seconds. Ultimately, a provider-hosted app deployed to an Azure website is about as easy as autohosting promised. However, the provider-hosted option doesn’t have all the limitations outlined in this article.

Hopefully this top-10 list will make you feel better about autohosted apps going away. Provider-hosted apps provide the ultimate in flexibility/scale and are the way to go for SharePoint extensibility!

Displaying Cross-Domain/Secure Images from SharePoint Apps


One of my biggest frustrations in developing apps is the cross-domain challenges that inherently exist when we decouple apps from the platform(s) they need to consume. There are a number of approaches for dealing with cross-domain data such as cross-origin resource sharing (CORS) and JSONP. However, images are trickier…especially when they require authentication. Internet Explorer Security Zones can also wreak havoc on images when domains are in different zones. In this post, I’ll illustrate a technique for dealing with these image challenges. The following video illustrates the concepts of this post:

(Please visit the site to view this video)

The Challenge

OAuth enables an application to consume 3rd party data/services without the knowledge of the user’s credentials to that 3rd party. It accomplishes this (at least the first time) through redirection to the 3rd party (where the user might get prompted to login) for tokens the app can use to retrieve data. Browsers are very efficient at handling location redirection to support OAuth, but HTML image elements are not. If the HTML image (in the app domain) has its source set to a 3rd party URL (the image domain), cross-domain challenges can exist. The two biggest challenges exist when the image is secure (ie – requires authentication):

  • Browser is authenticated to the image domain, but the image domain is in a separate IE Security Zone that prevents the app domain from sharing the authentication cookie
  • Browser is NOT authenticated to the image domain. The app might have an access token to get data from image domain, but an HTML image element is not access token aware

Both these scenarios can result in broken images as seen below. As app developers, we should assume that our apps will run in a different security zone than SharePoint AND that users of our apps will NOT use the “keep me signed in” option of Office 365.

Broken images are the result of the HTML image element not being able to automatically login to display the secured image. You can easily see this by monitoring the web traffic during the picture request as seen below using the F12 Developer Tools of Internet Explorer. In the sample below, an image in a provider-hosted app is attempting to display a picture in the AppWeb. You can see the original image request in the top row and then several redirects (HTTP 302)...ultimately to login.microsoftonline.com:

The Solution

Instead of introducing a cross-domain image source, my approach is to leverage a RESTful service to read the image bytes and return a base64 encoded representation of the image. Any modern browser can use this as the source of an HTML image element:

URL vs Base64 Image Source

<!-- URL Image Source -->
<img src="https://somedomain/sites/site/library/picture.png"/>
<!-- base64 Image Source...truncated for readability -->
<img src="data:image/png;base64,%RkGAV...EmFjYgJH2" />

 

The image service will accept the details needed to locate the image (ex: site URL, folder, filename, etc) and the access token to set on the header when it performs a GET on the image. The actual code is surprisingly simple:

Get Image RESTful Service

[ServiceContract(Namespace = "Core.CrossDomainImagesWeb")]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class ImgService
{
    [OperationContract]
    [WebGet]
    public string GetImage(string accessToken, string site, string folder, string file)
    {
        //make the request
        string url = String.Format("{0}_api/web/GetFolderByServerRelativeUrl('{1}')/Files('{2}')/$value", site, folder, file);
        HttpWebRequest request = WebRequest.Create(url) as HttpWebRequest;
        request.Headers.Add("Authorization", "Bearer " + accessToken);
        using (HttpWebResponse response = request.GetResponse() as HttpWebResponse)
        {
            using (var sourceStream = response.GetResponseStream())
            {
                using (var newStream = new MemoryStream())
                {
                    sourceStream.CopyTo(newStream);
                    byte[] bytes = newStream.ToArray();
                    return "data:image/png;base64, " + Convert.ToBase64String(bytes);
                }
            }
        }
    }
}

 

The great part of this approach is that the service can be consumed server-side or client-side as seen in the code below:

Calling GetImage Server-side

//use ImgService to get the base64 representation of image
Services.ImgService svc = new Services.ImgService();
Image2.ImageUrl = svc.GetImage(spContext.UserAccessTokenForSPAppWeb, spContext.SPAppWebUrl.ToString(), "AppImages", "O365.png");

 

Calling GetImage Client-side

//make client-side call for the third image in base64 format
var appWebUrl = decodeURIComponent(getQueryStringParameter('SPAppWebUrl'));
$.ajax({
    url: '../Services/ImgService.svc/GetImage?accessToken=' + $('#hdnAccessToken').val() + '&site=' + encodeURIComponent(appWebUrl + '/') + '&folder=AppImages&file=O365.png',
    dataType: 'json',
    success: function (data) {
        $('#Image3').attr('src', data.d);
    },
    error: function (err) {
        alert('error occurred');
    }
});

 

Here is a screenshot of a provider-hosted app that displays the same image from AppWeb three different ways…using a traditional absolute URL, a base64 encoded image rendered server-side, and a base64 encoded image rendered client-side. Notice the broken image of the absolute URL and successful rendering of the two base64 encoded images:

Conclusion

IE Security Zones and cross-domain images have been one of my biggest frustrations in app development. I hope that the solution outlined above can help you avoid similar frustration. Look for a complete sample of this solution in a future release of the Office AMS project on CodePlex.

Using Azure to SSL-Enable an Http REST Service


The internet is full of interesting (and often free) REST services that expose countless new data sources and operations to client-side applications. A week doesn’t go by where I’m not playing with some new service for a client-side app I’m building. One challenge I’ve encountered is when I’m building an SSL-enabled app against a service with no https end-point. Most browsers will block client-side REST calls with mixed protocols. This is particularly troublesome when developing apps for SharePoint and Office, which require SSL (as any app should that passes around access tokens). In this post, I will outline a simple approach I use to proxy http REST services through Azure for SSL. Azure does most of the work by delivering websites with free SSL end-points (at least if you can live with the default https://*.azurewebsites.net domain).

The sample below is about as simple as it gets. We make the HttpWebRequest to the REST service within a WCF service (could just as easily use Web API). The json returned from the http REST service is simply passed through to the original client request in the return statement (but as https).

Simple Pass-through Proxy
[ServiceContract]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class Cars
{
    [OperationContract]
    [WebGet()]
    public string GetByVIN(string vin)
    {
        HttpWebRequest myRequest = (HttpWebRequest)HttpWebRequest.Create(String.Format("http://api.edmunds.com/v1/api/toolsrepository/vindecoder?vin={0}&api_key={1}&fmt=json", vin, ConfigurationManager.AppSettings["EdmundsAPIKey"]));
        myRequest.Method = "GET";
        using (WebResponse response = myRequest.GetResponse())
        {
            using (StreamReader reader = new StreamReader(response.GetResponseStream()))
            {
                return reader.ReadToEnd();
            }
        }
    }
}
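
Once the service is published to an Azure website, the client-side call is https end-to-end. Here is a hypothetical jQuery call; the host name and service path are placeholders, and I'm assuming the endpoint is configured to return JSON wrapped in a 'd' property (ex: via the enableWebScript endpoint behavior):

Calling the proxy client-side
// 'vin' holds the VIN entered by the user
$.ajax({
    url: 'https://myproxy.azurewebsites.net/Cars.svc/GetByVIN?vin=' + encodeURIComponent(vin),
    dataType: 'json',
    success: function (data) {
        var vehicle = JSON.parse(data.d); // raw json passed through from the Edmunds service
        // ...use the decoded vehicle details...
    },
    error: function (err) {
        alert('error occurred');
    }
});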

 

Sometimes this technique is helpful for SSL-enabling AND trimming some of the unused content from a 3rd party service. In this sample I use JSON.NET to deserialize the json and return only the data I needed for my client-side application.

Trim Data in Proxy

[ServiceContract]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class Rhymes
{
    [OperationContract]
    [WebGet()]
    public List<string> GetRhymes(string text)
    {
        List<string> data = null;
        HttpWebRequest myRequest = (HttpWebRequest)HttpWebRequest.Create(String.Format("http://rhymebrain.com/talk?function=getRhymes&word={0}", text));
        myRequest.Method = "GET";
        using (WebResponse response = myRequest.GetResponse())
        {
            using (StreamReader reader = new StreamReader(response.GetResponseStream()))
            {
                var json = reader.ReadToEnd();
                var x = JsonConvert.DeserializeObject<List<WordItem>>(json);
                data = x.Select(i => i.word).ToList();
            }
        }

        return data;
    }
}

public class WordItem
{
    public string word { get; set; }
    public int freq { get; set; }
    public int score { get; set; }
    public string flags { get; set; }
    public string syllables { get; set; }
}

 

Besides not being SSL-enabled, this service required a cross-domain http POST that didn’t work client-side. In this case, my app was also hosting the service, so re-publishing as a POST was fine (no longer cross-domain). However, I also converted it to a GET to demonstrate that technique and the power of the proxy.

Proxy as HTTP POST

[ServiceContract]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class Sentiment
{
    [OperationContract]
    [WebGet()]
    public string SentimentGet(string text)
    {
        HttpWebRequest myRequest = (HttpWebRequest)HttpWebRequest.Create("http://text-processing.com/api/sentiment/");
        myRequest.Method = "POST";
        string data = "text=" + text;
        byte[] bytes = Encoding.UTF8.GetBytes(data);
        myRequest.ContentLength = bytes.Length;


        using (Stream requestStream = myRequest.GetRequestStream())
        {
            requestStream.Write(bytes, 0, bytes.Length);
            requestStream.Flush();
            requestStream.Close();

            using (WebResponse response = myRequest.GetResponse())
            {
                using (StreamReader reader = new StreamReader(response.GetResponseStream()))
                {
                    return reader.ReadToEnd();
                }
            }
        }
    }

    [OperationContract]
    [WebInvoke(Method = "POST", BodyStyle = WebMessageBodyStyle.WrappedRequest, RequestFormat = WebMessageFormat.Json, ResponseFormat = WebMessageFormat.Json)]
    public string SentimentPost(string text)
    {
        HttpWebRequest myRequest = (HttpWebRequest)HttpWebRequest.Create("http://text-processing.com/api/sentiment/");
        myRequest.Method = "POST";
        string data = "text=" + text;
        byte[] bytes = Encoding.UTF8.GetBytes(data);
        myRequest.ContentLength = bytes.Length;


        using (Stream requestStream = myRequest.GetRequestStream())
        {
            requestStream.Write(bytes, 0, bytes.Length);
            requestStream.Flush();
            requestStream.Close();

            using (WebResponse response = myRequest.GetResponse())
            {
                using (StreamReader reader = new StreamReader(response.GetResponseStream()))
                {
                    return reader.ReadToEnd();
                }
            }
        }
    }
}

 

This technique also works for creating a REST end-point for any traditional SOAP web service you might find on the interwebs. The sample below created a REST end-point for an old stock quote service I’ve been using for over a decade.

Proxy Traditional SOAP Service

[ServiceContract]
[AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public class Stocks
{
    [OperationContract]
    [WebGet()]
    public Quote GetQuote(string s)
    {
        using (StockService.StockQuoteSoapClient client = new StockService.StockQuoteSoapClient("StockQuoteSoap"))
        {
            var quote = client.GetQuote(s);
            XDocument doc = XDocument.Parse(quote);
            Quote q = new Quote();
            q.symbol = s;
            q.last = Convert.ToDouble(doc.Descendants("Last").First().Value);
            q.change = Convert.ToDouble(doc.Descendants("Change").First().Value);
            q.prev_close = Convert.ToDouble(doc.Descendants("PreviousClose").First().Value);
            q.pct_change = (q.last - q.prev_close) / q.prev_close;
            return q;
        }
    }
}

public class Quote
{
    public string symbol { get; set; }
    public double last { get; set; }
    public double change { get; set; }
    public double prev_close { get; set; }
    public double pct_change { get; set; }  
}

 

So there you have it…couldn’t be easier. You can download my solution HERE. You will notice I add all my services to the same project (even though they are unrelated)…I do this mainly because they are all utilities for me and to conserve Azure Websites (ex: if you only get 10 free). Don’t give up on that great service just because you run into client-side challenges…turn to Azure!
 
