
Expiring External User Sharing in SharePoint Online


SharePoint Online makes it extremely easy to share sites and content with external users. For this reason, SharePoint Online has seen rapid adoption for many extranet scenarios and in OneDrive for Business. SharePoint Online provides administrators the tools to manage external sharing, including enabling/disabling sharing and visibility into external users within a site collection. External sharing is simple, secure, and extremely powerful. However, once content is shared externally, it stays shared forever…or at least until it is manually revoked by a content owner or administrator. In this post, I will outline a solution to set expiration timers on external sharing in SharePoint Online. The solution will also give content owners easy methods to extend/revoke external user access. This layer of external sharing governance is frequently requested by my customers and easily achievable with the Office 365 Developer platform. Here is a comprehensive video overview of the solution if you want to see it in action:

(Please visit the site to view this video)

 

NOTE: Although this solution is exclusive to SharePoint Online, external sharing can be delivered in a similar way on-premises. That said, Microsoft has pulled off some crazy technical gymnastics in SharePoint Online to make it effortless for users to share and IT to manage (read: don’t try this at home kids). If you really want to deliver this on-premises, I highly recommend investigating CloudExtra.net for on-premises “Extranet in a box” with SharePoint.

 

The Solution

The solution logic will be based on expiration and warning thresholds. These thresholds could be tenant-wide, template-specific, site-specific, and almost anything in between. The expiration threshold represents the number of days external users will be permitted access before their access expires or is extended by a content owner. The warning threshold represents the number of days external users will be permitted access before the solution sends expiration warnings to the “invited by” content owner or site collection administrator. These email warnings will provide commands to extend the external share or immediately revoke access. If the warnings are ignored long enough to reach the expiration threshold, the external access will automatically be revoked by the solution. This “NAG” feature is very similar to Microsoft’s implementation of site disposition…just in this case we are talking external access disposition. Don’t follow? Here is a quick 50-second cartoon that simplifies the concept:

(Please visit the site to view this video)
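
To make the threshold logic concrete, below is a minimal sketch of the daily decision the solution makes for each external share. The warningDuration and cutoffDuration names mirror the configuration values shown later in this post; the ShareAction enum and Classify helper are hypothetical illustrations, not the actual solution code.

Threshold Logic (Sketch)
//hypothetical sketch of the daily threshold decision for a single external share
public enum ShareAction { None, SendWarning, RevokeAccess }

public static ShareAction Classify(DateTime originalSharedDate, DateTime? refreshSharedDate, double warningDuration, double cutoffDuration)
{
    //an extension (refresh) restarts the expiration clock
    DateTime effectiveDate = refreshSharedDate ?? originalSharedDate;
    double daysActive = DateTime.Now.Subtract(effectiveDate).TotalDays;

    if (daysActive > cutoffDuration)
        return ShareAction.RevokeAccess; //expiration threshold exceeded…revoke access
    if (daysActive > warningDuration)
        return ShareAction.SendWarning;  //warning threshold exceeded…email the content owner
    return ShareAction.None;             //still within thresholds…do nothing
}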

 

The solution is implemented with three components. First, a console application “timer job” will run daily to iterate site collections, capture all external users in a site, and process shares that exceed thresholds (either sending email warnings or revoking access). A database will keep track of all external users by site collection, including the original share date and the date their access was extended (if applicable). Finally, a website will provide content owners and site administrators an interface to respond to expiration warnings by extending or revoking external access for specific external users. The solution structure in Visual Studio can be seen below and illustrates the projects outlined above. The entire solution could be deployed to a single free Azure Website (with WebJob) and SQL Azure Database.

Detail of Solution in Visual Studio
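
For context on the data layer referenced throughout the remaining samples (the entities variable), here is a minimal Entity Framework code-first sketch of the tracking entity and context. The property names mirror the later code, but this is an assumed reconstruction, not a copy of the actual solution's model.

ExternalShare Entity and Context (Sketch)
using System;
using System.Data.Entity;

//hypothetical sketch of the tracking entity used by the timer job and MVC app
public class ExternalShare
{
    public int Id { get; set; }
    public Guid UniqueIdentifier { get; set; }       //GUID used for direct links in warning emails
    public string SiteCollectionUrl { get; set; }
    public string LoginName { get; set; }            //the AcceptedAs email address
    public int UserId { get; set; }                  //site collection-specific user id
    public string InvitedBy { get; set; }
    public DateTime OriginalSharedDate { get; set; }
    public DateTime? RefreshSharedDate { get; set; } //set when a content owner extends access
    public DateTime LastProcessedDate { get; set; }
}

public class ExternalShareContext : DbContext
{
    public DbSet<ExternalShare> ExternalShares { get; set; }
}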

 

Finding Detailed Information on External Sharing

The first challenge in building this solution was finding accurate external sharing details…at minimum the external user identity, invited by user identity, and shared date. This turned out to be surprisingly challenging. I started by looking at the “Access Requests” list that exists in site collections that have external sharing. This stores all outstanding and historical external sharing invitations…or so I thought. It turns out this will only track external users that haven’t already accepted a sharing request somewhere else in the entire tenant. For example…if I share “Site Collection A” with “Joe Vendor” and “Joe Vendor” is already an active external user somewhere else in the tenant (ex: “Site Collection B”), he will never show up in the “Access Requests” list.

The Office 365 Administration Portal offers an External Sharing menu that enables administrators to manually view/manage external users by site collection. Since these details are exposed in the user interface, I held out hope that a matching API would exist in the SharePoint Online Administration assemblies. Turns out, I was right…the Microsoft.Online.SharePoint.TenantManagement namespace has an Office365Tenant class with a GetExternalUsersForSite method (tip: be careful not to confuse this Office365Tenant class with the slightly different Tenant class used for site collection provisioning…they are even in slightly different namespaces of the assembly). The GetExternalUsersForSite method takes a site collection URL and is paged to return 50 external users at a time. I used the code below to convert ALL the external users into my own entities so I could quickly dispose the administration client context:

GetExternalUsersForSite

//use O365 Tenant Administration to get all the external sharing details for this site
List<ExternalShareDetails> shares = new List<ExternalShareDetails>();
string adminRealm = TokenHelper.GetRealmFromTargetUrl(tenantAdminUri);
var adminToken = TokenHelper.GetAppOnlyAccessToken(TokenHelper.SharePointPrincipal, tenantAdminUri.Authority, adminRealm).AccessToken;
using (var clientContext = TokenHelper.GetClientContextWithAccessToken(tenantAdminUri.ToString(), adminToken))
{
    //load the tenant
    var tenant = new Office365Tenant(clientContext);
    clientContext.Load(tenant);
    clientContext.ExecuteQuery();

    //initialize variables for paging through the results
    int position = 0;
    bool hasMore = true;
    while (hasMore)
    {
        //get external users 50 at a time (this is the limit and why we are paging)
        var externalUsers = tenant.GetExternalUsersForSite(siteUrl, position, 50, String.Empty, SortOrder.Descending);
        clientContext.Load(externalUsers, i => i.TotalUserCount);
        clientContext.Load(externalUsers, i => i.ExternalUserCollection);
        clientContext.ExecuteQuery();

        //convert each external user to our own entity
        foreach (var extUser in externalUsers.ExternalUserCollection)
        {
            position++;
            shares.Add(new ExternalShareDetails()
            {
                AcceptedAs = extUser.AcceptedAs.ToLower(),
                DisplayName = extUser.DisplayName,
                InvitedAs = extUser.InvitedAs.ToLower(),
                InvitedBy = (String.IsNullOrEmpty(extUser.InvitedBy)) ? null : extUser.InvitedBy.ToLower(),
                UserId = extUser.UserId,
                WhenCreated = extUser.WhenCreated
            });
        }
                       
        //determine if we have more pages to process
        hasMore = (externalUsers.TotalUserCount > position);
    }
}
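
The ExternalShareDetails type used above is just a plain DTO that mirrors the properties returned by the API. The original class isn't listed in the post, but a minimal sketch could look like this:

ExternalShareDetails Entity (Sketch)
//hypothetical sketch of the DTO populated from GetExternalUsersForSite results
public class ExternalShareDetails
{
    public string AcceptedAs { get; set; }
    public string DisplayName { get; set; }
    public string InvitedAs { get; set; }
    public string InvitedBy { get; set; }
    public int UserId { get; set; }
    public DateTime WhenCreated { get; set; }
}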

 

Here are the details of what GetExternalUsersForSite returns for each external user:

Property | Description
AcceptedAs | The email address used to accept the external share
DisplayName | The display name resolved when the user accepts the external share
InvitedAs | The email address that was provided to share externally
InvitedBy | The email address of the user that invited the external user**
UniqueId | A 16-character hexadecimal unique id for the user (ex: 1003BFFD8883C6D1)
UserId | User ID of the external user in the SiteUsers list for the site collection in question
WhenCreated | The date the external user was first resolved in the tenant***

**InvitedBy will only contain a value if the share introduced the external user to the tenancy (i.e., their first accepted invite to the tenant)

***WhenCreated returns the date the external user was first resolved in the tenant…NOT the shared date

The results from GetExternalUsersForSite provided a comprehensive list of external users for a site collection, but had a data quality issue for external users that had previously accepted external sharing requests somewhere else in my tenant (such as “Joe Vendor” mentioned earlier). For these users, the InvitedBy was empty and the WhenCreated date represented the date they first accepted an external share in my tenant (not when they accepted sharing for that specific site collection). InvitedBy isn’t that critical as I can warn the site administrator, but the original share date is essential for the expiration logic of the solution. I found an accurate date in an old friend…the user information list for the site collection (ex: _catalogs/users). This list is accessible via REST and very easy to query since GetExternalUsersForSite gives us the actual UserId of the user within the site collection. We can use the Created column to determine the accurate share date.

Using REST w/ User Information List for Actual Share Date
var shareRecord = entities.ExternalShares.FirstOrDefault(i => i.LoginName.Equals(externalShare.AcceptedAs));
if (shareRecord != null)
{
    //Update LastProcessedDate column of the record with the processDate
    shareRecord.LastProcessedDate = processDate;
    entities.SaveChanges();
}
else
{
    //get the original share date
    var details = getREST(accessToken, String.Format("{0}/_api/Web/SiteUserInfoList/Items({1})/FieldValuesAsText", siteUrl, externalShare.UserId));
    externalShare.WhenCreated = Convert.ToDateTime(details.Descendants(ns + "Created").FirstOrDefault().Value);
    shareRecord = new ExternalShare()
    {
        UniqueIdentifier = Guid.NewGuid(),
        SiteCollectionUrl = siteUrl.ToLower(),
        LoginName = externalShare.AcceptedAs,
        UserId = externalShare.UserId,
        InvitedBy = (String.IsNullOrEmpty(externalShare.InvitedBy)) ? siteOwner.Email : externalShare.InvitedBy,
        OriginalSharedDate = externalShare.WhenCreated,
        LastProcessedDate = processDate
    };
    entities.ExternalShares.Add(shareRecord);
    entities.SaveChanges();
}
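
The getREST helper referenced above isn't listed in the post. A minimal sketch, assuming an app-only access token and the default Atom/XML response format of the SharePoint REST API (the ns variable in the calling code would be the matching OData dataservices XML namespace), might look like this:

getREST Helper (Sketch)
using System.Net;
using System.Xml.Linq;

//hypothetical sketch of a helper that performs a GET against the SharePoint REST API
private static XDocument getREST(string accessToken, string url)
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    request.Method = "GET";
    request.Accept = "application/atom+xml";
    request.Headers.Add("Authorization", "Bearer " + accessToken);
    using (var response = (HttpWebResponse)request.GetResponse())
    using (var stream = response.GetResponseStream())
    {
        //parse the Atom response so callers can query it with LINQ to XML
        return XDocument.Load(stream);
    }
}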

 

The definitive source of external user information is collected from a combination of the GetExternalUsersForSite method AND the User Information List. The table below summarizes the sourcing.

 | New External Users in Tenant | Existing External Users in Tenant
External User Identity | GetExternalUsersForSite | GetExternalUsersForSite
Invited By User Identity | GetExternalUsersForSite | /_api/site/Owner
Shared Date | /_api/Web/SiteUserInfoList | /_api/Web/SiteUserInfoList

 

Revoking and Extending Access

The Office365Tenant class has a RemoveExternalUsers method, which takes an array of unique external user ids. However, this doesn’t allow you to specify a site collection, so I suspect it removes the external user from the entire tenant (which we don’t want). Even if this method were site collection specific, I think it is good practice to minimize the use of the SharePoint Online Administration assembly whenever possible. In this case, GetExternalUsersForSite provided a site-specific UserId for external users, which can be used to remove them from the SiteUsers collection in the root web. This will cascade delete the external user everywhere in the site collection. Doing this could leave broken permission inheritance in the site. I originally had heartburn over this, but broken inheritance seems to be an accepted reality of the new open sharing user experience. Also notice that RefreshSharedDate takes precedence over OriginalSharedDate…this is how we take extending access into consideration (RefreshSharedDate will be null for any external user that hasn’t been extended).

Revoke External User by Deleting SiteUser

//check if the record falls inside the warnings
double daysActive = processDate.Subtract(shareRecord.OriginalSharedDate).TotalDays;
if (shareRecord.RefreshSharedDate != null)
    daysActive = processDate.Subtract((DateTime)shareRecord.RefreshSharedDate).TotalDays;

//check for cutoff
if (daysActive > cutoffDuration)
{
    //remove the SPUser from the site
    clientContext.Web.SiteUsers.RemoveById(externalShare.UserId);
    clientContext.ExecuteQuery();

    //delete the record
    entities.ExternalShares.Remove(shareRecord);
    entities.SaveChanges();
}

 

To extend access, we will allow content owners and site collection administrators to reset the expiration clock through an MVC web application. Below you can see the code used to send expiration warnings (which contain direct links to extend/revoke views). Notice that the solution leverages GUIDs to provide direct links to controller actions.

Sending Expiration Warnings
else if (daysActive > warningDuration)
{
    int expiresIn = Convert.ToInt32(cutoffDuration - daysActive);
    //send email to InvitedBy (which will be site collection owner when null)
    EmailProperties email = new EmailProperties();
    email.To = new List<String>() { shareRecord.InvitedBy };
    email.Subject = String.Format("Action Required: External sharing with {0} about to expire", externalShare.AcceptedAs);
    email.Body = String.Format("<html><body><p>You are receiving this message because you are the site administrator of <a href='{0}'>{0}</a> OR you shared it with {1}. The external access for this user is set to expire in {2} days. Use the links below to view additional details and perform actions to revoke OR extend access for another {3} days. If you do not act on this notice, the external access for this user will terminate in {2} days.</p><ul><li><a href='{4}Details/{5}'>View Details</a></li><li><a href='{4}Extend/{5}'>Extend {3} Days</a></li><li><a href='{4}Revoke/{5}'>Revoke Access</a></li></ul></body></html>", siteUrl, externalShare.AcceptedAs, expiresIn.ToString(), cutoffDuration.ToString(), webUrl, shareRecord.UniqueIdentifier);
    Utility.SendEmail(clientContext, email);
    clientContext.ExecuteQuery();
}

 

Expiration Warning Email

Below is the MVC controller for both Extend and Revoke. Revoke in the controller is identical to the “timer job” console application, and Extend simply sets the RefreshSharedDate.

MVC Controller

// GET: Details/92128104-7BA4-4FEE-BB6C-91CCE968F4DD
public ActionResult Details(string id)
{
    if (id == null)
    {
        return View("Error");
    }
    Guid uniqueID;
    try
    {
        uniqueID = new Guid(id);
    }
    catch (Exception)
    {
        return View("Error");
    }
    ExternalShare externalShare = db.ExternalShares.FirstOrDefault(i => i.UniqueIdentifier == uniqueID);
    if (externalShare == null)
    {
        return View("Error");
    }
    return View(externalShare);
}

// GET: Extend/92128104-7BA4-4FEE-BB6C-91CCE968F4DD
public ActionResult Extend(string id)
{
    if (id == null)
    {
        return View("Error");
    }
    Guid uniqueID;
    try
    {
        uniqueID = new Guid(id);
    }
    catch (Exception)
    {
        return View("Error");
    }
    ExternalShare externalShare = db.ExternalShares.FirstOrDefault(i => i.UniqueIdentifier == uniqueID);
    if (externalShare == null)
    {
        return View("Error");
    }

    //update the share with a new RefreshSharedDate
    externalShare.RefreshSharedDate = DateTime.Now;
    db.SaveChanges();

    return View(externalShare);
}

// GET: Revoke/92128104-7BA4-4FEE-BB6C-91CCE968F4DD
public ActionResult Revoke(string id)
{
    if (id == null)
    {
        return View("Error");
    }
    Guid uniqueID;
    try
    {
        uniqueID = new Guid(id);
    }
    catch (Exception)
    {
        return View("Error");
    }
    ExternalShare externalShare = db.ExternalShares.FirstOrDefault(i => i.UniqueIdentifier == uniqueID);
    if (externalShare == null)
    {
        return View("Error");
    }

    //get an AppOnly accessToken and clientContext for the site collection
    Uri siteUri = new Uri(externalShare.SiteCollectionUrl);
    string realm = TokenHelper.GetRealmFromTargetUrl(siteUri);
    string accessToken = TokenHelper.GetAppOnlyAccessToken(TokenHelper.SharePointPrincipal, siteUri.Authority, realm).AccessToken;
    using (var clientContext = TokenHelper.GetClientContextWithAccessToken(siteUri.ToString(), accessToken))
    {
        //remove the SPUser from the site
        clientContext.Web.SiteUsers.RemoveById(externalShare.UserId);
        clientContext.ExecuteQuery();

        //delete the record
        db.ExternalShares.Remove(externalShare);
        db.SaveChanges();
    }

    //display the confirmation
    return View(externalShare);
}

 

Warning Detail in MVC App

Revoke Confirmation in MVC App

Extend Confirmation in MVC App

You might be wondering where those important warning and expiration thresholds are configured. For simplicity in this solution, I configured them in the appSettings section of the console app and MVC app configuration files. However, the solution could be configured to implement more advanced threshold logic such as template-specific thresholds.

Configuration appSettings
  <appSettings>
    <addkey="ClientID" value="YOUR_APP_ID" />
    <addkey="ClientSecret" value="YOUR_APP_SECRET" />
    <addkey="WarningDuration" value="50" />
    <addkey="CutoffDuration" value="60" />
    <addkey="TenantName" value="rzna" />
    <addkey="TenantUpnDomain" value="rzna.onmicrosoft.com" />
  </appSettings>
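
The console app can then read these values with the standard ConfigurationManager APIs; for example:

Reading the Thresholds
using System.Configuration;

//read the warning/expiration thresholds from the appSettings configuration
double warningDuration = Convert.ToDouble(ConfigurationManager.AppSettings["WarningDuration"]);
double cutoffDuration = Convert.ToDouble(ConfigurationManager.AppSettings["CutoffDuration"]);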

 

Final Thoughts

I hope this solution helped illustrate how Office 365 can deliver a great sharing user experience WITH the sharing compliance/governance that legal and information security teams are demanding. You can download the entire solution, along with deployment details, on the Office 365 Developer Patterns and Practices (PnP) site on GitHub.


SharePoint Online Information Architecture Considerations


The geek in me would love to have a developer cure-all to ensure a successful SharePoint deployment (ok…provisioning comes really close). In reality, success is based largely on great Organizational Change Management (OCM) and solid Information Architecture (IA). Even the most mature SharePoint organizations struggle with these activities. This is especially true as they move SharePoint workloads to the multi-tenant cloud of Office 365. In this post, I’ll discuss Information Architecture considerations specific to SharePoint Online. In many ways, the cloud demands re-imagined patterns for Information Architecture in SharePoint. I’ll break down the discussion into four areas…URL Limitations, Capacity Considerations, Templates/Provisioning, and Managed Metadata/Content Types.

NOTE: Information Architecture is a departure from my normal development themed posts, but you will find that Office 365 development patterns can help deliver a stronger IA implementation in SharePoint Online. I will make many references to Office 365 Development Patterns and Practices on GitHub that includes samples to automate and add governance to an IA strategy.

 

What is a Tenant

Before getting started, it would help to define the word “tenant” that is often mentioned in Office 365 and/or SharePoint Online discussions. After frequent confused looks by customers, I no longer assume understanding of this term in context. Office 365 and SharePoint Online are considered multi-tenant software as a service (MT SaaS). To simplify, you can consider SharePoint a large apartment building Microsoft owns in the cloud. Our customers are the “tenants” in that apartment building. Tenants share the apartment building (SharePoint), but Microsoft does everything possible to keep tenants from disrupting other tenants. Microsoft has also offered customers dedicated SharePoint hosting. Dedicated hosting gives a customer their own apartment building, but isn’t nearly as economical as multi-tenant. Multi-tenant SharePoint Online is where most innovation investments are being made and where Microsoft hosts its own internal SharePoint workloads (we wouldn’t put you in an apartment we wouldn’t live in ourselves).

URL Limitations

The use of “vanity URLs” is a practice I’ve observed in numerous on-premises SharePoint deployments. Web applications, host-named site collections, and managed paths cater to every end-user URL whim. This epidemic adds challenges to already challenging cloud migrations. SharePoint Online takes a very simple approach to URL organization. Outside of OneDrive for Business, all SharePoint Online sites must fit under a single host-name (https://tenant.sharepoint.com). The table below lists all the host-names in multi-tenant SharePoint Online and their intended use.

Host-Name | Description
https://tenant.sharepoint.com | Default SharePoint Online host name where most authenticated sites will be provisioned
https://tenant-my.sharepoint.com | Reserved for the My Site Host and OneDrive for Business sites. Supports manual subsite provisioning but not site collections outside of OneDrive.
https://tenant-admin.sharepoint.com | SharePoint Online admin center (think of this as a slim version of Central Administration for SharePoint Online administrators)
https://tenant-public.sharepoint.com | Anonymous public site reserved for internet presence (great for small and mid-size organizations but less useful for orgs with an existing web presence)

 

You can specify your own custom domain for the public site, but NOT the default internal host-name. I have seen customers implement a number of redirect alternatives to overcome this challenge including DNS mapping, http modules, tiny URL/mapping solutions, etc.

In addition to host-name limitations, you cannot add new managed paths to SharePoint Online. For the default internal host-name you get the root, /search (explicit managed path), /sites (wildcard managed path), /teams (wildcard managed path), and /portals (wildcard managed path). The portals managed path was just recently added (a reader named Brian pointed it out to me). Although there is no right/wrong approach in applying these to SharePoint workloads, the table below outlines a popular approach I think makes a lot of sense:

Managed Path | Description
https://tenant.sharepoint.com/ | Reserve for corporate intranet or search
https://tenant.sharepoint.com/search/ | Reserve for enterprise search center
https://tenant.sharepoint.com/teams/* | Reserve for “commodity” sites such as teams, projects, and self-service provisioning
https://tenant.sharepoint.com/sites/* | Reserve for divisional, departmental, or specialty sites (ex: Record Center(s), SharePoint Applications, etc)
https://tenant.sharepoint.com/portals/* | Reserve for large divisional, departmental, or specialty sites and publishing (ex: Video Portal, Knowledge Centers, etc)

 

Capacity Considerations

If you think the limited variety of URL options calls into question the traditional strategy of using more site collections…you are on to something. For on-premises customers, I always recommend the approach “when in doubt, break it out…into separate site collections”. This keeps content databases manageable for disaster recovery and (hopefully) under the recommended 200GB size for collaborative content. However, worrying about content databases and disaster recovery are things of the past for SharePoint Online customers…let us (Microsoft) deal with that. All signs point to re-thinking how you break up sites when moving to SharePoint in the cloud. In addition to URL limitations, multi-tenant enterprise customers are currently limited to 500,000 site collections (excluding OneDrive for Business). This limit has gone up several times since general availability...including most recently from 10,000 (thanks to Dean for pointing this out). To make up for any limitations in URLs and site collection limits, Microsoft recently announced the ability to scale site collections in SharePoint Online to 1TB. This is significantly beyond the 200GB recommendation for collaborative content on-premises. I don’t have specific guidance on how to break up sites in SharePoint Online as I don't think there is a one-size-fits-all solution. However, I will provide some generic guidance and best practices:

  • Continue to err on the side of breaking up sites into separate site collections
  • Keep content grouped within the same site collections where it logically makes sense and that fits within the 1TB capacity limitations of SharePoint Online (including room for future growth)
  • Site collections are the highest level securable container in SharePoint, so consider separate site collections when sites have highly unique permissions
  • Keep collaborative content separated from published content (ex: I wouldn’t include the Finance Department’s team sites in the same site collection as their informational/intranet pages available to the entire company)
  • Consider implementing retention/disposition strategy for both site collections and content to keep site collection count/sizes down and to recycle available site collection URLs
  • Just because site collections can grow bigger in the cloud, don't go overboard with elaborate site hierarchies. This is never a good idea as it makes it very hard to reorganize content in the future

I still like site collection provisioning for "commodity" sites such as teams and projects (especially when combined with self-service provisioning). It is just too hard to predict how these types of sites will be used, where they should get provisioned, and how much space they need. What changes in SharePoint Online is the need to consider retention in commodity sites. Yes retention…that great little feature that SharePoint has supported for years, every organization says they need, yet hardly anyone implements. A site collection disposition strategy can greatly help manage capacity limitations in the cloud. We leverage a site collection disposition strategy internally at Microsoft, but all site collections must have two employee owners to avoid disposition of critical content when an employee switches roles or jobs. Also remember...retention/disposition does not necessarily mean delete. A disposition strategy might be a process to move high-value assets to a record center before deleting the unused collaborative site.

Templates and Provisioning

Templates enable sites to be pre-configured for quick/easy/repeatable provisioning elsewhere. Throughout SharePoint’s history, there have been a number of approaches for provisioning such as site definitions, site templates, web templates, feature stapling, provisioning providers, and more. Site templates and web templates are the only out of the box approach available for patterned provisioning in SharePoint Online. However, neither of these approaches are ideal. For example, site templates only work for subsites and do not support publishing. Web templates can miss out on new functionality “stapled” to sites in the ever-changing SharePoint Online. For a comprehensive list of provisioning techniques and their limitations, I highly recommend Vesa Juvonen’s post on site provisioning techniques.

Instead of site or web templates, it is recommended that SharePoint Online customers adopt a programmatic site provisioning strategy. Programmatically owning the provisioning process for site collections and subsites can achieve almost any desired outcome (including many things impossible in templates). This technique requires a customization, but it is highly documented and promoted by Microsoft and SharePoint Online experts. Most of the popular provisioning patterns are available on the Office 365 Developer Patterns and Practices initiative on GitHub, and solutions have been featured on my blog, Vesa's blog, Channel 9, and most importantly Microsoft's SharePoint Online solution pack for branding and site provisioning.

Since programmatic provisioning has already been thoroughly covered in other publications, I will provide an abstract. The concept is to replace standard provisioning with custom provisioning, including templates, forms, and provisioning execution.

The first step in delivering custom provisioning is defining site configurations. These configurations define the expected outcome from provisioning and are analogous to templates. Configurations can include almost anything, including branding, features, lists/libraries, page configurations, etc. A popular pattern on GitHub uses XML definitions for these “templates”, but the configurations could be stored virtually anywhere the provisioning “app” can read. These provisioning options will likely be presented to the user for selection in custom provisioning form(s) outlined below.

The second component in delivering custom provisioning is custom provisioning forms. These should replace the out of the box provisioning forms that exist throughout SharePoint Online (ex: /_layouts/15/newsbweb.aspx). The SharePoint Online admin center already allows a custom “Start a site” form to be specified on the “Sites” portal of SharePoint Online (see image below). This is often used to enable self-service site collection provisioning. For subsite provisioning, our custom provisioning “app” can hijack the new subsite links throughout SharePoint (using a simple script-injection technique). Both these patterns have been thoroughly documented in Office 365 Developer Patterns and Practices.

Finally, the custom provisioning app needs to provision site collections and subsites based on the selected site configuration and any additional criteria specified in the provisioning form(s). The provisioning process will programmatically create sites and then add/remove components based on selected site configurations. Once the provisioning process is complete, the user can be directed to the newly created site. Again, a number of patterns are already available for download off GitHub, including applying branding, alternate CSS/logos, creating content types, script injections, Yammer integration, page/wikipage manipulation, and much more. However, almost anything is considered fair game in provisioning so long as the Client-Side Object Model (CSOM) supports it.
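
To make the provisioning step concrete, here is a minimal sketch of creating a site collection with the CSOM tenant administration API. The URL, owner, and template values are hypothetical placeholders that would come from the selected site configuration and provisioning form, and adminClientContext is assumed to target the tenant-admin site:

Programmatic Site Collection Provisioning (Sketch)
using Microsoft.Online.SharePoint.TenantAdministration;

//minimal sketch: create a site collection via the tenant administration CSOM
var tenant = new Tenant(adminClientContext); //context against tenant-admin.sharepoint.com
SpoOperation op = tenant.CreateSite(new SiteCreationProperties()
{
    Url = "https://tenant.sharepoint.com/teams/newproject", //hypothetical URL from the form
    Owner = "user@tenant.onmicrosoft.com",                  //hypothetical owner
    Template = "STS#0",         //template from the selected site configuration
    StorageMaximumLevel = 1000, //storage quota in MB
    UserCodeMaximumLevel = 0
});
adminClientContext.Load(op, i => i.IsComplete);
adminClientContext.ExecuteQuery();
//CreateSite is queued…poll op.IsComplete before applying customizations to the new site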

Managed Metadata and Content Types

Managed Metadata and Content Types can play key roles in a strong IA strategy. In on-premises deployments, these components can be synchronized across site collections, web applications, and even farms using the Managed Metadata Service. Keeping one definitive source for taxonomies and content types can greatly facilitate content search/discovery and policy adherence. Although SharePoint Online supports similar features (including content type syndication), it does not support hybrid synchronization with on-premises definitions. To support a strong enterprise content management strategy, SharePoint Online taxonomies and content types might need extra care to keep term and content type IDs identical to on-premises equivalents. Luckily, the client-side object model can help achieve this when term sets (taxonomies) and content types are programmatically provisioned in the cloud. In fact, part of the site collection provisioning process might include the provisioning of custom content types (perhaps even setting default field values based on the site properties). It is not recommended that you provision content types declaratively as was common in the feature framework. Although this can be done with a declarative sandbox solution in SharePoint Online, programmatic provisioning is the recommended method (unless the content type is part of an app). Office 365 Patterns and Practices on GitHub contains samples for creating content types, synchronizing term sets, and even auto-tagging solutions.
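
For example, CSOM can create a content type with an explicit ID so it stays identical to an on-premises equivalent. A minimal sketch, with a hypothetical name and ID:

Content Type with Explicit ID (Sketch)
using Microsoft.SharePoint.Client;

//minimal sketch: provision a content type with a fixed ID to match on-premises
ContentType newCT = clientContext.Web.ContentTypes.Add(new ContentTypeCreationInformation()
{
    Name = "Contoso Document", //hypothetical content type name
    Id = "0x0101009189AB5D3D2647B580F011DA2F356FB2", //hypothetical ID matching on-premises
    Group = "Contoso Content Types"
});
clientContext.Load(newCT);
clientContext.ExecuteQuery();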

Final Thoughts

This post wasn't meant to outline prescriptive guidance for information architecture. My goal was to outline some important IA considerations specific to SharePoint Online. I hope the details I've outlined helped you understand the unique constraints of multi-tenancy and some patterns for achieving information architecture utopia in SharePoint Online. Remember to check out the Office 365 Development Patterns and Practices for samples that achieve many of the recommended patterns.

SharePoint Online Implementation Roadmap


This post will likely piss off a few consultant friends/readers that make a living selling “SharePoint Implementation Roadmaps”. These multi-week (sometimes multi-month) roadmap efforts produce hefty services fees and fluff documentation of obvious milestones (that will ultimately draw more consulting services to successfully deliver). The roadmap noise has gotten even louder with the surge of Office 365. Most enterprise customers own at least some Office 365 and are looking to maximize that investment. Of the services in Office 365, Exchange and Lync migrations tend to be more black/white compared to SharePoint’s 50 shades of gray. A week doesn’t go by where a customer doesn’t ask me “where do I start with SharePoint Online”. I’m going to simplify the answer into two generic approaches…either start with commodity sites or start with a specialty site. In this post, I’ll explain my logic behind these two generic approaches and some general implementation guidance for a successful migration to the cloud.

NOTE: I may have opened this post with a cynical view of consultants delivering implementation roadmaps. That said, I feel strongly that the best path to success in the cloud is to take that journey with a good services partner. A good partner can make all the difference in the world in an Office 365 implementation.

 

SharePoint as a Service

Before jumping into implementation approaches, it is very important to understand what it means to move SharePoint workloads to Office 365/SharePoint Online. Microsoft has traditionally released Office software in large waves. 2003, 2007, 2010, and 2013 have all been significant release years for Office/SharePoint. New releases bring new features and an elevated level of innovation in the products. Unfortunately, innovation would typically stay stagnant following a major release...at least until the next major release (~3 years later). A three year hold on innovation was more palatable a decade ago. Today, consumerization has enterprise users demanding the same levels of innovation they use at home. For Microsoft, moving from a three-year innovation cycle to continuous innovation was imperative for delivering the best productivity tools in the world.

Don’t believe me…consider the fact that the iPad didn’t exist when SharePoint 2010 was released. By the release of SharePoint 2013, almost every enterprise user carried at least one touch device that SharePoint 2010 struggled to support. One of the most dramatic visualizations of this trend is the pair of pictures below. Taken in 2005 and 2013 at the inaugurations of Popes Benedict and Francis (respectively), they illustrate the rapid rate of innovation our world is experiencing and demanding from Microsoft.

Pope Benedict XVI Inauguration (2005) | Pope Francis Inauguration (2013)

 

Rapid enterprise innovation would be great in an ideal world. In reality, most Microsoft customers struggle to keep up with the traditional 3-year release cycles. Upgrade blockers, budget/resource constraints, and conservative upgrade methodology keep most organizations at least a year or two behind current bits. To overcome this, Microsoft made the strategic decision to innovate cloud-first and at a much more frequent cadence (from every 3 years to every 3 months). This is a critical principle to understand about Office 365/SharePoint Online. You will stay innovative in SharePoint Online because we own the upgrade process and won't let you fall behind. The chart below illustrates innovation over time for SharePoint Server, SharePoint Online, and the typical on-premises SharePoint customer. A few important notes in the chart:

  • Notice the innovation gap that occurs between a major release and when a customer actually upgrades. I still have customers on SharePoint 2003, so this gap can be much more substantial than the chart represents
  • Notice how SharePoint Online recently surpassed SharePoint Server in innovation and is on a trajectory to stay ahead. When SharePoint 2013 was first released, there were a number of features unavailable to the cloud (ex: content by search, cross-site publishing, etc). Although feature disparities still exist, more gaps exist on-premises with a flood of recent cloud-only features such as Power BI, Video Portals, Delve, Storage Capacities, etc
  • SharePoint Online innovation does not plateau…it offers continuous innovation to customers that have adopted the service
Chart Showing SharePoint Innovation Trends over Time

Continuous innovation can cause concerns for organizations mindful of the change management implications that come with rapid change. Microsoft tries to address these concerns through timely and thorough communications on the Office Blogs, MSDN, the Office 365 IT Pro network of Yammer, the Office 365 Message Center, and the Office 365 Change Management Guide for the Enterprise. Additionally, it is important to consider the magnitude of change. Innovation in the service will be steady, but likely more subtle than the traditional three-year upgrades.

Implementation Preparation

Before jumping into a SharePoint Online implementation, there are a number of recommended steps that lead to better success and user experience in the cloud. The first step is to ensure identity/authentication is configured correctly. Behind the scenes, SharePoint Online always uses identities from Azure Active Directory. These cloud identities originate in one of three ways...100% cloud identities, synchronized identities, or synchronized identities with federation. The diagram below summarizes these three options, but you can also reference Choosing a sign-in model for Office 365.

Cloud identities are users/groups that are provisioned directly into Azure AD using the Azure management portal or the Office 365 admin portal. This identity model is completely cloud-contained, meaning it doesn't require on-premises infrastructure and Office 365 will own the entire sign-in experience (including user passwords). The cloud identity approach is commonly used by small organizations, pilots, and POCs.

Synchronized identities are users/groups synchronized to Azure AD from an on-premises identity provider such as Active Directory. In addition to profile information, synchronized identities have their password hashes synchronized to Azure AD so they don't require separate passwords in the cloud. This option is simple and requires less infrastructure, but can be confusing in short periods following a password change (where passwords are temporarily out of sync). With synchronized identities, Office 365 owns the entire sign-in experience (except for the passwords).

Federated identities are similar to synchronized identities in that users/groups are synchronized into Azure AD from an on-premises identity provider. However, password hashes are not synchronized in a federated identity model. Instead, Office 365 relies upon a federated service (typically ADFS) to own the login process. Office 365 trusts the federated identity provider to return a set of claims that Office 365 can use to "impersonate" the synchronized user accounts in Azure AD. The important aspect of federated identities is that the Office 365 customer owns the sign-in process...not Office 365.

SharePoint is a container-based technology (site collections, subsites, lists, libraries, etc) that isn’t easy to reorganize. As such, I highly recommend getting a better understanding of the different containers in SharePoint and their capacity constraints in SharePoint Online. Establishing a solid information architecture strategy is imperative since a wrong decision on information architecture can cost significant time and money to fix (ex: converting a subsite into a site collection requires a migration tool). I recently authored a post on SharePoint Online Information Architecture Considerations that explains some of the primary constraints and patterns for IA in the cloud, including URL limitations, capacity constraints, templates/provisioning, and managed metadata/content types.

Implementation Approaches

I’ve seen numerous methods for implementing SharePoint Online, but most successful implementations can be summarized into two approaches…the “Commodity Approach” and the “Specialty Approach”. I have never seen a successful “big bang” implementation, so both of my recommendations follow an agile methodology for delivering quick wins over time. With an agile migration to the cloud, most customers will be left (at least temporarily) in a hybrid state. “Hybrid” exists when some SharePoint content exists on-premises and some exists in the cloud. This has special considerations that I’ll cover after detailing “Commodity" and "Specialty" approaches.

Commodity Approach

The idea behind the Commodity Approach to SharePoint Online implementations is to start with “commodity” sites. Commodity sites are sites with little differentiation that are provisioned to satisfy a want or need. Examples of commodity sites include OneDrive for Business sites (aka – personal sites) and Team/Project sites. These workloads demand very little differentiation, so they fit nicely into a commodity or “cookie-cutter” model. They typically require very little customization and can immediately benefit from being cloud-hosted (internet accessible, easy external sharing, increased storage quotas, always current, etc). For these reasons, commodity sites are a very popular and successful place to start in SharePoint Online. In fact, Microsoft began their internal move to SharePoint Online with OneDrive for Business and new team sites. Commodity sites are typically great candidates for self-service provisioning/disposition strategies. For more information on this, see the Office 365 Development Patterns and Practices on GitHub.

Commodity sites typically have the largest footprint in a SharePoint deployment. This makes migration of existing on-premises commodity sites an important consideration in a SharePoint Online implementation strategy. For migrating commodity sites, I’ve seen two successful strategies...”let them move it” and “move it for them”. With “let them move it”, the benefits of SharePoint Online are socialized to the organization and users are encouraged to manually migrate their own content into the cloud. With “move it for them”, migration tools are used to migrate existing on-premises content for users. Microsoft took a hybrid approach to migrations that worked incredibly well. We started with a 6-month self-service migration period for OneDrive and Team/Project sites. After the 6-month self-service period, we offered migration services for remaining legacy sites and disposed of many other sites. We found that most users were motivated to migrate their content to a) clean up the mess in their existing on-premises sites and b) get access to the great new features exclusive to SharePoint Online (internet accessible, easy external sharing, increased storage quotas, always current, etc). At the end of the self-service migration period, we had significantly less content to migrate than expected (which is a good thing since most migration tools charge by content volume).

I want to be clear that Microsoft did not cut corners in our own migration to the cloud. Although we own the service, we didn’t perform our migration using back-door methods unavailable to our customers (ex: content database attach). We used the same migration tools available to you, such as Dell’s (formerly Quest) Migration Suite for SharePoint, AvePoint’s DocAve Migrator, Metalogix’s Content Matrix, and several others. Most of the migration tools achieve similar results, so I don’t endorse or recommend any specific one. I recommend you evaluate migration tools based on existing vendor relationships, migration requirements, and budget constraints.

Specialty Approach

Implementing SharePoint Online with a "Specialty Approach" makes sense when your organization has an immediate SharePoint need or initiative that would be a good candidate for the cloud. “Specialty” refers to a special one-off need and can take many forms:

  • Already have an intranet redesign in this year's budget?
  • Looking to deliver a special workload like a corporate YouTube or Knowledge Center in SharePoint?
  • Have a one-off process that you want to deliver and manage in SharePoint (ex: Procurement)?
  • Want a jump start in delivering enterprise social features to your organization?
  • Intrigued by some of the new cloud-only features like Delve, Office 365 Groups, Power BI, or the new Video Portals?

If you said yes to any of these, you might be a candidate for a Specialty Approach to SharePoint Online implementations. Like commodity sites that have minimal customizations, specialty sites are also a low-risk place to start SharePoint Online given their small footprint (usually just a few site collections). A specialty approach can also help generate migration momentum for commodity sites. Although Microsoft started with a Commodity Approach for our own migration, we quickly shifted focus to specialty sites once OneDrive and Team sites were rolling in the cloud.

Hybrid

Unless your organization is brand new to SharePoint, your first delve into SharePoint Online will immediately put you in a “hybrid” SharePoint landscape. Hybrid is a term used to describe a SharePoint deployment where content exists both on-premises and in the cloud. A hybrid SharePoint deployment comes with special considerations, especially when it comes to content search/discoverability. SharePoint search builds an index of content so it can deliver quick search results (similar to how the index in the back of a book works). In a hybrid configuration, you will have content in at least two places, neither of which can index the other. This introduces a disjointed search user experience with no ideal solution. An ideal search solution would allow search results to be returned from anywhere in a single set of results and relevancy. Although you can deliver a good user experience in hybrid scenarios, you cannot pull hybrid content into a single index (and thus no single results/relevancy). The hybrid search options available in SharePoint Online include a common search navigation or hybrid search federation. I’ve provided illustrations to better understand the user experience of each option. Below is a fictitious example of what an ideal search solution would look like if we could deliver a single results/relevancy. Notice how results from SharePoint On-Premises and SharePoint Online are intermixed based on one relevancy in this fictitious mock-up.

"Fictitious" Hybrid Search with Single Result/Relevancy

Option 1 - Common Search Navigation: With common search navigation, you will deliver 2+ search centers with a common user experience. At least one of the search centers will be delivered in SharePoint Online for delivering search results from the cloud. Ultimately, on-premises search centers will return on-premises search results and online search centers will deliver cloud search results. Leveraging a common user experience is what ties everything together. More specifically, an identical search navigation (also referred to as “scopes”) can give search the perception of being one experience (even though the user might be jumping between SharePoint Server and SharePoint Online). This is an elegantly simple solution to the hybrid challenge and one that has been popular inside Microsoft. It doesn’t require any scripts or trusts, just management of a search user experience in 2+ places. You can see in the illustration below that the user experience of the search centers are identical as the user jumps between cloud and on-premises by clicking on different search navigation links. If not for the URL changing, the user wouldn’t know they are jumping between farms. You can also see that no efforts are made to include hybrid results on the same page...cloud pages show cloud results and on-premises pages show on-premises results.

Hybrid Search via Common Search Experience/Navigation

Option 2 - Hybrid Search Federation: with hybrid search federation, you can deliver hybrid results on the same page (from SharePoint Server or SharePoint Online). However, the remote results will display in a separate display block. This configuration is often referred to as “remote indexing”, a term I find a little misleading since we aren’t actually indexing anything remote (just reading a remote index). Hybrid search federation does succeed at getting all results on one page. However, real remote indexing would allow search results to be returned with a single relevancy (read: no result blocks). The configuration to deliver hybrid search federation is outlined on MSDN and requires a multi-step configuration by an administrator on each end. Notice in the illustration that hybrid results are delivered on the same page, but in separate blocks and relevancy.

Hybrid Search via Remote Search Federation

Enterprise Social

Enterprise Social is a specialty workload popular for introducing Office 365 to the enterprise. It can set the foundation for enriched experiences through SharePoint, as ALL sites can benefit from social features such as pictures, profile information, likes, and shares. Office 365 offers two enterprise social platforms: SharePoint Social and Yammer. The SharePoint admin center allows administrators to select between these two options.

Configuring Social Platform for SharePoint Online

I recommend Office 365 customers use Yammer as their enterprise social platform for the following reasons:

  • Yammer offers a more mature and feature-rich social platform with native polls, praises, groups, graph APIs, better mobile clients, and much more
  • Deploying Yammer has no impact on the SharePoint Online footprint compared to SharePoint Social which requires OneDrive for Business sites (read: Yammer gives you more time to get your Information Architecture plan in order)
  • Microsoft will be focusing the majority of social investments in the Yammer platform…not SharePoint Social
  • The Yammer/SharePoint integration efforts have already made great progress, including document-contextual threads/post-to, search scopes, and Yammer apps/embeds
  • Where Yammer/SharePoint integration lacks, the Office 365 Developer Patterns and Practices offers provisioning samples for tighter integration (ex: provisioning Yammer groups or graph object with site provisioning). This also aligns with the Information Architecture recommendation of owning the site provisioning process
  • Yammer offers numerous resources for customer success including hands-on experts, case studies, and customer networks
  • Office 365 Enterprise, Mid-Sized Business and Education customers likely already own Yammer Enterprise

Final Thoughts

Although there are many pathways to SharePoint Online, most implementations start with commodity sites or a specialty site. Taking an agile bite out of the cloud can deliver a quick win that can set the momentum for successful cloud adoption. This post wasn’t meant to serve as an implementation roadmap replacement. My goal was to offer implementation patterns that have been successful with other customers so you can form your own implementation roadmap.

Using Apps for Excel for Custom Data Access and Reporting


Microsoft Excel is known for being the #1 tool in the world for business intelligence and reporting. Regardless of the format in which insights are delivered, users often desire the ability to export and work with data in Excel. For that reason, Excel plug-ins are incredibly popular for custom and proprietary data access. Unfortunately, traditional plug-ins can cripple a user’s ability to upgrade Office. PowerPivot and Power Query have made it easier to connect to a variety of data sources without the need for custom plug-ins. However, many data sources aren't supported in these tools, have complex constraints, or are too complex for an end-user to understand. For these scenarios, Excel-based Apps for Office can help close the gap by getting data into Excel. In this post, I’ll illustrate generic patterns for getting almost any data into Excel using the app model. The video below demonstrates the concepts of the post.

(Please visit the site to view this video)

Custom Data Access Scenarios

Microsoft has made heavy investments to position Excel as THE premier tool for business intelligence and reporting. Power Query and PowerPivot are big components of this strategy. With these tools, users can natively connect to a myriad of data sources including those listed below. It is important to note that Power Query can funnel directly into PowerPivot, so the list represents a combination of supported data sources across both tools.

  • Web Page
  • Excel File
  • CSV File
  • XML File
  • Text File
  • Folder
  • SQL Server Database
  • Microsoft SQL Azure Database
  • Access Database
  • Oracle Database
  • IBM DB2 Database
  • MySQL Database
  • PostgreSQL Database
  • Sybase Database
  • Teradata Database
  • SharePoint List
  • Odata Feed
  • Microsoft Azure Marketplace
  • Hadoop File (HDFS)
  • Microsoft Azure HDInsight
  • Microsoft Azure Blob Storage
  • Microsoft Azure Table Storage
  • Active Directory
  • Microsoft Exchange
  • Facebook
  • SAP BusinessObjects  BI Universe
  • Microsoft Analysis Services (Multi-dimensional)
  • Microsoft Analysis Services (Tabular)
  • SQL Server Parallel Data Warehouse
  • Informix
  • OLEDB/ODBC
  • Reporting Services Report

Unfortunately, data access isn’t always so black and white. Often, data structures can be too complex or proprietary to expose to end-users. Others might have complex authentication, throttling, or paging constraints that Power Query and PowerPivot struggle to support. For these scenarios (and for data sources not represented in the list), Apps for Office can play a key role in achieving Excel utopia.

My work with Yammer Analytics helped emphasize the need for these alternate data access patterns. Yammer REST APIs are comprehensive and secure. Although the APIs leverage REST/OData, they do so with paging/throttling and require a bearer token in the header of all requests. Power Query and PowerPivot support REST/OData, but not with these additional constraints.

Why Apps for Office

Perhaps you have already solved complex data access challenges by developing Excel plug-ins with traditional technologies such as VSTO, VBA/Macros, etc. So why Apps for Office? Legacy development patterns aren’t going away anytime soon. However, the new app model will be the focus of Office extensibility investments and already has some significant advantages:

  • Apps for Office are developed using open web standards like HTML5/JavaScript and the web platform of your choice (if you prefer PHP or Ruby…use it)
  • Because they are delivered using web technologies, Apps for Office work in both the Office client and in the browser with Office Online
  • Apps for Office have an incredibly small client footprint. In fact, the only thing installed on the client is an xml file describing the app
  • Apps for Office are easily discoverable through private corporate catalogs or the public Office Store, which provides a marketplace for apps developed by Microsoft Partners

Client-Side Data Access

Excel tables will be the primary data structure we use to populate data from an app. Apps for Office interact with Excel tables through the Web Extensibility Framework (WEF). WEF provides JavaScript APIs to read, write, and bind to tables using the Office.TableData data type. This data type defines the raw data and header definitions for an Excel table. I’ve developed two Office.TableData extension methods to generically populate it from any JSON array. This is the primary strategy of the pattern...get the data into JSON (regardless of data source) and we can easily inject it into Excel with these two extensions.

The addHeaders extension method initializes the table headers based on an object parameter. This object will typically be the table’s first row of data. The method will loop through and add headers/columns for each property of the object, ignoring complex data types (ex: typeof == object). Complex data types are ignored as they could represent one:many relationships between tables. This would be an interesting enhancement for a v2, but for now I’m keeping it simple with a single table.

addHeaders Extension to Office.TableData
//extension to Office.TableData to add headers
Office.TableData.prototype.addHeaders = function (obj) {
    var h = new Array();
    for (var prop in obj) {
        //ignore complex types, empty columns, and __type from WCF
        if (typeof (obj[prop]) != 'object' &&
            prop.trim().length > 0 &&
            prop != '__type')
            h.push(prop);
    }
    this.headers = h;
}

 

The addRange extension method appends rows to the Office.TableData based on a JSON array parameter. This method was specifically designed to support multiple appends, as would be common with throttled/paged results. The addRange method only looks at object properties that are defined as headers in the TableData object. As such, the headers should be set (manually or via addHeaders) prior to calling addRange.

addRange Extension to Office.TableData
//extension to Office.TableData to add a range of rows
Office.TableData.prototype.addRange = function (array) {
    for (var i = 0; i < array.length; i++) {
        var itemsTemp = new Array();
        $(this.headers).each(function (ii, ee) {
            itemsTemp.push(array[i][ee]);
        });
        this.rows.push(itemsTemp);
    }
}

 

The sample below outlines the use of these extension methods against JSON returned from a REST call.

  1. Initialize a new Office.TableData object
  2. Use the addHeaders extension method to define the columns/headers based on the first row of JSON data
  3. Use addRange extension method to load all the JSON data into the Office.TableData object
  4. Inject the Office.TableData into Excel by calling our setExcelData function
Using Extensions and Loading Excel Table
//wire up client-side processing
$('#btnSubmit1').click(function () {
    $.ajax({
        url: '../Services/Stocks.svc/GetHistory?stock=' + $('#txtSymbol1').val() + '&fromyear=' + $('#cboFromYear1').val(),
        method: 'GET',
        success: function (data) {
            //initialize the Office.TableData and load headers/rows from data
            var officeTable = new Office.TableData();
            officeTable.addHeaders(data.d[0]);
            officeTable.addRange(data.d);
            setExcelData(officeTable);
        },
        error: function (err) {
            showMessage('Error calling Stock Service');
        }
    });
    return false;
});

 

Using setSelectedDataAsync to Load Excel
//write the TableData to Excel
function setExcelData(officeTable) {
    if (officeTable != null) {
        Office.context.document.setSelectedDataAsync(officeTable, { coercionType: Office.CoercionType.Table }, function (asyncResult) {
            if (asyncResult.status == Office.AsyncResultStatus.Failed) {
                showMessage('Set Selected Data Failed');
            }
            else {
                showMessage('Set Selected Data Success');
            }
        });
    }
}

 

Server-Side Data Access

The client-side pattern works great for REST/OData scenarios with unique authentication/paging/throttling constraints. However, many data sources are not or cannot be exposed as REST/OData. For these scenarios, server-side data access methods must be used. Server-side data access opens up the door to connect with anything .NET supports. That said, the APIs available to Apps for Office can only interact with Excel workbooks client-side. Thus, the server-side strategy is to serialize data into JSON, inject the JSON as script on the page, and then use client-side APIs to integrate the JSON with the Excel workbook as the app loads. The code below shows a server-side button click event that retrieves data, serializes the data as a JSON string, and injects the JSON into the page using the Page's ClientScriptManager.

Server-side Data Access and JSON Serialization

protected void btnSubmit2_Click(object sender, EventArgs e)
{
    //use the stock service to get the history
    //although this sample uses a local service...
    //ANY data access .NET supports could be used
    Services.Stocks s = new Services.Stocks();
    var history = s.GetHistory(txtSymbol2.Text, Convert.ToInt32(cboFromYear2.SelectedValue));
    using (MemoryStream stream = new MemoryStream())
    {
        //serialize the List<StockStats> to a JSON string
        DataContractJsonSerializer ser = new DataContractJsonSerializer(typeof(List<Services.StockStat>));
        ser.WriteObject(stream, history);
        stream.Position = 0;
        StreamReader sr = new StreamReader(stream);
        var json = sr.ReadToEnd();

        //output the json string of stock history as javascript on the page so script can read and process it
        Page.ClientScript.RegisterStartupScript(typeof(Default), "JSONData", String.Format("var jsonData = {0};", json), true);
    }
}

 

The following code shows a client-side check for JSON as the app loads. Notice it handles the JSON identically to our client-side data access sample. That is the beauty of getting all data into JSON format.

Checking for JSON Client-Side on App Load

$(document).ready(function () {
    //check for json data loaded during server-side processing
    if (typeof jsonData == 'object') {
        //initialize the Office.TableData and load headers/rows from data
        var officeTable = new Office.TableData();
        officeTable.addHeaders(jsonData[0]);
        officeTable.addRange(jsonData);
        setExcelData(officeTable);
    }

    //wire up other client-side events...
});

 

Apps Delivered via Templates

One of the powerful concepts enabled through the app model is the ability to combine an App for Office with an Office template. This allows application logic to be packaged and distributed with pre-configured documents. The PayPal Invoicing App is a showcase example of this app/template packaging pattern. Although the PayPal app is more transactional in nature, the pattern also works for delivering reporting scenarios. Ultimately, the app provides a wizard for retrieving data and inserting it into Excel, and the template uses the data with pre-configured models and visualizations. I recently used this pattern to deliver a Yammer Group Analytics app that is illustrated in the video below. Again, this is a very powerful way to deliver self-service reporting with pre-canned visuals.

(Please visit the site to view this video)

Final Thoughts

If you deliver or support an application with reporting needs, chances are you already have demands for getting application data into Excel. I hope this post helped illustrate how to Excel-enable applications that even Power Query and PowerPivot struggle with. Apps for Office can help close these reporting gaps by delivering high-value extensibility to Office without the invasive installation and niche technology of traditional Office plug-ins. Apps for Office are lightweight, work across client/browser, and are easy to develop. In fact, the solution outlined in this post was delivered in well under 100 lines of JavaScript. You can download the solution from the Office 365 Development Patterns and Practices on GitHub.

Developing Apps against the Office Graph


Last week, Microsoft started rolling out Delve to Office 365 customers. Delve is a cool new way to discover relevant information and connections across your work life. As cool as Delve is, I’m even more excited about the Office Graph that powers it. The Office Graph puts sophisticated machine learning on top of all the interactions you and your colleagues make with each other and content in Office 365. With the Office Graph you can identify information trending around people, content you have in common with others, and social connections that traverse organizational boundaries. Best of all, developers can leverage the Office Graph to create new and exciting scenarios that extend Office 365 like never before. In this post, I’ll illustrate some findings in developing my first Office Graph app. The video below illustrates some of the concepts of this post:

(Please visit the site to view this video)

 

NOTE: The Office Graph “learns” through the “actions” of users (aka - “actors”) against other users or objects in Office 365 (ex: site, document, etc.). Actions may take time to show up in the Office Graph because it leverages SharePoint’s search and search analytics technologies. Additionally, the more actors and actions, the more you will get out of the Office Graph. It might take some work to achieve this in a demo or developer tenant of Office 365. As a point of reference, I spent a good part of a Saturday signing in/out of 25 test accounts to generate enough desired activity and waited another 6-12 hours to see that activity show up in the Office Graph. Happy waiting :)

 

APIs and Graph Query Language (GQL)

I was extremely pleased to see detailed MSDN documentation accompany the release of Office Graph/Delve. The article Using GQL with the SharePoint Online Search REST API to query Office graph outlines the Graph Query Language (GQL) syntax and how to use it with the SharePoint Search APIs to query the Office Graph. The original “Marketecture” images for the Office Graph show a spider web of connections between people and content (see below).

In reality, this is exactly how the Office Graph and GQL work. People are “Actors” that perform activities/actions on other actors and objects. Ultimately, an activity/action generates a connection or “Edge”. When you query the Office Graph with GQL, you typically provide the Actor(s) and Action(s) and the Office Graph returns the Objects with “Edges” that match the actor/action criteria. Again, the Office Graph is queried through the standard SharePoint REST APIs for search, but with the additional GraphQuery syntax. Below are a few examples:

REST Examples with GQL

//Objects related to current user (ie - ME)
/_api/search/query?Querytext='*'&Properties='GraphQuery:ACTOR(ME)'

//Objects related to actor 342
/_api/search/query?Querytext='*'&Properties='GraphQuery:ACTOR(342)'

//Objects trending around current user (trending = action type 1020)
/_api/search/query?Querytext='*'&Properties='GraphQuery:ACTOR(ME\, action\:1020)'

//Objects related to current user and actor 342
/_api/search/query?Querytext='*'&Properties='GraphQuery:AND(ACTOR(ME)\, ACTOR(342))'

//Objects recently viewed by current user and modified by actor 342
/_api/search/query?Querytext='*'&Properties='GraphQuery:AND(ACTOR(ME\, action\:1001)\, ACTOR(342\, action\:1003))'

//People the current user works with
/_api/search/query?Querytext='*'&Properties='GraphQuery:ACTOR(ME\, action\:1019)'

//Objects related to actor 342 with 'Delve' in the title
/_api/search/query?Querytext='Title:Delve'&Properties='GraphQuery:ACTOR(342)'

 

Notice the use of ME or specific IDs in the ACTOR part of the queries, and the numeric action type code for the connection. Actors and Actions can be combined in numerous ways to deliver interesting intersections in the Office Graph (a small helper sketch follows the table). Below is a comprehensive list of action types and their visibility scope.

Action Type | Description | Visibility | ID
PersonalFeed | The actor’s personal feed as shown on their Home view in Delve. | Private | 1021
Modified | Items that the actor has modified in the last three months. | Public | 1003
OrgColleague | Everyone who reports to the same manager as the actor. | Public | 1015
OrgDirect | The actor’s direct reports. | Public | 1014
OrgManager | The person whom the actor reports to. | Public | 1013
OrgSkipLevelManager | The actor’s skip-level manager. | Public | 1016
WorkingWith | People whom the actor communicates or works with frequently. | Private | 1019
TrendingAround | Items popular with people whom the actor works or communicates with frequently. | Public | 1020
Viewed | Items viewed by the actor in the last three months. | Private | 1001
WorkingWithPublic | A public version of the WorkingWith edge. | Public | 1033
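To make the combinations concrete, here is a small helper (my own sketch, not code from the product or the solution download) that composes a GQL clause from an actor and one or more of the action IDs above:

Building a GQL Clause (sketch)
//build a search REST url from an actor id and an array of action type ids
function buildGraphQuery(actorId, actionIds) {
    var clauses = actionIds.map(function (id) {
        return "ACTOR(" + actorId + "\\, action\\:" + id + ")";
    });
    //multiple clauses are OR'd together; a single clause stands alone
    var gql = clauses.length > 1 ? "OR(" + clauses.join("\\, ") + ")" : clauses[0];
    return "/_api/search/query?Querytext='*'&Properties='GraphQuery:" + gql + "'";
}

//example: objects the current user viewed (1001) or that are trending around them (1020)
var url = buildGraphQuery('ME', [1001, 1020]);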

 

The results returned from GQL queries are in a similar format to regular search queries against SharePoint’s REST APIs. However, GQL will add an additional “Edges” managed property that includes details about the action, the date of the action, and the weight assigned by the ranking model. Below is an example of this property returned as part of the RelevantResult ResultTable of a Search API call, followed by a small sketch of reading it client-side.

Edges Managed Property
<d:element m:type="SP.KeyValue">
    <d:Key>Edges</d:Key>
    <d:Value>[{"ActorId":41391607,"ObjectId":151088624,
        "Properties":{"Action":1001,"Blob":[],
        "ObjectSource":1,"Time":"2014-08-19T13:46:29.0000000Z",
        "Weight":2}}]
    </d:Value>
    <d:ValueType>Edm.String</d:ValueType>
</d:element>
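Client-side, the Edges value arrives as a JSON string, so reading it (assuming a result row shaped like the search REST API rows used throughout this post) can be as simple as:

Parsing the Edges Property (sketch)
//pull the Edges key out of a search result row and parse it
var edges = null;
$(row.Cells.results).each(function (i, cell) {
    if (cell.Key == 'Edges')
        edges = JSON.parse(cell.Value); //array of {ActorId, ObjectId, Properties}
});
if (edges != null)
    var weight = edges[0].Properties.Weight; //closeness assigned by the ranking model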

 

These Edge properties come into play when performing advanced GQL queries that specify a sort based on time (EdgeTime) or closeness (EdgeWeight). The sample below shows a search query that returns the people the current user works with, sorted by closeness:

WorkingWith by Closeness
//People the current user works with, sorted by closeness
/_api/search/query?Querytext='*'&Properties='GraphQuery:ACTOR(ME\, action\:1019),GraphRankingModel:{"features"\:[{"function"\:"EdgeWeight"}]}'&RankingModelId='0c77ded8-c3ef-466d-929d-905670ea1d72'

 

One important note is that the Office 365 APIs do not currently support a search permission scope (I’m told it is coming). Until that exists, you will need to use the standard SharePoint app model to develop against the Office Graph.

Image Previews

One glimpse at Delve, and you immediately notice an attractively visual user interface. This isn’t your mama’s standard SharePoint search results. Visual previews accompany most Delve results without having to mouse/hover over anything. I could tell these visual previews would be helpful in building my own applications against the Office Graph. After investigation, it appears that (at least for Office docs) a new layouts web handler generates on-demand previews based on some document parameters (very similar to dynamic image renditions). The getpreview.ashx handler accepts the Office document’s SiteID, WebID, UniqueID, and DocID to generate previews. All of these parameters can be retrieved as managed properties on a GQL query as is seen below and used in my app.

Managed Properties for Image Previews

//make REST call to get items trending around the current user GraphQuery:ACTOR(ME\, action\:1020)
$.ajax({
    url: appWebUrl + "/_api/search/query?Querytext='*'&Properties='GraphQuery:ACTOR(ME\\, action\\:1020)'&RowLimit=50" +
        "&SelectProperties='DocId,WebId,UniqueId,SiteID,ViewCountLifetime,Path,DisplayAuthor,FileExtension,Title,SiteTitle,SitePath'",
    method: "GET",
    headers: { "Accept": "application/json; odata=verbose" },
    success: function (data) {
        //...parse the results and build preview URLs (full code in the solution download)
    }
});

 

Building Image Preview URL w/ getpreview.ashx
//build an image preview based on uniqueid, siteid, webid, and docid
o.pic = hostWebUrl + '/_layouts/15/getpreview.ashx?guidFile=' + o.uniqueId + '&guidSite=' + o.siteId + '&guidWeb=' + o.webId + '&docid=' + o.docId + '&ClientType=CodenameOsloWeb&size=small';

 

DISCLAIMER: The getpreview.ashx handler is an undocumented discovery. Use it with caution as it is subject to change without notice until officially documented.

 

The Final Product

For my first Office Graph app, I didn’t try to get too creative with GQL. Instead, I aimed at delivering an alternate/creative visual on top of some standard GQL queries. Specifically, I used d3.js to display Office Graph query results as animated bubbles sized by # of views (a rough sketch of the sizing approach follows the screenshots below). It’s a neat way to see the same results as Delve, but emphasized by popularity.

Delve (Browser)

Delve (Windows 8 Client)

Office Graph Bubbles App
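For those curious about the bubble sizing mechanics, here is a rough d3.js (v3-era) sketch of packing query results into bubbles sized by a count value. The data shape and the viewCount field are placeholders, not the solution’s actual objects:

Sizing Bubbles with d3.js (sketch)
//pack results into circles sized by a per-item count
var pack = d3.layout.pack()
    .size([600, 600])
    .value(function (d) { return d.viewCount; }); //placeholder count field

var svg = d3.select('#chart').append('svg')
    .attr('width', 600)
    .attr('height', 600);

//results would be the parsed rows from the Office Graph query
var nodes = pack.nodes({ children: results });

svg.selectAll('circle')
    .data(nodes.filter(function (d) { return !d.children; })) //leaf nodes only
    .enter().append('circle')
    .attr('cx', function (d) { return d.x; })
    .attr('cy', function (d) { return d.y; })
    .attr('r', function (d) { return d.r; });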

Final Thoughts

I hope this post helped spark your interest in the power of the Office Graph for developers. The Office Graph opens up a new world of intelligence in Office 365 that you can harness in your applications. You can download the “Office Graph Bubbles” app I built for this post HERE.

Developing Apps against the Office Graph – Part 2


Earlier this week I authored a blog post on Developing Apps against the Office Graph. In the post, I used static Graph Query Language (GQL) to display Office Graph results in a visualization different from Delve. In this post, I’ll take the solution further to utilize “edge weight” and dynamic GQL queries that include both objects AND actors. Check out the new solution in the video below, see how I built it in this post, and start contributing to the effort on GitHub!

(Please visit the site to view this video)

Dynamic GQL Queries

In my first post, I used a static GQL query that displayed trending content for the current user (GraphQuery:ACTOR(ME, action:1020)). I made no attempt to query other actors or actions, or to combine GQL in complex AND/OR logic. It was a simple introduction to GQL with apps and served its purpose.

In the new solution, I wanted to add the ability to query different actions of different actors (not just ME). I accomplished this by enabling actor navigation within the visualization and adding an actions filter panel. The active actor will always display in the “nucleus” and will default to the current user (similar to the first app).

Because the actor can change (through navigation) and multiple actions can be selected (through the filter panel), the app needed to support dynamic GQL queries. I broke the queries up into two REST calls…one for object action types (ex: show trending content, show viewed content, etc) and one for actor action types (ex: show direct reports, show colleagues, etc). This made it easy to parse query results with completely different managed properties. Some action types are considered Private and only work for the current user (ex: show viewed content) and others have specific Public counterparts (ex: 1019 = WorkingWith and 1033 = WorkingWithPublic). Notice how this is handled as the dynamic GQL query is constructed.

Building Dynamic GQL

//load the user by querying the Office Graph for trending content, Colleagues, WorkingWith, and Manager
var loadUser = function (actorId, callback) {
    var oLoaded = false, aLoaded = false, children = [], workingWithActionID = 1033; //1033 is the public WorkingWith action type
    if (actorId == 'ME')
        workingWithActionID = 1019; //use the private WorkingWith action type

    //build the object query
    var objectGQL = '', objectGQLcnt = 0;
    if ($('#showTrending').hasClass('selected')) {
        objectGQLcnt++;
        objectGQL += "ACTOR(" + actorId + "\\, action\\:1020)";
    }
    if ($('#showModified').hasClass('selected')) {
        objectGQLcnt++;
        if (objectGQLcnt > 1)
            objectGQL += "\\, ";
        objectGQL += "ACTOR(" + actorId + "\\, action\\:1003)";
    }
    if ($('#showViewed').hasClass('selected') && actorId == 'ME') {
        objectGQLcnt++;
        if (objectGQLcnt > 1)
            objectGQL += "\\, ";
        objectGQL += "ACTOR(" + actorId + "\\, action\\:1001)";
    }
    if (objectGQLcnt > 1)
        objectGQL = "OR(" + objectGQL + ")";

    //determine if the object query should be executed
    if (objectGQLcnt == 0)
        oLoaded = true;
    else {
        //get objects around the current actor
        $.ajax({
            url: appWebUrl + "/_api/search/query?Querytext='*'&Properties='GraphQuery:" + objectGQL + "'&RowLimit=50&SelectProperties='DocId,WebId,UniqueId,SiteID,ViewCountLifetime,Path,DisplayAuthor,FileExtension,Title,SiteTitle,SitePath'",
            method: 'GET',
            headers: { "Accept": "application/json; odata=verbose" },
            success: function (d) {
                if (d.d.query.PrimaryQueryResult != null &&
                    d.d.query.PrimaryQueryResult.RelevantResults != null &&
                    d.d.query.PrimaryQueryResult.RelevantResults.Table != null &&
                    d.d.query.PrimaryQueryResult.RelevantResults.Table.Rows != null &&
                    d.d.query.PrimaryQueryResult.RelevantResults.Table.Rows.results != null &&
                    d.d.query.PrimaryQueryResult.RelevantResults.Table.Rows.results.length > 0) {
                    $(d.d.query.PrimaryQueryResult.RelevantResults.Table.Rows.results).each(function (i, row) {
                        children.push(parseObjectResults(row));
                    });
                }

                oLoaded = true;
                if (aLoaded)
                    callback(children);
            },
            error: function (err) {
                showMessage('<div id="private" class="message">Error calling the Office Graph for objects...refresh your browser and try again (<span class="hyperlink" onclick="javascript:$(this).parent().remove();">dismiss</span>).</div>');
            }
        });
    }

    //build the actor query
    var actorGQL = '', actorGQLcnt = 0;
    if ($('#showColleagues').hasClass('selected')) {
        actorGQLcnt++;
        actorGQL += "ACTOR(" + actorId + "\\, action\\:1015)";
    }
    if ($('#showWorkingwith').hasClass('selected')) {
        actorGQLcnt++;
        if (actorGQLcnt > 1)
            actorGQL += "\\, ";
        actorGQL += "ACTOR(" + actorId + "\\, action\\:" + workingWithActionID + ")";
    }
    if ($('#showManager').hasClass('selected')) {
        actorGQLcnt++;
        if (actorGQLcnt > 1)
            actorGQL += "\\, ";
        actorGQL += "ACTOR(" + actorId + "\\, action\\:1013)";
    }
    if ($('#showDirectreports').hasClass('selected')) {
        actorGQLcnt++;
        if (actorGQLcnt > 1)
            actorGQL += "\\, ";
        actorGQL += "ACTOR(" + actorId + "\\, action\\:1014)";
    }
    if (actorGQLcnt > 1)
        actorGQL = "OR(" + actorGQL + ")";

    //determine if the actor query should be executed
    if (actorGQLcnt == 0)
        aLoaded = true;
    else {
        //get actors around current actor
        $.ajax({
            url: appWebUrl + "/_api/search/query?Querytext='*'&Properties='GraphQuery:" + actorGQL + "'&RowLimit=200&SelectProperties='PictureURL,PreferredName,JobTitle,Path,Department'",
            method: 'GET',
            headers: { "Accept": "application/json; odata=verbose" },
            success: function (d) {
                if (d.d.query.PrimaryQueryResult != null &&
                    d.d.query.PrimaryQueryResult.RelevantResults != null &&
                    d.d.query.PrimaryQueryResult.RelevantResults.Table != null &&
                    d.d.query.PrimaryQueryResult.RelevantResults.Table.Rows != null &&
                    d.d.query.PrimaryQueryResult.RelevantResults.Table.Rows.results != null &&
                    d.d.query.PrimaryQueryResult.RelevantResults.Table.Rows.results.length > 0) {
                    $(d.d.query.PrimaryQueryResult.RelevantResults.Table.Rows.results).each(function (i, row) {
                        children.push(parseActorResults(row));
                    });
                }
                           
                aLoaded = true;
                if (oLoaded)
                    callback(children);
            },
            error: function (err) {
                showMessage('<div id="private" class="message">Error calling Office Graph for actors...refresh your browser and try again (<span class="hyperlink" onclick="javascript:$(this).parent().remove();">dismiss</span>).</div>');
            }
        });
    }
}

 

Using Edges

You might be wondering where ActorIDs come from in the code above. These are returned from the Office Graph as part of the “Edges” managed property that is included in Office Graph query results. In my last post, I provided an example of this property. The Edges value will be a JSON string that parses into an object array. Each item in the object array represents an edge that connects the actor to an object (the object could be any item/user in the Office Graph). The array can have multiple items if multiple edges/connections exist between the actor and object. Here are two examples, described and then shown as returned from the Office Graph.

Scenario 1: Bob (actor with ID of 111) might have an edge/connection to Frank (object with ID of 222) because Frank is his manager (action type 1013) and he works with him (action type 1019).

<d:element m:type="SP.KeyValue">
    <d:Key>Edges</d:Key>
    <d:Value>[{"ActorId":111,"ObjectId":222,
        "Properties":{"Action":1013,"Blob":[],
        "ObjectSource":1,"Time":"2014-08-19T10:46:49.0000000Z",
        "Weight":296}},
        {"ActorId":111,"ObjectId":222,
        "Properties":{"Action":1019,"Blob":[],
        "ObjectSource":1,"Time":"2014-08-17T18:16:21.0000000Z",
        "Weight":51}}]
    </d:Value>
    <d:ValueType>Edm.String</d:ValueType>
</d:element>

 

Scenario 2: Bob (actor with ID of 111) might have an edge/connection to Proposal.docx (object with ID of 333) because he viewed it recently (action type 1001) and it is trending around him (action type 1020).

<d:element m:type="SP.KeyValue">
    <d:Key>Edges</d:Key>
    <d:Value>[{"ActorId":111,"ObjectId":333,
        "Properties":{"Action":1001,"Blob":[],
        "ObjectSource":1,"Time":"2014-08-19T13:12:57.0000000Z",
        "Weight":7014}},
        {"ActorId":111,"ObjectId":333,
        "Properties":{"Action":1020,"Blob":[],
        "ObjectSource":1,"Time":"2014-08-17T11:44:34.0000000Z",
        "Weight":4056}}]
    </d:Value>
    <d:ValueType>Edm.String</d:ValueType>
</d:element>

 

To traverse the Office Graph through an actor, you need to use the ObjectId from the Edges managed property. The ActorId represents the actor the results came from. The new solution uses edge weight (i.e. closeness) instead of view counts for bubble size. Because an object can have multiple edges, I decided to use the largest edge weight, as seen in my parse function below:

Parse Query Results
//parse a search result row into an actor
var parseActorResults = function (row) {
    var o = {};
    o.type = 'actor';
    $(row.Cells.results).each(function (ii, ee) {
        if (ee.Key == 'PreferredName')
            o.title = ee.Value;
        else if (ee.Key == 'PictureURL')
            o.pic = ee.Value;
        else if (ee.Key == 'JobTitle')
            o.text1 = ee.Value;
        else if (ee.Key == 'Department')
            o.text2 = ee.Value;
        else if (ee.Key == 'Path')
            o.path = ee.Value;
        else if (ee.Key == 'DocId')
            o.docId = ee.Value;
        else if (ee.Key == 'Rank')
            o.rank = parseFloat(ee.Value);
        else if (ee.Key == 'Edges') {
            //get the highest edge weight
            var edges = JSON.parse(ee.Value);
            o.actorId = edges[0].ObjectId;
            $(edges).each(function (i, e) {
                var w = parseInt(e.Properties.Weight);
                if (o.edgeWeight == null || w > o.edgeWeight)
                    o.edgeWeight = w;
            });
        }
    });
    return o;
}

 

I also found that document objects had significantly larger edge weights than user objects. To adjust for this across two queries, I perform a normalization on user objects to keep their bubble size similar to document objects.

Edge Weight Normalization

//go through all children to counts and sum for edgeWeight normalization
var cntO = 0, totO = 0, cntA = 0, totA = 0;
$(entity.children).each(function (i, e) {
    if (e.type == 'actor') {
        totA += e.edgeWeight;
        cntA++;
    }
    else if (e.type == 'object') {
        totO += e.edgeWeight;
        cntO++;
    }
});

//normalize edgeWeight across objects and actors
totalEdgeWeight = 0;
$(entity.children).each(function (i, e) {
    //adjust edgeWeight for actors only
    if (e.type == 'actor') {
        //pct of average * average of objects
        e.edgeWeight = (e.edgeWeight / (totA / cntA)) * (totO / cntO);
    }
    totalEdgeWeight += e.edgeWeight;
});

 

More on Preview Images

In the first post, I introduced the getpreview.ashx handler for generating on-demand preview images for documents. Some documents such as Excel and PDF don’t always render previews, so I added some logic for this. Ultimately, I try to pre-load the images (which I was already doing for SVG) and then revert to a static image if the pre-loaded image has a height or width of 0px. I also do this for users that don’t have a profile picture.

Handle Bad Preview Images

//load the images so we can get the natural dimensions
$('#divHide img').remove();
var hide = $('<div></div>');
hide.append('<img src="' + entity.pic + '" />');
$(entity.children).each(function (i, e) {
    hide.append('<img src="' + e.pic + '" />');
});
hide.appendTo('#divHide');
$('#divHide img').each(function (i, e) {
    if (i == 0) {
        entity.width = parseInt(e.naturalWidth);
        entity.height = parseInt(e.naturalHeight);
    }
    else {
        entity.children[i - 1].width = parseInt(e.naturalWidth);
        entity.children[i - 1].height = parseInt(e.naturalHeight);

        if (entity.children[i - 1].width == 0 ||
            entity.children[i - 1].height == 0) {
            if (entity.children[i - 1].type == 'actor') {
                entity.children[i - 1].width = 96;
                entity.children[i - 1].height = 96;
                entity.children[i - 1].pic = '../images/nopic.png';
            }
            else if (entity.children[i - 1].ext == 'xlsx' || entity.children[i - 1].ext == 'xls') {
                entity.children[i - 1].width = 300;
                entity.children[i - 1].height = 300;
                entity.children[i - 1].pic = '../images/excel.png';
            }
            else if (entity.children[i - 1].ext == 'docx' || entity.children[i - 1].ext == 'doc') {
                entity.children[i - 1].width = 300;
                entity.children[i - 1].height = 300;
                entity.children[i - 1].pic = '../images/word.png';
            }
            else if (entity.children[i - 1].ext == 'pdf') {
                entity.children[i - 1].width = 300;
                entity.children[i - 1].height = 300;
                entity.children[i - 1].pic = '../images/pdf.png';
            }
        }
    }
});

 

NOTE: the solution displays cross-domain user profile pictures, which can be buggy with Internet Explorer Security Zones. I’ve written a blog about handling cross-site images. However, I didn’t implement this pattern in the solution due to the potential volume of results. For best results, I recommend one or all of the following:

  • Sign into Office 365 with the “keep me signed in” option checked
  • Install the app in the MySiteHost or at least authenticate against the MySiteHost or OneDrive before opening the app
  • Make sure the app URL is in the same IE Security Zone as the hostweb and MySiteHost

 

Final Thoughts

Due to the popularity of the first post, I’ve decided to post the new solution on GitHub. Hopefully this will facilitate a community effort to add enhancements and fix bugs. Together, we can take the Office Graph to exciting new places.

Developing iOS Apps with Azure and Office 365 APIs


Microsoft will kick off a world tour event this week with the Office 365 Summit (formerly Ignite). With planned stops across 6 continents and fresh new content/tracks, the Summit promises to be the premier Office 365 training event offered by Microsoft. I’ll have the pleasure of delivering sessions across two separate developer tracks with a focus on the new Office 365 APIs. The Office 365 APIs are extremely exciting for the following reasons:

  • The Office 365 APIs promote Office 365 as a platform for developers. Ultimately, Office 365 can be seen as both a Software as a Service (SaaS) offering AND a Platform as a Service (PaaS) offering because of these API investments.
  • The Office 365 APIs remove the silos that exist in traditional Office/SharePoint solution development. Instead of developing solutions IN Office/SharePoint, the Office 365 APIs enable solutions to be developed WITH Office/SharePoint.
  • The Office 365 APIs are delivered using standards such as REST/OData/OAuth, which enable developers from all backgrounds and technologies to leverage Office 365 features in their solutions. Instead of just targeting Microsoft/.NET developers, the APIs are technology agnostic for ANY developer.

As an evangelist, I speak frequently about this value proposition. As I prepared for the Office 365 Summit, I realized I have never leveraged the Office 365 APIs outside of Visual Studio (my comfort zone). As a weekend challenge, I decided to try developing an iOS app with the Office 365 APIs using XCode and Swift (both completely new to me). Are the Office 365 APIs and Azure really that easy to develop with outside of Visual Studio? You be the judge!

(Please visit the site to view this video)

 

NOTE: Visual Studio will always provide the premier developer experience with Azure and the Office 365 APIs. In fact, there are a number of fantastic solutions for developing iOS apps in Visual Studio (Xamarin, Cordova, etc). If not for the personal challenge, I’d probably use one of these technologies that keeps me in Visual Studio. Cordova is particularly interesting as it leverages HTML/JavaScript to deliver apps across mobile platforms (think responsive web wrapped in a simple native app). The Office 365 Summit has a session on developing cross-platform mobile apps with the Office 365 APIs and Cordova.

 

Getting Started

I decided to use Apple’s new Swift programming language, because (unlike Objective-C) I was unable to find ANY documentation, blogs, or starter projects for developing Swift apps with Windows Azure or Office 365. I figured Swift would add to the challenge and hopefully result in something useful for the iOS developer community. I borrowed a Macbook Air from work (yes, we have some Apple hardware in the MTC for demonstrations), downloaded XCode, and started developing. I divided my effort into four milestones…configuring the iOS app in Azure Active Directory (which is the identity provider for Office 365), authenticating to Azure AD within my iOS app, using the Office 365 Discovery Service to locate API end-points, and calling those end-points to consume Office 365 services. My application calls several of these Office 365 services, but Azure AD’s "Common Consent Framework" makes it easy to authenticate once and consume multiple services. Below is a more comprehensive video that illustrates exactly how to get started and accomplish the four milestones that are detailed in this post.

(Please visit the site to view this video)

 

Configuring the Application in Azure AD

Registering the iOS app in Azure Active Directory is an extremely simple task. Visual Studio tooling automates most of this configuration for developers, so non-Microsoft developers will use the Windows Azure Portal (don't worry, it is still easy). MSDN details exactly how to add, update, and delete applications in Azure Active Directory, but I'll provide the summary:

  1. Log into the Azure Management Portal with the Azure AD directory for your tenant
  2. Navigate to the "Active Directory" tab in the left navigation
  3. Click on the directory for your Office 365 tenant in the directory list
  4. Click on the "APPLICATIONS" tab in the top navigation
  5. Click on the "ADD" button at the bottom on the screen to launch the new application wizard
  6. Select "Add an application my organization is developing" in the new application wizard
  7. Give the application a NAME and select "NATIVE CLIENT APPLICATION" as the Type, then click the next button
  8. Provide a unique REDIRECT URI...this can be any valid URL (but it does not have to actually resolve to anything), then click the complete button
  9. Once the Application has been provisioned, click on the "CONFIGURE" link in the top navigation
  10. Locate the "CLIENT ID" and copy this unique identifier for later use (our native app will need this)
  11. Locate the "permissions to other applications" section of the screen and grant the following permissions:
    1. Windows Azure Active Directory: "Read directory data" and "Enable sign-on and read users' profiles"
    2. Office 365 Exchange Online: "Read users' mail"
    3. Office 365 SharePoint Online: "Read users' files"
  12. Click the SAVE button at the bottom of the screen (don't forget this)
Application Permissions in Azure AD:

 

Client Authentication to Azure AD

It is possible to accomplish a manual OAuth flow with Azure Active Directory. However, Microsoft offers the Active Directory Authentication Libraries (ADAL) to simplify this flow on most major platforms (iOS, Android, Windows, etc). ADAL for iOS is available on GitHub and makes easy work of authentication in our app. There are a few inputs ADAL will require to perform the authentication:

  • Authority: this will be https://login.windows.net/<tenantdomain> where <tenantdomain> is the full tenant domain for your Office 365 instance (ex: contoso.onmicrosoft.com)
  • ClientID: this is the unique identifier for the application within Azure Active Directory (value from Step 10 above)
  • RedirectURI: this is the redirect URL configured for the application within Azure Active Directory (value from Step 8 above)
  • Resource: this will be unique for each service we will call into. We will use Office 365's Discovery Service (discussed later) to get the resource URIs for each Office 365 service (ex: SharePoint, Exchange, etc)

Although iOS has some great features to store global configuration data, I took the lazy approach of hard-coding these as global variables.

Global Variables for ADAL

//define global variables for use across view controllers

var tenant:NSString = "rzna.onmicrosoft.com"

var authority:NSString = "https://login.windows.net/\(tenant)"

var clientID:NSString = "2908e4e2-c6a4-4829-b065-b15f7ab3ecef"

var redirectURI:NSURL = NSURL(string: "https://orgdna.azurewebsites.net")

var resources:Dictionary<String, Resource> = Dictionary<String, Resource>()

 

As mentioned before, ADAL makes it really easy to authenticate against Azure Active Directory with these variables. How easy? Is three lines of code easy enough for you? The key is to get all the ADALiOS assets into the XCode project correctly, including the ADALiOS.a library, the header files, and the Storyboards for handling the OAuth flow. I detail the steps in the longer video above. Once ADAL is in place, authenticating involves getting an ADAuthenticationContext with the authority (which will prompt a login) and then using the context to get a resource-specific access token using the resource, client id, and redirect URI. Here is the code with "Microsoft.SharePoint" hard-coded as the resource.

Authenticating to Azure AD with ADAL

//Use ADAL to authenticate the user against Azure Active Directory

var er:ADAuthenticationError? = nil

var authContext:ADAuthenticationContext = ADAuthenticationContext(authority: authority, error: &er)

authContext.acquireTokenWithResource("Microsoft.SharePoint", clientId: clientID, redirectUri: redirectURI, completionBlock: { (result: ADAuthenticationResult!) in

    //validate token exists in response

    if (result.accessToken == nil) {

        println("token nil")

    }

 

The Office 365 Discovery Service

When Microsoft released the Office 365 API Preview, they debuted a Discovery Service to locate API service end-points within Office 365. Many of the service end-points are universal, such as Exchange, which uses https://outlook.office365.com/ews/odata. However, every Office 365 user has a unique OneDrive path (typically at https://<tenant>-my.sharepoint.com/personal/<username>/_api). Because of this, the Discovery Service will be the first API call my iOS app makes after authenticating. I’ll store the results in a global Dictionary so resource details can be looked up from any view controller.

JSON from Office 365 Discovery Service
{
    d =     {
        results =         (
                        {
                Capability = MyFiles;
                EntityKey = "MyFiles@O365_SHAREPOINT";
                ProviderId = "72f988bf-86f1-41af-91ab-2d7cd011db47";
                ProviderName = Microsoft;
                ServiceAccountType = 2;
                ServiceEndpointUri = "https://rzna-my.sharepoint.com/personal/alexd_rzna_onmicrosoft_com/_api";
                ServiceId = "O365_SHAREPOINT";
                ServiceName = "Office 365 SharePoint";
                ServiceResourceId = "https://rzna-my.sharepoint.com/";
                "__metadata" =                 {
                    id = "https://api.office.com/discovery/me/services('MyFiles@O365_SHAREPOINT')";
                    type = "MS.Online.Discovery.ServiceInfo";
                    uri = "https://api.office.com/discovery/me/services('MyFiles@O365_SHAREPOINT')";
                };
            },
                        {
                Capability = Contacts;
                EntityKey = "Contacts@O365_EXCHANGE";
                ProviderId = "72f988bf-86f1-41af-91ab-2d7cd011db47";
                ProviderName = Microsoft;
                ServiceAccountType = 2;
                ServiceEndpointUri = "https://outlook.office365.com/ews/odata";
                ServiceId = "O365_EXCHANGE";
                ServiceName = "Office 365 Exchange";
                ServiceResourceId = "https://outlook.office365.com/";
                "__metadata" =                 {
                    id = "https://api.office.com/discovery/me/services('Contacts@O365_EXCHANGE')";
                    type = "MS.Online.Discovery.ServiceInfo";
                    uri = "https://api.office.com/discovery/me/services('Contacts@O365_EXCHANGE')";
                };
            },
                        {
                Capability = Calendar;
                EntityKey = "Calendar@O365_EXCHANGE";
                ProviderId = "72f988bf-86f1-41af-91ab-2d7cd011db47";
                ProviderName = Microsoft;
                ServiceAccountType = 2;
                ServiceEndpointUri = "https://outlook.office365.com/ews/odata";
                ServiceId = "O365_EXCHANGE";
                ServiceName = "Office 365 Exchange";
                ServiceResourceId = "https://outlook.office365.com/";
                "__metadata" =                 {
                    id = "https://api.office.com/discovery/me/services('Calendar@O365_EXCHANGE')";
                    type = "MS.Online.Discovery.ServiceInfo";
                    uri = "https://api.office.com/discovery/me/services('Calendar@O365_EXCHANGE')";
                };
            },
                        {
                Capability = Mail;
                EntityKey = "Mail@O365_EXCHANGE";
                ProviderId = "72f988bf-86f1-41af-91ab-2d7cd011db47";
                ProviderName = Microsoft;
                ServiceAccountType = 2;
                ServiceEndpointUri = "https://outlook.office365.com/ews/odata";
                ServiceId = "O365_EXCHANGE";
                ServiceName = "Office 365 Exchange";
                ServiceResourceId = "https://outlook.office365.com/";
                "__metadata" =                 {
                    id = "https://api.office.com/discovery/me/services('Mail@O365_EXCHANGE')";
                    type = "MS.Online.Discovery.ServiceInfo";
                    uri = "https://api.office.com/discovery/me/services('Mail@O365_EXCHANGE')";
                };
            }
        );
    };
}

 

Calling the Office 365 Discovery Service

//use the discovery service to see all resource end-points

let request = NSMutableURLRequest(URL: NSURL(string: "https://api.office.com/discovery/me/services"))

request.HTTPMethod = "GET"

request.setValue("application/json; odata=verbose", forHTTPHeaderField: "accept")

request.setValue("Bearer \(result.accessToken)", forHTTPHeaderField: "Authorization")

 

//make the call to the discovery service

NSURLConnection.sendAsynchronousRequest(request, queue: NSOperationQueue(), completionHandler:{ (response:NSURLResponse!, data: NSData!, error: NSError!) -> Void in

    var error:NSError? = nil

   

    let jsonResult: NSDictionary! = NSJSONSerialization.JSONObjectWithData(data, options:NSJSONReadingOptions.MutableContainers, error: &error) as? NSDictionary

   

    if (jsonResult != nil) {

        println(jsonResult)

        //parse the json into a Resource dictionary

        let results:NSArray = (jsonResult["d"] as NSDictionary)["results"] as NSArray

        for result in results {

            var r = Resource()

            r.Capability = result["Capability"] as? NSString

            r.EntityKey = result["EntityKey"] as? NSString

            r.ProviderId = result["ProviderId"] as? NSString

            r.ProviderName = result["ProviderName"] as? NSString

            r.ServiceAccountType = result["ServiceAccountType"] as? Int

            r.ServiceEndpointUri = result["ServiceEndpointUri"] as? NSString

            r.ServiceId = result["ServiceId"] as? NSString

            r.ServiceName = result["ServiceName"] as? NSString

            r.ServiceResourceId = result["ServiceResourceId"] as? NSString

            resources[r.Capability!] = r

        }

       

        //make sure we found the MyFiles items

        if (resources["MyFiles"] != nil) {

            //get an access token for this resource and query for

            self.loadFiles(resources["MyFiles"]!)

        }

    } else {

        // couldn't load JSON, look at error

        println("Unable to load Discovery json")

    }

})

 

Calling Office 365 APIs

I built my iOS app as a tabbed application with three view controllers. The first tab will display files from OneDrive. The second tab will display mail from Exchange. The third tab will display colleagues from Azure Active Directory. API calls to a specific resource require a resource-specific access token. Luckily, ADAL makes this easy to accomplish. I simply call acquireTokenWithResource on the ADAuthenticationContext to get a resource-specific access token. Then I can make a normal REST call with the access token written to the request header.

Calling OneDrive for Business API

//calls the Office 365 APIs to get a list of files in the users OneDrive for Business

func loadFiles(resource: Resource) {

    //use the resource to get a resource-specfic token

    var er:ADAuthenticationError? = nil

    var authContext:ADAuthenticationContext = ADAuthenticationContext(authority: authority, error: &er)

    authContext.acquireTokenWithResource(resource.ServiceResourceId, clientId: clientID, redirectUri: redirectURI, completionBlock: { (result: ADAuthenticationResult!) in

        if (result.accessToken == nil) {

            println("token nil")

        }

        else {

            //build API string to get users OneDrive for Business Files

            let request = NSMutableURLRequest(URL: NSURL(string: "\(resource.ServiceEndpointUri!)/Files"))

            request.HTTPMethod = "GET"

            request.setValue("Bearer \(result.accessToken)", forHTTPHeaderField: "Authorization")

            request.setValue("application/json; odata=verbose", forHTTPHeaderField: "accept")

               

            //make the call to the OneDrive API

            NSURLConnection.sendAsynchronousRequest(request, queue: NSOperationQueue(), completionHandler:{ (response:NSURLResponse!, data: NSData!, error: NSError!) -> Void in

                var error:NSError? = nil

                let jsonResult: NSDictionary! = NSJSONSerialization.JSONObjectWithData(data, options:NSJSONReadingOptions.MutableContainers, error: &error) as? NSDictionary

                   

                if (jsonResult != nil) {

                    //parse the json into File objects in the table view

                    let results:NSArray = (jsonResult["d"] as NSDictionary)["results"] as NSArray

                    for result in results {

                        if ((result["Size"] as Int) > 0) {

                            //add to array

                            var f = File(name: result["Name"] as NSString, modified: result["TimeLastModified"] as NSString)

                            self.files.append(f)

                        }

                    }

                       

                    //update the UI

                    dispatch_async(dispatch_get_main_queue(), {

                        self.tblFiles.reloadData()

                        self.tblFiles.hidden = false

                        self.spinner.hidden = true

                        self.spinner.stopAnimating()

                        println("Files loaded")

                    })

                       

                } else {

                    // couldn't load JSON, look at error

                    println("Unable to load Files json")

                }

            })

        }

    })

}

 

Calling Mail API

//calls the Office 365 APIs to get a list of mail for the user

func loadMail(resource: Resource) {

    //use the resource to get a resource-specfic token

    var er:ADAuthenticationError? = nil

    var authContext:ADAuthenticationContext = ADAuthenticationContext(authority: authority, error: &er)

    authContext.acquireTokenWithResource(resource.ServiceResourceId, clientId: clientID, redirectUri: redirectURI, completionBlock: { (result: ADAuthenticationResult!) in

        if (result.accessToken == nil) {

            println("token nil")

        }

        else {

            //build API string to get users mail from Exchange

            let request = NSMutableURLRequest(URL: NSURL(string: "\(resource.ServiceEndpointUri!)/Me/Inbox/Messages"))

            request.HTTPMethod = "GET"

            request.setValue("Bearer \(result.accessToken)", forHTTPHeaderField: "Authorization")

            request.setValue("application/json", forHTTPHeaderField: "accept")

               

            //make the call to the OneDrive API

            NSURLConnection.sendAsynchronousRequest(request, queue: NSOperationQueue(), completionHandler:{ (response:NSURLResponse!, data: NSData!, error: NSError!) -> Void in

                var error:NSError? = nil

                let jsonResult: NSDictionary! = NSJSONSerialization.JSONObjectWithData(data, options:NSJSONReadingOptions.MutableContainers, error: &error) as? NSDictionary

                   

                if (jsonResult != nil) {

                    //parse the json into File objects in the table view

                    let results:NSArray = jsonResult["value"] as NSArray

                    for result in results {

                        var m = Mail(sender: (result["From"] as NSDictionary)["Name"] as NSString!, subject: result["Subject"] as NSString)

                        self.mail.append(m)

                    }

                       

                    //update the UI

                    dispatch_async(dispatch_get_main_queue(), {

                        //bind the table

                        self.tblMail.reloadData()

                        self.tblMail.hidden = false

                        self.spinner.hidden = true

                        self.spinner.stopAnimating()

                        println("Mail loaded")

                    })

                       

                } else {

                    // couldn't load JSON, look at error

                    println("Unable to load Mail json")

                }

            })

        }

    })

}

 

Calling Azure AD Graph API

//editing change action fires as the user types in the txtSearch textbox

@IBAction func editingChanged(sender: AnyObject) {

    //make sure the textbox has text

    if (countElements(txtSearch.text) > 0 && aadToken != nil) {

        //build REST call to Azure AD for user lookup

        let request = NSMutableURLRequest(URL: NSURL(string: "https://graph.windows.net/\(tenant)/users?$filter=startswith(displayName,'\(txtSearch.text)')&api-version=2013-11-08"))

        request.HTTPMethod = "GET"

        request.setValue("Bearer \(aadToken)", forHTTPHeaderField: "Authorization")

        request.setValue("application/json", forHTTPHeaderField: "accept")

           

        //send the request to the Azure AD Graph

        NSURLConnection.sendAsynchronousRequest(request, queue: NSOperationQueue(), completionHandler:{ (response:NSURLResponse!, data: NSData!, error: NSError!) -> Void in

            var error:NSError? = nil

            let jsonResult: NSDictionary! = NSJSONSerialization.JSONObjectWithData(data, options:NSJSONReadingOptions.MutableContainers, error: &error) as? NSDictionary

               

            if (jsonResult != nil) {

                //parse json results into Array

                let results:NSArray = jsonResult["value"] as NSArray

                   

                //clear out the users array so we can re-populate

                self.users.removeAll(keepCapacity: false)

                for result in results {

                    var u = User(name: result["displayName"] as NSString, email: result["mail"] as NSString)

                    self.users.append(u)

                }

                   

                //update the UI

                dispatch_async(dispatch_get_main_queue(), {

                    //bind the table

                    self.tblUsers.reloadData()

                })

                   

            } else {

                // couldn't load JSON, look at error

                println("Unable to load Users json")

            }

        })

    }

}

 

Here are some screenshots of the app running in the iOS Simulator.

Azure AD Authentication | OneDrive for Business Files
  

 

Mail from Exchange Online

 

Azure AD Directory Search

  

Conclusion

As expected, I spent far more time figuring out the Mac, XCode, and Swift than in consuming the Office 365 APIs. I can safely say (from experience now) that the Office 365 APIs and Azure are simple to integrate into almost any development platform. As long as the platform supports basic http requests (including header info), you can light it up with powerful Office 365 services!

You can download my XCode project HERE. Please note you need to register your own Application in your own Azure Active Directory and then change the Tenant, ClientID, and RedirectUri global variables accordingly.

Searching with the Office 365 APIs


Today, Microsoft quietly launched the “search” permission scope in Azure AD Applications. This is a huge step in delivering new scenarios with the Office 365 APIs that leverage SharePoint search and the Office Graph. In this post, I’ll illustrate how to leverage search with the Office 365 APIs and use it to query site collections and modern groups a user has access to (a difficult/impossible task with the Office 365 APIs until today). The video below illustrates some of the key components of the post.

(Please visit the site to view this video)

Background

Apps for SharePoint and Office are considered “contextual” since they are launched from the platform they extend (often with contextual detail such as the host site URL). In contrast, applications built with the Office 365 APIs can stand on their own and connect INTO Office 365. Because these modern applications lack contextual launch parameters, the Office 365 APIs use a Discovery Service to get information on the resources available to the user/tenant. The Discovery Service provides high-level resource details such as Mail, Contacts, Calendar and OneDrive. It does not provide detail on every site collection a user has access to (a frequent ask by developers). The new search permission scope for Azure AD Applications can be used to query more granular details.
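For example (my illustration, using the well-known contentclass managed property technique rather than a query from this release’s documentation), a single search call can enumerate the site collections a user can access:

Querying Site Collections via Search (sketch)
//Site collections the current user can access
/_api/search/query?Querytext='contentclass:STS_Site'&TrimDuplicates=false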

Adding the Search Permission

Configuring an application with permissions to Office 365 is usually accomplished through the “Add Connected Service” wizard of Visual Studio. Behind the scenes, this wizard registers an application in Azure AD and configures its “permissions to other applications”. The ability for Azure AD to manage permissions to multiple apps/services is often referred to as “common consent”. Search is a brand new permission we can add through common consent. At the time of this post, the Visual Studio tooling does not surface this new permission scope. However, it can easily be added by going to the “Configure” tab of the application in Azure AD.

Manually Adding Search Permission in Azure

Performing Search in Code

The Office 365 APIs provide a comprehensive set of libraries for authentication, discovery, and performing operations against resources in Office 365. Although strongly-typed APIs exist for many operations, search is brand new to the Office 365 APIs, so no strongly-typed functions exist yet. That doesn’t mean we can’t use the Office 365 APIs to get an access token that we can use with the REST endpoints for SharePoint search. To do this, we need a resource-specific access token. I was unable to perform a successful search using the “MyFiles” resource; I suspect the “MyFiles” service resource URI is specialized for file access only. However, the Discovery Service should have a new “RootSite” resource that will work. The “RootSite” represents the root site collection in SharePoint Online (https://tenant.sharepoint.com) and is not to be confused with the OneDrive root site or “My Site Host” (https://tenant-my.sharepoint.com).

Using Discovery Service to get "RootSite" Access Token

private async Task<string> getAccessToken()
{
    // fetch identifiers from the user claims
    var signInUserId = ClaimsPrincipal.Current.FindFirst(ClaimTypes.NameIdentifier).Value;
    var userObjectId = ClaimsPrincipal.Current.FindFirst(SettingsHelper.ClaimTypeObjectIdentifier).Value;

    // setup app info for AuthenticationContext
    var clientCredential = new ClientCredential(SettingsHelper.ClientId, SettingsHelper.ClientSecret);
    var userIdentifier = new UserIdentifier(userObjectId, UserIdentifierType.UniqueId);

    // create auth context (note: no token cache leveraged)
    authContext = new AuthenticationContext(SettingsHelper.AzureADAuthority);

    // create O365 discovery client
    DiscoveryClient discovery = new DiscoveryClient(new Uri(SettingsHelper.O365DiscoveryServiceEndpoint),
        async () => {
            var authResult = await authContext.AcquireTokenSilentAsync(SettingsHelper.O365DiscoveryResourceId, clientCredential, userIdentifier);
            return authResult.AccessToken;
    });

    // query discovery service for the 'RootSite' endpoint
    dcr = await discovery.DiscoverCapabilityAsync("RootSite");

    // get access token for RootSite
    return authContext.AcquireToken(dcr.ServiceResourceId, clientCredential, new UserAssertion(userObjectId, UserIdentifierType.UniqueId.ToString())).AccessToken;
}

 

With our resource-specific access token, we can add it as a bearer token in the authorization header of our REST calls. The code below illustrates the full REST call and the parsing of JSON into strongly-typed classes.

Performing REST Search Query and Parsing Results

private static async Task<List<SearchResult>> getSearchResults(string query)
{
    List<SearchResult> results = new List<SearchResult>();
    SearchModel model = new SearchModel();
    var token = await model.getAccessToken();
    HttpClient client = new HttpClient();
    client.DefaultRequestHeaders.Add("Authorization", "Bearer " + token);
    client.DefaultRequestHeaders.Add("Accept", "application/json; odata=verbose");
    using (HttpResponseMessage response = await client.GetAsync(new Uri(model.dcr.ServiceEndpointUri.ToString() + query, UriKind.Absolute)))
    {
        if (response.IsSuccessStatusCode)
        {
            XElement root = Json2Xml(await response.Content.ReadAsStringAsync());
            var items = root.Descendants("RelevantResults").Elements("Table").Elements("Rows").Elements("results").Elements("item");
            foreach (var item in items)
            {
                //loop through the properties returned for this item
                var newItem = new SearchResult();
                foreach (var prop in item.Descendants("item"))
                {
                    if (prop.Element("Key").Value == "Title")
                        newItem.Title = prop.Element("Value").Value;
                    else if (prop.Element("Key").Value == "Path")
                        newItem.Path = prop.Element("Value").Value;
                    else if (prop.Element("Key").Value == "SiteLogo")
                        newItem.SiteLogo = prop.Element("Value").Value;
                }

                //only return site collections in primary domain...not the onedrive or public domains
                //this would probably be better placed in the original search query
                if (newItem.Path.ToLower().Contains(model.dcr.ServiceResourceId.ToLower()))
                    results.Add(newItem);
            }
        }
    }

    return results;
}

 

The getSearchResults method takes a search string in REST format. The sample provides two simple examples…one that queries all site collections for the user (contentclass:sts_site) and one that queries all modern groups for a user (contentclass:sts_site WebTemplate:GROUP). It is generic enough to take just about any search string formatted for REST, but these scenarios are of specific interest to me and my customers. Notice that all search strings start with /search. This is because the RootSite resource endpoint URI ends with _api.
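
To make the composition concrete: if discovery returned a ServiceEndpointUri of https://tenant.sharepoint.com/_api (tenant is a placeholder), the concatenation inside getSearchResults produces a standard SharePoint search REST URL:

// ServiceEndpointUri ("https://tenant.sharepoint.com/_api") + query ("/search/query?...")
var requestUrl = new Uri(model.dcr.ServiceEndpointUri.ToString()
    + "/search/query?querytext='contentclass:sts_site'", UriKind.Absolute);
// -> https://tenant.sharepoint.com/_api/search/query?querytext='contentclass:sts_site'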

Passing in Queries

public static async Task<List<SearchResult>> GetSiteCollections()
{
    return await getSearchResults("/search/query?querytext='contentclass:sts_site'&trimduplicates=true&rowlimit=50&SelectProperties='WebTemplate,Title,Path,SiteLogo'");
}

public static async Task<List<SearchResult>> GetModernGroups()
{
    return await getSearchResults("/search/query?querytext='contentclass:sts_site WebTemplate:GROUP'&trimduplicates=true&rowlimit=50&SelectProperties='WebTemplate,Title,Path,SiteLogo'");
}

 

Final Thoughts

Microsoft is really doubling down on the Office 365 APIs, and incorporating the search permission is a great example. I’m really excited to see what additional permissions 2015 will bring. You can download the sample MVC app used in the post and video HERE.


Connecting to SharePoint from an Office App


I'm frequently asked by developers how to connect to SharePoint from an App for Office. There are several ways to accomplish this, so I thought I'd document the patterns I've used. I'll detail five patterns:

  • Explicit Login
  • SharePoint-hosted Office App
  • App-only Permissions
  • Permissions "on the fly"
  • Office 365 APIs

Apps for Office typically lack the user identity and contextual information that are important for connecting to SharePoint. You will see that most of the patterns require the user to authenticate and/or provide the details of the site(s) to connect to. These hardships may improve as Office evolves and new APIs/SDKs become available, but they are a reality of the current app model.

Explicit Login (Code)

(Please visit the site to view this video)

The explicit login approach uses an app-hosted login form and CSOM to explicitly set credentials on the SharePoint ClientContext. Although not elegant, an explicit login is a simple approach for connecting to SharePoint from an App for Office. CSOM's SharePointOnlineCredentials class can be used to provide credentials to SharePoint Online (similar approaches may be feasible on-premises). Store apps should never handle/prompt for user credentials; OAuth was introduced into the app model precisely to prevent 3rd parties from handling user credentials. As such, this solution should only be considered for private apps (those published to a private app catalog).

Explicit Login with SharePointOnlineCredentials

protected void btnSignin_Click(object sender, EventArgs e)
{
    //read lists
    using (ClientContext client = new ClientContext(txtSite.Text))
    {
        //set credentials on the clientcontext
        client.Credentials = new SharePointOnlineCredentials(txtUsername.Text, SecurePwd);

        //get all lists for the site
        var lists = client.Web.Lists;
        client.Load(lists);
        client.ExecuteQuery();

        //bind the lists to the lists dropdown
        List<ListDetail> listDetails = new List<ListDetail>();
        foreach (var list in lists)
        {
            listDetails.Add(new ListDetail() { Id = list.Id, Title = list.Title });
        }
        cboList.DataSource = listDetails;
        cboList.DataBind();
        divLogin.Visible = false;
        divSelectList.Visible = true;
        lblHeader.Text = "Select List";
    }
}

 

SharePoint-hosted Office App (Code)

(Please visit the site to view this video)

Apps for Office can be delivered through a SharePoint app, which gives them direct context to SharePoint. In this scenario, the app is delivered in an Office template and its web content is rendered from a SharePoint app web. The Office template (containing the Office App) can be hosted in a SharePoint module, or as a content type in a document library. Either way, the Office app is launched in a template from SharePoint and not from the insert app button in Office. This solution works well for template scenarios or scenarios where the Office app is part of a larger SharePoint solution/site. SharePoint-hosting the Office app will limit the developer platform to client-side technologies. If you wanted to work with managed libraries/SDKs for Office (ex: Open XML SDK), you would need to do this behind a service that client-side script could consume. It is possible to publish an app to the Office Store using this approach; however, it would be published as a SharePoint app, not an Office app. The code below shows how the app for Office can be SharePoint "aware" by using window.location or Office.context.document.url to get the app web and host web URLs for REST calls.

Getting AppWebUrl and HostWebUrl in SharePoint-hosted Office App

$(document).ready(function () {
    //determine the appweb and hostweb URLs based on the window.location
    var basePath = window.location.toString();
    basePath = basePath.substring(0, basePath.toLowerCase().indexOf("splistreader"));
    var appWebUrl = basePath + "splistreader";
    var hostWebUrl = basePath.substring(0, basePath.indexOf('-')) + basePath.substring(basePath.indexOf('.'));

    //get the lists from the host web
    $.ajax({
        url: appWebUrl + "/_api/SP.AppContextSite(@target)/web/lists?@target='" + hostWebUrl + "'",
        headers: {
            "Accept": "application/json; odata=verbose"
        },
        success: function (data) {
            $(data.d.results).each(function (i, e) {
                $("#cboList").append($("<option value='" + e.Id + "'>" + e.Title + "</option>"));
            });
            $("#btnGetData").removeAttr('disabled');
        },
        error: function (e) {
            $('#message').html('<div class="alert alert-danger" role="alert">Error occurred!</div>');
            $('#message').show();
        }
    });
});

 

App-only Permissions (Code)

(Please visit the site to view this video)

Another approach I've used to connect to SharePoint from an Office app is through a provider-hosted SharePoint app with app-only permissions. App-only permissions enable SharePoint to be queried without the normal user context; the app for Office instead performs operations against SharePoint as an app and not a user. However, the lack of user context doesn't mean this will work with zero context. At minimum, the Office app needs the URLs for any site(s) it will communicate with. These could be hard-coded for specific in-house scenarios or captured and cached from a user prompt. Enabling connections to any SharePoint site would require tenant-scoped permission in the SharePoint app. Due to the tenant permission scope and the lack of user/tenant context, this approach is not recommended for multi-tenant apps. You should be extra careful how you expose app-only functionality, as it does not adhere to user permissions and could expose sensitive information to users.

App-only ClientContext with CSOM

public ActionResult Index(string site)
{
    List<SPList> list = new List<SPList>();

    //get site
    Uri siteUri = new Uri(site);

    //Get the realm for the URL
    string realm = TokenHelper.GetRealmFromTargetUrl(siteUri);

    //Get the access token for the URL.  Requires this app to be registered with the tenant
    string accessToken = TokenHelper.GetAppOnlyAccessToken(TokenHelper.SharePointPrincipal, siteUri.Authority, realm).AccessToken;

    //Get client context with access token
    using (var clientContext = TokenHelper.GetClientContextWithAccessToken(siteUri.ToString(), accessToken))
    {
        var lists = clientContext.Web.Lists;
        clientContext.Load(lists);
        clientContext.ExecuteQuery();

        foreach (var l in lists)
        {
            list.Add(new SPList
            {
                Id = l.Id,
                SiteUrl = site.ToLower(),
                Title = l.Title
            });
        }
    }

    return View(list);
}

 

Permissions "on the fly" (Code)

(Please visit the site to view this video)

Permissions "on the fly" is a technique that allows an app to dynamically ask for permissions to SharePoint resources at runtime. The dynamic permission request presents SharePoint with an app ID and desired permission(s). This technique requires a SharePoint app to be registered through the seller dashboard (or appregnew.aspx) before the app can request permissions (SharePoint won't give permissions to an app without registration details it can validate). The permissions "on the fly" flow (also called the Authentication Code OAuth flow) will provide the app with a short-lived authorization code that can be used to get an access token for SharePoint resources. The app will also get a refresh token that can be safely cached to avoid going through the permission flow again in the future.

To initiate the flow, the app should load or redirect to /_layouts/15/OAuthAuthorize.aspx relative to the desired SharePoint site the app wants to access. This should include the following URL parameters:

  • client_id (required): The app id of the SharePoint app as registered in the seller dashboard or appregnew.aspx
  • scope (required): Space-separated list of permissions the app is requesting (ex: Web.Manage). A comprehensive list of scope options can be found HERE
  • response_type (required): The authorization type you want back from the authorize flow. For permissions on the fly this will always be "code"
  • redirect_uri (required): Where we want the authorization response returned. This should match the redirect uri registered in the seller dashboard or appregnew.aspx
  • isdlg (optional): Flag indicating whether the Authentication Code OAuth flow is performed in a dialog
  • state (optional): Can be used to pass values through the OAuth flow and is critical for referencing data between disconnected windows

 

I recommend launching the OAuthAuthorize.aspx page in a dialog window, since the page is not responsive enough to render nicely in most Office app shapes. The pictures below illustrate the user experience difference in a task pane app.

OAuthAuthorize.aspx without dialog vs. OAuthAuthorize.aspx with dialog

 

Using a dialog window delivers a better OAuthAuthorize.aspx page experience, but also introduces an issue: Office app isolation prevents the dialog from communicating back into the app. A solution is to have the app pass a reference (ex: a GUID) through the state URL parameter. The redirect page (which receives the authorization code) can use this reference to talk back into the app (via SignalR, or cached in a database the app can refresh and read). For the sample app I've provided, I simply prompt the user to refresh the page after the authorization process is complete (this can be seen in the video and images above).
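
For completeness, here is one possible shape of the SignalR option. This is a hypothetical sketch (the hub, group, and client method names are invented) and not part of the sample download:

// hypothetical SignalR hub: the app's page joins a group named after the GUID it passed in "state"
public class AuthHub : Microsoft.AspNet.SignalR.Hub
{
    public System.Threading.Tasks.Task Join(string reference)
    {
        return Groups.Add(Context.ConnectionId, reference);
    }
}

// ...the redirect page's controller can then signal the app once the authorization code is processed:
// var hub = Microsoft.AspNet.SignalR.GlobalHost.ConnectionManager.GetHubContext<AuthHub>();
// hub.Clients.Group(stateReference).authComplete(); //client refreshes its data when this fires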

"Permissions on the fly" call using OAuthAuthorize.aspx

<script type="text/javascript">
    $(document).ready(function () {
        $("#btnAddSite").click(function () {
            //launch the popup
            if ($("#txtSiteUrl").val().length >= 10) {
                var url = $("#txtSiteUrl").val();
                if (url.charAt(url.length - 1) != '/')
                    url += '/';

                //build a redirect URI
                var redirect = encodeURI("https://localhost:44367/Site/Add") + "&state=" + $("#hdnUserID").val() + "|" + encodeURI(url.toLowerCase());
                url += "_layouts/15/OAuthAuthorize.aspx?IsDlg=1&client_id=b36fb934-b990-41a5-b9e7-1dddf66ded2e&scope=Web.Manage&response_type=code&redirect_uri=";
                url += redirect;
                window.open(url, "", "width=720, height=300, scrollbars=0, toolbar=0, menubar=0, resizable=0, status=0, titlebar=0");

                $("#refreshModal").modal("show");
            }
        });
    });
</script>

 

The OAuthAuthorize.aspx page returns an authorization code to the redirect URI. This authorization code can be used to get access and refresh tokens for SharePoint.

Controller action to handle authorization code response from OAuthAuthorize.aspx

public ActionResult Add()
{
    //check for error
    if (Request["error"] != null)
    {
        //Redirect to error
        return RedirectToAction("Error", "Home", new { error = Request["error"] });
    }
    else if (Request["code"] != null)
    {
        //get state parameters
        string[] stateParams = Request["state"].ToLower().Split('|');
        Guid userID = new Guid(stateParams[0]);
        Uri siteURI = new Uri(stateParams[1]);
        string siteURIString = stateParams[1];

        //get realm and token for site
        string realm = TokenHelper.GetRealmFromTargetUrl(siteURI);
        var token = TokenHelper.GetAccessToken(Request["code"], TokenHelper.SharePointPrincipal, siteURI.Authority, realm, new Uri("https://localhost:44367/Site/Add"));

        //use access token to establish clientContext
        using (var clientContext = TokenHelper.GetClientContextWithAccessToken(stateParams[1], token.AccessToken))
        {
            clientContext.Load(clientContext.Web.CurrentUser);
            clientContext.ExecuteQuery();

            //check if a user exists in the database...create new if needed
            using (ShptPermsOnFlyEntities entities = new ShptPermsOnFlyEntities())
            {
                var user = entities.Users.FirstOrDefault(i => i.UserLogin == clientContext.Web.CurrentUser.LoginName.ToLower() && i.Id == userID);
                if (user == null)
                {
                    //create the user
                    user = new User()
                    {
                        Id = userID,
                        UserLogin = clientContext.Web.CurrentUser.LoginName.ToLower()
                    };
                    entities.Users.Add(user);
                    entities.SaveChanges();
                }

                //add the site to the site listing if it doesn't already exist
                var site = entities.Sites.FirstOrDefault(i => i.UserId == user.Id && i.SiteURI == siteURIString);
                if (site == null)
                {
                    //create the site listing
                    site = new Site()
                    {
                        Id = Guid.NewGuid(),
                        UserId = user.Id,
                        SiteURI = stateParams[1],
                        Token = token.RefreshToken
                    };
                    entities.Sites.Add(site);
                    entities.SaveChanges();
                }
                else
                {
                    //update the refresh token
                    site.Token = token.RefreshToken;
                    entities.SaveChanges();
                }
            }
        }
    }
    return View();
}

 

Although this approach is one of the most complex, it is one of the best options for multi-tenant apps targeting the Office Store and a pattern used by several popular apps.

Office 365 APIs (Code)

(Please visit the site to view this video)

The Office 365 APIs have a huge advantage over the other scenarios with regard to SharePoint context. These APIs leverage a discovery service that provides contextual information about users and SharePoint. An App for Office can use this service for basic SharePoint details (root site and OneDrive URLs) and perform search queries to deliver a "site picker" for users (instead of having users type site URLs).

The Office 365 APIs also pose a challenge when combined with an App for Office. Apps for Office are hosted in a browser control/iframe that "frames" the Azure AD login process. This login process does not work well when displayed in a frame. In fact, clicking the "Sign in" button causes a new browser window to open for completing the OAuth flow. Unfortunately, the new browser window has no way to communicate back to the App for Office. Similar to "Permissions on the fly", we can pass a reference code into the OAuth flow. However, the Office 365 SDK for ASP.NET/MVC does not currently expose the ability to customize the authorization request parameters. Instead, we will perform a manual OAuth flow that has been detailed by Chaks and Matthias Leibmann. Here are the high-level steps for this flow and cross-window communication:

  1. Check for a user cookie (which maps to a refresh token in a database)
  2. If the user doesn't have a cookie…generate a new GUID and store it as a cookie (a minimal sketch of these first two steps follows this list)
  3. Launch the OAuth flow with Azure AD in a new window (passing the GUID as a reference)
  4. Use the authorization code returned from the OAuth flow to get access and refresh tokens
  5. Store the refresh token in the database with the GUID user reference
  6. Prompt the user to refresh the Office app (which can now look up the refresh token by the GUID user reference stored in a cookie)
  7. Use the refresh token in the app to get resource-specific access tokens for data retrieval
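
The samples below pick up at step 3, but a minimal sketch of steps 1 and 2 might look like the following (the "UserRef" cookie name is a made-up example):

// hypothetical MVC action for steps 1-2: identify the user by a GUID cookie
public ActionResult Index()
{
    Guid userId;
    var cookie = Request.Cookies["UserRef"];
    if (cookie == null || !Guid.TryParse(cookie.Value, out userId))
    {
        // no cookie yet...generate a new GUID and store it as a cookie
        userId = Guid.NewGuid();
        Response.Cookies.Add(new HttpCookie("UserRef", userId.ToString()) { Expires = DateTime.UtcNow.AddYears(1) });
    }

    // the view uses this GUID (ex: the hdnUserID hidden input) and whether a
    // refresh token already exists for it to decide if the OAuth flow is needed
    ViewBag.UserID = userId;
    return View();
}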

This script sample launches the OAuth flow for unknown users. Notice the user GUID reference we are passing on the redirect URI (stored in the hdnUserID hidden input).

Manual authorization request with Azure AD

@section Scripts {
<script type="text/javascript">
    $(document).ready(function () {
        var exists = @Model.UserExists.ToString().ToLower();
        if (!exists) {
            var redirect = "https://login.windows.net/common/oauth2/authorize";
            redirect += "?client_id=2a337874-4d83-407c-b178-8379f24aff29";
            redirect += "&resource=Microsoft.SharePoint";
            redirect += "&redirect_uri=" + encodeURI("https://localhost:44365/OAuth/AuthCode/" + $("#hdnUserID").val());
            redirect += "&response_type=code";
            window.open(redirect, "", "width=720, height=300, scrollbars=0, toolbar=0, menubar=0, resizable=0, status=0, titlebar=0");

            $("#refreshModal").modal("show");
        }
    });
</script>
}

 

Here is the AuthCode action on the OAuthController. This accepts the authorization code from the OAuth flow, gets a refresh token, and stores it in a database with the user reference.

OAuthController for managing the authorization code response from Azure AD 

public async Task<ActionResult> AuthCode(Guid id)
{
    if (Request["code"] == null)
        return RedirectToAction("Error", "Home", new { error = "Authorization code not passed from the authentication flow" });
    else if (id == Guid.Empty)
        return RedirectToAction("Error", "Home", new { error = "User reference code not passed from the authentication flow" });

    //get access token using the authorization code
    var token = await TokenHelper.GetAccessTokenWithCode(id, Request["code"], SettingsHelper.O365DiscoveryResourceId);

    //make call into discovery service
    HttpClient client = new HttpClient();
    client.DefaultRequestHeaders.Add("Authorization", "Bearer " + token.access_token);
    using (HttpResponseMessage response = await client.GetAsync("https://api.office.com/discovery/v1.0/me/services"))
    {
        if (response.IsSuccessStatusCode)
        {
            string json = await response.Content.ReadAsStringAsync();
            JObject root = JObject.Parse(json);
            var resources = ((JArray)root["value"]).ToObject<List<DiscoveryResource>>();
            var rootResource = resources.FirstOrDefault(i => i.capability == "RootSite");

            //redirect if we have an error
            if (rootResource == null)
                return RedirectToAction("Error", "Home", new { error = "RootSite is not a valid service capability for this user" });

            //get a RootSite-specific token using the refresh token
            var rootToken = await TokenHelper.GetAccessTokenWithRefreshToken(token.refresh_token, rootResource.serviceResourceId);

            //save the details in the token store database and redirect to sites
            using (O365TokenStoreEntities entities = new O365TokenStoreEntities())
            {
                UserToken uToken = new UserToken()
                {
                    UserId = id,
                    ServiceEndpointUri = rootResource.serviceEndpointUri,
                    ServiceResourceId = rootResource.serviceResourceId,
                    RefreshToken = rootToken.refresh_token
                };
                entities.UserTokens.Add(uToken);
                entities.SaveChanges();
            }
        }
    }

    return View();
}

 
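
Note that TokenHelper.GetAccessTokenWithCode and TokenHelper.GetAccessTokenWithRefreshToken above are custom helpers in this sample (not SDK methods). As a rough sketch of what the code-redemption helper might do, it simply POSTs the authorization code to the standard Azure AD token endpoint (the Token class here is an assumed POCO matching the JSON payload):

// hypothetical helper: redeem an authorization code at the Azure AD token endpoint
public static async Task<Token> GetAccessTokenWithCode(Guid userId, string code, string resource)
{
    using (HttpClient client = new HttpClient())
    {
        var content = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            { "grant_type", "authorization_code" },
            { "code", code },
            { "client_id", SettingsHelper.ClientId },
            { "client_secret", SettingsHelper.ClientSecret },
            { "redirect_uri", "https://localhost:44365/OAuth/AuthCode/" + userId },
            { "resource", resource }
        });
        var response = await client.PostAsync("https://login.windows.net/common/oauth2/token", content);
        string json = await response.Content.ReadAsStringAsync();
        // Token is a simple class with access_token/refresh_token properties
        return JsonConvert.DeserializeObject<Token>(json);
    }
}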

Here is an example of making a REST call into SharePoint using the cached refresh token (which we convert into an access token placed on the request header).

Performing REST call against SharePoint with cached refresh token

public async static Task<List<SPList>> GetLists(UserToken userToken, string siteUrl)
{
    List<SPList> lists = new List<SPList>();
    HttpClient client = new HttpClient();
    var fullToken = await TokenHelper.GetAccessTokenWithRefreshToken(userToken.RefreshToken, userToken.ServiceResourceId);
    client.DefaultRequestHeaders.Add("Authorization", "Bearer " + fullToken.access_token);
    client.DefaultRequestHeaders.Add("Accept", "application/json; odata=verbose");
    using (HttpResponseMessage response = await client.GetAsync(siteUrl + "/_api/web/lists"))
    {
        if (response.IsSuccessStatusCode)
        {
            JObject root = JObject.Parse(await response.Content.ReadAsStringAsync());
            var listResults = root.SelectToken("d.results").ToArray();
            foreach (var list in listResults)
            {
                lists.Add(new SPList()
                {
                    SiteUrl = siteUrl,
                    Id = new Guid(list.SelectToken("Id").ToString()),
                    Title = list.SelectToken("Title").ToString()
                });
            }
        }
    }
    return lists;
}

 

The Office 365 APIs have a similar complexity to "permissions on the fly", but they are the most solid solution for multi-tenant apps (thanks to the discovery service).

Final Thoughts

So there you have it…five patterns for connecting to SharePoint from an app for Office. I'm sure new options will show up as Office and its APIs continue to evolve.

Building an App for Office in 7 minutes


When I speak to developers about building Apps for Office, I’ll often build an end-to-end app from scratch to illustrate the ease of development. I can usually mock up most scenarios in 5-20 minutes. How is it so quick? Apps for Office leverage standard web technologies like HTML5, JavaScript, and CSS, making most web developers instant Office developers. I've had such success building apps on the fly that I thought I'd use this post to record that demonstration. The app I'll build is called “Rhymes for Word”, a simple task pane app that:

  • Subscribes to selection changes in the Word document
  • Reads text selections from a Word document when they change
  • Queries an online service for rhymes to display in the task pane
  • Allows the user to write a rhyme back into the word document

Although it is a silly scenario, it illustrates several key concepts of Office apps. I've listed the rhyme service endpoint below, so you are welcome to follow along and build your own app.

(Please visit the site to view this video)

If you want to build the Rhymes for Word app on your own, you are welcome to use my rhyme service and follow the video OR download the complete solution:

Rhyme Service:
https://rhymeservice.azurewebsites.net/Rhymes.svc/GetRhymes?text=WORD

Completed Solution:
http://1drv.ms/18ArOUo 

Building a Mail Compose App for Outlook in 9 minutes


Yesterday I authored a post and video illustrating how to build a Task Pane App for Office in 7 minutes. The purpose was to show how quickly apps can be developed for Office using the new app model (especially compared to traditional Office plug-in development). I was asked to do a similar video on Mail Compose Apps for Outlook. For my Mail Compose App, I decided to connect to Bing Translation Services in the Azure Data Marketplace. The app allows a user to translate text and add the translated text to an email message. I realize translation is already built into the Outlook client, but this would be useful in other Outlook apps and OWA. Also, the development experience (not the function) was the purpose of the post.

(Please visit the site to view this video)

If you want to build your own Translation compose app, you can follow along in the video or download the completed solution HERE.

Developing Native iOS Apps using the Office 365 SDK for iOS


Several months ago I authored a post on Developing iOS Apps with Azure and Office 365 APIs. In that post, I leveraged the Azure AD Authentication Library (ADAL) for iOS to authenticate and get resource-specific access tokens for Office 365 data retrieval. At the time of the post, performing REST queries and parsing results was the only mechanism available for communicating with Office 365 services. Since then, Microsoft has released an Office 365 SDK for iOS that makes it much easier to work with Office 365 services from a native iOS app. In this post, I’ll show you how to reference and use this new SDK in an Xcode project using the Swift programming language. I used Swift in my first post and will continue to use it here because of the lack of documentation that exists in the community.

(Please visit the site to view this video)

Ensure Client Created

Almost all of the Office 365 API samples in Visual Studio use an “Ensure Client Created” method to abstract away authentication context, access tokens, and discovery from normal CRUD operations (EnsureSharePointClientCreated for SharePoint services and EnsureOutlookClientCreated for Exchange services). You can find this pattern being used in MVC, Universal App, and Cordova samples on GitHub, and it does a nice job of cleaning up CRUD methods. As such, I decided to pattern my Swift code in a similar way. Below you can see my EnsureMSSharePointClientCreated function that establishes an authentication context based on the authority, calls into the Discovery Service to find the “MyFiles” resource, and initializes the MSSharePointClient object.

EnsureMSSharePointClientCreated
//
//  MyFilesController.swift
//  SDKTesterOffice365
//
//  Created by Richard diZerega on 11/19/14.
//  Copyright (c) 2014 Richard diZerega. All rights reserved.
//
import Foundation
typealias SPClientCreatedResponse = (MSSharePointClient?, NSString?) -> Void
typealias ServiceResponse = (Array<SPItem>?, NSError?) -> Void
typealias FileResponse = (NSData?, NSError?) -> Void
class MyFilesController { 
    init() {
    }
   
    var DISC_RESOURCE:NSString = "https://api.office.com/discovery/"
    var DISC_SERVICE_ENDPOINT:NSString = "https://api.office.com/discovery/v1.0/me/"
   
    func EnsureMSSharePointClientCreated(onCompletion:SPClientCreatedResponse) -> Void {    
        var er:ADAuthenticationError? = nil
    
        //setup the authentication context for the authority
        var authContext:ADAuthenticationContext = ADAuthenticationContext(authority: authority, error: &er)
       
        //get access token for calling discovery service
        authContext.acquireTokenWithResource(DISC_RESOURCE, clientId: clientID, redirectUri: redirectURI, completionBlock: {
            (discATResult: ADAuthenticationResult!) in
            
            //validate token exists in response
            if (discATResult.accessToken == nil) {
                onCompletion(nil, "Error getting Discovery Service Access Token")
            }
            else {
                //setup resolver for injection for discovery              
                var discResolver:MSDefaultDependencyResolver = MSDefaultDependencyResolver()
                var discCred:MSOAuthCredentials = MSOAuthCredentials()
                discCred.addToken(discATResult.accessToken)
                var discCredImpl:MSCredentialsImpl = MSCredentialsImpl()
                discCredImpl.setCredentials(discCred) 
                discResolver.setCredentialsFactory(discCredImpl)
               
                //create the discovery client instance 
                var client:MSDiscoveryClient = MSDiscoveryClient(url: self.DISC_SERVICE_ENDPOINT, dependencyResolver: discResolver)
               
                //get the services for the user
                var task:NSURLSessionTask = client.getservices().read({(discoveryItems: [AnyObject]!, error: NSError!) -> Void in
                    
                    //check for error and process items
                    if (error == nil) {
                        dispatch_async(dispatch_get_main_queue(), {
                            //cast the discoveryItems as an array of MSDiscoveryServiceInfo
                            var discList = (discoveryItems as Array<MSDiscoveryServiceInfo>)
                            
                            //loop through and find the MyFiles resource
                            var myFilesResource:MSDiscoveryServiceInfo?
                            for discItem in discList {
                                if (discItem.capability == "MyFiles") {
                                    myFilesResource = discItem
                                    break
                                }                      
                            }
      
                            //make sure we found the MyFiles resource
                            if (myFilesResource != nil) {
                                var resource:MSDiscoveryServiceInfo = myFilesResource!
                                
                                //get a MyFiles access token
                                authContext.acquireTokenWithResource(resource.serviceResourceId, clientId: clientID, redirectUri: redirectURI, completionBlock: {
                                    (shptATResult: ADAuthenticationResult!) in
                                   
                                    //validate a token exists in the response (directly or via the token cache)
                                    if (shptATResult.accessToken == nil &&
                                        (shptATResult.tokenCacheStoreItem == nil ||
                                         shptATResult.tokenCacheStoreItem.accessToken == nil)) {
                                        onCompletion(nil, "Error getting SharePoint Access Token")
                                    }
                                    else {
                                        //get the access token from the result (could be cached)
                                        var accessToken:NSString? = shptATResult.accessToken
                                        if (accessToken == nil) {
                                            accessToken = shptATResult.tokenCacheStoreItem.accessToken 
                                        }
                                       
                                        //setup resolver for injection
                                        var shptResolver:MSDefaultDependencyResolver = MSDefaultDependencyResolver()
                                        var spCred:MSOAuthCredentials = MSOAuthCredentials()
                                        spCred.addToken(accessToken)
                                        var spCredImpl:MSCredentialsImpl = MSCredentialsImpl()
                                        spCredImpl.setCredentials(spCred) 
                                        shptResolver.setCredentialsFactory(spCredImpl)
                                       
                                        //build SharePointClient 
                                        var client:MSSharePointClient = MSSharePointClient(url: resource.serviceEndpointUri, dependencyResolver: shptResolver)
                                       
                                        //return the SharePointClient in callback
                                        onCompletion(client, nil)
                                    }
                                })
                            }
                            else {
                                onCompletion(nil, "Unable to find MyFiles resource")
                            }
                        })
                    }
                    else {
                        onCompletion(nil, "Error calling Discovery Service")
                    }
                })
                
                task.resume()
            }
        })
    }
   
 

 

Performing CRUD Operations

All CRUD operations are wrapped in the EnsureMSSharePointClientCreated function, whose callback returns an initialized MSSharePointClient object that can be used to query SharePoint (Outlook is similar). If you have worked with other Office 365 SDKs, you should find the syntax for performing CRUD operations very similar. Below are functions for getting folder items and file contents.

Performing Basic CRUD Operations with Office 365 SDK for iOS
func GetFiles(id:NSString, onCompletion:ServiceResponse) -> Void {   
    EnsureMSSharePointClientCreated() { (client:MSSharePointClient?, error:NSString?) in
        //check for null client
        if (client != nil) {
            var spClient:MSSharePointClient = client!
           
            //determine if we load root or a subfolder
            if (id == "") {
                //get the files using SDK
                var task:NSURLSessionTask = spClient.getfiles().read({ (items: [AnyObject]!, error: NSError!) -> Void in
                    if (error == nil) {
                        dispatch_async(dispatch_get_main_queue(), {
                            var list = (items as Array<MSSharePointItem>)
                            var spItems:Array<SPItem> = self.ConvertToSPItemArray(list)
                            onCompletion(spItems, nil)
                        })
                    }
                    else {
                        println("Error: \(error)")
                    }
                })
                task.resume()
            }
            else {
                //get the files using SDK
                var task:NSURLSessionTask = spClient.getfiles().getById(id).asFolder().getchildren().read({ (items: Array<AnyObject>!, error: NSError!) -> Void in
                    if (error == nil) {
                        dispatch_async(dispatch_get_main_queue(), {
                            var list = (items as Array<MSSharePointItem>)
                            var spItems:Array<SPItem> = self.ConvertToSPItemArray(list)
                            onCompletion(spItems, nil)
                        })
                    }
                    else {
                        println("Error: \(error)")
                    }
                })
                task.resume()
            }
        }
        else {
            println("Error: \(error)")
        }
    }
}
   
func GetFiles(onCompletion:ServiceResponse) -> Void {
    GetFiles("", onCompletion: onCompletion)
}
    
func GetFileContent(id: NSString, onCompletion:FileResponse) {
    //ensure client created
    EnsureMSSharePointClientCreated() { (client:MSSharePointClient?, error:NSString?) in
        //check for null client
        if (client != nil) {
            var spClient:MSSharePointClient = client!

            //get the file content using SDK
            spClient.getfiles().getById(id).asFile().getContent({ (data: NSData!, er: NSError!) -> Void in
                onCompletion(data, nil)
            }).resume()
        }
        else {
            println("Error: \(error)")
        }
    }  
}

 

Conclusions

The Office 365 SDK for iOS certainly makes it easy to perform basic operations against Office 365 services. At the very least, I’m not parsing JSON/XML or trying to figure out the exact REST or header syntax. That said, I would encourage you to learn both patterns (pure REST and SDK) as both can be helpful in developing powerful applications that run on iOS. Feel free to download the Xcode project outlined in this post HERE.

Getting started with Bootstrap and AngularJS (for the SharePoint Developer)


Over the past few months I’ve traveled the world talking to developers about building applications with Office 365. One of my favorite topics is building apps with Bootstrap and AngularJS. I like this topic because it illustrates the “new” Microsoft and our commitment to open source technologies. Bootstrap and AngularJS are wildly popular frameworks, and we want to enable developers to use these and just about any other framework/library with Office 365. For some traditional SharePoint/Office developers, these technologies can be unfamiliar given they were difficult/impossible to leverage in the past.

In this video I’ll illustrate the basics of Bootstrap and AngularJS from the ground up. I’ll start with a completely blank page and build it into a completed app in easy-to-follow steps. Finally, I’ll import the HTML/JS/CSS assets into a SharePoint app in Visual Studio and connect everything with REST. This is a beginner’s guide to Bootstrap and AngularJS focused on traditional SharePoint developers.

(Please visit the site to view this video)

Request Digest and Bootstrap

Two-thirds of the video is a pure beginner’s guide to Bootstrap and AngularJS with no mention of SharePoint/Office. However, the end discusses specific considerations for leveraging these technologies in a SharePoint app: specifically, the challenge of combining Bootstrap with the Request Digest value a page needs for REST writes.

The SharePoint masterpage adds a hidden “__RequestDigest” input field to all SharePoint pages. The value of this input field must be included in the header of HTTP POSTs against the SharePoint REST endpoints (read: add/update/delete). However, if we are building an app that leverages Bootstrap, it is unlikely we will leverage the SharePoint masterpage for app pages. This means the “__RequestDigest” field will not exist on app pages. Instead, we can make an explicit REST call to get the Request Digest value when our app loads. I’ve created an ensureFormDigest function that I always add to my AngularJS service(s). I can wrap all my SharePoint operations in this function to ensure I have a Request Digest value available for POSTs.

Angular App/Service for SharePoint and ensureFormDigest
var app = angular.module('artistApp', ['ngRoute']).config(function ($routeProvider) {
    $routeProvider.when('/artists', {
        templateUrl: 'views/view-list.html',
        controller: 'listController'
    }).when('/artists/add', {
        templateUrl: 'views/view-detail.html',
        controller: 'addController'
    }).when('/artists/:index', {
        templateUrl: 'views/view-detail.html',
        controller: 'editController'
    }).otherwise({
        redirectTo: '/artists'
    });
});
app.factory('shptService', ['$rootScope', '$http',
  function ($rootScope, $http) {
      var shptService = {};
      //utility function to get parameter from query string
      shptService.getQueryStringParameter = function (urlParameterKey) {
          var params = document.URL.split('?')[1].split('&');
          var strParams = '';
          for (var i = 0; i < params.length; i = i + 1) {
              var singleParam = params[i].split('=');
              if (singleParam[0] == urlParameterKey)
                  return singleParam[1];
          }
      }
      shptService.appWebUrl = decodeURIComponent(shptService.getQueryStringParameter('SPAppWebUrl')).split('#')[0];
      shptService.hostWebUrl = decodeURIComponent(shptService.getQueryStringParameter('SPHostUrl')).split('#')[0];
      //form digest operations since we aren't using the SharePoint MasterPage
      var formDigest = null;
      shptService.ensureFormDigest = function (callback) {
          if (formDigest != null)
              callback(formDigest);
          else {
              $http.post(shptService.appWebUrl + '/_api/contextinfo?$select=FormDigestValue', {}, {
                  headers: {
                      'Accept': 'application/json; odata=verbose',
                      'Content-Type': 'application/json; odata=verbose'
                  }
              }).success(function (d) {
                  formDigest = d.d.GetContextWebInformation.FormDigestValue;
                  callback(formDigest);
              }).error(function (er) {
                  alert('Error getting form digest value');
              });
          }
      };

      //artist operations
      var artists = null;
      shptService.getArtists = function (callback) {
          //check if we already have artists
          if (artists != null)
              callback(artists);
          else {
              //ensure form digest
              shptService.ensureFormDigest(function (fDigest) {
                  //perform GET for all artists
                  $http({
                      method: 'GET',
                      url: shptService.appWebUrl + '/_api/web/Lists/getbytitle(\'Artists\')/Items?$select=Title,Genre,AverageRating',
                      headers: {
                          'Accept': 'application/json; odata=verbose'
                      }
                  }).success(function (d) {
                      artists = [];
                      $(d.d.results).each(function (i, e) {
                          artists.push({
                              id: e['Id'],
                              artist: e['Title'],
                              genre: e['Genre'],
                              rating: e['AverageRating']
                          });
                      });
                      callback(artists);
                  }).error(function (er) {
                      alert(er);
                  });
              });
          }
      };
      //add artist
      shptService.addArtist = function (artist, callback) {
          //ensure form digest
          shptService.ensureFormDigest(function (fDigest) {
              $http.post(
                  shptService.appWebUrl + '/_api/web/Lists/getbytitle(\'Artists\')/items',
                  { 'Title': artist.artist, 'Genre': artist.genre, 'AverageRating': artist.rating },
                  {
                  headers: {
                      'Accept': 'application/json; odata=verbose',
                      'X-RequestDigest': fDigest
                  }
                  }).success(function (d) {
                      artist.id = d.d.ID;
                      artists.push(artist);
                      callback();
                  }).error(function (er) {
                      alert(er);
                  });
          });
      };
      //update artist
      shptService.updateArtist = function (artist, callback) {
          //ensure form digest
          shptService.ensureFormDigest(function (fDigest) {
              $http.post(
                  shptService.appWebUrl + '/_api/web/Lists/getbytitle(\'Artists\')/items(' + artist.id + ')',
                  { 'Title': artist.artist, 'Genre': artist.genre, 'AverageRating': artist.rating },
                  {
                      headers: {
                          'Accept': 'application/json; odata=verbose',
                          'X-RequestDigest': fDigest,
                          'X-HTTP-Method': 'MERGE',
                          'IF-MATCH': '*'
                      }
                  }).success(function (d) {
                      callback();
                  }).error(function (er) {
                      alert(er);
                  });
          });
      };
      //genre operations
      var genres = null;
      shptService.getGenres = function (callback) {
          //check if we already have genres
          if (genres != null)
              callback(genres);
          else {
              //ensure form digest
              shptService.ensureFormDigest(function (fDigest) {
                  //perform GET for all genres
                  $http({
                      method: 'GET',
                      url: shptService.appWebUrl + '/_api/web/Lists/getbytitle(\'Genres\')/Items?$select=Title',
                      headers: {
                          'Accept': 'application/json; odata=verbose'
                      }
                  }).success(function (d) {
                      genres = [];
                      $(d.d.results).each(function (i, e) {
                          genres.push({
                              genre: e['Title']
                          });
                      });
                      callback(genres)
                  }).error(function (er) {
                      alert(er);
                  });
              });
          }
      };
      return shptService;
  }]);

 

I do this so frequently, that I’ve created a Visual Studio code snippet for getting app web/host web URLs and ensureFormDigest. You can download it at the bottom of this post.

Conclusion

Hopefully you can see how Bootstrap and AngularJS can greatly accelerate web development. In the past these were difficult to use in conjunction with SharePoint/Office development, but not with the app model. Hopefully this video helped you get a better understanding of Bootstrap, AngularJS, and how they can work with SharePoint/Office. You can download the completed solution and code snippet below.

ArtistCatalog Solution

Angular Service Code Snippet

Building Apps with the new Power BI APIs


Last month, Microsoft unveiled the new and improved Power BI, a cloud-based business analytics service for non-technical business users. The new Power BI is available for preview in the US. It has amazing new (HTML5) visuals, data sources, mobile applications, and developer APIs. This post will focus on the new Power BI APIs and how to use them to create and load data into Power BI datasets in the cloud. Microsoft is also working with strategic partners to add native data connectors to the Power BI service. If you have a great connector idea, you can submit it HERE. However, ANYONE can build applications that leverage the new APIs to send data into Power BI, so let’s get started!

(Please visit the site to view this video)

Yammer Analytics Revisited

I’ve done a ton of research and development on using Power BI with Yammer data. In fact, last year I built a custom cloud service that exported Yammer data and loaded it into workbooks (with pre-built models). The process was wildly popular, but required several manual steps that were prone to user error. As such, I decided to use the Yammer use case for my Power BI API sample. Regardless of whether you are interested in Yammer data, you will find generic functions for interacting with Power BI.

Why are Power BI APIs significant?

Regardless of how easy Microsoft makes data modeling, end-users (the audience for Power BI) don’t care about modeling and would rather just answer questions with the data. Power BI APIs can automate modeling/loading and give end-users immediate access to answers. Secondly, some data sources might be proprietary, highly normalized, or overly complex to model. Again, Power BI APIs can solve this through automation. Finally, some data sources might have unique constraints that make them hard to query using normal connectors. For example, Yammer has REST end-points to query data. However, these end-points have unique rate limits that cause exceptions with normal OData connectors. Throttling is just one example of a unique constraint that can be addressed by owning the data export/query process in a 3rd party application that uses the Power BI APIs.
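
To illustrate the kind of thing “owning the export process” enables, a throttling-aware client might wrap its REST calls in simple retry logic. This is a generic sketch (not the sample’s actual Yammer code), assuming the service signals throttling with HTTP 429 and an optional Retry-After header:

// generic throttling-aware GET that retries when the service returns HTTP 429
private static async Task<string> GetWithRetryAsync(HttpClient client, string url, int maxRetries = 3)
{
    for (int attempt = 0; ; attempt++)
    {
        HttpResponseMessage response = await client.GetAsync(url);
        if ((int)response.StatusCode != 429 || attempt >= maxRetries)
        {
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync();
        }

        // honor the server-suggested wait if provided, otherwise back off 10 seconds
        TimeSpan delay = (response.Headers.RetryAfter != null && response.Headers.RetryAfter.Delta.HasValue)
            ? response.Headers.RetryAfter.Delta.Value
            : TimeSpan.FromSeconds(10);
        await Task.Delay(delay);
    }
}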

Common Consent Vision

My exploration of the Power BI APIs really emphasized Microsoft’s commitments to Azure AD and "Common Consent" applications. Common Consent refers to the ability of an application leveraging Azure AD to authenticate ONCE and get access to multiple Microsoft services such as SharePoint Online, Exchange Online, CRM Online, and (now) Power BI. All a developer needs to do is request appropriate permissions and (silently) get service-specific access tokens to communicate with the different services. Azure AD will light up with more services in the future, but I’m really excited to see how far Microsoft has come in one year and the types of applications they are enabling.
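
To make that concrete, here is a minimal sketch using the same ADAL calls that appear in this post: a single AuthenticationContext hands out separate access tokens for the Discovery Service and Power BI (the SettingsHelper values and the userObjectId claim lookup are the same as in the getAccessToken sample below):

// one sign-in, multiple resource-specific tokens from the same AuthenticationContext
var authContext = new AuthenticationContext(SettingsHelper.AzureADAuthority);
var clientCredential = new ClientCredential(SettingsHelper.ClientId, SettingsHelper.ClientSecret);
var userIdentifier = new UserIdentifier(userObjectId, UserIdentifierType.UniqueId);

// token for the Office 365 Discovery Service
var discoveryToken = (await authContext.AcquireTokenSilentAsync(
    SettingsHelper.O365DiscoveryResourceId, clientCredential, userIdentifier)).AccessToken;

// token for Power BI (the resource ID discussed in the next section)
var powerBIToken = (await authContext.AcquireTokenSilentAsync(
    "https://analysis.windows.net/powerbi/api", clientCredential, userIdentifier)).AccessToken;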

Power BI API Permissions

Power BI APIs use Azure Active Directory and OAuth 2.0 to authenticate users and authorize 3rd party applications. An application leveraging the Power BI APIs must first be registered as an Azure AD Application with permissions to Power BI. Currently, Azure AD supports three delegated permissions to Power BI from 3rd party applications: "View content properties", "Create content", and "Add data to a user’s dataset". "Delegated Permissions" means that the API calls are made on behalf of an authenticated user…not an elevated account as would be the case with "Application Permissions" ("Application Permissions" could be added in the future). The permissions for an Azure AD App can be configured in the Azure Management Portal as seen below.

Access Tokens and API Calls

With an Azure AD App configured with Power BI permissions, the application can request resource-specific access tokens to Power BI (using the resource ID "https://analysis.windows.net/powerbi/api"). The method below shows an asynchronous call to get a Power BI access token in a web project.

getAccessToken for Power BI APIs
/// <summary>
/// Gets a resource-specific access token for Power BI ("https://analysis.windows.net/powerbi/api")
/// </summary>
/// <returns>Access Token string</returns>
private static async Task<string> getAccessToken()
{
    // fetch identifiers from the user claims
    var signInUserId = ClaimsPrincipal.Current.FindFirst(ClaimTypes.NameIdentifier).Value;
    var userObjectId = ClaimsPrincipal.Current.FindFirst(SettingsHelper.ClaimTypeObjectIdentifier).Value;
    // setup app info for AuthenticationContext
    var clientCredential = new ClientCredential(SettingsHelper.ClientId, SettingsHelper.ClientSecret);
    var userIdentifier = new UserIdentifier(userObjectId, UserIdentifierType.UniqueId);
    // create auth context (note: no token cache leveraged)
    AuthenticationContext authContext = new AuthenticationContext(SettingsHelper.AzureADAuthority);
    // get access token for Power BI
    return authContext.AcquireToken(SettingsHelper.PowerBIResourceId, clientCredential, new UserAssertion(userObjectId, UserIdentifierType.UniqueId.ToString())).AccessToken;
}

 

The Power BI APIs offer REST endpoints to interact with datasets in Power BI. In order to call the REST end-points, a Power BI access token must be placed as a Bearer token in the Authorization header of all API calls. This can be accomplished server-side or client-side. In fact, the Power BI team has an API Explorer to see how most API calls can be performed in just about any language. I decided to wrap my API calls behind a Web API Controller as seen below. Take note of the Bearer token set in the Authorization header of each HttpClient call.

Web API Controller
public class PowerBIController : ApiController
{
    [HttpGet]
    public async Task<List<PowerBIDataset>> GetDatasets()
    {
        return await PowerBIModel.GetDatasets();
    }
    [HttpGet]
    public async Task<PowerBIDataset> GetDataset(Guid id)
    {
        return await PowerBIModel.GetDataset(id);
    }
    [HttpPost]
    public async Task<Guid> CreateDataset(PowerBIDataset dataset)
    {
        return await PowerBIModel.CreateDataset(dataset);
    }
    [HttpDelete]
    public async Task<bool> DeleteDataset(Guid id)
    {
        //DELETE IS UNSUPPORTED
        return await PowerBIModel.DeleteDataset(id);
    }
    [HttpPost]
    public async Task<bool> ClearTable(PowerBITableRef tableRef)
    {
        return await PowerBIModel.ClearTable(tableRef.datasetId, tableRef.tableName);
    }
    [HttpPost]
    public async Task<bool> AddTableRows(PowerBITableRows rows)
    {
        return await PowerBIModel.AddTableRows(rows.datasetId, rows.tableName, rows.rows);
    }
}

 

Power BI Model Class
/// <summary>
/// Gets all datasets for the user
/// </summary>
/// <returns>List of PowerBIDataset</returns>
public static async Task<List<PowerBIDataset>> GetDatasets()
{
    List<PowerBIDataset> datasets = new List<PowerBIDataset>();
    var token = await getAccessToken();
    var baseAddress = new Uri("https://api.powerbi.com/beta/myorg/");
    using (var client = new HttpClient { BaseAddress = baseAddress })
    {
        client.DefaultRequestHeaders.Add("Authorization", "Bearer " + token);
        client.DefaultRequestHeaders.Add("Accept", "application/json; odata=verbose");
        using (var response = await client.GetAsync("datasets"))
        {
            string responseString = await response.Content.ReadAsStringAsync();
            JObject oResponse = JObject.Parse(responseString);
            datasets = oResponse.SelectToken("datasets").ToObject<List<PowerBIDataset>>();
        }
    }
    return datasets;
}
/// <summary>
/// Gets a specific dataset based on id
/// </summary>
/// <param name="id">Guid id of dataset</param>
/// <returns>PowerBIDataset</returns>
public static async Task<PowerBIDataset> GetDataset(Guid id)
{
    PowerBIDataset dataset = null;
    var token = await getAccessToken();
    var baseAddress = new Uri("https://api.powerbi.com/beta/myorg/");
    using (var client = new HttpClient { BaseAddress = baseAddress })
    {
        client.DefaultRequestHeaders.Add("Authorization", "Bearer " + token);
        client.DefaultRequestHeaders.Add("Accept", "application/json; odata=verbose");
        using (var response = await client.GetAsync(String.Format("datasets/{0}", id.ToString())))
        {
            string responseString = await response.Content.ReadAsStringAsync();
            JObject oResponse = JObject.Parse(responseString);
            dataset = oResponse.ToObject<PowerBIDataset>(); //deserialize the response into the dataset
        }
    }
    return dataset;
}
/// <summary>
///Creates a dataset, including tables/columns
/// </summary>
/// <param name="dataset">PowerBIDataset</param>
/// <returns>Guid id of the new dataset</returns>
public static async Task<Guid> CreateDataset(PowerBIDataset dataset)
{
    var token = await getAccessToken();
    var baseAddress = new Uri("https://api.powerbi.com/beta/myorg/");
    using (var client = new HttpClient { BaseAddress = baseAddress })
    {
        var content = new StringContent(JsonConvert.SerializeObject(dataset).Replace("\"id\":\"00000000-0000-0000-0000-000000000000\",", ""), System.Text.Encoding.Default, "application/json");
        client.DefaultRequestHeaders.Add("Authorization", "Bearer " + token);
        client.DefaultRequestHeaders.Add("Accept", "application/json");
        using (var response = await client.PostAsync("datasets", content))
        {
            string responseString = await response.Content.ReadAsStringAsync();
            JObject oResponse = JObject.Parse(responseString);
            dataset.id = new Guid(oResponse.SelectToken("id").ToString());
        }
    }
    return dataset.id;
}
/// <summary>
///!!!!!!!!!!!! THIS IS CURRENTLY UNSUPPORTED !!!!!!!!!!!!
///Deletes a dataset
/// </summary>
/// <param name="dataset">Guid id of the dataset</param>
/// <returns>bool indicating success</returns>
public static async Task<bool> DeleteDataset(Guid dataset)
{
    bool success = false;
    var token = await getAccessToken();
    var baseAddress = new Uri("https://api.powerbi.com/beta/myorg/");
    using (var client = new HttpClient { BaseAddress = baseAddress })
    {
        client.DefaultRequestHeaders.Add("Authorization", "Bearer " + token);
        client.DefaultRequestHeaders.Add("Accept", "application/json");
        using (var response = await client.DeleteAsync(String.Format("datasets/{0}", dataset.ToString())))
        {
            string responseString = await response.Content.ReadAsStringAsync();
            success = response.IsSuccessStatusCode;
        }
    }
    return success;
}
/// <summary>
///Clears all data out of a given table of a dataset
/// </summary>
/// <param name="dataset">Guid dataset id</param>
/// <param name="table">string table name</param>
/// <returns>bool indicating success</returns>
public static async Task<bool> ClearTable(Guid dataset, string table)
{
    bool success = false;
    var token = await getAccessToken();
    var baseAddress = new Uri("https://api.powerbi.com/beta/myorg/");
    using (var client = new HttpClient { BaseAddress = baseAddress })
    {
        client.DefaultRequestHeaders.Add("Authorization", "Bearer " + token);
        client.DefaultRequestHeaders.Add("Accept", "application/json");
        using (var response = await client.DeleteAsync(String.Format("datasets/{0}/tables/{1}/rows", dataset.ToString(), table)))
        {
            string responseString = await response.Content.ReadAsStringAsync();
            success = response.IsSuccessStatusCode;
        }
    }
    return success;
}
/// <summary>
///Adds rows to a given table and dataset in Power BI
/// </summary>
/// <param name="dataset">PowerBIDataset</param>
/// <param name="table">PowerBITable</param>
/// <param name="rows">List<Dictionary<string, object>></param>
/// <returns></returns>
public static async Task<bool> AddTableRows(Guid dataset, string table, List<Dictionary<string, object>> rows)
{
    bool success = false;
    var token = await getAccessToken();
    var baseAddress = new Uri("https://api.powerbi.com/beta/myorg/");
    using (var client = new HttpClient { BaseAddress = baseAddress })
    {
        //build the json post by looping through the rows and columns for each row
        string json = "{\"rows\": [";
        foreach (var row in rows)
        {
            //process each column on the row
            json += "{";
            foreach (var key in row.Keys)
            {
                json += "\"" + key + "\": \"" + row[key].ToString() + "\",";
            }
            json = json.Substring(0, json.Length - 1) + "},";
        }
        json = json.Substring(0, json.Length - 1) + "]}";
        var content = new StringContent(json, System.Text.Encoding.Default, "application/json");
        client.DefaultRequestHeaders.Add("Authorization", "Bearer " + token);
        client.DefaultRequestHeaders.Add("Accept", "application/json");
        using (var response = await client.PostAsync(String.Format("datasets/{0}/tables/{1}/rows", dataset.ToString(), table), content))
        {
            string responseString = await response.Content.ReadAsStringAsync();
            success = response.IsSuccessStatusCode;
        }
    }
    return success;
}

 

Here are a few examples of calling these Web API methods client-side.

Client-side Calls to Web API
// sets up the dataset for loading
function createDataset(name, callback) {
    var data = {
        name: name, tables: [{
            name: "Messages", columns: [
                { name: "Id", dataType: "string" },
                { name: "Thread", dataType: "string" },
                { name: "Created", dataType: "DateTime" },
                { name: "Client", dataType: "string" },
                { name: "User", dataType: "string" },
                { name: "UserPic", dataType: "string" },
                { name: "Attachments", dataType: "Int64" },
                { name: "Likes", dataType: "Int64" },
                { name: "Url", dataType: "string" }]
        }]};
    $.ajax({
        url: "/api/PowerBI/CreateDataset",
        type: "POST",
        data: JSON.stringify(data),
        contentType: "application/json",
        success: function (datasetId) {
            callback(datasetId);
        },
        error: function (er) {
            $("#alert").html("Error creating dataset...");
            $("#alert").show();
        }
    });
}
// clear rows from existing dataset
function clearDataset(datasetId, callback) {
    var data = { datasetId: datasetId, tableName: "Messages" };
    $.ajax({
        url: "/api/PowerBI/ClearTable",
        type: "POST",
        data: JSON.stringify(data),
        contentType: "application/json",
        success: function (data) {
            callback();
        },
        error: function (er) {
            $("#alert").html(("Error clearing rows in dataset {0}...").replace("{0}", $("#cboDataset option:selected").text()));
            $("#alert").show();
        }
    });
}
// adds rows to the dataset
function addRows(datasetId, rows, callback) {
    var data = { datasetId: datasetId, tableName: "Messages", rows: rows };
    $.ajax({
        url: "/api/PowerBI/AddTableRows",
        type: "POST",
        data: JSON.stringify(data),
        contentType: "application/json",
        success: function (data) {
            callback();
        },
        error: function (er) {
            $("#alert").html("Error adding rows to dataset");
            $("#alert").show();
        }
    });
}

 

My application can create new datasets in Power BI or update existing ones. For existing datasets, it can append to the existing rows or purge old rows before loading. Once processing is complete, the dataset can be explored immediately in Power BI.
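 

For reference, here is a minimal sketch of that load flow using the model methods above (illustrative only...it assumes an existing dataset id and the "Messages" table defined earlier):

Load Flow (sketch)
//a minimal sketch: optionally purge, then load the Messages table of an existing dataset
public static async Task LoadMessages(Guid datasetId, List<Dictionary<string, object>> rows)
{
    //purge old rows before loading fresh data (skip this call to append instead)
    await PowerBIModel.ClearTable(datasetId, "Messages");
    //push the new rows into the dataset's Messages table
    await PowerBIModel.AddTableRows(datasetId, "Messages", rows);
}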

Conclusion

The new Power BI is a game-changer for business analytics. The Power BI APIs offer amazing opportunities for ISVs/Developers. They can enable completely new data-driven scenarios and help take the modeling burden off the end-user. You can download the completed solution outlined in this post below (please note you will need to generate your own application IDs for Azure AD and Yammer).

Solution Download

Using SignalR to communicate between an App for Office and Popups


Apps for Office are a powerful and flexible way to extend Office across all the new Office form factors (browser, PC, phone). Apps for Office come in many sizes/shapes (Mail Apps, Compose Apps, Task Pane Apps, Content Apps). Although users can resize apps for Office, they typically launch in a default dimension that developers should design around. Often this doesn't provide enough screen real estate to display everything the app needs (ex: task pane apps default to 320px wide). A good example is an OAuth flow against a 3rd party, where the app has no control over the design. For these scenarios, it might be appropriate to leverage popups (in general, popups should be avoided). The challenge is that apps run in isolation, which prevents popups from communicating back into them. This post outlines the use of SignalR to solve this communication challenge.

(Please visit the site to view this video)

What is SignalR

SignalR is an ASP.NET technology that enables near real-time communication between servers and web browsers. It uses "Hubs" on the server that can broadcast data over WebSockets to all or specific web browsers connected to the hub. WebSockets enable SignalR to push data to web browsers (as opposed to web browsers polling for data). Here is the SignalR Hub for the sample app...it simply accepts a message and sends it to the specified client ID.

SignalR Hub for sending messages
public class PopupCommunicationHub : Hub
{
    public void Initialize()
    {
    }
    public void SendMessage(string clientID, string message)
    {
        //send the message to the specific client passed in
        Clients.Client(clientID).sendMessage(message);
    }
}

 

How it Works

When a web browser establishes a connection to a SignalR hub, it is assigned a unique client ID (a GUID). The Office app and its popup(s) will each have their own unique client ID. SignalR can push messages through the hub to all or specific client IDs. We can enable app-to-popup communication by making each aware of the other's client ID. First, we'll pass the client ID of the parent to the popup via a URL parameter.

Passing the SignalR client ID of the app to the popup
// Start the connection.
$.connection.hub.start().done(function () {
    hub.server.initialize();
    //get the parentId off the hub
    parentId = $.connection.hub.id;
    //wire the event to launch popup (passing the parentId)
    $("#btnLaunchPopup").click(function () {
        //pass the parentId to the popup via url parameter
        window.open("/Home/PopupWindow?parentId=" + parentId, "", "width=850, height=600, scrollbars=0, toolbar=0, menubar=0, resizable=0, status=0, titlebar=0");
        $("#authorizeModal").modal("show");
        $("#btnLaunchPopup").html("Waiting on popup handshake");
        $("#btnLaunchPopup").attr("disabled", "disabled");
    });
    //wire the send message
    $("#btnSend").click(function () {
        if (popupId != null) {
            //send the message over the hub
            hub.server.sendMessage(popupId, $("#txtMessage").val());
            $("#txtMessage").val("");
        }
    });
});

 

The popup can read the app's client ID off the URL and then send its own client ID as the first message to the parent (once the hub connection is set up).

Logic for the popup to read the app's client ID and send the app its own client ID
var parentId = null, popupId = null;
$(document).ready(function () {
    //utility function to get parameter from query string
    var getQueryStringParameter = function (urlParameterKey) {
        var params = document.URL.split('?')[1].split('&');
        var strParams = '';
        for (var i = 0; i < params.length; i = i + 1) {
            var singleParam = params[i].split('=');
            if (singleParam[0] == urlParameterKey)
                return singleParam[1];
        }
    }
    //get the parentId off the url parameters
    parentId = decodeURIComponent(getQueryStringParameter('parentId')).split('#')[0];
    //setup signalR hub
    var hub = $.connection.popupCommunicationHub;
    // Create a function that the hub can call to broadcast messages
    hub.client.sendMessage = function (message) {
        $("#theList").append($("<li class='list-group-item'>" + message + "</li>"));
    };
    // Start the connection.
    $.connection.hub.start().done(function () {
        hub.server.initialize();
        //get the popupId off the hub and send to the parent
        popupId = $.connection.hub.id;
        hub.server.sendMessage(parentId, popupId);
        //initialize the textbox
        $("#txtMessage").removeAttr("disabled");
        //wire the send message
        $("#btnSend").click(function () {
            //send the message over the hub
            hub.server.sendMessage(parentId, $("#txtMessage").val());
            $("#txtMessage").val("");
        });
    });
});

 

The app will expect the popup's client ID as the first message it receives from the hub. At this point, the app and the popup know each other's client IDs. These client IDs are like each browser window's phone number for communication. Messages can be sent to specific client IDs and get pushed through the hub in near real time.

Logic in app to treat first message as the client ID of the popup
// Create a function that the hub can call to broadcast messages
hub.client.sendMessage = function (message) {
    //first message should be the popupId
    if (popupId == null) {
        popupId = message;
        $("#init").hide();
        $("#send").show();
    }
    else {
        $("#theList").append($("<li class='list-group-item'>" + message + "</li>"));
    }
};

 

Conclusion

Although popups should be avoided in apps for Office, they are sometimes unavoidable. In those scenarios, SignalR gives apps a better user experience. You can download the completed solution outlined in the video and post below.

Download the Solution: http://1drv.ms/1EOhtyl


Next Generation Office 365 Development with APIs and Add-ins


This week at //build, Microsoft made a number of exciting announcements regarding Office 365 development. If you haven’t had a chance, I highly encourage you to watch the foundational keynote that Jeremy Thake and Rob Lefferts delivered on the opening day…it was epic. In the months leading up to //build, I had the pleasure of working with Do.com on a solution to showcase many of the new Office 365 extensibility investments. I thought I’d give my perspective on working with the new extensibility options in Office 365 and how we applied them to Do.com (a solution already richly integrated with Office 365). I’ll break my thoughts down into “Next Generation Add-ins” and “New and Unified APIs”. But first, here is the prototype of the Do.com Outlook add-in that uses a number of the new announcements.

 

(Please visit the site to view this video)

 

NOTE: You should really check out Do.com if you haven't already. I really hate senseless and unorganized meetings, which Do.com has helped me reduce. The video above is a prototype, but they already have a great site and mobile apps that integrate nicely with Office 365, plus an aggressive vision for more integration.

 

Next Generation Add-ins

This week we demonstrated Office apps running within the iPad Office clients. This was an exciting announcement that confirms Microsoft’s commitment to Office extensibility and “write-once run anywhere”. However, it also set off some concerns that an “app within app” could be confusing (if the term “app” wasn’t confusing enough already). Moving forward, these extensions to the Office experience will be called “add-ins” (a term we hope more people can relate to).

Office Add-in in Office for iPad

We also announced a new type of add-in called an "add-in command". Add-in commands are custom commands pinned to the Office user experience to perform custom operations. An add-in command might launch a full add-in for the user to interact with or just perform some background process (similar to Outlook's "mark as read"). The first generation of add-in commands is concentrated on the ribbon (an area developers have long wanted to target and on par with VSTO solutions). At //build we showcased a Do.com task pane add-in launched from the Outlook ribbon (task pane read add-ins are also new). For Do.com, an add-in command provided more visibility for their add-in and brand within Outlook (especially compared to previous mail add-ins). Check out the presence of the Do.com logo in the Outlook ribbon.

Do.com Outlook add-in via add-in command

Speaking of Outlook add-ins, we also announced that the same Outlook add-ins built for Office 365 and Exchange Server will be able to target the 400 million users of Outlook.com. This is just one example of unification efforts across consumer and commercial Microsoft services. If you build for Office, you could have a HUGE customer audience!

New and Unified APIs

APIs have been a significant investment area for Office 365 extensibility. About a year ago, Microsoft announced the preview of the Office 365 APIs. Since that time, the APIs have graduated to general availability and added several new services. At //build we announced perhaps our most strategic move with these APIs…unifying them under a single end-point (https://graph.microsoft.com).

Why is the Office 365 Unified API end-point so significant? Most of the services in Office 365 offer APIs, but they have traditionally been resolved under service-specific or even tenant-specific end-points. For tenant-specific end-points like SharePoint/Files, the Discovery Service had to be leveraged just to determine where to make API calls. Although 3rd party apps could provide a single sign-on experience across all the Office 365 services, resource-specific access tokens had to be requested behind the scenes. Both the Discovery Service and token management made first-gen Office 365 apps chatty. The Office 365 Unified API end-point solves both these challenges by eliminating the need for the Discovery Service and providing a single access token that can be used against any service that falls under the unified end-point.

Consider Do.com, which needed access to Azure AD (for first/last name), Exchange Online (for high-res profile picture), and OneNote in Office 365 (for exporting agendas). The table below compares the flow with and without the Office 365 Unified API end-point:

With the O365 Unified API End-Point:

1. Get an access token for the resource https://graph.microsoft.com (O365 Unified API End-Point)
2. Use the unified end-point to get user properties (first/last name and manager): https://graph.microsoft.com/beta/me
3. Use the unified end-point to get the user's high-res profile picture from Exchange Online: https://graph.microsoft.com/beta/me/userPhoto/$value
4. Use the unified end-point to get the user's notebooks in Office 365*: https://graph.microsoft.com/beta/me/notes/notebooks

Without the O365 Unified API End-Point:

1. Get an access token for the resource https://api.office.com/discovery/ (Discovery Service)
2. Call the Discovery Service to get the user's capabilities: https://api.office.com/discovery/v1.0/me/
3. Get an access token for the resource https://graph.windows.net (Azure AD Graph)
4. Call the Azure AD Graph to get user properties (first/last name and manager): https://graph.windows.net/me
5. Get an access token for the resource https://outlook.office365.com (Exchange Online)
6. Call Exchange Online to get the user's high-resolution profile picture: https://outlook.office365.com/api/beta/me/userphoto/$value
7. Get an access token for the resource https://onenote.com/ (OneNote in Office 365)
8. Call the OneNote API to get the user's notebooks in Office 365: https://www.onenote.com/api/beta/me/notes/notebooks

* OneNote APIs for Office 365 were announced this week, but the unified end-point won't be live until a future date

Hopefully this illustrates the significance of unification. Oh and by the way…the Office 365 Unified API end-point also supports CORS from day one (#HighFiveRedmond).
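 

To make the unified flow above concrete, here is a minimal sketch that reuses one access token across all three calls (the token is assumed to have been acquired via ADAL for https://graph.microsoft.com, and the beta URLs are the ones from the list above):

One Token, Multiple Services (sketch)
//a minimal sketch: a single access token for https://graph.microsoft.com serves every call below
public static async Task CallUnifiedApi(string accessToken)
{
    HttpClient client = new HttpClient();
    client.DefaultRequestHeaders.Add("Authorization", "Bearer " + accessToken);
    client.DefaultRequestHeaders.Add("Accept", "application/json");
    //user properties (first/last name and manager)
    string profile = await client.GetStringAsync("https://graph.microsoft.com/beta/me");
    //high-res profile picture from Exchange Online...same token, same end-point
    byte[] photo = await client.GetByteArrayAsync("https://graph.microsoft.com/beta/me/userPhoto/$value");
    //OneNote notebooks in Office 365 (once the unified path is live)...again, same token
    string notebooks = await client.GetStringAsync("https://graph.microsoft.com/beta/me/notes/notebooks");
}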

The Office 365 Unified API end-point wasn't the only exciting API announcement made at //build. We also announced a number of completely new services, new permissions to existing services, and SDKs.

The Do.com site already had options to export meeting agendas to Evernote, so this offered an opportunity to integrate the new OneNote APIs with Office 365. "New" might be a little deceiving, as the APIs are identical to the existing OneNote consumer APIs (just with different authentication/access token providers). In fact, big efforts were announced to provide common APIs across consumer/commercial services with OneDrive/OneDrive for Business, OneNote, and Outlook/Outlook.com.

OneNote integration with Office 365

Do.com is all about facilitating a more productive meeting, and in the Land of Gates, most meetings involve Skype for Business (formerly Lync) as a meeting bridge. The new Skype Web SDK posed a great opportunity to Skype-enable the Do.com add-in with audio and video (the SDK supports other modalities as well).

Skype Web SDK integration for Audio/Video

Finally, the Do.com add-in leveraged the new Exchange Online endpoints to display a user’s high-resolution profile picture. This is a nice touch that almost any application connecting into Office 365 can benefit from.

Do.com leveraging new Exchange Online APIs for Profile Picture

Conclusion

I hope you can see how this was a significant week of announcements for Office 365 developers. Be sure to check out some of the great sessions delivered at //build on Channel 9 and let us know what you think of the new Office 365 extensibility announcements! Below are some helpful links related to the announcements at //build, but you can always find the latest info at http://dev.office.com

Office 365 Unified API Endpoint
http://channel9.msdn.com/Events/Build/2015/3-641

OneNote APIs for Office 365
http://channel9.msdn.com/Events/Build/2015/2-715

Office 365 Groups REST API
http://channel9.msdn.com/Events/Build/2015/3-701

Developing for Outlook.com AND Office 365
http://channel9.msdn.com/Events/Build/2015/3-742

Developing for OneDrive AND OneDrive for Business
http://channel9.msdn.com/Events/Build/2015/3-734

Skype Developer Platform
http://channel9.msdn.com/Events/Build/2015/3-643

Building Solutions with Office Graph
http://channel9.msdn.com/Events/Build/2015/3-676

Next-gen Outlook Add-ins
http://channel9.msdn.com/Events/Build/2015/3-694

Busy week...time to relax in my #DoSocks

Performing app-only operations on SharePoint Online through Azure AD


As all the shock and awe announcements were made this week at //build, Microsoft quietly turned on the ability to make app-only calls into SharePoint Online using Azure AD. This enables a whole new variety of scenarios that SharePoint Online and ACS alone couldn't deliver (such as leveraging multiple services secured by Azure AD). It also provides a more secure way of performing background operations against Office 365 services (more on that later). In this post, I will provide a step-by-step outline for creating a background process that talks to SharePoint Online and can run as an Azure WebJob.

Azure AD App-only vs. ACS App-only

Before jumping into the technical gymnastics of implementation, you might wonder why not just use SharePoint and appregnew.aspx/appinv.aspx to register an app that has app-only permissions? After all, this is a popular approach that has been well documented by numerous people, including myself. Well, consider the scenario where you want a background or anonymous service to leverage more than just SharePoint. Applications defined through Azure Active Directory can leverage the full breadth of the common consent framework. That is, they can connect to any service that is defined in Azure AD and offers application permissions. Secondly, these applications are (in my opinion) a little more secure, since their trust is established through a certificate instead of the application secret that ACS uses.

Getting Started

Applications defined in Azure AD are allowed to make app-only calls by sharing a certificate with Azure AD. Azure AD will get the public key certificate and the app will get the private key certificate. Although a trusted certificate should be used for production deployments, makecert/self-signed certificates are fine for testing/debugging (similar to local web debugging with https). Here are the steps to generate a self-signed certificate with makecert.exe and export it for use with Azure AD.

Part 1: Generate a Self-signed Certificate


1. Open Visual Studio Tools Command Prompt

2. Run makecert.exe with the following syntax:

makecert -r -pe -n "CN=MyCompanyName MyAppName Cert" -b 12/15/2014 -e 12/15/2016 -ss my -len 2048

Example:

makecert -r -pe -n "CN=Richdizz O365AppOnly Cert" -b 05/03/2015 -e 05/03/2017 -ss my -len 2048

 

3. Run mmc.exe and add snap-in for Certificates >> My user account

4. Locate the certificate from step 2 in the Personal certificate store

 

5. Right-click and select All tasks >> Export

6. Complete the Certificate Export Wizard twice…once with the private key (specify a password and save as .pfx) and once without the private key (save as .cer)

Part 2: Prepare the certificate public key for Azure AD


1. Open Windows PowerShell and run the following commands:

$certPath = Read-Host "Enter certificate path (.cer)"
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2
$cert.Import($certPath)
$rawCert = $cert.GetRawCertData()
$base64Cert = [System.Convert]::ToBase64String($rawCert)
$rawCertHash = $cert.GetCertHash()
$base64CertHash = [System.Convert]::ToBase64String($rawCertHash)
$KeyId = [System.Guid]::NewGuid().ToString()
Write-Host $base64Cert
Write-Host $base64CertHash
Write-Host $KeyId

2. Copy the values output for $base64Cert, $base64CertHash, and $KeyId for Part 3

Part 3: Create the Azure AD App


1. Log into the Azure Management Portal and go to the Azure Active Directory for your Office 365 tenant

2. Go to the Applications tab and click the Add button in the footer to manually add an Application

3. Select “Add an application my organization is developing”

4. Give the application a name, keep the default selection of “Web Application and/or Web API” and click the next arrow

5. Enter a Sign-on URL and App ID Uri (the values don't really matter as long as they are unique) and click next to create the application

6. Click on the “Configure” tab and scroll to the bottom of the page to the section titled “Permissions to other applications”

7. Select the desired “Application Permissions” such as permissions to SharePoint Online and/or Exchange Online and click the Save button in the footer

Part 4: Configure certificate public key for App


1. Click the Manage Manifest button in the footer and select “Download Manifest” to save the app manifest locally

2. Open the downloaded manifest file and locate the empty keyCredentials attribute

3. Update the keyCredentials attribute with the following settings:

keyCredentials Configuration

  "keyCredentials": [
    {
      "customKeyIdentifier": "<$base64CertHash FROM ABOVE>",
      "keyId": "<$KeyId FROM ABOVE>",
      "type": "AsymmetricX509Cert",
      "usage": "Verify",
      "value":  "<$base64Cert FROM ABOVE>"
     }
  ],

Example:

  "keyCredentials": [
    {
      "customKeyIdentifier": "r12cfITjq64d4FakvA3g3teZRQs=",
      "keyId": "e0c93388-695e-426b-8202-4249f8664301",
      "type": "AsymmetricX509Cert",
      "usage": "Verify",
      "value":  "MIIDIzCCAg+gAwI…shortened…hXvgAo0ElrOgrkh"
     }
  ],

 

4. Save the updated manifest and upload it back into Windows Azure using the same Manage Manifest button in the footer (select “Upload Manifest” this time)

5. Everything should now be set up in Azure AD for the app to run in the background and get app-only access tokens from Azure AD.

Building the background process

I used the Visual Studio console application template to build my background service. It is just the normal template with Nuget packages for the Azure Active Directory Authentication Libraries (ADAL) and JSON.NET. In fact, most of the code is exactly like a normal .NET project that leverages ADAL and makes REST calls into Office 365. The only difference is that the certificate private key is passed into the authenticationContext.AcquireTokenAsync method using the ClientAssertionCertificate class.

A few important notes:

  • Although an app-only AAD app can be multi-tenant, it cannot use the /common authority. You must determine the tenant id to tack onto the authority (ex: request id_token response on authorize end-point)
  • My method for storing the certificate and private key is atrocious and only done this way for brevity. Azure Key Vault is a really good solution for securing these sensitive items and is outlined HERE
Console Applications with App-only AAD Tokens

using Microsoft.IdentityModel.Clients.ActiveDirectory;
using Newtonsoft.Json;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Security.Cryptography.X509Certificates;
using System.Text;
using System.Threading.Tasks;

namespace MyO365BackgroundProcess
{
    class Program
    {
        private static string CLIENT_ID = "4b7fb8dd-0b22-45a2-8248-3cc87a3560a7";
        private static string PRIVATE_KEY_PASSWORD = "P@ssword"; //THIS IS BAD...USE AZURE KEY VAULT
        static void Main(string[] args)
        {
            doStuffInOffice365().Wait();
        }

        private async static Task doStuffInOffice365()
        {
            //set the authentication context
            //you can do multi-tenant app-only, but you cannot use /common for authority...must get tenant ID
            string authority = "https://login.windows.net/rzna.onmicrosoft.com/";
            AuthenticationContext authenticationContext = new AuthenticationContext(authority, false);

            //read the certificate private key from the executing location
            //NOTE: This is a hack...Azure Key Vault is best approach
            var certPath = System.Reflection.Assembly.GetExecutingAssembly().Location;
            certPath = certPath.Substring(0, certPath.LastIndexOf('\\')) + "\\O365AppOnly_private.pfx";
            var certfile = System.IO.File.OpenRead(certPath);
            var certificateBytes = new byte[certfile.Length];
            certfile.Read(certificateBytes, 0, (int)certfile.Length);
            var cert = new X509Certificate2(
                certificateBytes,
                PRIVATE_KEY_PASSWORD,
                X509KeyStorageFlags.Exportable |
                X509KeyStorageFlags.MachineKeySet |
                X509KeyStorageFlags.PersistKeySet); //these flags are important for running in a WebJob
            ClientAssertionCertificate cac = new ClientAssertionCertificate(CLIENT_ID, cert);

            //get the access token to SharePoint using the ClientAssertionCertificate
            Console.WriteLine("Getting app-only access token to SharePoint Online");
            var authenticationResult = await authenticationContext.AcquireTokenAsync("https://rzna.sharepoint.com/", cac);
            var token = authenticationResult.AccessToken;
            Console.WriteLine("App-only access token retreived");

            //perform a post using the app-only access token to add SharePoint list item in Attendee list
            HttpClient client = new HttpClient();
            client.DefaultRequestHeaders.Add("Authorization", "Bearer " + token);
            client.DefaultRequestHeaders.Add("Accept", "application/json;odata=verbose");

            //create the item payload for saving into SharePoint
            var itemPayload = new
            {
                __metadata = new { type = "SP.Data.SampleListItem" },
                Title = String.Format("Created at {0} {1} from app-only AAD token", DateTime.Now.ToShortDateString(), DateTime.Now.ToShortTimeString())
            };

            //setup the client post
            HttpContent content = new StringContent(JsonConvert.SerializeObject(itemPayload));
            content.Headers.ContentType = MediaTypeHeaderValue.Parse("application/json;odata=verbose");
            Console.WriteLine("Posting ListItem to SharePoint Online");
            using (HttpResponseMessage response = await client.PostAsync("https://rzna.sharepoint.com/_api/web/Lists/getbytitle('Sample')/items", content))
            {
                if (!response.IsSuccessStatusCode)
                    Console.WriteLine("ERROR: SharePoint ListItem Creation Failed!");
                else
                    Console.WriteLine("SharePoint ListItem Created!");
            }
        }
    }
}

 

Conclusion

There you have it…performing background processing against SharePoint Online (and other Office 365 services) using app-only tokens from Azure AD. You can download the solution from the following GitHub repo: https://github.com/richdizz/MyO365BackgroundProcess 

Connecting to Office 365 APIs from a Windows 10 UWP


Unless you have been living under a rock, you probably heard that Microsoft released Windows 10 last week. For app developers, Windows 10 and the new Universal Windows Platform (UWP) realizes a vision of write-once run on any Windows device (desktop, tablet, mobile). In this post, I’ll illustrate how to build a Windows 10 UWP connected to Office 365 using the new WebAccountProvider approach.

(Please visit the site to view this video)

Truly Universal

Microsoft first introduced the concept of a Universal Windows App at the Build Conference in 2014. This first generation universal app contained separate projects for desktop and mobile with a “shared” project for common code. The goal was to put as much code as possible in the shared project, which often required some technical gymnastics to accomplish. The Windows 10 UWP collapses this 3-project solution into a single unified project.

Old Universal App Structure vs. New Universal App Structure

 

Connecting to Office 365

Connecting to Office 365 from a Windows 10 UWP uses an updated Connected Service Wizard within Visual Studio 2015. This wizard registers the native application in Azure AD, copies details from Azure AD into the application (ex: Client ID, authority, etc), and pulls down important Nuget packages such as the Office 365 SDK.

Once the Office 365 Service has been added to the UWP, you can start coding against the Office 365 APIs (either via REST or the Office 365 SDK). However, all the Office 365 APIs require access tokens from Azure AD, which requires the app to perform an OAuth flow. In the past, native Windows apps used a WebAuthenticationBroker to manage this flow. The WebAuthenticationBroker was a browser control on OAuth steroids; the Azure AD Authentication Libraries (ADAL) automatically leveraged it when you requested a token. The WebAuthenticationBroker worked great, but didn't always look great within an app, given it was loading a framed login screen. The WebAuthenticationBroker still exists in Windows 10, but the WebAccountProvider is a new mechanism for UWPs and provides a first-class experience.

The WebAccountProvider is optimized for multi-provider scenarios. Imagine building a UWP that leverages file storage across a number of providers (ex: OneDrive, OneDrive for Business, DropBox, Box, etc). Or maybe files from one place but calendar from another. The WebAccountProvider handles these scenarios and token management in a more generic and consistent way than the WebAuthenticationBroker. The WebAccountProvider will be the default authentication experience for Office 365 in a Windows 10 UWP. In fact, if you look at the application that the Connected Service Wizard registers in Azure AD, you will notice a new reply URI format that is specific to supporting the WebAccountProvider.

Working with the WebAccountProvider is very similar to traditional ADAL. We will use it to get access tokens by resource. When we do this, we will first try to get the token silently (the WebAccountProvider could have a token cached) and then revert to prompting the user if the silent request fails. Here is a completed block of code that does all of this:

Using WebAccountProvider to get Azure AD Access Tokens
private static async Task<string> GetAccessTokenForResource(string resource)
{
    string token = null;
    //first try to get the token silently
    WebAccountProvider aadAccountProvider = await WebAuthenticationCoreManager.FindAccountProviderAsync("https://login.windows.net");
    WebTokenRequest webTokenRequest = new WebTokenRequest(aadAccountProvider, String.Empty, App.Current.Resources["ida:ClientID"].ToString(), WebTokenRequestPromptType.Default);
    webTokenRequest.Properties.Add("authority", "https://login.windows.net");
    webTokenRequest.Properties.Add("resource", resource);
    WebTokenRequestResult webTokenRequestResult = await WebAuthenticationCoreManager.GetTokenSilentlyAsync(webTokenRequest);
    if (webTokenRequestResult.ResponseStatus == WebTokenRequestStatus.Success)
    {
        WebTokenResponse webTokenResponse = webTokenRequestResult.ResponseData[0];
        token = webTokenResponse.Token;
    }
    else if (webTokenRequestResult.ResponseStatus == WebTokenRequestStatus.UserInteractionRequired)
    {
        //get token through prompt
        webTokenRequest = new WebTokenRequest(aadAccountProvider, String.Empty, App.Current.Resources["ida:ClientID"].ToString(), WebTokenRequestPromptType.ForceAuthentication);
        webTokenRequest.Properties.Add("authority", "https://login.windows.net");
        webTokenRequest.Properties.Add("resource", resource);
        webTokenRequestResult = await WebAuthenticationCoreManager.RequestTokenAsync(webTokenRequest);
        if (webTokenRequestResult.ResponseStatus == WebTokenRequestStatus.Success)
        {
            WebTokenResponse webTokenResponse = webTokenRequestResult.ResponseData[0];
            token = webTokenResponse.Token;
        }
    }
    return token;
}

 

The WebAccountProvider also looks much different from the WebAuthenticationBroker. This should provide a more consistent sign-in experience across different providers:

WebAuthenticationBroker vs. WebAccountProvider

Once you have tokens, you can easily use them in REST calls to the Office 365 APIs, or use the GetAccessTokenForResource call in the constructor of Office 365 SDK clients (SharePointClient, OutlookServicesClient, etc).

Using Office 365 SDKs
private static async Task<OutlookServicesClient> EnsureClient()
{
    return new OutlookServicesClient(new Uri("https://outlook.office365.com/ews/odata"), async () => {
        return await GetAccessTokenForResource("https://outlook.office365.com/");
    });
}
public static async Task<List<IContact>> GetContacts()
{
    var client = await EnsureClient();
    var contacts = await client.Me.Contacts.ExecuteAsync();
    return contacts.CurrentPage.ToList();
}

 

Using REST
public static async Task<byte[]> GetImage(string email)
{
    HttpClient client = new HttpClient();
    var token = await GetAccessTokenForResource("https://outlook.office365.com/");
    client.DefaultRequestHeaders.Add("Authorization", "Bearer " + token);
    client.DefaultRequestHeaders.Add("Accept", "application/json");
    using (HttpResponseMessage response = await client.GetAsync(new Uri(String.Format("https://outlook.office365.com/api/beta/Users('{0}')/userphotos('64x64')/$value", email))))
    {
        if (response.IsSuccessStatusCode)
        {
            var stream = await response.Content.ReadAsStreamAsync();
            var bytes = new byte[stream.Length];
            stream.Read(bytes, 0, (int)stream.Length);
            return bytes;
        }
        else
            return null;
    }
}

 

Conclusion

The unification achieved with the new Universal Windows Platform (UWP) is exactly what Windows developers have been waiting for. Office 365 is poised to be a dominant force with Windows 10. Together, they enable amazing scenarios that developers have the power to deliver. I have published two complete Windows 10 UWP samples on GitHub that you can fork/clone today:

Contacts API Win10 UWP
https://github.com/OfficeDev/Contacts-API-Win10-UWP

MyFiles API Win10 UWP
https://github.com/OfficeDev/MyFiles-API-Win10_UWP

Connecting to Office 365 from an Office Add-in


Earlier in the year, I authored a post on Connecting to SharePoint from an Office add-in. In that post, I illustrated 5 approaches that were largely specific to SharePoint. However, the last pattern connected to SharePoint using the Office 365 APIs. SharePoint is one of many powerful services exposed through the Office 365 APIs. In this post, I'll expand on leveraging the Office 365 APIs from an Office add-in. I'll try to clear up some confusion on implementation and outline some patterns to deliver the best user experience possible with Office add-ins that connect to Office 365. Although I'm authoring this post specific to the Office 365 APIs, the same challenges exist for almost any OAuth scenario with Office add-ins (and the same patterns apply).

Mail CRM sample provided in post

 

The Office add-in Identity Crisis

Since 2013, users have become accustomed to signing into Office. Identity was introduced into Office for license management and roaming settings such as file storage in OneDrive and OneDrive for Business. However, this identity is not currently made available to Office add-ins. The one exception is in Outlook mail add-ins, which can get identity and access tokens specifically for calling into Exchange Online APIs. All other scenarios (at the time of this post) require manual authentication flows to establish identity and retrieve tokens for calling APIs.

Users may sign into Office, but their identity isn't available to add-ins

 

Why Pop-ups are a Necessary Evil

Office add-ins can display almost any page that can be displayed in a frame and whose domain is registered in the add-in manifest (in the AppDomains section). Both of these constraints can be challenging when performing OAuth flows. Due to the popularity of clickjacking on the internet, it is common to prevent login pages from being displayed inside frames. The X-Frame-Options HTTP response header makes it easy for providers to implement this safeguard on a widespread or domain/origin-specific basis. Pages that are not "frameable" will not load consistently in an Office add-in. For example, Office Online displays Office add-ins in IFRAME elements. Below is an example of an Office add-in displaying a page that cannot be displayed in a frame:

Office add-in displaying a page that is NOT "frameable"
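 

For reference, here are the forms that header can take; any of these will keep a login page from rendering inside an add-in frame (contoso.com is a placeholder):

X-Frame-Options Response Header (examples)
X-Frame-Options: DENY
X-Frame-Options: SAMEORIGIN
X-Frame-Options: ALLOW-FROM https://contoso.com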

 

The other challenge facing Office add-ins that perform OAuth flows is in establishing trusted domains. If an add-in tries to load any domain not registered in the add-in manifest, Office will launch the page in a new browser window. In some cases, this can be avoided by registering the 3rd party domain(s) in the AppDomains section of the add-in manifest (ex: https://login.microsoftonline.com). However, this might be impossible with identity providers that support federated logins. Take Office 365 as an example. Most large organizations use a federated login to Office 365 (usually with Active Directory Federation Services (ADFS)). In these scenarios, the organization/subscriber owns the federated login and the domain that hosts it. It is impossible for an add-in developer to anticipate all domains customers might leverage. Furthermore, Office add-ins do not support wildcard entries for trusted domains. In short, popups are unavoidable.

Rather than trying to avoid popups, it is better to accept them as a necessary evil in Office add-ins that perform OAuth/logins. Redirect your attention to popup patterns that can deliver a better user experience (which I cover in the next section).

Good User Experience without Identity

To address the challenges with identity in Office add-ins, I'm going to concentrate on patterns for improving the user experience in the popup and with "single sign-on". For popups, we want to deliver an experience where the popup feels connected to the add-in (a feat that can be challenging in some browsers). For "single sign-on", we want to provide a connected experience without requiring the user to sign in every time they use the add-in. Technically, this isn't really "single sign-on" as much as token cache management (which is why it is in quotes).

Mastering the Popup

Almost as soon as the internet introduced popups, they started being used maliciously by both hackers and advertisers. For this reason, popups have established a bad reputation, and browsers have built-in mechanisms to control them. These safeguards can make client-side communication between add-ins and popups problematic (don't get me started on IE Security Zones). Ultimately, we are using popups to acquire access tokens so that add-ins can make API calls. Instead of passing tokens back client-side (via window.opener or window.returnValue), consider a server-side approach that browsers cannot (easily) interfere with.

One server-side method for popup/add-in communication is by temporarily caching tokens on a server or in a database that both the popup and add-in can communicate with. With this approach, the add-in launches the popup with an identifier it can use to later retrieve the access token for making API calls. The popup performs the OAuth flow and then caches the token by the identifier passed from the add-in. This was the approach I outlined in the Connecting to SharePoint from an Office add-in blog post. It is solid, but relies upon cache/storage and requires the add-in to poll for tokens or the user to query for tokens once the OAuth flow is complete.
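 

Here is a minimal sketch of that caching approach (the controller and in-memory store are hypothetical and purely illustrative...a real implementation would use a database or distributed cache with expiry):

Server-side Token Cache (sketch)
public class TokenCacheController : ApiController
{
    //tokens keyed by the identifier the add-in generated and passed to the popup
    private static readonly ConcurrentDictionary<string, string> tokenCache = new ConcurrentDictionary<string, string>();

    //called by the popup's OAuth reply handler to deposit the token
    [HttpPost]
    public void Deposit(string id, [FromBody]string accessToken)
    {
        tokenCache[id] = accessToken;
    }

    //polled by the add-in until the token shows up (null until then)
    [HttpGet]
    public string Retrieve(string id)
    {
        string token;
        return tokenCache.TryRemove(id, out token) ? token : null;
    }
}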

(Please visit the site to view this video)

 

We can address both these limitations by delivering popup/add-in communication via web sockets. This method is similar to the previous approach. The add-in still passes an identifier to the popup window, but now "listens" for tokens using web sockets. The popup still handles the OAuth flow, but can now push the token directly to the add-in via the web socket the add-in is listening on (this "push" goes through a server and is thus considered server-side). The benefit of this method is that nothing needs to be persisted and the add-in can immediately proceed when it gets the access token (read: no polling or user actions required). Web sockets can be implemented numerous ways, but I've become a big fan of ASP.NET SignalR. Interestingly, SignalR already provides an identifier when a client establishes a connection to the server (which I can use as the identifier sent to the popup).

Sound complicated? It can be, so I’ll try to break it down. When the add-in launches, we need to get the identifier (to pass into the popup) and then start listening for tokens:

Get the Client Identifier and Start "listening" on Hub for Tokens
//initialize called when add-in loads to setup web sockets
stateSvc.initialize = function () {
    //get a handle to the oAuthHub on the server
    hub = $.connection.oAuthHub;

    //create a function that the hub can call to broadcast oauth completion messages
    hub.client.oAuthComplete = function (user) {
        //the server just sent the add-in a token
        stateSvc.idToken.user = user;
        $rootScope.$broadcast("oAuthComplete", "/lookup");
    };

    //start listening on the hub for tokens
    $.connection.hub.start().done(function () {
        hub.server.initialize();
        //get the client identifier the popup will use to talk back
        stateSvc.clientId = $.connection.hub.id;
    });
};

 

The client identifier is passed as part of the redirect_uri parameter in the OAuth flow of the popup:

Page loaded in the popup to perform the OAuth flow
https://login.microsoftonline.com/common/oauth2/authorize?
client_id=cb88b4df-db4b-4cbe-be95-b40f76dccb14
&resource=https://graph.microsoft.com/
&response_type=code
&redirect_uri=https://localhost:44321/OAuth/AuthCode/A5ED5F48-8014-4E6C-95D4-AA7972D95EC9/C7D6F7C7-4EBE-4F45-9CE2-EEA1D5C08372
//the User ID in DocumentDB
//the Client Identifier listening on web socket for tokens...think of this as the "address" of the add-in

 

The OAuthController completes the OAuth flow and then uses the client identifier to push the token information to the add-in via the web socket:

OAuthController that handles the OAuth reply
[Route("OAuth/AuthCode/{userid}/{signalrRef}/")]
public async Task<ActionResult> AuthCode(string userid, string signalrRef)
{
    //Request should have a code from AAD and an id that represents the user in the data store
    if (Request["code"] == null)
        return RedirectToAction("Error", "Home", new { error = "Authorization code not passed from the authentication flow" });
    else if (String.IsNullOrEmpty(userid))
        return RedirectToAction("Error", "Home", new { error = "User reference code not passed from the authentication flow" });

    //get access token using the authorization code
    var token = await TokenHelper.GetAccessTokenWithCode(userid.ToLower(), signalrRef, Request["code"], SettingsHelper.O365UnifiedAPIResourceId);

    //get the user from the datastore in DocumentDB
    var idString = userid.ToLower();
    var user = DocumentDBRepository<UserModel>.GetItem("Users", i => i.id == idString);
    if (user == null)
        return RedirectToAction("Error", "Home", new { error = "User placeholder does not exist" });

    //update the user with the refresh token and other details we just acquired
    user.refresh_token = token.refresh_token;
    await DocumentDBRepository<UserModel>.UpdateItemAsync("Users", idString, user);

    //notify the client through the hub
    var hubContext = GlobalHost.ConnectionManager.GetHubContext<OAuthHub>();
    hubContext.Clients.Client(signalrRef).oAuthComplete(user);

    //return view successfully
    return View();
}

 

Here is a video that illustrates the web socket approach. Notice that the add-in continues on after the OAuth flow without the user having to do anything.

(Please visit the site to view this video)

 

Cache Management

OK, we have established a consistent and smooth method for getting tokens. However, you probably don't want to force the user through this flow every time they use the add-in. Fortunately, we can cache user tokens to provide long-term access to Office 365 data. An access token from Azure AD only has a one-hour lifetime. So instead, we will cache the refresh token, which has a sliding 14-day lifetime (maximum of 90 days without forcing a login). Caching techniques will depend on the type of app.

The Exchange/Outlook Team already has a published best practice for caching tokens in an Outlook mail add-in. It involves using the Identity Token that is available through JSOM (Office.context.mailbox.getUserIdentityTokenAsync) and creating a hashed combination of ExchangeID and AuthenticatedMetadataUrl. This hashed value is the lookup identifier the refresh token is stored by. The Outlook/Exchange Team has this documented on MSDN, including a full code sample. I followed this guidance in my solutions. For the sample referenced in this post, I used Azure’s DocumentDB (a NoSQL solution similar to Mongo) to cache refresh tokens by this hash value. Below, you can see a JSON document that reflects a cached user record. Take note of the values for hash and refresh_token:

DocumentDB record for user (with cached refresh token by hash)
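 

For reference, here is a sketch of computing that hashed lookup value (parameter names are illustrative; see the MSDN sample for the exact pattern):

Computing the Cache Lookup Hash (sketch)
//hash the ExchangeID + AuthenticatedMetadataUrl from the Outlook identity token
//to build the key that refresh tokens are cached by
public static string GetUserCacheKey(string exchangeId, string authMetadataUrl)
{
    using (var sha = System.Security.Cryptography.SHA256.Create())
    {
        byte[] hash = sha.ComputeHash(System.Text.Encoding.UTF8.GetBytes(exchangeId + authMetadataUrl));
        return Convert.ToBase64String(hash);
    }
}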

 

For document-centric add-ins with Excel, Word, and PowerPoint, there is no concept of an identity in JSOM. Thus, these types of add-ins can't take the same token caching approach as an Outlook mail add-in. Instead, we must revert to traditional web caching techniques such as cookies, session state, or database storage. I would probably not recommend locally caching the actual refresh tokens. So if you want to use cookies, try storing some lookup value in the cookie that the add-in can use to retrieve the refresh token stored on a server. Consider also that cookie caching in an Office add-in could expose information in a shared-workstation scenario. Ultimately, be careful with your approach here.

Conclusion

I have full confidence that these add-in identity challenges will be short lived. In the meantime, the patterns outlined in this post can help deliver a better experience to users. To get you jumpstarted, you can download a Mail CRM sample that uses these patterns and many more. You can also download the Office 365 API sample from the Connecting to SharePoint from an Office add-in post back in March. Happy coding!

Mail CRM Sample outlined in blog post: https://github.com/OfficeDev/PnP-Store/tree/master/DXDemos.Office365

Connecting with SharePoint from add-in sample (from March 2015): http://1drv.ms/1HaiupJ 

Working with the converged Azure AD v2 app model


Microsoft recently announced the public preview of a new application model that offers a unified developer experience across Microsoft consumer and commercial services. This is so significant it is being called the “V2” application model. Why is it so significant? Now a single application definition and OAuth flow can be used for consumer services (ex: OneDrive, Outlook.com, etc) AND commercial services in Office 365 (ex: Exchange Online, SharePoint Online, OneDrive for Business). In this post, I’ll outline the major differences in the v2 app model and how to perform a basic OAuth flow using it.

(Please visit the site to view this video)

What’s Different

Registering applications and performing OAuth have become common practices when building applications that connect to Microsoft services. However, the new converged “V2” app model brings some significant changes to both of these tasks. I have listed the major differences below, but you should also read the announcement by the Azure Active Directory team.

  • Unified Applications – V2 Apps converge the disparate application definitions that exist today between Microsoft Accounts (MSA) that are used for consumer services and Azure AD (AAD) accounts that are used for Office 365. By offering one unified application, developers can register apps from a centralized portal (https://apps.dev.microsoft.com) that work with either MSA or AAD accounts.
  • One App, Multiple Platforms – V2 Apps support multiple platforms within a single application definition. In the past, multiple application definitions were required to deliver web and mobile experiences. In V2 apps, both web and mobile experiences can be delivered from the same application definition.
  • Permissions at Runtime – V2 apps don’t declare permissions during app registration. Instead, they request permissions dynamically by providing a scope parameter in token requests.
  • Deferred Resources – V2 apps no longer pass a resource parameter to get resource-specific access tokens. Instead, the resource can be automatically determined by the service based on the scopes passed in.
  • Refresh Tokens by Request – V2 apps do not automatically get refresh tokens when requesting tokens from the service. Instead, you must explicitly request a refresh token by including the offline_access permission scope in the token request.

Performing OAuth

There are a number of OAuth flows that the V2 model supports. I'm going to walk through the OAuth2 Authorization Code Flow, which is the most popular and used in most web applications. To demonstrate the flow, I'm going to take the raw browser/Fiddler approach that Rob Howard and Chakkaradeep “Chaks” Chandran blogged about HERE. The OAuth2 Authorization Code Flow can be simplified into these simple steps:

  1. Redirect the user to an authorize URL in Azure AD with some app details, including the URL Azure should reply back to with an authorization code once the user logs in and consents to the application.
  2. Post additional app details (including the authorization code from Step 1) to a token end-point in Azure AD to get an access token.
  3. Include the access token from Step 2 in the header when calling services secured by the V2 app model.

Sounds simple enough, right? The Azure Active Directory Authentication Libraries (ADAL) make this flow simple on a number of platforms, but I find it very helpful to understand the flow ADAL manages. Let’s perform this flow using nothing but a browser and Fiddler (any web request editor will work in place of Fiddler).

Step 0 – Register the V2 Application

Before we can perform an OAuth flow, we need to register a new V2 application in the new registration portal.

  1. Open a browser and navigate to https://apps.dev.microsoft.com.
  2. Sign in with either a Microsoft Account (MSA) such as outlook.com/live.com/hotmail.com or an Azure AD account you use for Office 365.
  3. Once you are signed in, click the Add an app button in the upper right.
  4. Give the application a name and click Create application.
  5. Once the application is provisioned, copy the Application Id somewhere where it will be readily available for the next section.
  6. Next, generate a new application password by clicking the Generate New Password button in the Application Secrets section. When the password is displayed, copy it down for use in the next section. Warning: this is the only time the app registration portal will display the password.
  7. Next, locate the Platforms section and click Add Platform to launch the Add Platform dialog.
  8. Select Web for the application type. Notice that the V2 application model supports multiple platforms in the same application.
  9. Finally, update the Redirect URI of the new platform to https://localhost and save your changes by clicking the Save button at the bottom of the screen.
  10. The V2 application should be ready to use!

Step 1 – Get Authorization Code

The first step of the OAuth2 Authorization Code Flow is to redirect the user to an authorize URL in Azure AD with some app details, including the URL that Azure should reply to with an authorization code once the user logs in and consents to the application. The format of this authorize URL is listed below. Replace the placeholders with details from your app registration and paste the entire URI into your browser.

NOTE: The authorize URI uses the new v2.0 end-point versioning. It also uses the scope parameter to tell the authorize flow what permissions the application is requesting (aka – Runtime Permissions). Here we are requesting openid (sign-in), https://outlook.office.com/contacts.read (read access to contacts), and offline_access (required to get refresh tokens back for long-term access).

 

Authorize URI
https://login.microsoftonline.com/common/oauth2/v2.0/authorize
?client_id={paste your client id}
&scope=openid+https://outlook.office.com/contacts.read+offline_access
&redirect_uri={paste your reply url}
&response_type=code
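If you would rather assemble this URI in code than by hand, here is a minimal Python sketch using only the standard library; the client id and redirect URI are placeholders you would swap for your own registration details.

import urllib.parse

AUTHORIZE_ENDPOINT = "https://login.microsoftonline.com/common/oauth2/v2.0/authorize"

params = {
    "client_id": "YOUR-APPLICATION-ID",    # placeholder from your app registration
    "scope": "openid https://outlook.office.com/contacts.read offline_access",
    "redirect_uri": "https://localhost",   # must match the platform you registered
    "response_type": "code",
}

# urlencode percent-encodes the scope URLs and joins the scopes with '+'
authorize_url = AUTHORIZE_ENDPOINT + "?" + urllib.parse.urlencode(params)
print(authorize_url)   # paste this into a browser to start the flow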

 

Immediately after pasting the authorization URI into the browser, the user should be directed to a login screen. Here, they can provide either a consumer account (MSA) or an Azure AD account (if an MSA account is provided, the login screen will change accordingly).

(Screenshots: Azure AD sign-in and MSA sign-in)

Once the user signs in, they will be asked to grant consent for the permissions the application is requesting. This consent screen will only display the first time through this flow. The screen will look a little different based on the type of account provided.

(Screenshots: Azure AD grant consent and MSA grant consent)

After granting consent to the application, the browser will be redirected to the location specified in the redirect_uri parameter. However, the redirect will include a code URL parameter. This is your authorization code, and it completes this section!
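Because https://localhost typically serves no real page, the easiest approach is to copy the full redirect URL from the browser’s address bar. A small Python sketch like this can pull the authorization code out of it (the URL below is a placeholder):

import urllib.parse

# placeholder: paste the full redirect URL copied from the browser here
redirect_url = "https://localhost/?code=PASTE-AUTHORIZATION-CODE"

query = urllib.parse.urlparse(redirect_url).query
auth_code = urllib.parse.parse_qs(query)["code"][0]
print(auth_code)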

Step 2 – Get Access Token

After acquiring the authorization code with the help of the user (logging in and granting consent) you can get an access token silently. To do this, POST additional app details (including the authorization code, application password, and permission scopes) to a token end-point in Azure AD. To perform the POST, you need a web request editor such as Fiddler or Postman. The end-point, headers, and body are listed below, but make sure you replace the placeholders with details from your app registration.

NOTE: The token end-point also uses the new v2.0 end-point versioning. The POST body also uses the same scope parameters you used to get the authorization code.

 

Get Access Token with Authorization Code
Method: POST
----------------------------------------------------------
End-Point: https://login.microsoftonline.com/common/oauth2/v2.0/token
----------------------------------------------------------
Headers:
Content-Type: application/x-www-form-urlencoded
----------------------------------------------------------
Body:
grant_type=authorization_code
&redirect_uri={paste your reply url}
&client_id={paste your client id}
&client_secret={paste your client secret}
&code={paste authorization code from previous step}
&scope=openid+https://outlook.office.com/contacts.read+offline_access

 

Here I’m using Fiddler’s Composer to perform the POST to get an access token.
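If you would rather script this step than use a web request editor, here is a rough Python equivalent of the same POST using the widely used requests library; every placeholder value is an assumption you would replace with your own app details.

import requests

TOKEN_ENDPOINT = "https://login.microsoftonline.com/common/oauth2/v2.0/token"

payload = {
    "grant_type": "authorization_code",
    "redirect_uri": "https://localhost",           # placeholder reply URL
    "client_id": "YOUR-APPLICATION-ID",            # placeholder
    "client_secret": "YOUR-APPLICATION-PASSWORD",  # placeholder
    "code": "AUTHORIZATION-CODE-FROM-STEP-1",      # placeholder
    "scope": "openid https://outlook.office.com/contacts.read offline_access",
}

# requests form-encodes the payload and sets the
# application/x-www-form-urlencoded Content-Type header automatically
tokens = requests.post(TOKEN_ENDPOINT, data=payload).json()
access_token = tokens["access_token"]
refresh_token = tokens.get("refresh_token")  # present because offline_access was requested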

The response to this POST should include both an access token and refresh token (because we included the offline_access scope).
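Access tokens are short-lived (roughly an hour), which is where the refresh token earns its keep. This post doesn’t walk through that call, but a standard OAuth2 refresh request against the same v2.0 token end-point would look roughly like this sketch (placeholders again assumed):

import requests

TOKEN_ENDPOINT = "https://login.microsoftonline.com/common/oauth2/v2.0/token"

payload = {
    "grant_type": "refresh_token",
    "refresh_token": "REFRESH-TOKEN-FROM-PREVIOUS-RESPONSE",  # placeholder
    "client_id": "YOUR-APPLICATION-ID",                       # placeholder
    "client_secret": "YOUR-APPLICATION-PASSWORD",             # placeholder
    "scope": "openid https://outlook.office.com/contacts.read offline_access",
}

new_tokens = requests.post(TOKEN_ENDPOINT, data=payload).json()
new_access_token = new_tokens["access_token"]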

Step 3 – Call Service with Access Token 

Congratulations…you have an access token, which is your key to calling services secured by the V2 application model. For the initial preview, only Outlook.com/Exchange Online services support this new flow. However, Microsoft is working hard to deliver widespread support for this flow, so other popular services will become available very soon. For Outlook.com/Exchange Online, we can hit one API end-point and the service will determine which mail platform to use based on the token provided. Use an MSA account and the API will automatically go against Outlook.com. Use an AAD account and the API will automatically hit Exchange Online in Office 365. It’s magic!

You can call a service in Outlook.com/Exchange Online using the web request editor. Use the REST end-point and headers below to GET contacts for the user. The header has a placeholder that should be replaced with the access_token acquired in the previous section. 

Calling Outlook.com/Exchange Online REST API
Method: GET
---------------------------------------------------------- 
End-Point: https://outlook.office.com/api/v1.0/me/contacts
----------------------------------------------------------
Headers:
Accept:application/json
Content-Type:application/json
Authorization: Bearer {access_token from previous step}
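The same GET is easy to script as well; here is a minimal Python sketch (requests library again) that lists contact display names, where the access token placeholder comes from Step 2:

import requests

CONTACTS_ENDPOINT = "https://outlook.office.com/api/v1.0/me/contacts"

headers = {
    "Accept": "application/json",
    "Authorization": "Bearer ACCESS-TOKEN-FROM-STEP-2",  # placeholder
}

response = requests.get(CONTACTS_ENDPOINT, headers=headers)
response.raise_for_status()

# the API wraps results in an OData-style 'value' array
for contact in response.json().get("value", []):
    print(contact.get("DisplayName"))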

 

(Screenshots: Fiddler GET composer and GET response)

NOTE: There are millions of MSA accounts around the world, and not all of them have been migrated to support this flow. Microsoft is working hard to migrate all MSA accounts, but it won’t happen overnight. If your MSA account hasn’t been migrated, you will get a 404 response when querying contacts, with the following error:

{"error":{"code":"MailboxNotEnabledForRESTAPI","message":"REST API is not yet supported for this mailbox."}}
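If you are scripting the call, it is worth checking for this case explicitly rather than treating every 404 the same; here is a small self-contained sketch, assuming the error body shape shown above:

import requests

CONTACTS_ENDPOINT = "https://outlook.office.com/api/v1.0/me/contacts"
headers = {"Accept": "application/json",
           "Authorization": "Bearer ACCESS-TOKEN-FROM-STEP-2"}  # placeholder

response = requests.get(CONTACTS_ENDPOINT, headers=headers)
if response.status_code == 404:
    error_code = response.json().get("error", {}).get("code")
    if error_code == "MailboxNotEnabledForRESTAPI":
        print("This MSA mailbox has not been migrated yet - try again later.")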

 

Conclusion

App Unification…OAuth Unification…End-Point Unification…goodness all around! I’ll be posting an actual code sample in the next few days, so check back soon. Below is a raw text file with the calls used in this post:

https://raw.githubusercontent.com/richdizz/Azure-AD-v2-Authorization-Code-Flow/master/OAuthRaw.txt
