April 2013 - Posts

I wanted to create an autocomplete textbox in a SharePoint app recently using terms from the term store.  Since retrieving items from the term store can be a bit involved, I wanted to show you the steps.  This solution makes use of jQuery UI Autocomplete.  The code loads the terms from the term store and then sets the source.  This implementation is really only ideal for a small number of terms, but it’s enough to get you started.

For this example, I am going to use a simple set of terms using state names in the United States.  My terms are included in a group named Classification and a Term Set named States.  Here is what my term store looks like.

TermStoreStates

In this example, we’re going to build our code inside a Client Web Part.  Take a look at my Client Web Part post if you are not familiar with the process yet.  We then need to add quite a few JavaScript references.  Some of these are included already, but specifically we need to load SP.Taxonomy.js.  We also need to include init.js, as I mentioned in an earlier blog post.

<script type="text/javascript" src="../Scripts/jquery-1.7.1.min.js"></script>

<script type="text/javascript" src="/_layouts/15/MicrosoftAjax.js"></script>

<script type="text/javascript" src="/_layouts/15/init.js"></script>

<script type="text/javascript" src="/_layouts/15/sp.runtime.js"></script>

<script type="text/javascript" src="/_layouts/15/sp.js"></script>

<script type="text/javascript" src="/_layouts/15/sp.taxonomy.js"></script>

My web part is called AutocompleteWebPart.aspx, so I am going to add a JavaScript file for my code called AutocompleteWebPart.js.  We also need to include a reference to jQuery UI.  You’ll need to download this and include it in your project or pull it from a CDN.

<script type="text/javascript" src="../Scripts/AutocompleteWebPart.js"></script>

<script type="text/javascript" src="../Scripts/jquery-ui-1.9.1.custom.min.js"></script>

Lastly, to use jQuery UI, you need to include its CSS file in the Content folder of your project.

<link rel="Stylesheet" type="text/css" href="../Content/jquery-ui-1.8.22.custom.css" />

Now, I am just going to add a textbox to the body of our page.

<body>

    <div>

        <input id="autocompleteTextBox" type="text" />

    </div>

</body>

The process for querying terms is involved.  You first need to get a reference to the taxonomy session, followed by the group and the term set, and finally you can iterate the terms.  There are a lot of good examples out there, but many of them take shortcuts by using GUIDs for values like the group and the term set.  This works, but it’s useless when you are writing code that can be deployed to any environment.  You need to be able to reference these items by name; unfortunately, the API makes accessing anything in the term store by name difficult.  It’s not impossible though; it just requires extra code and iteration.  Let’s walk through our JavaScript example below with absolutely no hard-coded GUIDs.

We’ll start by adding some global variables.  We’ll populate these as we go.

var context;

 

var session;

var termStore;

var groups;

var termSets;

var terms;

var termsArray = [];

We’ll start in a document ready function.  We use the standard code to get a reference to the current context.  We’ll need this to create a new TaxonomySession object.  We then call session.getDefaultSiteCollectionTermStore() to get the default term store.  From there, we need to call context.load on the session and termStore objects.  We then use a typical executeQueryAsync call to execute our query.  The onTaxonomySession method will handle success, and we’ll use a shared onTaxonomyFailed method to handle any failures.

$(document).ready(

    function () {

        context = new SP.ClientContext.get_current();

 

        session = SP.Taxonomy.TaxonomySession.getTaxonomySession(context);

        termStore = session.getDefaultSiteCollectionTermStore();

        context.load(session);

        context.load(termStore);

        context.executeQueryAsync(onTaxonomySession, onTaxonomyFailed);

    });

Just like with the managed API, you must configure your Managed Metadata Service connection appropriately in order for the default term store call to work.  In Central Administration, click on the Managed Metadata Service Connection and then click Properties.  Now, make sure the checkbox next to This service application is the default storage location for column specific term sets is checked.  Once you have made this change, your code should work.

ManagedMetadataServiceDefaultChecked

The onTaxonomySession method then retrieves a list of groups.  We have to retrieve all of the groups because while there is a method to retrieve a group by id, there isn’t one to retrieve a group by name.  Since we don’t want to hard-code any GUIDs, we have to retrieve all of the groups and iterate them to find the one we want.  A successful query will call onGroupsLoaded.

function onTaxonomySession() {

    groups = termStore.get_groups();

    context.load(groups);

    context.executeQueryAsync(onGroupsLoaded, onTaxonomyFailed);

}

In this method, we have a list of the groups, so we need to iterate through them and find the one we want; in this case, Classification.  The code isn’t ideal, but it works.  We start by getting an enumerator with getEnumerator().  We then use this enumerator to examine the groups.  In our loop, we use get_current() to get currentGroup.  We then use get_name() to compare against the one we want.  When a match is found, we call another method, getTermSets, and pass it the group.

function onGroupsLoaded() {

    // iterate termStores

    var groupEnumerator = groups.getEnumerator();

 

    while (groupEnumerator.moveNext()) {

        var currentGroup = groupEnumerator.get_current();

        if (currentGroup.get_name() == 'Classification')

            getTermSets(currentGroup);

    }

}

In the getTermSets method, we call get_termSets() on the group, load the result, and execute another asynchronous query.

function getTermSets(currentGroup) {

    termSets = currentGroup.get_termSets();

    context.load(termSets);

    context.executeQueryAsync(onTermSetsLoaded, onTaxonomyFailed);

}

The onTermSetsLoaded method will then iterate through the term sets returned and compare by name in the same way.  In this case, we are looking for the term set named States.  When the match is found, we call getTerms().

function onTermSetsLoaded() {

    var termSetEnumerator = termSets.getEnumerator();

 

    while (termSetEnumerator.moveNext()) {

        var currentTermSet = termSetEnumerator.get_current();

        var termSetName = currentTermSet.get_name();

        if (termSetName == 'States')

            getTerms(currentTermSet);

    }

}

This is now the last call we need to make.  It retrieves all of the terms for the term set.  Unfortunately, we have to get all of them (as far as I know), which is why I don’t recommend this approach for large term sets.

function getTerms(termSet) {

    terms = termSet.get_terms();

    context.load(terms);

    context.executeQueryAsync(onTermsLoaded, onTaxonomyFailed);

}

The onTermsLoaded method will iterate through the terms and add them to an array that the jQuery UI autocomplete method will accept.  There you have it: all of the code to get items from a term set without a hard-coded GUID.  It’s a lot of code, but not too bad once you get used to it.

Lastly, we’ll get a reference to our textbox and use the .autocomplete() method passing in the value of our array.

function onTermsLoaded() {

    var termsEnumerator = terms.getEnumerator();

 

    while (termsEnumerator.moveNext()) {

        var currentTerm = termsEnumerator.get_current();

        termsArray.push(currentTerm.get_name());

    }

 

    $("#autocompleteTextBox").autocomplete({ source: termsArray });

}

At this point, we are done, but we do need to implement our failure method.

function onTaxonomyFailed(sender, args) {

    alert('Taxonomy Error:' + args.get_message());

}

If you are using this code in an app, the last thing you need to do is set the Taxonomy permission to Read in the AppManifest.xml file.  This will let us query the term store.
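In the underlying AppManifest.xml, that permission request corresponds to the standard taxonomy permission element:

```xml
<AppPermissionRequests>
  <AppPermissionRequest Scope="http://sharepoint/taxonomy" Right="Read" />
</AppPermissionRequests>
```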

AppManifrstTaxonomy

At this point, we can test it.  Deploy your app and add the app part to a page.

AutocompleteExample

So, there’s a little code involved here, but the results are great.  You can configure the jQuery autocomplete plugin in a variety of ways too.  This code could probably be optimized some, so if you have improvements, let me know.
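Beyond source, the plugin accepts other documented jQuery UI Autocomplete options such as minLength and delay.  Here is a minimal sketch, with a hard-coded sample array standing in for the terms loaded from the term store:

```javascript
// Sample data standing in for the termsArray loaded from the term store.
var termsArray = ['Alabama', 'Alaska', 'Arizona', 'Arkansas'];

// minLength and delay are standard jQuery UI Autocomplete options.
var autocompleteOptions = {
    source: termsArray,
    minLength: 2,   // don't suggest until two characters are typed
    delay: 300      // wait 300 ms after the last keystroke before searching
};

// In the page: $("#autocompleteTextBox").autocomplete(autocompleteOptions);
```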

I recently ran into the following error when working with a SharePoint 2013 Client Web Part (App Part) while accessing the term store using SP.Taxonomy.js

JavaScript runtime error: 'NotifyScriptLoadedAndExecuteWaitingJobs' is undefined.

SPTaxonomyErrorNotifyScriptLoadedAndExecuteWaitingJobs

The following line gets hit in the debugger.

SPTaxonomyErrorNotifyScriptLoadedAndExecuteWaitingJobs2

The code I had worked fine inside a page, but when placed inside a client web part, I received the error.  I thought it might be something to do with the order in which I loaded the script files.  I recently switched to the new Client Web Part setup in the RTM version of the Office Developer Tools.  This particular update adds the references to the JavaScript files to the page instead of loading them dynamically with $.getScript().  It turns out it had nothing to do with that.  Instead, I just needed to add a reference to init.js.  Here is what my complete list of references looks like in the page for my client web part.

<script type="text/javascript" src="../Scripts/jquery-1.7.1.min.js"></script>

<script type="text/javascript" src="/_layouts/15/MicrosoftAjax.js"></script>

<script type="text/javascript" src="/_layouts/15/init.js"></script>

<script type="text/javascript" src="/_layouts/15/sp.runtime.js"></script>

<script type="text/javascript" src="/_layouts/15/sp.js"></script>

<script type="text/javascript" src="/_layouts/15/sp.taxonomy.js"></script>

If you receive the error above, that is all you have to do.  Luckily, it’s easy to fix.

As a developer, if you have ever looked for information on how to query search in SharePoint, there is a good chance that you have run into one of my articles.  I’ve been writing about how to query search in a variety of different ways since 2007, and those have always been some of the most popular articles on my site.  SharePoint 2013 has so many new ways to query search that it has taken me time to write up all of the different ways.  I am keeping the tradition alive by writing about how to query using the JavaScript Client Object Model provided by SP.Search.js.  You may have already seen my post on how to query from JavaScript and REST or my post on how to use the managed CSOM to query search.  You’ll find that querying from JavaScript looks very similar to the managed side of things.  In fact, even the namespaces are the same, and most of the code is quite similar between C# and JavaScript.  However, there are some nuances due to JavaScript, so I wanted to share a complete code sample.

I am going to build my example inside a SharePoint-hosted app.  If you’re not familiar with that process yet, take a look at my Client Web Part post to get started.  When it comes to apps, you need to be sure and request permission to access search.  Do this by editing your AppManifest.xml and clicking on the Permissions tab.  Select Search and then select QueryAsUserIgnoreAppPrincipal.  If you forget this step, you won’t get an error; your queries will simply return zero results.
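In AppManifest.xml, that search permission request looks like this:

```xml
<AppPermissionRequests>
  <AppPermissionRequest Scope="http://sharepoint/search" Right="QueryAsUserIgnoreAppPrincipal" />
</AppPermissionRequests>
```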

I’m going to put all of my code in the welcome page today.  My default.aspx page will have the same HTML as my other JavaScript post.  I simply add a textbox, a button, and a div to hold the results.  The user will type in his or her query, click the button, and then see search results.

<div>

    <label for="searchTextBox">Search: </label>

    <input id="searchTextBox" type="text" />

    <input id="searchButton" type="button" value="Search" />

</div>

 

<div id="resultsDiv">

</div>

We’re going to edit App.js next.  Since we’re in an app, I am assuming you already have code to get the SharePoint context, because Visual Studio adds it for you.  If not, you can get it like this.

var context = SP.ClientContext.get_current();

I then add a click event handler to my document ready function.

$("#searchButton").click(function () {

});

In here, we’ll put our code to get a KeywordQuery and SearchExecutor object to execute our query in a similar manner to the Managed CSOM approach.  Now, we need to get our KeywordQuery object using the context object we already have.

var keywordQuery = new Microsoft.SharePoint.Client.Search.Query.KeywordQuery(context);

Then, we set the query text using the value the user entered in the textbox with set_queryText().

keywordQuery.set_queryText($("#searchTextBox").val());

Now, we just need to create a SearchExecutor object and execute the query.  I am assigning the results of the query back to a global variable called results.  Since apps by default have ‘use strict’ enabled, you need to be sure and declare this first.

var searchExecutor = new Microsoft.SharePoint.Client.Search.Query.SearchExecutor(context);

results = searchExecutor.executeQuery(keywordQuery);

Then like any CSOM code, we have to use executeQueryAsync to make the actual call to the server.  I pass it methods for success and failure.

context.executeQueryAsync(onQuerySuccess, onQueryFail);

I generally prefer using REST to do my search queries.  However, when it comes to processing results, the data we get back from CSOM is typed better and much easier to access.  This results in less code that we have to deal with.  You can get quite a bit of data back, such as the number of results returned, just by examining results.m_value.  Each individual result can be found in results.m_value.ResultTables[0].ResultRows.  The managed properties of each row are typed directly on the object, which means you can access them directly if you know the name (i.e.: Author, Write, etc.). Iterating the values is simple using $.each.  Take a look at my example where I am writing the values into an HTML table.
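To make that shape concrete, here is a sketch with made-up data laid out the way results.m_value is, showing the same property access the handler relies on:

```javascript
// Made-up sample shaped like results.m_value from a CSOM search query;
// Title, Path, Author, and Write are managed property names.
var mockValue = {
    ResultTables: [{
        ResultRows: [
            { Title: 'Home', Path: 'http://server/home.aspx', Author: 'Corey', Write: '4/1/2013' },
            { Title: 'Docs', Path: 'http://server/docs.aspx', Author: 'Corey', Write: '4/2/2013' }
        ]
    }]
};

// Managed properties are typed directly on each row object.
var titles = [];
mockValue.ResultTables[0].ResultRows.forEach(function (row) {
    titles.push(row.Title);
});
// titles is now ['Home', 'Docs']
```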

function onQuerySuccess() {
    // Build the markup as a single string; passing '<table>' to append()
    // would insert a complete (auto-closed) empty table element instead of
    // an opening tag.
    var html = '<table>';

    $.each(results.m_value.ResultTables[0].ResultRows, function () {
        html += '<tr>';
        html += '<td>' + this.Title + '</td>';
        html += '<td>' + this.Author + '</td>';
        html += '<td>' + this.Write + '</td>';
        html += '<td>' + this.Path + '</td>';
        html += '</tr>';
    });

    html += '</table>';
    $("#resultsDiv").append(html);
}

The last thing to implement is the code to handle errors.  It just sends the error to an alert dialog.

function onQueryFail(sender, args) {

    alert('Query failed. Error:' + args.get_message());

}

When we run the app, it will look something like this.

SearchAppCSOMDefault

Executing a query will show us the four columns I specified in my $.each statement.

SearchAppCSOMResults

Pretty easy, right?  Here is the entire code snippet of my App.js.

'use strict';

 

var results;

 

var context = SP.ClientContext.get_current();

 

// This code runs when the DOM is ready and creates a context object which is needed to use the SharePoint object model

$(document).ready(function () {

 

    $("#searchButton").click(function () {

        var keywordQuery = new Microsoft.SharePoint.Client.Search.Query.KeywordQuery(context);

        keywordQuery.set_queryText($("#searchTextBox").val());

 

        var searchExecutor = new Microsoft.SharePoint.Client.Search.Query.SearchExecutor(context);

        results = searchExecutor.executeQuery(keywordQuery);

 

        context.executeQueryAsync(onQuerySuccess, onQueryFail);

    });

});

 

 

function onQuerySuccess() {
    // Build the markup as a single string; passing '<table>' to append()
    // would insert a complete (auto-closed) empty table element instead of
    // an opening tag.
    var html = '<table>';

    $.each(results.m_value.ResultTables[0].ResultRows, function () {
        html += '<tr>';
        html += '<td>' + this.Title + '</td>';
        html += '<td>' + this.Author + '</td>';
        html += '<td>' + this.Write + '</td>';
        html += '<td>' + this.Path + '</td>';
        html += '</tr>';
    });

    html += '</table>';
    $("#resultsDiv").append(html);
}

 

function onQueryFail(sender, args) {

    alert('Query failed. Error:' + args.get_message());

}

Although I typically prefer the REST interface for querying search, I have to admit I like the ease of working with the results in CSOM.  Hopefully, you find this sample useful.  Thanks!

I’ve also uploaded the full Visual Studio solution to MSDN Code Samples.

I’ve got a number of talks coming up that I am excited about.  It turns out that talking about my experiences publishing SharePoint 2013 apps to the Office Store is hot!  I’ve already spoken about it in Austin and now I’ll be speaking about it at the following events:

If you’re in a major city in Texas and want to hear about apps, I’ve got you covered!  This talk covers my personal experiences building a business around selling apps in the Office Store.  The talk is mostly non-technical and covers more of the details around what you need to submit and publish an app.  If you’re going to be at any of the events be sure and check it out.

I’m also giving a talk about the various aspects of the Search API in SharePoint 2013 at SharePoint Summit Toronto (5/13 – 5/15).  If you’re going to be in the area, be sure and check it out!

Follow me on twitter: @coreyroth.

This is the third time (and hopefully the final one) I am writing this post about building Client Web Parts (App Parts) for SharePoint 2013 with Visual Studio 2012.  This is largely due to the Office Developer Tools going through a few iterations (Preview 1 and Preview 2).  I don’t want to leave outdated information out there, so here is the updated version.  The process remains largely the same, but a few of the screens have changed, and what Visual Studio produces for us has changed significantly since Preview 1.

When you create the project, the first two steps look pretty much the same as Preview 1 and Preview 2.

VS2012RTMNewAppProject

The next step looks the same as well.  For our example, we’ll go with a SharePoint-hosted app again.

VS2012RTMNewAppProjectStep2

The good news is that in Preview 2 they added a wizard that helps you get started with client web parts.  This wizard does three things for you: it creates an application page for the client web part, it adds it to elements.xml, and it registers the CSS files from SharePoint so that the content inside the IFRAME is styled appropriately.  They have also added all of the required JavaScript references you need.  I’ll talk about those more in a second. Once you have created the project, add a new item and choose Client Web Part (Host Web).

VS2012RTMClientWebPartSPI

You’ll notice on the list that there is a new entry for Search Configuration as well.  I talked about that last week in a previous post.  When you go to the next step, you get a new dialog in the wizard that gives you the option to create a new page for the client web part.  This dialog has been updated slightly since Preview 2.

VS2012RTMClientWebPartSPIPart2

When you complete the wizard, it will generate quite a bit of HTML and JavaScript including all of the script files needed to get started referencing SharePoint.  Let me post the whole code snippet, so you can see what I mean.

<%@ Page language="C#" Inherits="Microsoft.SharePoint.WebPartPages.WebPartPage, Microsoft.SharePoint, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>

<%@ Register Tagprefix="SharePoint" Namespace="Microsoft.SharePoint.WebControls" Assembly="Microsoft.SharePoint, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>

<%@ Register Tagprefix="Utilities" Namespace="Microsoft.SharePoint.Utilities" Assembly="Microsoft.SharePoint, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>

<%@ Register Tagprefix="WebPartPages" Namespace="Microsoft.SharePoint.WebPartPages" Assembly="Microsoft.SharePoint, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>

 

<WebPartPages:AllowFraming ID="AllowFraming" runat="server" />

 

<html>

<head>

    <title></title>

 

    <script type="text/javascript" src="../Scripts/jquery-1.7.1.min.js"></script>

    <script type="text/javascript" src="/_layouts/15/MicrosoftAjax.js"></script>

    <script type="text/javascript" src="/_layouts/15/sp.runtime.js"></script>

    <script type="text/javascript" src="/_layouts/15/sp.js"></script>

 

    <script type="text/javascript">

        'use strict';

 

        // Set the style of the client web part page to be consistent with the host web.

        (function () {

            var hostUrl = '';

            if (document.URL.indexOf('?') != -1) {

                var params = document.URL.split('?')[1].split('&');

                for (var i = 0; i < params.length; i++) {

                    var p = decodeURIComponent(params[i]);

                    if (/^SPHostUrl=/i.test(p)) {

                        hostUrl = p.split('=')[1];

                        document.write('<link rel="stylesheet" href="' + hostUrl + '/_layouts/15/defaultcss.ashx" />');

                        break;

                    }

                }

            }

            if (hostUrl == '') {

                document.write('<link rel="stylesheet" href="/_layouts/15/1033/styles/themable/corev15.css" />');

            }

        })();

    </script>

</head>

<body>

</body>

</html>

The above code differs from Preview 2 in that it also includes references to MicrosoftAjax.js, sp.runtime.js, and sp.js.  Why we never bothered to include scripts this way in the past is beyond me.  In the past, we used $.getScript to dynamically load the SharePoint scripts.  That’s way overcomplicated when you can just get them out of the layouts directory. 

At this point, I recommend adding another script file to your project.  I tend to go with one script per client web part.  You can put all of your script code in there.  At this point, you’re also ready to deploy the client web part and try it out.  Deploy your project, click on the Developer Site link, and then edit the page.  Click App Part and then choose your newly deployed client web part.

VS2012RTMClientWebPartDeployed

That should get you started.  On a related note, the JavaScript code also changed slightly in App.js (the script file for your default.aspx page).  Here’s what it looks like if you’re curious.

VS2012RTMAppJs

The new tools make it much easier to get started with Client Web Parts, so be sure and get them if you haven’t already.  Also take a look at my Preview 1 and Preview 2 posts as they can walk you through some of the other steps.

I have a session coming up at SharePoint Summit Toronto this year about the many different ways you can query search.  Whenever I am working on a new talk, it is customary for me to write blog posts about my examples, so here it is. :)  I first learned how to query SharePoint 2013 search with the new REST API and JavaScript from looking at examples from @ScotHillier on MSDN.  However, the last time I tried the example, I noticed an issue, which I attribute most likely to a change between beta and RTM.  This post shows you my version of how to query search using the REST API and JavaScript. 

For this post, I am using the RTM Office Developer Tools (which I have a post coming out on soon).  I am going to use a SharePoint-hosted app for my example, but you could also use this in a web part in a farm solution.  When it comes to apps, you need to be sure and request permission to access search.  Do this by editing your AppManifest.xml and clicking on the Permissions tab.  Select Search and then select QueryAsUserIgnoreAppPrincipal.  If you forget this step, you won’t get an error; your queries will simply return zero results.

SearchAppRESTPermission

For my example, I am just going to add my code to the default.aspx page in the app.  I simply add a textbox, a button, and a div to hold the results.  The user will type in his or her query, click the button, and then see search results.  You could put this in a Client Web Part if you wanted. 

<div>

    <label for="searchTextBox">Search: </label>

    <input id="searchTextBox" type="text" />

    <input id="searchButton" type="button" value="Search" />

</div>

 

<div id="resultsDiv">

</div>

Now, we need to add the necessary code to App.js.  I start by removing the example code that retrieves the user information.  Instead, I add a click handler to my searchButton to execute the search query.  If you remember from my previous post on REST, we assemble a URL by appending /_api/search/query to a SharePoint host URL.  For example:

http://server/_api/search/query

However, in an app, we have to request the URL of the app web.  One way to do this is via the query string using the SPAppWebUrl parameter.  SharePoint passes this parameter to your app start page automatically.  We can retrieve it with a line like this.  Remember, getQueryStringParameter() is a helper method that we have gotten from some of the SharePoint examples.  I’ll include it in the full code listing at the bottom of this post.

var spAppWebUrl = decodeURIComponent(getQueryStringParameter('SPAppWebUrl'));

To pass the user’s query to SharePoint, we need to include the querytext parameter on the REST URL.  Be sure to enclose the value in single quotes.  Again, more details are in my previous post.

http://server/_api/search/query?querytext='SharePoint'
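One thing to watch: the user’s text goes into the URL as-is, so characters like & or # would break the query string.  A small sketch of a helper that encodes it (buildQueryUrl is my own hypothetical name, not part of the API):

```javascript
// Assemble the REST search URL; the single quotes around querytext are
// required, and encodeURIComponent protects the rest of the query string.
// buildQueryUrl is a hypothetical helper name, not a SharePoint API.
function buildQueryUrl(spAppWebUrl, queryText) {
    return spAppWebUrl + "/_api/search/query?querytext='" +
        encodeURIComponent(queryText) + "'";
}
```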

Now, we need to create a click handler for the search button, build the URL, and then execute the query.

$("#searchButton").click(function () {

});

Inside the click handler, assemble the URL by concatenating spAppWebUrl, /_api/search/query, and the querytext parameter.  The value of querytext will be retrieved from the textbox we added earlier.

var queryUrl = spAppWebUrl + "/_api/search/query?querytext='" + $("#searchTextBox").val() + "'";

Now, we just execute the query with $.ajax().  Pass the queryUrl in the url parameter.  This example uses the GET method, but if you have a lot of parameters, you may consider using POST instead.  Lastly, this part is key to getting this example to work: the accept header must have a value of "application/json; odata=verbose".  The odata=verbose part is not in the MSDN example; if you leave it out, you will receive an error.  The last parameters are the methods that will handle the success and failure of the AJAX call.  Here’s what the whole method looks like.

$("#searchButton").click(function () {

    var queryUrl = spAppWebUrl + "/_api/search/query?querytext='" + $("#searchTextBox").val() + "'";

 

    $.ajax({ url: queryUrl, method: "GET", headers: { "Accept": "application/json; odata=verbose" }, success: onQuerySuccess, error: onQueryError });

});

Now, you need to write code to handle the success.  The results come back in JSON format, but unfortunately, they are buried in a hugely nested structure.  Each individual result can be found in data.d.query.PrimaryQueryResult.RelevantResults.Table.Rows.results.  Your best bet is to assign this to a variable and then use a template or parse it manually. In my example, I am just going to use $.each to iterate through the results using brute force.  The individual columns of each search result can be found in this.Cells.results.  Ok, that’s confusing, I am sure, so let’s look at the code and then step through it.  I’m just writing out a simple table and appending it to a div.

function onQuerySuccess(data) {
    var results = data.d.query.PrimaryQueryResult.RelevantResults.Table.Rows.results;

    // Build the markup as a single string; passing '<table>' to append()
    // would insert a complete (auto-closed) empty table element instead of
    // an opening tag.
    var html = '<table>';

    $.each(results, function () {
        html += '<tr>';
        $.each(this.Cells.results, function () {
            html += '<td>' + this.Value + '</td>';
        });
        html += '</tr>';
    });

    html += '</table>';
    $("#resultsDiv").append(html);
}

Effectively, I have two nested loops: one to iterate through each result and one for each managed property (column) in the result.  Inside each cell, this.Value contains the value of the managed property, while this.Key contains its name. 
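Since each cell carries its Key, you can derive a header row from the first result’s cells.  A sketch with made-up data shaped like this.Cells.results:

```javascript
// Made-up sample shaped like one row's Cells.results from the REST response.
var firstRowCells = [
    { Key: 'Title', Value: 'Home' },
    { Key: 'Path', Value: 'http://server/home.aspx' }
];

// Build a header row from the managed property names (the Keys).
var headerHtml = '<tr>';
firstRowCells.forEach(function (cell) {
    headerHtml += '<th>' + cell.Key + '</th>';
});
headerHtml += '</tr>';
// headerHtml is '<tr><th>Title</th><th>Path</th></tr>'
```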

The last thing to implement is the code to handle errors.  It’s relatively simple and just writes error.statusText to the div.

function onQueryError(error) {

    $("#resultsDiv").append(error.statusText);

}

When I run my app, here is what it looks like.

SearchAppDefault

Executing a query in the app gives us results, but they aren’t pretty.  You can pretty them up yourself. :)  It also doesn’t include anything to print out the managed property names on the table.  You could get the names by looking at each cell of the first result and using this.Key.

SearchAppResults

You can see, it’s really pretty easy to get started.  You can further refine your REST query to request specific fields, sort orders, and more.  I’ll probably write a follow-up post in the future to include some of those details. Here’s the whole source code for my example that you can work with.

var context = SP.ClientContext.get_current();

 

// This code runs when the DOM is ready and creates a context object which is needed to use the SharePoint object model

$(document).ready(function () {

 

    var spAppWebUrl = decodeURIComponent(getQueryStringParameter('SPAppWebUrl'));

 

    $("#searchButton").click(function () {

        var queryUrl = spAppWebUrl + "/_api/search/query?querytext='" + $("#searchTextBox").val() + "'";

 

        $.ajax({ url: queryUrl, method: "GET", headers: { "Accept": "application/json; odata=verbose" }, success: onQuerySuccess, error: onQueryError });

    });

 

});

 

function onQuerySuccess(data) {
    var results = data.d.query.PrimaryQueryResult.RelevantResults.Table.Rows.results;

    // Build the markup as a single string; passing '<table>' to append()
    // would insert a complete (auto-closed) empty table element instead of
    // an opening tag.
    var html = '<table>';

    $.each(results, function () {
        html += '<tr>';
        $.each(this.Cells.results, function () {
            html += '<td>' + this.Value + '</td>';
        });
        html += '</tr>';
    });

    html += '</table>';
    $("#resultsDiv").append(html);
}

 

function onQueryError(error) {

    $("#resultsDiv").append(error.statusText);

}

 

//function to get a parameter value by a specific key

function getQueryStringParameter(urlParameterKey) {

    var params = document.URL.split('?')[1].split('&');

    for (var i = 0; i < params.length; i = i + 1) {

        var singleParam = params[i].split('=');

        if (singleParam[0] == urlParameterKey)

            return decodeURIComponent(singleParam[1]);

    }

}

That’s all there is to it.  Hopefully, you find this example useful.  Thanks.

The complete Visual Studio solution for this code snippet has also been uploaded to MSDN Code Samples.

The Office Developer Tools team snuck a new feature into the RTM version of the tools for Visual Studio 2012.  This new feature allows you to deploy apps and actually alter the search schema on the host web.  That’s right.  You can deploy an app and it will directly change the search configuration on the host.  They just released documentation on it a while back, but as usual, I wanted to share my experiences.  That and I know you all like screenshots.

What does this feature actually do?  Well, let’s back up a bit.  If you remember from my post, Search is Everywhere, I mentioned that we now have the ability to export and import search settings.  This works at the SSA, site collection, and site level and allows you to move everything from result sources to managed properties from one environment to another.  This is big, as it finally lets you promote search settings between environments and maintain a true SDLC when it comes to search.  Why do we care about search configuration with apps?  Well, this allows the developer to package up search settings in Visual Studio 2012 and then move them to production without having to do manual steps or use PowerShell.  This also means you could include search settings in an app that you would put in the Office Store.  It certainly opens up possibilities.

To test this out, go to your source site collection and customize your search settings.  In my example, I created a custom result source and some managed properties on our source site.  In my example, I actually did this on an on-premises installation of SharePoint 2013. 

SearchConfigurationResultSourceSite1

This particular result source does nothing exciting.  It simply limits the search to documents, but it serves as a good example.  I’ve also created a managed property mapped to the Author crawled property.  You may already know about this part, but I am showing it for a reason.

SearchConfigurationManagedPropertySite1

Now, I am going to export the search settings of my site collection by going to Site Settings –> Search –> Configuration Export.

SiteCollectionSearchSettingsExport

At this point, I could manually import the search settings using Configuration Import on another site collection.  However, we want to do this from an app.  Let’s get started in Visual Studio 2012.  Start by creating a new SharePoint-hosted app.  Once it is created, add an item to the project and choose Search Configuration.

VS2012SearchConfigurationSPI

The next step will ask for the path to your configuration XML file that you exported.

VS2012SearchConfigurationImportSettings

At this point, the import is done.  Visual Studio will show you an XML editor with the contents of your search configuration.  According to the MSDN documentation, you then need to edit it and set the DeployToParent element to true.

SearchConfigurationDeployToParentTrue
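For reference, the edit is a one-element change inside the imported file. The sketch below is only illustrative: everything else in the file comes verbatim from your own exported configuration, and the outer element is abbreviated here.

```xml
<!-- Sketch only: the surrounding markup comes from your exported
     configuration; the one edit is flipping DeployToParent to true. -->
<SearchConfigurationSettings>
  <DeployToParent>true</DeployToParent>
  <!-- ...your exported result sources, query rules, and so on... -->
</SearchConfigurationSettings>
```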

We then need to grant the app permission to access the site collection.  To do this, open AppManifest.xml and click on the Permissions tab.  On this tab, add a scope of Site Collection and set the permission to Full Control.

VS2012SearchConfigurationAppManifestPermissions
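If you prefer to edit the manifest directly, the Permissions tab writes an app permission request like the following into AppManifest.xml. This is the standard SharePoint 2013 permission markup for Full Control on the host site collection:

```xml
<!-- Full Control on the site collection, as set on the Permissions tab -->
<AppPermissionRequests>
  <AppPermissionRequest Scope="http://sharepoint/content/sitecollection"
                        Right="FullControl" />
</AppPermissionRequests>
```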

At this point, we are ready to deploy.  In my example, I am taking my search configuration and deploying it to an Office 365 SharePoint Online tenant.  When the app deployment completes, you’ll be prompted to trust the app.  Trust it and you should then see your app start page.

VS2012SearchConfigurationDeploymentTrust

At this point, you are just going to see your default app start page.  There is nothing visible in the application.  Go to the Developer Site (or the site collection you deployed to) and open Site Settings.  Then look at the Result Sources.  If everything worked correctly, you should now see your new result source there.

SearchConfigurationResultSourceDeployed

It was successfully deployed.  Now what about the managed property?  Unfortunately, it is nowhere to be found.  If you go back to Visual Studio and look at your XML, you’ll notice that the managed property definition is missing there as well.  If you check the source file from before you imported it, though, you’ll see the definition.  After I noticed this particular behavior, I reached out on Twitter and @chakkaradeep told me that managed properties aren’t supported in this deployment model.  That made me kind of sad because that’s what I want to deploy the most.  I’m sure there is a technical reason that he’ll explain to me sometime.  You can still deploy managed properties via Configuration Import, which is still a great added feature of SharePoint 2013.

You might be curious if the result source is removed when you uninstall the app.  It turns out that the changes are indeed removed when you uninstall.

SearchConfigurationResultSourceRemoved

Aside from the managed properties not being available, this is still a pretty cool feature and it has me thinking about some new things I can do that I didn’t think were possible before.  I’m pretty excited to work with it more.

As an IT consultant, I have seen it all (ok that’s a pretty big statement…I have seen a lot).  I often ask why it takes days, weeks, or even months to create a new user account.  Is it because there is so much process involved, your help desk or administrators are over-worked, or is it because someone in the chain is slacking?  Honestly, it could be any of these.  I find that watching how long it takes to create a user account is a good metric for an IT department’s efficiency.  You can see things from a business process and technical perspective by watching this process.  It shows how well the business process works to request and approve an account.  It can also expose any resourcing shortfalls that you may have.  In reality, the process of creating a new user account should really be simple.  Read more to look at possible causes of why it takes so long to create user accounts and for ways to improve it.

If you’re reading this as a business user or executive and have been told the process of physically creating the account takes forever, prepare to get angry.  Watch this video and see how long the process actually takes.  This particular video (albeit old) is under two minutes and there is a lot of ramp up time.  The process takes seconds.  I’ll give you that it might take a bit longer if you are also configuring Exchange mailboxes (although minimal) or trying to decide on a username, but as you can see the process doesn’t take that long.  If your administrator tells you that this process takes a long time, he or she is lying and it’s time for him or her to go.

Let’s look at the business process.  Obviously, you need to have some sort of approval process for creating accounts.  Your data is important.  You don’t want people requesting accounts for their friends or even worse a competitor.  Who needs to approve an account request?  Well from what I have seen in the past, typically someone from the business approves the request such as the requester’s manager, director, or VP.  You’ll probably also have someone from the IT department approve it as well.  Every organization is different, but if you have more than two or three approvals required to create an account, that is probably overkill.  You definitely don’t want to have too many VP-level or C-level approvals required.  After all, most of them don’t really care nor do they have time to deal with your approval to begin with.

For each approval you have, you can pretty much assume that it is going to cost you at least a day.  A well-engineered process will use multiple approvers and have fallbacks for when approvers are not available.  It should also have task reminders to remind approvers to approve tasks after a set period of time.  What is not surprising is that the technology that powers this process varies widely from one organization to the next and is far from standardized.  Typically, organizations manage this process inside their help-desk ticketing system.  I’ve also seen it put together in everything from an E-mail, a custom form, a Word document, you name it.  You could certainly build an approval workflow in SharePoint too. :)   If it takes a long time to create a user account though, it is more than likely a business process problem and not a technology problem.

It’s amazing how much money is lost because a new hire or contractor doesn’t have an account.  I see it all the time.  If I were the CFO, I would be yelling at you, the CIO, to get this process under control.  Of course, no one in their right mind would let me manage a company’s money. :)  Creating a user account should not take weeks and you know it.  So as the leader of an IT department, how do you fix it?  You need to figure out if it is a technology problem, a resource problem, or an approval problem.

Although I have never seen a help desk system that I would classify as great, I would rarely say it is a technology problem.  Still though, you can look at the process that the end user completes to get an account created.  This could be a form, an E-mail, logging in to the help desk system, or a phone call.  Make sure the process is obvious and make sure your users know how to get to it.

Let’s look at potential resource problems.  Monitor how long the ticket sits with the person actually creating the user account as well as the number of other tickets.  If that administrator is being hit with 50 new tickets a day and he or she can’t keep up, then it’s probably a resource problem which ultimately means it’s your fault.  Get that poor admin some help.  If the administrator only has a handful of tickets a day and doesn’t have any other responsibilities, you may be dealing with a slacker and you need to fix the problem.  Slackers are notoriously bad about making up excuses for why things take so long.  They use your ignorance of technology against you because as a non-techie you have no means to question them.  Ask the slacker lots of questions and it usually becomes pretty obvious.

You can find approval problems by looking at ticket history and examining the time in between approvals.  If you are seeing huge lag times in between approvers, you likely have too many required approvers or the wrong approvers.  Look for any trends where the process is getting hung up.  Interview people from your help desk about it because chances are one of them knows where the delays usually come from.  Think back to when the approval process was first created: does anyone there even remember why it was built the way it was?  Don’t be afraid to re-engineer the process from the beginning to simplify it.  Does the CIO or the VP of Legal really need to approve every account?  I’ve seen it before and it just really isn’t necessary.  Put a little more trust in your management to make the right decisions.  After all, you hired them because they are good people and they just want to get work done.

Lastly, separate the account creation process from the process to grant permissions to applications and files.  If the user is going to need access to sensitive files, oftentimes you will want additional approvals.  Separate this from the account creation process so that the new employee can at least read E-mail and get started with his or her job.  Let the approvals for those sensitive areas come in on a separate ticket to expedite the process.

I know there could be other factors in the mix such as dependencies on HRMS systems and the like.  What have you seen?  I know a lot of you out there are consultants and work with companies of varying sizes.  What are your experiences?  Do you have any good stories about starting somewhere?  Share them in your comments.

I could be totally off base here.  I haven’t done any significant systems administration work in years, but I think these are good ideas.  As part of being a SharePoint architect, I am often working with business users to improve processes and this one almost always needs help.  Remember, nothing gives your IT department a worse name than being slow at executing such a rather simple process.  Perception is reality.  Fix this problem in your organization and you’ll get a big win with your stakeholders.