July 2008 - Posts

I decided to go with an intro topic for Enterprise Search today.  If you are like me, you often find yourself doing ECM-related activities such as creating custom content types that use site columns.  This may present itself in the form of a custom list, form library, or document library.  After you get some documents, forms, or list items in there, eventually someone is going to want to search on them.  Searching these things is not hard at all; after all, the Local Office SharePoint Sites content source will index this content for you.  If you want to build more advanced searches, though, you are going to need to be able to search on those site columns.

In today's example, let's assume my content type uses site columns called City, State, and Product Type.  The reason I might want to do this is so I can run a search such as "show me all documents from the state of California."  We'll also assume that my content type inherits from Document (so we are dealing with document libraries).  The first thing we need to do is get Enterprise Search to learn about our site columns.  It does this by crawling, but we can't crawl without doing a little work first.  In our document library, we have to create a document or two that actually has those site columns populated.  If there aren't any documents with that metadata populated, Enterprise Search won't know about it and won't add those site columns as crawled properties.

After you create your documents and crawl, you should have some new crawled properties representing your site columns.  These show up under the SharePoint category on the Metadata Property Mappings page, where you can see crawled properties for all of the site columns on your SharePoint farm.  The one thing to note is that the crawler prefixes ows_ to your property name and replaces any spaces with _x0020_.  This means City would become ows_City and Product Type would become ows_Product_x0020_Type.  I really don't like the encoding on the spaces, so I just try to avoid them in my site columns.  Now we just need to create managed properties and map them to the new crawled properties.  Remember, you don't have to give the managed property the same name as the crawled property (e.g., State would map to ows_State).

Once you map your properties, be sure to do another Full Crawl.  This is necessary for the index to pick up your managed property mappings.  Once you do that, use your search center (or create a new one) and you can perform your queries.  For example, to search for all documents in California as mentioned above, you would use something like IsDocument:"1" State:"California".
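You can issue the same query from code as well.  Here is a minimal sketch using the KeywordQuery class (a hedged example: it assumes references to Microsoft.SharePoint.dll and Microsoft.Office.Server.Search.dll, and that http://moss-server is a site on your farm).

// a rough sketch: execute the same query from code with the KeywordQuery class
using (SPSite site = new SPSite("http://moss-server"))
{
    KeywordQuery query = new KeywordQuery(site);
    query.QueryText = "IsDocument:\"1\" State:\"California\"";
    query.ResultTypes = ResultType.RelevantResults;

    ResultTableCollection results = query.Execute();
    ResultTable relevantResults = results[ResultType.RelevantResults];
    // relevantResults can be loaded into a DataTable for display or binding
}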

Building a SmartPart-type control really isn't all that complicated.  On a project I did in the past, I came into a situation where I couldn't use the SmartPart.  I needed specific functionality and unfortunately at the time, the source code for the SmartPart was not available.  Therefore, I decided to see if I could build my own.  In its simplest form, all you are really doing is calling Page.LoadControl and passing it a path to your user control.  Of course it's a little more complicated than that, but not by much.  Let's start by looking at the beginning of the web part.

[SupportsAttributeMarkup(true), XmlRoot(Namespace = "http://schemas.mydomain.com/WebPart/MySmartPart")]
public class MySmartPart : Microsoft.SharePoint.WebPartPages.WebPart
{
    // this provides a reference to the control we are loading
    private Control myControl;

    [XmlElement("ControlPath")]
    [Personalizable(PersonalizationScope.Shared), WebBrowsable(true), WebDisplayName("Control Path"), WebDescription("Specify a path to a user control.")]
    [Category("Configuration")]
    [WebPartStorage(Storage.Shared), Browsable(false), DefaultValue("")]
    public string ControlPath
    {
        get;
        set;
    }
}

So far what we have is a new class inheriting from WebPart, with a member variable to reference the control itself and a property to contain the path to the control.  The ControlPath property has attributes on it so that it can be set via CAML or through the UI in SharePoint.  When the user specifies a path, it can be any relative URL on the site (not just in a UserControls folder).  For example, a .dwp file that sets the property declaratively might look something like this (a hedged sketch; the assembly name, token, and paths are placeholders).
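<WebPart xmlns="http://schemas.microsoft.com/WebPart/v2"
         xmlns:sp="http://schemas.mydomain.com/WebPart/MySmartPart">
  <Assembly>MyAssembly, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0123456789abcdef</Assembly>
  <TypeName>MyNamespace.MySmartPart</TypeName>
  <Title>My Smart Part</Title>
  <sp:ControlPath>/UserControls/MyControl.ascx</sp:ControlPath>
</WebPart>

The next part is to actually load the control and add it to the web part.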

protected override void CreateChildControls()
{
    base.CreateChildControls();

    // don't use LoadControl if you already have a reference to the control
    if (myControl == null)
    {
        myControl = Page.LoadControl(ControlPath);
        Controls.Add(myControl);
    }
}

We start by overriding CreateChildControls and calling the base method.  Then, we check to see if the control already exists so that we don't load a new instance of it on postbacks.  Finally, we call Page.LoadControl and add the result to the Controls collection.  This is really all that is required to get up and running with a web part that displays a user control.  One thing that is different: typically in ASP.NET, when you load a control with Page.LoadControl, you need to specify an ID on the control for postbacks to work properly.  However, you don't want to do this in your web part.  For whatever reason, it leads to very inconsistent behavior with your postbacks.  There isn't any error checking in the above example, so you might want to verify that the path is valid, confirm that the control was able to load, and watch for exceptions.
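A hedged sketch of the same method with those basic checks added might look like this.

protected override void CreateChildControls()
{
    base.CreateChildControls();

    if (myControl == null)
    {
        // make sure a path was actually configured
        if (String.IsNullOrEmpty(ControlPath))
        {
            Controls.Add(new LiteralControl("Please specify a control path."));
            return;
        }

        try
        {
            myControl = Page.LoadControl(ControlPath);
            Controls.Add(myControl);
        }
        catch (Exception ex)
        {
            // render the problem instead of letting the whole page fail
            Controls.Add(new LiteralControl("Unable to load control: " + ex.Message));
        }
    }
}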

Also, this web part pretty much has to run with full trust.  That means you'll have to put it in the GAC or run SharePoint with full trust.  I hate running things with full trust, but I discovered something pretty deep in the call stack of Page.LoadControl that requires it.  It's pretty simple, but maybe it will be of use to you if you ever need to do something more advanced when loading a user control (e.g., passing parameters).


I have been really enjoying the new features of the Infrastructure Update, but I noticed that something didn't look quite right about my search results.  It finally occurred to me that the horizontal line separating the description and the URL was no longer being displayed.  So, I got out the IE Developer Toolbar and determined that the style in question was srch-Metadata.  Comparing the style between an updated server and one that hadn't been updated, I discovered that there was no longer a top border defined in this element.  I knew this style was defined in core.css, but that file still had the same date and its contents had not changed.  I noticed that portal.css had a new date on it, so I began looking through there and found several new styles to handle the new search features, along with the culprit style below.

.srch-Metadata{
    BORDER-TOP: 0px none !important;
    MARGIN:0px 0px 20px !important;
}

This was it.  The top border was being removed by this style.  As a quick fix, I just removed it and my search results looked normal again.  Of course, as Microsoft always reminds you, any SharePoint files you modify in the hive are subject to being overwritten by a future update.
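If you would rather not touch portal.css in the hive at all, a safer alternative is to override the style from a custom stylesheet.  A quick sketch (the border color here is an assumption; match it to whatever core.css specifies on your farm):

.srch-Metadata {
    /* restore the top border that portal.css removed; color is a guess */
    border-top: 1px solid #cccccc !important;
}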

I have been in the process of updating multiple MOSS development servers with the Infrastructure Update, and I ran into an issue where the SharePoint Products and Technologies Configuration Wizard failed.  The reason was that this particular server did not have a HOSTS file, and the wizard was trying to apply permissions to that file.  I am not sure why this is.  You would think it would just skip the file and move on, but apparently not.  So if you run into this issue, just create an empty HOSTS file, run the Configuration Wizard again, and you will be good to go.


According to FeedBurner, my posts on querying Enterprise Search with the FullTextSqlQuery and KeywordQuery classes have been some of the most popular.  So I thought I would continue on these posts and explain how to do it using the web service.  The SDK covers this, but not in enough detail for me.  When I am learning something new, I like to see complete examples, so hopefully this will help someone trying to learn how to do this.  The first place to start is by adding a web reference to your project.  You can query SharePoint Search or Enterprise Search in the exact same manner; it is just a matter of which web service you reference.  SharePoint Search can be queried at a URL similar to the one below.

http://moss-server/_vti_bin/spsearch.asmx

To query Enterprise Search, the URL will look similar to this.  Both web services have the same methods; they just query different indexes.

http://moss-server/_vti_bin/search.asmx

To execute a query, you have to build an XML document.  I posted about the XML you post to the service in the past; now I will give a complete example and explain some of the options you can configure when querying.  The type of query is specified on the QueryText element using the type attribute.  A value of MSSQLFT, like in the XML below, is used to do a full text query.

<QueryPacket xmlns="urn:Microsoft.Search.Query" Revision="1000">
  <Query domain="QDomain">
    <SupportedFormats>
      <Format>urn:Microsoft.Search.Response.Document.Document</Format>
    </SupportedFormats>
    <Context>
      <QueryText language="en-US" type="MSSQLFT">SELECT Title, Path, Description, Write, Rank, Size FROM Scope() WHERE CONTAINS('Accounting') AND "Scope" = 'Corporate Documents'</QueryText>
    </Context>
  </Query>
</QueryPacket>

In the example above, I have a simple query that looks for documents containing the word Accounting in the Corporate Documents scope.  Issuing a keyword query is similar, but it uses a type of STRING.  In this example, we are searching on the managed property City with a value of Austin.

<QueryPacket xmlns="urn:Microsoft.Search.Query" Revision="1000">
  <Query domain="QDomain">
    <SupportedFormats>
      <Format>urn:Microsoft.Search.Response.Document.Document</Format>
    </SupportedFormats>
    <Context>
      <QueryText language="en-US" type="STRING">City:"Austin"</QueryText>
    </Context>
  </Query>
</QueryPacket>

There are two methods used to query search.  The Query method returns the results as XML; the QueryEx method returns the results as an ADO.NET DataSet.  They both take the same input XML document like the ones above.  The code is pretty simple.  Here I create a new reference to the web service (I named it QueryWebService) and pass it credentials.  Like any SharePoint web service, you have to supply credentials; you can specify custom ones or just use the credentials of the running application via DefaultCredentials.

QueryWebService.QueryService queryService = new QueryWebService.QueryService();
queryService.Credentials = System.Net.CredentialCache.DefaultCredentials;
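If you need the query to run as a specific account instead, it is just a matter of swapping in explicit credentials (the account values here are placeholders).

// hedged alternative: query as a specific account instead of the running identity
queryService.Credentials = new System.Net.NetworkCredential("username", "password", "DOMAIN");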

For this example, I am just going to build the input XML using a StringBuilder. 

queryXml.Append("<QueryPacket xmlns=\"urn:Microsoft.Search.Query\" Revision=\"1000\">");

queryXml.Append("<Query domain=\"QDomain\">");

queryXml.Append("<SupportedFormats>");

queryXml.Append("<Format>");

queryXml.Append("urn:Microsoft.Search.Response.Document.Document");

queryXml.Append("</Format>");

queryXml.Append("</SupportedFormats>");

queryXml.Append("<Range>");

queryXml.Append("<Count>50</Count>");

queryXml.Append("</Range>");

queryXml.Append("<Context>");

queryXml.Append("<QueryText language=\"en-US\" type=\"STRING\">");

queryXml.Append("City:\"Austin\"");

queryXml.Append("</QueryText>");

queryXml.Append("</Context>");

queryXml.Append("</Query>");

queryXml.Append("</QueryPacket>");

This query is similar to the one above, but I added a Range element with a Count element inside it.  By default, you will only get 10 results back; by changing this element, I will get back 50 in this case.  You can also specify a StartAt element to start at a specific row, which would be useful if you are building paging into something.  Then I just execute the query like so.

string resultXml = queryService.Query(queryXml.ToString());
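The QueryEx call is nearly identical; it takes the same input XML but returns an ADO.NET DataSet you can bind directly.

// same input XML, but the results come back as a DataSet
System.Data.DataSet resultDataSet = queryService.QueryEx(queryXml.ToString());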

The Query call returns an XML document that looks similar to the following.

<ResponsePacket xmlns="urn:Microsoft.Search.Response">
  <Response domain="QDomain">
    <Range>
      <StartAt>1</StartAt>
      <Count>10</Count>
      <TotalAvailable>7312</TotalAvailable>
      <Results>
        <Document relevance="1000" xmlns="urn:Microsoft.Search.Response.Document">
          <Title>Document 1</Title>
          <Action>
            <LinkUrl size="0">http://moss-server/Documents/Document1.docx</LinkUrl>
          </Action>
          <Description />
          <Date>2008-07-15T19:59:43.2511787-05:00</Date>
        </Document>
        <Document relevance="1000" xmlns="urn:Microsoft.Search.Response.Document">
          <Title>Document 2</Title>
          <Action>
            <LinkUrl size="0">http://moss-server/Documents/Document2.docx</LinkUrl>
          </Action>
          <Description />
          <Date>2008-07-15T19:59:43.2511787-05:00</Date>
        </Document>
      </Results>
    </Range>
    <Status>SUCCESS</Status>
  </Response>
</ResponsePacket>

From here, you have something that you can easily work with using LINQ to XML.  (If you are feeling lazy, you can always just use QueryEx to get a DataSet instead.)  For example, here is a rough sketch of pulling the titles and URLs out of the response (note the document namespace from the XML above).
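// a hedged sketch: parse the response with LINQ to XML
// (requires .NET 3.5 and usings for System.Linq and System.Xml.Linq)
XNamespace d = "urn:Microsoft.Search.Response.Document";
XDocument response = XDocument.Parse(resultXml);

var documents = from doc in response.Descendants(d + "Document")
                select new
                {
                    Title = (string)doc.Element(d + "Title"),
                    Url = (string)doc.Element(d + "Action").Element(d + "LinkUrl")
                };

foreach (var document in documents)
    Console.WriteLine("{0} - {1}", document.Title, document.Url);

You can configure quite a few options in how things are searched using the input XML document; the schema is in the SDK if you are interested.  Hopefully, this will help the next time you need to query search.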

Yesterday, the Infrastructure Update for Office SharePoint Server (as well as WSS and Project Server) was released.  This update makes changes to content publishing as well as Enterprise Search.  The install went smoothly; just remember that, like the service pack, you need to install the WSS update first.  There are two notable changes as far as search goes.  First is the addition of Federated Search, which comes over from the Search Server line of products.  The second is a new set of administration pages for setting up search.

If you are not familiar with Federated Search, it basically gives your search results page the ability to query additional content indexes that are not on your farm (e.g., Live Search or Google).  After you get the update deployed to your MOSS server(s), go create a new Search Center with Tabs site.  Immediately, you will notice that it looks slightly different.  The first change is a Content Editor web part underneath the search box that links to a disclaimer saying your searches also pull content from the Internet.  I found this quite lame, so I deleted it immediately.  Perform a search and you will see that the search results page is different now too.  The search results are the same, but there are two new instances of the Federated Results web part.  This new web part takes what you searched for and also passes it to Windows Live Search.  It works by specifying a Federated Search Location.

This is where the new administration pages come in.  The existing search pages are still there under the Search Settings link, but if you look in your SSP, you will see a new link for Search Administration.  Just about every page for configuring search has been updated.  The first page you see displays the same basic information on search, but it also has two new web parts (new in Microsoft.Office.Server.Search.dll) that display currently running crawls and crawls that have recently finished.  The Content Sources page has been updated to show how long the current crawl has been running as well as the duration of the last crawl.  A new Crawl Log page lets you get to all crawl logs in one place, but the only identifier on them is a GUID, so you have to click on each one to see what it is.  I was sad to see that the interface for mapping crawled properties to managed properties still has not changed.  Of everything, that was the page that most needed updating, since it is very difficult to pick properties with long names.  A new Proxy and Timeouts page lets you specify a proxy server as well as the timeout to use when crawling.

There is also a page to configure Federated Search.  The first thing to note is that the local search index is listed here as Local Search Results.  From what I can tell, this really only allows the local index to be used in a Federated Results web part; there are no changes to the CoreResultsWebPart, which means you are not going to be displaying federated results with it.  This also means that if you were getting your hopes up about displaying search results from your local index and somewhere else all mixed together, it isn't going to happen.  Out of the box there are two other federated search locations that are used to query Live Search.  It is here where you configure the URLs to these external searches and provide XSLT to format the output in the Federated Results web part.  There is also a link to an online gallery of other federated search connectors for things such as Google, Yahoo, etc.

All in all, I am pretty excited about these new changes.  The new search functionality is really nice, and I am looking forward to using it more.  Also note that Microsoft recommends installing this update as soon as possible.  There is more info about the search-specific changes on the Enterprise Search Team's blog.  I expect I will be posting more about using these new features soon.

Recently, I wanted to create a simple InfoPath form that took a query string parameter (an id) and passed it to a secondary data source that called a web service.  There may be a simpler solution, but I went with a programmatic one.  So, I set my preferred language to C# and went to the Programming menu item under the Tools menu.  For the purposes of this example, my secondary data source is called WebServiceDataSource and it takes a single parameter called ProductId.  Before you add any code, you will want to turn off the option to automatically execute the data source on form load.

This code goes in the FormEvents_Loading event handling method.  The first thing we need to do is get the parameter passed in from the query string.  This works similarly to TryParse: it returns true if successful and writes the value into an out parameter.

e.InputParameters.TryGetValue("ProductId", out productId)

This next line of code selects the ProductId query field node and sets its value.  XPath is used to find the node; remember, you can get the XPath query for a node by selecting it in the Data Source explorer and using the Copy XPath context menu item.

DataSources["WebServiceDataSource"].CreateNavigator().SelectSingleNode("/dfs:myFields/dfs:queryFields/tns:WebServiceDataSource/tns:ProductId", NamespaceManager).SetValue(productId);

The last thing you need to do is execute the data source.

DataSources["WebServiceDataSource"].QueryConnection.Execute();

At this point you can pass a parameter to the form, and your secondary data source will use it.  Since the form now contains code, you will of course have to select the Administrator-approved form template option when you publish it, and then upload it on the Manage Form Templates page in Central Administration.  Here is what all of the code looks like together.

public void FormEvents_Loading(object sender, LoadingEventArgs e)
{
    string productId;

    // get the value if it exists
    if (e.InputParameters.TryGetValue("ProductId", out productId))
    {
        // select the ProductId node in the secondary data source and set its value
        DataSources["WebServiceDataSource"].CreateNavigator().SelectSingleNode("/dfs:myFields/dfs:queryFields/tns:WebServiceDataSource/tns:ProductId", NamespaceManager).SetValue(productId);

        // execute the data source now that the parameter is set
        DataSources["WebServiceDataSource"].QueryConnection.Execute();
    }
    else
    {
        // handle an error (e.g., no ProductId was passed to the form)
    }
}

I am no expert on InfoPath, but this solution works for me.  If you know of a better solution, please post it.


Recently, I wanted to bind some data from the BDC to an ASP.NET control (e.g., a GridView, ListView, or DropDownList).  For what I needed, a Business Data List web part just wouldn't cut it, so I decided to create something that I could bind to.  I also needed a quick and dirty way to get an XML representation of the data so that I could use it from InfoPath.  Normally I avoid DataSets, but for this purpose I am willing to tolerate one.

First, we'll start with the process of retrieving data from the BDC through the API by executing a BDC Finder method.  The first thing you have to do is get a LobSystemInstance using the ApplicationRegistry object.  Application Registry is what the API guys came up with before marketing decided to call it the Business Data Catalog.  You pass it the name of the instance you want.  Note that this is the name of the instance in the XML file, not the application name you see in the BDC configuration pages.

// get a reference to the instance
LobSystemInstance instance = ApplicationRegistry.GetLobSystemInstanceByName("MyInstance");

// get a reference to the entity
Entity entity = instance.GetEntities()["MyEntity"];

Once you have the LobSystemInstance, you need to get a reference to the entity you want.  Since we are building a DataTable, we need the list of Fields returned in the view.  The Fields collection can be accessed after calling GetFinderView (use GetSpecificFinderView when calling the SpecificFinder method).  After that, I call a simple method which manually creates a new DataTable from the fields I have provided.

// get a view so that the fields can be retrieved
FieldCollection fieldCollection = entity.GetFinderView().Fields;

// create a datatable based on the fields in the entity
DataTable dataTable = CreateDataTable(fieldCollection);

My CreateDataTable method is simple.  It just iterates through the field collection and adds a column to the table.

private DataTable CreateDataTable(FieldCollection fieldCollection)
{
    // create a new datatable
    DataTable dataTable = new DataTable();

    foreach (Field field in fieldCollection)
        dataTable.Columns.Add(field.Name);

    return dataTable;
}

To execute the BDC Finder method, you have to pass it a FilterCollection even if you aren't using any filters.  You then call the FindFiltered method which returns an IEntityInstanceEnumerator.

// get an empty filter collection
FilterCollection filterCollection = entity.GetFinderFilters();

// get an enumerator over IEntityInstances
IEntityInstanceEnumerator entityInstanceEnumerator = entity.FindFiltered(filterCollection, instance);

This gives you an enumerator; then you just need to iterate with it and add rows to the DataTable.

// iterate through the entity instances
while (entityInstanceEnumerator.MoveNext())
{
    // add the row to the dataTable
    FillDataTable(dataTable, fieldCollection, entityInstanceEnumerator.Current);
}

The FillDataTable method just copies the values from the IEntityInstance into new dataRows.

private DataTable FillDataTable(DataTable dataTable, FieldCollection fieldCollection, IEntityInstance entityInstance)
{
    // create a new row based on the dataTable
    DataRow newDataRow = dataTable.NewRow();

    // add the data from each field into the datatable
    foreach (Field field in fieldCollection)
        newDataRow[field.Name] = entityInstance[field];

    // add the row to the table
    dataTable.Rows.Add(newDataRow);

    return dataTable;
}

After this, we have a complete DataTable that can be used for binding or whatever else.  You can bind manually or put this code in a class and have an ObjectDataSource call it.  As you can see, I haven't included any exception handling or null checking in these examples.  Calling a SpecificFinder method is pretty similar, with a few changes.

// get an entity instance given the id
IEntityInstance entityInstance = entity.FindSpecific(id, instance);

// get a view so that the fields can be retrieved
FieldCollection fieldCollection = entity.GetSpecificFinderView().Fields;

Instead of calling FindFiltered, we call FindSpecific, passing in a value for the identifier (e.g., give me the entity with id 37223).  The other change is that we get the FieldCollection from GetSpecificFinderView instead of GetFinderView.  To tie it together, here is a rough sketch (reusing the helper methods from above, and assuming a hypothetical GridView named bdcGridView) of going from the SpecificFinder result to a bound control.
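// a hedged sketch: build a one-row DataTable from the SpecificFinder result
// and bind it (bdcGridView is a hypothetical GridView on the page)
DataTable dataTable = CreateDataTable(fieldCollection);
FillDataTable(dataTable, fieldCollection, entityInstance);
bdcGridView.DataSource = dataTable;
bdcGridView.DataBind();

Even if you don't want a DataTable, hopefully these examples will help you get the data out of the BDC into some form that you can use.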

 

I tend to run into this issue every time I do a production deployment on one particular MOSS farm.  When I deploy a solution, stsadm of course says everything worked great, but when I look for my new code, it's not there.  Eventually, I remember to look at the Solutions page in Central Administration, where I will always find an error.  It turns out that on a few of the front-end servers, the solution deployment failed because it couldn't update one of the DLLs in the bin directory.  It's the typical "this file cannot be copied because it is in use" error message.  After going through the list of processes, I finally found the culprit: SP2K3DocBackup.exe.  After a little research, I discovered this belonged to CommVault Galaxy (a data backup application).  The solution was simple: once I killed this process on each server, I redeployed my solution file and it installed correctly.  I am not sure why this process locks your DLLs.  You would think it would use Volume Shadow Copy or something so it wouldn't have to.  Hopefully this will help if you run into this situation.


I can't talk about SharePoint all the time, so I thought I would talk about how to perform a particular type of query with LINQ.  In T-SQL, you might have written something like this at one point.

SELECT Title, Id FROM Table1 WHERE Id IN (SELECT Id FROM Table2)

Basically, I am looking for all rows in Table1 where there is a matching Id in Table2.  Effectively, I want a contains or exists type of comparison between tables or lists.  I recently ran into a scenario where I wanted to do this and the syntax wasn't immediately obvious to me, so I thought I would post something on it.  Let's start by defining a simple class.

public class MyClass
{
    public string Title
    {
        get;
        set;
    }

    public int Id
    {
        get;
        set;
    }
}

We'll then start by adding some test data to a list of this class.

List<MyClass> myClassList = new List<MyClass>();
myClassList.Add(new MyClass() { Title = "Title 1", Id = 1 });
myClassList.Add(new MyClass() { Title = "Title 2", Id = 2 });
myClassList.Add(new MyClass() { Title = "Title 14", Id = 14 });

In this case I want to compare it to a list of integers to find which items of MyClass match.

List<int> subQueryList = new List<int> { 1, 14, 97, 3, 11 };

Now, to perform the LINQ query.  The key to this kind of query is the Contains extension method on the list.

var filteredList = from myClass in myClassList
                   where subQueryList.Contains(myClass.Id)
                   select myClass;

Enumerating this query would return MyClass objects with an Id of 1 and 14.  This works well given a simple list of integers but what if we have two different lists of MyClass?  Here is our second list.

List<MyClass> myClassList2 = new List<MyClass>();
myClassList2.Add(new MyClass() { Title = "Title 5", Id = 5 });
myClassList2.Add(new MyClass() { Title = "Title 2", Id = 2 });
myClassList2.Add(new MyClass() { Title = "Title 14", Id = 14 });

One way that comes to mind to handle this is to write a LINQ query to get a list of the Ids for the second list and then query in a similar manner.

// get a list of ids from the other list of classes
var idList = from myClass in myClassList2
             select myClass.Id;

// subquery using the idList
var filteredList2 = from myClass in myClassList
                    where idList.Contains(myClass.Id)
                    select myClass;

Enumerating filteredList2 would return the MyClass objects with Ids of 2 and 14.  Instead of using a subquery to get a list of Ids, what about something like this?

var filteredList3 = from myClass in myClassList
                    where myClassList2.Contains(myClass)
                    select myClass;

This compiles just fine, but as expected it returns no results.  Although the MyClass instances in each list with Ids of 2 and 14 have identical values, they are different objects, so the default reference equality fails.  If you wanted to exert some additional effort, you could get this to work, but I am not going to cover that in depth today.
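As a quick hint, though, one way is to match on the key instead of the object with the Any extension method, which sidesteps the equality problem entirely.

// compare by Id rather than relying on default object equality
var filteredList4 = from myClass in myClassList
                    where myClassList2.Any(m => m.Id == myClass.Id)
                    select myClass;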
