March 2008 - Posts

As you may know, SharePoint has its own internal security groups that you can map Active Directory users and groups into.  This lets you create custom security groups without having to store them in Active Directory.  This is not always a best practice, but there are times when you may want to do it.  Using the API for this is pretty similar to other SharePoint tasks, but the last time I tried it I ran into some issues, so I thought I would show you how I did it.

The case I am talking about today is adding a group to a site collection.  This code could easily be adapted for other uses as well.  Any SPWeb object has a property called SiteGroups, which is of type SPGroupCollection.  This class has an Add method which requires a name, an owner, a default user, and a description.  Unfortunately, you can't just pass a user login for the owner and default user parameters; you have to give it an SPMember object.  This means that the user has to be added to the site collection prior to creating the group.  I sifted through the SDK and figured this would be simple.  In fact the code is really simple; here is what I tried first (because it seemed obvious).

currentSiteCollection.RootWeb.Users.Add("DOMAIN\\USERNAME", string.Empty, "DOMAIN\\USERNAME", string.Empty);

This takes parameters of login, e-mail address, username, and notes.  All I had was the username, so I passed in an empty string for the other values.  Unfortunately, this returns the following error.

System.InvalidOperationException: Operation is not valid due to the current state of the object.

Looking back, I think I should have been using SiteUsers instead of Users, but that is not the direction I ended up going.  I think I could use SiteUsers, but I would also have to add some code to check whether the user exists first.  What I ended up using was SPWeb's EnsureUser method.  This method simply checks whether the user exists on the site and, if not, adds it.
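Here is roughly what that looks like (the account name is just a placeholder).

// adds the account to the site collection if it is not already there and hands back the SPUser
SPUser groupOwner = currentSiteCollection.RootWeb.EnsureUser("DOMAIN\\USERNAME");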

Once you ensure the owner of the group is present on the site collection, adding the group is relatively easy.  Simply call the Add method with the name, owner, default user, and description.  In this case I am using the same account for the owner and default user.  I am getting the user from the SiteUsers collection since this is a site collection.  If you were adding a group to a site, you would use Users or AllUsers.

currentSiteCollection.RootWeb.SiteGroups.Add("My Site Group", currentSiteCollection.RootWeb.SiteUsers["DOMAIN\\USERNAME"],
    currentSiteCollection.RootWeb.SiteUsers["DOMAIN\\USERNAME"], "My Group Description");

This creates the group, but now you need to add some Active Directory users or groups to it.  This is actually pretty simple.  It takes the same parameters as the Add method on SPUserCollection.

currentSiteCollection.RootWeb.SiteGroups["My Site Group"].AddUser("DOMAIN\\USERNAME", string.Empty, "DOMAIN\\USERNAME", string.Empty);

This is how you do it.  To make things more elegant, I store my groups and users in an XML file and use LINQ to XML to query the data.  I may post the code for that here in the near future.
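In the meantime, here is a rough sketch of the idea (the XML structure and file name here are made up).

// requires using System.Xml.Linq and System.Linq
XDocument groupDefinitions = XDocument.Load("GroupDefinitions.xml");
var groups = from groupElement in groupDefinitions.Descendants("Group")
             select new
             {
                 Name = (string)groupElement.Attribute("Name"),
                 Owner = (string)groupElement.Attribute("Owner"),
                 Users = groupElement.Elements("User").Select(userElement => (string)userElement)
             };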


As you know, LINQ is similar in a lot of ways to T-SQL, but as you start doing things with grouping or joining, you will find that there are some syntactical differences.  There are two ways you can use the group clause in LINQ; which one you use will depend on your particular needs.  I'll start by explaining the way I typically use it.  In this scenario, I want to group by a particular column and also get a count of how many rows have each value.  Basically, we are creating the equivalent of the following T-SQL statement.

SELECT ProductName, COUNT(*) AS ProductCount FROM MyTable GROUP BY ProductName

When using the group clause, it has to be the last statement in the query unless you use the into keyword; basically, it replaces the select statement.  I'll come back to that in a minute.  To support a scenario more similar to the T-SQL syntax, you make use of the into keyword.  Here is what the LINQ query would look like.  In this case we are using LINQ to DataSet as the source.

var productGroups = from row in myDataTable.AsEnumerable()
                    group row by row.Field<string>("ProductName") into rowGroup
                    select new
                    {
                        Name = rowGroup.Key,
                        Count = rowGroup.Count()
                    };

The syntax of the group clause specifies what to group (row), how to group it (row.Field<string>("ProductName")), and where to put that grouping (rowGroup) so that you can use it.  The difference from T-SQL is the into clause, which is required in order for you to do anything with the grouping in an anonymous type.  The variable rowGroup is actually of type IGrouping<string, DataRow>, with string being the type we grouped on.  Once the data is grouped into rowGroup, this variable can be used to store the name and count in a new anonymous type.  The Key property contains the value of what we are grouping on, and the Count() method gives us our count.  So far when I have grouped in LINQ, I have found this to be the most common scenario.  Typically I want an IEnumerable<> of some sort of anonymous type that I can iterate over or bind to.

If you don't want an anonymous type at this point, you can use the group clause without the into keyword.  In that case the LINQ query returns the IGrouping<string, DataRow> directly.  You can then either query it again with LINQ or use a nested loop to work with the data.

var productGroups = from row in myDataTable.AsEnumerable()
                    group row by row.Field<string>("ProductName");

foreach (IGrouping<string, DataRow> productGroup in productGroups)
{
    foreach (var row in productGroup)
    {
        // do something with row here
    }
}

I don't find this scenario nearly as usable, but some may find it better.  Hopefully this will help the next time you need to group something with LINQ.  It's not really that complicated, but I thought it would be worth pointing out some of the differences.  I'll probably cover left outer joins pretty soon, because I have done them a few times now and the syntax still trips me up every time.


In my previous post about using the KeywordQuery class, I had one small omission.  When you add your own managed properties using the SelectProperties collection of the KeywordQuery class, the data type you get back is a string[] containing one element instead of a string.  This can be quite annoying when you are attempting to do data binding as well as additional filtering.  I have found this to really only be the case with custom managed properties; default properties such as Title and content source always return a string.  To combat this, use LINQ to read the data into a new anonymous type.  You can then bind the data and filter as needed.

var results = from queryResult in queryDataTable.AsEnumerable()
              select new
              {
                  ContentSource = queryResult.Field<string>("ContentSource"),
                  Size = (queryResult.Field<string[]>("Size").Any()) ? queryResult.Field<string[]>("Size")[0] : null
              };

In the above example, I specify ContentSource as a string just like normal.  However, for my custom managed property Size, I have to cast to a string[] and then simply return the first element.  The Any() call makes sure an element actually exists before I read it; technically I should also check that the field itself is not null.  Once everything is copied into the anonymous type, you can bind, group, or filter it as needed.

I am sure many of you have faced a situation like this yourselves.  You meet some new developer and they ask you what you do, and when you tell them you are a SharePoint developer they are like "Oh.  That's nice."  I have to admit that before I started doing SharePoint, I may have been guilty of the same thing myself.  There are many things that non-SharePoint developers don't realize.  First, it's not all point and click; we actually do write code.  Second, WSS3 and MOSS 2007 have really improved things since the WSS2 days (or even earlier).  I think a lot of developers got a bad taste for SharePoint from some of its previous versions.

The problem we face is that there is this stigma about being a SharePoint developer (just like there is about VB developers).  Whether it is founded or not, this is something we all as SharePoint developers have to overcome.  Personally, I like the fact that I have been able to specialize in a particular platform and I am not just another ASP.NET web developer.  There is a lot of benefit to that, especially job wise.  So let's clear up some of the misconceptions.

Misconception 1: SharePoint developers don't write code

Although we may write less code and do a lot of point and click at times, this is far from the truth.  We can point and click our way to setting up a content type, list, or site, but when it comes to deployment, we have to create XML or write code to get the job done.  And that is just one scenario where we write code.  We also write web parts, user controls, event receivers, domain objects, data access code, and application pages.

Misconception 2: All you do is build web parts

Well, sometimes, but honestly, if I can build something without having to write a web part, I will.  It's easy to use the SmartPart to load a user control, or even to write your own host web part (you can load a user control with a single line of code).  More often than not, I build web user controls for most common tasks.
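For example, loading a user control from inside a web part is roughly this (the control path here is made up).

protected override void CreateChildControls()
{
    // load an existing .ascx user control and add it to the web part's control tree
    Control userControl = Page.LoadControl("~/_controltemplates/MyUserControl.ascx");
    Controls.Add(userControl);
}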

Misconception 3: SharePoint is terrible, it sucks, etc.

Sometimes I would agree with that. :)  But more often than not, I think it is a good platform for a lot of scenarios.  As I have said in the past, it is not the solution for everything (sorry Microsoft :) ), but I think the document management, Business Data Catalog, and Enterprise Search features are great.  If you want to custom develop a document management solution from scratch, have fun with all of that.  I have also found that a lot of people don't know how much WSS3 has improved since WSS2.

 

I try not to write posts like this very often.  I typically try to focus on posts with tips that actually help the community learn something.  However, I thought it was worth spending some time as I sit on this airplane to write about it.


LINQ to SQL has already proved to be extremely easy to use for creating object relational mappings from an existing database schema using the Object Relational Designer.  The designer is good, but you may not want something autogenerating your domain classes; you may want to write them yourself.  This is actually quite easy and works in a similar manner to other OR/Ms such as ActiveRecord.  The thing I like about it is that your domain objects do not have to inherit from some base class containing all of the underlying logic to access the database.  Instead you create a custom class, separate from your domain objects, that inherits from DataContext.

We'll create a simple example of a products table for an e-commerce web site.  Let's start by looking at the domain object.  Before you create your domain object, add a reference to System.Data.Linq to your class library if it is not already present.  You will then need to add a using statement for System.Data.Linq.Mapping in each domain class.

[Table(Name="Products")]

public class Product

{

    [Column]

    public string Name;

 

    [Column(IsPrimaryKey=true, Name="Id")]

    public int ProductId;

 

    [Column(Name="Price")]

    public double Price;

}

The first thing you do in your domain class is decorate it with a Table attribute.  An optional parameter here specifies the name of the underlying database table; in this case my domain object is Product but my database table is named Products.  I then defined three public fields representing columns in the table.  The Column attribute specifies that the member will have a corresponding column in the database table.  The IsPrimaryKey parameter specifies that the column is a primary key in the database, and the Name parameter again allows you to specify a different column name in the database.

That is really all that is required to create a domain object.  You can define a class for each domain object you want, and you can also create relations between them (but I won't be covering that here today).  Once you have your domain objects created, you will need to create a DataContext class to actually be able to query them.  This is also pretty simple.  You just expose a field of the generic type Table<> for each one of your domain objects.  The name of that field is what you will use with the DataContext when you are querying with LINQ.

public class StoreDataContext : DataContext
{
    public Table<Product> Products;

    public StoreDataContext(string connection)
        : base(connection)
    {
    }
}

Now that you have your domain objects written, you will need to create the SQL tables that they represent.  You can do this manually, or you can have LINQ create the whole database for you.  Just create an instance of your DataContext and call the CreateDatabase method.  This method infers the name of the database from the connection string you used.  If you did not specify the database there, you need to add a Database attribute with the name to your class.

StoreDataContext myDataContext = new StoreDataContext(myConnectionString);
myDataContext.CreateDatabase();
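If you do need to spell the database name out, here is a quick sketch of that attribute (assuming the database should be called Store).

// the Database attribute tells CreateDatabase what to name the new database
[Database(Name="Store")]
public class StoreDataContext : DataContext
{
    // same members as before
    public Table<Product> Products;

    public StoreDataContext(string connection) : base(connection) { }
}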

Alright, so now your domain objects and database are created; all that is left is to query something with them.

var products = from product in myDataContext.Products
               where product.Price > 49.99
               select product;

This query simply returns any product with a price greater than 49.99.  So, LINQ to SQL doesn't have to be completely designer driven.  This gives you a lot of flexibility and makes it easy to add additional things to your domain logic if you want to.  The downside, of course, is that when your database schema changes, your domain object is not going to get updated at the click of a button.  If you are building your domain objects in this manner, though, that is probably not a concern to you.

I was recently building an ASP.NET control that needed to display a DropDownList with the same choices as a Choice site column.  It is pretty easy to get access to the site column, but getting its available choices requires a cast.  The first thing to know is some UI-to-API translation: a Site Column in the UI is referred to as a Field in the API.  Here is the code.

// note: don't wrap SPContext.Current.Web in a using statement - the context owns it, so it should not be disposed
SPWeb currentSite = SPContext.Current.Web;
MyDropDownList.DataSource = ((SPFieldChoice)currentSite.Fields["MyChoiceSiteColumn"]).Choices;
MyDropDownList.DataBind();

The Fields collection returns an SPField object, but you need to cast it to an SPFieldChoice object to get to the collection called Choices.  This is just a simple string collection that can be bound to a DropDownList or whatever.  Hopefully, this helps if you ever need to get access to the choices of a Choice site column.


At some point you may want to do an Enterprise Search query and specify that the results come from a particular content source.  A lot of times you might create a custom scope with that content source in it, but if you don't want to create a new scope, you can just query the content source directly.  The syntax is simple.  Just use it like in the example below.

ContentSource:"Local Office SharePoint Sites"

In this case, I want everything to come from the Local Office SharePoint Sites content source.  You can of course combine it with other terms as well like in the example below.

Bike Color:"Red" ContentSource:"BDC AdventureWorks"

In this case, it would search for red bikes in the BDC AdventureWorks content source.  Kind of a simple tip today, but I really haven't seen this documented anywhere.

Today I realized that when I blogged about how to use the KeywordQuery class, I forgot to mention how to specify the scope(s) you are querying.  You would think there would be a built-in property to set this, but there isn't.  I wanted to see how Microsoft was doing it, so I used Reflector and examined the notorious SearchResultsHiddenObject that the CoreResultsWebPart uses.  It does in fact do what I expected: it simply iterates through the list of scopes and appends each one to the keyword query string (i.e.: Scope:"Working Documents").  I was hoping there would be a more elegant way of doing this, but this appears to be the way.
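So if you are writing your own code against the KeywordQuery class, a rough sketch of the same approach would look something like this (the search term and scope name are placeholders, and siteCollection is an SPSite as in my earlier post).

KeywordQuery scopedQuery = new KeywordQuery(siteCollection)
{
    // the scope restriction is simply appended to the keyword query text
    QueryText = "mysearchterm Scope:\"Working Documents\"",
    ResultTypes = ResultType.RelevantResults
};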

Unfortunately, the Tulsa SharePoint Users Group meeting had to be canceled tonight.  Don't fret, I will still be presenting my talk on the BDC in April or May.  I'm sorry to disappoint all of my die hard fans that were planning on attending tonight.  Just kidding, I don't have any fans. :)  I was looking forward to speaking tonight.  Sit tight and be sure to check back for when it gets rescheduled.  Thanks.

A while back, I posted a how-to on using the KeywordQuery class and it seemed to get a pretty good response, so I figured I would post a follow up today on how to use the FullTextSqlQuery class.  It's actually pretty similar to using the KeywordQuery class, but of course the syntax is different.  I am not going to give you a full rundown of all the syntax, but I will provide enough to get you going.

There are a number of reasons to use the FullTextSqlQuery class.  Using a SQL syntax query unlocks the full power of Enterprise Search that really isn't there out of the box.  With SQL syntax, you can do wildcard searches and make use of the powerful CONTAINS and FREETEXT predicates.  With the CONTAINS predicate, you can use the FORMSOF term to do inflectional (i.e.: go matches going, gone, went, etc.) or thesaurus searches.  Again, check the SDK as there is plenty of documentation about how to use all sorts of predicates.  Before we look at all the code, let's look at a couple of things about the query.

SELECT Title, Path, Color, Size, Quantity, Rank, Description FROM SCOPE() WHERE "scope" = 'My Scope' AND CONTAINS(Description, 'ant*')

In a lot of ways this looks like a T-SQL query.  Enterprise Search queries, however, always query from SCOPE().  To specify the actual scope (or scopes) you want to search, you use the where clause.  I have always found the syntax weird here because the word scope has to be in double quotes and what you are comparing it to has to be in single quotes.  In this case I am using the CONTAINS predicate to do a wildcard search.  It should return anything that has a word starting with ant in the Description column (i.e.: ant, ants, anthony, antler, etc.).
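As an aside, an inflectional FORMSOF search would look something like the query below (this is my best recollection of the syntax, so double check it against the SDK).

SELECT Title, Path FROM SCOPE() WHERE "scope" = 'My Scope' AND CONTAINS(Description, 'FORMSOF(INFLECTIONAL, "go")')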

Here is the complete code example.

using (SPSite siteCollection = new SPSite(siteCollectionUrl))
{
    // create a new FullTextSqlQuery class - use property initializers to set query
    FullTextSqlQuery myQuery = new FullTextSqlQuery(siteCollection)
    {
        QueryText = "SELECT Title, Path, Color, Size, Quantity, Description, Rank FROM SCOPE() WHERE \"scope\" = 'My Scope' AND CONTAINS(Description, 'ant*')",
        ResultTypes = ResultType.RelevantResults
    };

    // execute the query and load the results into a datatable
    ResultTableCollection queryResults = myQuery.Execute();
    ResultTable queryResultsTable = queryResults[ResultType.RelevantResults];
    DataTable queryDataTable = new DataTable();
    queryDataTable.Load(queryResultsTable, LoadOption.OverwriteChanges);
}

It really works the same as the KeywordQuery class, so if you want an explanation of the subsequent lines, be sure to check out that post.

After talking to many different developers, I appear to be one of the few who have gotten remote debugging to work with a reasonable success rate.  That is why I have decided to post on it today.  Everything I am writing today is based on my experience of what has worked and may not necessarily be a best practice.  I actually posted about this once in the past, but I think it's worth going into more detail.

Preparing Your Server

The first step is to install the remote debugging tools on your server.  To do this, run rdbgsetup.exe contained in the Remote Debugger folder of your Visual Studio 2008 or Visual Studio 2005 installation media.  Be sure to pick the correct processor architecture (x86, x64, or ia64).  To use remote debugging, you can either install a Windows service or run an application.  Although the service is more convenient if you are going to be debugging a lot, the application is a lot easier to get up and running.

Running the Remote Debugger

Once you have the remote debugger installed, I typically use remote desktop to log into the server and start the Visual Studio 2008 Remote Debugger.  This really only works right if the account you are logging into the server with is the same account you log into on the client machine that is doing the remote debugging.  If it's not, there are some complications and you'll legitimately get the error I have posted about here.

One thing to note: the Visual Studio 2008 Remote Debugger is not backwards compatible with Visual Studio 2005.  Therefore, you need to be sure to install the Remote Debugger from the same version of Visual Studio you are debugging with.  It is ok to have them both installed at the same time, but I don't believe you can have both running at the same time (I need to confirm that though).

Preparing Your Environment

When you are ready to start remote debugging, start by compiling your web application.  You then need to copy the DLL and PDB files from your bin/debug folder to the bin folder of the web application on your server.  Not doing this is one of the most common reasons a breakpoint is never hit when remote debugging: if the DLL and PDB do not match between the client and the server, the breakpoint will never be hit.

Start Debugging

Once everything is in place, it is time to start debugging.  To do this, click Debug -> Attach to Process in Visual Studio with your web project open.  If everything is good and all the permissions match up, you should be able to type your server name into the Qualifier box and view its processes.  Typically when I do this, I am a local administrator on both the client and the server.  I think that is more permission than is actually needed, though; I believe there is a Remote Debuggers security group that can be used instead.

You start debugging by attaching to w3wp.exe.  However, it is more than likely that you will have multiple application pools on your server (especially if you are using SharePoint), which means more than one w3wp.exe process.  To determine which w3wp.exe to use, you can just pick one arbitrarily and then examine the Modules window in Visual Studio and look for your DLL.  You can also look at the username on the process and see if it matches the one on the application pool you want, or you can use a cscript command (such as iisapp.vbs on IIS 6) to get a list of the w3wp.exe processes and which site each one belongs to in IIS.

Once you have attached to the correct w3wp.exe, set a breakpoint.  Assuming you did everything correctly, the breakpoint won't show the warning that the symbols could not be loaded and the breakpoint will never be hit.  Now open a web browser and hit the page you want to debug.  If all goes well, your breakpoint will be hit and you can debug just like it was on your local machine.

I have found that one of the most painful experiences when working with web parts is dealing with a change in the version of the DLL.  This is never fun.  In this case I am assuming you are deploying a .webpart file via a feature using a solution package.  If you try to upgrade the package, or retract the package and then deploy the new version, the installation will be successful.  After reactivating your feature, though, if you try to use the web part anywhere it will most likely fail (along with all of the other pages you have it deployed on).  This is because the installation of the feature will not delete or update any existing web parts in the web part gallery.  So if there is already a .webpart in there referencing version 1.0.0.0 of your DLL and you want to update it to 2.0.0.0, it will not be changed.

The proper way to handle this is to delete the web parts from the gallery before updating.  You can do this manually if you want, or even better, update your feature deactivation code to go in and remove the web parts from the gallery.  Honestly, this goes for anything you are deploying via feature in SharePoint.  Just because you deactivate your feature, don't expect it to go back and remove whatever you added.  You are always going to have to write something to delete a file, document library, navigation item, page, etc.
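Here is a rough sketch of what that deactivation cleanup might look like, assuming a site collection scoped feature and a made-up .webpart file name (this goes in your SPFeatureReceiver class).

public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
{
    // for a site collection scoped feature, the parent is the SPSite
    SPSite siteCollection = (SPSite)properties.Feature.Parent;

    // the web part gallery lives at the root of the site collection
    SPList webPartGallery = siteCollection.GetCatalog(SPListTemplateType.WebPartCatalog);

    // walk the gallery backwards so deleting an item does not shift the ones we have not checked yet
    for (int i = webPartGallery.Items.Count - 1; i >= 0; i--)
    {
        SPListItem galleryItem = webPartGallery.Items[i];
        if (galleryItem.File.Name.Equals("MyWebPart.webpart", StringComparison.OrdinalIgnoreCase))
        {
            galleryItem.Delete();
        }
    }
}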


As I have mentioned before, one of my biggest complaints about SharePoint is that none of the collections in the SharePoint API have any way to determine whether an item exists.  Extension methods offer a slightly more elegant way to do this, although the underlying code still violates multiple best practice rules.  Take a look at this example using the SPFileCollection.

public static bool Contains(this SPFileCollection fileCollection, string index)
{
    try
    {
        // the indexer throws if the file does not exist
        SPFile testFile = fileCollection[index];
        return true;
    }
    catch (SPException)
    {
        return false;
    }
}

If you're not familiar with extension methods yet, they are an addition in C# 3.0 that allows you to add methods to existing classes without having to inherit from them.  You prefix the first parameter with the keyword this followed by a type to specify what type you are extending.  You can put your extension method in any static class you want.  Inside the method, you see the typical way of checking whether something exists in a SharePoint collection: try/catch.  The syntax for using the extension method is below.

bool fileExists = fileCollection.Contains("SomeFile");

Extension methods are quite powerful and I think they can provide an excellent way to make many tasks easier and cleaner inside the SharePoint API.
