Friday, December 21, 2012

MVC and Routing Basics

Routing is located in the System.Web.Routing namespace

Can be used in MVC and also ASP.NET web forms apps.

Routing is used to map incoming URLs to action methods in a controller, and optionally to pass arguments to those action methods.

It also generates outgoing URLs (links in pages, generally used for navigation).

Check out www.useit.com for guidelines on URL best practices.

Routing is similar to URL rewriting, which has been around for a long time in ASP.NET Web Forms. URL rewriting is geared toward pages and doesn't really apply to MVC. You should almost always use routing now; in MVC it is hard not to.

 

Defining a route

  • URL Pattern
  • Default values
  • Constraints
  • Arbitrary properties

Look in the Global.asax.cs file at the RegisterRoutes() method. Inside RegisterRoutes() you use the MapRoute() method to specify any of the 4 items listed above. There are many overloads for this method.

routes.MapRoute(
    "Default",                                              // route name
    "{controller}/{action}/{id}",                           // URL pattern
    new { controller = "Home", action = "Index", id = "" }  // parameter defaults
);

The order of the MapRoute() calls is very important. Routes are processed from top to bottom, so register more specific routes before more general ones.

URL Pattern

Default URL pattern: {controller}/{action}/{id}

A URL pattern consists of segments. Each segment can contain URL parameters (placeholders), shown as {…}, along with static text.

The RouteValueDictionary holds the values that the parameters take based on the current URL. For example, if the URL is http://localhost/Dog/Edit/3, then when the route is matched the RouteValueDictionary will contain the keys and values shown below.

controller = Dog
action = Edit
id = 3

You don't have to use / to separate the placeholders. You can use just about anything, but it should be a character that will not appear in the data itself.

Some other examples of url patterns:

“Brent/{controller}-{action}-{id}” –> Brent/Dog-Edit-3

“Brent/{controller}-{action}/{id}” –> Brent/Dog-Edit/3

 

Default Values

Default values work much like optional parameters in methods. If a parameter is not specified in the URL, routing looks for a default value for that placeholder. Defaults are passed to the routes.MapRoute() method.

http://localhost - uses the default values for controller and action

http://localhost/Dog/List - uses the default value for id
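The two URLs above fall out of the default route registration. As a sketch (assuming the standard MVC project template), in MVC 2 and later you can also use UrlParameter.Optional instead of an empty string for the id default:

```csharp
routes.MapRoute(
    "Default",                         // route name
    "{controller}/{action}/{id}",      // URL pattern
    new {
        controller = "Home",           // used when the controller segment is missing
        action = "Index",              // used when the action segment is missing
        id = UrlParameter.Optional     // id may be omitted entirely (MVC 2+)
    }
);
```

With this route, http://localhost resolves to Home/Index, and http://localhost/Dog/List resolves to the List action on DogController with no id.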

 

Constraints

What happens if the URL is http://localhost/Dog/Edit/3? Let's assume the id parameter on the action method is an int; 3 will be assigned to it and the routing pattern matches fine. But what would http://localhost/Dog/Edit/Jewel do? Routing would try to assign Jewel to an int, which would fail and result in the YSOD (Yellow Screen of Death). To keep the pattern from matching at all, we can introduce a constraint so that only numbers match. Constraints can use regular expressions to control accepted values.

We could change the route to something like this using an overloaded MapRoute() method.

routes.MapRoute(
    "Default",                                     // route name
    "{controller}/{action}/{id}",                  // URL pattern
    new { controller = "Home", action = "Index" }, // parameter defaults - notice no id default
    new { id = @"\d{1,6}" }                        // constraint: id must be 1 to 6 digits
);

routes.MapRoute(
    "DefaultNoId",                                 // route name - route names must be unique
    "{controller}/{action}",                       // URL pattern
    new { controller = "Home", action = "Index" }  // parameter defaults
);

The first route accomplishes what we were looking for. Unfortunately, it doesn't allow URLs like http://localhost or http://localhost/Dog/List, so we had to add the second route to allow those.

Handling an Arbitrary Number of Parameters

Can use a catch-all parameter that:

  • Matches URLs with an arbitrary number of parameters
  • Ignores slashes (they become part of the captured value rather than acting as separators)
  • Captures everything from the end of the matched URL pattern to the end of the URL
  • Excludes the query string parameters (doesn't capture them)
  • MUST be the last parameter in the URL pattern
  • Can appear only once in a URL pattern

To do this, just add a * to the beginning of the parameter name.

routes.MapRoute(
    "CatchAll",                                     // route name
    "Catch/All/Params/{*randomParams}",             // URL pattern
    new { controller = "Dog", action = "CatchAll" } // parameter defaults
);

There needs to be a method in the Dog controller called CatchAll. It would look something like this:

public ActionResult CatchAll(string randomParams) {… }

A matching url might look like this: http://localhost/Catch/All/Params/and/more/stuff/goes/here.

In this example, “and/more/stuff/goes/here” would be passed to the CatchAll method.

It is up to you to do something with the string. In most cases you will parse it and do something interesting with the values in the url.
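A minimal sketch of that parsing inside the action method (the Split call and the view usage are just illustrative):

```csharp
public ActionResult CatchAll(string randomParams)
{
    // randomParams arrives as a single string, e.g. "and/more/stuff/goes/here"
    string[] parts = (randomParams ?? string.Empty)
        .Split(new[] { '/' }, StringSplitOptions.RemoveEmptyEntries);

    // parts now holds each segment separately; do something interesting with them
    return View(parts);
}
```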

 

Blocking Routes

You may want to block requests for particular URLs. To do that you would typically put a route like one of the ones shown below among the first routes in your route collection so that it matches first.

routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

or

routes.Add(new Route("{resource}.axd/{*pathInfo}", new StopRoutingHandler()));

Both of these do the same thing.

 

Generating URLS

You could hardcode URLs in your MVC application, but that is difficult to maintain and messy. Luckily, MVC makes it easy to generate URLs automatically from your routes. To access this automatic generation you would typically use the HtmlHelper.ActionLink and HtmlHelper.RouteLink helper methods.

ActionLink – the url is based on the combination of the controller and action name, and any other parameters

RouteLink – relies on the route definitions and parameters

Outgoing URL generation works just like incoming route matching with regard to order: Routing uses the first route that matches, based on the order you added the routes to the collection.
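As a sketch (WebForms view syntax, using the Dog controller from the earlier examples), generating a link in a view might look like:

```csharp
<%-- Renders <a href="/Dog/Edit/3">Edit Rex</a> under the default route --%>
<%: Html.ActionLink("Edit Rex", "Edit", "Dog", new { id = 3 }, null) %>
```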

 

Named Routes

Use a named route if you want to force a particular route to be used. This works because the route name is basically a key into the list of routes in the route collection. Route names affect generated URLs (see ActionLink and RouteLink); incoming URLs are still matched by order. Naming is useful when you want to use a route that is farther down the list of routes and thus would not normally be chosen. Some people say that you should not use named routes, but the choice is yours.
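For example (a sketch using the CatchAll route defined earlier), Html.RouteLink can target a route by name:

```csharp
<%-- The second argument is the route name, which selects that route directly --%>
<%: Html.RouteLink("See everything", "CatchAll", new { randomParams = "some/stuff" }) %>
```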

Monday, December 10, 2012

Intro to MVC

URLs have the default format: http://domain/controller/action/id

Controller

They are generally light, or that should be the goal. (I need to verify this comment)

The class name must end with the Controller suffix and the class must implement the IController interface. However, we usually inherit from the Controller class, since it implements IController and provides more of the common code we need. The class cannot be abstract and cannot use generic parameters.

Index() is the name of the default action method that will be called.

Unless an action method says otherwise, MVC looks in the Views folder for a folder named the same as the controller class (minus the Controller suffix), and then for a view whose name matches the action method name.

Action methods can be overloaded, and attributes can determine which overload is used.

Requirements to be an action method:

  • Public (and not static)
  • Cannot be a method defined on System.Object or the Controller base class
  • Cannot be a special method (e.g., a constructor or property accessor)
  • Cannot have a NonAction attribute


Security warning: Anybody on the internet can invoke any public method in your controller unless it is decorated with [NonAction]. That said, you would usually only include methods that should be invoked from a URL.

Use the [ActionName("MyNewActionNameHere")] attribute on the Action method to change how it is used in the url. This changes what is matched for routing the request.

Use the HttpGet, HttpPost, HttpPut, and HttpDelete attributes to control which overloaded action method is called, based on the HTTP verb of the current request.
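A sketch of verb-based overloading (the Dog/Edit names follow the examples above; FormCollection is just one way to receive the posted values):

```csharp
[HttpGet]
public ActionResult Edit(int id)
{
    // GET /Dog/Edit/3 lands here and renders the edit form
    return View();
}

[HttpPost]
public ActionResult Edit(int id, FormCollection form)
{
    // POST /Dog/Edit/3 lands here; save the posted values, then redirect
    return RedirectToAction("Index");
}
```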

Override the HandleUnknownAction() method to change what happens when a request doesn't match an action (i.e., a 404 error). We could redirect to an error page, for example.
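A sketch of such an override in a controller (the "Error" view name is an assumption; substitute your own error page):

```csharp
protected override void HandleUnknownAction(string actionName)
{
    // Instead of the default 404, render a friendly error view
    View("Error").ExecuteResult(ControllerContext);
}
```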

Controllers should be simple. Logic should be in the model. A controller's job is to translate user's actions to the model, then provide a response.

It is not uncommon to directly reference the Model (including the Entity Framework) here using Linq. If you do, it is best to instantiate the Entity Framework model in the action method rather than creating it at the controller level.

Action Filters

Implemented as attributes that can be used on Action Methods in the controller or the controller itself. Can use more than one, and specify order.
AsyncTimeout, NoAsyncTimeout, Authorize, ChildActionOnly, HandleError, OutputCache, RequireHttps, ValidateAntiForgeryToken, ValidateInput

View

Use Html.RenderPartial() to render a user control.

<%: ... %> will automatically HTML-encode what is being rendered (if it isn't already encoded). This is a better choice than <%= ... %>.

Should be as simple as possible. No calculations, etc. Logic and calculations should be in the Model.

Give the View the raw data, let it format it the way the user should see it.

Can use open source view engines such as: Spark, Nvelocity, Brail, Nhaml

Generally will want to put files, etc in the Content directory.

Use a FileResult return type in an action to send a file to the browser (ContentResult returns arbitrary text content).

Use the ViewData object with a strongly-typed model or as a dictionary (name/value pairs). It can be accessed in code nuggets.

Code nuggets: <% ... %>

<%= ... %> is not recommended anymore because it does not encode the data; use <%: ... %> instead.

<%: ... %> – always use this instead of the = nugget; it does the same thing but safely encodes the content.

HTML helper methods help with writing tedious, common HTML. They are really just shortcuts to generate HTML. Examples: Html.Label(), Html.LabelFor(), Html.TextBox(), Html.TextBoxFor(), Html.BeginForm(), Html.ValidationSummary(), Html.ValidationMessageFor(), Html.ActionLink(), Html.RenderPartial(), Html.TextArea(), Html.Hidden(), Html.RouteLink(), Html.Action()

Partial views (implemented as user controls) allow you to reuse HTML and view code. They can be handy for Create and Edit views that share nearly all their HTML. Generally they should go in Views/Shared if they will be used by multiple controllers; if they will be used by one controller but multiple views, put them in Views/<Controller Folder>. Use Html.RenderPartial() to include the partial view at the point on the page where you want it to display. A partial view shares the parent view's model by default, but you can pass a different model via Html.RenderPartial(). Partial views can be nested.
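A sketch of both usages (the partial name "DogForm" and the SelectedDog property are assumptions):

```csharp
<%-- Uses the parent view's model --%>
<% Html.RenderPartial("DogForm"); %>

<%-- Passes a different model to the partial --%>
<% Html.RenderPartial("DogForm", Model.SelectedDog); %>
```

Note that RenderPartial() writes directly to the response (it returns void), so it is used inside <% %> with a semicolon rather than <%: %>.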

Use RenderAction() to call a controller action method from a view and display the result of that action method. This is useful for showing data that is not part of the view's model. It is similar to RenderPartial(), but instead of inserting a partial view you can insert virtually anything, because you can return most anything from an action method. Beware of using this too often, because the calls between controllers and views become difficult to maintain. Only use it when needed.

HtmlHelper - AntiForgeryToken(), Encode(), EnableClientValidation(), AttributeEncode(), HttpMethodOverride(). There are 3 classes of helpers: 1. untyped (regular) helper methods, 2. strongly-typed helper methods, 3. templated helper methods. There are lots of overloads. Helpers search for field values in the following order: ModelState, ViewData, and then the view model. You can put @ before a custom attribute name passed to a helper when it collides with a C# keyword (e.g., @class). You can also use a dictionary of attributes.
The Html.Label() helper method will use the Display() attribute from the Model to determine what text to display.

There are some higher-level helpers for grids and charts. Check out WebGrid with GetHtml(), and Chart with AddTitle() and AddSeries(). These are used with Razor.

Can supply a default value via an extra parameter on helper methods.

Most common Templated Helper Methods

  • Display, DisplayFor, DisplayForModel
  • Editor, EditorFor, EditorForModel

Models

Most any kind of technology or convention can be used here, including things like Entity Framework.
The model encapsulates data and business rules: logic, validation, calculations, etc. This is where all the heavy lifting is done.
Models are used directly in controllers and views.
It is NOT required to add the suffix Model to the classes, but doing so generally makes the code easier to follow.

Monday, December 3, 2012

Truncating a table from a Linked Server

Let’s assume you are using SQL Server and you want to truncate a table on a Linked Server but are getting errors about permissions.

Msg 4701, Level 16, State 1, Line 1

Cannot find the object "MyTable" because it does not exist or you do not have permissions.

The first thing to check is that you have the proper permissions. MSDN says

“The minimum permission required is ALTER on table_name. TRUNCATE TABLE permissions default to the table owner, members of the sysadmin fixed server role, and the db_owner and db_ddladmin fixed database roles, and are not transferable. However, you can incorporate the TRUNCATE TABLE statement within a module, such as a stored procedure, and grant appropriate permissions to the module using the EXECUTE AS clause.”

Assuming you do actually have permission the problem is probably in the syntax.

My guess is that you tried the same thing I did to start with, which is the following.

truncate table MyLinkedServer.MyDB.dbo.MyTable

You can use truncate table statement on a Linked Server, but not with that syntax and you have to know a trick. Here is the same statement, but using a different syntax.

EXEC MyLinkedServer.MyDB.sys.sp_executesql N'truncate table dbo.MyTable'

It seems a bit convoluted to me, but it works.

NOTE: If you get a message that the server is not configured for RPC, see the next post for how to fix that issue.

How to resolve: Msg 7411, Level 16, State 1, Line 1 Server 'MyServer' is not configured for RPC.

 

If you get the following error when trying to execute a stored procedure on a Linked Server in MS SQL Server, you need to enable RPC on the server that has the linked server defined (I called my linked server MyServer in this example).

Msg 7411, Level 16, State 1, Line 1

Server 'MyServer' is not configured for RPC.

The solution is simple, just execute the following SQL on the server where you have defined the Linked Server. You will need to change MyServer to the name of your linked server.

EXEC master.dbo.sp_serveroption @server=N'MyServer', @optname=N'rpc', @optvalue=N'true'
GO

EXEC master.dbo.sp_serveroption @server=N'MyServer', @optname=N'rpc out', @optvalue=N'true'

You need RPC enabled on the Linked Server so that you can call stored procedures. In most cases you only need the second option ('rpc out'), but if you have problems you can set the first one as well. Most people recommend setting both even though both are not needed in many cases.

Wednesday, November 14, 2012

Getting the status (including In Progress) of SQL Job in SQL Server 2005+

I find it amazing how many posts there are on forums where people want to know the status of their SQL Server jobs and how difficult it is to get. The SysJobHistory table is good if you don't care about jobs that are In Progress, but even though that table defines an In Progress status, it will never actually show it, because the SysJobHistory record is not created until the job has completed.

Here is the code if you don't care about In Progress and want to use the SysJobHistory table. This basically gives you the last status of each job that has completed.

select * from msdb.dbo.sysJobHistory h, msdb.dbo.sysJobs j
where j.job_id = h.job_id and h.run_date =
(select max(hi.run_date) from msdb.dbo.sysJobHistory hi where h.job_id = hi.job_id)

The SysJobHistory table is pretty well documented here.

Since SQL Server 2005, if you want to get the current execution status of a job you can use the following:

exec msdb.dbo.sp_help_job

This is simple and easy. The problem some people have with it is that it is a stored procedure, so you can't use it like a table. It does allow a number of parameters that work very much like a where clause. For example, getting all In Progress jobs is simple:

exec msdb.dbo.sp_help_job @execution_status = 1

Still, you are limited to what they provide as parameters. Luckily as with any stored procedure you can work around this.

This site has lots of examples of how to take results from stored procs and select from them.

One example of this (if OPENROWSET is available on your installation of SQL Server) is:

SELECT *  FROM OPENROWSET ( 'SQLOLEDB','SERVER=.;Trusted_Connection=yes','EXECUTE msdb.dbo.sp_help_job');

If you don’t have OPENROWSET as an option then you will have to work a bit harder.

My first thought was to do something like

insert into Results exec msdb.dbo.sp_help_job --@execution_status = 1

That gets the data, but also generates an error which I really don’t think is a good thing.

Msg 8164, Level 16, State 1, Procedure sp_get_composite_job_info, Line 72
An INSERT EXEC statement cannot be nested.

So, I had to look for another solution. Unfortunately it is undocumented, but it works.

declare @Results table(
    job_id uniqueidentifier not null,
    last_run_date int not null,
    last_run_time int not null,
    next_run_date int not null,
    next_run_time int not null,
    next_run_schedule_id int not null,
    requested_to_run int not null, -- bool
    request_source int not null,
    request_source_id sysname collate database_default null,
    running int not null, -- bool
    current_step int not null,
    current_retry_attempt int not null,
    job_state int not null )

insert @Results exec master.dbo.xp_sqlagent_enum_jobs @is_sysadmin = 1, @job_owner = ''

select * from @Results

This works well UNLESS you want to connect to another server via a linked server, since this is really calling a DLL underneath the stored procedure. If you try it you will get this error:

Msg 7411, Level 16, State 1, Line 1
Server 'myserver' is not configured for RPC.

I am pretty sure I can get by that, but I think that is opening up a security hole.

Conclusion:

I find this topic frustrating. I really want it to be simpler and supported, but the best I can do is use the unsupported option, since my SQL Server configuration cannot be changed. I wish all SQL Server stored procedures worked more like table-valued functions. In the end, I created another table where each of the jobs from the different servers logs when it starts and stops. I can then run some queries to figure out the status (including In Progress) of the jobs. It seems like there should be a better way than opening up RPC, using OPENQUERY, etc. If I can implement it, Microsoft certainly could. Please Microsoft, please. What am I missing?

Friday, October 19, 2012

Troubleshooting Microsoft Excel cannot access the file

Here's the scenario: I am running SQL Server (on Windows Server 2008 R2) and I have created a Job that includes, among other steps, a CmdExec step. This CmdExec step calls a C# console application that I wrote. My C# console application uses Microsoft.Office.Interop.Excel.Application. The code throws the following exception/message.

Microsoft Excel cannot access the file 'c:\myDir\MyFile.xlsx'. There are several possible reasons:

• The file name or path does not exist.

• The file is being used by another program.

• The workbook you are trying to save has the same name as a currently open workbook.

 

The last line of the code below is the line that throws the exception

Microsoft.Office.Interop.Excel.Application xlApp = new Microsoft.Office.Interop.Excel.Application();
string filename = @"c:\myDir\MyFile.xlsx";
Console.WriteLine("Exists: " + File.Exists(filename).ToString());
var wb = xlApp.Workbooks.Open(filename);

The message suggests that Excel can't find the file I am trying to open, so I made sure none of the listed reasons applied. However, the actual problem had nothing to do with my file. Instead, I just had to create a directory called Desktop at the following path:

C:\Windows\SysWOW64\config\systemprofile\Desktop

It is worth noting I am running Windows Server 2008 R2, which is a 64-bit operating system. If you are running a 32-bit OS you would instead need to make sure there is a Desktop directory here:

C:\Windows\System32\config\systemprofile\Desktop

I noticed that I needed to be an admin to browse to the path above, but that didn't seem to matter from a code perspective. If this solution doesn't work, I would suggest playing with the permissions on that Desktop directory just to be sure, since that is what I did to start with (and then undid my changes to the permissions). Also, I am running Windows 7 64-bit on the laptop where I am developing, and I did not have any issues when that directory did NOT exist. I don't know why this seems to be particular to the server environment.

Thanks to this discussion that gave me this solution.

Why doesn't my SQL Server Job that uses CmdExec report a failure correctly?

Here's the scenario: I am running SQL Server and I have created a Job that includes, among other steps, a CmdExec step. This CmdExec step calls a C# console application that I wrote. My C# console application writes its errors to the Windows Event Log instead of just throwing an exception. True, I could just throw the exception and it would get logged to the Windows Event Log, but there would be 2 or 3 entries and they would be ugly and generally confusing. I want better control over what gets written to the Windows Event Log, so I have a try-catch at the highest level of my console application so that almost no exception escapes unhandled.

The problem

The problem is that when I catch the exception my program no longer returns a non-zero return code (error code) like it would if an uncaught exception was thrown. I want my program to have a return of 0 only if it was successful, and non-zero (let’s choose 1) if there is an error I caught and wrote to the Windows Event Log. The major reason I need to make sure the return code is correct is because SQL Server’s CmdExec step looks at the return code to determine if the step succeeded or not which it then affects the overall flow of the later steps and also what the status of the Job is.

Thank goodness there is an easy way to solve this problem if you are writing a C# console application.

Here is a very simple but illustrative example of how to change the return code. There is no need to set it for success, since that is the default. We just have to handle the case where we catch the exception and want to set the return code to 1 (indicating an error).

static void Main(string[] args)
{
    try
    {
        // some code here
        throw new Exception("Oh no an error");
    }
    catch (Exception ex)
    {
        // write the error to the Windows Event Log here using your favorite code

        // Change the return code to 1 since there was an error
        System.Environment.ExitCode = 1;
    }
}

Thursday, October 18, 2012

Troubleshooting ‘The client connection security context could not be impersonated. Attaching files require an integrated client login’

I am calling msdb.dbo.sp_send_dbmail from my C# console application, which connects via a SQL Server database user. When I call the stored proc from C# I get the message below.

The client connection security context could not be impersonated. Attaching files require an integrated client login

There are some things you can try:

  • If you want to use a SQL Server Database user you still can, but you will need to give that user sysadmin rights. You can add those permissions using the following command:
  • sp_addsrvrolemember '<Login>', 'sysadmin' 
  • Try connecting as a Windows user instead of a SQL Server database user. In other words, try a domain user.
  • Make sure the file is not too large. The default size is 1,000,000 bytes (nearly 1MB). You can change the max attachment size using the following command:

    USE msdb
    EXEC sysmail_configure_sp @parameter_name='MaxFileSize', @parameter_value=N'1572864'
    GO

    In the example above the new max attachment size is 1.5MB.

Wednesday, October 17, 2012

Troubleshooting ‘profile name is not valid’ error

I got this message when I tried to call the msdb.dbo.sp_send_dbmail stored procedure. I had set up my profile (let's call it Profile1). I even gave my user (let's call it User1) permission to send mail as described here. I then called it from some C# code and got the following message.

profile name is not valid

Here are the steps I used to figure out what my problem was.

  1. Double check that the name of the profile is correct and is being passed correctly to the stored proc using the @profile_name parameter.
  2. The next step is to figure out whether the problem is in my code or on the SQL Server side. The easiest way to do that is to open SQL Server Management Studio and connect to your database server using the exact user/login you were using when you got the error. Then try to send the mail message using the following (tweak as necessary):

    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'Profile1',
        @recipients = 'someone@somewhere.com',
        @subject = 'Test subject',
        @body = '<p>Test Body</p>',
        @body_format = 'HTML'

    If you still get the error, it is something to do with SQL Server; otherwise, it is somewhere in your code that calls the stored proc. If it is in your code, you are on your own. If it is on the server (you still get the error), proceed to the next step.

  3. In SQL Management Studio, open another connection, but this time connect as a user that has more rights (probably yourself). Now execute the following stored proc.

    msdb.dbo.sysmail_help_principalprofile_sp @principal_name = 'User1', @profile_name = 'Profile1'

    If you get no results back, you have a permission problem. This likely means that your profile is NOT public or that the user (User1) doesn't have access to it. You can change the permission by going to the Database Mail Configuration Wizard | Manage profile security radio button | Next button. Here you can click the profile to make it public. Alternatively, go to the Private Profiles tab and select the user and then the profile. Either way should work.

    If you run the stored procedure again, it should show that you now have access to the profile. You can also re-run sp_send_dbmail and it should work this time.

Tuesday, October 2, 2012

Add-ins ribbon missing in Excel 2007

I was amazed that for some reason my installation of Excel 2007 didn’t have the Add-Ins ribbon like everyone else does. I don’t know how this happened, but here is the fix that worked for me.

  1. Open Excel
  2. Click Office button.
  3. Click Excel Options button.
  4. Click the Customize tab on the left.
  5. Choose Commands Not in the Ribbon from the Choose commands from drop down list.
  6. Click the Add-Ins item in list
  7. Click the Add >> button
  8. Click the OK button.

Wednesday, August 22, 2012

Q&A on SharePoint Architect from MetaVis Technologies

Below are answers (in my own words and some of my own commentary) I got from a Q&A meeting with MetaVis Technologies on their SharePoint migration tooling called Architect Suite. They recommend their Architect Suite and the questions are answered based on the assumption that is the tool you will be using. However, depending on your needs some of the individual products in that suite can be purchased instead.

Q: What are the limitations of this product

A: The following are not supported:

· Highly customized look and feels

· Copying of assemblies for web parts that are installed is not supported; this must be done manually BEFORE the content is copied.

· Best to copy solutions to target before copy content.

· Many workflows are not supported, but no clear way to tell.

· Running workflows are not supported.

· Workflow history is not preserved.

· Can only preserve either Workflow Approval Status or Last Modified By, not both.

· Users must be in both active directories, otherwise the current user will be used.

Q: Are OOTB workflows supported

A: It depends. If there are dependencies or customizations, then no. We can try them and see if they work. Copying of workflows is not enabled by default. In most cases they recommend just recreating the workflows.

Q: What happens to workflows that are running?

A: Running workflows are not supported. Workflows MUST be completed before they are copied.

Q: Is Workflow history kept?

A: No, no workflow history is kept.

Q: What is the copying of a workflow actually doing?

A: They are getting the xaml for the workflows and then replacing links in them.

Q: Where should I install the Metavis application?

A: The Metavis application should be installed close to the source or target system.

Q: What kind of performance can I expect from this application?

A: 250MB per hour is the slow end and up to 6GB per hour is the fast end. Multithreading is recommended. This varies greatly based on structure, permissions, location, bandwidth, etc. For example, moving files is fast, but re-creating structure, permissions, etc. takes longer.

Q: What is the recommended approach for migrating

A: Make sure all the users that you want to keep history for are in active directory in the source and target.

Make sure all workflows are completed or don't copy workflows.

Enable claims-based authentication if needed for an Office 365 SharePoint migration.

Use one of the three approaches for copying:

Use the UI provided by Metavis

This has the advantage of requiring no other tooling and is still fairly efficient. Its disadvantage is that you must be present each time new sites need to be selected.

  1. Take advantage of multi-threading.
  2. To do this create the top level sites on the target.
  3. Select a few sites at a time and start the copying.
  4. Repeat step 3 while taking care not to overwhelm any systems. Do this until all sites have been processed.

Script / Schedule the Migration

This has the advantage of being easy to execute again and gives you more control. Perhaps the biggest advantage is that it can run with minimal interaction.

  1. Use the Metavis UI to migrate a site, but instead of executing, just generate the script. This script can be used on the command line, windows scheduler, etc.
  2. Copy the script, change the source and destination to match each site you want to migrate
  3. Execute the scripts in parallel while taking care not to overwhelm any systems. Do this until all sites have been processed.

Single-Threaded

This is the easiest because you just select all the sites and let it navigate the tree one site at a time. The downside is that it will take a lot longer to finish a large number of sites because it is only doing it one site at a time instead of in parallel as the other two approaches.

Test workflows that you may have copied. If they don't work, then re-create them.

Once everything is copied to the target, have users do UAT to make sure everything is working OK. Keep the source system as the production one until UAT has finished. UAT will take some time. Don't expect everything to move over flawlessly; there will be issues in the beginning. Once everything looks good, go back and do an incremental copy from the source to the target system. Now you can decommission the source system.

It is NOT recommended that you promise users that the system will be perfect on a given date. Plan to work with end users in UAT to solve issues. Once they have been solved it will be up to the users to start using the new system. This means that if you have a lot of users and sites you will likely have some users on the source system and some on the target system.

Q: Do site groups get copied over and over again

A: No they only get copied once, but there is a performance penalty the first time they are copied.

Q: There are a lot of settings on the copy screen; should I just check everything?

A: No, the defaults are typically the best choice. Only change them if you know what they do (use the help icon near each option to better understand).

Q: How can I script the migration?

A: Walk through a migration in the UI as if you were going to run it. Before you get to the Finish / Execute step, a button to generate a script will appear. Use that to generate a script you can use as a template for the other sites. You typically just need to change the source and target.

Q: Why aren't alerts copied by default?

A: Alerts are not copied by default, but you can enable them. Be careful though: if you copy them while you are copying the site, the copying of the content will trigger alerts (lots of them). You can go back and copy the alerts after everything has been copied, or you can leave it up to the end users to sign up for alerts again. Beware of copying alerts before you are sure you will not be doing any more incremental copies. Metavis recommends the latter approach.

Q: How do I import data and metadata from other systems such as Lotus Notes, Alfresco, or even the file system?

A: In all cases, Metavis does not write custom connectors to different products. However, most products can be exported to the file system and the metadata put in CSV format. You can then drag and drop from the file system to the Metavis application. The application then allows you to map the metadata (including folder structure) to SharePoint. You can also map users, permissions, etc at this point.

Q: How can I keep history of users that are no longer with the company?

A: The users MUST be in the source and target active directory. If you are migrating to Office 365 SharePoint just be sure that these users are not assigned a license. You can mark them as disabled as well. Then you can map the usernames using the Metavis application.

Q: Can I expect 100% fidelity of my sites after a copy?

A: No, there are too many customizations that can be done to SharePoint to guarantee full fidelity copying. However, lists, calendars, master pages, permissions, users, groups, and most uncustomized functionality is copied without issue.

Q: Is there a way to compare two sites?

A: Yes, the tool provides a visual compare of two sites. Permissions can be ignored as well.

Q: In the trial version it is difficult to get my site to transfer all the files. How can I get the entire site copied?

A: The trial version will only copy the first 5 items in a list. You can go back and copy the ones that were missed. You can also use the compare feature to see any differences. There are typically some hidden lists as well; if you right-click on Lists you can choose to show hidden lists.

Q: Are there any tools to help migrate references in Outlook to SharePoint lists and calendars to the new locations after the migration?

A: Not that I am aware of, but a good place to start is Colligo.

Thursday, August 16, 2012

Getting the Description Column from SQL Server

If you are storing valuable information such as comments in the Description property of a column in your table, you may want to export it to Excel or in general just get to it via T-SQL. It is actually pretty simple to get. Below is a query you can run from the database you want to report on. I have excluded views, non-user tables, default tables, and system diagram tables, but you can change that by commenting out the appropriate statements in the WHERE clause.

SELECT
    [Schema] = ColumnInfo.TABLE_SCHEMA,
    [Table Name] = ColumnInfo.TABLE_NAME,
    [Column Name] = ColumnInfo.COLUMN_NAME,
    [Position In Table] = ColumnInfo.ORDINAL_POSITION,
    [Data Type] = ColumnInfo.DATA_TYPE,
    [Description] = Properties.value
FROM
    INFORMATION_SCHEMA.COLUMNS ColumnInfo
    LEFT OUTER JOIN sys.extended_properties Properties
        ON Properties.major_id = OBJECT_ID(ColumnInfo.TABLE_SCHEMA+'.'+ColumnInfo.TABLE_NAME)
        AND Properties.minor_id = ColumnInfo.ORDINAL_POSITION
        AND Properties.name = 'MS_Description'
WHERE
    -- exclude objects created when SQL Server was installed
    OBJECTPROPERTY(OBJECT_ID(ColumnInfo.TABLE_SCHEMA+'.'+ColumnInfo.TABLE_NAME), 'IsMSShipped') = 0

    -- only get tables (no views)
    AND OBJECTPROPERTY(OBJECT_ID(ColumnInfo.TABLE_SCHEMA+'.'+ColumnInfo.TABLE_NAME), 'IsTable') = 1

    -- only get user tables
    AND OBJECTPROPERTY(OBJECT_ID(ColumnInfo.TABLE_SCHEMA+'.'+ColumnInfo.TABLE_NAME), 'IsUserTable') = 1

    -- exclude the table used for system diagrams
    AND ColumnInfo.TABLE_SCHEMA+'.'+ColumnInfo.TABLE_NAME <> 'dbo.sysdiagrams'

    -- uncomment to limit the results to one table
    --AND ColumnInfo.TABLE_NAME = 'table_name'
ORDER BY
    ColumnInfo.TABLE_NAME, ColumnInfo.ORDINAL_POSITION

I got the basics for this from this post.

Tuesday, August 14, 2012

Creating a more flexible master-detail layout in SharePoint 2010

In an earlier entry I gave instructions on how to easily create a master-detail layout in SharePoint 2010 using SharePoint Designer 2010. One limitation I found is that the initially selected item is always the first item in the topics list. Another is that there is no way to link to the page with anything other than the first topic selected. Getting past those shortcomings requires a little more tweaking. Below are the updated instructions. I have copied the previous entry and modified it.

This assumes you have two Lists. One list is the lookup table (we’ll call it FAQTopics) that you want to group by. The other list is a table (I called it FAQ List) that has the rest of the data you want to display. In this scenario I want my lookup table to show on the left side of the screen, and when I click one of the items in this table I want the list on the right to update and only show the rows that match. In this case, the master is the FAQTopics list and the detail is the FAQ List.
  • Create a new web part page
    1. Open SharePoint Designer 2010
    2. Go to Site Pages
    3. Click Web Part Page from the Pages ribbon and select the desired web part layout. Pick one that has at least a left and a right web part area for our purposes, since we want the list of topics on the left and the FAQs on the right.
    4. Rename the file as desired. I called it FAQ.aspx
    5. Right-click new page and select Edit File in Advanced Mode.
  • Add the FAQTopics to the page
    1. Click on one of the left web part zones
    2. Click the Insert ribbon and choose Empty Data View from the Data View button. This will create a DataFormWebPart. This will work best since it gives us access to the html directly and allows us to easily customize the selected row styles.
    3. Click the Click here to select a data source link in the middle of the DataFormWebPart and select FAQTopics (you may have called it something different). The screen won’t change much, but open the Data Sources Details panel (if it is not already visible). Here you will see a list of columns from the FAQTopics.
    4. Drag over the column you want to display in the FAQTopics list. In my case, I used Title. Please, note, this field should uniquely identify the row.
    5. On the Options ribbon, set any filters or sorting you may need by clicking the Filter and Sort & Group buttons.
    6. You can now test what we have done. Click the Home ribbon and then click the Preview in Browser button. This will open your page up in the browser. It should show a list of rows from your FAQTopics table.
  • Add the FAQ List to the page
    1. Click on one of the right web part zones
    2. Click the Insert ribbon and choose FAQ List from the Data View button. This will create a XsltListViewWebPart. Note, we didn’t create an Empty Data View (like we did for the FAQTopics) and instead selected the FAQ List. This causes SharePoint Designer to use the default XsltListViewWebPart. You could use a DataFormWebPart like we did for the FAQTopics, but you may lose some functionality. For example, if you want to use the Rating stars to allow users to rate the FAQs, they don’t really work when you use the DataFormWebPart and instead you just get the number stored in the database. So, depending on your needs, pick the appropriate web part. The rest of the steps are the same regardless.
    3. You can now test what we have done. Click the Home ribbon and then click the Preview in Browser button. This will open your page up in the browser. It should show a list of rows from your FAQTopics table and also have the list of FAQs on the right side. They are independent of each other right now, but we’ll connect them next.
  • Connect the two lists
    1. Right-click anywhere in the FAQTopics web part on the left and choose Add Connection.
    2. Choose Send Row of Data To (the default).
    3. Click the Next button.
    4. Click Connect to Web Part on this page (the default).
    5. Click the Next button.
    6. Choose FAQ List from the Target Web Part drop down list.
    7. Choose Get Filter Values From from the Target action drop down list.
    8. On the right set of columns (called Inputs to FAQ List) in the list of columns scroll down to the column you want to group by and will match the text in the FAQTopics. In my case the column is called Topic.
    9. Click the column (it shows as <none> initially) to the left of the Topic column. You will notice that <none> turns into a drop down list. Select the matching column from the list. In my case, I selected Title.
    10. Click the Next button.
    11. Select Title from the Create a hyperlink on drop down list.
    12. Check the Indicate current selection using checkbox.
    13. Click the Modify button and select the column that uniquely identifies the row. In my case, this is Title.
    14. Click the Next button.
    15. Click the Finish button.
    16. You can now test what we have done. Click the Home ribbon and then click the Preview in Browser button. This will open your page up in the browser. It should show the two lists like before, except this time the first item in the FAQTopics list (the one on the left) should be bold (the default style for the currently selected row). You can click each row and the list on the right side (FAQ List) should update to show only the rows that match the selected FAQTopic.
  • Add support for query string to Topics DataView
    1. Right-click on one of the DataViews and then choose Manage Connections.
    2. Remove the Connection we added earlier. We needed to add the connection in the first place so that it added support for the currently selected item.
    3. Select the Topic DataView
    4. Click the Parameters button and click the New Parameter button to create a new parameter called something like Topic.
    5. Change the Parameter Source drop down list to Query String.
    6. Enter Topic into the Query String textfield.
    7. For Default value, enter the topic you would like to be selected when the page loads for the first time or when there is no topic specified in the url. For example, if your topics are Topic 1, Topic 2, and Topic 3 and you would like Topic 2 to be selected by default, then enter Topic 2. NOTE: You can change this in the future by selecting the DataView and then Parameters again.
    8. Make sure you have SharePoint Designer 2010 set to Split view so that you can see code and the Designer view at the same time.
    9. Click on one of the rows in your DataView. This will show something like this:
      <td class="ms-vb">
           <xsl:value-of select="@Title"/>
      </td>
    10. Replace or modify the snippet above to look like the following:
      <td class="ms-vb" nowrap="nowrap" style="height: 23px">
          <a target="_self">
          <xsl:attribute name="href">
              <xsl:text>?Topic=</xsl:text>
              <xsl:value-of select="ddwrt:UrlEncode(string(@Title))"/>
          </xsl:attribute>
          <xsl:attribute name="style">
              <xsl:if test="$CurrentRowKey = $Topic">font-weight: bold;</xsl:if>
          </xsl:attribute>
          <xsl:value-of select="@Title" />
          </a>
      </td>
      NOTE: If you are showing a field different than Title you will need to change @Title to the name of your field. Also, if you named your parameter something other than Topic, you will need to change $Topic to the name you used. If you typed something else in the Query String textfield you will need to change ?Topic to that as well. Each of these replacements depends on the values you entered in previous steps.

  • Add support for query string to FAQ DataView
    1. Select the FAQ DataView
    2. Click the Filter button on the Data View Tools | Options ribbon.
    3. Select Topic from the Field Name drop down list.
    4. Select Create a New Parameter… from the drop down list.
    5. Change the Name of the parameter to Topic.
    6. Change the Parameter Source drop down list to Query String.
    7. Enter Topic into the Query String textfield.
    8. For Default value, enter the topic you would like to be selected when the page loads for the first time or there is no topic specified in the url. For example, if you topics are Topic 1, Topic 2, Topic 3 and you would like Topic 2 to be selected by default then enter Topic 2. NOTE: You can change this in the future by selecting the DataView and then Parameters again.
  • Tweak the UI (optional)
    1. This step is totally optional if you are happy with the default styles, etc of the topics on the left.
    2. Do a search for bold in FAQ.aspx. You will find a tag like this:

      <xsl:attribute name="style">
           <xsl:if test="$CurrentRowKey = $Topic">font-weight: bold;</xsl:if>
      </xsl:attribute>
      This adds style="font-weight: bold;" to the link tag. You can modify this, or if you prefer, you can add a CSS class reference by adding the following lines after the lines for the style attribute.

      <xsl:attribute name="class">
           <xsl:if test="$CurrentRowKey = $Topic">selected</xsl:if>
      </xsl:attribute>

      In this case, if the row is selected, the link tag gets class="selected" added to it. Then in your stylesheet you create a .selected style. The name (selected) is not important, but it does need to match what you have in your stylesheet.

Thursday, August 9, 2012

Get all SharePoint sites (webs) recursively in PowerShell

SharePoint has always had confusion between sites and webs. What we are really talking about here is, for a given site (not a site collection, but really a web) such as a team site, getting all the sites below it. An easy way to do this in SharePoint 2010 is to use PowerShell.

  1. Open SharePoint 2010 Management Shell (As Administrator).
  2. Paste the snippet of code below into the prompt. Note, the first line can be uncommented if you are just using a PowerShell prompt that doesn’t have the SharePoint PowerShell commands added already.

 

#Add-PSSnapin "Microsoft.SharePoint.PowerShell"
$url = "http://mysharepoint/SiteDirectory/site1"
$site = Get-SPWeb ($url)
foreach ($web in $site.Site.AllWebs)
{
    if ($web.Url.StartsWith($url))
    {
        Write-Host ($web.Name + "|" + $web.Url)
    }
}

In my example I am writing to the console the name of the site, the pipe character as a delimiter, and then the url.

Thursday, August 2, 2012

Programmatically finding and using a person using Client Side Object Model

I have a custom list and one of the fields is a Person column called Answered By. In the SharePoint UI the user can type in the person's username (same as email if you are using SharePoint Online) or use the Person Picker to find the person. In the code below we want to find them by username.

using (ClientContext clientContext = ClaimClientContext.GetAuthenticatedContext("https://myco.sharepoint.com/site1/site2"))
{
    var user = clientContext.Web.EnsureUser("me@myco.com");
}

Once you have the person you can then use it when setting the value for that field programmatically. In this example I am creating a new list item and setting the Answered By field using the person object that I find using the EnsureUser() method.

using (ClientContext clientContext = ClaimClientContext.GetAuthenticatedContext("https://myco.sharepoint.com/site1/site2"))
{
    var destList = clientContext.Web.Lists.GetByTitle("FAQs");
    var itemCreateInfo = new ListItemCreationInformation();
    var newItem = destList.AddItem(itemCreateInfo);
    newItem["Title"] = "Some title here";
    var user = clientContext.Web.EnsureUser("me@myco.com");
    newItem["Answered_x0020_By"] = user;
    newItem.Update();
    // actually create the record in the destination list
    clientContext.ExecuteQuery();
}

Take note, I could have also used the user's integer ID instead of the user object, but I think this saves a trip to the database and is cleaner. I also found it interesting that spaces in SharePoint column names are replaced by a notation that uses the hex value for a space (20), i.e. _x0020_.
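That _x0020_ naming rule can be illustrated with a tiny helper. This is a sketch only: SharePoint's real internal-name encoding covers other special characters too (each gets the same _xNNNN_ hex treatment) and truncates long names, but for plain spaces it is just this substitution.

```shell
# Encode spaces in a display name the way SharePoint internal names do:
# a space is hex 20, so it becomes _x0020_. Illustrative only; other
# special characters get the same _xNNNN_ treatment in real internal names.
to_internal_name() {
  printf '%s' "$1" | sed 's/ /_x0020_/g'
}

to_internal_name "Answered By"   # -> Answered_x0020_By
```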

To learn more about CRUD operations and other useful topics, see this intro.

Monday, July 30, 2012

Hiding View All Site Content menu item in SharePoint Online

This entry assumes you are using the new SharePoint Online interface that has the Site Actions menu on the top LEFT of the screen, with a View All Site Content menu item on that menu. The goal is to hide the View All Site Content menu item from non-admin users. Please note, this does not stop people that already have a link to the page. Keep in mind that mobile users have access to the page also, so you won’t be stopping them either. This change simply hides the menu item on the Site Actions menu. You should be able to use this same technique to hide the link on the Quick Launch as well; the easiest way is to click on it in the Design view in SharePoint Designer. If you do a search in the master page for viewlsts.aspx you will see that there are five references to the url for the All Site Content page, so you may want to consider changing those as well if you want to get rid of those entry points.
  1. You will need to edit (which will get you a copy) the v4.master master page that your site uses. The easiest way is to use SharePoint Designer 2010 (Free from Microsoft).
  2. Do a search in your master page for MenuItem_ViewAllSiteContents. There should only be one match.
  3. Locate the PermissionsString="ViewFormPages" attribute and change it to PermissionsString="ManagePermissions". ManagePermissions should be sufficient to limit this menu item to admins, but you may also want to consider other permission levels. Since this isn’t really a security measure, and we are just trying to shield the average user from getting confused, my limited testing of who sees the menu item was sufficient. You may want to do more testing.
  4. Don’t forget to check-in and publish your changes (assuming it is enabled) so that others can see the changes. Otherwise, you will be the only one that can see the changes.

Friday, July 27, 2012

Best tool for CAML query generation

After lots of looking around, I finally found a CAML query generation tool that supports SharePoint Online authentication and the Client Side Object Model api. It also works with SharePoint 2010. It is well designed, easy to use, and very powerful. At the click of a button it gives you the applicable Server Object Model code, Client Side Object Model code, or even Web Service code. Unfortunately, the generated code does not work with SharePoint Online, but that is because it uses different authentication. However, take a look at my post on authenticating against SharePoint Online when using the Client Side Object Model.

Thank you to Karine Bosch for writing the CAML Designer and making it freely available AND giving good documentation on it.

Click here to learn more and download.

LINQ to SharePoint

Why use LINQ instead of CAML

  • Intellisense
  • Compile-time type checking
  • You don’t have to know CAML which is not as well documented.
  • If you know LINQ to ??? you already have a good feel for the syntax.

Supported Environments

From what I can tell (I found conflicting information on this and I could not get it to work), you can’t use SPMetal (which is required for LINQ to SharePoint) against SharePoint Online. However, if you are using SharePoint 2010 (on premise) it does generate OK, so I expect LINQ to SharePoint to work as expected there.

Learn more about LINQ to SharePoint

  1. Introducing Linq to SharePoint Video – Good place to start.
  2. LINQ Syntax Video – Learn the basics of the syntax for general LINQ (if you don’t know it already)
  3. Accessing SharePoint Data using LINQ to SharePoint

Basics

  1. Create Entity Classes using SPMetal
  2. Add reference to Microsoft.SharePoint.Linq.dll to your project
  3. Create DataContext
  4. Write Queries

Thursday, July 26, 2012

Getting Started with SharePoint Online Development

Authenticating against SharePoint Online

The first place I would start is here to learn how to authenticate against SharePoint Online (O365). This will give you a good understanding of what libraries are available to do the claims-based authentication SharePoint Online uses. It also explains it in an easy to understand manner. Remote Authentication in SharePoint Online Using Claims-Based Authentication – This is an article, but really what you want is the code / solution you can use to connect to your SharePoint Online site and get its title. Here is the direct link to download the solution for Visual Studio. You can also check out this library instead. They both do similar authentication. These libraries are great because they make authenticating against SharePoint Online, which is a MAJOR pain in the rear, much easier.

If you are doing your development on a machine that does not have SharePoint 2010 installed you will need the SharePoint Foundation 2010 Client Object Model Redistributable. This gives you among other things the Microsoft.SharePoint.Client.dll and Microsoft.SharePoint.Client.Runtime.dll assemblies which you will need. If you have an installation of SharePoint 2010 on another machine you can copy these files from the c:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\ISAPI directory. If you have trouble, you may want to look here first for a code tweak that brings up the login screen. I did not find it necessary to do the tweak, but some people do.

As it turns out, SharePoint Online requires a bit more work to get authentication working than the standard examples for SharePoint 2010 show. This is because SharePoint Online uses Claims-Based Authentication. For whatever reason it is not implemented, and certainly is not transparent, in the SharePoint Client Object Model. What I recommend is downloading the solution from above if you have not already done so, since it has Claims-Based Authentication implemented for you. You should still be able to use most samples for SharePoint 2010 regarding the CSOM (SharePoint Client Object Model), but you will need to do two things (related to authentication) to make them work.

  1. Add a reference to the ClaimsAuth (that is from the solution above).
  2. In the code replace any calls to new ClientContext(url) with ClaimClientContext.GetAuthenticatedContext(url);

Once you are authenticated, the api should be the same.

Technology Overview

The SharePoint Foundation client object model provides an API to interact with SharePoint (both SharePoint 2010 and SharePoint Online) data remotely from script that executes in the browser, from Silverlight, or from .NET applications. The API includes a subset of the features available in the SharePoint server object model (Microsoft.SharePoint.dll) that you would use if you were running your code on the SharePoint server itself (which is not an option if you are using Microsoft SharePoint Online). Each language uses the same API to make switching between them easier. Obviously, the syntax is different for each language.

Microsoft decided to create the client object model instead of constantly trying to extend the web services. The reason is that the client object model is very similar to the server object model that was already available on the SharePoint server itself. This makes providing new features much easier, and it is also easier for developers since there is one object model for both client and server (in many cases).

You can do most things you can do with the server object model. The following areas are supported by the client object model:

  • Site Collections and Sites
  • Lists, List Items, Views, and List Schemas
  • Files and Folders
  • Web, List, and List Item Property Bags
  • Web Parts
  • Security
  • Content Types
  • Site Templates and Site Collection Operations

This has some advantages over the web services that Microsoft used to require developers to use. The biggest advantage may be that it does not have the overhead associated with web services (xml bloat).

A good resource is the Client Object Model Resource Center.

A great video to get more details.

IMPORTANT: It only works with .NET 3.5 right now. It does NOT work with a .NET 4.0 project.

Do your first project.

  1. Create a new .NET command-line application (or another type if you prefer) targeting .NET 3.5 (4.0 is the default and will not work as of 2012-07-26)
  2. Make sure you compiled the ClaimsAuth project that is located in the Remote Authentication in SharePoint Online Using the Client Object Model solution and reference the ClaimsAuth.dll, or just add that project to your project if you prefer to have the source code.
  3. Add a reference to the Microsoft.SharePoint.Client.dll and Microsoft.SharePoint.Client.Runtime.dll to your project as well. Remember, you can get that from SharePoint Foundation 2010 Client Object Model Redistributable installation or from c:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\ISAPI on an existing SharePoint 2010 installation.
  4. Add [STAThread] attribute to the Main method.
  5. Make your Main method look like this (you’ll need to change the url)
    [STAThread]
    static void Main(string[] args)
    {
        using (ClientContext clientContext = ClaimClientContext.GetAuthenticatedContext("https://yoursite.sharepoint.com/teams/site1/site2/etc"))
        {
            Web site = clientContext.Web;
            clientContext.Load(site); // tell model we want that data
            clientContext.ExecuteQuery(); // actually go to SharePoint and get data
            Console.WriteLine("Title: {0}", site.Title);
        }
    }
  6. The most important piece that differs from what you would do in SharePoint 2010 is the ClaimClientContext.GetAuthenticatedContext call. When you run it you will see a browser window pop up for authentication and then the program will continue. That is all there is to it.

Where to go from here

Using the SharePoint Foundation 2010 Managed Client Object Model – good tutorial that covers lots of the CRUD type operations you may need to do.

You may prefer to use LINQ instead of CAML to do your queries. This may be a good place to start.

Wednesday, July 25, 2012

Creating a simplified Master–Detail functionality using SharePoint Designer 2010

This assumes you have two Lists. One list is the lookup table (we’ll call it FAQTopics) that you want to group by. The other list is a table (I called it FAQ List) that has the rest of the data you want to display. In this scenario I want my lookup table to show on the left side of the screen, and when I click one of the items in this table I want the list on the right to update and only show the rows that match. In this case, the master is the FAQTopics list and the detail is the FAQ List.

If you need to be able to change the item that is selected by default or have a link to the page with a certain topic selected, I suggest you try an enhanced version of this.

  • Create a new web part page
    1. Open SharePoint Designer 2010
    2. Go to Site Pages
    3. Click Web Part Page from the Pages ribbon and select the desired web part layout. Pick one that has at least a left and a right web part area for our purposes, since we want the list of topics on the left and the FAQs on the right.
    4. Rename the file as desired. I called it FAQ.aspx
    5. Right-click new page and select Edit File in Advanced Mode.
  • Add the FAQTopics to the page
    1. Click on one of the left web part zones
    2. Click the Insert ribbon and choose Empty Data View from the Data View button. This will create a DataFormWebPart. This will work best since it gives us access to the html directly and allows us to easily customize the selected row styles.
    3. Click the Click here to select a data source link in the middle of the DataFormWebPart and select FAQTopics (you may have called it something different). The screen won’t change much, but open the Data Sources Details panel (if it is not already visible). Here you will see a list of columns from the FAQTopics.
    4. Drag over the column you want to display in the FAQTopics list. In my case, I used Title. Please, note, this field should uniquely identify the row.
    5. On the Options ribbon, set any filters or sorting you may need by clicking the Filter and Sort & Group buttons.
    6. You can now test what we have done. Click the Home ribbon and then click the Preview in Browser button. This will open your page up in the browser. It should show a list of rows from your FAQTopics table.
  • Add the FAQ List to the page
    1. Click on one of the right web part zones
    2. Click the Insert ribbon and choose FAQ List from the Data View button. This will create a XsltListViewWebPart. Note, we didn’t create an Empty Data View (like we did for the FAQTopics) and instead selected the FAQ List. This causes SharePoint Designer to use the default XsltListViewWebPart. You could use a DataFormWebPart like we did for the FAQTopics, but you may lose some functionality. For example, if you want to use the Rating stars to allow users to rate the FAQs, they don’t really work when you use the DataFormWebPart and instead you just get the number stored in the database. So, depending on your needs, pick the appropriate web part. The rest of the steps are the same regardless.
    3. You can now test what we have done. Click the Home ribbon and then click the Preview in Browser button. This will open your page up in the browser. It should show a list of rows from your FAQTopics table and also have the list of FAQs on the right side. They are independent of each other right now, but we’ll connect them next.
  • Connect the two lists
    1. Right-click anywhere in the FAQTopics web part on the left and choose Add Connection.
    2. Choose Send Row of Data To (the default).
    3. Click the Next button.
    4. Click Connect to Web Part on this page (the default).
    5. Click the Next button.
    6. Choose FAQ List from the Target Web Part drop down list.
    7. Choose Get Filter Values From from the Target action drop down list.
    8. On the right set of columns (called Inputs to FAQ List) in the list of columns scroll down to the column you want to group by and will match the text in the FAQTopics. In my case the column is called Topic.
    9. Click the column (it shows as <none> initially) to the left of the Topic column. You will notice that <none> turns into a drop down list. Select the matching column from the list. In my case, I selected Title.
    10. Click the Next button.
    11. Select Title from the Create a hyperlink on drop down list.
    12. Check the Indicate current selection using checkbox.
    13. Click the Modify button and select the column that uniquely identifies the row. In my case, this is Title.
    14. Click the Next button.
    15. Click the Finish button.
    16. You can now test what we have done. Click the Home ribbon and then click the Preview in Browser button. This will open your page up in the browser. It should show the two lists like before, except this time the first item in the FAQTopics list (the one on the left) should be bold (the default style for the currently selected row). You can click each row and the list on the right side (FAQ List) should update to show only the rows that match the selected FAQTopic.
  • Tweak the UI (optional)
    1. This step is totally optional if you are happy with the default styles, etc of the topics on the left.
    2. Do a search for bold in FAQ4.aspx. You will find a tag like this:

      <xsl:attribute name="style">
           <xsl:if test="$CurrentRowKey = $dvt_curselkey">font-weight: bold;</xsl:if>
      </xsl:attribute>

      This adds style="font-weight: bold;" to the link tag. You can modify this or, if you prefer, you can add a css class reference by adding the following lines after the lines for the style attribute.

      <xsl:attribute name="class">
           <xsl:if test="$CurrentRowKey = $dvt_curselkey">selected</xsl:if>
      </xsl:attribute>

      In this case, if the row is selected, the link tag will have class="selected" added to it. Then in your stylesheet you create a .selected style. The name (selected) is not important, but it does need to match what you have in your stylesheet.
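
      If you go the css route, a minimal stylesheet rule could look like the sketch below. The class name (selected) matches the XSLT snippet above; the font weight and color here are just placeholder choices you would adapt to your site:

      /* Style for the currently selected FAQ topic link */
      .selected {
           font-weight: bold;
           color: #990000; /* placeholder accent color */
      }

      Any link the XSLT renders with class="selected" will then pick up this rule, and you can restyle the selected topic without touching the page again in SharePoint Designer.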

Monday, July 23, 2012

Hiding column headers on SharePoint lists

I found that sometimes I don’t want to see the column headers on a SharePoint list. I ran into such an instance when I was putting together a FAQ, which was implemented as a Custom List in SharePoint 2010. I created a view that grouped my FAQs by the Topic field. SharePoint gives a fairly nice view, but in this case I think the column headers are a bit confusing to the end users.

[Image: grouped FAQ list with column headers visible]

It makes even more sense if you don’t have tabular data for each row, as in my case.

[Image: FAQ list rows without tabular data, headers still shown]

The goal here is to remove the column headers (denoted by the red arrows) so that the page looks something more like this:

[Image: the same list with column headers removed]

To do this you will need SharePoint Designer 2010 (freely available from Microsoft). Once you have SharePoint Designer 2010 installed (you may need to restart before it will open the page properly), follow the steps below.

  1. Add the list to a page as a web part. In this case, my list is called FAQ List, so I added a web part of type FAQ List to the page.
  2. Go to Site Actions | Edit In SharePoint Designer. The page will load in SharePoint Designer 2010.
  3. Click one of column headers. You will notice that the Table ribbon is now available.
  4. Click the Select button | Select Table button.
  5. On the same ribbon, there is a Shading button. Click it and choose a color (for example red). This will change the background of the headers to red. This works because the headers are actually in a nested table; by changing the background of that table, we force the html for it to be generated so that we can edit it.
  6. Find the style="background-color: #FF0000" attribute (assuming you chose red).
  7. Change the style attribute to be style="display:none;". The markup will still be sent to the browser, but it will not be rendered. Notice the background-color is not set anymore, but that doesn’t hurt anything because the table is not displayed and no longer takes up any height or width on the page either. It is effectively gone from the user’s view now.
  8. Save your changes in SharePoint Designer 2010 and refresh the page in the browser to verify that your goal was accomplished.

What not to do

The first time I tried to do this, I went into SharePoint Designer 2010, clicked the column headers, and just started hitting the delete key. As expected, the column headers disappeared because I deleted them. The problem with this is that my Ratings control no longer allowed the user to click the stars to rate each of the FAQs. It would either fail silently or give a JavaScript error, depending on whether script errors are shown in the browser. I assume this has something to do with a javascript file that didn’t get included, but I wasn’t going to track it down. Instead, I thought: why not just trick SharePoint into loading the proper files and then hide the column headers on the browser side, since it really is just a cosmetic change that I need. As far as I can tell, the solution outlined here works much better and is quite easy to undo if you want to.

Friday, July 13, 2012

A backup strategy for MS SQL Server

In this post I want to talk about the basics of a backup plan for SQL Server databases. The question is often, "What do I need to keep, and for how long, when I back up my SQL Server databases?"

The long answer is that it depends on how large the database is, what type of database it is, how large the maintenance windows are, etc. However, for most databases you will want to minimize your risk of losing data. The bottom line is that you need to know how the different types of backups in SQL Server work. I'll try to explain this.

First, some general thoughts. You should always back up to a disk that is different from the one holding the thing (database, log file, etc.) that you are backing up. A backup to tape or another backup of the file system is advisable as well; this way, in the event of a fire, natural disaster, etc. you will have at least something to work with. Please note that you cannot simply copy the actual database file while SQL Server is using it; you have to use SQL Server's backup functionality.

The most important thing you can do is a periodic full backup of your database. It is best to back up your system databases nightly since they are very quick; this will preserve your jobs, etc. You should back up your user databases during a time of low usage because this can take a while. Typically you would do a full backup once a week. If that is the only thing you do, you will lose up to a week of data if your database is lost to corruption, accidental deletion, hardware failure, etc. In most cases this is not good enough, so we need to reduce the window of risk.

To reduce the window of risk, we could do a full backup every day, but typically there is not a maintenance window that allows the time or resources to do this. So we need a faster way to back up. This can be done with a differential backup in SQL Server, typically one each day of the week. Each differential backup contains all the changes to the database since the last FULL backup. This also means that if you are doing your full backups weekly, the differential taken the day before your full backup will potentially be quite large: because each differential includes everything since the full backup (and thus everything the previous differentials covered), it keeps getting larger each day.

With the weekly full backup and the daily differential backups running, we still have at most a day of data we could lose. We also have the problem that our log file will grow indefinitely (assuming the database is in full or bulk-logged recovery mode). One solution is to periodically back up and truncate the log file manually after a full backup, but this is not optimal if you have many databases. The highly recommended method for solving that problem AND decreasing your window of risk is to do hourly log file backups. Each log backup saves only the changes since the last log backup, which is one hour of changes in this case. A log backup is extremely fast in most cases, and the more often you run it the faster it is (for the most part). So at the end of the day you will have 24 log file backups, and your log file will be truncated as well (pending a bunch of conditions that are outside the scope of this entry).

Now that we have a weekly full backup, a daily differential backup, and an hourly log backup, our window of risk has been reduced to 1 hour max. This is great, but we still don't know what we need to keep, and for how long, in order to be able to restore our database. The answer is pretty simple if you remember the following:

  1. FULL BACKUPS: You need to keep a full backup until you have made another full backup AND verified it.
  2. DIFFERENTIAL BACKUPS: You only need to keep the latest differential backup, and it can be deleted when you do a full backup. You don't need differential backup files at all if you have log backups covering the time from the full backup to the time the differential backup was last done. This means you never really need to do a differential backup if you keep all your log backups between each full backup. This is actually recommended by some DBAs because it really simplifies the backup strategy.
  3. LOG FILE BACKUPS: You need to keep log backups for the time period that is not covered by a differential or full backup. For instance, you only need the log backups that occurred since the last differential or full backup. This means you need to keep at least 24 hours of log backups if you do a differential backup every 24 hours.
So, as you can see there is no definite answer, but it is easy enough to figure out given your backup strategy.
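
As a sketch, the three backup types map to plain T-SQL like the following. The database name BrentDb and the file paths are placeholders; a real setup would run these from scheduled jobs and use unique, dated file names rather than overwriting one file:

-- Weekly full backup of the whole database
BACKUP DATABASE BrentDb
TO DISK = 'E:\Backups\BrentDb_full.bak';

-- Daily differential: everything changed since the last full backup
BACKUP DATABASE BrentDb
TO DISK = 'E:\Backups\BrentDb_diff.bak'
WITH DIFFERENTIAL;

-- Hourly log backup: changes since the last log backup; also lets the log truncate
BACKUP LOG BrentDb
TO DISK = 'E:\Backups\BrentDb_log.trn';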

One thing that was not obvious the first time I scheduled backups is that it is probably best to do a differential backup EVERY day, including the day you do the full backup, or else keep more than 24 hours of log backups. Why? Consider: you do a differential backup every day except Sunday because that is when you do your full backup. Full backups take longer, so I usually schedule them earlier in the day than the differentials. This difference in time creates a period where we are not protected and unable to restore using our log backups. For example, say your full backup on Sunday starts at 10am and your next differential backup is on Monday at 6pm. There are more than 24 hours between the two backups, but we are only keeping 24 hours of log backups. This means the log backups between Sunday 10am and Monday 6pm will be deleted, and since those are missing, the rest of the log backups are not very useful either. Between Sunday 10am and Monday 6pm we are no longer getting the benefit of the log backups; our window of risk is back to a full day, and on a day when we are most likely doing extra maintenance. This is not a good scenario. The simplest solution seems to be to do the differential backup on Sunday as well (after the full backup that morning). That backup will be fast and small and we will be safe. The other option would be to keep 32 hours of log backups to cover that period. The downside is that we would keep an extra 8 hours of log files throughout the week, or make a special change just for that day of the week, and that is just messy.

I hope this helps. If you are looking for a backup implementation that is both quick and easy to use, I recommend reading my other entry on Implementing a backup strategy in MS SQL Server.

Implementing a backup strategy on MS SQL Server

I have implemented a new backup strategy on a server using the SQL Server Maintenance Solution by Ola Hallengren. It is very flexible and very robust. There are lots of advantages to using it instead of the built-in maintenance plans.

Some of the reasons are:
  • Easy deployment to multiple servers using sql scripts.
  • Control over the names of the sql jobs.
  • Wild card inclusion/exclusion of databases.
  • Handles the case where a differential or log backup is requested before a full backup has been taken (for example, when a database has just been created).
  • Totally customizable.
  • Very robust options.
  • Smart index maintenance that only rebuilds or reorganizes indexes that need it.
  • Better clean up of old files.
  • All details of every command executed are stored in a SQL table and a text file. Job history also has job-level details.
  • Easy to diagnose issues due to detailed logging of commands.
  • If one thing in a job fails, the other commands in the job are still executed. This means if one database can’t be backed up, the rest still will be.
  • Used by tons of corporations around the world.
  • Works with multiple versions of SQL Server.
  • Has received a Gold Community Choice award and a Silver Editor’s Best Choice award (both in 2011).
  • Handles large databases well.

The thing I like most is the ease of deployment. Since it is based on stored procs and sql jobs, all that needs to be done is to run a sql file. The default configuration is nearly perfect; for the most part, the jobs just need to be scheduled.

There is a good video (about an hour) that reviews it in detail.

It is an open source project with an open-ended license. The source and documentation can be found at: http://ola.hallengren.com
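
For example, once the scripts are installed, a full backup of all user databases comes down to a single stored procedure call like the sketch below. The directory is a placeholder, and you should check the documentation on the site above for the full parameter list:

-- Full backup of every user database, verified, keeping a week of files
EXECUTE dbo.DatabaseBackup
    @Databases = 'USER_DATABASES',
    @Directory = 'E:\Backups',
    @BackupType = 'FULL',
    @Verify = 'Y',
    @CleanupTime = 168;

Here @CleanupTime = 168 (hours) deletes backup files older than a week, which lines up nicely with a weekly full backup schedule.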


Monday, July 2, 2012

Delete all alerts in SharePoint

Whenever I set up a development copy of SharePoint and want to do some testing on a content database I copied from production, I don’t want the alerts still enabled. I have experimented with enabling and disabling them via script, but it never seemed to work quite how I expected. Then I got the idea to just delete them, since this is a development copy of the content database anyway.

The PowerShell script below MUST be run as Administrator on one of the front end servers for SharePoint 2010. If you don’t Run As Administrator, you will get an error that you don’t have permissions to access the farm.

The script is conceptually simple. It goes to the SharePoint url you give it, opens the web application, and gets all of its site collections. For each site collection it gets all the webs (this is the same as recursively traversing the SharePoint tree of webs, but wrapped up nicely in the AllWebs property). For each web, we collect the IDs of the alerts we want to delete and then delete each one. If a delete fails because the alert's user no longer exists, we put a valid user in its place and try again. This script should only take a short time to run (seconds or a few minutes, not hours).

Add-PSSnapin "Microsoft.SharePoint.PowerShell"

$numWebs = 0
$numDeleted = 0;

$sp = Get-SPWebApplication "http://mySharePoint"
foreach ($site in $sp.Sites)
{
    foreach ($web in $site.AllWebs)
    {
        $numWebs++;
       
        # Collect the alert IDs first; deleting while enumerating the collection
        # would modify it mid-loop.
        $alertIds = @();
        foreach ($alert in $web.Alerts)
        {
            $alertIds += $alert.ID;
        }

        foreach ($alertId in $alertIds)
        {
            try
            {
                $web.Alerts.Delete($alertId);
            }
            catch [system.exception]
            {
                # The alert's user no longer exists. Perhaps not migrated properly when
                # changing domains, etc. In any case, we just need to update the alert
                # so that it points to a valid user, then we can delete it.
                if ($_.Exception.Message.Equals("User cannot be found."))
                {
                    $alertToUpdate = $web.Alerts[$alertId];
                    $alertToUpdate.User = $web.SiteUsers[0];
                    $alertToUpdate.Update();

                    # Try to delete again, but this time we have a valid user.
                    $web.Alerts.Delete($alertId);
                }
            }
            $numDeleted++;
        }
    }
}

Write-Host "Num of Webs checked: $numWebs"
Write-Host "Num of Alerts deleted: $numDeleted"

To delete all alerts for a particular person, you may want to try this script. It was also the starting point for my script here.