Wednesday, June 24, 2009

How to calculate dot pitch

Dot pitch is the measure of the distance between adjoining pixels on a color computer monitor (including LCD monitors), expressed in millimeters (mm). The lower the dot pitch, the sharper the displayed image. Typically, color monitors come with a dot pitch of 0.3 mm or lower.

Dot pitch is related to resolution because resolution is a measure of the number of pixels horizontally and vertically. Given this relationship, we can calculate the dot pitch of any monitor if we know the horizontal resolution, the vertical resolution, and the diagonal distance. Technically, if we knew the vertical and horizontal measurements in inches, for example, we could do the same thing and get a more accurate measure of dot pitch. However, all marketing and technical specs use the horizontal dot pitch.

With a little math and conversion of units you can easily calculate the dot pitch yourself.

Let’s take the example of a monitor that is 1280 pixels horizontally by 1024 pixels vertically and has a viewable display of 20 inches (make sure you use the viewable measurement, not the advertised one, since they sometimes differ slightly).

Below is a visual of the information we know. You’ll notice that we have a difference in units here: we have pixels and inches. We need to do some calculations to figure out what the 20 inches is in pixels.


To calculate the diagonal in pixels we need to go back to our grade school math class. We can think of this as a math problem where we have a right triangle and are solving for the hypotenuse (the diagonal). Below is a visual of the problem.


We need to solve for d. To do that we use the formula

d*d = x*x + y*y

We need to solve for d which means

d = square root (x*x + y*y)

For our example:  d = square root (1280 * 1280 + 1024 * 1024)

d = 1639.2 pixels

Now we know the diagonal in both inches and pixels. This means we can calculate the ratio between them, which is known as pixels per inch (ppi). This can be calculated very simply by dividing pixels by inches.

ppi = pixels / inches

In our example: ppi = 1639.2 / 20 = 81.96 pixels per inch.

This is a measure related to dot pitch, but it is not really what marketing and technical specs will use. They use millimeters, since millimeters are a smaller unit of measure and more appropriate for something as small as a pixel.

To make our calculation compatible with marketing and technical specs we need to convert our answer to millimeters per pixel (dot pitch). It is a known conversion factor that 1 inch is equal to 25.4 mm. Here is the formula to convert our answer from pixels per inch to millimeters per pixel.

dot pitch = 25.4 / ppi

For our example: dot pitch = 25.4 / 81.96 = 0.31 mm per pixel (which is commonly known as dot pitch)

If you don’t want to do the math yourself, you can just use the JavaScript function below.

function calcDotPitch(hPixels, vPixels, diagonal) {
    // diagonal in pixels (Pythagorean theorem)
    var d = Math.sqrt(hPixels * hPixels + vPixels * vPixels);
    // pixels per inch
    var ppi = d / diagonal;
    // convert to millimeters per pixel
    var dotPitch = 25.4 / ppi;
    return dotPitch;
}

alert(calcDotPitch(1280, 1024, 20));

Tuesday, June 23, 2009

Using Beyond Compare for backups / mirroring

I love Beyond Compare. It is so useful. It allows you to compare contents of files in a very nice way. It can also compare entire directories. It can even compare and copy binary files.

In many ways, I prefer it to the built in Windows copy functionality because if there is an error in Windows copy, you have to try again and I never really know the state of the destination folder. When I use Beyond Compare, if there is an error all I have to do is look at the differences or just simply try again. I also get that warm-fuzzy when I see that there is no difference between the source and destination directories.

I also like to have the same warm-fuzzy when it comes to “backups”. I don’t trust backup solutions in general. It isn’t that they aren’t good, it is that I don’t trust that I am configuring them right. What I want is to be able to see the files in the destination drive after a backup. Then, if I can use a tool like Beyond Compare to verify the source and destination directories are the same, I feel confident that what I think I backed up is actually backed up.

One important point with this model is that I am not really talking about a true backup in the sense that most backup software provides. Most backup software saves different versions of a given file for a certain period of time. While what I am talking about here could be modified to do so, it would be a substantially different solution. What I am talking about is really a way to snapshot my important data to another disk in the event that my disk fails. I am not worried about me changing a file and needing a previous version of a file. I use manual copies or version control for this type of stuff. I don’t need this for most files, thus I only need the most recent snapshot of my important data.

You can actually use Beyond Compare to do what I just described. However, I want an automated way of doing this. I also want to be notified of any errors. There is nothing worse than thinking your backup is running, and then when you need it most, you realize it was broken. So, always periodically check that your backup process is working.

Below are the scripts I have created to automate the process. The only software needed to do this is Beyond Compare, Microsoft LogParser, and MS Windows XP (or greater). LogParser is only needed if you want to be shown any errors that occur; if you like checking log files yourself, then you don’t need LogParser.

The scripts below will need to be tweaked to match your environment. My environment is such that I have an external hard drive that I back up to. You can also use UNC paths if you want to back up to a Windows share.

You can download a zip file of the same basic thing, or you can follow the instructions below.


Assume E:\ is the volume that you are backing up to.

  1. Create E:\backupScript directory
  2. Create E:\backupScript\Logs directory
  3. Create E:\backup directory
  4. Create E:\backupScript\backupAll.bat text file.
  5. Create E:\backupScript\backupDirectory.bc text file.
  6. Create E:\backupScript\backupOne.bat text file.
  7. Create E:\backupScript\CheckForErrors.bat text file.
  8. Create E:\backupScript\CheckForErrors.sql text file.
  9. In Beyond Compare, create a Saved Session for each of the directories you want to backup. This would be the same thing you would use if you were to compare two directories, just that you save the configuration information. The name you give it will be used in the backupAll.bat file.
  10. Copy the content from each of the sections below into the appropriate text file you created.


This is the main file (backupAll.bat) that you will run to do the backup. You can use Windows Scheduled Tasks to run it at a regular interval to automate the backup process. You need to modify this file to have a line for each of the Saved Sessions you created in the initial setup. If you are backing up a SQL Server database directory, be sure to stop SQL Server first, otherwise the files will be locked and cannot be backed up. You could also schedule a backup of the SQL Server databases, and then back up the database backups. The choice is yours. The backup is running on my laptop, where I don’t have much disk space to make backups of the databases just to back them up, but I have no problem stopping SQL Server since I am the only one using it.

call backupOne.bat "Backup dev to E"
call backupOne.bat "Backup Inetpub to E"
call backupOne.bat "Backup Video to E"
rem Stop SQL Server so we can backup the database files
call backupOne.bat "Backup Databases to E"

rem Check for errors
call CheckForErrors.bat


Below is a Beyond Compare script (backupDirectory.bc). It tells Beyond Compare to clear any read-only flags in the destination directory so that we won’t get any errors when replacing files, and then it makes the source and destination directories the same. There are other options; check the Beyond Compare docs on how to modify the script to change the behavior of the sync command (see the last line). The rest of the script is just to clear the read-only flags.

# Backup Directory

# Turn on logging
log verbose %LOGS_PATH%\backup-%BACKUP_NAME%.log

# Load the saved session for the directories to back up
# (the session name is passed in as script parameter %1)
load "%1"

# Must expand all folders so that select will work
expand all

# Select all files in the backup directory on server
select right

# Clear the read only flag on the backup directory on server
attrib -r

# Assume yes to all confirmation messages
option confirm:yes-to-all

# Copy all files from local directory to backup directory on the server
sync mirror:left->right


This file (backupOne.bat) is very simple and doesn’t really need to be modified unless you changed the path to the Logs directory, or have installed Beyond Compare in a different path. It just tells Beyond Compare to run as quietly as possible so that it doesn’t disturb us. :)


"C:\Program Files\Beyond Compare 2\bc2.exe" /silent @backupDirectory.bc


This file (CheckForErrors.bat) is optional. You really only need it if you want to scan the log files for errors and have them shown to you. It tells MS LogParser to query the logs we created, using the query defined in the CheckForErrors.sql file. Adjust the path to LogParser if needed or if a different version is used.

"C:\Program Files\Log Parser 2.2\LogParser" file:CheckForErrors.sql -i:TEXTLINE


This file (CheckForErrors.sql) is optional, like I said before. Adjust the path to your log files if you didn’t use the same path as I did.

select * into DATAGRID from E:\backupScript\Logs\*.log where Text like '%Script Error%'

Wednesday, June 17, 2009

Using GridView, Entity Framework, LINQ, and an ObjectDataSource to implement a GridView that sorts and filters.

I went in search of an elegant and flexible way to implement a GridView that supports sorting (and the ability to add custom paging if I need it later, though I won’t cover that here) and uses a Data Access Layer (DAL). The filtering I am looking for is the ability to add any number of controls on the page and have them filter the results that are shown in the GridView. Based on the values of these user controls, I want to be able to do a begins with, or a range, or choose from a list of values, etc. I don’t want to be limited to just one value.

I am a believer that like SQL, you don’t want your LINQ queries all over the place. I believe a Data Access Layer (DAL) is a good place to put all your LINQ queries.

At the present time, this means that the ObjectDataSource is probably the best choice because it can call the DAL to do the query and not embed it in the EntityDataSource or LinqDataSource.

It is possible to get pretty good filtering with little to no code using the Dynamic Data Futures, but even using the DynamicFilter I found it difficult to modify the query to do things like ranges or a begins-with search, for example. If you decide to go down that road, you might also want to check out the following post on how to do this in your own project. It makes searching based on a DropDownList or an AutoComplete field very easy. I wanted more flexibility than that. You can also get much of that same functionality from VS 2008 SP1 (yes, SP1 is required to get this functionality since SP1 is essentially a feature release, not a bug-fix release).

The hard part of this is writing the DAL method, but it is actually much easier than it used to be now that we have LINQ. In this case, I am using LINQ to Entities to query the Entity Framework.

Here is my DAL:

public class DAL
{
    private MyEntities ctx = new MyEntities();

    public IQueryable GetPerson(string firstName, string lastName, bool hasChildren, int? age, string sortExpr)
    {
        // set a default sort order
        if (string.IsNullOrEmpty(sortExpr))
            sortExpr = "FName";

        var people = from p in ctx.Person
                     select new
                     {
                         ID = p.ID,
                         FName = p.FName,
                         LName = p.LName,
                         Age = p.Age,
                         NumChildren = p.NumChildren
                     };

        if (hasChildren)
            people = people.Where(p => p.NumChildren > 0);

        if (!string.IsNullOrEmpty(firstName))
            people = people.Where(p => p.FName.StartsWith(firstName));

        if (!string.IsNullOrEmpty(lastName))
            people = people.Where(p => p.LName.StartsWith(lastName));

        if (age.HasValue)
            people = people.Where(p => p.Age > age);

        var sortedPeople = people.OrderBy(sortExpr)
                                 .Select("new(ID, FName, LName, Age, NumChildren)");

        return sortedPeople;
    }
}
You may notice that the .OrderBy() method gives you a compiler error or is not in your Intellisense. You need to download it from Microsoft. Click here to download. In the zip file, look for the Dynamics.cs file. You can include it in your project, or you can build the project that comes with it and include the assembly it builds in your project. It is one file, so I like putting it in my project as source code.

This Dynamic class works very well in scenarios like this because it supports the same syntax as the ObjectDataSource uses, which is <ColumnName> <SortDirection>. If the sort direction is Ascending, then no direction is specified by the ObjectDataSource. For example, to sort by FName in Ascending order, the sortExpression would be “FName”, or “FName Descending” if you wanted to sort in Descending order.
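Expressed as code, the convention is trivial. The following JavaScript sketch (a hypothetical helper, not part of the ObjectDataSource or the Dynamic class) just illustrates the string format:

```javascript
// Illustration of the <ColumnName> <SortDirection> convention:
// ascending order gets no suffix, descending appends " Descending".
function buildSortExpression(column, direction) {
    return direction === "Descending" ? column + " Descending" : column;
}

console.log(buildSortExpression("FName", "Ascending"));  // "FName"
console.log(buildSortExpression("FName", "Descending")); // "FName Descending"
```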

You may also notice that I use lambda expressions to do the additional where statements. Be sure to use the value returned by the Where() method, since calling Where() doesn’t change the original query (or even query the database). All Where() does is add another condition to the existing where clause in the generated SQL. Each time Where() is called, the condition is ANDed to the existing where clause. The generated code is very optimized. I am quite impressed with the code generation.
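If the way the chained Where() calls combine is unclear, here is a rough JavaScript analogy (not LINQ; the names are made up for illustration): each predicate added to the list further narrows the previous result, which is the same effect as each Where() call ANDing on another condition.

```javascript
// Apply each predicate in turn; every filter narrows the previous result,
// analogous to chained Where() calls ANDing conditions together.
function applyFilters(items, predicates) {
    return predicates.reduce(function (current, pred) {
        return current.filter(pred);
    }, items);
}

var people = [
    { fname: "Al",  age: 30, numChildren: 2 },
    { fname: "Bo",  age: 40, numChildren: 0 },
    { fname: "Ann", age: 50, numChildren: 1 }
];

// "has children" AND "first name starts with A"
var result = applyFilters(people, [
    function (p) { return p.numChildren > 0; },
    function (p) { return p.fname.indexOf("A") === 0; }
]);
// result contains Al and Ann
```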

For related details on sorting with the ObjectDataSource, check out my other post.

Below is the aspx code. The most important thing to get right is the SelectParameters/ControlParameters. The ControlID property needs to match the ID of the controls you are using for filtering. The Name property needs to match the parameter names in the DAL method you specified in the ObjectDataSource SelectMethod property.

First Name starts with: <asp:TextBox ID="FirstNameFilter" runat="server"></asp:TextBox><br />
Last Name Starts with: <asp:TextBox ID="LastNameFilter" runat="server"></asp:TextBox><br />
Has Children: <asp:CheckBox ID="cbHasChildren" runat="server" /><br />
<asp:Button ID="Button1" runat="server" Text="Apply Filter" />

<asp:GridView ID="GridView1" runat="server" AllowSorting="True"
AutoGenerateColumns="False" DataKeyNames="ID"
<asp:HyperLinkField DataNavigateUrlFormatString="PersonDetail.aspx?ID={0}" Text="View" DataNavigateUrlFields="ID" />

<asp:BoundField DataField="ID" HeaderText="ID" ReadOnly="True"
SortExpression="ID" Visible="False"/>
<asp:BoundField DataField="FName" HeaderText="First Name"
SortExpression="FName" />
<asp:BoundField DataField="LName" HeaderText="Last Name"
SortExpression="LName" />
<asp:BoundField DataField="NumChildren" HeaderText="Number of Children"
SortExpression="NumChildren" />
<asp:BoundField DataField="Age" HeaderText="Age"
SortExpression="Age" />

<asp:ObjectDataSource ID="dsPeople" runat="server"
<asp:ControlParameter ControlID="cbHasChildren" Name="hasChildren" Type="Boolean"/>
<asp:ControlParameter ControlID="FirstNameFilter" Name="firstName" Type="String"/>
<asp:ControlParameter ControlID="LastNameFilter" Name="lastName" Type="String"/>


The context type MyDataContext does not belong to any registered model.

Are you getting the following error message at runtime in your ASP.NET application that is using LINQ to SQL?

The context type MyDataContext does not belong to any registered model.

If you are, it is very likely you have not registered your DataContext. To register your DataContext, open your Global.asax.cs and add the following to the RegisterRoutes() method.

model.RegisterContext(typeof(MyDataContext), new ContextConfiguration() { ScaffoldAllTables = true });

Monday, June 15, 2009

The GridView 'GridView1' fired event Sorting which wasn't handled.

If you get the error:

The GridView 'GridView1' fired event Sorting which wasn't handled.

You are likely using an ObjectDataSource and have set AllowSorting to true, or you are binding directly to your GridView in Page_Load, as shown later in this post.

It means you are using a GridView that has AllowSorting set to true, and for some reason nothing has told it what will handle the sorting. The easiest way to avoid this is to use a DataSource control such as an EntityDataSource, SqlDataSource, or LinqDataSource control. The ObjectDataSource will not help you out of the box, though. Some extra stuff is required. I’ll show you that later.

This post is broken up into two sections: one if you are binding directly to the GridView in your page load and thus have no DataSource assigned to the GridView, and another if you are using an ObjectDataSource.

In both sections I assume you are using LINQ to access the database, but you could use anything to talk to the database. The logic that needs to be implemented is still the same. I also assume you have an object that encapsulates your database access (a Data Access Layer (DAL)).

For this example, let’s assume you have used the ADO.NET Entity Data Model in Visual Studio to create your entities. In this example we have one entity called Person. It has 3 properties: ID, FName, LName.

Data Access Layer

Below is a solution if you are using LINQ to Entities, though it would be virtually identical in LINQ to SQL. A similar solution could be used for plain SQL, though in that case you would translate the requests to SQL statements.
public class DAL
{
    private MyEntities ctx = new MyEntities();

    public IQueryable GetPerson(string sortExpression)
    {
        // set a default sort order for when the page is first rendered
        if (string.IsNullOrEmpty(sortExpression))
            sortExpression = "FName Descending";

        var people = from p in ctx.Person
                     select p;

        var sortedPeople = people.OrderBy(sortExpression)
                                 .Select("new(ID, FName, LName)");

        return sortedPeople;
    }
}

You may notice that the .OrderBy() method gives you a compiler error or is not in your Intellisense. You need to download it from Microsoft. Click here to download. In the zip file, look for the Dynamics.cs file. You can include it in your project, or you can build the project that comes with it and include the assembly it builds in your project. It is one file, so I like putting it in my project as source code.

This Dynamic class works very well in scenarios like this because it supports the same syntax as the ObjectDataSource uses, which is <ColumnName> <SortDirection>. If the sort direction is Ascending, then no direction is specified by the ObjectDataSource. For example, to sort by FName in Ascending order, the sortExpression would be “FName”, or “FName Descending” if you wanted to sort in Descending order.

The GridView Sorting event also uses very similar syntax. In either case, this saves us from writing a bunch of if-else or switch statements for each column and sort direction. The choice is yours. This is just so easy, and it is clean.

Binding ObjectDataSource to GridView

This is by far the easier of the two methods. I highly recommend using a DataSource control such as the ObjectDataSource. The code is much simpler.

To make the ObjectDataSource sort, all you have to do is set the DataSourceID property on the GridView to the ID of your ObjectDataSource.

You do have to tell the ObjectDataSource some things about your Data Access Layer, though: the type of your Data Access Layer, the method to call, and the name of the sortExpression parameter the GridView will pass you.

Here is the ObjectDataSource that I defined for the Data Access Layer we defined above.

<asp:ObjectDataSource ID="dsContracts" runat="server" 

Binding Directly to GridView in Page Load

If you want to work a little harder, you can implement the logic using the GridView and no ObjectDataSource. If you are binding directly to your GridView in your page load, all you have to do to stop this error message is handle the Sorting event on your GridView. While this stops the error message, it doesn’t give you sorting when a column header is clicked. You need to put some logic in the Sorting event for it to do something useful.

You are likely binding your DAL to your GridView using something like this, or maybe conditionally if it isn’t a postback:

protected void Page_Load(object sender, EventArgs e)
{
    GridView1.DataSource = new DAL().GetPerson("");
    GridView1.DataBind();
}
The GridView does NOT set the SortDirection property in this event handler unless a DataSource control is set. This means the Sorting event will ALWAYS have e.SortDirection equal to SortDirection.Ascending. This is a bug in my mind, but I think Microsoft just says it is by design (or bad design if you ask me). For more explanation, please see here for the response from Microsoft.
As a recommended workaround, we need to track the SortDirection ourselves. In order to do something useful, we also need to track the column that was clicked so that we know when to reset the sort direction to the default direction.
Here is the code to handle the sorting event for GridView. Be sure to wire it up to your GridView.
protected void GridView1_Sorting(object sender, GridViewSortEventArgs e)
{
    // get values from viewstate
    string sortExpression = ViewState["_GridView1LastSortExpression_"] as string;
    string sortDirection = ViewState["_GridView1LastSortDirection_"] as string;

    if (e.SortExpression != sortExpression)
    {
        // on first time a header is clicked (including a different header), sort Ascending
        sortExpression = e.SortExpression;
        sortDirection = "Ascending";
    }
    else if (sortDirection == "Ascending")
    {
        // on second time the header is clicked, toggle the sort
        sortDirection = "Descending";
    }
    else // Descending
    {
        sortDirection = "Ascending";
    }

    // save state for next time
    ViewState["_GridView1LastSortDirection_"] = sortDirection;
    ViewState["_GridView1LastSortExpression_"] = sortExpression;

    // NOTE: Depending on the syntax you require for your sortExpression parameter
    // to your method, you may need to convert the sort expression to that syntax.
    GridView1.DataSource = new DAL().GetPerson(sortExpression + " " + sortDirection);
    GridView1.DataBind();
}

Friday, June 12, 2009

Getting the SQL that was generated using LINQ to Entities

LINQ to Entities doesn’t have debugger support for getting the SQL that was generated for a query like LINQ to SQL does. However, you can easily get the generated SQL with a few lines of code. I have added some additional lines of code to make it more robust. I recommend adding the following method to a partial class that extends your class that inherits from ObjectContext. The designer class is created by default when you create an ADO.NET Entity Data Model, but you don’t want to edit that. Instead, add a partial class with the same name and in the same namespace. If you don’t want to put the code there, you could modify it slightly to take the ObjectContext as a parameter and put it anywhere you want.

/// <summary>
/// For debugging only. Returns the SQL statement that is generated
/// by LINQ for an IQueryable object. This does NOT execute the query
/// </summary>
/// <param name="query">The IQueryable object</param>
/// <returns>The generated SQL as a string</returns>
public string GetGeneratedSQL(IQueryable query)
{
    string sql = string.Empty;
    bool weOpenedConnection = false;
    if (Connection.State != ConnectionState.Open)
    {
        Connection.Open();
        weOpenedConnection = true;
    }
    sql = (query as ObjectQuery).ToTraceString();
    // only close the connection if we were the ones that opened it
    if (weOpenedConnection)
        Connection.Close();
    return sql;
}

Here is how you use it:

using (ICA3Entities ctx = new ICA3Entities())
{
    var query = from t in ctx.MyTable
                select t;

    string sql = ctx.GetGeneratedSQL(query);
}

For tools and more information on other options you may want to check out:

Thursday, June 11, 2009

Controlling Opacity of a background color using CSS

If you want to have a background color be semi-transparent (a color with opacity of less than 100%), Cascading Style Sheets (CSS) may be what you have been looking for.

FireFox and Internet Explorer implement the feature differently. However, both implementations can co-exist in one CSS class, so there is no need for fancy code to swap between the two depending on the browser.

.mySemitransparentBackground {
    background-color: Gray;
    filter: alpha(opacity=70); /* Internet Explorer */
    opacity: 0.7;              /* FireFox */
}
In this example, the background color is Gray, but it could be any color. The line that starts with filter is for Internet Explorer, and the line that starts with opacity is for FireFox. Notice that Internet Explorer takes the value 70 to specify that the color is 70% opaque, while FireFox uses the decimal version, 0.7, to specify the same 70% opacity. Note that 100% means your color will be solid / not transparent (no background will be seen through it). Be sure both values represent the same opacity, otherwise you will get a different opacity in each browser.
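If you ever need to set the opacity from script, you can derive both values from a single number so they always stay in sync. This is a hypothetical JavaScript sketch (the function name is made up for illustration):

```javascript
// Keep both browsers' settings in sync from one 0.0-1.0 opacity value.
// FireFox/Safari read style.opacity (0.0-1.0); Internet Explorer reads
// the filter string with a 0-100 value.
function setBackgroundOpacity(el, opacity) {
    el.style.opacity = String(opacity);
    el.style.filter = "alpha(opacity=" + Math.round(opacity * 100) + ")";
}

// Example with a plain object standing in for a DOM element:
var fakeElement = { style: {} };
setBackgroundOpacity(fakeElement, 0.7);
// fakeElement.style.opacity is "0.7", fakeElement.style.filter is "alpha(opacity=70)"
```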

NOTE: I have verified that the FireFox stuff also works for Safari on Windows. If anyone else confirms any other browsers or platforms, please let me know. Thx!

Wednesday, June 3, 2009

Just Geeks - 200th posting

Wow! I can hardly believe I am up to 200 postings.

It is amazing how much I have picked up from programming on a daily basis. It seems hard to imagine that someone could get bored programming.

I sincerely hope this blog helps many people. According to Google Analytics I get about 550 people visiting my blog a day. I guess I must be doing something right, so I think I will keep on writing. If nothing else I use my own blog to help document what I learn.

Thanks for all your support.

Brent V

Downloading Binary content using ASP.NET and LINQ

Here’s the scenario: you have some binary data in a database. You want to use LINQ to SQL or LINQ to Entities to pull the data out of the database and return it to the user for viewing when they hit a particular url.

Let’s pretend you have a page called Download.aspx and you want to use the primary key in the url to specify what file should be downloaded when the user hits this page.

For example, the url might be http://myapp/Download.aspx?AttachmentID=123, where 123 is the primary key in a table called Attachment that holds our binary data.

Below is a LINQ to Entities example, but LINQ to SQL would be virtually identical.

protected void Page_Load(object sender, EventArgs e)
{
    if (!string.IsNullOrEmpty(Request.QueryString["AttachmentID"]))
    {
        int attachmentID = Convert.ToInt32(Request.QueryString["AttachmentID"]);

        using (MyEntities ctx = new MyEntities())
        {
            var attachment = ctx.Attachment.FirstOrDefault(a => a.ID == attachmentID);
            if (attachment != null)
            {
                byte[] binaryData = attachment.DataField;
                Response.AddHeader("content-disposition", string.Format("attachment; filename=\"{0}\"", attachment.FileName));
                Response.ContentType = attachment.MimeType;
                Response.BinaryWrite(binaryData);
                Response.End();
            }
        }
    }
}

Monday, June 1, 2009

Microsoft Chart Controls from code

Below are two methods that you can call from your console app or Win Forms app to create a chart using MS Chart Controls. You will need Visual Studio 2008 SP1 (.NET 3.5 SP1). You will need to download the appropriate Microsoft Charting Control files. See here for a list of files to download, and here for a good article on what MS Charting Controls have to offer.

This example assumes you don’t want to use the visual designer in Visual Studio for some reason and that you want to control the entire lifecycle of the Windows Control.

The first example is called DatabaseLikeTest and is meant to simulate querying a database using LINQ to SQL, but the example could be applied to any kind of object that holds data like a DataSet or direct database query.

The second example is called HardCodedTest and just shows you how to do a very basic chart.

In both cases, the output is a PNG file, though other formats could be output just by changing the ChartImageFormat of the SaveImage method.

using System.Drawing;
using System.Windows.Forms.DataVisualization.Charting;


public class Thing
{
    public int MyNum { get; set; }
    public string MyDescription { get; set; }
}

private void DatabaseLikeTest()
{
    // this could be a database
    List<Thing> things = new List<Thing>();
    things.Add(new Thing { MyDescription = "My Data 1", MyNum = 14 });
    things.Add(new Thing { MyDescription = "My Data 2", MyNum = 34 });
    things.Add(new Thing { MyDescription = "My Data 3", MyNum = 42 });
    things.Add(new Thing { MyDescription = "My Data 4", MyNum = 18 });
    things.Add(new Thing { MyDescription = "My Data 5", MyNum = 24 });
    things.Add(new Thing { MyDescription = "Bogus Value", MyNum = 100 });

    // this could be a linq to sql query here
    var filteredThings = things.Where(p => p.MyNum < 50);

    Chart myChart = new Chart();
    myChart.Size = new Size(500, 500);

    ChartArea myChartArea = new ChartArea();
    myChart.ChartAreas.Add(myChartArea);

    Series series = new Series("mySeries");
    myChart.Series.Add(series);

    foreach (var thing in filteredThings)
    {
        series.Points.AddXY(thing.MyDescription, thing.MyNum);
    }

    series.Points[2].Label = "High Data";
    series.Points[2].Color = Color.Green;

    Font font = new Font("Arial", 24, FontStyle.Bold);
    Title title = new Title();
    title.Text = "Test Chart";
    title.Font = font;
    myChart.Titles.Add(title);

    myChart.SaveImage(@"C:\temp\test2.png", ChartImageFormat.Png);
}

private void HardCodedTest()
{
    Chart myChart = new Chart();
    myChart.Size = new Size(500, 500);

    ChartArea myChartArea = new ChartArea();
    myChart.ChartAreas.Add(myChartArea);

    Series series = new Series("mySeries");
    myChart.Series.Add(series);

    // add the data points (sample values)
    series.Points.AddY(14);
    series.Points.AddY(34);
    series.Points.AddY(42);
    series.Points.AddY(18);
    series.Points.AddY(24);

    series.Points[0].AxisLabel = "My Data 1";
    series.Points[1].AxisLabel = "My Data 2";
    series.Points[2].AxisLabel = "My Data 3";
    series.Points[3].AxisLabel = "My Data 4";
    series.Points[4].AxisLabel = "My Data 5";

    series.Points[2].Label = "High Data";
    series.Points[2].Color = Color.Green;

    Font font = new Font("Arial", 24, FontStyle.Bold);
    Title title = new Title();
    title.Text = "Test Chart";
    title.Font = font;
    myChart.Titles.Add(title);

    myChart.SaveImage(@"C:\temp\test.png", ChartImageFormat.Png);
}

The chart will look something like this:


Truncating Log File in SQL Server

Sometimes a transaction log file in SQL Server gets too large and needs to be shrunk.

To see how much free space you will be able to reclaim, run the following query before and / or after you shrink the log file.

SELECT name, size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS AvailableSpaceInMB
FROM sys.database_files;

Below is a quick T-SQL snippet of code that you can run to truncate (shrink) your SQL Server transaction log file. The snippet will shrink your log file to 1 MB. You will need to change MyDB and MyDB_Log to match your database.

USE MyDB
GO
BACKUP LOG MyDB WITH TRUNCATE_ONLY
GO
DBCC SHRINKFILE (MyDB_Log, 1)
GO
I believe the log file is usually named using the convention MyDB_Log, but if that doesn’t work, or you want to check for sure, just get properties on your database by right-clicking it in SQL Server Management Studio and going to the Files page. Look at the Logical Name of the log file. That is what you want to use.

SQL Server needs some free space just for daily operations, so don’t be surprised if your log file grows a little after you shrink it. Typically, though, it will be by a small amount.
WARNING: With any of this, you will lose the log of transactions, since you have deleted the transaction log.


What if the above didn’t error, but it didn’t reduce the size of the log file? Here are some things to check.

  • Is there a backup job that is currently running? If so, wait for it to stop, or stop the backup job.
  • Is there a long transaction that is currently running? If so, wait for it to stop, or kill the transaction.
  • Is there an SSIS package or other job running that could potentially lock the database you are trying to shrink? If so, wait for it to stop or kill it.
  • If the log file doesn’t shrink (usually due to a transaction running), you may need to wait for the transaction to finish, or put the database in single user mode (under Properties | Options). Then run the DBCC SHRINKFILE command.
  • If all else fails and you get desperate, you can detach your database (you may need to put it in single user mode first, especially if you are out of disk space), then go to the file system and manually delete the log file. Then attach the database again; a new log file will be created.

You may also find my other entry on this topic useful. For more information on the topic, I recommend the MSDN docs. They are actually quite helpful on this topic. I also recommend this blog posting. It is where I started.