If your database performed well during development but has become slow in production, here are some things to consider. If you didn't create indexes initially, your data may have grown since you started and now needs some. If your data changes a lot, the database can become fragmented. Fragmentation can occur at the disk level, which can be remedied by a tool like the Windows Disk Defragmenter. Fragmentation can also occur at the database level, meaning individual tables or partitions are fragmented.
Checking for Index Fragmentation
Use DBCC SHOWCONTIG to determine how fragmented your tables are. It accepts several parameters; if you execute it with no parameters, it runs against every table in the database. I recommend the following option so the results come back as a table covering all tables in the database.
DBCC SHOWCONTIG WITH TABLERESULTS;
NOTE: You will see a column called IndexName, but be aware that this is just the name of the primary key index, not all the indexes on the table.
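If you want to see every index rather than just that one, you can add the ALL_INDEXES option; you can also target a single table directly. A sketch, using a hypothetical MyTable:

DBCC SHOWCONTIG ('MyTable') WITH TABLERESULTS, ALL_INDEXES;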
In particular, the first thing to look at is the Logical Scan Fragmentation. This is the percentage that is fragmented; for a very healthy table you want 0%. Second, look at the Scan Density line, which is the ratio of Best Count to Actual Count; for a really healthy table you want 100%.
You should eventually move to sys.dm_db_index_physical_stats, since DBCC SHOWCONTIG will be removed after SQL Server 2008. I don't know how to get the Scan Density from it, though, so I show both here. However, someone at Microsoft posted that you don't need Scan Density and should instead use the contiguous extent allocation that sys.dm_db_index_physical_stats provides, but I don't know which column they were referring to. Does anyone else know? I suspect it has something to do with fragment size.
Evaluating Index Fragments
A fragment is made up of physically consecutive leaf pages in the same file for an allocation unit. An index has at least one fragment. The maximum number of fragments an index can have is equal to the number of pages in the leaf level of the index. Larger fragments mean that less disk I/O is required to read the same number of pages. Therefore, the larger the avg_fragment_size_in_pages value, the better the range scan performance. The avg_fragment_size_in_pages and avg_fragmentation_in_percent values are inversely proportional to each other. Therefore, rebuilding or reorganizing an index should reduce the amount of fragmentation and increase the fragment size. For more info, see the sys.dm_db_index_physical_stats documentation.
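Since I suspect fragment size is the answer, here is a minimal sketch (the choice of columns is my own guess, not confirmed Microsoft guidance) that pulls the fragment-related columns so you can compare them against Scan Density yourself, again using a hypothetical MyTable:

SELECT b.name AS index_name,
       a.fragment_count,                -- total fragments in the leaf level
       a.avg_fragment_size_in_pages,    -- bigger is better for range scans
       a.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID(N'MyTable'),
    NULL, NULL, NULL) AS a
JOIN sys.indexes AS b ON a.object_id = b.object_id AND a.index_id = b.index_id;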
If you want to go off of just the average logical fragmentation percent, and you have SQL Server 2005 or greater, you can use the following statement. It is very fast when you just want to see the fragmentation of the indexes on a single table (MyTable in this example):
SELECT a.index_id, name, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID(N'MyTable'),
NULL, NULL, NULL) AS a
JOIN sys.indexes AS b ON a.object_id = b.object_id AND a.index_id = b.index_id;
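The last NULL is the scan mode, which defaults to LIMITED, the fastest option. If you also want page-density information (roughly the Avg. Page Density that SHOWCONTIG reports), you can pass 'DETAILED' instead, at the cost of a full scan of the leaf pages. A sketch:

SELECT a.index_id, b.name, a.avg_fragmentation_in_percent,
       a.avg_page_space_used_in_percent  -- NULL under LIMITED, populated here
FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID(N'MyTable'),
    NULL, NULL, 'DETAILED') AS a
JOIN sys.indexes AS b ON a.object_id = b.object_id AND a.index_id = b.index_id;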
Alternatively, if you want to see it for all the tables, you can run this:
SELECT a.index_id, name, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL,
    NULL, NULL, NULL) AS a
JOIN sys.indexes AS b ON a.object_id = b.object_id AND a.index_id = b.index_id;
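In practice I find it helpful to sort worst-first and to skip tiny indexes, since fragmentation percentages are meaningless on an index with only a handful of pages. A sketch; the 100-page cutoff is an arbitrary assumption of mine:

SELECT OBJECT_NAME(a.object_id) AS table_name, b.name AS index_name,
       a.avg_fragmentation_in_percent, a.page_count
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, NULL) AS a
JOIN sys.indexes AS b ON a.object_id = b.object_id AND a.index_id = b.index_id
WHERE a.page_count > 100                 -- ignore trivially small indexes
ORDER BY a.avg_fragmentation_in_percent DESC;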
Possible Solutions
If you have index fragmentation, you have three options, as noted in the DBCC SHOWCONTIG docs:

1. Drop and re-create the clustered index.
Re-creating a clustered index reorganizes the data and results in full data pages. The level of fullness can be configured with the FILLFACTOR option of CREATE INDEX. The drawbacks of this method are that the index is offline during the drop and re-create cycle, and that the operation is atomic: if the index creation is interrupted, the index is not re-created.
Example:
CREATE CLUSTERED INDEX IX_WorkOrder_ID ON WorkOrder(ID)
WITH (FILLFACTOR = 80,
PAD_INDEX = ON,
DROP_EXISTING = ON);
or DROP INDEX followed by CREATE INDEX, as sketched below.
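A minimal sketch of the two-statement form, reusing the hypothetical WorkOrder table from above. Note that dropping a clustered index turns the table back into a heap and rebuilds any nonclustered indexes, so the single-statement DROP_EXISTING form above is usually cheaper:

DROP INDEX IX_WorkOrder_ID ON WorkOrder;
CREATE CLUSTERED INDEX IX_WorkOrder_ID ON WorkOrder(ID)
WITH (FILLFACTOR = 80, PAD_INDEX = ON);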
2. Reorganize / defragment the index.
Use ALTER INDEX ... REORGANIZE to reorder the leaf-level pages of the index into logical order. Because this is an online operation, the index remains available while the statement runs. The operation is also interruptible without loss of completed work. The drawback of this method is that it does not do as good a job of reorganizing the data as dropping and re-creating the clustered index.
Note: You can also use DBCC INDEXDEFRAG, which is the old SQL Server 2000 equivalent.
Example: ALTER INDEX PK_Employee_ID ON Employee REORGANIZE;
3. Rebuild the index.
Use ALTER INDEX with REBUILD to rebuild the index. For more information, see ALTER INDEX (Transact-SQL).
Note: You can also use DBCC DBREINDEX, which is the old SQL Server 2000 equivalent.
Example: ALTER INDEX PK_Employee_ID ON Employee REBUILD;
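If you want to rebuild every index on a table at once, ALTER INDEX also accepts ALL. A sketch against the hypothetical Employee table:

ALTER INDEX ALL ON Employee
REBUILD WITH (FILLFACTOR = 80);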
Choosing the right solution
The rule of thumb that Microsoft recommends is the following:
% Logical Fragmentation | Corrective Statement
< 5%                    | No action should be taken
> 5% and <= 30%         | ALTER INDEX REORGANIZE
> 30%                   | ALTER INDEX REBUILD WITH (ONLINE = ON)*
* Use ONLINE = ON to keep the index usable and the data available to users during the rebuild (note that online index rebuilds are an Enterprise Edition feature). ONLINE = OFF can also be used if availability is not a concern.
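For example, against the hypothetical Employee table from above:

ALTER INDEX PK_Employee_ID ON Employee
REBUILD WITH (ONLINE = ON);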
Automation
Example E in the sys.dm_db_index_physical_stats docs has a great script for defragging any index that exceeds a specified fragmentation threshold.
The easy way is to rebuild indexes in the Maintenance Plan for the database. This requires taking the indexes offline, though, so if you run it weekly and load data daily, it may not be often enough. In that case you may want to use the script described below.
That script actually follows the recommendations above, except that it changes the 5% boundary to 10%. Its problems are that it does NOT support anything except the default schema, uses 10% instead of 5%, and includes indexes that don't have names, such as when there is no primary key on the table. If you want the version I use, which I have modified to address these issues, click the link below.
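For reference, here is a simplified sketch of the kind of logic that script implements: collect the fragmented indexes, then apply the rule-of-thumb table above to each one. This is not the full Example E script, nor my modified version; it assumes the 5%/30% thresholds, skips heaps, and schema-qualifies names:

-- Collect fragmented indexes once, then work through them.
SELECT ps.object_id, ps.index_id, ps.avg_fragmentation_in_percent
INTO #work_to_do
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
WHERE ps.avg_fragmentation_in_percent > 5.0
  AND ps.index_id > 0;  -- index_id 0 is a heap, which has no index to fix

DECLARE @objectid int, @indexid int, @frag float, @command nvarchar(4000);

DECLARE work_cursor CURSOR FOR
    SELECT object_id, index_id, avg_fragmentation_in_percent FROM #work_to_do;
OPEN work_cursor;
FETCH NEXT FROM work_cursor INTO @objectid, @indexid, @frag;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Build a schema-qualified ALTER INDEX statement per the thresholds above.
    SELECT @command = N'ALTER INDEX ' + QUOTENAME(i.name)
        + N' ON ' + QUOTENAME(s.name) + N'.' + QUOTENAME(o.name)
        + CASE WHEN @frag <= 30.0 THEN N' REORGANIZE' ELSE N' REBUILD' END
    FROM sys.indexes AS i
    JOIN sys.objects AS o ON i.object_id = o.object_id
    JOIN sys.schemas AS s ON o.schema_id = s.schema_id
    WHERE i.object_id = @objectid AND i.index_id = @indexid;

    EXEC (@command);
    PRINT N'Executed: ' + @command;

    FETCH NEXT FROM work_cursor INTO @objectid, @indexid, @frag;
END
CLOSE work_cursor;
DEALLOCATE work_cursor;
DROP TABLE #work_to_do;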
1 comment:
Why not also just run a trace to look underneath the surface? There are a number of good "How to use SQL Profiler to run a trace" posts out there. There are also several good tools around that help you analyze and figure out what action to take based on trace data (e.g. SQL Trace Analyzer).