Blocking continues with Page Latches (Not Lock Blocking)



Remember one of our earlier blogs where we discussed how statistics on columns with gradually increasing data get out of date very fast and need frequent statistics updates.

Assume SQL Server is being used as the backend for a heavily transactional OLTP system with around 10K+ transactions per second. You thought SQL Server could not handle that, or had never heard of such a load😉.. Well, here it is. We were facing an issue in our company where, at only about 7K or 8K of those 10K transactions per second, we would hit heavy blocking, response times would climb beyond 30 seconds and queries would time out.

To give more background about SQL Server usage and our application: it was an ERP system with SQL Server as the backend. We had audit and permission tables where every user activity was monitored and logged (in case someone deleted data, we wanted to know who deleted it and when), and all deletes were soft deletes, i.e. we never actually deleted data but just updated a flag as deleted and transferred that data asynchronously to a different table during off-peak hours.

To enable such auditing we had a table with columns somewhat as below:

AuditUserActivity (TimeOfActivity DateTime, UserID int, TypeOfAction varchar(10), Action Varchar(10), MachineIP varchar(10))

And there was a clustered index on TimeOfActivity.

The definition of the clustered index was: Create Clustered Index ClustActivity on AuditUserActivity(TimeOfActivity)

From this table there were no reads, only inserts and deletes. At the end of every quarter this data would be removed from the table.

All of a sudden, one fine day, I got a call saying SQL Server CPU utilization was higher than normal and there was heavy blocking at the same time. I initially thought I had been given a wrong problem description, as until that day I had always believed that if there is heavy blocking on a server, CPU would be lower because all the blocked threads would be in a sleep state.

And soon that understanding was shattered when I ran our beloved SQL command “Select * from Sysprocesses where blocked <> 0 and SPID <> Blocked”.

Earlier I used to run only “Select * from Sysprocesses where blocked <> 0”, but post SP4 SQL Server 2000 reports latch (self-blocking) information in the blocked column as well, so I modified the query as above to filter that out. Refer to the SQL Server 2000 SP4 documentation for details on this self-blocking / latch reporting.
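For readability, the same check can be written out column by column (these are standard SQL Server 2000 sysprocesses columns):

-- Sessions blocked by some other session; ignore latch "self blocking" rows
SELECT spid, blocked, waittype, lastwaittype, waitresource, cmd, status
FROM   master.dbo.sysprocesses
WHERE  blocked <> 0
  AND  spid <> blocked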

Lo and behold, see below what I got.. take a moment and observe the results below, do you find anything odd there?


There is a lot of stuff here that initially did not make sense to me.

See the SPIDs being blocked: 55 is blocked by 159, and 234 is again blocked by 159. Is 159 the culprit or head blocker? No, it is not; just below we also have 224, 232, 201, 60. There are lots of head blockers, and each head blocker stays at the head for no more than a couple of seconds. So the conclusion: there is no single big bad statement executed from one session that is causing this blocking.

See the WaitType and LastWaitType columns: both are PAGELATCH_EX, which is quite odd. That means all of the sessions are trying to get hold of that one page and no other page in the entire database. Notice the Wait Resource: database 19, file 4, page 1980641. What is so important about that page that every session wants to read / write there?

Finally, see the cmd column: it is either an insert or a rollback. So either the sessions are still waiting to insert, or they have timed out and are rolling their transactions back.

To understand wait types, refer to the Microsoft KB article on wait types. Per that KB, PAGELATCH_EX means a buffer (BUF) latch, and below is an excerpt from the KB:

“The buffer latches are used to synchronize access to BUF structures and their associated database pages. The typical buffer latching occurs during operations that require serialization on a buffer page, (during a page split or during the allocation of a new page, for example). These latches are not held for the duration of a transaction.”  And from the SQLMag article:

“Page latch exclusive: Contention can be caused by issues other than IO or memory performance, for example, heavy concurrent inserts into the same index range can cause this type of contention. If a lot of inserts need to be placed on the same page they are serialized using the latch. A lot of inserts into the same range can also cause page splits in the index which will hold onto the latch while allocating a new page (this can take a while). Any read accesses to the same range as the inserts would also conflict on the latches. The solution in these cases is to distribute the inserts using a more appropriate…”

The line marked in red is interesting enough.. So we went ahead and pulled the input buffer for each statement that was blocked or blocking, and all of them were running the same insert statement, a sample of which is below..

Insert into AuditUserActivity Values (getdate(), @UserID, @ActionType, @Action, @IP)

Every session was running this single statement. If you look at the plan, this is one of the simplest possible queries with nothing to optimize. But since the clustered index is on the TimeOfActivity column, data is ordered by that column and inserts always land on the last page of the table, because getdate() always returns the latest date, which sorts last. So in essence, by our own design we had created a bottleneck where all sessions are waiting to insert into the last page. Interestingly, they would also have to wait for the page split, as it is always that last page that splits.
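For reference, the input buffers mentioned above were pulled per session with a command along these lines (55 being one of the blocked SPIDs from the output above):

DBCC INPUTBUFFER(55)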

Learning from this:

1. Do not create clustered indexes on columns that are gradually increasing, like GetDate(), Identity (auto increment) or any column where data insertion is sequential in nature. They can create a bottleneck like the one our server hit.

This is not to say do not have a clustered index at all; have the clustered index on a different column, but with the intention that inserts should be as random as possible rather than sequential in nature.

Our solution to this problem was:

1. Move the clustered index to UserID and TypeOfAction.

2. Since users logging into the system and their actions are fairly random, we would no longer keep hitting the same page all the time.

3. Create the index with a fill factor as low as 60% to avoid page splits (a sketch follows below).
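A minimal sketch of that index change, assuming the AuditUserActivity definition shown earlier (the index name reuses the original ClustActivity; adjust to your own naming):

-- Drop the existing clustered index on the ever-increasing datetime column
Drop Index AuditUserActivity.ClustActivity

-- Recreate it on (UserID, TypeOfAction) so inserts spread across the index
-- instead of always hitting the last page, and leave 40% free space per page
-- to reduce page splits
Create Clustered Index ClustActivity on AuditUserActivity (UserID, TypeOfAction)
With FillFactor = 60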

HTH and have a nice weekend.

As always, if you have any questions or anything that needs to be addressed, please feel free to shoot me a mail.


First Blog of 2010 (Twenty 10)….


It has been a while since I have put anything here.. not that I have been idle, but the last couple of weeks of 2009 were hectic.. Oh gosh.. I planned to send my wishes before writing anything else, but see, I missed it again.. Anyways..

 HAPPY NEW YEAR – 2010 (Twenty Ten sounds digital🙂) to all of you out there. Hope this year brings real happiness, peace and less work than last year😉..

The last couple of weeks I had been working on multiple things at the same time, and all of them were tremendously enriching, as they were about as diverse as they could be.

  • A high CPU utilization issue on SQL Server with page latch blocking at a very high load of 10K – 12K batch requests / second. (How ironic, I had been blogging all about blocking and this was one flavour I had never touched.)

  • A DR plan using replication where, if data is deleted in the publisher by a specific user, it should also be deleted on the subscriber. Had to provide a proof of concept for that.

  • Integrate SSRS 2008 with SAP BW (Still going on)

  • Plan a DR setup for site redundancy with custom log shipping, as network bandwidth was abysmal and hence compression had to be incorporated into it. BTW, the server was SQL Server 2000 on IA64 with a DB size of around 1.5 TB. (Still going on)

  • And of course, partitioning of a SQL table as it was getting unmanageable. Partitioning has been done but we are awaiting the results.

Additionally, there were a couple of good suggestions about writing on the below items:

Hi Guru,

Do you know or have any link/example which explains nested/hash/merge joins in a simplified fashion?




Hi Guru,

Hope you are doing great as usual..

The articles on Isolation Levels are really very helpful in SQLFundas. If you get time can you write any article on Access Violation and UMS?

Regards, Sujit

So in total we have around 7 or 8 blogs to write, and I would want to complete them as early as possible; also pending from last year is the 100 Tips for SQL Developers series about writing better SQL code.

Let's get started with the first one, dealing with page latch blocking….


Update: I have closed SQLFundas due to a cash crunch🙂


Tip 4: Linked Server between SQL Server 2008 and SQL Server 7.0

Recently we had to create a linked server between SQL Server 2008 (64-bit) and SQL Server 7.0 (32-bit). To create the linked server we used the below command:

EXEC sp_addlinkedserver @server='ABC', @srvproduct='', …

When this query was executed SQL Raised an exception as below:

Msg 10054, Level 16, State 1, Line 0
TCP Provider: An existing connection was forcibly closed by the remote host.
Msg 26, Level 16, State 1, Line 0
Client unable to establish connection because an error was encountered during handshakes before login. Common causes include client attempting to connect to an unsupported version of SQL Server, server too busy to accept new connections or a resource limitation (memory or maximum allowed connections) on the server.

SQL Server 2008 can use different Providers:

* SQLNCLI (SQL Native Client 9.0 OLEDB Provider released with SQL Server 2005)

* SQLNCLI10 (SQL Native Client 10.0 OLEDB Provider Released with SQL Server 2008)

* SQLOLEDB Provider for SQL Server.

* MSDASQL (OLEDB Provider for ODBC)

We tried all of these options and the only one that works is MSDASQL; none of the others do.

So the steps to create a linked server between SQL Server 2008 and SQL Server 7.0 are:

* Use ODBC Administrator to create a DSN that connects to SQL Server 7.0.

* Create a linked server, choose MSDASQL as the provider and point it at the DSN that was created.

* This way we can connect between SQL Server 2008 and SQL Server 7.0 (if needed); a sketch of the commands is below.
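A minimal sketch of the T-SQL side of this, assuming a System DSN named SQL7DSN has already been created in ODBC Administrator; the linked server name and remote credentials are illustrative placeholders:

-- Linked server over the OLE DB Provider for ODBC (MSDASQL), pointing at the DSN
EXEC sp_addlinkedserver
     @server     = 'SQL7LINK',
     @srvproduct = '',
     @provider   = 'MSDASQL',
     @datasrc    = 'SQL7DSN'

-- Map local logins to a SQL Server 7.0 login (credentials are placeholders)
EXEC sp_addlinkedsrvlogin
     @rmtsrvname  = 'SQL7LINK',
     @useself     = 'false',
     @rmtuser     = 'remoteuser',
     @rmtpassword = 'remotepassword'

-- Quick test with a four-part name
SELECT TOP 10 * FROM SQL7LINK.pubs.dbo.authors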


Validating Input Parameters in SQL Server 2008 Reporting Services

Since Reporting Services is not a front-end programming tool (a language like .NET or Java), validating user inputs may not be possible, or at best it is convoluted. I am still exploring what the best way would be to:

=> Validate parameters supplied by the report consumer.

=> Based on their validity, either let the dataset (query) execute or stop it from executing.

So far we have been able to validate user input in a couple of ways and tell the user about wrong parameters, but the report query still executes with the incorrect set of parameters.

To validate user parameters there are two methods:

* Using Variables

* Using Code

Using Variables:

SQL Server 2008 introduced report Variables, which can be used to validate user parameters.

In the report main screen, click on Report and go to Report Properties.


In the Report Properties screen go to Variables and declare a variable. In the value of the variable type an expression similar to the one below:
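(The original screenshot of the variable definition is not available; going by the description that follows, the expression would have been something along these lines, with the variable named vs to match the expressions further down.)

=iif(Parameters!CategoryName.Value = "Bikes", "True", "False")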


This expression takes the CategoryName parameter from the Parameters collection and compares the value entered by the end user with “Bikes”. Based on Parameters!CategoryName.Value being “Bikes” or not, True or False is assigned to the variable.

Now in the report body, place a TextBox and for its Value property type the below expression:

=iif(Variables!vs.Value.Equals("True"),"","Parameter Selected is not correct, Review Parameters Selected")

This means that if the vs variable in the Variables collection does not equal “True”, the end user entered wrong values and the above message is displayed; otherwise nothing is shown.

Additionally, for the Hidden property of the TextBox one can write an expression based on the value of the variable, as below.
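(That screenshot is also missing; under the same assumptions, such a Hidden expression could look like the one below, hiding the message TextBox whenever the parameter is valid.)

=iif(Variables!vs.Value.Equals("True"), true, false)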


This is one way of validating user inputs and sending users messages about wrong input parameters, but it does not stop SSRS from executing the query, albeit with the wrong set of parameters. The limitations of this approach are:


* For each parameter, there needs to be a corresponding variable for validation.

* It can get complex with a lot of parameters and many different values for each parameter.

* Query still executes with wrong parameters.

For the first two issues, we can use the Code functionality of SQL Server 2005 / 2008 Reporting Services, but we are still exploring the third issue, where the query should not execute if parameters do not pass the validation checks.

How to use Code to achieve a similar goal, we will post in the next blog..


DBCC SHRINKDATABASE – Understanding the Math behind it……


This is in response to the question asking for details on how DBCC SHRINKDATABASE works. The confusion typically arises from the fact that in DBCC SHRINKDATABASE the target percentage is not the target size the database should be shrunk to, but rather the target percentage of free space in the database after the shrink operation.

It bears repeating that DBCC SHRINKDATABASE and SHRINKFILE will induce fragmentation and are not recommended on production systems that typically back OLTP data usage patterns.

For syntax and further explanation please refer to the DBCC SHRINKDATABASE topic in Books Online.

How we approach it before we plan a shrink:

Think of each data file (assuming there are multiple data files) in the database as 100%. In the data file there will be:

* User data and metadata, all falling under Data.

* Free space for future use, for both user data and metadata.

The sum of these is 100% (irrespective of the size of the data file).

To reiterate, say we have a database of 100 GB with 30 GB of data and 70 GB free; it is 30% filled with data and has 70% free space.

When we shrink using the DBCC SHRINKDATABASE command (not SHRINKFILE), we are only shrinking the free space (%) of the data file; we are not shrinking the data portion.

So, in the above example our data percentage was 30, and if DBCC SHRINKDATABASE is run with 80% as the target percentage, it will not shrink the database but will return a message that it could not shrink. The reason:

30% data + 80% free space amounts to 110%, which is not mathematically possible.

Math behind calculations:

As for the exact math, we tried different permutations and came up with the best match for the observed results. Maybe only the Microsoft team can answer this definitively, but our postulate is as below.

D = Data size in the data file.

X = Data file size after the shrink operation.

P = Target percentage specified in the DBCC SHRINKDATABASE operation.

D + (X * P / 100) = X, which rearranges to X = D / (1 - P/100)

Say for example in our case

The data file is 10 MB with 6 MB of data and 4 MB of free space. If we shrink this database with DBCC SHRINKDATABASE with a target percentage of 25%, then:

D (6 MB) + X (we do not know this yet) * P (25) / 100 = X

=> 6 + 0.25X = X => 0.75X = 6 => X = 8 MB

So after the shrink operation the data file size should be 8 MB.

Now let us say we shrink the database with DBCC SHRINKDATABASE with a target of, say, 50% (remember our free space was only 40%). As per our math:

=> 6 + 0.5X = X => 0.5X = 6 => X = 12 MB (the file should be expanded and not shrunk!!! :)) At least DBCC SHRINKDATABASE does not expand the file size for a shrink operation.

Disclaimer: This is fairly accurate for large databases but not for smaller ones.. And we are not sure about:

* If such calculations are made in Pages (Database pages of 8KB)

* Converted to Bytes or Kilo Bytes for Calculation Purposes

* Is it Integer or Float Math

Other Things to Keep in Mind:

=> A database cannot be shrunk smaller than the size it was originally created with. Like the mistake we once made: create a database with a 100 GB initial size that was never used, and now a shrink would not work because that was the size the database was created with.

During the course of working on this article we found one interesting UNDOCUMENTED DBCC command which seems very useful.

Getting the proof:

Below are the logical steps needed to perform the test:

1. Create a database.
2. Create 3 tables.
3. Fill the data in these tables in an alternating manner so that data pages from the tables are interleaved.
4. Drop or truncate 2 of the tables (remember only drop and truncate let the database reclaim the space from an object, not delete operations).
5. Shrink the database.


Create Database TestShrinkDatabase
on
(  Name = 'TestShrinkDB',
   FileName = 'C:\TestShrinkDB.mdf',
   size = 10MB )
log on
(  Name = 'TestShrinkDB_log',
   FileName = 'C:\TestShrinkDB.ldf',
   size = 10MB )
go
Use TestShrinkDatabase
go
--Create Table 1
Create table Test(i char(4000))
go
--Populate Table 1
Declare @i int
Set @i = 1
While @i < 100
begin
   Insert into Test Values(@i)
   Set @i = @i + 1
end
go
--Create Table 2
Create table Test_Dummy_1 (i char(4000))
go
--Populate Table 2
Insert into Test_Dummy_1
select * from Test
go
--Populate Table 1 again
Declare @i int
Set @i = 101
While @i < 1000
begin
   Insert into Test Values(@i)
   Set @i = @i + 1
end
go
--Create and Populate Table 3
Select * into Test_Dummy_2 from Test
go
--Populate Table 2 (Beware of this go 10.. this is what filled up our database)
Insert into Test_Dummy_1
Select * from Test_Dummy_1
go 10

--Populate Table 3 (Beware of this go 5.. this is what filled up our database)
Insert into Test_Dummy_2
Select * from Test_Dummy_2
go 5
--Populate Table 1 again
Insert into Test
Select * from Test
go

To find out the database properties we use:

* sp_spaceused, the Database Properties screens and the database file properties screens, and the Shrink File dialog, not for shrinking but just for checking data usage (a minimal sp_spaceused example follows below).
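For reference, the sp_spaceused part of that is simply run in the context of the test database:

Use TestShrinkDatabase
Exec sp_spaceused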

Database Properties after Data is loaded:

Output of sp_spaceused:

database_name        database_size      unallocated space

TestShrinkDatabase       7162.19 MB                   0.22 MB

reserved           data               index_size         unused

4701984 KB         4700648 KB         608 KB                     728 KB

Database Properties:


Database File Sizes:


We see that the database is around 7 GB (as mentioned, we loaded a lot of data, and you can reduce this if needed), with the data file around 4.5 GB and the log file around 2.5 GB.

Post table Drop (Table 2 and Table 3) operation:

Output of SP_Spaceused:

database_name          database_size      unallocated space

TestShrinkDatabase               7162.19 MB         4091.80 MB

reserved           data               index_size         unused

512200 KB          511464 KB                 584 KB             152 KB

Database Properties:


Database File Sizes:


See the unallocated space from sp_spaceused: it is around 4 GB, and the data is only around 500 MB (highlighted in red bold).

As per our calculation we have around 500 MB of data, and if we shrink with a target percent of 50 then

D (500) + 0.5X = X => 0.5X = 500 => X = 1000 MB, i.e. roughly 1 GB for the data file.

Shrink with a target of 50%:
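The screenshot of the shrink itself is missing, but the command would be along these lines (50 being the target free-space percentage, not a target size):

DBCC SHRINKDATABASE (TestShrinkDatabase, 50)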



Post Shrink Operation:


database_name       database_size      unallocated space

TestShrinkDatabase      1020.56 MB                   500.19 MB

reserved           data               index_size         unused

512192 KB          511464 KB          576 KB             152 KB

Database Properties:


Data Files:


To summarize again, when running DBCC SHRINKDATABASE:

=> The target percentage is the target free space that should be present after the shrink operation.

=> DBCC SHRINKDATABASE will only shrink files down to their initial size (the size specified when the database was created, or when it was altered to a different size).

=> The sum of the data percentage and the free space percentage cannot be greater than 100%.

Hope this article achieves the goal of explaining how DBCC SHRINKDATABASE works; in case of any comments please feel free to send me a mail.

Extending Column Level Security to SQL Server

Before starting I would like to thank and appreciate Prasanna for reading the blog and following up with comments / queries.

As a follow-up to my earlier blog detailing column level security in Reporting Services, there was a question asking whether this could be extended to the SQL Server Database Engine itself, one of the objectives being that everyone should get a consistent view of the data irrespective of which front end is being used (be it SQL Server Management Studio, Query Analyzer, etc.).

From: Prasanna Prabhu

Sent: Sunday, August 02, 2009 9:17 AM


Subject: Nice Article – Column Level Security in Reporting Services

Hey Guru

Very Nice Article –

I would like to see or hear from you as a new blog or amendment to this one, if the same level of security can be moved from REPORTS to Database itself, such that if the report is generatred by those users using direct access to SQL Server (using Query Analyzer, say), then how do we ensure the security is still intact…

Any ideas?

Prasanna Prabhu

Column level security can be built in multiple ways in the database (SQL Server 2005 onwards).

Method 1:

The most simplistic way to achieve this is to create views and give end users permission only on the views rather than the base tables. That way they get a consistent view of the data (columns and rows) irrespective of which front end they use.

Views have been around for ages 😀 and we will not deal with them here.

Method 2:

Starting with SQL Server 2005, a database user can be given column level permissions on a table. See the example below; it implements column level security out of the box, supported in SQL Server 2005 onwards.

Note: For update permission on a column, select permission on that column is also needed. Insert and delete permissions cannot be granted at column level; only Select, Update and References are possible.

For those interested through Management studio:

=> Select the table or view , Right Click on Table

=> Open the "properties" dialog

=> Go to the "permissions" page

=> If they are not already there, add the users and/or roles that need column level permission

=> Choose "select", "Update", or "References" in the permission list (these are the only ones that support column level permissions)

=> Click on "Column Permissions" near the bottom

=> Choose the permissions that you want

=> Click "OK"


USE [master]
go
--Create Test database to understand column level security.
CREATE DATABASE [TestColumnLevelSecurity]
go
USE [TestColumnLevelSecurity]
go
--Create a table with a column where column level security is to be enabled.
CREATE TABLE [dbo].[TestSecurity](
      [i] [int] NOT NULL,
      [j] [char](10) NULL,
      CONSTRAINT [PK_TestSecurity] PRIMARY KEY CLUSTERED ([i] ASC)
)
go
--Insert data into table.
INSERT INTO [TestSecurity] VALUES (1,'A')
go
use master
go
--Create Login who would act as grantee (password is just a placeholder).
CREATE LOGIN TestUser WITH PASSWORD = 'Str0ngP@ssword!'
go
USE [TestColumnLevelSecurity]
go
--Create corresponding user in the database
CREATE USER TestUser FOR LOGIN TestUser
go
--These statements grant the actual column level permissions (column i only).
GRANT SELECT ON OBJECT::dbo.[TestSecurity](i) TO TestUser
GRANT UPDATE ON OBJECT::dbo.TestSecurity(i) TO TestUser
go
--Testing if it works by switching user context
setuser 'TestUser'
Select * from TestSecurity    --Fails, because * includes column j
Select i from TestSecurity    --Works
Select j from TestSecurity    --Fails with the error below
/*
Msg 230, Level 14, State 1, Line 1
The SELECT permission was denied on the column 'j' of the object 'TestSecurity',
database 'TestColumnLevelSecurity', schema 'dbo'.
*/
-- Reverting back to the sysadmin / dbo context we were connected with before.
setuser
go
Select * from TestSecurity


Tip 3: Beware of Indexes on DateTime Columns

Indexes on DateTime columns can sometimes cause inconsistent performance.. Some of the examples we have come across are:

=> Processing the cube up to last month, which contains more data, took less time than processing the cube for the current month with less data.

=> Queries for today took more time than for yesterday or any other date in the past, even though the current day's data returned is smaller.

The above are some of the typical cases for indexes on DateTime columns. Other such cases would be auto-increment or identity columns.

When a query is executed, the database engine (SQL Server, Oracle, etc.) first uses statistics to generate the execution plan (remember cost based optimization). Statistics (IMHO) are more important to the database engine than the indexes themselves.

In the above scenarios statistics get outdated quickly: even if Auto Update Stats is turned on (in SQL Server), it only kicks in at certain threshold values, so statistics are not updated as frequently as the table actually changes. Due to this, the inputs the optimizer gets from statistics are incorrect and, as a consequence, a suboptimal, slower plan is generated.

In most of these cases it seemed that in Showplan_all (SQL Server) or Explain Plan (Oracle) a Nested Loop join was being used, which typically runs at disk speed, instead of a Hash or Merge join, which are more CPU / memory oriented.

To test this, run the same (slow) query with a hash join hint, something like below:

<Query> OPTION (HASH JOIN)
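For illustration, a minimal sketch of such a hinted query, using hypothetical Orders and Customers tables with an index on the OrderDate column (all names here are illustrative, not from the original post):

SELECT o.OrderID, c.CustomerName
FROM Orders o
JOIN Customers c ON c.CustomerID = o.CustomerID
WHERE o.OrderDate >= '20100101'
OPTION (HASH JOIN)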

If the above works, update the statistics for such indexes more frequently with the below script:

Update Statistics <Table Name> <Index Name> with Sample 5 Percent


Please note that though updating statistics does not cause blocking, it may have an indirect impact by utilizing disk resources.

Are there any other cases where indexes may give inconsistent performance? If so, please respond and let us compile a larger list to republish.

If you need more info about the optimizer, let us know and we would be glad to explain in detail, as this is a Tip we have restricted to only a few thousand words.

As usual, we are open to responding through mail, so feel free to send a mail.

Column Level Security in Reporting Services

Recently one of our customers had the below requirement:

“Based on user or user group membership, some columns, column groups or row groups of a report should be hidden. If the condition is user based only, it can be achieved in SQL Server 2005 or SQL Server 2008 Reporting Services out of the box, without any customization or custom code.”

A hidden column could mean:

  1. The column is not visible in the final report (both the column heading and the column content are not visible).
  2. The column is visible but its content shows “No Access” or “NA” to indicate the user does not have access to that column.

SQL Server 2008 Reporting Services can deal with both options, but with certain limitations 😛. I will deal with both these conditions at the end.

Here is how we thought it would be achievable. The steps are:

  1. Add a new Report Item.
  2. Create a Data Source connecting to the SQL Server instance that hosts the AdventureWorksDW database.
  3. Create a Dataset with the query: SELECT DimEmployee.* FROM DimEmployee
  4. A simple table report with no grouping.
  5. Drag EmployeeKey for Employee ID.
  6. For Employee Name write an expression in the value (=Fields!FirstName.Value + " " + Fields!MiddleName.Value + " " + Fields!LastName.Value)
  7. DepartmentName for Department.
  8. BaseRate for the BaseRate column.
  9. Right click on the BaseRate column and go to its properties (press the F4 function key).
  10. In the Hidden property of the column write the below expression:
    1. =iif(User!UserID.Equals("Domain\User"),true,false)
  11. Based on the above condition the column and its corresponding rows will either be visible or hidden, and this expression can be expanded further to include more complex conditions.

Hope you got the idea of how to hide columns, but there is a major issue with the aforementioned steps:

  1. What if the column to be hidden is not at the end of the report but somewhere in the middle, i.e. the report has columns after the column that is hidden based on the expression? If such cases occur (and I reckon they would be frequent), the report aesthetics go for a toss 😢.
  2. To see this behavior, add a column after Base Rate and run the report (ensure the hidden property evaluates to true). See below, Base Rate is blank.


If we write the expression at the cell level instead of the column level (which may be true in most cases), the column heading is still present but the cells under it are blank.


The first one is worse than the second, but a more elegant approach than both would be if, when columns are hidden, the report automatically rendered itself as if the column did not exist there in the first place.

I have also checked the column Width and Height (under Size) properties, but there is no place where one could write an expression. If there were, we could set the column width to 0 (analogous to hiding columns in an Excel sheet).

  1. One workaround would be to put the hidden column at the end, but there are plenty of ifs and buts where this would fail, and it is not always practical to put all such columns at the end for readability purposes (for those users who do have access).
  2. Generate two reports, one with the column and one without, and give access at the report level. (But then there is no point to this post at all 😮.)
  3. The other method is, instead of hiding the column and its related data, to write an expression in the Value property of the cell: if it evaluates to true, show the data, else show “NA”. This would also solve the business problem, but users who have no permission would see “NA”. If this works for you, remove the expression from the Hidden property of the column and instead write an expression as below:
    1. =iif(User!UserID.Equals("domain\user"),"NA",Fields!BaseRate.Value)

This is one of the methods I have found for hiding columns; I am not sure if there are other methods though.

Though the above method works for individual users, users are not always given access to reports directly, unless of course it is a small set of users. Typical SSRS deployments serve enterprise-wide reporting, and in such scenarios it becomes difficult to manage individual users; instead, permissions are controlled through group membership, i.e. users are added to Windows user groups and the Windows user groups are in turn given access to reports and report folders.

That will be the objective of the next blog, where based on user information we will need to get group membership either from AD or from a locally stored database. Once membership is identified, columns are hidden / shown based on an expression as explained in this blog.

If you have any other ideas please feel free to send me a mail.

DBCC ShrinkDatabase – Will it cause index fragmentation? – Last in Series

Well, as I learnt in the previous blog, shrinking the database caused fragmentation, but why? Before we discuss fragmentation, let me try to explain the DBCC ShrinkDatabase command.

The DBCC ShrinkDatabase command takes the database name, a target percent and NOTRUNCATE | TRUNCATEONLY as parameters.

TRUNCATEONLY or NOTRUNCATE indicates to the database engine whether the space reclaimed should be given back to the operating system or not.

TRUNCATEONLY: the database engine releases the free space at the end of the file to the OS. NOTRUNCATE: it does not release space to the OS, though the file contents are compacted towards the start of the file (leaving the empty space at the end).

The target percent is the slightly confusing part. The target percent is not the target size the database should be after the shrink operation; it is the target free space the database should have after shrinking.

When a database is shrunk, either through Management Studio or the DBCC command, data is moved from the end of the file towards the beginning of the file. It tries to fill pages as it moves them from the end to the start. Pages cannot be moved in large chunks (there may not be big contiguous free chunks at the start of a file in a normal production environment), so pages are moved individually, inducing fragmentation.

In this case I learnt that a completely defragmented index ended up 97% fragmented after the shrink operation, which is terrifying with regards to performance. Are you all seeing the same🙂

Thus, if one is forced to do a shrink operation, it should be followed by an index rebuild (DBCC DBREINDEX) to defragment the indexes, as in the sketch below.
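A minimal sketch of that sequence (database and table names are placeholders for your own):

-- Shrink, leaving 50% free space in the files
DBCC SHRINKDATABASE (YourDatabase, 50)
go
-- Rebuild all indexes on the affected table afterwards to remove the fragmentation
DBCC DBREINDEX ('dbo.YourTable')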

This is also one of the reasons the Auto Shrink database property should be turned off.

In a later post I would like to check whether compressing and decompressing an existing table induces fragmentation or not, or maybe something more interesting🙂

If you have any questions / comments / corrections pertaining to any of the fragmentation details, please feel free to send me a mail.