Tuesday, September 30, 2008

Common Table Expression in SQL Server 2005

When working with SQL Server 2005, there are times when you need to perform several aggregate functions, or to define a view that needs to exist only for the duration of a particular query. A Common Table Expression (CTE) does just that: it helps you define and manipulate query results that need temporary storage, and its scope is limited to the statement that defines it. CTEs make queries easier to write and review. Another defining feature of a CTE is that it can self-reference, meaning it can refer to itself as many times as required in the same query. Microsoft based the Common Table Expression in SQL Server 2005 on the ANSI SQL-99 standard. Before its introduction, SQL Server 2000 users relied on derived tables and temporary tables.

Types of Common Table Expression

CTEs can be recursive or non-recursive. In its non-recursive form, a CTE can serve as a substitute for a derived table or a view, and can be used in place of user-defined routines. It provides a convenient tool for creating queries by building tables as and when they are required within nested SELECT statements. This simplifies query building by allowing queries to be developed in separate logical blocks.
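As a sketch of the non-recursive case (the Sales table and its columns are illustrative, not from this article), the same aggregation can be written once as a derived table and once as a CTE:

```sql
-- Derived table version: the subquery is buried inside the FROM clause
SELECT Region, TotalAmount
FROM (SELECT Region, SUM(Amount) AS TotalAmount
      FROM Sales
      GROUP BY Region) AS RegionTotals
WHERE TotalAmount > 10000;

-- Equivalent non-recursive CTE: the same logic in a separate, named block
WITH RegionTotals (Region, TotalAmount) AS
(
    SELECT Region, SUM(Amount)
    FROM Sales
    GROUP BY Region
)
SELECT Region, TotalAmount
FROM RegionTotals
WHERE TotalAmount > 10000;
```

The CTE form reads top to bottom, which is what makes complex queries easier to develop in separate logical blocks.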

In a recursive CTE, the original CTE is executed repeatedly in order to arrive at different subsets of data that together form the complete result set. It is best used in queries that return hierarchical results, like reporting relationships within an organization. A good example would be employees in an organizational chart or products that have subcomponents. You can think of it as a hybrid of a derived table and a declared temporary table.
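As a minimal illustration of self-reference (the CTE and column names here are illustrative), the recursive member below refers back to the CTE being defined:

```sql
WITH Numbers (n) AS
(
    SELECT 1            -- anchor member seeds the result set
    UNION ALL
    SELECT n + 1        -- recursive member refers back to Numbers itself
    FROM Numbers
    WHERE n < 5
)
SELECT n FROM Numbers;  -- returns the rows 1 through 5
```

Each iteration works on the rows produced by the previous one, and recursion stops when an iteration produces no new rows.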

Creating a Common Table Expression

The syntax for creating a CTE consists of the WITH keyword followed by the expression name that will identify the CTE and a query definition. If the query definition calls for separate names for the columns that will store the results, a column list is provided immediately after the expression name of the CTE. The basic syntax is:

WITH expression_name [ ( column_name [ ,...n ] ) ]
AS
( CTE_query_definition )
[ ,...n ]

In the following example, a Common Table Expression named MyCTE is defined with two result columns, LP and SP. The query definition following AS populates the two columns, and MyCTE is then queried with a SELECT statement:

USE AdventureWorks;
WITH MyCTE (LP, SP) AS
(
    SELECT LP, LP * .95 FROM itemlist.item
)
SELECT * FROM MyCTE;

CTEs and Temporary Tables

A CTE is created with the WITH statement, with the CTE name following it. You will find that temporary tables are not as convenient. For instance, unlike CTEs, temporary tables can be populated only after they have been created, not as part of their definition. Study the following syntax to note the difference (the temporary table name here is illustrative):

CREATE TABLE #itemprice (LP money, SP money)

INSERT INTO #itemprice (LP, SP)
SELECT LP, LP * .95 FROM itemlist.item

Temporary tables, as used in SQL 2000, can be referenced repeatedly from later statements, whereas a Common Table Expression can be referenced only by the statement that immediately follows its definition. Hence, if you write syntax similar to the following example, where another statement comes between the CTE definition and the query that references it, the reference to the CTE will fail:

USE AdventureWorks;
WITH MyCTE (LP, SP) AS
(
    SELECT LP, LP * .95 FROM itemlist.item
)
SELECT LP, LP * .95 FROM itemlist.item  -- this statement consumes the CTE's scope
SELECT * FROM MyCTE                     -- fails: MyCTE is no longer defined

Both a CTE and a temporary table can be queried by name using a SELECT * statement, although the CTE only within the statement that defines it.

CTEs and Derived Tables

Derived tables in SQL 2000 and CTEs in SQL 2005 are very similar. Derived tables serve well in simple queries; however, they suffer from two drawbacks: they can be used only once in a statement, and they cannot be referred to by name elsewhere in the query. CTEs score over derived tables when it comes to complex statements. In fact, CTEs handle simple queries efficiently too, since they can be rewritten as derived tables, as shown below:

SELECT LP, SP
FROM (SELECT LP, LP * .95 AS SP FROM itemlist.item) AS d

Recursive Queries

A recursive query in SQL Server 2005 refers to a recursive CTE. It allows you to query to arbitrary depth over data with a tree-like (hierarchical) structure, for instance the employee reporting structure in an organization, something that was not possible in SQL 2000. You can either set the number of levels of recursion or leave it unlimited, depending upon the query required.
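The number of recursion levels can be capped with the MAXRECURSION query hint; by default SQL Server stops after 100 levels. A small sketch (the CTE name is illustrative):

```sql
WITH Countdown (n) AS
(
    SELECT 5                  -- anchor member
    UNION ALL
    SELECT n - 1
    FROM Countdown
    WHERE n > 1               -- termination condition
)
SELECT n FROM Countdown
OPTION (MAXRECURSION 10);     -- raises an error if more than 10 levels are needed;
                              -- MAXRECURSION 0 removes the limit entirely
```

Capping recursion is the usual safeguard against the infinite-recursion risk described below.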

Let us study the following example of reporting an organizational hierarchy, where an initial subquery is set up to return only those records (employees) that report to the top management (represented by Mng_ID = NULL). If we limit the number of levels of recursion here, we will exclude those employees who fall outside the defined chain of command. If we set no limit, no employee is excluded, but there is a risk of infinite recursion if any employee happens to report, directly or indirectly, to himself, like a director who is also an employee. In such cases, you can instead join the table to itself once for each level.

A recursive subquery is then set up by using the CTE name "DictRep" to append additional rows to the result set. The two are then connected by using the operator UNION ALL. The first or the initial subquery is nonrecursive and is processed first while the recursive query refers to the rows that were added in the previous cycle. Whenever iteration generates no new rows, recursion comes to a halt. Running the following syntax will display the result set that shows the reporting levels in the organization.

USE AdventureWorks ;
WITH DictRep (Log_ID, Mng_ID, Emp_ID) AS
(
    SELECT e.Log_ID, e.Mng_ID, e.Emp_ID      -- anchor: reports to top management
    FROM HRs.Emp e
    WHERE e.Mng_ID IS NULL
    UNION ALL
    SELECT e.Log_ID, e.Mng_ID, e.Emp_ID      -- recursive member
    FROM HRs.Emp e
    INNER JOIN DictRep d ON e.Mng_ID = d.Emp_ID
)
SELECT * FROM DictRep ;


Recursive CTEs need an anchor member definition and a UNION ALL statement to generate appropriate results.

For all those who have so far been working with temporary tables and derived tables alone, Common Table Expressions are a good option to explore. Unlike recursive routines in other languages, a recursive CTE can return multiple rows instead of one scalar value. It can handle complex queries and has a syntax that is easy to understand, read and write.

Loading XML data into SQL Server 2008

SQL Server DBAs will often come across a scenario where they need to load data from an XML document into SQL Server 2005 or 2008. As XML is widely used by many organizations to communicate and share data, loading XML data into SQL Server is becoming a common task for DBAs. This article provides a step-by-step guide on how to create and configure an SSIS package that imports data from an XML file into SQL Server 2008.

The first step is to launch BIDS (SQL Server Business Intelligence Development Studio). In BIDS, select File | New | Project, which will pop up the New Project screen. Here we need to select Business Intelligence Projects under Project Types and then Integration Services Project under Templates. Then provide the SSIS project name, location etc. Once you have entered the details, click OK to create the solution. The project creation screen will look as shown in the image below:

BIDS will open the SSIS package development environment. The screen has a Toolbox on the left and Solution Explorer and Properties on the right. One of the most important areas to notice is the Connection Managers area at the bottom of the screen. In the center of the screen you will see the workspace, which has four tabs, namely Control Flow, Data Flow, Event Handlers and Package Explorer.

To build the SSIS package you need to drag Data Flow Task from the toolbox and drop it on the Control Flow tab. Then you need to rename the Data Flow Task to Load XML and double click on it to open the Data Flow Task. By doing so, you will notice that you are now in the Data Flow Tab of BIDS.

To build a Data Flow, drag the XML Source from the Data Flow Sources section of the toolbox, drop it on the Data Flow designer surface and rename the task XML Source Data File.

Double click the XML Source Data File task to open the XML Source Editor, where you need to select "XML file location" as the value for Data access mode. Then specify the actual path of the XML data file for XML location and finally click the Generate… button to create and save the XSD file.

Next, drag the SQL Server Destination task from the toolbox onto the Data Flow designer and rename it Load XML Data to SQL Server. The remaining configuration steps are as follows:

1) Connect the green arrow from the XML Source Data File task to the Load XML Data to SQL Server task.

2) Double click the Load XML Data to SQL Server task to open the SQL Destination Editor, then click the New… button, which will pop up the Configure OLE DB Connection Manager dialog.

3) For a valid connection to the SQL Server destination database, click the New… button, which will once again pop up the Connection Manager dialog. Select Native OLE DB\SQL Server Native Client 10.0 as the Provider, then enter the SQL Server name, the authentication mode and finally the destination database name. Once you have added all the details, click OK to save the changes.

4) Provide the name of the table in the destination database where the data imported from the XML file will be stored. Then click Mappings to check whether the elements in the XML document are mapped correctly to the table columns in the SQL Server table. Click OK to save the settings. The final screen will look as shown below:

5) The package screen in BIDS will look as shown in the following image:

6) Select the SSIS package, named "LoadXML.dtsx" if you renamed it (by default BIDS names it "Package.dtsx"). Right click the "LoadXML.dtsx" package and select the Execute Package option from the popup menu; this will run the SSIS package.

7) Once the package has executed successfully you will see the tasks turn green, and BIDS will also show the number of records transformed and published to SQL Server.

8) The sample XML data file used in this example is shown in the following image:

9) The data which was loaded into the SQL Server Table by the SSIS package is shown below:

10) Save the SSIS package by pressing CTRL + S

Administering & Monitoring Change Data Capture in SQL Server 2008

SQL Server 2008 introduces a new feature called Change Data Capture (CDC). CDC captures DDL and DML activity on a SQL Server table and places the changes in a separate SQL Server relational table. In this article we will see how a DBA can administer and monitor CDC in SQL Server 2008. The CDC feature is disabled by default at the database level. A member of the sysadmin server role must enable a database for change data capture. Once the database is enabled for CDC, any member of the db_owner fixed database role can enable a table within the database for change data capture.

Overview of Change Data Capture
Once CDC is enabled at the database level, the next step is to enable CDC for each specific table whose changes need to be captured. The CDC feature gathers the changed data from the database transaction log file and inserts the change information into an associated change table, which is created during the setup and configuration of CDC. There is a one-to-one relationship between a source table and its change table, and you can have a maximum of two change tables (capture instances) for a single source table. As CDC needs to continuously read the transaction log file, SQL Server Agent must obviously be running for CDC to work. Because this feature is entrenched in the transaction log architecture, much of the metadata in CDC revolves around the concept of a Log Sequence Number (LSN).

Definition of Log Sequence Number (LSN)
Every record in the Microsoft SQL Server transaction log is uniquely identified by a log sequence number (LSN). LSNs are ordered such that if LSN2 is greater than LSN1, the change described by the log record referred to by LSN2 occurred after the change described by the log record referred to by LSN1.

SQL Server Versions Supporting Change Data Capture
The Change Data Capture feature is available in the SQL Server 2008 Enterprise and Developer editions.

How to Enable CDC for a SQL Server 2008 Database
1. Connect to a SQL Server 2008 instance using SQL Server Management Studio
2. In the query window, type the following TSQL Query to create a ChangeDataCapture Database:

USE master
GO
IF EXISTS (SELECT name FROM sys.databases WHERE name = N'ChangeDataCapture')
    DROP DATABASE ChangeDataCapture
GO
CREATE DATABASE ChangeDataCapture
GO

3. Once the database is successfully created, you need to enable the change data capture feature for the database. This can be done by executing the following TSQL query:

Use ChangeDataCapture
EXEC sys.sp_cdc_enable_db

4. Execute the query below to check whether the database is enabled for CDC:

Select [name] as DBName, is_cdc_enabled from sys.databases

The value of 1 for the is_cdc_enabled column means that the database is enabled for CDC and value of 0 means that the database is not enabled for CDC.

5. Once the database is enabled for CDC, you will see a new cdc schema, a cdc user and a few metadata tables and other system objects created in the ChangeDataCapture database. The most important thing a DBA needs to keep in mind when enabling CDC for a database is to make sure that no cdc schema or cdc user already exists in the database. If a cdc schema or cdc user already exists, the configuration of CDC will fail, so a DBA needs to remove or rename any pre-existing cdc schema or user before configuring CDC.

How to Enable CDC for a SQL Server 2008 Database Table
1. Now let’s create a Currency Table in the ChangeDataCapture Database by executing the following TSQL Query:

Use ChangeDataCapture
CREATE TABLE Currency
(
    CurrencyKey INT IDENTITY(1,1) PRIMARY KEY NOT NULL,
    CurrencyAlternateKey VARCHAR(5),
    CurrencyName VARCHAR(25)
)

2. Once the Currency table is successfully created, a DBA needs to make sure that the SQL Server Agent service is running, since CDC cannot capture changes without it.

3. Enable CDC for the table Currency by executing the following TSQL:

Use ChangeDataCapture
EXEC sys.sp_cdc_enable_table
    @source_schema = 'dbo',
    @source_name = 'Currency',
    @role_name = NULL,
    @supports_net_changes = 1

4. Execute the query below to check whether the table is enabled for CDC:

Use ChangeDataCapture
Select [name], is_tracked_by_cdc from sys.tables

The value of 1 for the is_tracked_by_cdc column means that the CDC is enabled for the table and the value of 0 for is_tracked_by_cdc column means that the CDC is disabled.

5. Once you have enabled CDC for the Currency table, another table is created to keep the changed data and the information about the changes in the source table. The new change table is named cdc.dbo_Currency_CT.

How to disable CDC for a SQL Server 2008 Database Table
Database Administrators can run the TSQL Query below to disable CDC on a table:

Use ChangeDataCapture
EXEC sys.sp_cdc_disable_table
    @source_schema = 'dbo',
    @source_name = 'Currency',
    @capture_instance = 'all'

How to disable CDC for a SQL Server 2008 Database
Database Administrators can run the TSQL query below to disable CDC on a database. The SQL Server Agent should be running when a DBA decides to disable CDC for the database. Once the query has executed successfully, you will see that the SQL Server Agent jobs which monitor the changes happening to tables by reading the transaction log are deleted, and all the system tables related to CDC are dropped as well.

Use ChangeDataCapture
EXEC sys.sp_cdc_disable_db

Growth of Change Data Capture System Tables
An automatic cleanup process runs once every three days. For more intense environments you can use the manual method via the system stored procedure sys.sp_cdc_cleanup_change_table. When you execute this procedure you specify the low LSN; any change records occurring before this point are removed and the start_lsn is set to the low LSN you specified.
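As a sketch of the manual cleanup (assuming the dbo_Currency capture instance configured earlier; the one-day retention point is illustrative):

```sql
DECLARE @low_lsn BINARY(10);

-- Map a point in time to the nearest LSN at or before it
SET @low_lsn = sys.fn_cdc_map_time_to_lsn('largest less than or equal',
                                          DATEADD(day, -1, GETDATE()));

-- Remove all change rows older than that LSN and advance start_lsn
EXEC sys.sp_cdc_cleanup_change_table
    @capture_instance = 'dbo_Currency',
    @low_water_mark   = @low_lsn;
```

Anything still consuming changes older than the low water mark must finish before you run this, or those changes are lost.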

Change Data Capture System Tables
Once the DBA has enabled CDC for the Currency table, SQL Server creates the tables below along with a schema named CDC. CDC enables change tracking on tables so that Data Manipulation Language (DML) and Data Definition Language (DDL) changes made to the tables can be captured. The system tables that store information used by change data capture operations are mentioned below:

cdc.<capture_instance>_CT - one row for each change made to a captured column in the CDC-enabled source table
cdc.captured_columns - one row for each column tracked in a capture instance
cdc.change_tables - one row for each change table in the database
cdc.ddl_history - one row for each Data Definition Language (DDL) change made to tables that are enabled for CDC
cdc.lsn_time_mapping - one row for each transaction having rows in a change table; used to map between log sequence number (LSN) commit values and the time the transaction committed
cdc.index_columns - information about the indexes associated with a CDC-enabled table
dbo.cdc_jobs - created in the msdb database; holds the configuration parameters for the CDC agent jobs
dbo.systranschemas - used to track schema changes in articles published in transactional and snapshot publications; stored in both publication and subscription databases

Change Data Capture System Stored Procedures
Change data capture enables change tracking on tables so that Data Manipulation Language (DML) and Data Definition Language (DDL) changes made to the tables can be captured. The following stored procedures help DBA to implement, monitor, and maintain change data capture:
1. sys.sp_cdc_add_job
2. sys.sp_cdc_change_job
3. sys.sp_cdc_cleanup_change_table
4. sys.sp_cdc_dbsnapshotLSN
5. sys.sp_cdc_disable_db
6. sys.sp_cdc_disable_table
7. sys.sp_cdc_drop_job
8. sys.sp_cdc_enable_db
9. sys.sp_cdc_enable_table
10. sys.sp_cdc_generate_wrapper_function
11. sys.sp_cdc_get_captured_columns
12. sys.sp_cdc_get_ddl_history
13. sys.sp_cdc_help_change_data_capture
14. sys.sp_cdc_help_jobs
15. sys.sp_cdc_restoredb
16. sys.sp_cdc_scan
17. sys.sp_cdc_start_job
18. sys.sp_cdc_stop_job
19. sys.sp_cdc_vupgrade
20. sys.sp_cdc_vupgrade_databases

Change Data Capture Dynamic Management Views
Database Administrators can monitor the change data capture process to determine whether changes are being written correctly, and with reasonable latency, to the change tables. The following DMVs can be used to identify any errors:
1. sys.dm_cdc_log_scan_sessions
2. sys.dm_cdc_errors

Change Data Capture System Functions
Change data capture records DDL & DML activity applied to SQL Server 2008 tables, supplying the details of the changes in an easily consumed relational format. Column information that mirrors the column structure of a tracked source table is captured for the modified rows, along with the metadata needed to apply the changes to a target environment. The following functions are used to return information about the changes:
1. cdc.fn_cdc_get_all_changes_<capture_instance>
2. cdc.fn_cdc_get_net_changes_<capture_instance>
3. sys.fn_cdc_decrement_lsn
4. sys.fn_cdc_get_column_ordinal ( 'capture_instance' , 'column_name' )
5. sys.fn_cdc_get_max_lsn
6. sys.fn_cdc_get_min_lsn
7. sys.fn_cdc_has_column_changed
8. sys.fn_cdc_increment_lsn
9. sys.fn_cdc_is_bit_set
10. sys.fn_cdc_map_lsn_to_time
11. sys.fn_cdc_map_time_to_lsn
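As a sketch of how these functions combine (assuming the dbo_Currency capture instance created earlier in this article):

```sql
DECLARE @from_lsn BINARY(10), @to_lsn BINARY(10);

SET @from_lsn = sys.fn_cdc_get_min_lsn('dbo_Currency');  -- low end of the change table
SET @to_lsn   = sys.fn_cdc_get_max_lsn();                -- current high end of the log

-- Return every change to Currency recorded between the two LSNs
SELECT *
FROM cdc.fn_cdc_get_all_changes_dbo_Currency(@from_lsn, @to_lsn, 'all');
```

The result includes the __$operation column (1 = delete, 2 = insert, 3/4 = update before/after images), which a consumer uses to replay the changes against a target.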

Change Data Capture SQL Server Agent Jobs
There are two SQL Server Agent jobs created when CDC is enabled for the first table in a database: the capture job, which reads the transaction log, and the cleanup job, which prunes old change rows.

Database Administrators can use this feature to monitor DDL and DML changes which are happening to tables.
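The CDC agent jobs themselves can be inspected and controlled from T-SQL, for example:

```sql
-- List the capture and cleanup jobs and their configuration parameters
EXEC sys.sp_cdc_help_jobs;

-- Stop and restart the capture job for the current database
EXEC sys.sp_cdc_stop_job  @job_type = 'capture';
EXEC sys.sp_cdc_start_job @job_type = 'capture';
```

Stopping the capture job does not lose changes; they simply accumulate in the transaction log until the job is restarted.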

DDL Triggers in SQL Server 2005

Triggers are not new to SQL Server. But prior to SQL Server 2005, all triggers were DML triggers, which fire only on an INSERT, UPDATE or DELETE action. Creating a new table, database or user raises a DDL event, and to monitor those, DDL triggers were introduced in SQL Server 2005.


Following is the syntax for DDL triggers:

CREATE TRIGGER trigger_name
ON { ALL SERVER | DATABASE }
[ WITH <ddl_trigger_option> [ ,...n ] ]
{ FOR | AFTER } { event_type | event_group } [ ,...n ]
AS { sql_statement [ ; ] [ ...n ] | EXTERNAL NAME < method specifier > [ ; ] }

DDL triggers can be created at either the database or the server level. If you want to monitor table creations and drops, you should create the DDL trigger on the database, while to monitor operations like database creation you should create the DDL trigger on the server.

Let us take a simple example. Assume that we want to log all new table creations. We will log all the events in another database called DDL_Trigger_Log, in a table with the following schema:

CREATE TABLE [dbo].[tblDDLEventLog](
    [ID] [int] IDENTITY(1,1) NOT NULL,
    [EventTime] [datetime] NULL,
    [EventType] [varchar](15) NULL,
    [ServerName] [varchar](25) NULL,
    [DatabaseName] [varchar](25) NULL,
    [ObjectType] [varchar](25) NULL,
    [ObjectName] [varchar](25) NULL,
    [UserName] [varchar](15) NULL,
    [CommandText] [varchar](max) NULL
)

Then we need to create a DDL trigger so that all the relevant event data is logged to the above table. Following is the DDL trigger:


-- The trigger name below is illustrative
CREATE TRIGGER [ddltrg_LogCreateTable] ON DATABASE
FOR CREATE_TABLE -- Trigger will fire when creating a table
AS
BEGIN
    DECLARE @xmlEventData XML

    -- Capture the event data that is created
    SET @xmlEventData = eventdata()

    -- Insert information into the EventLog table
    INSERT INTO DDL_Trigger_Log.dbo.tblDDLEventLog
        (EventTime, EventType, ServerName, DatabaseName,
         ObjectType, ObjectName, UserName, CommandText)
    SELECT
        REPLACE(CONVERT(VARCHAR(50), @xmlEventData.query('data(/EVENT_INSTANCE/PostTime)')), 'T', ' '),
        CONVERT(VARCHAR(15), @xmlEventData.query('data(/EVENT_INSTANCE/EventType)')),
        CONVERT(VARCHAR(25), @xmlEventData.query('data(/EVENT_INSTANCE/ServerName)')),
        CONVERT(VARCHAR(25), @xmlEventData.query('data(/EVENT_INSTANCE/DatabaseName)')),
        CONVERT(VARCHAR(25), @xmlEventData.query('data(/EVENT_INSTANCE/ObjectType)')),
        CONVERT(VARCHAR(25), @xmlEventData.query('data(/EVENT_INSTANCE/ObjectName)')),
        CONVERT(VARCHAR(15), @xmlEventData.query('data(/EVENT_INSTANCE/UserName)')),
        CONVERT(VARCHAR(MAX), @xmlEventData.query('data(/EVENT_INSTANCE/TSQLCommand/CommandText)'))
END


Then create a table and retrieve the data in the tblDDLEventLog table:

You can see that all the necessary information has been logged. Next, we will look at DDL triggers in more detail.

Database Triggers

As specified before, DDL triggers are executed whenever you create, drop or alter an object at the database level. Users, tables, stored procedures, views, Service Broker objects like queues, functions and schemas are all database-level objects.

In a DDL trigger you can specify the trigger options (i.e. the operations that should fire the trigger). In the above example, the trigger is set to execute when a new table is created. However, rather than specifying each operation, you can specify a DDL event group; in that case the trigger will be executed for all the operations in that event group. For example, if you specified DDL_DATABASE_LEVEL_EVENTS instead of CREATE_TABLE, the trigger would be executed for CREATE_TABLE, ALTER_TABLE and DROP_TABLE events as well, and hence all those events would be logged.

That trigger will look like the one below (the trigger name is illustrative):

CREATE TRIGGER [ddltrg_LogDatabaseEvents] ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    /* Your code goes here */
END

At the end of the article, you will find all the existing trigger events with their hierarchy. If you specify an event group, the trigger will be executed for all the events beneath it.

EVENTDATA is an important function in DDL triggers. The EVENTDATA() function can be called whenever a DDL trigger fires, and its output is in XML format. You can use the XML tags it returns to suit your requirements. Let us see what options we can use with the EVENTDATA() function.

Apart from monitoring table creations, another requirement for DBAs is to prevent users from creating tables or other objects that do not conform to a standard. For example, if you want to stop users from creating tables whose names do not have the prefix tbl, you can use the following DDL trigger.

CREATE TRIGGER [ddltrg_CheckCreateTable] ON DATABASE
FOR CREATE_TABLE
AS
BEGIN
    DECLARE @xmlEventData XML,
            @tableName VARCHAR(50)

    SET @xmlEventData = eventdata()
    SET @tableName = CONVERT(VARCHAR(25), @xmlEventData.query('data(/EVENT_INSTANCE/ObjectName)'))

    IF LEFT(@tableName, 3) <> 'tbl'
    BEGIN
        RAISERROR ( 'You cannot create table name without starting with tbl', 16, -1 )
        ROLLBACK
    END
END

After creating the above DDL trigger, if you try to create a table like the following (the table and column names here are illustrative),

CREATE TABLE Employee
(
    ID INT,
    Description VARCHAR(50)
)

you will get the error below, and the table will not be created because of the ROLLBACK statement specified in the trigger.

Msg 50000, Level 16, State 1, Procedure ddltrg_, Line 17

You cannot create table name without starting with tbl

Msg 3609, Level 16, State 2, Line 1

The transaction ended in the trigger. The batch has been aborted.

It is important to remember that, unlike DML triggers, DDL triggers have no INSTEAD OF option. With a DML INSTEAD OF trigger, the trigger body runs in place of the operation, so there is nothing to roll back. As there is no such option for DDL triggers, you have to issue a ROLLBACK, which can be a bit expensive.

You can extend the DDL trigger to cover stored procedures, functions and schemas.

Also, if you want to stop users from running ALTER TABLE during peak hours, you can do so by using the PostTime XML tag of EVENTDATA().
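A sketch of that idea (the trigger name and the 9:00 to 17:00 peak window are illustrative), following the same EVENTDATA() pattern used above:

```sql
CREATE TRIGGER [ddltrg_BlockAlterTablePeakHours] ON DATABASE
FOR ALTER_TABLE
AS
BEGIN
    DECLARE @xmlEventData XML,
            @postTime DATETIME

    SET @xmlEventData = eventdata()
    -- PostTime arrives as e.g. 2008-09-30T14:30:00; strip the 'T' before converting
    SET @postTime = CONVERT(DATETIME,
        REPLACE(CONVERT(VARCHAR(50),
            @xmlEventData.query('data(/EVENT_INSTANCE/PostTime)')), 'T', ' '))

    IF DATEPART(HOUR, @postTime) BETWEEN 9 AND 16
    BEGIN
        RAISERROR ( 'ALTER TABLE is not allowed during peak hours', 16, -1 )
        ROLLBACK
    END
END
```

As with the naming-convention trigger, the ROLLBACK is what actually undoes the ALTER TABLE.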

Server Triggers

Server DDL triggers fire when server-level operations are performed. For example, if you want to audit CREATE DATABASE operations, a trigger like the following can be used (the trigger name is illustrative):

CREATE TRIGGER [ddltrg_AuditCreateDatabase] ON ALL SERVER
FOR CREATE_DATABASE
AS
BEGIN
    /* Your code goes here */
END

This trigger will also have the same EVENTDATA() function with same output XML format. Hence you will have all the options that database triggers have.

Enable or Disable Triggers

As with DML triggers, you have the option to enable or disable DDL triggers (for both server and database triggers):

DISABLE TRIGGER ddltrg_CheckCreateTable ON DATABASE
ENABLE TRIGGER ddltrg_CheckCreateTable ON DATABASE

-- For server triggers (the trigger name is illustrative):
DISABLE TRIGGER ddltrg_AuditCreateDatabase ON ALL SERVER
ENABLE TRIGGER ddltrg_AuditCreateDatabase ON ALL SERVER

Trigger Execution Order

When there are several triggers, you can define which trigger to execute first and last. There is a system stored procedure named sp_settriggerorder to set the priority. This is the same stored procedure which you can use to set priority for DML triggers as well.

sp_settriggerorder [ @triggername = ] '[ triggerschema. ] triggername'
    , [ @order = ] 'value'
    , [ @stmttype = ] 'statement_type'
    [ , [ @namespace = ] { 'DATABASE' | 'SERVER' | NULL } ]

With the @order parameter you can specify First or Last, which sets the order of trigger execution. The @namespace parameter is set to either DATABASE or SERVER depending on whether the DDL trigger is a database or a server trigger.
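For example, to make the naming-convention trigger defined earlier fire first among the CREATE_TABLE triggers in the database:

```sql
EXEC sp_settriggerorder
    @triggername = 'ddltrg_CheckCreateTable',
    @order       = 'First',
    @stmttype    = 'CREATE_TABLE',
    @namespace   = 'DATABASE';
```

Only First and Last can be assigned; any remaining triggers on the same event fire in an undefined order between them.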

System Tables

It is often necessary to know where the triggers are saved. In the case of database DDL triggers, the information is stored in sys.triggers and sys.trigger_events. The sys.triggers view contains information such as the trigger name and create date, and the sys.trigger_events view contains the events for which those triggers will execute.


SELECT * FROM sys.triggers

SELECT * FROM sys.trigger_events

In case of Server DDL triggers, you have to use sys.server_triggers and sys.server_trigger_events.


SELECT * FROM sys.server_triggers

SELECT * FROM sys.server_trigger_events


Even though there are 100+ events available for DDL triggers, only a few are commonly important, specifically the events for database backup, database restore and SQL Server job-related operations.

Using TRY/CATCH to Resolve a Deadlock in SQL Server 2005

A deadlock is an inevitable situation in the RDBMS architecture and very common in high-volume OLTP environments. A deadlock occurs when at least two transactions are each waiting for the other to complete. SQL Server 2005 provides developers with a new way to deal with error handling: structured TRY/CATCH blocks in T-SQL, similar to exception handling in the .NET languages. In the case of a deadlock, the TRY/CATCH method is powerful enough to handle the exceptions encountered in your code, no matter how deeply nested the application is in a stored procedure.

This article will acquaint you with how to use TRY/CATCH blocks in your code to handle deadlocks. Exception handling offers a powerful mechanism for controlling complex programs that have many dynamic runtime characteristics. As the article progresses, it includes the syntax for new TRY/CATCH exception handling methods that help improve your code.

While this article focuses on how to use TRY/CATCH blocks to handle deadlocks, it can also be used to deal with many different types of SQL Server exception handling. See the SQL Server 2005 Books Online for more on how TRY/CATCH can be used.

Occurrence of Deadlock and Transaction Handling

We will begin with an example that causes a deadlock in SQL Server 2005. A deadlock is a situation wherein two transactions wait for each other to give up their respective locks.

Let us take an example of a publishing house whose inventory database uses two tables, say "Titles" and "Authors." The Titles table stores the information about the books published, whereas the Authors table stores the list of authors.

In a certain scenario, Transaction1 might lock rows in the Titles table and need to update some rows in the Authors table to complete. In a similar manner, Transaction2 holds a lock on those rows in the Authors table, but needs to update the rows in the Titles table held by Transaction1. As a result, neither of the transactions can reach completion: Transaction1 has a lock on the Titles table and Transaction2 has a lock on the Authors table. This brings the ongoing process to a halt, and it resumes only when SQL Server 2005 detects the deadlock and aborts one of the transactions.

Each transaction has an exclusive lock on the data it has inserted. Consequently, any attempt to read the other transaction's inserted data using a SELECT statement will be blocked and result in a deadlock. This deadlock is then detected by SQL Server's lock manager, which cancels the transaction that caused the halt and lets the other transaction complete and release its locks. The transaction that is aborted is generally the one that has consumed the least amount of resources up to that point in time.

How Using TRY/CATCH Helps to Resolve Deadlocks

This section explains how using a TRY/CATCH block helps to write structured and well-designed deadlock-resolving code, capable of trapping errors that were not uncovered in previous versions of SQL Server.

TRY/CATCH lets you separate the action and error handling code. The code meant for the action is enclosed in the TRY block and the code for error handling is enclosed in the CATCH block. In case the code within the TRY block fails, the control automatically jumps to the CATCH block, letting the transaction roll back and resume execution. In addition to this, the CATCH block captures and provides error information that shows you the ID, message text, state, severity and transaction state of an error.

As per our deadlock example above, SQL Server 2005 raises error 1205 (the transaction was chosen as the deadlock victim) when a transaction becomes deadlocked with another process. The CATCH block then traps the 1205 deadlock error and rolls back the victim's transaction so that the other transaction can complete, as shown in the code below. The following code serves as an example of how to release a deadlock.

INSERT Titles VALUES (@Title_ID, @Title_Name, ' ', ' ', ' ', ' ', 1112, 0)
WAITFOR DELAY '00:00:05'

Run these snippets of code simultaneously in two Management Studio windows connected to SQL Server 2005, and make sure first to delete any data that would prevent the inserts into the Titles table. Once done, both windows will return an @@TRANCOUNT level of 0. There will still be a deadlock, but this time TRY/CATCH will trap it. The victim's transaction will no longer be left aborted, and you can see the error in the output of the deadlock victim.



This example shows the power that TRY/CATCH offers: batches no longer get aborted, and deadlock errors are trapped in CATCH blocks. For deadlock victims, error 1205 sends control into the CATCH block, where the error can be explored with the new error handling functions.

Notice that in the TRY/CATCH block code given above, the error functions are used to capture error information. The code also contains a ROLLBACK. The reason is that although the error has been trapped, it leaves the transaction at a standstill. It is therefore your responsibility to roll it back within the CATCH block. This lets you retry the transaction from the beginning.

If no errors are encountered by the last statement of the TRY block, control jumps to the statement immediately after the associated END CATCH statement. If an error is encountered within the TRY block, control passes to the first statement in the corresponding CATCH block. If the END CATCH statement is the last statement in a stored procedure, control returns to the statement that invoked the stored procedure.
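Putting these pieces together, the deadlock-handling pattern can be sketched as a retry loop. This is only a sketch: the table, column names and retry limit below are illustrative assumptions, not the article's exact code.

```sql
DECLARE @retry int
SET @retry = 3

WHILE @retry > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION
        -- hypothetical work that may deadlock with another session
        UPDATE dbo.Title SET Title_Name = 'New Title' WHERE Title_ID = 1
        COMMIT TRANSACTION
        SET @retry = 0                       -- success: leave the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0
            ROLLBACK TRANSACTION             -- release the victim's locks
        IF ERROR_NUMBER() = 1205             -- chosen as deadlock victim
            SET @retry = @retry - 1          -- try the transaction again
        ELSE
        BEGIN
            PRINT ERROR_MESSAGE()            -- some other error: report and stop
            SET @retry = 0
        END
    END CATCH
END
```

The ROLLBACK inside the CATCH block is the key step: it releases the locks the victim still holds, after which the transaction can safely be attempted again.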

Use the following system functions to acquire information about errors within the CATCH block. These functions, if called outside the CATCH block, will return NULL.

  • ERROR_NUMBER() returns the number of the error.
  • ERROR_SEVERITY() returns the severity.
  • ERROR_STATE() returns the error state number.
  • ERROR_PROCEDURE() returns the name of the stored procedure or trigger where the error occurred.
  • ERROR_LINE() returns the line number inside the routine that caused the error.
  • ERROR_MESSAGE() returns the complete text of the error message. The text includes the values supplied for any substitutable parameters, such as lengths, object names or times.
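Inside a CATCH block, all six functions can be queried together. The following small sketch forces a divide-by-zero error purely for illustration:

```sql
BEGIN TRY
    SELECT 1/0   -- deliberately raise an error
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER()    AS ErrorNumber,
           ERROR_SEVERITY()  AS ErrorSeverity,
           ERROR_STATE()     AS ErrorState,
           ERROR_PROCEDURE() AS ErrorProcedure,   -- NULL here: not inside a procedure
           ERROR_LINE()      AS ErrorLine,
           ERROR_MESSAGE()   AS ErrorMessage
END CATCH
```

Because the error is raised inside the TRY block, the SELECT in the CATCH block returns one row describing it; run outside a CATCH block, the same functions would all return NULL.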


We have seen how a deadlock situation arises when two transactions are processed simultaneously. We have incorporated the use of the TRY/CATCH block, which helps to resolve the situation by releasing the locks that block the transactions so that you are able to retrieve the data you want without any halts or delays. The examples above show that a TRY/CATCH block lends itself to writing structured and well-designed deadlock-resolving code capable of trapping errors that were not uncovered before. In addition to this, you can even investigate the error information enclosed within the CATCH block by calling the functions that show you the ID, message text, state, severity and transaction state of an error. Therefore, SQL Server 2005 gives you a robust means to resolve deadlocks using T-SQL.

SQL Server 2008 MERGE Statement

SQL Server 2008 introduces the MERGE statement, which allows users to perform insert, update and delete operations in a single statement. In earlier versions of SQL Server, to achieve the same functionality, the database developer or administrator needed to write separate statements to insert, update or delete data in one table based on certain conditions in another table.

How MERGE Statement Internally Works
The MERGE statement works internally as individual insert, update and delete statements within a single MERGE statement. You need to specify the SOURCE and TARGET tables or queries, which are joined together. Within the MERGE statement you also specify the type of data modification to perform when records between the source and target match, and what actions to perform when they do not match. With the introduction of the MERGE statement, the complex T-SQL code that was previously used to check for the existence or non-existence of data within a data warehouse can be replaced with a single MERGE statement. The use of the MERGE statement can also improve query performance.
Below are the three different matched clauses in MERGE:
    • WHEN MATCHED: rows that meet the join criteria
    • WHEN NOT MATCHED [BY TARGET]: source rows that do not match any row in the target table
    • WHEN NOT MATCHED BY SOURCE: target rows that do not match any row in the source table
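These three clauses map directly onto UPDATE, INSERT and DELETE actions. A skeletal MERGE using hypothetical table and column names might look like this:

```sql
MERGE dbo.TargetTable AS T             -- hypothetical target table
USING dbo.SourceTable AS S             -- hypothetical source table
    ON (T.ID = S.ID)
WHEN MATCHED                           -- row exists in both: update it
    THEN UPDATE SET T.Name = S.Name
WHEN NOT MATCHED BY TARGET             -- row only in source: insert it
    THEN INSERT (ID, Name) VALUES (S.ID, S.Name)
WHEN NOT MATCHED BY SOURCE             -- row only in target: delete it
    THEN DELETE;
```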

Overview on OUTPUT Clause
You can use the OUTPUT clause, which was introduced in SQL Server 2005, to find out which records are being inserted, updated or deleted when you run a MERGE statement. The OUTPUT clause returns the information you specify for every record that is inserted, updated or deleted. The OUTPUT clause can also be used to retrieve the value of identity or computed columns after an insert or update operation is performed against records in a table. In the example below, the OUTPUT clause is used to identify which action has been taken on each record.
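As a standalone illustration of the OUTPUT clause itself, the following sketch (using a hypothetical table variable) returns the identity values generated by a plain INSERT:

```sql
DECLARE @Teams TABLE (ID int IDENTITY(1,1), Name nvarchar(30))

-- OUTPUT returns one row of the specified columns per inserted record
INSERT INTO @Teams (Name)
OUTPUT inserted.ID, inserted.Name
VALUES (N'Australia'), (N'India')
```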

Syntax of MERGE statement:

MERGE [ TOP ( expression ) [ PERCENT ] ]
[ INTO ] target_table [ WITH ( <merge_hint> ) ] [ [ AS ] table_alias ]
USING <table_source>
ON <merge_search_condition>
[ WHEN MATCHED [ AND <clause_search_condition> ] THEN <merge_matched> ]
[ WHEN NOT MATCHED [ BY TARGET ] [ AND <clause_search_condition> ] THEN <merge_not_matched> ]
[ WHEN NOT MATCHED BY SOURCE [ AND <clause_search_condition> ] THEN <merge_matched> ]
[ <output_clause> ]
[ OPTION ( <query_hint> [ ,...n ] ) ];

Example: How to Use Merge Statement
In this example you will create a target table named CricketTeams (the destination table) and populate it with a list of cricket-playing nations. A second table, named CricketTeams_UpdatedList, will act as the source table and holds updated information about the cricket-playing nations. Using the MERGE statement, you will modify the CricketTeams target table with the changes from the CricketTeams_UpdatedList source table.

USE tempdb;

IF OBJECT_ID (N'dbo.CricketTeams', N'U') IS NOT NULL
DROP TABLE dbo.CricketTeams;

CREATE TABLE dbo.CricketTeams
(
CricketTeamID tinyint NOT NULL PRIMARY KEY,
CricketTeamCountry nvarchar(30),
CricketTeamContinent nvarchar(50)
);

INSERT INTO dbo.CricketTeams VALUES
(1, 'Australia', 'Australia'),
(2, 'India', 'Asia'),
(3, 'Pakistan', 'Asia'),
(4, 'Srilanka', 'Asia'),
(5, 'Bangaladesh', 'Asia'),
(6, 'HongKong', 'Asia'),
(7, 'U.A.E', 'Asia'),
(8, 'England', 'Europe'),
(9, 'South Africa', 'Africa'),
(10, 'West Indies', 'North America');

SELECT * FROM CricketTeams

IF OBJECT_ID (N'dbo.CricketTeams_UpdatedList', N'U') IS NOT NULL
DROP TABLE dbo.CricketTeams_UpdatedList;

CREATE TABLE dbo.CricketTeams_UpdatedList
(
CricketTeamID tinyint NOT NULL PRIMARY KEY,
CricketTeamCountry nvarchar(30),
CricketTeamContinent nvarchar(50)
);

INSERT INTO dbo.CricketTeams_UpdatedList VALUES
(1, 'Australia', 'Australia'),
(2, 'India', 'Asia'),
(3, 'Pakistan', 'Asia'),
(4, 'Srilanka', 'Asia'),
(5, 'Bangaladesh', 'Asia'),
(6, 'Hong Kong', 'Asia'),
(8, 'England', 'Europe'),
(9, 'South Africa', 'Africa'),
(10, 'West Indies', 'North America'),
(11, 'Zimbabwe', 'Africa');


MERGE dbo.CricketTeams AS TARGET
USING dbo.CricketTeams_UpdatedList AS SOURCE
ON (TARGET.CricketTeamID = SOURCE.CricketTeamID)
WHEN MATCHED AND (TARGET.CricketTeamContinent <> SOURCE.CricketTeamContinent OR
TARGET.CricketTeamCountry <> SOURCE.CricketTeamCountry)
THEN UPDATE SET TARGET.CricketTeamContinent = SOURCE.CricketTeamContinent,
TARGET.CricketTeamCountry = SOURCE.CricketTeamCountry
WHEN NOT MATCHED BY TARGET THEN
INSERT (CricketTeamID, CricketTeamCountry, CricketTeamContinent)
VALUES (SOURCE.CricketTeamID, SOURCE.CricketTeamCountry, SOURCE.CricketTeamContinent)
WHEN NOT MATCHED BY SOURCE THEN
DELETE
OUTPUT $action,
INSERTED.CricketTeamID AS SourceCricketTeamID,
INSERTED.CricketTeamCountry AS SourceCricketTeamCountry,
INSERTED.CricketTeamContinent AS SourceCricketTeamContinent,
DELETED.CricketTeamID AS TargetCricketTeamID,
DELETED.CricketTeamCountry AS TargetCricketTeamCountry,
DELETED.CricketTeamContinent AS TargetCricketTeamContinent;


SELECT * FROM CricketTeams

Important Points to Remember While Using MERGE Statement

  • The MERGE statement requires a semicolon (;) as the statement terminator; otherwise, error 10713 is raised when a MERGE statement is executed without it
  • At least one of the three MATCHED clauses must be specified when using the MERGE statement; the MATCHED clauses can be specified in any order. However, a variable cannot be updated more than once in the same MATCHED clause
  • If you use @@ROWCOUNT immediately after the MERGE statement, it will return the total number of rows inserted, updated and deleted in the target table
  • The user executing the MERGE statement must have SELECT permission on the SOURCE table and INSERT, UPDATE and DELETE permissions on the TARGET table
  • The MERGE statement runs through the data only once, which also improves performance
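For example, the semicolon and @@ROWCOUNT points can be combined in a short sketch that reuses the example tables from this article:

```sql
MERGE dbo.CricketTeams AS TARGET
USING dbo.CricketTeams_UpdatedList AS SOURCE
    ON (TARGET.CricketTeamID = SOURCE.CricketTeamID)
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CricketTeamID, CricketTeamCountry, CricketTeamContinent)
    VALUES (SOURCE.CricketTeamID, SOURCE.CricketTeamCountry, SOURCE.CricketTeamContinent);  -- required terminator

-- Total number of rows inserted, updated and deleted by the MERGE above
SELECT @@ROWCOUNT AS TotalRowsAffected
```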

MERGE Statement Execution Results

Even though at first glance the MERGE statement code looks confusing, if you analyze it, the logical implementation is very simple. Now let me explain what actually happens when we use the MERGE statement.

In the above snippet you could see that you have performed an UPDATE, DELETE and INSERT operation for records in CricketTeams (TARGET Table) based on certain conditions in CricketTeams_UpdatedList (SOURCE Table).

You can see that the record with CricketTeamID “6” is UPDATED in the CricketTeams (TARGET) table. This is because in the CricketTeams_UpdatedList SOURCE table the value for CricketTeamCountry is “Hong Kong”, whereas in the CricketTeams TARGET table the value was “HongKong”. As the value of the CricketTeamCountry column in the SOURCE table changed when compared to the data in the TARGET table, the record was UPDATED.

Next, you can also see a DELETE operation being performed on the CricketTeams TARGET table for the record with CricketTeamID “7”. This happens because there is no record in the CricketTeams_UpdatedList SOURCE table with CricketTeamID “7”. This means the record is no longer required, and it is deleted from the CricketTeams TARGET table.

Finally, you can see an INSERT operation being performed on the CricketTeams TARGET table. The new row is inserted into the CricketTeams TARGET table because the CricketTeams_UpdatedList SOURCE table contains a new record with CricketTeamID “11” that is not present in the CricketTeams TARGET table. Hence the record is INSERTED into the CricketTeams TARGET table.

Using the MERGE statement a developer can reduce the complex TSQL code which is used to implement insert, update and delete logic when loading data into warehouse tables. The use of the Merge statement will also improve query performance.

SQL Server 2008 New DATETIME DataTypes

A major change in SQL Server 2008 is the introduction of four new DATETIME data types: DATE, TIME, DATETIMEOFFSET and DATETIME2. In addition to these newly introduced data types, there are new DATETIME functions as well.

DATE Data Type

In SQL Server 2005, there is no date-specific data type to store only a date. You must use the DATETIME or SMALLDATETIME data types. In addition to the date you have entered, you will see a time component, which will appear as 12:00 AM. You then need to format your output to display only the date component. Most of the time you can use the getdate() function to store the current date. If you save the getdate() value in a SMALLDATETIME or DATETIME column in SQL Server 2005, you will also store the current time, which may lead to many issues. For example, if you want to search records for a given date and you use

SELECT * FROM tblDate Where [Date] = '2007-10-01'

It will not work properly because of the existing time component in the Date column. Therefore, you need to use the following query.

SELECT * FROM tblDate Where datediff(d, [Date], '2007-10-01') = 0

While the above query will work, there is a high chance that an existing index on the Date column will not be used. Still, you can use the above query for a small number of records.
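A common index-friendly workaround is a range predicate, which keeps the comparison on the bare column. This is a sketch against the same hypothetical tblDate table:

```sql
-- Index-friendly alternative to wrapping [Date] in datediff():
-- the column is compared directly, so an index on [Date] remains usable
SELECT *
FROM tblDate
WHERE [Date] >= '2007-10-01'
  AND [Date] <  '2007-10-02'
```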

Although there are workarounds, it is very clear that there is a need for a DATE data type to reduce time and potential errors.


DECLARE @dt DATE
SET @dt = getdate()
PRINT @dt


The output of the above script is 2007-10-27. As you can see, there is no time component. The range for the DATE datatype is from 0001-01-01 through 9999-12-31.

Unfortunately, the color of the DATE text is not blue, which is the default for all other datatypes. This may be a bug that needs to be fixed in coming CTPs.

TIME Datatype

Similar to the DATE datatype, there is a TIME datatype for cases where you need to store only the time.

The following is a sample query for using the TIME datatype.


DECLARE @dt TIME
SET @dt = getdate()
PRINT @dt


The output of the above script is 23:48:04.0570000. The range for the TIME data type is 00:00:00.0000000 through 23:59:59.9999999.


DATETIME2 Datatype

The new DATETIME2 datatype is a date/time datatype with a larger fractional-seconds precision and year range than the existing DATETIME datatype. You have the option of specifying the number of fractional digits you need. The maximum precision you can specify is 7, while the minimum is 0. The following is an example of using DATETIME2.

DECLARE @dt7 datetime2(7)

SET @dt7 = Getdate()

PRINT @dt7

The result of above script is 2007-10-28 22:11:19.7030000.

The following is a list of outputs you get for each of the fractions.




datetime2(0) : 2007-10-28 22:11:20
datetime2(1) : 2007-10-28 22:11:19.7
datetime2(2) : 2007-10-28 22:11:19.70
datetime2(3) : 2007-10-28 22:11:19.703
datetime2(4) : 2007-10-28 22:11:19.7030
datetime2(5) : 2007-10-28 22:11:19.70300
datetime2(6) : 2007-10-28 22:11:19.703000
datetime2(7) : 2007-10-28 22:11:19.7030000


DATETIMEOFFSET Datatype

Currently, when you save a date and time in a column, nothing indicates which time zone that date and time belongs to. This can be especially important when you are dealing with data from several countries with different time zones. The new DATETIMEOFFSET datatype defines a date combined with a time of day that has time zone awareness and is based on a 24-hour clock. The following script illustrates the usage of the DATETIMEOFFSET datatype.


DECLARE @dt datetimeoffset
SET @dt = '2007-10-29 22:50:55 -1:00'
PRINT @dt


DECLARE @dt1 datetimeoffset
SET @dt1 = '2007-10-29 22:50:55 +5:00'
PRINT @dt1


DateTime Functions

Currently we have the GETDATE function in SQL Server 2005 and SQL Server 2000 to retrieve the current date and time. Additionally, there are several other functions in SQL Server 2005, namely CURRENT_TIMESTAMP, DATEADD, DATEDIFF, DATENAME, DATEPART, DAY, GETUTCDATE, MONTH and YEAR. Apart from these, there are five new functions included in SQL Server 2008: SYSDATETIME, SYSDATETIMEOFFSET, SYSUTCDATETIME, SWITCHOFFSET and TODATETIMEOFFSET. The SYSDATETIME function returns the current system timestamp without the time zone, with an accuracy of 10 milliseconds. The SYSDATETIMEOFFSET function is the same as the SYSDATETIME function, but it includes the time zone.

SYSUTCDATETIME returns the Coordinated Universal Time (the same as Greenwich Mean Time) date and time, within an accuracy of 10 milliseconds. This is derived from the current local time and the time zone setting of the server where SQL Server is running. Both SYSDATETIME and SYSUTCDATETIME return the DATETIME2 data type, whereas SYSDATETIMEOFFSET returns the DATETIMEOFFSET datatype. The following is an example of the above functions.

SELECT SYSDATETIME()
, SYSDATETIMEOFFSET()
, SYSUTCDATETIME()
, CURRENT_TIMESTAMP
, GETDATE()
, GETUTCDATE()
/* Returned:

SYSDATETIME() 2007-10-31 22:14:05.7131792

SYSDATETIMEOFFSET()2007-10-31 22:14:05.7131792 +05:45

SYSUTCDATETIME() 2007-10-31 16:29:05.7131792

CURRENT_TIMESTAMP 2007-10-31 22:14:05.710

GETDATE() 2007-10-31 22:14:05.710

GETUTCDATE() 2007-10-31 16:29:05.710

*/
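The two remaining new functions, SWITCHOFFSET and TODATETIMEOFFSET, are not demonstrated above. The following is a brief sketch of how they can be used; the literal date and offset values are illustrative only.

```sql
DECLARE @dto datetimeoffset
SET @dto = '2007-10-31 22:14:05 +05:45'

-- SWITCHOFFSET: the same point in time, expressed with a different offset
SELECT SWITCHOFFSET(@dto, '-08:00')

-- TODATETIMEOFFSET: attach a time zone offset to a plain date/time value
SELECT TODATETIMEOFFSET('2007-10-31 22:14:05', '+05:45')
```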