tag:blogger.com,1999:blog-23071452035749421232024-02-21T04:56:52.607-08:00SQL ServerSQL Server - TSQL, Queries, Stored procedure, triggers, tables, views, functions,sql server 2005 features, sql server 2008 features
( Select * from Random_Ideas Order By Common_Sense )Anil Yadavhttp://www.blogger.com/profile/09905003317091006921noreply@blogger.comBlogger145125tag:blogger.com,1999:blog-2307145203574942123.post-89199083491086315532014-04-03T09:53:00.001-07:002014-04-03T09:54:05.354-07:00Is Truncate Table DDL or DML Statement?<div dir="ltr" style="text-align: left;" trbidi="on">
<div style="background-color: white; font-family: Verdana, Arial, sans-serif; font-size: 12px; text-align: justify;">
Categories of concepts and things often overlap, and some items are hard to place in a single category. The SQL TRUNCATE TABLE statement is one such item: is it a DDL (Data Definition Language) or a DML (Data Manipulation Language) statement?</div>
<div style="background-color: white; font-family: Verdana, Arial, sans-serif; font-size: 12px; text-align: justify;">
There is an ongoing discussion about this topic. However, if you quickly bingle for this question, you get the impression that the majority is leaning toward defining the TRUNCATE TABLE statement as a DDL statement. For example, <a href="http://en.wikipedia.org/wiki/Truncate_(SQL)" style="color: #02469b;">Wikipedia</a> clearly states: “In SQL, the <b><code>TRUNCATE TABLE</code></b> statement is a Data Definition Language (DDL) operation that marks the extents of a table for deallocation (empty for reuse).” Disclaimer: please note that I do not regard Wikipedia as the “ultimate, trustworthy source”; I prefer sources that are signed!</div>
<div style="background-color: white; font-family: Verdana, Arial, sans-serif; font-size: 12px; text-align: justify;">
Some of the reasons why many people define the statement as a DDL statement include:</div>
<ul style="background-color: white; font-family: Verdana, Arial, sans-serif; font-size: 12px; text-align: justify;">
<li>It requests schema locks in some systems</li>
<li>It cannot be rolled back in some systems</li>
<li>It does not include a WHERE clause</li>
<li>It does not fire triggers in some systems</li>
<li>It resets the autonumbering column value in some systems</li>
<li>It deallocates system pages directly, not through an internal table operation</li>
<li>and more.</li>
</ul>
<div style="background-color: white; font-family: Verdana, Arial, sans-serif; font-size: 12px; text-align: justify;">
On the other hand, it looks like there is only one reason to treat the statement as a DML statement:</div>
<ul style="background-color: white; font-family: Verdana, Arial, sans-serif; font-size: 12px; text-align: justify;">
<li>Logically, you just get rid of the data, like with the DELETE statement.</li>
</ul>
<div style="background-color: white; font-family: Verdana, Arial, sans-serif; font-size: 12px; text-align: justify;">
Even the <a href="http://en.wikipedia.org/wiki/Truncate_(SQL)" style="color: #02469b;">Wikipedia</a> article that I referred to says “The <code>TRUNCATE TABLE mytable</code> statement is logically (though not physically) equivalent to the <code>DELETE FROM mytable</code> statement (without a <code>WHERE</code> clause).”</div>
<div style="background-color: white; font-family: Verdana, Arial, sans-serif; font-size: 12px; text-align: justify;">
As is often the case, I have to disagree with the majority. I understand that the categorization is somewhat confusing, and the categories may even overlap. However, in my understanding, the single reason for placing the TRUNCATE TABLE statement in the DML category is “THE” reason. One of the most important ideas in the Relational Model is the separation between the logical and the physical level. We, the users, manipulate data at the logical level; the physical implementation is left to the database management system. And this is the important part: logically, when you truncate a table, you don’t care how the statement is implemented internally; you just want to get rid of the data. It really does not matter what kind of locks a system uses, or whether it allows a WHERE clause. The logical point is what matters. Therefore, I would categorize the <strong>TRUNCATE TABLE</strong> statement as a <strong>DML</strong> statement.</div>
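The logical equivalence can be sketched in T-SQL. This is a minimal illustration assuming SQL Server (the table name is hypothetical; as the bullet list above notes, details such as rollback behavior differ between systems):

```sql
-- Hypothetical table, used only for illustration.
CREATE TABLE dbo.Demo (Id INT IDENTITY(1,1), Payload VARCHAR(50));
INSERT INTO dbo.Demo (Payload) VALUES ('a'), ('b');

-- Logically equivalent: both remove every row from the table.
DELETE FROM dbo.Demo;       -- DML: logged per row, fires DELETE triggers, keeps the IDENTITY seed
TRUNCATE TABLE dbo.Demo;    -- deallocates extents, minimally logged, resets the IDENTITY seed

-- Note: in SQL Server specifically, TRUNCATE TABLE is transactional
-- and can be rolled back inside an explicit transaction:
BEGIN TRAN;
TRUNCATE TABLE dbo.Demo;
ROLLBACK;                   -- the truncation is undone
```

Either statement leaves the table empty; only the physical mechanism differs, which is exactly the point of the argument above.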
<div style="background-color: white; font-family: Verdana, Arial, sans-serif; font-size: 12px; text-align: justify;">
Of course, this is a purely theoretical question, and it is not really important for your practical implementation. As long as your app is doing what it should do, you don’t care too much about these nuances. However, IMO there is in general not enough theoretical knowledge spread around, and therefore it makes sense to try to reach the correct understanding.</div>
<div style="background-color: white; font-family: Verdana, Arial, sans-serif; font-size: 12px; text-align: justify;">
<br /></div>
<div style="background-color: white; font-family: Verdana, Arial, sans-serif; font-size: 12px; text-align: justify;">
Taken from <a href="http://sqlblog.com/user/Profile.aspx?UserID=33418" id="ctl00___ctl00___ctl00_ctl00_bcr_ctl00___Entry___AuthorLink" style="color: #02469b; font-size: 9px; text-align: left;">Dejan Sarka</a></div>
<div style="background-color: white; text-align: justify;">
<span style="font-family: Verdana, Arial, sans-serif;"><span style="font-size: 12px;"><a href="http://sqlblog.com/blogs/dejan_sarka/archive/2014/03/28/truncate-table-ddl-or-dml-statement.aspx">http://sqlblog.com/blogs/dejan_sarka/archive/2014/03/28/truncate-table-ddl-or-dml-statement.aspx</a></span></span><br />
<span style="font-family: Verdana, Arial, sans-serif;"><span style="font-size: 12px;"><br /></span></span></div>
</div>
Anil Yadavhttp://www.blogger.com/profile/09905003317091006921noreply@blogger.com0tag:blogger.com,1999:blog-2307145203574942123.post-71291685950897040682012-05-16T01:55:00.000-07:002012-05-16T01:55:02.992-07:00SQL Server 2012 - New Functions<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
<div style="font-family: 'Segoe UI', Verdana, Arial; font-size: 13px;" xmlns="http://www.w3.org/1999/xhtml">
Microsoft SQL Server 2012 introduces 14 new built-in functions. Microsoft keeps improving its products; kudos to them.</div>
<div style="font-family: 'Segoe UI', Verdana, Arial; font-size: 13px;" xmlns="http://www.w3.org/1999/xhtml">
<br /></div>
<div style="font-family: 'Segoe UI', Verdana, Arial; font-size: 13px; margin-top: -6px;" xmlns="http://www.w3.org/1999/xhtml">
The new functions are:</div>
<div style="font-family: 'Segoe UI', Verdana, Arial; font-size: 13px; margin-top: -6px;" xmlns="http://www.w3.org/1999/xhtml">
<br /></div>
<div style="font-family: 'Segoe UI', Verdana, Arial; font-size: 13px; margin-top: -6px;" xmlns="http://www.w3.org/1999/xhtml">
<span class="label" style="font-weight: bold;">Conversion functions</span></div>
<ul style="font-family: 'Segoe UI', Verdana, Arial; font-size: 13px; margin-left: 40px; padding-left: 0px;" xmlns="http://www.w3.org/1999/xhtml">
<li style="list-style-image: url(http://i3.msdn.microsoft.com/Areas/Brand/Content/b.gif);"><a href="http://msdn.microsoft.com/en-us/library/hh213316.aspx" style="color: #960bb4; text-decoration: none;">PARSE (Transact-SQL)</a></li>
<li style="list-style-image: url(http://i3.msdn.microsoft.com/Areas/Brand/Content/b.gif);"><a href="http://msdn.microsoft.com/en-us/library/hh230993.aspx" style="color: #960bb4; text-decoration: none;">TRY_CONVERT (Transact-SQL)</a></li>
<li style="list-style-image: url(http://i3.msdn.microsoft.com/Areas/Brand/Content/b.gif);"><a href="http://msdn.microsoft.com/en-us/library/hh213126.aspx" style="color: #960bb4; text-decoration: none;">TRY_PARSE (Transact-SQL)</a></li>
</ul>
<div style="font-family: 'Segoe UI', Verdana, Arial; font-size: 13px;" xmlns="http://www.w3.org/1999/xhtml">
<span class="label" style="font-weight: bold;">Date and time functions</span></div>
<ul style="font-family: 'Segoe UI', Verdana, Arial; font-size: 13px; margin-left: 40px; padding-left: 0px;" xmlns="http://www.w3.org/1999/xhtml">
<li style="list-style-image: url(http://i3.msdn.microsoft.com/Areas/Brand/Content/b.gif);"><a href="http://msdn.microsoft.com/en-us/library/hh213228.aspx" style="color: #960bb4; text-decoration: none;">DATEFROMPARTS (Transact-SQL)</a></li>
<li style="list-style-image: url(http://i3.msdn.microsoft.com/Areas/Brand/Content/b.gif);"><a href="http://msdn.microsoft.com/en-us/library/hh213312.aspx" style="color: #960bb4; text-decoration: none;">DATETIME2FROMPARTS (Transact-SQL)</a></li>
<li style="list-style-image: url(http://i3.msdn.microsoft.com/Areas/Brand/Content/b.gif);"><a href="http://msdn.microsoft.com/en-us/library/hh213233.aspx" style="color: #960bb4; text-decoration: none;">DATETIMEFROMPARTS (Transact-SQL)</a></li>
<li style="list-style-image: url(http://i3.msdn.microsoft.com/Areas/Brand/Content/b.gif);"><a href="http://msdn.microsoft.com/en-us/library/hh231077.aspx" style="color: #960bb4; text-decoration: none;">DATETIMEOFFSETFROMPARTS (Transact-SQL)</a></li>
<li style="list-style-image: url(http://i3.msdn.microsoft.com/Areas/Brand/Content/b.gif);"><a href="http://msdn.microsoft.com/en-us/library/hh213020.aspx" style="color: #960bb4; text-decoration: none;">EOMONTH (Transact-SQL)</a></li>
<li style="list-style-image: url(http://i3.msdn.microsoft.com/Areas/Brand/Content/b.gif);"><a href="http://msdn.microsoft.com/en-us/library/hh213396.aspx" style="color: #960bb4; text-decoration: none;">SMALLDATETIMEFROMPARTS (Transact-SQL)</a></li>
<li style="list-style-image: url(http://i3.msdn.microsoft.com/Areas/Brand/Content/b.gif);"><a href="http://msdn.microsoft.com/en-us/library/hh213398.aspx" style="color: #960bb4; text-decoration: none;">TIMEFROMPARTS (Transact-SQL)</a></li>
</ul>
<div style="font-family: 'Segoe UI', Verdana, Arial; font-size: 13px;" xmlns="http://www.w3.org/1999/xhtml">
<span class="label" style="font-weight: bold;">Logical functions</span></div>
<ul style="font-family: 'Segoe UI', Verdana, Arial; font-size: 13px; margin-left: 40px; padding-left: 0px;" xmlns="http://www.w3.org/1999/xhtml">
<li style="list-style-image: url(http://i3.msdn.microsoft.com/Areas/Brand/Content/b.gif);"><a href="http://msdn.microsoft.com/en-us/library/hh213019.aspx" style="color: #960bb4; text-decoration: none;">CHOOSE (Transact-SQL)</a></li>
<li style="list-style-image: url(http://i3.msdn.microsoft.com/Areas/Brand/Content/b.gif);"><a href="http://msdn.microsoft.com/en-us/library/hh213574.aspx" style="color: #960bb4; text-decoration: none;">IIF (Transact-SQL)</a></li>
</ul>
<div style="font-family: 'Segoe UI', Verdana, Arial; font-size: 13px;" xmlns="http://www.w3.org/1999/xhtml">
<span class="label" style="font-weight: bold;">String functions</span></div>
<ul style="font-family: 'Segoe UI', Verdana, Arial; font-size: 13px; margin-left: 40px; padding-left: 0px;" xmlns="http://www.w3.org/1999/xhtml">
<li style="list-style-image: url(http://i3.msdn.microsoft.com/Areas/Brand/Content/b.gif);"><a href="http://msdn.microsoft.com/en-us/library/hh231515.aspx" style="color: #960bb4; text-decoration: none;">CONCAT (Transact-SQL)</a></li>
<li style="list-style-image: url(http://i3.msdn.microsoft.com/Areas/Brand/Content/b.gif);"><a href="http://msdn.microsoft.com/en-us/library/hh213505.aspx" style="color: #960bb4; text-decoration: none;">FORMAT (Transact-SQL)</a></li>
</ul>
<div style="font-family: 'Segoe UI', Verdana, Arial; font-size: 13px;" xmlns="http://www.w3.org/1999/xhtml">
In addition to the 14 new functions, one existing function has been changed. The existing <a href="http://msdn.microsoft.com/en-us/library/ms190319.aspx" style="color: #960bb4; text-decoration: none;">LOG (Transact-SQL)</a> function now has an optional second <span class="parameter" style="font-family: Verdana, Arial, Helvetica, sans-serif; font-style: italic; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px;">base</span> parameter.</div>
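A single SELECT can exercise several of the new functions at once. This is an illustrative sketch (requires SQL Server 2012 or later):

```sql
-- Requires SQL Server 2012+.
SELECT
    TRY_CONVERT(INT, 'abc')        AS TryConvertResult,  -- NULL instead of a conversion error
    EOMONTH('20120216')            AS EndOfMonth,        -- 2012-02-29 (leap year)
    IIF(2 > 1, 'yes', 'no')        AS IifResult,         -- 'yes'
    CONCAT('SQL', NULL, 'Server')  AS ConcatResult,      -- 'SQLServer' (NULL treated as empty string)
    DATEFROMPARTS(2012, 5, 16)     AS DateFromParts,     -- 2012-05-16
    LOG(8, 2)                      AS LogBase2;          -- 3 (using the new optional base parameter)
```

Note how TRY_CONVERT returns NULL rather than raising an error, and CONCAT silently ignores NULL arguments, unlike the `+` string operator.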
<div style="font-family: 'Segoe UI', Verdana, Arial; font-size: 13px;" xmlns="http://www.w3.org/1999/xhtml">
<br /></div>
</div>Anil Yadavhttp://www.blogger.com/profile/09905003317091006921noreply@blogger.com0tag:blogger.com,1999:blog-2307145203574942123.post-69914206377892445402009-08-12T10:41:00.000-07:002009-08-12T10:42:46.921-07:00ISRO unveils mapping application 'Bhuvan'ISRO will unveil its version of Google Earth, the iconic mapping service, allowing the common man to zoom into any part of the world on a personal computer using satellite images.<br /><br />However, the new mapping service, Bhuvan (which means earth in Sanskrit), will allow users to have a closer look at any part of the subcontinent barring sensitive locations such as military and nuclear installations.<br /><br />ISRO Chairman G Madhavan Nair will unveil the Bhuvan web portal at a function here tomorrow in the presence of Minister of State in the PMO Prithviraj Chavan.<br /><br />Bhuvan will use images taken by ISRO's seven remote sensing satellites at least a year ago. These eyes in the sky can capture images of objects as small as a car on the road to build a three-dimensional map of the world.<br /><br /><a href="http://bhuvan2.nrsc.gov.in">http://bhuvan2.nrsc.gov.in</a>Anil Yadavhttp://www.blogger.com/profile/09905003317091006921noreply@blogger.com0tag:blogger.com,1999:blog-2307145203574942123.post-82179934593295204862009-07-09T11:01:00.000-07:002009-07-09T11:06:09.139-07:00Google Chrome Operating SystemGoogle Chrome has always been a little more than a browser: it's optimized for running web applications, each tab runs as a separate process, the interface is minimalistic and there's even a task manager. "We realized that the web had evolved from mainly simple text pages to rich, interactive applications and that we needed to completely rethink the browser.
What we really needed was not just a browser, but also a modern platform for web pages and applications, and that's what we set out to build," said Google in September 2008.<br /><br />Google's blog announces a natural extension of the Chrome project: an operating system for netbooks. "Google Chrome OS is an open source, lightweight operating system that will initially be targeted at netbooks. Later this year we will open-source its code, and netbooks running Google Chrome OS will be available for consumers in the second half of 2010. (...) Google Chrome OS will run on both x86 as well as ARM chips and we are working with multiple OEMs to bring a number of netbooks to market next year. The software architecture is simple — Google Chrome running within a new windowing system on top of a Linux kernel."<br /><br />As people use more and more web applications, the operating system becomes less important and it makes no sense to pay for it. The desktop mail client could be replaced by Gmail, the calendaring application could be replaced by Google Calendar, the office suite has lightweight alternatives: Google Docs and Zoho, it makes more sense to use an online feed reader like Google Reader, your scientific calculator is less powerful than Wolfram Alpha and you'll rarely need a video player when you have YouTube, Hulu and other video sites.<br /><br />This idea is not new and there are already operating systems optimized for the browser. For example, Good OS announced last year Cloud, an operating system that "integrates a web browser with a compressed Linux operating system kernel for immediate access to Internet, integration of browser and rich client applications, and full control of the computer from inside the browser". 
If Google manages to create a great user interface, the new operating system could be very successful.<br /><br /><a href="http://googleblog.blogspot.com/2009/07/introducing-google-chrome-os.html">http://googleblog.blogspot.com/2009/07/introducing-google-chrome-os.html</a>Anil Yadavhttp://www.blogger.com/profile/09905003317091006921noreply@blogger.com0tag:blogger.com,1999:blog-2307145203574942123.post-85132203308543662762009-03-31T12:20:00.000-07:002009-03-31T12:25:54.254-07:00Gmail in 5 Indian Languages Hindi, Tamil, Telugu, Kannada and MalayalamIt's hard for me to imagine going without email for a day. It's such an easy and convenient way to communicate with my friends and family. However, there was one limitation that bothered me: my family members and friends who prefer to communicate in Hindi did not have an easy way to type and send email in their language of choice. I am extremely happy to announce the launch of a new feature in Gmail that makes it easy to type email in Indian languages.<br /><br />When you compose a new mail in Gmail, you should now see an icon with an Indian character, as the screenshot below shows. This feature is enabled by default for Gmail users in India. If you do not see this function enabled by default, you will need to go to the "Settings" page and enable this option in the "Language" section.<br /><br />When you click the Indian languages icon, you can type words the way they sound in English and Gmail will automatically convert the word to its Indian local language equivalent. For example, if a Hindi speaker types "namaste" we will transliterate this to "नमस्ते." Similarly, "vanakkam" in Tamil will become "வணக்கம்."
We currently support five Indian languages -- Hindi, Tamil, Telugu, Kannada and Malayalam -- and you can select the language of your choice from the drop-down list next to the icon.<br /><br />They built this new feature using Google's transliteration technology, which is also available on Google India Labs, Orkut, Blogger and iGoogle. I hope you find this feature useful to communicate with those of your friends and family who prefer to write in their native language, and it will be available soon to businesses and schools using Google Apps. Now back to replying to all those Hindi emails I got from my family and friends today!Anil Yadavhttp://www.blogger.com/profile/09905003317091006921noreply@blogger.com1tag:blogger.com,1999:blog-2307145203574942123.post-79193454292837023862009-03-29T08:53:00.000-07:002009-03-29T08:54:21.970-07:00How to Minimize SQL Server BlockingBy default, blocking locks do not time out. The waiting connection waits until the lock is released, and the block is over. If you like, you can set a lock time-out so that a connection does not wait indefinitely for the blocking lock to be released. This is accomplished using the LOCK_TIMEOUT setting.<br /><br />When the LOCK_TIMEOUT setting is used to set a maximum amount of time that a connection can wait for a blocking lock to go away, the connection with the lock that is causing the blocking problem is not affected, but the connection waiting for the block is halted and an error message is received. When this happens, error message 1222, "Lock request time-out period exceeded," is sent to the application.<br /><br />This means that the application needs to include the appropriate error-handling code to deal with this situation and take the appropriate action, which includes rolling back the transaction. 
If the application does not know how to deal with this error message, and the transaction is not rolled back, it is possible that the application will continue as if the transaction had not been cancelled. Because of this, you should not use the LOCK_TIMEOUT setting unless the application(s) affected by it know what to do when they receive this message from SQL Server.<br /><br />The syntax for SET LOCK_TIMEOUT is:<br /><br />SET LOCK_TIMEOUT timeout_period<br /><br />Where timeout_period is the number of milliseconds that a connection waits for a blocking lock to go away before an error is returned from SQL Server to the application. A value of -1 is the default, which means to wait indefinitely. A value of 0 tells SQL Server not to wait at all, and to return the error immediately.<br /><br />This setting applies on a per-connection basis, and stays with the connection until the connection is broken, or a new SET LOCK_TIMEOUT command is issued. [7.0, 2000]Anil Yadavhttp://www.blogger.com/profile/09905003317091006921noreply@blogger.com1tag:blogger.com,1999:blog-2307145203574942123.post-77111077179962120312009-03-29T08:42:00.000-07:002009-03-29T08:45:43.842-07:00A view runs on our development server in 30 seconds, but it takes over 10 minutes on the production server. Why?<span style="font-weight:bold;">Problem</span>There are two SQL Server servers, both with virtually the same hardware. One is a development server, where we create and test Transact-SQL code, and the other is the production SQL Server. Here's my problem. When a particular view runs on the development server, it takes about 30 seconds. But when the identical view runs on the production server, it takes over 10 minutes. The databases on both servers are almost identical. We periodically copy the production database to the development server so that we are working with the same data and the same indexes.
What could be causing the difference in time for this particular view to run?<br /><br /><span style="font-weight:bold;">Solution</span><br />Assuming that the servers and the databases are virtually the same, and the statistics have been updated on both of them, here's what I suggest you check. First, look at the Execution Plan of the view that is causing the problem on the development server. Find out if this particular view is using parallelism as part of the execution plan. I am guessing that this is the case.<br /><br />Now, look at the Execution Plan of the view on the production server. Find out if this particular view is using parallelism as part of the execution plan. My guess is that it is not.<br /><br />If I am right so far, then check the SQL Server parallelism setting for each server. I bet that it is set to "Use All Available CPUs" on the development server, and set to use only one CPU on the production server. If I am right, then set the production server to "Use All Available CPUs," and the performance difference between the two servers should be resolved; performance should be virtually identical on both.<br /><br />By default, SQL Server is set to "Use All Available CPUs" on a server. If this is not your current setting, then it has been changed by someone.<br /><br />Essentially, this setting tells SQL Server whether to attempt to execute queries in parallel using multiple processors. If the default "Use All Available CPUs" option is selected, then SQL Server will attempt to run queries in parallel, using multiple CPUs. This can sometimes result in dramatic improvements in a query's performance. But if this option is turned off, then SQL Server will only use one CPU to execute a query, which can be much slower.<br /><br />Keep in mind that even if SQL Server is set to use parallelism, it may not always use it. If the server is very busy, parallelism may not be used, but when the server is less busy, it may be.
This means that sometimes the view will be fast, and other times slow (depending on whether parallelism is being used).<br /><br />On the other hand, sometimes parallelism can cause queries to take longer to execute than if parallelism is not used. In these cases, you can use a Query Hint to turn off parallelism for the odd-acting query.Anil Yadavhttp://www.blogger.com/profile/09905003317091006921noreply@blogger.com0tag:blogger.com,1999:blog-2307145203574942123.post-64938064526806499502009-03-29T08:40:00.000-07:002009-03-29T08:41:11.922-07:00SQL Server Transact-SQL DMLGenerally, it is better to perform multiple UPDATEs on records in one fell swoop (using one query), instead of running the UPDATE statement multiple times (using multiple queries).<br /><br />For example, you could accomplish this in two different ways:<br /><br />USE Northwind<br />UPDATE Products<br />SET UnitPrice = UnitPrice * 1.06<br />WHERE UnitPrice > 5<br /><br />GO<br /><br />USE Northwind<br />UPDATE Products<br />SET UnitPrice = ROUND(UnitPrice, 2)<br />WHERE UnitPrice > 5<br /><br />GO<br /><br />Or<br /><br />USE Northwind<br />UPDATE Products<br />SET UnitPrice = ROUND(UnitPrice * 1.06, 2)<br />WHERE UnitPrice > 5<br /><br />GO<br /><br />As is obvious from this example, the first option requires two queries to accomplish the same task that the second option accomplishes in one. Running one query instead of two or more usually produces the best performance.
[6.5, 7.0, 2000, 2005]Anil Yadavhttp://www.blogger.com/profile/09905003317091006921noreply@blogger.com0tag:blogger.com,1999:blog-2307145203574942123.post-51432411383308243092009-03-25T10:20:00.000-07:002009-03-25T10:21:11.472-07:00How many logical and physical processors do you have?SELECT cpu_count AS [Logical CPUs], <br /> cpu_count / hyperthread_ratio AS [Physical CPUs]<br />FROM sys.dm_os_sys_infoAnil Yadavhttp://www.blogger.com/profile/09905003317091006921noreply@blogger.com0tag:blogger.com,1999:blog-2307145203574942123.post-79820468689602924132009-03-25T10:15:00.000-07:002009-03-25T10:16:52.412-07:00Sorting different dateformats correctlyDECLARE @Stats TABLE<br /> (<br /> SomeDate DATETIME<br /> )<br /><br /><br />INSERT @Stats<br />SELECT 20000 + ABS(CHECKSUM(NEWID())) % 30000<br />FROM master..spt_values<br /><br /><br />DECLARE @Style INT<br /><br /><br />SET @Style = 100<br /><br /> <br />WHILE @Style <= 113<br /> BEGIN<br /> -- Orders by ISO format but displays according to @Style parameter<br /> SELECT TOP 10 @Style AS Style,<br /> CONVERT(VARCHAR(40), SomeDate, @Style) as SomeDate<br /> FROM @Stats<br /> GROUP BY CONVERT(VARCHAR(40), SomeDate, @Style),<br /> CONVERT(VARCHAR(8), SomeDate, 112)<br /> ORDER BY CONVERT(VARCHAR(8), SomeDate, 112) DESC<br /><br /> <br /> SET @Style = @Style + 1<br /> END<br /><br />The general idea is to group by both the ISO format and the style you wish, then sort by the ISO format and display the other format.<br />The reason this works is that the two expressions in the GROUP BY clause are deterministic.Anil Yadavhttp://www.blogger.com/profile/09905003317091006921noreply@blogger.com0tag:blogger.com,1999:blog-2307145203574942123.post-72203341331761634412009-03-25T10:04:00.000-07:002009-03-25T10:05:29.050-07:00Performance Tuning for SQL Server Backup and RestoreDepending on the size of your database, select the backup method that is the best tradeoff between backup and restore time.
For example, full backups take the longest to perform, but are the fastest to restore. Differential backups are overall faster than full backups, but take longer to restore. Incremental (transaction log) backups are the fastest, but are generally the slowest to restore. [7.0, 2000, 2005]<br /><br />*****<br /><br />If you are backing up your databases directly over a network (not really recommended as it hurts performance), one way to boost throughput is to perform the backup over a dedicated network which is devoted to backups and restores only. All devices should be on the same high-speed switch. Avoid going over a router when backing up over a network, as they can greatly slow down backup speed. [6.5, 7.0, 2000, 2005]Anil Yadavhttp://www.blogger.com/profile/09905003317091006921noreply@blogger.com0tag:blogger.com,1999:blog-2307145203574942123.post-90680125374350611942009-03-13T03:02:00.000-07:002009-03-13T03:06:04.682-07:00SQL Server Transact-SQL DMLPerforming UPDATES takes extra resources for SQL Server to perform. When performing an UPDATE, try to do as many of the following recommendations as you can in order to reduce the amount of resources required to perform the UPDATE. 
The more of the following suggestions you can apply, the faster the UPDATE will perform.<br />If you are updating a column of a row that has a unique index, try to update only one row at a time.<br /><br /><br />Try not to change the value of a column that is also the primary key.<br /><br />When updating VARCHAR columns, try to replace the contents with contents of the same length.<br /><br />Try to minimize the updating of tables that have UPDATE triggers.<br /><br />Try to avoid updating columns that will be replicated to other databases.<br /><br />Try to avoid updating heavily indexed columns.<br /><br />Try to avoid updating a column that is also referenced in the WHERE clause.<br /><br />Of course, you may have very little choice when updating your data, but at least give the above suggestions a thought.Anil Yadavhttp://www.blogger.com/profile/09905003317091006921noreply@blogger.com0tag:blogger.com,1999:blog-2307145203574942123.post-32584875004895423822009-03-12T07:45:00.001-07:002009-03-12T07:54:09.001-07:00How to Perform Multiple lookups to the same table<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBp5hG3FY6y17aeX5XKsDHB3spmHKB2fvaDWwfb-gYWdGHfH4Rb6akqwH8YnKAPO6eknS93yX21lU_45gVrojkwzUfwPYmRH4yTCdxgWOIrEMtCdD6q8_oqxBALR0DkTxlCHA1f_whYX8r/s1600-h/Image1.jpg"><img style="float:right; margin:0 0 10px 10px;cursor:pointer; cursor:hand;width: 155px; height: 227px;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiBp5hG3FY6y17aeX5XKsDHB3spmHKB2fvaDWwfb-gYWdGHfH4Rb6akqwH8YnKAPO6eknS93yX21lU_45gVrojkwzUfwPYmRH4yTCdxgWOIrEMtCdD6q8_oqxBALR0DkTxlCHA1f_whYX8r/s320/Image1.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5312312657966434322" /></a><br />In a SQL Server Integration Services (SSIS) package, when lookups are performed on tables that were populated by a previous task in the same package, the error "could not match" is raised. <br /> <br />See the failed package layout.
In the insert data task, two records are inserted, and the lookup then uses those values to get the id. However, the package fails, indicating that the records are not found even though they are in the table. <br /><br />This happens because of the cache setting in the lookup. There are three caching types: full, partial, and no caching. <br /><br />Full pre-caching: the complete reference data set is read before the input is processed. This is the default caching type. <br /><br />Partial caching: the Lookup transformation specifies the size of the cache that is loaded with reference data. This option is available only with connections that support keyed access. <br /><br />No caching: the reference data set is accessed for each row in the rowset. <br /><br />Because the default caching type is full, the lookup data is cached before the data flow starts, to improve performance. Since the data is inserted after the caching, the lookup fails. Therefore, for this kind of implementation you need to set the caching type to No CachingAnil Yadavhttp://www.blogger.com/profile/09905003317091006921noreply@blogger.com0tag:blogger.com,1999:blog-2307145203574942123.post-27129817814135091222009-03-11T07:18:00.000-07:002009-03-11T07:23:09.837-07:00Action plan: how to get listed in Google's search results for many keywordsTargeting several keywords on your website is crucial if you want to get high rankings on Google. The more keywords related to a specific topic that can be found on your website, the more likely it is that Google will consider your website relevant to that topic.<br /><br />There are good ways and bad ways to target multiple keywords. If you choose the wrong way, search engines will think that you are a spammer. If you choose the right way, your website will get higher rankings.
The following three steps will help you to get high rankings for as many keywords as possible.<br /><br /><strong>Step 1: Remove stuffed keywords from your web pages</strong><br />Some webmasters think that the best way to optimize a website for multiple keywords is to add them on every web page and separate them with a comma.<br /><br />These websites usually have extremely long meta tags with dozens if not hundreds of keywords. You won't get high rankings on Google with that method. Adding dozens of keywords to a web page is called keyword stuffing, and all search engines consider this spam. You actually risk your search engine rankings if you use this method on your web pages.<br /><br /><strong>To do: </strong>Check if your web pages contain long keyword lists. If yes, remove them from your pages.<br /><br /><strong>Step 2: Optimize different pages of your website for different keywords</strong><br />If you add many keywords to the same web page, then the web page will be somewhat relevant to many keywords. If you optimize a web page for one keyword/keyphrase, then your web page will be very relevant to this keyword.<br /><br />It is much easier to get high rankings if the web page is very relevant to the searched keyword. For that reason, you should optimize as many pages of your website for as many keywords as possible.<br /><br />The keywords for which you optimize your web pages should be related. If you have a shoe shop, optimize different pages of your website for the keywords sports shoes, sneakers, sandals and boots. By having optimized web pages for these keywords, you show Google that your website is relevant to the topic shoes.<br /><br /><strong>To do:</strong> Find the best keywords for your website and then optimize different pages of your website for different keywords.<br /><br /><strong>Step 3: Add keyword modifiers and combine these variations</strong><br />In step two, you have optimized your web pages for keywords such as sneakers and sandals.
The next step is to add modifiers to these keywords on your optimized pages. For example, the sneakers page could include these keyword variations: sneakers shop, nike sneakers, cool sneakers, etc.<br /><br />It's usually a good idea to combine all of these keyword variations in one sentence (for example for your web page title): Buy cool Nike sneakers in our sneakers shop in London. With this single sentence, your web page can be found for many keyword combinations that are included in this sentence.<br /><br /><strong>To do:</strong> Find keyword modifiers and add different keyword variations to your optimized pages.<br /><br />If you optimize many pages of your website for different but related keywords, and if you add variations of each keyword to each page, then the content of your website will be perfectly optimized for high rankings on Google.<br /><br />If you combine optimized content with good inbound links, your website will get the best possible rankings on Google and other search engines.Anil Yadavhttp://www.blogger.com/profile/09905003317091006921noreply@blogger.com0tag:blogger.com,1999:blog-2307145203574942123.post-53687487368791377972009-03-11T07:12:00.000-07:002009-03-11T07:15:21.299-07:00Nine factors that affect your website usabilityOptimizing web pages for search engines does not mean creating special pages for search engines. Optimizing web pages for search engines is often the same as optimizing web pages for web surfers.<br /><br />If you do it correctly, your website will be attractive to both web surfers and search engine spiders. The following list shows nine factors that can improve the usability of your website as well as your search engine rankings.<br /><br /><strong>1. You should have fast loading web pages</strong><br />Usability: Web surfers don't want to wait for web pages.<br />Search engine optimization: Search engines can index your web pages more easily. <br /><br /><strong>2. 
Your web pages should be easy to read</strong><br />Usability: It's easier for web surfers to read your web pages.<br />Search engine optimization: Near-white text on a white background and tiny text is considered spam by most search engines.<br /><br /><strong>3. The contents of your web pages should be clearly arranged</strong><br />Usability: Clear headings, paragraphs and bullet lists make your web pages easier to read.<br />Search engine optimization: Clear headings, paragraphs and bullet lists make it easier for search engines to find the topic of your web pages.<br /><br /><strong>4. Your web page images should use the IMG ALT attribute</strong><br />Usability: Web surfers with images turned off and visually impaired visitors will be able to see the content of your images.<br />Search engine optimization: Search engines cannot index the content of your images but they can index the content of the IMG ALT attribute.<br /><br /><strong>5. You should use custom 404 not found error pages</strong><br />Usability: If your 404 not found page contains links to other pages of your website or a search form then people might remain on your website.<br />Search engine optimization: Proper 404 error pages make sure that search engines index the right pages of your website.<br /><br /><strong>6. Your website should be easy to navigate</strong><br />Usability: Clear and concise navigation links that are easy to find help your website visitors to find content on your site.<br />Search engine optimization: Clear and concise navigation links that contain your keywords make it easy for search engines to index all of your web pages.<br /><br /><strong>7. 
Important content is above the fold</strong><br />Usability: Web surfers with small computer screens can quickly see what your web page is about.<br />Search engine optimization: The sooner your important content appears in the HTML code of your web pages, the more likely it is that it will be indexed by search engines.<br /><br /><strong>8. Your web page titles are explanatory</strong><br />Usability: If web surfers bookmark your web pages, a clear web page title will help them to find it again.<br />Search engine optimization: The web page title is one of the most important SEO elements. It should contain your keywords and it should look attractive so that web surfers click on it when they see your web page title in the search results.<br /><br /><strong>9. The URLs of your web pages are meaningful and self-explanatory</strong><br />Usability: It's much easier to remember a web page like www.example.com/support than a web page like www.example.com/123123-werwc.php?2342234.<br />Search engine optimization: If your URLs contain your keywords, this can have a positive effect on your search engine rankings. Dynamic URLs with many variables can lead to problems with search engine spiders.<br /><br />There are many more factors that influence the position of your website in Google's search results.Anil Yadavhttp://www.blogger.com/profile/09905003317091006921noreply@blogger.com0tag:blogger.com,1999:blog-2307145203574942123.post-55262154639358422502009-03-10T12:07:00.000-07:002009-03-10T12:10:26.393-07:00What's new in SQL Server 2008 Reporting Services?With the release of SQL Server 2008, Reporting Services (SSRS) reached its third version. While the previous version contained many new features, this latest release of SSRS is easier to use and an even more powerful enterprise-level reporting tool. This article will discuss the most significant and most useful new features that make SQL Server 2008 Reporting Services a worthwhile upgrade. 
<br /><br /><strong>IIS is no longer required </strong><br />The previous two releases relied on IIS to host the RS Web service and the Report Manager Web application. Many organizations have made a rule of not installing IIS on the same machine as SQL Server for security reasons. This dependency on IIS therefore became a showstopper for many potential SSRS users. Based on user feedback, Microsoft re-architected SSRS 2008 and removed this dependency. <br /><br />SSRS now handles HTTP via HTTP.SYS, a kernel-mode driver in the operating system that intercepts and handles HTTP requests just as IIS does. If this sounds familiar, that's because it's the same mechanism used by the Native XML Web Services in SQL Server 2005 – a feature that, interestingly, was removed in SQL Server 2008. Microsoft also claims that this implementation provides better performance and scalability. The Reporting Services Configuration tool has been updated to provide management capabilities for the Report Manager and Report Server services. <br /><br /><strong>Better memory management </strong><br />Users who have deployed SSRS in high-traffic environments may have noticed performance degradation when many concurrent users access reports with many pages. The reason for the slowness was that SSRS held rendered reports in memory, causing problems if memory demands became too high. SSRS now saves rendered pages to the file system to reduce the load on server memory. For large reports, it also renders pages as the user is viewing them. This on-demand processing decreases the load on the server by not rendering pages the user might never view at all. In addition to these memory enhancements, you can now also set a threshold on how much memory SQL Server Reporting Services should use. <br /><br /><strong>Export into Microsoft Word </strong><br />This feature has been requested pretty much since SSRS 2000 came out in 2004, mainly because there are more Word users than Excel users.
For whatever reason, this feature did not make it into SQL Server 2005, but it's finally here and certainly a very welcome addition to all the other export options. <br /><strong><br />Improved charting components</strong><br />Last year Microsoft acquired Dundas Data Visualization technology for SSRS. Dundas is a company specializing in developing powerful visual components for developers, such as chart, gauge, map and calendar controls. Purchasing the code base for the chart, gauge and calendar controls allowed Microsoft to provide better integration of these components with SSRS and to control the components' future direction. Dundas components have always been visually stunning, so it's no surprise that the new chart component is like night and day compared with the chart control in the previous versions of SSRS. <br /><br />In my opinion, if you do a lot of charting in your reports, this feature alone justifies the effort to upgrade to SQL Server 2008 Reporting Services. The new chart region includes many additional chart types, such as cylinder, pyramid, radar and funnel. Other useful enhancements are interval labeling for avoiding label collisions, combining small slices into a single slice in a pie chart, and more 2-D and 3-D visual effects.<br /><br /><strong>Tablix data region </strong><br />This new data region combines the functionality of both the table and matrix data regions. This region allows you to combine multiple column groups and row groups, essentially allowing you to create reports that have both dynamic row and dynamic column output. You can now more easily create nested and recursive groups and make them adjacent to each other. There is a lot more to the Tablix data region than I can describe in this space.
For more details, read the Understanding the Tablix Data Region section of Microsoft Books Online.<br /><strong><br />Enhanced CSV export </strong><br />Comma-separated value (CSV) export can be very useful for automating report processing if you are interested in extracting data elements from the report. SSRS CSV export in previous versions included both the data and the layout in the export. But the inclusion of report layout elements in the CSV output created additional work and complications. The new output format is much cleaner and easier to use for automated programmatic processing. <br /><br />There are plenty of significant reasons for you to give SSRS 2008 serious consideration. And remember, just like Integration Services, SSRS is a standalone tool you can install on a dedicated server. This means that if your organization is not ready to migrate to SQL Server 2008, you can upgrade just your reporting server and use these new features, using your 2000 or 2005 databases as data sources.Anil Yadavhttp://www.blogger.com/profile/09905003317091006921noreply@blogger.com0tag:blogger.com,1999:blog-2307145203574942123.post-4068675454169637592009-03-10T12:01:00.000-07:002009-03-12T07:54:36.746-07:00History of SQL Server Integration ServicesIntegration Services is the successor to Data Transformation Services (DTS). DTS had humble beginnings. It was started on a shoestring budget with very few resources. Its first incarnation was a proof-of-concept transformation, which later became known as the data pump. The proof of concept caught the attention of some folks around Microsoft, and it was given some funding.<br /><br />The first release of DTS shipped with SQL Server 7.0 to receptive users. The alternatives at the time were either difficult to work with, expensive, or both. Many DBAs were forced to write custom transformation software, which was inflexible and difficult to maintain.
Some tools had limitations, such as the need for source and destination schemas to match exactly, direct dependence on a particular database product, and/or no transformation capabilities. Many wrote custom parsing and transformation applications. For example, many companies are only now converting from hand-coded flat file parsers, SQL scripts, and transformation code to a standard platform such as Integration Services.<br /><br />The first release of DTS addressed several of these issues and simplified life for a lot of people. By using OLE DB for its data access layer, DTS could access various data sources with little or no custom coding. DTS was also affordable because it shipped "in the box" with SQL Server. Users had access to all the power of more expensive products, yet incurred no additional cost for their ETL tools. This was obviously a benefit to IT shops trying to stretch their budgets. DTS was a flexible product that was easy to use. There were also a number of standard tasks in the box, including the Transform Data, Execute Process, ActiveX Script, Execute SQL, and Bulk Insert Tasks.<br /><br />SQL Server 2000 (version 8.0) added even more functionality in the form of new tasks. The Execute Package, FTP, and MSMQ Tasks added incremental improvements across the product. However, users experienced some frustration with DTS when attempting to work with large datasets, along with some of the other limitations inherent in a script-based tool. The time was ripe to create a truly enterprise-ready integration tool.<br /><br />In 2000, the SQL Server team decided to make a substantial investment in the ETL and integration space and brought together some talented folks who formulated the ideas behind the Data Flow Task and the next version of Integration Services.
Over a period of five years, the development time frame for SQL Server 2005, the DTS team completely redesigned and rewrote DTS to become Integration Services 2005.<br /><br />Integration Services 2008 is the next incarnation of that release. While not as revolutionary a release as in 2005, Integration Services 2008 brings some new capabilities and incremental improvements across the product.Anil Yadavhttp://www.blogger.com/profile/09905003317091006921noreply@blogger.com0tag:blogger.com,1999:blog-2307145203574942123.post-16530792598143857742009-03-10T11:58:00.000-07:002009-03-12T07:54:36.747-07:00What Is SQL Server Integration Services?Depending on whom you ask, you might get different answers to that question ranging from descriptions such as a data import/export wizard, to an ETL tool, to a control flow engine, to an application platform, or to a high-performance data transformation pipeline. All are correct because Integration Services is a set of utilities, applications, designers, components, and services all wrapped up into one powerful software application suite. SQL Server Integration Services (SSIS) is many things to many people.<br /><br /><strong>Data Import/Export Wizard</strong><br />One of the most popular features of Integration Services is the Import/Export Wizard, which makes it easy to move data from a source location such as a flat file or database table to a flat file, table, or other destination. The Import/Export Wizard was the first utility developed back in the SQL Server 7.0 time frame and continues today as an important utility in the database administrator's (DBA) toolbox.<br /><br /><strong>ETL tool</strong><br />ETL is an acronym for Extract, Transform, and Load and describes the processes that take place in data warehousing environments for extracting data from source transaction systems; transforming, cleaning, deduplicating, and conforming the data; and finally loading it into cubes or other analysis destinations. 
Although Data Transformation Services (DTS), Integration Services' predecessor application, was considered a valuable tool for doing ETL, Integration Services is where true enterprise ETL became available in SQL Server.<br /><br /><strong>Control flow engine</strong><br />The processes involved in moving data from location to location and transforming it along the way are not restricted to only processing data. Integration Services provides a control flow for performing work that is tangentially related to the actual processing that happens in the data flow, including downloading and renaming files, dropping and creating tables, rebuilding indexes, performing backups, and any number of other tasks. Integration Services provides a full-featured control flow to support such activities.<br /><br /><strong>Application platform</strong><br />Developers can create applications that use Integration Services as a platform, embedding its engines and components within their applications using the provided object models.<br /><br /><strong>High-performance data transformation data pipeline</strong><br />That's a mouthful and really incorporates two ideas: high performance and data pipelining. The Data Flow Task is a high-performance tool because you can use it to perform complex data transformations on very large datasets with remarkably fast processing.
The pipeline concept means that you can process data from multiple heterogeneous data sources, through multiple parallel sequential transformations, into multiple heterogeneous data destinations, making it possible to process data found in differing formats and on differing media in one common "sandbox" location.Anil Yadavhttp://www.blogger.com/profile/09905003317091006921noreply@blogger.com0tag:blogger.com,1999:blog-2307145203574942123.post-43694687771337071002009-03-10T11:53:00.000-07:002009-03-10T11:54:25.776-07:00Performance Tuning SQL Server Backup and RestoreSQL Server backup and restore performance is often a greater issue than many DBAs realize. This is because the time it takes to back up and restore a database plays a very large part in how large a single SQL Server database can be. For example, if it takes more than 24 hours to back up or restore a database, obviously this presents some problems.<br />What can sometimes happen to the DBA is that a database may be of reasonable size now, given their current server hardware, and backups and restores may take a reasonable amount of time. But as time goes by, the database grows, and at some point the amount of time needed to back up and restore the database becomes too long for the backup and restore window available. The moral of this story: keep a close eye on backup and restore times, and factor them in when projecting future hardware needs for your SQL Server databases. [6.5, 7.0, 2000, 2005]Anil Yadavhttp://www.blogger.com/profile/09905003317091006921noreply@blogger.com0tag:blogger.com,1999:blog-2307145203574942123.post-19676412152736726452009-03-07T13:10:00.000-08:002009-03-07T13:11:58.560-08:00Reasons Why You May Not Want to Use a HeapHeaps aren't usually a good choice when designing a database.
Now, I didn't say that heaps are always bad, but in most cases, I think they should be avoided (read the list to see why).<br /><br />As I compiled the list, in some cases I had the opportunity to verify that the items are true, but in other cases, I have not had the time to verify them. I would like your input on the list. Is there anything on the list that is not true, or only partially true (under some circumstances, but not others), and what other reasons are there to avoid using heaps that I have left out?<br /><br />I look forward to your feedback.<br /><br />If non-clustered indexes are not added to a heap, then all queries against the heap will require table scans to retrieve the requested data. If the heap is large, these queries will be very resource intensive and hurt SQL Server's overall performance. <br />Since the data in a heap is unordered, performing a table scan on a heap can cause a lot of extra I/O activity because inefficient random reads, rather than efficient sequential reads, are the norm. <br />While a non-clustered index can be added to a heap to speed up some queries, in almost all cases a bookmark lookup is required to make use of the non-clustered index. A bookmark lookup means that once the record(s) to be returned by the query are identified in the non-clustered index, additional reads (the bookmark lookup) must be made of the related rows in the heap, so that all of the data requested by the query is returned. This is not very I/O efficient, especially if many rows are returned. At some point, it may be faster for SQL Server to do a table scan than to use a non-clustered index when returning many rows. <br />Heaps cannot be replicated using SQL Server replication. <br />If you want to create an XML index on an XML data column, a clustered index must exist on the table. <br />If you want to create a spatial index on a spatial data column (GEOMETRY or GEOGRAPHY), a clustered index must exist on that table.
<br />If a heap has a non-clustered index on it (as the primary key), and data is inserted into the table, two writes have to occur: one write for inserting the row, and one write for updating the non-clustered index. On the other hand, if a table has a clustered index as the primary key, inserts take only one write, not two. This is because a clustered index and its data are one and the same. Because of this, it is faster to insert rows into a table with a clustered index as the primary key than it is to insert the same data into a heap that has a non-clustered index as its primary key. This is true whether or not the primary key is monotonically increasing. <br />When data is updated in a heap, and the updated row is larger than the old row and can't fit into the old space, a forwarding record is inserted into the original location that points to the new location of the row. If this happens a lot, then a lot of space is wasted in the database maintaining the forwarding records. This also contributes to additional I/O activity, as both the pointer and the row have to be read. <br />Even if the data updated in a heap is not larger than the old row (the updated data is smaller than or the same size as the original data), updating a heap with a non-clustered primary key is slower than updating the same table that has a clustered index as the primary key. This is because updating a table with a clustered index is less write intensive than updating a heap with a non-clustered index as its primary key. <br />If a row is deleted from a heap with a non-clustered index as its primary key, it is slower than deleting the same row from the same table with a clustered index as its primary key. This is because it takes more I/O to perform this task on a heap than on a clustered index. <br />When data is deleted from a heap, the data on the page is not compressed (reclaimed).
And even when all of the rows on a heap page are deleted, often the entire page cannot be reclaimed. This not only wastes space, it contributes to fragmentation of the data pages within a database. <br />If you take two identical tables, one a heap with a non-clustered index as its primary key, and one with a clustered index as its primary key, the heap with the non-clustered index will be substantially larger, wasting valuable space and increasing disk I/O. <br />The ALTER INDEX rebuild and reorganize options cannot be used to defragment and reclaim space in a heap (but they can be used to defragment non-clustered indexes on a heap). If you want to defragment a heap, you have three options: 1) create a clustered index on the heap, then drop the clustered index; 2) use SELECT INTO to copy the old table to a new table; or 3) use BCP or SSIS to move the data from the old table to a new table.Anil Yadavhttp://www.blogger.com/profile/09905003317091006921noreply@blogger.com0tag:blogger.com,1999:blog-2307145203574942123.post-2192129322962754972009-03-05T09:05:00.000-08:002009-03-05T09:07:03.403-08:00Importing and exporting bulk data with SQL Server's bcp utilityThe Bulk Copy Program (bcp) utility in Microsoft SQL Server enables database administrators to import bulk data into a table or export it from a table into a file. It also supports several options that define how data will be exported, where it will be imported and which data will be loaded. <br />This tip discusses several examples of bcp commands to bulk-copy data in and out of SQL Server tables. The examples have been tested on SQL Server 2005 and SQL Server 2008, and for all examples I used the AdventureWorks sample database.<br /><br />Exporting data with the bcp utility<br /><br />One of the simplest operations that you can perform with the bcp utility is to bulk-copy data out of a SQL Server table into a text file. You run a bcp command by entering it at a Windows command prompt.
The following command, for example, copies the data from the Sales.vSalesPerson view in the AdventureWorks database to the C:\Data\SalesPerson.txt file:<br /><br />bcp AdventureWorks.Sales.vSalesPerson out C:\Data\SalesPerson.txt -c -T<br /><br />As you can see, the bcp command begins with the utility name, followed by the fully qualified name of the table (database.schema.table). The out keyword is next, which tells the bcp utility that the data will be exported from the table. The path and file name of the target text file follow the out keyword. Note that the command examples in this tip might wrap to multiple lines. All the examples, however, should be run as a single command.<br /><br />In addition to these basic arguments, the bcp utility supports several switches that control the utility's behavior. In the example above, the -c switch indicates that all data is treated as character data, regardless of how it is stored in the source table. If you do not specify the -c switch or another type-related switch, you will be prompted to specify the type for each column after you enter the bcp command. Note that bcp command switches are case-sensitive.<br /><br />The next switch in the above example is -T, which tells the bcp utility to use a trusted connection to connect to the SQL Server instance. If you do not specify -T, you must provide your user name (the -U switch) and your password (the -P switch), or you will be prompted for this information.<br /><br />Because no instance is specified in the preceding example, the utility uses the default instance on the local machine. To specify a SQL Server instance, use the -S switch, followed by the server name, as in the following example:<br /><br />bcp AdventureWorks.Sales.vSalesPerson out C:\Data\SalesPerson.txt -c -T -S Server01<br /><br />The bcp utility will now connect to the default instance on Server01.
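If a trusted connection is not available, the -U and -P switches described above supply SQL Server credentials instead. Here is a sketch of the same export using SQL Server authentication; the login name and password shown are hypothetical placeholders, not values from this tip:

```shell
rem Export using SQL Server authentication rather than -T.
rem "bcp_user" and "StrongP@ss1" are placeholder credentials.
bcp AdventureWorks.Sales.vSalesPerson out C:\Data\SalesPerson.txt -c -S Server01 -U bcp_user -P StrongP@ss1
```

If you omit -P, the utility prompts for the password, which avoids leaving it in your command history.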
If you want to connect to an instance other than the default, you must specify the instance name along with the server name, as in Server01\SqlSrv.<br /><br />By default, the bcp utility uses tabs to separate fields in the target file. You can, however, override that behavior by using the -t switch, as in the following example:<br /><br />bcp AdventureWorks.Sales.vSalesPerson out C:\Data\SalesPerson.csv -c -T -t,<br /><br />In this case, a comma is specified after the -t switch, which means that the data fields will now be separated by commas. This allows you to save the data to a .csv file, as I've done here, which can be handy if you want to view the data in a Microsoft Excel file.<br /><br />These examples have been limited to exporting data from a table. But you can also use the bcp command to run a Transact-SQL query and export the query results. The following bcp command, for example, includes a SELECT statement that retrieves only the SalesPersonID, FirstName, and LastName columns from the vSalesPerson view:<br /><br />bcp "SELECT SalesPersonID, FirstName, LastName FROM AdventureWorks.Sales.vSalesPerson" <br />queryout C:\Data\SalesPerson.csv -c -T -t,<br /><br />In this scenario, the query, enclosed in quotes, is passed into the bcp command instead of the table name. In addition, the out keyword has been replaced with queryout. But the rest of the command is the same as the preceding example. As a result, the SalesPerson.csv file will now contain only the three specified columns. You can also refine your query further: You can include, for example, a WHERE clause that qualifies which rows are returned from the source table.<br /><br />Importing data with the bcp utility <br /><br />The bcp utility makes importing data as simple as exporting data. 
To run the examples in this section, first run the following T-SQL script to create the SalesPeople table in the AdventureWorks database:<br /><br />USE AdventureWorks<br />GO<br />IF OBJECT_ID (N'SalesPeople', N'U') IS NOT NULL<br />DROP TABLE dbo.SalesPeople<br />GO<br />CREATE TABLE dbo.SalesPeople (<br />SalesPersonID INT IDENTITY PRIMARY KEY, <br />FirstName NVARCHAR(50) NOT NULL, <br />LastName NVARCHAR(50) NOT NULL<br />)<br /><br />To import data, you need a source file from which to copy the data. For instance, the following command uses the recently created SalesPerson.csv file to load data into the SalesPeople table:<br /><br />bcp AdventureWorks.dbo.SalesPeople in C:\Data\SalesPerson.csv -c -T -t,<br /><br />First, you must specify the target table, followed by the in keyword that replaces out or queryout in this scenario. Next, you must specify the path and file name of the source file, followed by any applicable switches. <br /><br />When you run this command and view the results, note that while the source file includes the salesperson IDs, these values are not inserted into the SalesPersonID column. The column is defined as an IDENTITY column, so the source data is ignored. To retain the original values, you must add the -E switch to the command, as in the following example:<br /><br />bcp AdventureWorks.dbo.SalesPeople in C:\Data\SalesPerson.csv -c -T -t, -E<br /><br />Your table will now include the desired data.<br /><br />Using a format file<br /><br />When importing or exporting data, you may find that the structure of your source data does not match the structure of the target data. The columns in a text file, for example, might not be in the same order as those in your target table, or the number of columns may be different. You can address this issue by creating a format file that maps the structures of the source and destination. 
Let's look at an example to demonstrate how this works.<br /><br />Suppose you use the following command to export data from the vSalesPerson view to the SalesPeople.txt file:<br /><br />bcp "SELECT LastName, FirstName, SalesPersonID FROM AdventureWorks.Sales.vSalesPerson"<br />queryout C:\Data\SalesPeople.txt -c -T -t,<br /><br />This command uses the same arguments as those in previous examples. Note, however, the order in which the columns are retrieved from the view: LastName, then FirstName, and finally SalesPersonID.<br /><br />Now suppose you plan to use this file to import data into the SalesPeople table. The columns in the table, however, are defined in a different order from that in the text file. To address this issue, you can create a format file to map the columns from the source to the destination. The following command demonstrates how to create a format file:<br /><br />bcp AdventureWorks.dbo.SalesPeople format nul -f C:\Data\SalesPeople.fmt -c -T -t,<br /><br />As previous examples show, the command first specifies the target table. This time, however, the table name is followed by the keywords format nul, which indicate that the bcp utility should create a format file. The -f argument is then used to specify a path and file name for the format file, followed by several switches. As a result, when you run this command, a format file is generated that contains the structure of the SalesPeople table.<br /><br />The following data shows the content of the SalesPeople.fmt format file that is generated by the above command:<br /><br />10.0 <br />3 <br />1 SQLCHAR 0 12 "," 1 SalesPersonID "" <br />2 SQLCHAR 0 100 "," 2 FirstName SQL_Latin1_General_CP1_CI_AS <br />3 SQLCHAR 0 100 "\r\n" 3 LastName SQL_Latin1_General_CP1_CI_AS <br /><br /><br />The first row of the file (10.0) identifies the version of bcp that is being used. 
The second row (3) identifies the number of columns in the table, and the next three rows provide information about the columns themselves:<br /><br /><br />The first field indicates the order in which the columns should appear in the source document. <br /><br /><br />The second field shows the source file data type for each column. Because I specified the -c switch when I generated the file, a character data type is used for all fields as they are extracted from the data file. SQL Server will then convert this data into the correct types as the data is inserted.<br /><br /><br />The third field shows the prefix length for the field, which is used by SQL Server to provide the most compact file storage. A value of 0 is used automatically if you specify the -c switch when creating the format file.<br /><br /><br />The fourth field indicates the number of bytes for the data type of a particular field.<br /><br /><br />The fifth field shows how columns and rows are terminated. Because the -t switch was used when creating the format file, the field values of the source file must be terminated by commas.<br /><br /><br />The sixth field maps to the order in which the columns appear in the SQL Server table.<br /><br /><br />The seventh and final field provides the collation information for character columns in the SQL Server table.<br /><br />To use the format file to import data into the SalesPeople table, we must modify the file as follows:<br /><br />10.0 <br />3 <br />1 SQLCHAR 0 100 "," 3 LastName SQL_Latin1_General_CP1_CI_AS <br />2 SQLCHAR 0 100 "," 2 FirstName SQL_Latin1_General_CP1_CI_AS <br />3 SQLCHAR 0 12 "\r\n" 1 SalesPersonID "" <br /><br /><br />As you can see, the order of the columns has been modified to reflect the order in which they appear in the source file. The SalesPersonID column is now listed last and it is terminated with \r\n.
The LastName column is now listed first and is terminated with a comma.<br /><br />After you modify and save the format file, it is ready to use in your bcp commands. The following example demonstrates how to call the format file:<br /><br />bcp AdventureWorks.dbo.SalesPeople in C:\Data\SalesPeople.txt -f C:\Data\SalesPeople.fmt -T<br /><br />Note that, when you import data from the SalesPeople.txt file, you must also use the -f switch to call the format file. Also notice that you no longer need to include the -t and -c switches because that information is now included in the format file.Anil Yadavhttp://www.blogger.com/profile/09905003317091006921noreply@blogger.com0tag:blogger.com,1999:blog-2307145203574942123.post-69324436312293624962009-02-13T02:19:00.000-08:002009-02-13T02:28:26.281-08:00Passing a Table to a Stored ProcedureSQL Server 2005 and previous versions do not support passing a table variable to a stored procedure. <br /><br />This article introduces a new feature added to SQL Server 2008, which supports passing a TABLE to a stored procedure or function.<br /><br />This article is based on SQL Server 2008 CTP 3. Some of the information may change by the time the product is finally released.<br /><br />Before we create a function or stored procedure that accepts a TABLE variable, we need to define a User Defined TABLE Type, a new type introduced in SQL Server 2008. A TABLE type represents the structure of a table that can be passed to a stored procedure or function.<br /><br />So the first step is to create a User Defined TABLE type. The following TSQL code creates a User Defined TABLE type named "ItemInfo".<br /><br /> CREATE TYPE ItemInfo AS TABLE ( <br /> ItemNumber VARCHAR(50),<br /> Qty INT )<br /><br />You can use the system view SYS.TYPES to see the type that you have just created.
The following query returns all the types defined in the system.<br /><br /> SELECT * FROM SYS.TYPES<br /> /* If you just need to find information about the TABLE types, you can use the following TSQL query. */<br /> SELECT * FROM SYS.TYPES WHERE is_table_type = 1<br />/* There is another view, which is handy for finding information about TABLE types. */<br /> SELECT * FROM SYS.TABLE_TYPES<br /><br />We have now created the TABLE type that we need, so let us see how it works. Let us create a variable of type "ItemInfo" and insert a few records into it. Then let's query the table variable to see if the information was correctly inserted.<br /><br /> /* Let us declare a variable of type ItemInfo which is a TABLE Type */<br /><br /> DECLARE @items AS ItemInfo<br />/* Insert values into the variable */<br /> INSERT INTO @Items (ItemNumber, Qty)<br /><br /> SELECT '11000', 100 UNION ALL<br /><br /> SELECT '22000', 200 UNION ALL<br /> SELECT '33000', 300<br /><br />/* Let's check whether the values were inserted correctly */<br /><br /> SELECT * FROM @Items<br /><br />/* OUTPUT: <br /> ItemNumber Qty<br /> -------------------------------------------------- -----------<br /> 11000 100<br /> 22000 200<br /> 33000 300<br /> */<br /><br />Now let us create a stored procedure that accepts a TABLE variable. Let us create a very simple stored procedure which accepts a TABLE variable and SELECTs the contents of the table.<br /><br /> 1 CREATE PROCEDURE TableParamDemo<br /> 2 (<br /> 3 @Items ItemInfo<br /> 4 )<br /> 5 <br /> 6 AS<br /> 7 <br /> 8 SELECT *<br /> 9 FROM @Items<br /><br />Well, this generates the following error:<br /><br /> 1 /*<br /> 2 Msg 352, Level 15, State 1, Procedure TableParamDemo, Line 1<br /> 3 The table-valued parameter "@Items" must be declared with the READONLY option.<br /> 4 */<br /><br />A table variable that is passed to a stored procedure or function must be declared with the READONLY option.
The "callee" cannot modify the table being passed into it. Here is the correct code.<br /><br /> 1 CREATE PROCEDURE TableParamDemo<br /> 2 (<br /> 3 @Items ItemInfo READONLY<br /> 4 )<br /> 5 <br /> 6 AS<br /> 7 <br /> 8 SELECT *<br /> 9 FROM @Items<br /><br />Now let us execute the stored procedure we just created. Run the following code.<br /><br /> 1 /*<br /> 2 declare the variable<br /> 3 */<br /> 4 DECLARE @items AS ItemInfo<br /> 5 <br /> 6 /*<br /> 7 Insert values to the variable<br /> 8 */<br /> 9 <br /> 10 INSERT INTO @Items (ItemNumber, Qty)<br /> 11 SELECT '11000', 100 UNION ALL<br /> 12 SELECT '22000', 200 UNION ALL<br /> 13 SELECT '33000', 300<br /> 14 <br /> 15 /*<br /> 16 Execute the procedure<br /> 17 */<br /> 18 EXECUTE TableParamDemo @Items<br /> 19 <br /> 20 /*<br /><br /> 21 OUTPUT:<br /> 22 <br /> 23 ItemNumber Qty<br /> 24 -------------------------------------------------- -----------<br /> 25 11000 100<br /> 26 22000 200<br /> 27 33000 300<br /> 28 <br /> 29 */<br /><br />You cannot modify the TABLE parameter passed into the stored procedure. If you try to do so, you will get an error as shown in the following example.<br /><br /> 1 CREATE PROCEDURE TableParamDemo<br /> 2 (<br /> 3 @Items ItemInfo READONLY<br /> 4 )<br /> 5 <br /> 6 AS<br /> 7 <br /> 8 SELECT *<br /> 9 FROM @Items<br /> 10 <br /> 11 INSERT INTO @Items (ItemNumber, Qty)<br /> 12 SELECT '1001', 20<br /> 13 <br /> 14 /*<br /> 15 OUTPUT:<br /> 16 <br /> 17 Msg 10700, Level 16, State 1, Procedure TableParamDemo, Line 11<br /> 18 The table-valued parameter "@Items" is READONLY and cannot be modified.<br /> 19 */<br />Conclusions<br /><br />The support for TABLE variables is very interesting. While working with User Defined TABLE Type, please note that you cannot use it as a column of a table. 
Please also note that, once created, you cannot alter the structure of the TABLE type.Anil Yadavhttp://www.blogger.com/profile/09905003317091006921noreply@blogger.com0tag:blogger.com,1999:blog-2307145203574942123.post-91490604875997055192009-02-13T02:16:00.000-08:002009-02-13T02:17:28.899-08:00SQL Server Federated Database Performance TuningFor very large databases, consider using federated database servers to balance the processing load across multiple SQL 2000 or 2005 servers. This technique horizontally partitions SQL 2000 or 2005 data over one or more SQL Servers, allowing the client application to send SQL statements to the server in the federation that has most of the data required by the statement. This way, the query load is spread over multiple SQL 2000 or 2005 servers.<br /><br />Federated database servers work best for databases that can be naturally partitioned over multiple servers. For example, if the data can be segregated by product line or geographical location, then the data can easily be separated over multiple databases in a federation.<br /><br />Here's how a federated database might work. Let's say that a company has customers in North America, South America, Europe, and Asia, and that the data stored for every customer, no matter where they are located, is identical. One option would be to partition the data horizontally by continent. In this case, four different SQL Servers would be needed in the federation, each storing its respective data.<br /><br />In almost all cases, when a query is run against the customer data stored in the federated SQL Servers, the data will all be related to a specific continent. Because of this, only one SQL Server will be hit with the query, not all of them. And assuming that customers in all continents will be queried, the separation of customers by continent on different SQL Servers will spread the queries among all of the servers, allowing greater performance and scalability.
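The continent-based partitioning described above is typically implemented with member tables and a distributed partitioned view. The sketch below is illustrative only: the server, database, table, and column names are all hypothetical, and the CHECK constraint on the partitioning column is what lets the optimizer route a filtered query to a single member server.<br /><br />

```sql
-- Hypothetical member table on the North American server; the other three
-- servers in the federation hold identically structured member tables.
-- The CHECK constraint on Continent defines this server's horizontal slice.
CREATE TABLE dbo.Customers_NorthAmerica (
    CustomerID   INT           NOT NULL,
    Continent    CHAR(2)       NOT NULL CHECK (Continent = 'NA'),
    CustomerName NVARCHAR(100) NOT NULL,
    CONSTRAINT PK_Customers_NA PRIMARY KEY (CustomerID, Continent)
)
GO

-- A distributed partitioned view stitches the member tables back together.
-- Server2, Server3, and Server4 are hypothetical linked-server names.
CREATE VIEW Customers
AS
SELECT CustomerID, Continent, CustomerName FROM dbo.Customers_NorthAmerica
UNION ALL
SELECT CustomerID, Continent, CustomerName FROM Server2.SalesDB.dbo.Customers_SouthAmerica
UNION ALL
SELECT CustomerID, Continent, CustomerName FROM Server3.SalesDB.dbo.Customers_Europe
UNION ALL
SELECT CustomerID, Continent, CustomerName FROM Server4.SalesDB.dbo.Customers_Asia
GO

-- A query filtered on the partitioning column is routed to only one server:
SELECT CustomerName FROM Customers WHERE Continent = 'NA'
```

Note that the partitioning column is included in each member table's primary key; this is one of the requirements for the partitioned view to be updatable.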
[2000, 2005]Anil Yadavhttp://www.blogger.com/profile/09905003317091006921noreply@blogger.com0tag:blogger.com,1999:blog-2307145203574942123.post-80504533237898354372009-02-08T07:15:00.000-08:002009-02-08T07:20:21.947-08:00String comparison: binary vs. dictionaryIt is well known that you get better performance if you compare two strings in a binary collation than in a dictionary collation. But is the performance difference significant enough to warrant the use of an explicit COLLATE clause in the string comparison expression? That was a question that came up in a conversation I had with a colleague recently.<br /><br /> To get a feel for the extent of the performance difference, I ran several tests comparing two commonly used collations: <br /><br />· Latin1_General_BIN, and <br />· SQL_Latin1_General_CP1_CI_AS<br /> <br />More specifically, I ran the following T-SQL script in a loop 100 times:<br /><br />DECLARE @s1 varchar(max), @s2 varchar(max), @i int<br />DECLARE @dummy int, @dt datetime<br />DECLARE @length int<br /><br />SET @length = 1000000 -- or 100, or 10000<br /><br />SELECT @s1 = REPLICATE(CAST('a' as varchar(max)), @length)<br />SELECT @s2 = REPLICATE(CAST('a' as varchar(max)), @length)<br /><br />SELECT @i = 1, @dt = GETDATE()<br /><br /> WHILE @i < 1000<br /> BEGIN<br /> IF @s1 = @s2 COLLATE SQL_Latin1_General_CP1_CI_AS <br /> --IF @s1 = @s2 COLLATE Latin1_General_BIN<br /> SET @dummy = 0<br /> SET @i = @i + 1<br /> END<br /><br />SELECT DATEDIFF(ms, @dt, GETDATE())<br /> <br /><br />The script was run in a number of scenarios:<br /><br /><br />· The @length variable was set to 100, 10,000, and 1,000,000 to see the impact of the string length on the comparison method,<br /><br />· The string comparison was done in the IF clause with either the Latin1_General_BIN collation or the SQL_Latin1_General_CP1_CI_AS collation<br /><br />The most salient point to note is that as the string length increases, the performance difference between the two comparison
methods increases. While there was no performance difference when the strings were 100 bytes long, the binary comparison was about twice as fast as the dictionary comparison when the string length was 10,000 bytes. When the string length increased to 1,000,000, the binary comparison was about seven times faster.<br /><br /> So, if you are comparing two short strings, you probably shouldn’t bother to explicitly cast the comparison into a binary collation. But when the strings can be very long, it can result in a huge performance improvement to explicitly specify a binary collation for string comparison.<br /><br /> Note that string comparison in COLLATE Latin1_General_BIN and string comparison in COLLATE SQL_Latin1_General_CP1_CI_AS have different semantics. But there are many cases where you have control over your data, and the difference in comparison semantics does not matter (e.g. when you know your data is all in lower case). <br /><br /> Nothing is particularly new here. Just want to contribute some data points to the community.Anil Yadavhttp://www.blogger.com/profile/09905003317091006921noreply@blogger.com0tag:blogger.com,1999:blog-2307145203574942123.post-53818542377271099692009-02-01T22:53:00.000-08:002009-02-01T22:55:02.343-08:00Indexed View Performance Tuning and OptimizationIf your application needs to access views often, consider adding a unique clustered index to your views to significantly improve performance. 
When a unique clustered index is created on a view, the view's result set is materialized and stored in the database in the same way as a clustered index on a table.<br /><br />Once a unique clustered index has been created for a view, you can also create non-clustered indexes on the same view, which queries against the view can use to enhance performance.<br /><br />As the underlying tables of the view are modified, the clustered index, and any non-clustered indexes of the view, are updated so that they are always current when accessed. And just like indexes on tables, indexes on views incur modification overhead. So only add an index to a view if the query speedup it provides outweighs the cost of keeping the view's indexes up to date. <br /><br />Indexed views can be used by SQL Server in two different ways. First, the view can be called directly from a query, just as conventional views are used. But instead of running the view's underlying SELECT statement and building the view's result set on the fly, SQL Server uses the unique clustered index to return the results of the view almost immediately. Second, any query that is run on SQL Server 2000/2005 is automatically evaluated to see whether an existing indexed view could satisfy it. If so, the Query Optimizer uses the indexed view, even though it has not been named in the query, greatly speeding the query.<br /><br />To get the most benefit out of indexed views, you need to use SQL Server 2000/2005 Enterprise Edition. While you can create indexed views in the other editions of SQL Server 2000/2005, they will not be automatically considered by the query optimizer, and they require the NOEXPAND hint to be used. [2000, 2005]Anil Yadavhttp://www.blogger.com/profile/09905003317091006921noreply@blogger.com0
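As a rough sketch of the two steps the indexed-view discussion above describes (the dbo.Orders table and its columns here are hypothetical, with OrderTotal assumed NOT NULL), the view is defined WITH SCHEMABINDING and then materialized by a unique clustered index:<br /><br />

```sql
-- Step 1: the view must be schema-bound before it can be indexed,
-- and a grouped indexed view must include COUNT_BIG(*).
CREATE VIEW dbo.vCustomerOrderTotals
WITH SCHEMABINDING
AS
SELECT CustomerID,
       COUNT_BIG(*)    AS OrderCount,
       SUM(OrderTotal) AS TotalAmount   -- OrderTotal assumed NOT NULL
FROM dbo.Orders
GROUP BY CustomerID
GO

-- Step 2: the unique clustered index materializes the view's result set.
CREATE UNIQUE CLUSTERED INDEX IX_vCustomerOrderTotals
ON dbo.vCustomerOrderTotals (CustomerID)
GO

-- On editions where the optimizer does not match indexed views
-- automatically, reference the view with the NOEXPAND hint:
SELECT CustomerID, TotalAmount
FROM dbo.vCustomerOrderTotals WITH (NOEXPAND)
```

The clustered index key (CustomerID) is chosen here because it is the GROUP BY column, which guarantees uniqueness of the view's rows.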