Tuesday, September 30, 2008

Optimizing Your WHERE Clause

The WHERE clause is one of the most commonly used optional parts of a query. Simply put, it filters rows and narrows down the result set based on the conditions included in the clause.

It's a common misconception that the SQL optimizer will always use an index whenever the table has a useful one. This is not always the case: sometimes the index will not be used and a table/index scan will be performed instead, resulting in slow processing.

It's also widely accepted that the arrangement of expressions and the operators used do not matter, since the optimizer will parse the query and prepare an execution plan anyway. Although this is true most of the time, arranging the logical expressions properly can still improve processing.

Here are some considerations that I keep in mind whenever I build my WHERE clause.

Avoid expressions that apply a function to a column. This prevents the optimizer from using an index and forces a table/index scan.

This query will not use an index:
select *
from AdventureWorksDW..FactInternetSalesReason
WHERE substring(SalesOrderNumber,1,2) = 'SI'

Modified, this one will use an index:
select *
from AdventureWorksDW..FactInternetSalesReason
WHERE SalesOrderNumber like 'SI%'

If the use of a function cannot be avoided, use an indexed computed column instead.
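For example, if the prefix search above were unavoidable, a computed column could hold the function's result and be indexed. This is only a sketch: the table, column, and index names below are made up for illustration and are not objects from AdventureWorksDW.

create table dbo.OrderHeader
(
SalesOrderID int identity(1,1) primary key,
SalesOrderNumber nvarchar(20) not null,
-- computed column that stores what the function would have produced
OrderPrefix as substring(SalesOrderNumber, 1, 2) persisted
)

-- index the computed column so the search can seek instead of scan
create index IX_OrderHeader_OrderPrefix on dbo.OrderHeader (OrderPrefix)

-- the filter now references the indexed computed column, not a function call
select SalesOrderID, SalesOrderNumber
from dbo.OrderHeader
where OrderPrefix = 'SI'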

If the SQL Server is not configured to be case-sensitive, do not bother using the LOWER() or UPPER() functions.

These three queries will return identical results:
select * 
from AdventureWorksDW..DimGeography
WHERE CountryRegionCode = 'AU'

select *
from AdventureWorksDW..DimGeography
WHERE upper(CountryRegionCode) = 'au'

select *
from AdventureWorksDW..DimGeography
WHERE CountryRegionCode = 'au'
The second query, however, will not use an index.

Use the equals (=) operator instead of LIKE when comparing two strings.

These two queries will return the same results:
select * from AdventureWorksDW..FactInternetSales
WHERE salesordernumber = 'SO43703'

select * from AdventureWorksDW..FactInternetSales
WHERE salesordernumber LIKE 'SO43703'
The first query is more efficient than the second one. If the LIKE operator cannot be avoided, use as many leading characters as possible. If the application performs too many LIKE operations, consider SQL Server's full-text search option instead.
select * from AdventureWorksDW..FactInternetSalesReason
WHERE salesordernumber like 'S%1'
The above query will perform faster than the query below:
select * from AdventureWorksDW..FactInternetSalesReason
WHERE salesordernumber like '%1'
Although the two queries are not identical, given the choice, prefer the former over the latter.

Here are the most common operators in a WHERE clause, arranged from best performing to worst:
=
>, <, >=, <=
LIKE
<>, NOT
Avoid using the NOT operator as much as possible. Although not always the case, a WHERE clause that uses NOT often cannot utilize an index.
select * from AdventureWorksDW..FactInternetSales
WHERE not ShipDateKey >= 10
will perform faster when rewritten as:
select * from AdventureWorksDW..FactInternetSales
where ShipDateKey < 10
Given the choice, use EXISTS() instead of IN(). Moreover, IN() has some issues handling NULL values.
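Here's a rough sketch of the rewrite against the AdventureWorksDW sample data, assuming (as I recall) that both DimSalesTerritory and FactInternetSales carry a SalesTerritoryKey column:

-- IN version: territories that have at least one internet sale
select t.SalesTerritoryKey
from AdventureWorksDW..DimSalesTerritory t
where t.SalesTerritoryKey in (select s.SalesTerritoryKey
                              from AdventureWorksDW..FactInternetSales s)

-- EXISTS version: can stop probing as soon as one matching row is found,
-- and is not tripped up by NULLs the way NOT IN can be
select t.SalesTerritoryKey
from AdventureWorksDW..DimSalesTerritory t
where exists (select 1
              from AdventureWorksDW..FactInternetSales s
              where s.SalesTerritoryKey = t.SalesTerritoryKey)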

You can force the optimizer to utilize an index by using an index hint on the query. Use this only as a last resort.
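The syntax looks like this; the index name IX_ShipDateKey is hypothetical, so substitute an index that actually exists on the table:

select *
from AdventureWorksDW..FactInternetSales with (index(IX_ShipDateKey))
where ShipDateKey < 10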

Given the choice, use BETWEEN instead of IN; the BETWEEN operator generally performs faster.
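This mainly applies when the IN list is really a contiguous range, for example:

-- a contiguous list of keys...
select * from AdventureWorksDW..FactInternetSales
where ShipDateKey in (1, 2, 3, 4, 5)

-- ...can be written as a single range and satisfied by one index seek
select * from AdventureWorksDW..FactInternetSales
where ShipDateKey between 1 and 5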

If the clause has multiple logical expressions connected by two or more AND operators, put first the expression that is LEAST likely to be true. That way, if it's false, the whole clause fails immediately. If the expressions are equally likely, test the least complex one first so that, if it's false, the more complex one need not be evaluated. Also, consider creating an index on a selective column, or a covering index for the query, as sketched below.
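Here's what such a covering index might look like. This is only a sketch: the index name and the choice of key and included columns are illustrative, not part of AdventureWorksDW.

create index IX_FIS_Ship_Promo
on AdventureWorksDW.dbo.FactInternetSales (ShipDateKey, PromotionKey)
include (SalesOrderNumber)

-- this query can now be answered entirely from the index, with no lookups
select SalesOrderNumber
from AdventureWorksDW..FactInternetSales
where ShipDateKey = 3
and PromotionKey = 1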

If the clause has multiple logical expressions connected by two or more OR operators, put first the expression that is MOST likely to be true. That way, if it's true, the whole clause succeeds immediately. If the expressions are equally likely, test the least complex one first so that, if it's true, the more complex one need not be evaluated.

Remember that the IN operator is just another form of OR, so place the most probable value at the start of the list.
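For instance, these two filters are equivalent:

select * from AdventureWorksDW..FactInternetSales
where PromotionKey in (1, 3, 5)

select * from AdventureWorksDW..FactInternetSales
where PromotionKey = 1
or PromotionKey = 3
or PromotionKey = 5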

A query will perform a table/index scan if it contains the OR operator and any of the referenced columns lacks a useful index.

This query will perform a table/index scan. For it to utilize an index, there must be a useful index on all four columns in the clause; even if two of the four columns are indexed, it will still perform a table/index scan.
select *
from AdventureWorksDW..FactInternetSales
where ShipDateKey = 3
or PromotionKey = 3
or SalesTerritoryKey = 5
or SalesOrderLineNumber = 3
If creating an index on each of these columns is not an option, rewrite the query to use UNION ALL (not UNION) instead. This way, the branches that do have a useful index can use it, even if that is only one or two of the four, and the query can still execute more efficiently.
select *
from AdventureWorksDW..FactInternetSales
where ShipDateKey = 3
union all
select *
from AdventureWorksDW..FactInternetSales
where PromotionKey = 3
union all
select *
from AdventureWorksDW..FactInternetSales
where SalesTerritoryKey = 5
union all
select *
from AdventureWorksDW..FactInternetSales
where SalesOrderLineNumber = 3
The above query will give the same results (as long as no row satisfies more than one of the conditions, since UNION ALL does not remove duplicates) but will run faster than multiple ORs. If ShipDateKey and PromotionKey have useful indexes, their branches will use those indexes, improving the speed of the entire query.

Given the option, use EXISTS or a LEFT JOIN instead of IN to test the relationship between parent and child tables.
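For example, here's a sketch against DimCustomer (parent) and FactInternetSales (child), assuming both carry a CustomerKey column:

-- parents that have at least one child row, using EXISTS
select c.CustomerKey
from AdventureWorksDW..DimCustomer c
where exists (select 1
              from AdventureWorksDW..FactInternetSales s
              where s.CustomerKey = c.CustomerKey)

-- parents that have NO child rows, using a LEFT JOIN with an IS NULL test
select c.CustomerKey
from AdventureWorksDW..DimCustomer c
left join AdventureWorksDW..FactInternetSales s
on s.CustomerKey = c.CustomerKey
where s.CustomerKey is null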

Beware of redundant clauses.

This query has a redundant WHERE clause:
select DueDateKey, ShipDateKey,
PromotionKey,
CustomerKey
from AdventureWorksDW..FactInternetSales
WHERE
DueDateKey = 8 and ShipDateKey = 8 and
PromotionKey = 1
or ShipDateKey = 8 and PromotionKey = 1
DueDateKey = 8 and ShipDateKey = 8 and PromotionKey = 1 is a subset of ShipDateKey = 8 and PromotionKey = 1: every row that satisfies the longer condition already satisfies the shorter one, so the longer condition is redundant. The optimizer will still return all the rows requested.

Here's the result set:
DueDateKey  ShipDateKey PromotionKey CustomerKey
----------- ----------- ------------ -----------
13 8 1 11003
13 8 1 14501
13 8 1 21768
13 8 1 25863
13 8 1 28389

It may look like the query returned the requested rows efficiently. In reality, it retrieved these rows twice and then performed a DISTINCT to remove the duplicate rows.

Convert frequently running batches to stored procedures, especially if the queries in the batch use user-defined variables. The optimizer might not take advantage of a useful index when running such a batch, because it does not know the value of these variables when it chooses how to access the data. If the batch is not frequently executed, consider including an INDEX hint on the query instead.
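Here's a sketch of the idea; the procedure name usp_GetSalesByShipDate is made up:

-- batch version: @ship is a local variable, so the optimizer does not know
-- its value when it compiles the plan for the SELECT
declare @ship int
set @ship = 10
select * from AdventureWorksDW..FactInternetSales where ShipDateKey < @ship
go

-- stored procedure version: @ship is a parameter, so its value is known
-- when the plan is compiled for each call
create procedure dbo.usp_GetSalesByShipDate
@ship int
as
begin
select * from AdventureWorksDW..FactInternetSales
where ShipDateKey < @ship
end
go

exec dbo.usp_GetSalesByShipDate @ship = 10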

As much as possible, include a WHERE clause in the query. This will improve the way SQL Server retrieves the data.



~~CK

Monday, September 29, 2008

Using Encryption to Store Sensitive Data

SQL Server, even in its earlier versions, has a significant number of ways to protect data from unauthorized users (i.e., views, functions, user rights, etc.). However, administrators and database owners can still view table contents that might be too sensitive for anyone to see, like passwords stored in tables that are frequently used by UIs and other front-end apps.

SQL Server 2005 has an undocumented system function that can be used to encrypt a string before storing it. It should be noted, however, that this is a one-way hash: you cannot decrypt the stored data. The only way to use it is to compare it with another encrypted string.
declare @string varchar(50), @EncryptedString varbinary(max)

set @string = 'correctpassword'
set @EncryptedString = pwdencrypt(@string)

select
pwdcompare('correctpassword', @EncryptedString) correctpass,
pwdcompare('wrongpassword', @EncryptedString) wrongpass

Here's the result set:
correctpass wrongpass
----------- -----------
1           0

This function is useful if you never need to display the original data. If you need to decrypt the data later (for example, encrypting a date of birth that must be displayed afterwards), it would be better to use a reversible encryption function.
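For that reversible case, SQL Server 2005 ships with the ENCRYPTBYPASSPHRASE/DECRYPTBYPASSPHRASE pair. A minimal sketch (the passphrase and the sample value are just placeholders):

declare @dob varchar(30), @cipher varbinary(max)

set @dob = '1980-05-17'

-- encrypt with a passphrase
set @cipher = EncryptByPassPhrase('some passphrase', @dob)

-- decrypt it back when it has to be displayed
select cast(DecryptByPassPhrase('some passphrase', @cipher) as varchar(30)) as decrypted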



~~CK

Wednesday, September 24, 2008

How do you get the actual size of a Database?

How do you get the actual size of a SQL Server database? Quick answer: execute the sp_spaceused command. However, I found this excerpt in Books Online: "When updateusage is specified, the SQL Server Database Engine scans the data pages in the database and makes any required corrections... There are some situations, for example, after an index is dropped, when the space information for the table may not be current." In short, sp_spaceused is not always accurate.
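Based on that excerpt, the counters can be corrected by running sp_spaceused with updateusage (note that this scans the data pages, so it can take a while on large databases):

exec sp_spaceused @updateusage = N'TRUE'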

For more accurate calculation of database size, try the following code:
select 
cast(size/128.0 as numeric(10,1)) as [size-mb],
cast(FILEPROPERTY(name, 'spaceused')/128.0 as numeric(10,1)) as [used-mb],
maxsize [maximum size-mb],
cast(growth*8/1024 as varchar(11))+
case when status&0x100000>0 then '%' else '' end [growth],

isnull(filegroup_name(groupid),'log') [groupname],
cast(100*(size/128-(FILEPROPERTY(name,'spaceused')/128))/(size/128.0)
as int) [% free]

from sysfiles
Here's the result set:
size-mb  used-mb  maximum size-mb growth   groupname  % free
-------- -------- --------------- -------- ---------- -----------
500.0 10.4 -1 64 PRIMARY 98
58.0 5.3 268435456 16 log 91


~~CK

What's the Difference Between SET and SELECT?

Is there really a difference between SET and SELECT when they are used to assign values to variables in T-SQL? The common misconception is that there is no difference. I've been writing SQL scripts for some time now and had been using the two statements interchangeably, until an unnoticed logical error resulted from using one instead of the other.

So what are the similarities and differences? Both commands can assign values to a variable; for the most part, their similarities end there. Fundamentally, SET is the ANSI-standard way of assigning a value to a variable, while SELECT is not. Another visible difference is that SET can assign a value to only one variable at a time, while SELECT can assign values to one or more variables at a time. Consider these variables:
declare @myVar1 int, @myVar2 int
This code should run successfully:
SELECT @myVar1 = 1, @myVar2 = 2
print '@myVar1'
print @myVar1
print ''
print '@myVar2'
print @myVar2
Here are the results:
@myVar1
1
 
@myVar2
2
This code will result in an error:
SET @myVar1 = 11, @myVar2 = 12
print '@myVar1'
print @myVar1
print ''
print '@myVar2'
print @myVar2
Here's the error returned:
Msg 102, Level 15, State 1, Line 11
Incorrect syntax near ','.
(I use the PRINT command to display variable contents, to avoid confusing the command that assigns the values with the one that displays them.) For this code to run, the values must be assigned separately for each variable.
SET @myVar1 = 11
SET @myVar2 = 12
print '@myVar1'
print @myVar1
print ''
print '@myVar2'
print @myVar2
Here's the result:
@myVar1
11

@myVar2
12
Standards, however, are not the only consideration when choosing between these two statements. Here's an example of how they can significantly affect SQL scripts:
declare @rowsaffected int, @errornumber int

select id/0 from sysobjects
set @rowsaffected = @@rowcount
set @errornumber = @@error

print 'rowsaffected'
print @rowsaffected
print ''
print 'errornumber'
print @errornumber
Here are the results:
Msg 8134, Level 16, State 1, Line 4
Divide by zero error encountered.

rowsaffected
0

errornumber
0
The code returned an error (division by zero), but the system variable @@error returned zero (0). This is not due to SET or SELECT per se: @@error reports whether there was an error in the last T-SQL statement executed, and in this case set @rowsaffected = @@rowcount is a valid statement, so no error number was returned. This can result in unhandled errors. In another application, I want to return the number of rows affected.
declare @rowsaffected int, @errornumber int

select id from sysobjects where id <= 10
set @errornumber = @@error
set @rowsaffected = @@rowcount

print 'rowsaffected'
print @rowsaffected
print ''
print 'errornumber'  
print @errornumber
Here are the results:
id
-----------
4
5
7
8

rowsaffected
1
 
errornumber
0
Notice that the query actually returned four rows, but the @@rowcount system variable reported only one. @@rowcount returns the number of rows affected by the previous statement; in this case, set @errornumber = @@error affected one row. Using SET to capture these system variables one at a time is therefore not advisable. To capture both values at the same time, use the SELECT statement.
declare @rowsaffected int, @errornumber int

select id from sysobjects where id <= 10
select @errornumber = @@error,@rowsaffected = @@rowcount

print 'rowsaffected'
print @rowsaffected
print ''
print 'errornumber'  
print @errornumber

id
-----------
4
5
7
8

rowsaffected
4
 
errornumber
0

select id/0 from sysobjects where id <= 10
select @errornumber = @@error,@rowsaffected = @@rowcount

print 'rowsaffected'
print @rowsaffected
print ''
print 'errornumber'  
print @errornumber
 
-----------
Msg 8134, Level 16, State 1, Line 4
Divide by zero error encountered.

rowsaffected
0
 
errornumber
8134
In both cases, @@rowcount and @@error returned the right values. SET and SELECT also vary in the way scalar values are assigned to variables, specifically when the value comes from a query. Consider this sample table:
SET NOCOUNT ON
declare @Test table (i int, j varchar(15))
INSERT INTO @Test (i, j) VALUES (1, 'First Row')
INSERT INTO @Test (i, j) VALUES (2, 'Second Row')

select * from @Test

i           j
----------- ----------
1           First Row
2           Second Row
Using the two commands:
declare @myVarUseSET varchar(15), @myVarUseSELECT varchar(15)

SET @myVarUseSET = (select j from @Test where i = 2)
SELECT @myVarUseSELECT = j from @Test where i = 2

print '@myVarUseSET'
print @myVarUseSET
print ''
print '@myVarUseSELECT'
print @myVarUseSELECT
Here are the results:
@myVarUseSET
Second Row

@myVarUseSELECT
Second Row
The values are accurately assigned to both variables. What happens if the query returns more than one row?
SET @myVarUseSET = (select j from @Test)
SELECT @myVarUseSELECT = j from @Test

print '@myVarUseSET'
print @myVarUseSET
print ''
print '@myVarUseSELECT'
print @myVarUseSELECT
Here are the results:
Msg 512, Level 16, State 1, Line 10
Subquery returned more than 1 value. 
This is not permitted when the subquery follows =, !=, <, <= , >, >= or 
when the subquery is used as an expression.
@myVarUseSET

@myVarUseSELECT
Second Row
Notice that the SET query failed while the SELECT succeeded. How does SELECT choose which of the returned values gets stored in the variable? It takes the last row returned; if the query has an ORDER BY clause, that means the last row as sorted by the ORDER BY. What happens if, instead of returning multiple values, the query returns no rows at all? To make this easier to see, I initialized both variables and then tried to assign a value from a query that returns no rows.
select @myVarUseSET = 'initial value',
@myVarUseSELECT = 'initial value'

set @myVarUseSET = (select j from @Test where i = 3)
select @myVarUseSELECT = j from @Test where i = 3

print '@myVarUseSET'
print isnull(@myVarUseSET,'NULL')
print ''
print '@myVarUseSELECT'
print isnull(@myVarUseSELECT,'NULL')
Here are the results:
@myVarUseSET
NULL

@myVarUseSELECT
initial value
SET sets the variable to NULL. SELECT, on the other hand, does not replace the value of the variable; notice that @myVarUseSELECT still contains its initialized value. If not handled properly, this can lead to unnoticed errors that affect the results of a script.

Lastly, speed. With a single variable, there's not much of a speed difference between the two commands: SET @variable = value and SELECT @variable = value run at almost the same speed. However, when dealing with three or more variables, SELECT beats SET by almost 50%, simply because SELECT can assign values to multiple variables at a time. Based on the samples and cases above, though, speed should not be the only consideration. Keep in mind how each command handles the result of the assignment.


~~ CK

Monday, September 22, 2008

Convert Column into A Delimited String, Part 1


Here's a problem that I usually encounter on forums and at work: sometimes I need a stored proc or function to read a table and return a single string delimited with a comma, tab, or bar.
Although SQL Server 2005 introduced the PIVOT/UNPIVOT relational operators, I think this solution is still worth looking at.
Consider this table:
declare @myTable table (rownum int, rowname varchar(10))
declare @DelimitedString varchar(max)

insert @myTable values (1, 'One')
insert @myTable values (2, 'Two')
insert @myTable values (3, 'Three')
insert @myTable values (4, 'Four')
insert @myTable values (5, 'Five')
insert @myTable values (6, 'Six')
insert @myTable values (7, 'Seven')
insert @myTable values (8, 'Eight')
insert @myTable values (9, 'Nine')
insert @myTable values (10, 'Ten')

select * from @mytable

rownum      rowname
----------- ----------
1           One
2           Two
3           Three
4           Four
5           Five
6           Six
7           Seven
8           Eight
9           Nine
10          Ten
I need this output:
One,Two,Three,Four,Five,Six,Seven,Eight,Nine,Ten
So far, this is the best code that I got (run it in the same batch as the setup above, since @myTable and @DelimitedString are already declared there):

SELECT
@DelimitedString = COALESCE(@DelimitedString + ',', '') + rowname
FROM @myTable

SELECT @DelimitedString
Here's the result set:
-------------------------------------------------
One,Two,Three,Four,Five,Six,Seven,Eight,Nine,Ten
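As an aside, SQL Server 2005 also offers the FOR XML PATH('') trick for the same task. This is only a sketch (it assumes it runs in the same batch as the @myTable declaration above, and note that XML-special characters such as & would be entity-encoded by this approach):

declare @Delimited2 varchar(max)

-- concatenate the column with FOR XML PATH(''), then strip the leading comma
select @Delimited2 = stuff(
    (select ',' + rowname
     from @myTable
     order by rownum
     for xml path('')), 1, 1, '')

select @Delimited2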
Then I tried parsing that string back to rows:
declare @xmldoc int
declare @DelimitedString varchar(250)

set @DelimitedString = 'One,Two,Three,Four,Five,Six,Seven,Eight,Nine,Ten'

set @DelimitedString = '<root><z><y>' + replace(@DelimitedString, ',', '</y></z><z><y>') + '</y></z></root>'

select @DelimitedString
The variable now looks like this:
<root>
  <z>
    <y>One</y>
  </z>
  <z>
    <y>Two</y>
  </z>
  <z>
    <y>Three</y>
  </z>
  <z>
    <y>Four</y>
  </z>
  <z>
    <y>Five</y>
  </z>
  <z>
    <y>Six</y>
  </z>
  <z>
    <y>Seven</y>
  </z>
  <z>
    <y>Eight</y>
  </z>
  <z>
    <y>Nine</y>
  </z>
  <z>
    <y>Ten</y>
  </z>
</root>
Followed by...
exec sp_xml_preparedocument @xmldoc OUTPUT, @DelimitedString

select y as rows
from openxml(@xmldoc, '/root/z',2)
with (y varchar(15))

exec sp_xml_removedocument @xmldoc
Here's the result set:
rows
---------------
One
Two
Three
Four
Five
Six
Seven
Eight
Nine
Ten
As always, comments are welcome


~~ CK

Friday, September 19, 2008

How do SQL Server Programmers Search for Their Dates?

Being a user of SQL Server, I have dealt with searching and using dates in logical conditions in one way or another. Initially, it was quite confusing.

Here are a couple of factors.

First, as of SQL 2005, there is no data type that holds date-only or time-only data. SQL Server stores both the date part and the time part in the same data type; the only difference between the two date types is their precision.

Second, you have to use string literals to handle static date data, and users usually miss the time part of the string literal. SQL Server will default the converted time part to zero (00:00:00.000), which can cause logical conditions to return unexpected results. (For more on all these, read my old notes.)

Let's simulate some logical conditions. Consider this sample table:
declare @myTable table(rownum int, DateData datetime)
set nocount on

insert into @mytable values(1,'2005-10-14 01:36:25.440')
insert into @mytable values(2,'2005-10-14 01:36:25.497')
insert into @mytable values(3,'2005-10-14 01:36:25.570')
insert into @mytable values(4,'2005-10-14 01:36:25.627')
insert into @mytable values(5,'2005-10-14 01:36:25.683')
insert into @mytable values(6,'2005-10-14 01:36:25.740')
insert into @mytable values(7,'2005-10-15 00:00:00.000')
insert into @mytable values(8,'2008-07-24 12:52:42.360')
insert into @mytable values(9,'2008-07-25 00:00:00.000')
insert into @mytable values(10,'2008-07-25 12:38:35.060')
insert into @mytable values(11,'2008-07-25 12:38:35.137')
insert into @mytable values(12,'2008-07-26 00:00:00.000')
insert into @mytable values(13,'2008-08-13 00:00:00.000')

select * from @myTable

rownum datedata
----------- -----------------------
1 2005-10-14 01:36:25.440
2 2005-10-14 01:36:25.497
3 2005-10-14 01:36:25.570
4 2005-10-14 01:36:25.627
5 2005-10-14 01:36:25.683
6 2005-10-14 01:36:25.740
7 2005-10-15 00:00:00.000
8 2008-07-24 12:52:42.360
9 2008-07-25 00:00:00.000
10 2008-07-25 12:38:35.060
11 2008-07-25 12:38:35.137
12 2008-07-26 00:00:00.000
13 2008-08-13 00:00:00.000

Now search for specific records using the date data as the filter condition:
select *
from @myTable
where DateData = '20080724'


select *
from @myTable
where DateData = '20080813'
Here are the result sets:
rownum      datedata
----------- -----------------------

rownum datedata
----------- -----------------------
13 2008-08-13 00:00:00.000
Now, why did the first query return no rows while the second returned the 13th row, even though row 8 is the only row dated July 24, 2008?

A more detailed analysis shows that the WHERE clause compares the DateData column to a date-formatted string. To compare two values, they must be of the same data type; otherwise, the data type with the lower precedence is converted to the data type with the higher precedence. In this case, datetime has higher precedence than any of the string data types. The first query is equivalent to:
select * from @myTable
where datedata = cast('20080724' as datetime)


rownum datedata
----------- -----------------------
Since the conversion received a date literal with no time part, it set the time to zero (00:00:00.000), resulting in a false condition (2008-07-24 12:52:42.360 <> 2008-07-24 00:00:00.000).

So how do SQL Server programmers search for their dates? Here are some options for retrieving the 8th record.

It's always possible to convert the date column into a string with no time part and compare the two strings, something like:
select * from @myTable
where convert(varchar(20), datedata, 112) = '20080724'

rownum datedata
----------- -----------------------
8 2008-07-24 12:52:42.360
It did. Technically, there's nothing wrong with this code; it returned the desired result. The only problem it will run into later is performance. SQL Server will not use an index to optimize the query, since the WHERE clause applies a function to the date column. This is fine if the query is processing a small table, but if it's reading a large volume of data, it will significantly affect the execution time.

Another option is to grab all records with dates between 20080724 AND 20080725.
select * from @myTable
where datedata BETWEEN '20080724' and '20080725'

rownum datedata
----------- -----------------------
8 2008-07-24 12:52:42.360
9 2008-07-25 00:00:00.000
Now why did it include row 9? That's just how the BETWEEN operator works: it's inclusive on both ends. The above query is equivalent to:
select * from @myTable
where datedata >= '20080724' and datedata <= '20080725'
The second condition follows the same "converted date literal with no time part" rule, so the 9th row was included in the result set. So, try including the time part:
select * from @myTable 
where datedata
BETWEEN '20080724 00:00:00.000' and '20080724 23:59:59.999'


rownum datedata
----------- -----------------------
8 2008-07-24 12:52:42.360
9 2008-07-25 00:00:00.000
Why did it still return two rows? The conversion itself has nothing to do with this. The second expression ('20080724 23:59:59.999') is properly converted to a DateTime value, complete with a time part. However, DateTime precision is 3.33 milliseconds (for more about DateTime and SmallDateTime rounding and precision, read my old notes), so the second boundary is rounded up to '2008-07-25 00:00:00.000'. To resolve this rounding issue, keep the time part from rounding up:
select * from @myTable
where datedata
BETWEEN '20080724 00:00:00.000' and '20080724 23:59:59.998'


rownum datedata
----------- -----------------------
8 2008-07-24 12:52:42.360
And if it's SmallDateTime, the code should be:
select * from @myTable 
where datedata
BETWEEN '20080724 00:00:00.000' and '20080724 23:59:00'


rownum datedata
----------- -----------------------
8 2008-07-24 12:53:00
Now there are two ways of doing it, depending on which data type the query is dealing with. Although these are all valid approaches, it's not a good idea to maintain two versions of the code for these two sibling data types, and it forces you to always be conscious of whether you're dealing with a DateTime or a SmallDateTime expression. To be more flexible, it would be better to write something like this:
select * from @myTable 
where datedata >='20080724 ' and datedata < '20080725'

rownum datedata
----------- -----------------------
8 2008-07-24 12:53:00
The above code is clearer, and SQL Server can also use an index to optimize the query. For a range of dates, the same condition can easily be extended:
select * from @myTable 
where datedata >='20080724 ' and datedata < '20080727'

rownum datedata
----------- -----------------------
8 2008-07-24 12:53:00
9 2008-07-25 00:00:00
10 2008-07-25 12:39:00
11 2008-07-25 12:39:00
12 2008-07-26 00:00:00
As a general rule, use a method that can handle both DateTime and SmallDateTime, especially if the requirement needs to deal with the time part. Also, consider the performance of the query: placing a date column inside a function as part of a filter condition will bypass the index and cause a table scan. Most importantly, if using a function is unavoidable, be aware that most date and time functions depend on the user's default language and the SET LANGUAGE and SET DATEFORMAT settings.



~~CK

Monday, September 15, 2008

The Basics of Dates

In my old notes, I wrote down some of the similarities and differences between the smalldatetime and datetime data types. Now I'll write up some more of the basics that must be considered before using these data types.
Default Values
Both data types use the base date (January 1, 1900) as their default value. Even though DateTime can handle much earlier dates, it will always default to the base date.
select cast('' as smalldatetime) as 'smalldatetime',
cast('' as datetime) as 'datetime'
Here is the result set:
smalldatetime           datetime
----------------------- -----------------------
1900-01-01 00:00:00     1900-01-01 00:00:00.000
Both conversions returned the base date as the default date.

Time, however, will always default to zero (00:00:00). The only difference is the fractional seconds that DateTime can handle.
select
cast('12/29/2008' as smalldatetime) as 'smalldatetime',
cast('12/29/2008' as datetime) as 'datetime'

smalldatetime            datetime
----------------------- -----------------------
2008-12-29 00:00:00     2008-12-29 00:00:00.000

Rounding and Precisions

SmallDateTime has a simple way of preserving its one-minute precision: values with 29.998 seconds and below are rounded down to the nearest minute, and values with 29.999 seconds and higher are rounded up to the next minute.
select '20091229 23:59:29.998' as 'string',
cast('20091229 23:59:29.998' as smalldatetime) as 'smalldatetime'

string                smalldatetime
--------------------- -----------------------
20091229 23:59:29.998 2009-12-29 23:59:00


select '20091229 23:59:29.999' as 'string',
cast('20091229 23:59:29.999' as smalldatetime) as 'smalldatetime'

string                smalldatetime
--------------------- -----------------------
20091229 23:59:29.999 2009-12-30 00:00:00
DateTime has a more complicated rounding calculation. Values are rounded to increments of .000, .003, or .007 seconds.
Consider the following examples:
declare @DateData table(stringdate varchar(25))
set nocount on
insert into @DateData values('20091229 23:59:29.990')
insert into @DateData values('20091229 23:59:29.991')
insert into @DateData values('20091229 23:59:29.992')
insert into @DateData values('20091229 23:59:29.993')
insert into @DateData values('20091229 23:59:29.994')
insert into @DateData values('20091229 23:59:29.995')
insert into @DateData values('20091229 23:59:29.996')
insert into @DateData values('20091229 23:59:29.997')
insert into @DateData values('20091229 23:59:29.998')
insert into @DateData values('20091229 23:59:29.999')

select stringdate, cast(stringdate as datetime) converted
from @DateData
Here's the result set:
stringdate                converted
------------------------- -----------------------
20091229 23:59:29.990     2009-12-29 23:59:29.990
20091229 23:59:29.991     2009-12-29 23:59:29.990
20091229 23:59:29.992     2009-12-29 23:59:29.993
20091229 23:59:29.993     2009-12-29 23:59:29.993
20091229 23:59:29.994     2009-12-29 23:59:29.993
20091229 23:59:29.995     2009-12-29 23:59:29.997
20091229 23:59:29.996     2009-12-29 23:59:29.997
20091229 23:59:29.997     2009-12-29 23:59:29.997
20091229 23:59:29.998     2009-12-29 23:59:29.997
20091229 23:59:29.999     2009-12-29 23:59:30.000
Notice that .995 is rounded up to .997 even though it falls exactly halfway between .993 and .997. Like plain integer rounding, the midpoint is rounded up.

Formats

Whenever a date value is used in T-SQL, it's usually specified as a string literal. However, SQL Server might interpret these strings differently than intended: a date value like '08/12/05' can be interpreted in six different ways. These interpretations are usually affected by the SET DATEFORMAT and SET LANGUAGE settings, although some string literal formats are not affected by them. Unless you are sure of these settings, use a setting-independent format.

Here are some of the acceptable date literal formats:

Format                 Example                         DF    LN
Numeric, separated     '12/29/2008 14:21:00.000'       DF    LN
Numeric, unseparated   '20081229 14:21:00.000'         -     -
ANSI SQL               '1998-12-23 14:23:05'           DF    LN
Alphabetic             '29 December 1998 14:23:05'     -     LN
ODBC datetime          {ts '1998-12-29 14:23:05'}      -     -
ODBC date              {d '1998-12-29'}                -     -
ODBC time              {t '14:29:09'}                  -     -
ISO 8601               '1998-12-29T14:27:09'           -     -
Time only              '14:29:09', '2:29:09 PM'        -     -

DF - SET DATEFORMAT dependent
LN - SET LANGUAGE dependent

Notice that ANSI SQL is also a numeric, separated format, but it is still LANGUAGE and DATEFORMAT dependent. ODBC uses escape sequences to identify date (d), time (t), and timestamp (ts, date + time) values. SQL Server always treats ODBC date data as DateTime.

Let's see how DATEFORMAT and LANGUAGE settings can affect your literal string. The code below will give an error:
set dateformat dmy
select cast('12/29/2008' as datetime)

Here's the result:

-----------------------
Msg 242, Level 16, State 3, Line 2
The conversion of a char data type to a datetime data type resulted in
an out-of-range datetime value.
This code, however, is valid:
set dateformat mdy
select cast('12/29/2008' as datetime)
Here's the result:
-----------------------
2008-12-29 00:00:00.000
Here's how language setting can affect your date literals:
set language British
select datename(month,cast('08/12/05' as datetime))
Here's the result:
Changed language setting to British.

------------------------------
December
And changing the setting:
set language us_english
select datename(month,cast('08/12/05' as datetime))
Can change the result of the same code to:
Changed language setting to us_english.
------------------------------
August

To avoid these errors, use formats that are not dependent on the LANGUAGE and DATEFORMAT settings.



~~CK

Friday, September 12, 2008

How SQL Programmers Choose their Dates


(I just created this blog and have been thinking about where to start. I decided to start with one of the basic data types: the sometimes confusing DateTime and its younger sibling, the SmallDateTime data type. I hope these notes help you understand the similarities and differences between the two.)


As of SQL 2005, there are only two data types that handle date and time data: DateTime and SmallDateTime. There is no type that holds date-only or time-only data; both data types store a date part combined with a time of day.

Instead of repeating what has been posted in articles around the net and in Books Online, I think it would be better to just give the highlights.


                          SmallDateTime                  DateTime
Minimum value             January 1, 1900 00:00:00       January 1, 1753 00:00:00.000
Maximum value             June 6, 2079 23:59:00          December 31, 9999 23:59:59.997
Precision                 Up to one minute               Up to 3.33 milliseconds
Storage size              4 bytes (two 2-byte ints)      8 bytes (two 4-byte ints)
Accuracy                  Up to 29.998 seconds rounds    Rounds to increments of
                          down, all else rounds up       .000, .003, or .007 seconds
Default value (base date) January 1, 1900 00:00:00       January 1, 1900 00:00:00


It's a common misconception that SQL Server stores these data types in some sort of date-structured format. Internally, each is stored as a pair of integers. DateTime, which handles the higher precision, stores its data as two 4-byte integers: the first four bytes store the number of days before or after the base date, and the other four store the time of day as the number of 1/300-second clock ticks since midnight (which is where the 3.33-millisecond precision comes from). SmallDateTime stores its data as two 2-byte integers: the first two bytes store the number of days after the base date, the other two the number of minutes since midnight.
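One way to see the two integers directly is to cast a datetime value to binary(8); this is just a quick illustration:

declare @d datetime
set @d = '19000102 00:00:01' -- one day and one second after the base date

select cast(@d as binary(8)) as internal_bytes
-- returns 0x000000010000012C: the first four bytes hold 1 (one day after
-- the base date), the last four hold 300 (one second = 300 ticks of 1/300s)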


How to choose?

The answer depends on the range of values that will be processed. Based on the table above, SmallDateTime can only handle dates from January 1, 1900 through June 6, 2079. If the expected date values fall outside this range, use DateTime.

Some applications only require accuracy up to the minute. For example, most payroll applications calculate the number of hours worked up to the minute and do not bother with seconds and milliseconds; in that case, use SmallDateTime. On the other hand, some telecommunications companies use the actual Elapsed Conversation Duration (ECD), which is accurate to the second and sometimes to the millisecond, to come up with various calculations of billable minutes. Some calculations treat a second as a billable unit, others use a pulse (6 seconds = 1 pulse). Depending on the application (consider future requirements as well), these cases could easily be the major consideration in deciding which type to use.

Once a date is stored in a SmallDateTime column, the seconds precision is gone, even if the column is later changed to DateTime. Only newly inserted or newly updated values will be affected.
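A quick sketch to demonstrate (the table and column names are made up):

create table dbo.PrecisionDemo (EventDate smalldatetime)

insert into dbo.PrecisionDemo values ('2008-09-15 14:23:42')

-- the seconds are already gone: the value was rounded to 2008-09-15 14:24:00
select EventDate from dbo.PrecisionDemo

-- widening the column later does not bring them back
alter table dbo.PrecisionDemo alter column EventDate datetime
select EventDate from dbo.PrecisionDemo

drop table dbo.PrecisionDemo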



~~CK