Blog ala Vidar

SQL, AppFrame and other cool technologies

Monthly Archives: October 2008

NULL?

A couple of days ago Torgeir came over to my desk and proudly told me: “you shouldn’t use SELECT when setting variables. You should use SET”. Then he told me the reason, and it got me thinking. The reason is simple. If you do this:

DECLARE @PrimKey AS UNIQUEIDENTIFIER
SELECT @PrimKey = PrimKey
	FROM stbl_WinClient_Projects
	WHERE ProjectName = 'Sys.Security.ManageUsers'
PRINT @PrimKey
SELECT @PrimKey = PrimKey
	FROM stbl_WinClient_Projects
	WHERE ProjectName = 'A.Project.That.Does.Not.Exist'
PRINT @PrimKey

Notice that it will print the same PrimKey both times. When you’re writing the query, you might think the last SELECT will give you NULL, but it won’t: since no rows are returned, the variable is never touched. The correct way of doing this is:

DECLARE @PrimKey AS UNIQUEIDENTIFIER
SET @PrimKey = (SELECT PrimKey
		FROM stbl_WinClient_Projects
		WHERE ProjectName = 'Sys.Security.ManageUsers')
PRINT @PrimKey
SET @PrimKey = (SELECT PrimKey
		FROM stbl_WinClient_Projects
		WHERE ProjectName = 'A.Project.That.Does.Not.Exist')
PRINT @PrimKey
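Alternatively, if you prefer SELECT for assignment, you can get the same safe behavior by resetting the variable yourself before each query. A small sketch:

```sql
DECLARE @PrimKey AS UNIQUEIDENTIFIER

-- Reset before the assignment, so a query that matches no rows
-- leaves the variable as NULL instead of keeping the old value
SET @PrimKey = NULL
SELECT @PrimKey = PrimKey
	FROM stbl_WinClient_Projects
	WHERE ProjectName = 'A.Project.That.Does.Not.Exist'

-- PRINT of a NULL value prints nothing, so wrap it to see the difference
PRINT ISNULL(CAST(@PrimKey AS NVARCHAR(36)), 'NULL')
```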

In some cases you might want to add ISNULL. In subselects, for instance, the placement matters. In the next query you’ll see what I’m talking about: the first column will return NULL, and the second will return 0.000000:

SELECT
  (SELECT TOP 1 ISNULL(Amount, 0) AS Amount
	FROM atbv_Accounting_Transactions
	WHERE Series = 'SomethingThatDoNotExists'
	ORDER BY Period DESC) AS Amount1,
  ISNULL((SELECT TOP 1 Amount AS Amount
	FROM atbv_Accounting_Transactions
	WHERE Series = 'SomethingThatDoNotExists'
	ORDER BY Period DESC), 0) AS Amount2

Both columns use ISNULL, but the first uses it INSIDE the subquery. Since the subquery never returns any rows, the ISNULL is never evaluated. The second uses ISNULL OUTSIDE the subquery, so when the subquery returns NULL, it is replaced with 0.000000.

NB! One last thing… Some bright souls may have noticed that I haven’t used NOLOCK on the first queries. I’ll send you a prize. Good boy/girl 😀 ALWAYS REMEMBER TO USE WITH (NOLOCK) WHEN SELECTING FROM TABLES!

Restore to point-in-time

What do you do if one of your customers calls and says he’s just deleted something and he REALLY needs it back? We run this test about once a month, to check that our backup routines are working. For this example, let’s say it’s a Friday and the current time is 16:00. Your customer deleted what he needed 30 minutes ago (15:30). Here’s our setup:
We take a full backup once a week to C:\SQL\Backup\MyDatabase.bak
A differential is taken every night to C:\SQL\Backup\MyDatabase_Diff.bak
A transaction log backup is taken every day at 12:00 to C:\SQL\Backup\MyDatabase.trn

First thing to do is to take a backup of your transaction log:

BACKUP LOG [MyDatabase]
	 TO DISK = 'C:\SQL\Backup\MyDatabase_CustomLog.trn'
	 WITH NAME = N'MyDatabase-Transaction Log Backup'

When this is done, start restoring your last FULL backup of the database:

RESTORE DATABASE MyDatabase_BU
	 FROM DISK = 'C:\SQL\Backup\MyDatabase.bak'
	 WITH
	 MOVE 'MyDatabase_Data' TO 'C:\SQL\Backup\MyDatabase_BU.MDF',
	 MOVE 'MyDatabase_Log' TO 'C:\SQL\Backup\MyDatabase_BU.LDF',
	 NORECOVERY

We’re using “NORECOVERY” because we’re going to apply more backups to this database before making it available. In our environment we take a full backup once a week, with differentials every night, and transaction logs at 12:00. Here’s the code for adding the diff. NB! You only need the latest differential.

RESTORE DATABASE MyDatabase_BU
	 FROM DISK = 'C:\SQL\Backup\MyDatabase_Diff.bak'
	 WITH FILE = 1,
	 NORECOVERY

After adding the differential, we’ll add the transaction log from 12:00:

RESTORE LOG MyDatabase_BU
	 FROM DISK = 'C:\SQL\Backup\MyDatabase.trn'
	 WITH NORECOVERY

And when that’s done, we’ll restore the transaction log we just created, but with the “point in time” option, STOPAT. Since the customer did the delete at 15:30, we’ll restore up to 15:29. Notice that we’re using WITH RECOVERY on the last one!

RESTORE LOG MyDatabase_BU
	 FROM DISK = 'C:\SQL\Backup\MyDatabase_CustomLog.trn'
	 WITH RECOVERY, STOPAT = '2008-10-24 15:29'

After this is done, you can copy the records from MyDatabase_BU back into MyDatabase, and your customer will be VERY happy 🙂
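The copy-back step depends entirely on your schema. As a sketch (the table name Documents and the key column PrimKey are placeholders here, not a prescription), it could look something like:

```sql
-- Copy back only the rows that are missing from the live database.
-- dbo.Documents / PrimKey are hypothetical names - use your own table and key.
INSERT INTO MyDatabase.dbo.Documents
SELECT BU.*
	FROM MyDatabase_BU.dbo.Documents AS BU
	WHERE NOT EXISTS (SELECT *
				FROM MyDatabase.dbo.Documents AS D
				WHERE D.PrimKey = BU.PrimKey)
```

If the table has an IDENTITY column you’ll also need SET IDENTITY_INSERT and an explicit column list.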

SQL Server 2008 Express Edition

Looking for Management Studio? Here’s the direct link: http://download.microsoft.com/download/7/9/4/794bfafa-aea7-45d4-a6ea-4e92f09918e3/SQLEXPRWT_x86_ENU.exe
This includes both the server (Express Edition) and Management Studio. In 2008, Management Studio includes the Management node on the server, and is also compatible with SQL Server Agent 🙂

Table Valued Parameters?

This is a new feature in SQL Server 2008 that lets you send a table to an SP or UDF. I’ve been asked many times how to handle this in 2005, and I’ve now decided to post a blog about it. Before you read more, please consider: IS IT REALLY NECESSARY to do this from the client? Can we do everything on the SQL Server? If so, DO IT! Everything in my following example could be done on the server; it’s only here to show you how to do it if you really need to send the data from the client.

Let’s say we got a bunch of data we want to send to a SP. In this case, I’ve just built up a dataset from stbv_System_Users, where I’ve got one column, Login.

Dim vUsersDataSet As New DataSet
Dim vUsers As New afRecordSource("stbv_System_Users")

vUsers.SelectColumns.Add("Login")
vUsers.MaxRecords = 10

vUsersDataSet.Tables.Add(afAccessLayer.GetData(vUsers))

Dim vProcedure As New AppFrame3.Common.Data.afProcedureCall
vProcedure.ProcedureName = "astp_Vidar_Test1"
vProcedure.Parameters.Add("MyXML", vUsersDataSet.GetXml().ToString())

Try
	AppFrame3.Common.Data.afAccessLayer.ExecProcedure(vProcedure)
Catch ex As Exception
	AppFrame3.Win.CR3.HandleException(ex)
End Try

Here you see I’ve put it in a dataset, then used .GetXml to send it as a parameter to my SP:

CREATE PROCEDURE astp_Vidar_Test1 (@MyXML AS XML)
AS

DECLARE @DocID AS INT

EXEC sp_xml_preparedocument @DocID OUTPUT, @MyXML

SELECT *
	FROM OPENXML(@DocID, '/NewDataSet/Table1', 2)
	WITH (Login NVARCHAR(128))

EXEC sp_xml_removedocument @DocID

My SP takes one parameter and “converts” it to a rowset using OPENXML. From here I could go on, inserting these records somewhere. OPENXML takes three parameters. The first is the document id, created by sp_xml_preparedocument. The second is the path in the XML you want to handle. The third is the input type: whether the data is in elements or attributes. 2 means elements, 1 means attributes. Below you’ll see the XML created by the .GetXml method on the dataset. I don’t care about the PrimKey field, so I’ve only specified the Login field in the WITH clause.

<NewDataSet>
  <Table1>
    <Login>vidar</Login>
    <PrimKey>63444aa2-e0b8-4c89-b476-229c81145961</PrimKey>
  </Table1>
  <Table1>
    <Login>vidar2</Login>
    <PrimKey>e72d74d0-3c8f-4256-b188-8001557d9849</PrimKey>
  </Table1>
</NewDataSet>
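For comparison, in SQL Server 2008 itself you would use a real table-valued parameter instead of XML. A minimal sketch (the type name LoginList and procedure name astp_Vidar_Test2 are my own, not from AppFrame):

```sql
-- A table type describing the rows the caller will send
CREATE TYPE LoginList AS TABLE (Login NVARCHAR(128))
GO

-- Table-valued parameters must be declared READONLY
CREATE PROCEDURE astp_Vidar_Test2 (@Logins AS LoginList READONLY)
AS
SELECT Login
	FROM @Logins
GO

-- Calling it from T-SQL
DECLARE @MyLogins AS LoginList
INSERT INTO @MyLogins (Login) VALUES ('vidar')
EXEC astp_Vidar_Test2 @Logins = @MyLogins
```

From .NET you would pass a DataTable as a parameter with SqlDbType.Structured instead of the XML string.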

Generate Change Script

Many of you probably noticed TeamDoc had some issues today. It was my fault, but before you shoot me, let me explain what happened, so it won’t happen to you! You’ve maybe noticed that when you change big tables, Management Studio times out, because it takes too long. The workaround for me has always been to press “Generate Change Script”, copy the script to a new query window, and execute it. Since there’s no timeout on query windows, this will run until it finishes. And in ALL cases this has worked, until today.

We found a bug in TeamDoc, related to the Folder field in the Documents table allowing NULL. We were filtering WHERE Folder + ‘ ‘ + Title LIKE ‘%’ + @SearchText + ‘%’. When Folder is NULL, the whole concatenation becomes NULL and the filter never matches. So, instead of changing all SPs to handle NULL values, we decided to change the field in the table to not allow NULL, with (‘’) as the default value (we had only just changed it to allow NULL). Since I’ve changed the design of this table about 1000 times before, I knew it would time out, so I automatically pressed the “Generate Change Script” button, copied the script to a new window, and executed it. Guess what happened! YEAH! Errors.

What happens when you change a field like this is that Management Studio creates a new table (tmp_your_table_name), copies the records into it, recreates the original table (DROP, CREATE), inserts the records back from tmp_your_table_name, and puts all the indexes etc. back on. The annoying part is that when inserting into tmp_your_table_name, it’s not clever enough to put an ISNULL around the Folder field. And what happens? Well, it doesn’t insert any records, then it drops the original table and creates it again. What about transactions? Well, it creates transactions inside the script, but not AROUND the script, so there’s no “master” transaction. Therefore, the data of the Documents table was lost. JIPPI!
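For a change like this you can usually skip the table rebuild entirely and do it with plain T-SQL. A sketch, assuming the Documents table and Folder column from above (the NVARCHAR(255) length is my assumption, match your actual column):

```sql
-- Backfill existing NULLs first, otherwise the NOT NULL change fails
UPDATE Documents
	SET Folder = ''
	WHERE Folder IS NULL

-- Change the column in place - no tmp table, no DROP/CREATE
ALTER TABLE Documents
	ALTER COLUMN Folder NVARCHAR(255) NOT NULL

-- Add the default for new rows
ALTER TABLE Documents
	ADD CONSTRAINT DF_Documents_Folder DEFAULT ('') FOR Folder
```

This is a metadata-level change plus one UPDATE, so it avoids both the timeout and the risky INSERT round-trip.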

We quickly ran to JanCB to restore a backup, to get the records back as soon as possible. We had a backup from about 09:00, and a log from about 12:00. But before 12:00 I had done some tests (setting the initial size of the MDF and LDF files), and taken some backups to check if it had any influence on the .bak file. Btw, there was no change in the size of the bak file. Then we remembered that the old server was still running, so we picked the records of the Documents table from there. That left us missing only the TeamDocs created from Sunday night till now. There were only 3, so we recreated them manually by inserting based on the inputs table.

If I hadn’t done any tests, including backups, we could have restored the backup from 09:00, then the log from 12:00 and then a transaction log to a point-in-time, right before I did the change, and picked the data from that database. That would have taken a lot more time though…

ANYHOW. Everything is up and running, and there shouldn’t be anything missing. Sorry for making TeamDocs unavailable 🙂

NB! We found one very nice feature in 2008. I was aware of it, but not that it was THAT efficient: backup compression! Without compression, it takes about 30 minutes to create the 89GB bak file. With compression, it takes 4 minutes, and the bak file is no more than 34GB! That’s impressive!
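Turning it on is a single WITH option on the backup command (the path here is just an example):

```sql
BACKUP DATABASE [MyDatabase]
	TO DISK = 'C:\SQL\Backup\MyDatabase.bak'
	WITH COMPRESSION
```

You can also make it the server default with sp_configure ‘backup compression default’, so every backup is compressed without changing the scripts.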

Main Database Server Upgrade 2

We are now finished upgrading. Because of a missing SCSI controller, we couldn’t set up all the disks as we wanted, but as soon as the controller is up and running, tempdb and the LDF will be moved to separate arrays. The new server has double the amount of both CPUs and memory, and the disks it’s running on are quicker. We did some quick performance tests, and it looked VERY promising, although we’ll have to wait till Monday to see the actual performance (because we were the only users logged on).

We did some other minor things, like adding more memory to TSQLDEV1 and adding a new DHCP range.

For server monitoring we’re using a tool called “What’s up”. 30 seconds after we unplugged the old server, all of us (me, JanCB, Leif and Åsmund) got an SMS saying it was down. We’ve got monitoring on ALL servers in our network. Nice to see that it’s working 🙂 We also have Securitas monitoring our main server room, in case the power shuts down. To test this, we used a hair dryer on the sensor. The sensor is set up to alarm Securitas if the temperature exceeds 40 degrees Celsius. We heated it up to 42, and 5 minutes later JanCB got a phone call from Securitas saying our server room was overheated! Before we got this monitoring, we only had “What’s up” to notify us. But what happens if the power shuts down? Well, all servers run on UPSes. After some time, the batteries in the UPSes run empty, so the servers shut down. What happens then? The “What’s up” server is down, so it can’t notify us. One extra cool “feature” we didn’t know about was that after we heated the sensor to 40 degrees, the alarm in the building started to yell. We also got messages from another monitoring system that the noise level in the server room was too high 😛 SWEEEEEET!

After everything was set up, we did some tests. TeamDoc, internal applications, MarketMaker etc. Everything seems to be working. There was one SP causing trouble though: sstp_Database_Maintenance_RefreshAllViews. It works perfectly in 2005, but in 2008 I had to add ROLLBACK TRANSACTION inside the catch block. If not, it “thought” that all views failed, for some weird reason. I’ll read more about it tomorrow. Have to get some sleep now. Here’s the code:

DECLARE @ViewName NVARCHAR(128)
DECLARE @ViewsToRefresh TABLE (
	ViewName NVARCHAR(128),
	TSQLCommand NVARCHAR(MAX) NULL
)

INSERT INTO @ViewsToRefresh (ViewName)
SELECT [name]
	FROM sys.sysobjects
	WHERE LEFT([name], 4) IN ('atbl', 'stbl',
				'arpt', 'atbv', 'atbx', 'aviw',
				'stbv', 'stbx', 'sviw')
	  AND xtype='V'
	ORDER BY [name]

WHILE EXISTS (SELECT * FROM @ViewsToRefresh)
BEGIN
	SELECT TOP 1
	  @ViewName = ViewName
		FROM @ViewsToRefresh

	BEGIN TRY
		EXEC sp_refreshview @ViewName
	END TRY
	BEGIN CATCH
		ROLLBACK TRANSACTION
		PRINT 'Error refreshing ' + @ViewName
	END CATCH

	DELETE
		FROM @ViewsToRefresh
		WHERE ViewName = @ViewName
END

Main Database Server Upgrade

As many of you probably already know, we’re going to upgrade our main database server this weekend. We’re starting preparations tomorrow, and boy am I looking forward to it! The “old” TVS2 is going to be our new server. It has been the technology department’s main virtual host server. It’s running 2 x quad-core processors at 2.66GHz, with 16GB RAM. We will run Windows Server 2008, and of course SQL Server 2008! The new disk setup will be as follows:

Name  Function          RAID     Disks                    Total Capacity
C:\   Operating system  RAID 1   2 x 146GB 15k RPM SAS    146GB
D:\   Database backups  RAID 10  4 x 300GB 10k RPM SAS    600GB
E:\   MDF file          RAID 10  14 x 146GB 15k RPM SAS   1022GB
F:\   LDF file          RAID 10  12 x 36GB 10k RPM SCSI   216GB
G:\   tempdb            RAID 1   2 x 36GB 10k RPM SCSI    36GB

We discussed this disk setup with Jon Jahren, and he “approved” it. The only thing he commented was that to get absolutely maximum performance, we should consider SSD disks. Because of the price (SSD disks cost more than double what SAS disks do), and the fact that we already had SAS, we didn’t go for SSDs. At first we were thinking of splitting the MDF into several files, placed on several arrays, but according to Jon we would get more trouble administrating it than we would gain in performance, although the SQLCAT team (Customer Advisory Team) says this is “best practice” for performance. We trust Jon, since he’s been helping companies much bigger than us with the same problems.

We start preparing the new server tomorrow. Copying backups etc. will take some time, even though we’ve got a 1Gb network between the servers. The downtime period is set from 21:00 to 09:00 (12 hours) on Saturday, but if everything goes as planned we’ll be finished before I usually go to bed 😉 I won’t say when!

We are also doing some other minor upgrades. Among them is adding more disks to the server which is running PIMS_R3.

I’ve got a little competition for you (we LOVE competitions in tech). First one to answer gets a prize!
Who is this?

UPDATE World FROM Ølensvåg

I have now set up some of my “preferred” sessions for TechEd. Of course, database sessions 🙂 This year there are 17 sessions I find very interesting. I’m not really sure about the others, but I’ll read some more about them later. The most interesting on my list are “Managing Unstructured and Semi-Structured Data in SQL Server 2008”, “Performance Enhancements in SQL Server 2008”, “Understanding SQL Server 2008 Security” and “Tell Us Where It Hurts!”. I’m really looking forward to going to TechEd. It’s under 4 weeks till we’re going! 😀

Today we had a FIFA tournament in Ølensvåg. FIFA is a football (for USA dudes: soccer) game on Playstation 3 (and probably other consoles). We were 10 guys, choosing one team each, playing first in groups, then semi-finals and final. Guess who won. Yeah, you’re correct. Of course it was me! Easy peasy!

I’ve started a new project today. In Baker they’ve been struggling with performance issues. They’re using web (with LOTS of features) and CR2. We decided the best solution was to build it all in CR3, from scratch: a new (R3) data model, new client etc. Nils Arne and I have been involved in this solution before, so we gathered and made a solid data model to start with. Stian Skjerveggen is with me, developing CR3 forms. I’m of course taking care of the database stuff. Kai Fischer will probably also join us in a couple of days. It’s really great to start from scratch, with a clean data model. We’ll always manage to port the old data to the new data model, so that’s not something I’m worried about. The thing I’m worried about is time, as always. In one week, we’re supposed to have something to show. I’m staying positive about this, because I know it’s manageable.

I’m looking forward to that day when we can say that everyone’s running on R3. Maybe within a couple of months? 🙂

Common mistakes

When doing QA on SQL objects, I’ve come across some very common mistakes. Number one has to be people forgetting that the Inserted and Deleted views inside triggers may contain more than one record.
This is a very good (and simplified) example of code from an insert trigger:

DECLARE @Login AS NVARCHAR(128)

SELECT @Login = Login
	FROM Inserted

IF NOT EXISTS (SELECT * FROM stbl_System_Users WITH (NOLOCK) WHERE Login = @Login)
BEGIN
	INSERT INTO stbl_System_Users (Login) VALUES (@Login)
END

This will only work if you’re inserting one record, but what if you’re inserting several? The code will run for only ONE (arbitrary) record.
The solution is very simple:

INSERT INTO stbl_System_Users
  (Login)
SELECT
  I.Login
	FROM Inserted AS I
	WHERE NOT EXISTS (SELECT *
				FROM stbl_System_Users WITH (NOLOCK)
				WHERE Login = I.Login)

In some cases you might want to execute an SP for each record instead of doing an INSERT. Here’s the same mistake with EXEC:

DECLARE @Login AS NVARCHAR(128)

SELECT @Login = Login
    FROM Inserted

IF NOT EXISTS (SELECT * FROM stbl_System_Users WITH (NOLOCK) WHERE Login = @Login)
BEGIN
    EXEC astp_Some_SP @Login = @Login
END

Solution to this:

DECLARE @Login AS NVARCHAR(128)

SELECT  I.Login
	INTO #LoopLogins
	FROM Inserted AS I
	WHERE NOT EXISTS (SELECT *
				FROM stbl_System_Users WITH (NOLOCK)
				WHERE Login = I.Login)

WHILE EXISTS (SELECT * FROM #LoopLogins)
BEGIN
	SELECT TOP 1
	  @Login = Login
		FROM #LoopLogins

	EXEC astp_Some_SP @Login = @Login

	DELETE
		FROM #LoopLogins
		WHERE Login = @Login
END