Blog ala Vidar

SQL, AppFrame and other cool technologies

Tag Archives: SQL Server 2008

SQL Weekend

From left: Stian, Erik, Svein Even, Erik Martin, Vidar, Kai Roger, Jekaterina, Werner, Mauricio. (Photo: Werner Waage)

This weekend, nine others from my department and I had what we called a SQL Weekend. No, not a spa weekend, a SQL Weekend. The goal was to learn as much as possible about SQL Server and start studying for the 70-433 (TS: Microsoft SQL Server 2008, Database Development) exam, but also to socialize. We arrived at Rica Maritim Hotel an hour after work and went straight to Naustet, the hotel's restaurant, for dinner. If you're in Haugesund and want a good meal, you really have to go there. It was delicious!

After dinner we headed up to one of the meeting rooms we had booked and started preparing our talks for the next day. The plan was that each person would pick a chapter from the book and give a presentation on it. Mine was Chapter 4: Using Additional Query Techniques. I've already passed this exam, and my presentation was done WAY before this weekend, so I used my "spare" time to help the others prepare their sessions.

When we started getting tired of PowerPoint, we all went to the bar to have something to drink and socialize a bit. It’s nice to have some time chatting about something else just for a couple of hours.

Stian having his talk about SQLCLR and FileStream.

The next day (Saturday) started with breakfast at 08:00. Breakfast was, like the dinner the day before, one of the best meals I've ever had. After eating a bit too much we went back to the meeting room to continue preparing our presentations. A couple of hours later Werner Waage started the show with his session (Chapter 1: Data Retrieval), and everyone else followed with their chapters. After a while it became a sport to refer to my blog, and at the end Stian Skjerveggen topped it off by putting a sexy picture of me in his slides.

To summarize: this was an awesome weekend. Everyone learned a lot, we loved the service from the hotel, the food was great and we got to hang out together. I'm really hoping we can do this again, and maybe next time invite people from outside our department. Anyone want to join? 🙂

If you click on the post title, you can download the presentations and demos.

SQL Server 2008 R2

R2 has finally hit RTM (Release To Manufacturing). Is it worth upgrading? Always! But there aren't many changes that aren't related to the BI (Business Intelligence) part. Here are the keywords: better SQL Azure compatibility, support for 256(!!!) logical processors (2008 "only" supported 64), and SSMS dashboards.

They've also done a couple of things regarding licensing: Datacenter and Parallel Data Warehouse editions have been introduced. Oh yeah, and Express Edition will now support databases up to 10GB, instead of 4GB.

Recursive CTEs

Some might panic now and think "OMG WTF does this mean LOL?!?!?!?!". Let's start with the second word (or acronym): CTE stands for Common Table Expression and means that you can write something like a sub-select in T-SQL. An example would be:

WITH MyCTE AS
(
	SELECT TOP 1 *
		FROM Employees
)
SELECT * FROM MyCTE

This doesn’t make much sense, as you could just do:

SELECT TOP 1 * FROM Employees

But, if we add the other word, “Recursive”, which means doing something repeatedly, we might add some sense to it. First we’ll create a test table and put in some test data.


CREATE TABLE Employees (
	ID INT PRIMARY KEY,
	Name NVARCHAR(50),
	ManagerID INT NULL
)

INSERT INTO Employees
  (ID, Name, ManagerID)
SELECT 1, 'Petter', NULL UNION ALL
SELECT 2, 'Morten', 1 UNION ALL
SELECT 3, 'Trygve', 2 UNION ALL
SELECT 4, 'Vidar', 3 UNION ALL
SELECT 5, 'Håvard', 3 UNION ALL
SELECT 6, 'Stian', 5 UNION ALL
SELECT 7, 'Trine', 5 UNION ALL
SELECT 8, 'Werner', 4 UNION ALL
SELECT 9, 'Jakob', 2 UNION ALL
SELECT 10, 'Geir Ove', 9 UNION ALL
SELECT 11, 'Øystein', 10

As you can see, Petter is the CEO. He's only got one person directly under him, and that's Morten. Under Morten you've got Trygve, who has two directly under him: Vidar and Håvard. THIS IS JUST TEST DATA! Now that we've got some test data, we can try to find out who has whom as manager. We want it in the correct order, so that the hierarchy is correct: Petter first, then Morten, then Trygve and Jakob, then Trygve and Jakob's employees. How do we solve this? EASY!

WITH MyCTE (ID, Name, ManagerID, SortOrder) AS
(
	SELECT ID, Name, ManagerID, 0 AS SortOrder
		FROM Employees
		WHERE ManagerID IS NULL
	UNION ALL
	SELECT E.ID, E.Name, E.ManagerID, C.SortOrder + 1
		FROM Employees AS E
		INNER JOIN MyCTE AS C
			ON E.ManagerID = C.ID
)
SELECT ID, Name, ManagerID, SortOrder
	FROM MyCTE
	ORDER BY SortOrder, ManagerID

First we select the anchor row, the one where ManagerID IS NULL, and then we recursively join the Employees table to the CTE and UNION the results. Try it and see for yourself!
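One thing worth knowing that I didn't mention: by default a recursive CTE aborts with an error after 100 recursion levels, and you can change that with the MAXRECURSION query hint on the outer SELECT. A quick sketch, assuming the Employees table from above:

```sql
-- By default a recursive CTE errors out after 100 recursion levels.
-- The MAXRECURSION hint overrides the limit (here: fail fast if the
-- hierarchy is deeper than 32 levels, which would indicate bad data).
WITH MyCTE (ID, Name, ManagerID, SortOrder) AS
(
	SELECT ID, Name, ManagerID, 0
		FROM Employees
		WHERE ManagerID IS NULL
	UNION ALL
	SELECT E.ID, E.Name, E.ManagerID, C.SortOrder + 1
		FROM Employees AS E
		INNER JOIN MyCTE AS C ON E.ManagerID = C.ID
)
SELECT *
	FROM MyCTE
	OPTION (MAXRECURSION 32)
```

Handy insurance if someone manages to insert a row that is its own (grand)manager, creating a cycle.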

Off topic… My new awesome background image, thanks to Werner:

Sparse Columns

You've probably seen this in the feature list for SQL 2008, but what exactly is it, and how can you take advantage of it? A sparse column simply doesn't store NULLs. In a normal NULLable column, the NULLs are actually stored. They don't use much space, but still. So, why is this a problem? Well, if you have an index on a NULLable column, the index will contain as many NULLs as the column and therefore not be particularly selective. Some tables, like the Objects table in CMS, have MANY NULLable columns, which on most records are set to NULL.

A sparse column is stored as XML, so you only want to use it on columns that are NULL at least around 50% of the time, since there's an overhead for each column that does have data. For example, a non-sparse integer column uses 4 bytes; the same column marked SPARSE uses 8 bytes, but only for non-NULL values. Let's do a little experiment.

CREATE TABLE test1 (
	Field1 INT,
	Field2 INT SPARSE NULL,
	Field3 INT SPARSE NULL
)

CREATE TABLE test2 (
	Field1 INT,
	Field2 INT NULL,
	Field3 INT NULL
)

INSERT INTO test1
  (Field1, Field2, Field3)
SELECT TOP 100000 1, NULL, NULL
	FROM sys.syscomments AS C
	CROSS JOIN sys.syscolumns

INSERT INTO test2
  (Field1, Field2, Field3)
SELECT TOP 100000 1, NULL, NULL
	FROM sys.syscomments AS C
	CROSS JOIN sys.syscolumns

EXEC sp_spaceused 'test1'
EXEC sp_spaceused 'test2'

We create two almost identical tables; the only difference is that Field2 and Field3 are SPARSE in test1. Then we populate both tables with 100,000 records, with Field2 and Field3 set to NULL. This is the result I got from sp_spaceused:

This tells me that for 100,000 records, two NULLable columns save me about 1 MB of data when using sparse. That doesn't sound like much, but compared to the table without sparse it's almost 50%. And that's with only two NULLable columns and no indexes. One of our customers' CMS databases has over 300,000 records in its Objects table. That table has 55 NULLable columns, and doing a SELECT * on it, I see that most of them are actually NULL more than 80% of the time. Five of them are indexed, too.

So, what's the downside of sparse columns? Well, for starters they use double the space to store values that are not NULL. You cannot use the text, ntext, geometry or geography types, and you can't put default values on them. They're also not supported by merge replication. But they are supported by AppFrame R3, so start using them!
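Sparse columns also pair nicely with filtered indexes, another 2008 feature: the index only contains rows that actually have a value, so it stays small and selective. A sketch, using the test1 table from the experiment above:

```sql
-- A filtered index on a sparse column skips the (many) NULL rows
-- entirely, keeping the index small and highly selective.
CREATE NONCLUSTERED INDEX IX_test1_Field2
	ON test1 (Field2)
	WHERE Field2 IS NOT NULL
```

This directly addresses the "index full of NULLs" problem mentioned earlier.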

One thing I forgot to mention is column sets. If you add a column set to a table with sparse columns, the sparse columns will be stored in this column (remember, it's all just XML). Keep in mind that using column sets comes with more restrictions. For instance, you can't change a column set: if you already have sparse columns in your table and want to add a column set, you need to re-create the table.

One thing that's "cool" about column sets, though, is that when you SELECT * you get the column set (with your sparse columns) as XML:

CREATE TABLE test3 (
	Field1 INT,
	Field2 INT SPARSE NULL,
	Field3 INT SPARSE NULL,
	MyColumnSet XML COLUMN_SET FOR ALL_SPARSE_COLUMNS
)

INSERT INTO test3
  (Field1, Field2, Field3)
VALUES
  (1, 1, 1),
  (2, NULL, NULL),
  (3, 3, 3),
  (4, NULL, NULL)

SELECT *, Field3
	FROM test3
	WHERE Field2 = 1

The result:

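You can also go the other way and write to the sparse columns through the column set by handing it an XML fragment. A minimal sketch, assuming the test3 table above with its column set named MyColumnSet:

```sql
-- Updating through the column set: each element names a sparse column.
-- Careful: any sparse column NOT mentioned in the XML is set to NULL.
UPDATE test3
	SET MyColumnSet = '<Field2>20</Field2><Field3>30</Field3>'
	WHERE Field1 = 2
```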
If you think this is something you should use, read more about it here:

My favorite bloggers at SQL Skills:


Using Sparse Columns

Using Column Sets

Why convert to SQL Server 2008?

I've been asked this question by a couple of people. Microsoft released SP1 a couple of weeks ago, but what's the difference between 2005 and 2008, and when is the next version coming? The next version of SQL Server is scheduled for 2011 (NB! This is a rumor!!!). That's two years away, and by then SQL Server 2005 will be… 6 years old. Anyhow, here's a list of nice features that you don't get in 2005:

For developers:

  • New column types – Date, Time, Datetimeoffset, Datetime2
  • MERGE statement
  • Sparse Columns – Perform very well in large tables with many NULLable columns
  • Spatial Data types – Geometry etc
  • Transact-SQL IntelliSense
  • Table Valued Parameters
  • Grouping sets
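
To give a taste of one of these, here's a minimal MERGE sketch (the table and column names are made up for illustration):

```sql
-- Upsert in a single statement: update rows that match on the key,
-- insert the ones that don't exist yet in the target table.
MERGE INTO Customers AS target
USING IncomingCustomers AS source
	ON target.CustomerID = source.CustomerID
WHEN MATCHED THEN
	UPDATE SET target.Name = source.Name
WHEN NOT MATCHED BY TARGET THEN
	INSERT (CustomerID, Name)
	VALUES (source.CustomerID, source.Name);
```

In 2005 you'd need a separate UPDATE plus an INSERT with a NOT EXISTS check to get the same result.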

For DBAs:

  • Easier to maintain multiple servers
  • Transparent Data Encryption – Encrypt the physical database files without any application changes
  • Auditing
  • Resource Governor – ability to set "maximum CPU/memory usage" on heavy queries, users etc
  • Data Compression – Reduce the storage requirements of both database and backups
  • Performance Data Collection – Makes it easy to find issues and bad performing queries
  • Integrated Full Text Search
  • Policy management
  • Database Mirroring – Improved compared to 2005 with automatic page recovery of corrupted pages
  • Hot Add CPU
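
Some of these are practically one-liners to try out. For instance, data compression (table name hypothetical, and note that in 2008 this is an Enterprise Edition feature):

```sql
-- Rebuild a table with page compression to reduce storage for both
-- the data files and, as a side effect, the backups of them.
ALTER TABLE dbo.BigTable
	REBUILD WITH (DATA_COMPRESSION = PAGE)
```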

SQL Server 2008 Express Edition

Looking for management studio? Here’s the direct link:
This includes both the server (Express Edition) and Management Studio. In 2008, this edition of Management Studio includes the Management node on the server, and is also compatible with SQL Server Agent 🙂

Generate Change Script

Many of you probably noticed TeamDoc had some issues today. It was my fault, but before you shoot me, let me explain what happened, so it won't happen to you! You've maybe noticed that when you make changes to big tables, Management Studio times out because it takes too long. My workaround has always been to press "Generate Change Script", copy the script to a new query window, and execute it. Since there's no timeout set on query windows, this will run until it's finished. And it has worked in ALL cases, until today.

We found a bug in TeamDoc, related to the Folder field in the Documents table allowing NULL. We were filtering WHERE Folder + ' ' + Title LIKE '%' + @SearchText + '%'. When Folder is NULL, this doesn't work. So, instead of changing all SPs to handle NULL values, we decided to change the field in the table to not allow NULL, with ('') as the default value (since we had just changed it to allow NULL). I've changed the design of this table about 1000 times before, so I knew it would time out, and I automatically pressed the "Generate Change Script" button, copied the script to a new window, and executed it. Guess what happened! YEAH! Errors.
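For the record, the SP-side fix we decided against would have looked something like this (column names from the Documents table above; the point is that NULL + anything is NULL, which makes the whole predicate fail):

```sql
-- Treat a NULL Folder as an empty string so the string concatenation
-- doesn't turn the entire LIKE predicate into NULL.
SELECT *
	FROM Documents
	WHERE ISNULL(Folder, '') + ' ' + Title LIKE '%' + @SearchText + '%'
```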

What happens when you change a field like this is that Management Studio creates a new table (tmp_your_table_name), puts the records in it, recreates the original table (DROP, CREATE), inserts the records back from tmp_your_table_name, and puts all the indexes etc. back on. The annoying part is that when inserting into tmp_your_table_name, it's not clever enough to put an ISNULL around the Folder field. And what happens? Well, it doesn't insert any records, then it drops the original table and creates it again. What about transactions? Well, it creates transactions inside the script, but not AROUND the script, so there's no "master" transaction. Therefore, the data in the Documents table was lost. Yippee!
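The lesson: when you run a generated change script by hand, wrap it in your own transaction so an error anywhere rolls the whole thing back. A sketch of the pattern (the body is whatever "Generate Change Script" produced):

```sql
-- XACT_ABORT makes any run-time error abort and roll back the batch,
-- and the outer transaction acts as the missing "master" transaction.
-- The script's own inner BEGIN/COMMIT pairs just nest inside it.
SET XACT_ABORT ON
BEGIN TRANSACTION

-- ... paste the generated change script here ...

COMMIT TRANSACTION
```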

We quickly ran to JanCB to restore a backup, to get the records back as soon as possible. We had a full backup from about 09:00, and a log backup from about 12:00. But before 12:00 I had done some tests (setting the initial size of the MDF and LDF files) and taken some backups to see if it had any influence on the .bak file (by the way: no change in the size of the bak file). Then we remembered that the old server was still running, so we picked the records of the Documents table from there. That left us missing only the TeamDocs created between Sunday night and now. There were only three, so we recreated them manually by inserting based on the inputs table.

If I hadn't done any tests, including the backups, we could have restored the backup from 09:00, then the log from 12:00, and then the transaction log to a point in time right before I made the change, and picked the data from that database. That would have taken a lot more time, though…

ANYHOW. Everything is up and running, and there shouldn’t be anything missing. Sorry for making TeamDocs unavailable 🙂

NB! We found one very nice feature in 2008. I was aware of it, but not that it was THAT efficient: backup compression! Without compression, it takes about 30 minutes to create the 89GB bak file. With compression, it takes 4 minutes, and the bak file is no more than 34GB! That's impressive!
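Enabling it is a one-word change to the backup statement (database name and path here are just examples):

```sql
-- COMPRESSION is new in SQL Server 2008. Creating compressed backups
-- requires Enterprise Edition, but any edition can restore them.
BACKUP DATABASE TeamDoc
	TO DISK = 'D:\Backup\TeamDoc.bak'
	WITH COMPRESSION, INIT
```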

Main Database Server Upgrade 2

We are now finished upgrading. Because of a missing SCSI controller, we couldn't set all the disks up the way we wanted, but as soon as the controller is up and running, tempdb and the LDF will be moved to separate arrays. The new server has double the amount of both CPUs and memory, and the disks it's running on are quicker. We did some quick performance tests, and it looked VERY promising, although we'll have to wait till Monday to see the actual performance (since we were the only users logged on).

We did some other minor things, like add more memory to TSQLDEV1 and add a new DHCP-range.

For server monitoring we're using a tool called "What's Up". 30 seconds after we unplugged the old server, all of us (me, JanCB, Leif and Åsmund) got an SMS saying it was down. We've got monitoring on ALL servers in our network. Nice to see that it's working 🙂 We've also got Securitas monitoring our main server room, in case the power shuts down. To test this, we used a hair dryer on the sensor, which is set up to alert Securitas if the temperature exceeds 40 degrees Celsius. We heated it up to 42, and 5 minutes later JanCB got a phone call from Securitas saying our server room was overheated! Before we got this monitoring, we only had "What's Up" to notify us. But what happens if the power shuts down? Well, all servers run on UPSes. After some time the UPS batteries are empty, so the servers shut down. And then? The "What's Up" server is down too, so it can't notify us. One extra cool "feature" we didn't know about: after we heated the sensor to 40 degrees, the alarm in the building started blaring. We also got messages from another monitoring system that the noise level in the server room was too high 😛 SWEEEEEET!

After everything was set up, we did some tests: TeamDoc, internal applications, MarketMaker etc. Everything seems to be working. There was one SP causing trouble, though: sstp_Database_Maintenance_RefreshAllViews. It works perfectly in 2005, but in 2008 I had to add ROLLBACK TRANSACTION inside the catch block. If not, it "thought" that all views failed, for some weird reason. I'll read more about it tomorrow. Have to get some sleep now. Here's the code:

DECLARE @ViewName NVARCHAR(128)

DECLARE @ViewsToRefresh TABLE (
	ViewName NVARCHAR(128)
)

INSERT INTO @ViewsToRefresh (ViewName)
SELECT [name]
	FROM sys.sysobjects
	WHERE LEFT([name], 4) IN ('atbl', 'stbl',
				'arpt', 'atbv', 'atbx', 'aviw',
				'stbv', 'stbx', 'sviw')
	  AND xtype = 'V'
	ORDER BY [name]

WHILE EXISTS (SELECT * FROM @ViewsToRefresh)
BEGIN
	SELECT TOP 1
	  @ViewName = ViewName
		FROM @ViewsToRefresh

	BEGIN TRY
		EXEC sp_refreshview @ViewName
	END TRY
	BEGIN CATCH
		PRINT 'Error refreshing ' + @ViewName
		ROLLBACK TRANSACTION -- had to add this for 2008
	END CATCH

	DELETE
		FROM @ViewsToRefresh
		WHERE ViewName = @ViewName
END

Main Database Server Upgrade

As many of you probably already know, we're going to upgrade our main database server this weekend. We're starting preparations tomorrow, and boy am I looking forward to it! The "old" TVS2 is going to be our new server. It has been the technology department's main virtual host server, running 2 quad-core processors at 2.66GHz each, with 16GB RAM. We'll run Windows Server 2008, and of course SQL Server 2008! The new disk setup will be as follows:

Name Function RAID Disks Total Capacity
C:\ Operating System RAID 1 2 x 146GB 15k RPM SAS 146GB
D:\ Backup databases RAID 10 4 x 300GB 10k RPM SAS 600GB
E:\ MDF file RAID 10 14 x 146GB 15k RPM SAS 1022GB
F:\ LDF file RAID 10 12 x 36GB 10k RPM SCSI 216GB
G:\ tempdb RAID 1 2 x 36GB 10k RPM SCSI 36GB

We discussed this disk setup with Jon Jahren, and he "approved" it. His only comment was that to get absolutely maximum performance, we should consider SSD disks. Because of the price (SSDs cost more than double what SAS disks do), and the fact that we already had SAS, we didn't go for SSDs. At first we were thinking of splitting the MDF file into several files placed on several arrays, but according to Jon we would get into more trouble administrating it than we would gain in performance, although the SQLCAT team (Customer Advisory Team) says this is "best practice" for performance. We trust Jon, since he's been helping companies much bigger than us with the same problems.

We start preparing the new server tomorrow. Copying backups etc. will take some time, even though we've got a 1Gb network between the servers. The downtime window is set from 21:00 to 09:00 (12 hours) on Saturday, but if everything goes as planned we'll be finished before I usually go to bed 😉 I won't say when!

We are also doing some other minor upgrades, among them adding more disks to the server running PIMS_R3.

I’ve got a little competition for you (we LOVE competitions in tech). First one to answer gets a prize!
Who is this?

Hot tips after installing SQL Server 2008 management studio

After you've installed 2008, there are two issues annoying my shiny metal ass:
1.) Shortcuts don't work properly. CTRL + N normally opens a new window, but not anymore! To fix this, go to Tools -> Options -> Environment -> Keyboard and click "Reset to Default". Now it works 🙂 You'd think you'd get the defaults when installing, but no…
2.) For (stupid) security reasons, Management Studio doesn't allow you to make changes to a table that force it to be rebuilt. Since we always have PrimKey, Created, CreatedBy etc. at the end of the table, any field you add has to go before the PrimKey field, and therefore the table needs to be rebuilt. To fix this, go to Tools -> Options -> Designers -> Table and Database Designers and uncheck "Prevent saving changes that require table re-creation".

Both issues have already been reported to Microsoft. The first one was not intentional, but the second one was "by design". I've voted that this should NOT be by design. Let's hope they listen to me this time!