Blog ala Vidar

SQL, AppFrame and other cool technologies

Tag Archives: Maintenance

Backup Stored Procedure

I’ve had a couple of issues with the maintenance plan tool in Management Studio (at least before 2008), so I’ve never really put my trust in it. Instead, I’ve created an SP which is now available as an s-object in AppFrame. It will be included with the next build, but if you need it NOW, here it is:

This will take backups (full, diff or transactional) using default settings, saving files with the following naming syntax:
DatabaseName_backup_201006231500.bak

CREATE PROCEDURE sstp_Database_Maintenance_Backup
(
	@BackupType AS CHAR(4) = 'FULL',
	@BackupPath AS NVARCHAR(300)
)
AS

IF @BackupType NOT IN ('FULL', 'DIFF', 'TRAN')
BEGIN
	RAISERROR('BackupType must be set to FULL, DIFF or TRAN!', 18, 1)
	RETURN
END
ELSE IF ISNULL(@BackupPath, '') = ''
BEGIN
	RAISERROR('BackupPath does not have any value!', 18, 1)
	RETURN
END
ELSE IF RIGHT(@BackupPath, 1) <> '\'
BEGIN
	SET @BackupPath = @BackupPath + '\'
END


DECLARE
  @DBNAME AS NVARCHAR(128),
  @Filename AS NVARCHAR(128),
  @SQL AS NVARCHAR(MAX)


SET @DBNAME = (SELECT DB_NAME())
SET @Filename =
	@DBNAME +
	'_backup_' +
	CONVERT(VARCHAR(8), GETDATE(), 112) +
	REPLACE(CONVERT(VARCHAR(5), GETDATE(), 108), ':', '')

IF @BackupType = 'FULL'
BEGIN
	SET @Filename = @Filename + '.bak'
	SET @SQL = 'BACKUP DATABASE [' + @DBNAME + '] '
END
ELSE IF @BackupType = 'DIFF'
BEGIN
	SET @Filename = @Filename + '.diff'
	SET @SQL = 'BACKUP DATABASE [' + @DBNAME + '] '
END
ELSE IF @BackupType = 'TRAN'
BEGIN
	SET @Filename = @Filename + '.trn'
	SET @SQL = 'BACKUP LOG [' + @DBNAME + '] '
END

SET @SQL = @SQL + 'TO DISK = N''' + @BackupPath + @Filename + ''' WITH '

IF @BackupType = 'DIFF'
BEGIN
	SET @SQL = @SQL + 'DIFFERENTIAL, '
END

SET @SQL = @SQL + 'NOFORMAT, INIT, NAME = N''' + @DBNAME + ' ' + @BackupType
SET @SQL = @SQL + ' Database Backup'', SKIP, NOREWIND, NOUNLOAD, STATS = 10'

EXEC(@SQL)

You can download the .sql file from skydrive.
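
In case you want to test it right away, here's how it can be called (the path is just an example; run it in the database you want to back up, since it uses DB_NAME(), and for the TRAN option the database must of course be in the full or bulk-logged recovery model):

-- Full backup to a local folder (example path, the trailing backslash is optional)
EXEC sstp_Database_Maintenance_Backup
	@BackupType = 'FULL',
	@BackupPath = N'C:\SQL\Backup'

-- Transaction log backup to the same folder
EXEC sstp_Database_Maintenance_Backup
	@BackupType = 'TRAN',
	@BackupPath = N'C:\SQL\Backup\'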


Restore to point-in-time

What do you do if one of your customers calls and says he's just deleted something and he REALLY needs it back? We do this test about once a month, to check that our backup routines are working. For this example, we'll say it's a Friday and the current time is 16:00. Your customer deleted what he needs 30 minutes ago (at 15:30). Here's our setup:

  • A full backup is taken once a week to C:\SQL\Backup\MyDatabase.bak
  • A differential is taken every night to C:\SQL\Backup\MyDatabase_Diff.bak
  • A transaction log backup is taken every day at 12:00 to C:\SQL\Backup\MyDatabase.trn

First thing to do is to take a backup of your transaction log:

BACKUP LOG [MyDatabase]
	 TO DISK = 'C:\SQL\Backup\MyDatabase_CustomLog.trn'
	 WITH NAME = N'MyDatabase-Transaction Log Backup'

When this is done, start restoring your last FULL backup of the database:

RESTORE DATABASE MyDatabase_BU
	 FROM DISK = 'C:\SQL\Backup\MyDatabase.bak'
	 WITH
	 MOVE 'MyDatabase_Data' TO 'C:\SQL\Backup\MyDatabase_BU.MDF',
	 MOVE 'MyDatabase_Log' TO 'C:\SQL\Backup\MyDatabase_BU.LDF',
	 NORECOVERY

We're using "NORECOVERY" because we're going to apply more backups to this database before making it available. In our environment we take a full backup once a week, with differentials every night and transaction log backups at 12:00. Here's the code for adding the diff. NB! You only need the latest differential.

RESTORE DATABASE MyDatabase_BU
	 FROM DISK = 'C:\SQL\Backup\MyDatabase_Diff.bak'
	 WITH FILE = 1,
	 NORECOVERY

After adding the differential, we’ll add the transaction log from 12:00:

RESTORE LOG MyDatabase_BU
	 FROM DISK = 'C:\SQL\Backup\MyDatabase.trn'
	 WITH NORECOVERY

And when that's done, we'll restore the transaction log we just created, but with a "point in time" option, STOPAT. Since the customer did the delete at 15:30, we'll restore up to 15:29. Notice that we're using WITH RECOVERY on the last one!

RESTORE LOG MyDatabase_BU
	 FROM DISK = 'C:\SQL\Backup\MyDatabase_CustomLog.trn'
	 WITH RECOVERY, STOPAT = '2008-10-24 15:29'

After this is done, you can insert the records from MyDatabase_BU into MyDatabase, and your customer will be VERY happy 🙂
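
What that last step looks like depends entirely on what was deleted, but as a sketch (the table and column names here are made up for the example):

-- Copy the deleted rows back from the restored copy (hypothetical table and columns)
INSERT INTO MyDatabase.dbo.Orders (OrderID, CustomerID, OrderDate)
SELECT BU.OrderID, BU.CustomerID, BU.OrderDate
	FROM MyDatabase_BU.dbo.Orders AS BU
	WHERE NOT EXISTS (SELECT *
			FROM MyDatabase.dbo.Orders AS O
			WHERE O.OrderID = BU.OrderID)

If the key column is an IDENTITY column, you'll also need to wrap the insert in SET IDENTITY_INSERT ... ON/OFF.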

Main Database Server Upgrade 2

We are now finished upgrading. Because of a missing SCSI controller we couldn't set up all the disks the way we wanted, but as soon as the controller is up and running, tempdb and the LDF will be moved to separate arrays. The new server has double the number of CPUs and double the memory, and the disks it's running on are faster. We did some quick performance tests, and it looks VERY promising, although we'll have to wait till Monday to see the actual performance (since we were the only users logged on).

We did some other minor things as well, like adding more memory to TSQLDEV1 and adding a new DHCP range.

For server monitoring we're using a tool called "What's up". 30 seconds after we unplugged the old server, all of us (me, JanCB, Leif and Åsmund) got an SMS saying it was down. We've got monitoring on ALL servers in our network. Nice to see that it's working 🙂

We also have Securitas monitoring our main server room, in case the power shuts down. To test this, we used a hair dryer on the sensor. The sensor is set up to alert Securitas if the temperature exceeds 40 degrees Celsius. We heated it up to 42, and 5 minutes later JanCB got a phone call from Securitas saying our server room was overheated! Before we got this monitoring, we only had "What's up" to notify us. But what happens if the power shuts down? Well, all servers run on UPSes. After some time the batteries in the UPSes are empty, so the servers shut down. What happens then? The "What's up" server is down too, so it can't notify us. One extra cool "feature" we didn't know about was that after we heated the sensor to 40 degrees, the alarm in the building went off. We also got messages from another monitoring system saying the noise level in the server room was too high 😛 SWEEEEEET!

After everything was set up, we did some tests: Teamdoc, internal applications, MarketMaker etc. Everything seems to be working. There was one SP causing trouble, though: sstp_Database_Maintenance_RefreshAllViews. This works perfectly in 2005, but in 2008 I had to add ROLLBACK TRANSACTION inside the catch block. Without it, it "thought" that all views failed, for some weird reason. I'll read more about it tomorrow. Have to get some sleep now. Here's the code:

DECLARE @ViewName NVARCHAR(128)
DECLARE @ViewsToRefresh TABLE (
	ViewName NVARCHAR(128),
	TSQLCommand NVARCHAR(MAX) NULL
)

-- Collect all AppFrame views (by prefix) whose metadata should be refreshed
INSERT INTO @ViewsToRefresh (ViewName)
SELECT [name]
	FROM sys.sysobjects
	WHERE LEFT([name], 4) IN ('atbl', 'stbl',
				'arpt', 'atbv', 'atbx', 'aviw',
				'stbv', 'stbx', 'sviw')
	  AND xtype='V'
	ORDER BY [name]

-- Refresh one view at a time, removing it from the work list afterwards
WHILE EXISTS (SELECT * FROM @ViewsToRefresh)
BEGIN
	SELECT TOP 1
	  @ViewName = ViewName
		FROM @ViewsToRefresh

	BEGIN TRY
		EXEC sp_refreshview @ViewName
	END TRY
	BEGIN CATCH
		-- On SQL Server 2008 this ROLLBACK is needed; without it, every
		-- following view also appeared to fail (see above)
		ROLLBACK TRANSACTION
		PRINT 'Error refreshing ' + @ViewName
	END CATCH

	DELETE
		FROM @ViewsToRefresh
		WHERE ViewName = @ViewName
END

Main Database Server Upgrade

As many of you probably already know, we're going to upgrade our main database server this weekend. We're already starting preparations tomorrow, and boy am I looking forward to it! The "old" TVS2 is going to be our new server. Until now it has been the technology department's main virtual host server. It's running 2 x quad-core processors at 2.66GHz each, with 16GB RAM. We will run Windows Server 2008, and of course SQL Server 2008! The new disk setup will be as follows:

Name | Function         | RAID    | Disks                  | Total capacity
C:\  | Operating system | RAID 1  | 2 x 146GB 15k RPM SAS  | 146GB
D:\  | Backup databases | RAID 10 | 4 x 300GB 10k RPM SAS  | 600GB
E:\  | MDF file         | RAID 10 | 14 x 146GB 15k RPM SAS | 1022GB
F:\  | LDF file         | RAID 10 | 12 x 36GB 10k RPM SCSI | 216GB
G:\  | tempdb           | RAID 1  | 2 x 36GB 10k RPM SCSI  | 36GB

We discussed this disk setup with Jon Jahren, and he "approved" it. His only comment was that to get absolute maximum performance, we should consider SSDs. Because of the price (SSDs cost more than double what SAS disks do), and the fact that we already had SAS, we didn't go for them. At first we were thinking of splitting the MDF into several files placed on several arrays, but according to Jon we would get more trouble administering it than we would gain in performance, even though the SQLCAT team (Customer Advisory Team) says this is "best practice" for performance. We trust Jon, since he's been helping companies much bigger than us with the same problems.

We are starting to prepare the new server tomorrow. Copying backups etc. will take some time, even though we've got a 1Gb network between the servers. The downtime window is set from 21:00 to 09:00 (12 hours) on Saturday, but if everything goes as planned we'll be finished before I usually go to bed 😉 I won't say when!

We are also doing some other minor upgrades. Among them is adding more disks to the server which is running PIMS_R3.

I’ve got a little competition for you (we LOVE competitions in tech). First one to answer gets a prize!
Who is this?

Replication in SQL Server

On Tuesday we had pizza & learning in Ølen. I held the session, and the topic was replication in SQL Server. We even had guests from a company called Quantum Solutions. Some of you might know Roar or Erik from before.
Replication is a BIG topic. We spent about three hours on it, but we could have spent three years. It's HUGE! In addition to replication, I covered Database Snapshots, Database Mirroring and Log Shipping, because I feel these are very relevant for Omega.

Database Snapshots
This is a "point in time" snapshot of a database. The purpose is to have a read-only database to run reports against, or a quick form of backup. Say, for instance, you are going to make significant changes to a table, deleting 100,000 rows or similar. As insurance you take a backup of the database before you run the query. If something fails, you have to restore the database. If we're talking about AppFrameR2 (Omega's main database), backup plus restore takes about 1.5 hours. For 1 delete statement?! Instead, you could create a snapshot before you run the query, and if the query fails, you can just revert to the snapshot. When you make changes to a database with a snapshot attached, the changes are only made in the main database; the original versions of the changed pages are copied to the snapshot. This is why it's called a point in time snapshot. If you want to be able to revert, you can only have ONE snapshot; otherwise, you can have several.
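
A minimal sketch of how that looks, using AppFrameR2 as the example (the snapshot name and file path are made up, and the logical name in the ON clause has to match the data file's logical name in the source database):

-- Create a point in time snapshot before running the risky statement
CREATE DATABASE AppFrameR2_BeforeDelete
ON (NAME = AppFrameR2_Data, FILENAME = 'C:\SQL\Snapshots\AppFrameR2_BeforeDelete.ss')
AS SNAPSHOT OF AppFrameR2

-- If the statement went wrong, revert the whole database to the snapshot
RESTORE DATABASE AppFrameR2
	FROM DATABASE_SNAPSHOT = 'AppFrameR2_BeforeDelete'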

Database Mirroring
Database mirroring is exactly what the name says: you have one database with a mirror database. The mirror database is not accessible until you switch the roles of the databases. There are three roles in database mirroring: principal, mirror and witness. The principal is the main database, where all users are connected and all changes are made. The mirror is a "slave" database which receives transactions from the principal. The purpose of this is failover. If you have a witness server, you can switch the roles: the principal becomes the mirror, and vice versa. You can then take down the old principal (now mirror), run upgrades, add hardware etc., without the database actually going down. When that server is up and running again, it will be synchronized with the running principal (the old mirror), and when that's done you can switch the roles back again. To be able to read from a mirror, you have to create a snapshot of it. It is not possible to access the mirror directly, not even through right-click and Properties in Management Studio.
The witness server is used for automatic failover. If the principal is not accessible (hardware crash or similar), the witness will promote the mirror to principal until the old principal is up and running again. To use database mirroring you must have two Standard (or Enterprise) editions of SQL Server; Developer does not work. The witness may run as Express Edition, though. For communication between the servers, database mirroring uses endpoints; there's a small sketch of the setup after the list below.
There are three modes in database mirroring: High Availability, High Performance and High Protection.

  • High Availability commits transactions synchronously: a transaction commits on the principal only AFTER it has committed on the mirror, and if it rolls back on the mirror, it is not applied on the principal. This of course takes a bit more time than running a transaction against only one database.
  • High Performance does the opposite: it commits the transaction on the principal, and then sends it asynchronously to the mirror.
  • High Protection is the same as High Availability, except that it does not use a witness server, so failover must be done manually.
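
To give an idea of what the endpoint setup looks like, here's a minimal sketch (the server names, the port and relying on Windows authentication between the service accounts are assumptions, and the mirror database must already be restored WITH NORECOVERY):

-- On both the principal and the mirror (and the witness, if you use one):
CREATE ENDPOINT MirroringEndpoint
	STATE = STARTED
	AS TCP (LISTENER_PORT = 5022)
	FOR DATABASE_MIRRORING (ROLE = ALL)

-- On the mirror server first (its copy of the database is restored WITH NORECOVERY):
ALTER DATABASE MyDatabase SET PARTNER = 'TCP://principalserver.mydomain.local:5022'

-- Then on the principal server:
ALTER DATABASE MyDatabase SET PARTNER = 'TCP://mirrorserver.mydomain.local:5022'

-- Manual failover (run on the principal) when you want to switch the roles:
ALTER DATABASE MyDatabase SET PARTNER FAILOVER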

Log Shipping
Log shipping is a bit like database mirroring, except it doesn't ship every single transaction over as it happens. In log shipping the roles are called primary, secondary and monitor. The primary server takes a backup of the transaction log and copies it to a network share. The secondary server (you can have multiple secondaries) copies it from the network share to a local folder and then restores the transaction log. The monitor's responsibility is to keep track of the process and raise alerts when it fails.
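
Stripped down to the T-SQL the jobs actually run, it boils down to something like this (the share, folder and file names are just examples; in practice this is set up as SQL Server Agent jobs through the log shipping wizard):

-- On the primary: back up the transaction log to a network share
BACKUP LOG [MyDatabase]
	TO DISK = N'\\fileserver\LogShipping\MyDatabase_1200.trn'

-- On the secondary (after the copy job has fetched the file): restore it,
-- keeping the database in a restoring state so the next log can be applied
RESTORE LOG [MyDatabase]
	FROM DISK = N'C:\LogShipping\MyDatabase_1200.trn'
	WITH NORECOVERY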

Replication
Now over to what Microsoft calls replication. There are three roles: Publisher, Distributor and Subscriber. The publisher is, in most cases, the only place where you can do data modifications. This is the primary server. The distributor is a server which takes care of distributing changes to the subscribers, and a subscriber is a database that receives copies from the publisher (via the distributor). In some cases the subscriber is read-only. The reason for this is that many companies need several read-only databases for reporting etc.
There are two subscription types: push and pull. Push should be used when there is a stable, permanent connection; the distributor pushes the data TO the subscriber. Pull should be used when the connection is not stable; the subscriber then pulls the data FROM the distributor.
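
As a rough sketch, a transactional publication with a push subscription can be set up in T-SQL like this (this assumes the distributor is already configured, and the database, publication, table and server names are made up):

-- Run in the publication database on the publisher: enable it for publishing
EXEC sp_replicationdboption
	@dbname = N'MyDatabase',
	@optname = N'publish',
	@value = N'true'

-- Create a transactional publication
EXEC sp_addpublication
	@publication = N'MyDatabase_Pub',
	@status = N'active'

-- Add a table (an "article") to the publication
EXEC sp_addarticle
	@publication = N'MyDatabase_Pub',
	@article = N'Orders',
	@source_object = N'Orders'

-- Add a push subscription pointing at the subscriber
EXEC sp_addsubscription
	@publication = N'MyDatabase_Pub',
	@subscriber = N'SUBSCRIBERSERVER',
	@destination_db = N'MyDatabase_Replica',
	@subscription_type = N'Push'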

There are four types of replication: Snapshot, Transactional, Peer-to-Peer and Merge.

  • Snapshot replication copies complete datasets and does not use the transaction log. It should only be used for small amounts of fairly static data.
  • Transactional replication uses the transaction log and sends "bulks" of it to the subscribers.
  • Peer-to-Peer is the same as transactional, except that all servers are publishers, distributors AND subscribers. This means that if you have 4 servers, each of them will send its transactions to the other three.
  • Merge replication is like snapshot replication, except that all servers allow modifications. All servers are then both publishers and subscribers.