Blog ala Vidar

SQL, AppFrame and other cool technologies

Tag Archives: SQL Server

SQL Server Licensing

Occasionally I get questions related to licensing, which is boring but necessary to know a bit about. There are many resources out there to help, but some might be more confusing than others. Therefore I’m posting this to give you a quick overview of how it works.

Two licensing types

Processor License; For physical servers you need one license per physical CPU (socket). If you buy two quad-core CPUs, that’s two physical CPUs, not eight. In other words, you need 2 processor licenses. For a virtual server it’s a bit different: there you need one license per virtual CPU assigned to the virtual machine. If you have a virtual host machine with two quad-cores, running 4 virtual servers which each have access to 2 cores, and each of those virtual servers is running SQL Server, you need 4 * 2 = 8 processor licenses. Processor licensing is recommended if you don’t know how many users or devices are going to connect to your server(s), or if the number of connected users/devices is so high that it’s less expensive to buy processor licenses.

CALs; Using CAL licensing you first need a server software license, and then one CAL per user or device connecting. NB! This is not concurrent users or devices. And it counts physical users/devices, not the number of SQL Server logins on your server.

Pricing

Edition      Per Processor   Server Software   CAL
Enterprise   $27,495         $8,592            $164
Standard     $7,171          $898              $164

Editions

There are many editions of SQL Server, as most of Microsoft’s server products. Here are the most common ones.

  • Compact Edition; Mostly used in embedded systems to help with caching data, syncing or similar. Limited to 4 GB. Completely free, but it isn’t a full server edition.
  • Express Edition; Used for small databases with few users. Free, but limited to 1 CPU, 1 GB RAM and 10 GB (data file, not counting the log file).
  • Developer Edition; Same as Enterprise, no physical (or feature) limitations, but license limits you to development and testing only. $27 per developer, unlimited computers.
  • Standard Edition; Most common in medium size companies. Limited to 4 CPUs and 64 GB RAM. Some feature limitations. Pricing: see table above.
  • Enterprise Edition; Most common in big size companies. Limited to 8 CPUs and 2 TB RAM. Pricing: see table above.
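If you’re not sure which edition a given instance is running, SQL Server can tell you itself:

```sql
-- Check edition and version of the current instance
SELECT
  SERVERPROPERTY('Edition') AS Edition,
  SERVERPROPERTY('ProductVersion') AS ProductVersion,
  SERVERPROPERTY('ProductLevel') AS ProductLevel  -- RTM, SP1, ...
```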

To see which features are in each edition, please visit Microsoft’s “Edition Compare” at http://www.microsoft.com/sqlserver/2008/en/us/editions-compare.aspx

Example

A world-wide customer needs a SQL Server to run PIMS and other systems. They have already bought a server with 4 quad-core CPUs and put it in their in-house data center. All their offices are connected to their HQ with very good connections, so there is no need for replication or similar. This customer has about 1,000 employees, each connecting to one or more of the systems on their new SQL Server. Enterprise Edition is probably the one you want to go for.

Server + CALs
License   Price per   #       Total
Server    $8,592      1       $8,592
CALs      $164        1,000   $164,000
Total                         $172,592

Per Processor
License     Price per   #   Total
Processor   $27,495     4   $109,980


As you see in the tables, 4 processor licenses are considerably cheaper than a server license plus 1,000 CALs. In addition, you don’t have to worry about the customer adding new employees to the systems. Be aware though that the server license is per OSE (Operating System Environment), so if you run multiple virtual Windows Servers with SQL Server installed on each of them, you need one server license per virtual machine.
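The break-even point between the two models is simple arithmetic. A small sketch using the list prices above and the 4-CPU server from the example (the numbers are the ones from the tables, nothing else assumed):

```sql
-- Break-even number of CALs for the 4-CPU Enterprise example
DECLARE @ProcessorTotal MONEY = 4 * 27495.00  -- 4 processor licenses
DECLARE @ServerLicense  MONEY = 8592.00       -- server software license
DECLARE @CALPrice       MONEY = 164.00        -- per user/device

-- Above this many users/devices, processor licensing is the cheaper model
SELECT CEILING((@ProcessorTotal - @ServerLicense) / @CALPrice) AS BreakEvenCALs
```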

If you want to read more details visit Microsoft’s SQL Server information pages:
http://www.microsoft.com/sqlserver/2008/en/us/licensing-faq.aspx#licensing
http://www.microsoft.com/sqlserver/2008/en/us/pricing.aspx
http://www.microsoft.com/sqlserver/2008/en/us/editions.aspx
http://www.microsoft.com/sqlserver/2008/en/us/editions-compare.aspx


Unused Indexes

Two weeks ago, I held a session called “Hardcore SQL” at AppEd where I (among other things) said that you should always start by removing unused indexes before adding missing ones. You can find my blog post about missing indexes here. So, how do you find unused indexes?

SELECT
  O.name AS ObjectName, I.name AS IndexName,
  user_seeks + user_scans + user_lookups AS Reads,
  user_updates AS Writes, P.rows AS RowsNum
    FROM sys.dm_db_index_usage_stats AS IUS
    INNER JOIN sys.indexes AS I ON I.index_id = IUS.index_id AND IUS.object_id = I.object_id
    INNER JOIN sys.partitions AS P ON P.index_id = IUS.index_id AND IUS.object_id = P.object_id
    INNER JOIN sys.objects AS O ON IUS.object_id = O.object_id
    WHERE O.type = 'U'
      AND IUS.database_id = DB_ID()
      AND I.type_desc = 'nonclustered'
      AND I.is_primary_key = 0
      AND I.is_unique_constraint = 0
      AND P.rows > 10000
    ORDER BY Reads, RowsNum DESC
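Once you’ve verified an index really is dead weight, you can generate the cleanup statements. A minimal sketch building on the same DMVs; note the LEFT JOIN, which also catches indexes that have never been touched at all (those don’t show up in the usage stats, so the INNER JOIN query above misses them):

```sql
-- Generate (don't auto-run!) DROP INDEX statements for indexes with zero reads.
-- sys.dm_db_index_usage_stats resets on instance restart, so only trust this
-- after the server has been up for a representative period.
SELECT
  'DROP INDEX [' + I.name + '] ON [' + SCHEMA_NAME(O.schema_id) + '].[' + O.name + ']' AS DropStatement
    FROM sys.indexes AS I
    INNER JOIN sys.objects AS O ON O.object_id = I.object_id
    LEFT JOIN sys.dm_db_index_usage_stats AS IUS
        ON IUS.object_id = I.object_id
       AND IUS.index_id = I.index_id
       AND IUS.database_id = DB_ID()
    WHERE O.type = 'U'
      AND I.type_desc = 'NONCLUSTERED'
      AND I.is_primary_key = 0
      AND I.is_unique_constraint = 0
      AND ISNULL(IUS.user_seeks + IUS.user_scans + IUS.user_lookups, 0) = 0
```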

Btw, I’ve got a Live Mesh folder with SQL scripts. If anyone wants access to it (and maybe wants to add scripts) you know how to contact me.

Missing Indexes

In SQL Server 2005, Microsoft introduced Dynamic Management Views and Functions (DMVs/DMFs). These provide real-time data about indexes, hardware, query usage and many other things. Using them, you can actually get suggested indexes from SQL Server itself: indexes that, in its opinion, would improve performance if you implemented them. So, why aren’t these generated automatically? The simple answer is disk usage. If you implemented all of them, you’d end up with A LOT of indexes, and therefore a much bigger database. So, which should you implement?

SELECT
  DENSE_RANK() OVER (ORDER BY MIGS.avg_user_impact DESC, MIG.index_handle) AS IndexRank,
  RIGHT(MID.statement, CHARINDEX('[', REVERSE(MID.statement))) AS table_name,
  MIC.column_id, MIC.column_name, MIC.column_usage,
  MIGS.unique_compiles, MIGS.user_seeks,
  MIGS.avg_total_user_cost, MIGS.avg_user_impact
    FROM sys.dm_db_missing_index_details AS MID
    CROSS APPLY sys.dm_db_missing_index_columns (MID.index_handle) AS MIC
    INNER JOIN sys.dm_db_missing_index_groups AS MIG
        ON MIG.index_handle = MID.index_handle
    INNER JOIN sys.dm_db_missing_index_group_stats AS MIGS
        ON MIG.index_group_handle = MIGS.group_handle
    WHERE statement LIKE '%atbl%'
      AND MIGS.unique_compiles > 10
    ORDER BY
      MIGS.avg_user_impact DESC,
      MIG.index_handle,
      CASE
        WHEN column_usage = 'EQUALITY' THEN 1
        WHEN column_usage = 'INEQUALITY' THEN 2
        ELSE 3
      END,
      column_id

Using this query, you get the suggested indexes, with their columns, ordered by the average user impact. You should consider those that have a relatively high avg_user_impact and/or avg_total_user_cost.
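To turn one of the suggestions into an actual index: EQUALITY columns go first in the key, then INEQUALITY columns, and INCLUDE-usage columns go into the INCLUDE list. A hypothetical example (the table and column names are made up for illustration, assuming the query above suggested two key columns and one included column):

```sql
-- EQUALITY columns first, then INEQUALITY, INCLUDED_COLUMN usage in INCLUDE
CREATE NONCLUSTERED INDEX IX_atblExample_Status_Created
ON atblExample (Status, Created)
INCLUDE (Title)
```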

Backup Stored Procedure

I’ve had a couple of issues with the maintenance plan tool in Management Studio (at least before 2008), and therefore I’ve never really put my trust in it. Instead, I’ve created an SP which is now available as an s-object in AppFrame. It will be available with the next build, but if you need it NOW, here it is:

This will take backups (full, differential or transaction log) using default settings, saving files with the following name syntax:
DatabaseName_backup_201006231500.bak

CREATE PROCEDURE sstp_Database_Maintenance_Backup
(
	@BackupType AS CHAR(4) = 'FULL',
	@BackupPath AS NVARCHAR(300)
)
AS

IF @BackupType NOT IN ('FULL', 'DIFF', 'TRAN')
BEGIN
	RAISERROR('BackupType must be set to FULL, DIFF or TRAN!', 18, 1)
	RETURN
END
ELSE IF ISNULL(@BackupPath, '') = ''
BEGIN
	RAISERROR('BackupPath does not have any value!', 18, 1)
	RETURN
END
ELSE IF RIGHT(@BackupPath, 1) <> '\'
BEGIN
	SET @BackupPath = @BackupPath + '\'
END


DECLARE
  @DBNAME AS NVARCHAR(128),
  @Filename AS NVARCHAR(128),
  @SQL AS NVARCHAR(MAX)


SET @DBNAME = (SELECT DB_NAME())
SET @Filename =
	@DBNAME +
	'_backup_' +
	CONVERT(VARCHAR(8), GETDATE(), 112) +
	REPLACE(CONVERT(VARCHAR(5), GETDATE(), 108), ':', '')

IF @BackupType = 'FULL'
BEGIN
	SET @Filename = @Filename + '.bak'
	SET @SQL = 'BACKUP DATABASE [' + @DBNAME + '] '
END
ELSE IF @BackupType = 'DIFF'
BEGIN
	SET @Filename = @Filename + '.diff'
	SET @SQL = 'BACKUP DATABASE [' + @DBNAME + '] '
END
ELSE IF @BackupType = 'TRAN'
BEGIN
	SET @Filename = @Filename + '.trn'
	SET @SQL = 'BACKUP LOG [' + @DBNAME + '] '
END

SET @SQL = @SQL + 'TO DISK = N''' + @BackupPath + @Filename + ''' WITH '

IF @BackupType = 'DIFF'
BEGIN
	SET @SQL = @SQL + 'DIFFERENTIAL, '
END

SET @SQL = @SQL + 'NOFORMAT, INIT, NAME = N''' + @DBNAME + ' ' + @BackupType
SET @SQL = @SQL + ' Database Backup'', SKIP, NOREWIND, NOUNLOAD, STATS = 10'

EXEC(@SQL)

You can download the .sql file from skydrive.
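Calling it is straightforward. For example, to back up the current database to a local folder (the path is of course just an example):

```sql
-- Full backup of the current database
EXEC sstp_Database_Maintenance_Backup
	@BackupType = 'FULL',
	@BackupPath = N'D:\Backup'

-- Transaction log backup (requires the FULL or BULK_LOGGED recovery model)
EXEC sstp_Database_Maintenance_Backup
	@BackupType = 'TRAN',
	@BackupPath = N'D:\Backup'
```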

SQL Server 2008 R2

R2 has finally hit RTM (Release To Manufacturing). Is it worth upgrading? Always! But there aren’t many changes that aren’t related to the BI (Business Intelligence) part. Here are the keywords: better SQL Azure compatibility, support for 256(!!!) logical processors (2008 “only” supported 64) and SSMS dashboards.

They’ve also done a couple of things regarding licensing: Datacenter and Parallel Data Warehouse editions have been introduced. Oh yeah, and Express Edition will now support up to 10 GB databases, instead of 4 GB.

SQL Server Analysis Services

Today a couple of us in Ølen had the pleasure of attending a session with Jan Ove Halvorsen with SSAS (SQL Server Analysis Services) as topic. This was an introduction to SSAS, although most of it was heavy stuff. I’ve played with SSAS before, but have had problems understanding some of the basics. This might help you on the way.

What is SSAS?

SSAS is the OLAP part of SQL Server’s BI (Business Intelligence) offering. It’s also part of the SQL Server licensing, which means that if you own a SQL Server license, you also own an SSAS license. It’s recommended to install it on a separate server from SQL Server, because it’s heavy on RAM usage. To connect to it you use Visual Studio, Management Studio or an API, for instance ADOMD.NET.

SSAS Modes

There are three main modes in SSAS:

  • MOLAP – Multidimensional OLAP – Data, aggregations and structures are stored in OLAP
  • HOLAP – Hybrid OLAP – A mix of MOLAP and ROLAP
  • ROLAP – Relational OLAP – Uses data from SQL Server but stores some structural data in OLAP

This means that you don’t have to run against live data. For instance you can run MOLAP with monthly updates, using a SQL Server Agent job running every month (or similar), storing everything in SSAS. This makes it VERY fast to query. If you need more up-to-date data, you can run ROLAP, so you get live data all the time. This will not be as fast, and it will also put load on the SQL Server each time a query is executed.

The 2005 version brought a new feature to SSAS, enabling it to “subscribe” to updates on the SQL Server. So, if you’re running ROLAP, it will generate the cube and just use that local cube until there are updates in the underlying data (on the SQL Server). When there’s an update, SSAS gets a notification that there are changes, and it will rebuild the cube: either the whole cube, or just parts of it.

OLAP Cubes and Dimensions

In SSAS you’re browsing cubes. A cube is a “collection” of dimensions. A dimension is a set of data, in many cases with a one-to-one relation to a table (or similar) in SQL Server. This might for example be “Products”, “Cities” or “Countries”. You also must have a time dimension. All your data are then aggregated so that every possible combination of these (Products, Cities, Countries and Time) is stored as one “table”. This is what makes it so fast to query, because everything’s “prejoined”.

Hierarchy

You can create hierarchies in SSAS. The most common is the Year –> Month –> Day hierarchy. This enables you to group by year, and just by clicking expand, you’ll get group by month in your client without having to get new data from the server.

Creating cubes

There are several ways of creating cubes, dimensions and other stuff. You can either use Visual Studio, Management Studio, or you can use a query language. This can be done using MDX (MultiDimensional eXpressions) which looks a lot like Transact SQL, but is also VERY different. An example could be:

SELECT
  { [Measures].[Store Sales] } ON COLUMNS,
  { [Date].[2002], [Date].[2003] } ON ROWS
	FROM Sales
	WHERE ( [Store].[USA].[CA] )

In this example Sales is the name of the cube. Store is one of their dimensions.

Handling strings in SQL Server

In .NET, if you have a string it’s:
VB: Dim vMyString As String
C#: string vMyString;

In SQL Server you’ve got more than one choice: varchar, nvarchar (with or without MAX), char, nchar, text and ntext. That’s why SQL Server is better than .NET. No, just kidding. When you’ve got a String in .NET, what’s the equivalent? nvarchar. If you try to put a varchar into .NET, you’ll end up with an nvarchar (since the text is converted to Unicode).

SQL Server is built to store large amounts of data and retrieve it faster than lightning. It’s not built to handle strings, but sometimes you need to do that on the SQL Server anyway. That’s when functions like SUBSTRING, LEFT, RIGHT, REVERSE and PARSENAME are useful! The last one, PARSENAME, has saved me A LOT of time. Its purpose is to retrieve table names, schema names etc. from a string like “Server.Database.Schema.Table”. If you call PARSENAME with the second parameter set to 1, you’ll always get the Table part; 2 will always give you the Schema, and so on. Now, how can we use this for something other than retrieving object names?

DECLARE @Name AS NVARCHAR(100)

SET @Name = 'Nordnes, Vidar Stangeland'
SET @Name = REPLACE(@Name, ',', '.')

SELECT
  PARSENAME(@Name, 1) AS FirstName,
  PARSENAME(@Name, 2) AS LastName

The @Name variable contains a name where you’ve got “LastName, FirstName (MiddleName)”. Replacing comma with a period enables you to use PARSENAME. The cool thing about this is that it returns NULL if it doesn’t have any values. If you didn’t use PARSENAME, you’d have to do something like:

DECLARE @Name AS NVARCHAR(100)

SET @Name = 'Nordnes, Vidar Stangeland'

SELECT
  CASE WHEN CHARINDEX(',', @Name) > 0
	THEN SUBSTRING(@Name,
			CHARINDEX(',', @Name) + 2,
			LEN(RIGHT(REVERSE(@Name),
				CHARINDEX(',',
					REVERSE(@Name)))))
	ELSE @Name
  END AS FirstName,
  CASE WHEN CHARINDEX(',', @Name) > 0
	THEN LEFT(@Name, CHARINDEX(',', @Name) - 1)
	ELSE @Name
  END AS LastName

WAY more time consuming, for something that should be pretty easy. REVERSE is also a great function to know about. As are REPLICATE, STUFF, ISNULL (or COALESCE), LTRIM and RTRIM.
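A quick illustration of two of those, in case you haven’t used them (the values are just made up):

```sql
-- STUFF deletes a stretch of characters and inserts another string;
-- REPLICATE repeats a string. Handy for e.g. masking or zero-padding.
SELECT
  STUFF('1234567890', 1, 6, REPLICATE('*', 6)) AS MaskedCardNo,  -- ******7890
  RIGHT(REPLICATE('0', 8) + '42', 8) AS ZeroPadded               -- 00000042
```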

While I’m at it, “asdf “ (with trailing space) isn’t always the same as “asdf” (without trailing space). Just take a look at this example:

SELECT '1' WHERE 'asdf' = ' asdf'
UNION ALL
SELECT '2' WHERE 'asdf' = 'asdf '
UNION ALL
SELECT '3' WHERE 'asdf ' = ' asdf'
UNION ALL
SELECT '4' WHERE 'asdf ' = 'asdf'
UNION ALL
SELECT '5' WHERE 'asdf' LIKE ' asdf'
UNION ALL
SELECT '6' WHERE 'asdf' LIKE 'asdf '
UNION ALL
SELECT '7' WHERE 'asdf ' LIKE ' asdf'
UNION ALL
SELECT '8' WHERE 'asdf ' LIKE 'asdf'

Before you run the code, think about what you THINK will be returned. 1-4 are the same as 5-8 except for = vs LIKE. My first guess was 0 records. My second guess was 2 and 6. Neither of my guesses was correct though. (Hint: with = the shorter string is padded with trailing spaces before comparing, while LIKE treats trailing spaces as significant.)

Remember also that an nvarchar(MAX) field can contain a huge amount of data. Returning this to the client can be very slow, especially if you include the same content in several fields (for instance one field returning HTML data and one without the HTML formatting), or, as we often see here in Omega, a SearchColumn which is used by our framework for filtering. This field normally consists of all the (n)varchar fields in the table.

Incremental for each domain

I’m currently creating a new data model for a customer. They’ve got an Elements table with a field called “Num” (Number) which is supposed to be incremental for each domain. How do you solve that? Well, you could create a UDF and set the default value of the Num field to it, but then you don’t support inserting more than one row at a time. So, how did I solve it? There are (at least) two ways. You either create an INSTEAD OF trigger, or do as I did. Here’s an example:

CREATE TABLE MyElements (
	Domain NVARCHAR(128),
	Num INT,
	PrimKey UNIQUEIDENTIFIER
)
GO

ALTER TABLE MyElements
	ADD CONSTRAINT DF_MyElements_PrimKey
	DEFAULT (NEWSEQUENTIALID())
	FOR PrimKey
GO

ALTER TABLE MyElements
	ADD CONSTRAINT DF_MyElements_Num
	DEFAULT (RAND(CONVERT([INT], CONVERT([VARBINARY](36), NEWID(), 0), 0))*(100000))
	FOR Num
GO

CREATE TRIGGER MyElements_ITrig
ON MyElements
FOR INSERT
AS

WITH MyCTE AS
(
	SELECT
	  PrimKey, Num,
	  ROW_NUMBER() OVER(PARTITION BY Domain ORDER BY PrimKey) AS RowID
		FROM MyElements AS E
)

UPDATE E
	SET E.Num = C.RowID
	FROM MyElements AS E
	INNER JOIN Inserted AS I ON E.PrimKey = I.PrimKey
	INNER JOIN MyCTE AS C ON I.PrimKey = C.PrimKey AND C.Num <> C.RowID

First we create a table. Notice that the PrimKey field has NEWSEQUENTIALID() as its default, and that the Num field has a default value with some cool random functionality (a throwaway value; the trigger overwrites it right away). Then we create the trigger, where a CTE assigns ROW_NUMBER() partitioned by Domain and ordered by PrimKey, and finally we update our table. Try it yourself!

INSERT INTO MyElements
  (Domain)
SELECT TOP 1000
  'Omega'
	FROM sys.syscomments

Azure

What does Azure mean? Blue sky. But what does Microsoft mean by it? You might have heard of “the cloud”: storing data in the cloud etc. It’s a very popular term nowadays. The point is that your data (or applications) are stored “in the cloud”, which means you don’t have to care about where they’re actually stored or executed. I just created a database in Microsoft’s SQL Azure. I have NO idea where this database is, but I can access it through Management Studio or similar. There are a lot of other things you can do “in the cloud”: store your pictures, have it do heavy calculations etc. Some have been dreaming of having a screen (TV like) where all the actual computing is done “in the cloud”. I think it’s really interesting, because most of our computers are idling most of the time anyway.

So, why am I writing about this? Arild posted a blog about the SQL Azure CTP earlier, which started a discussion about whether we should support it. That led me to try to make AppFrame “Azure compatible”. After a couple of hours, I’ve made my decision: if we are to go “Azure”, we need to do it from scratch. There are so many things not supported by SQL Azure that are, imho, show stoppers. For starters, you cannot really connect to it using Management Studio. Correction, you CAN, but you don’t get the object browser, and it’s very limited. The list of unsupported features is long. That in itself isn’t a big problem, because we don’t use most of them, but here’s a list of those we do use, with a short description.

  • SUSER_SNAME() – This is used EVERYWHERE. Literally! Triggers, views, defaults etc.
  • NEWSEQUENTIALID() – Can be replaced by NEWID(), but you lose performance.
  • WITH (NOLOCK) – Can be removed, but that can lead to performance issues and lock problems.
  • FullText Indexing – Can of course be replaced by normal WHERE clauses, but this will be a performance issue.

These are the biggest problems I’ve found so far, and I’ve only scripted tables and views. In addition, you don’t have sysobjects, syslogins, sysusers etc. Making AppFrame Azure compatible IS possible, but in my mind it would be just as time consuming as making it MySQL compatible. I’m sure Microsoft will add more features. This is only the CTP, so I hope they’ll at least add SUSER_SNAME and NEWSEQUENTIALID. The others we CAN live without.

So, to summarize: should we be thinking of “going to the cloud”? No, at least not yet. I see it as a MySQL replacement. It’s great for small applications, but when you want to do real enterprise stuff, you run SQL Server Enterprise Edition 🙂 One other thing to keep in mind is: where is my data stored? All companies in the US need to be SOX compliant, and this might be a problem for Azure. Don’t get me wrong, I’m positive about Azure, but not as an enterprise database!

If you want to test out Azure SQL, go to http://connect.microsoft.com and sign up for Azure SQL.

NB! We’ve got two new bloggers (from TGE). Looking forward to their postings 🙂
Tim Bosinius
Florian Biermaier

Comma Separated List in SQL

What do you do if you have some records, for example users, and you want to return them as a comma separated list? Most of you would probably do something like this:

SET NOCOUNT ON

CREATE TABLE MyTable (
	Login NVARCHAR(128)
)

INSERT INTO MyTable
  (Login)
VALUES
  ('Vidar'), ('Johnny'), ('Peter'), ('JanCB'), ('Georgij'), ('Tomas'), ('Torgeir')

DECLARE @Logins AS NVARCHAR(MAX)
DECLARE @Login AS NVARCHAR(128)

SELECT Login
	INTO #MyLogins
	FROM MyTable

WHILE EXISTS (SELECT * FROM #MyLogins)
BEGIN
	SELECT TOP 1
	  @Login = Login
		FROM #MyLogins

	IF @Logins IS NULL
	BEGIN
		SET @Logins = @Login
	END
	ELSE
	BEGIN
		SET @Logins = @Logins + ', ' + @Login
	END

	DELETE
		FROM #MyLogins
		WHERE Login = @Login
END

DROP TABLE #MyLogins
DROP TABLE MyTable

PRINT @Logins

Yes, I know. Creating two tables etc. is not best practice… but you get my point: looping through a record set and appending a field to a variable. But why not just do:

SET NOCOUNT ON

CREATE TABLE MyTable (
	Login NVARCHAR(128)
)

INSERT INTO MyTable
  (Login)
VALUES
  ('Vidar'), ('Johnny'), ('Peter'), ('JanCB'), ('Georgij'), ('Tomas'), ('Torgeir')

DECLARE @Logins AS NVARCHAR(MAX) = ''

SELECT @Logins = @Logins + Login + ', '
	FROM MyTable

DROP TABLE MyTable

PRINT SUBSTRING(@Logins, 1, LEN(@Logins) - 1)

This works in SQL Server 2005 too. You just have to SET @Logins = '' after you declare it (2005 doesn’t support DECLARE with an initial value), and you need to split the INSERT into one statement per value (row constructors came in 2008).
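A word of caution: the variable-concatenation trick relies on undocumented behavior, especially if you add an ORDER BY. A commonly used, more robust alternative is the FOR XML PATH trick, which also lets you order the list safely:

```sql
-- Comma separated list via FOR XML PATH; STUFF removes the leading ', '
SELECT STUFF(
	(SELECT ', ' + Login
		FROM MyTable
		ORDER BY Login
		FOR XML PATH('')),
	1, 2, '') AS Logins
```

If the values can contain XML-special characters (&, <, >), you’d add the TYPE directive and a .value() call to get them back unescaped.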