This is a very popular topic in the IT industry nowadays. Some believe it’s because of the global warming “wave”, but I tend to think it’s more than just that. Let’s dive into it!
My first encounter with this technology was about 8-10 years ago. I was running Linux, but wanted the possibility of also using Microsoft applications without having to reboot. I installed VMware and created a virtual machine inside it running Windows. I also remember the IT administrator at my high school telling me that another high school was running Linux with multiple VMware virtual machines that ran Windows Server.
So, what is virtualization? According to Wikipedia, virtualization is “a broad term that refers to the abstraction of computer resources” and a virtual machine is “a software implementation of a machine (computer) that executes programs like a real machine”. I would describe it as making it possible to utilize more of the physical hardware available. Virtualization is a broad term that also covers things like application virtualization (remote desktop-like solutions), but we’ll focus on hardware virtualization in this blog post.

In fact, most servers out there are idling most of the time. I’m not thinking of the servers hosting Microsoft.com, Google.com etc, but of all the others that aren’t that heavily used. In Omega we’ve got about 20-25 physical servers. What do they do? Domain controllers, DNS, mail, database, backup, web etc. Between 8:00 and 16:00 there’s some load on these servers, but how much of the actual hardware do you think is being used when most of Omega’s employees are at home? Why should they then use the same amount of power, and produce the same amount of heat, as when we’re actually working?

A couple of years ago we bought our first “hosting server”. If I remember correctly it had 8 CPU cores and 16 GB RAM. We installed VMware on it to run the test servers in the technology department. On it we had our own domain controller, web servers etc. After a while we bought several more “hosting servers”, and we have now converted all of them to Hyper-V, Microsoft’s virtualization platform. Today we have over 20 virtual servers. Most of them are test servers, like SQL, web, XP (to test IE6) etc, but also some production servers. There are several reasons why we do this:
- Reduce the cost of having to buy multiple servers.
- Less power consumption, which leads to less heat and lower electrical bills.
- Less administrative work: if a virtual server crashes, we can log on to the physical host and restart it remotely, instead of driving to Ølen in the middle of the night to reboot a machine.
- Enables us to add more “juice” if needed.
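To put the power argument in perspective, here’s a rough back-of-envelope sketch in Python. The wattage figures and the “3 hosts” scenario are illustrative assumptions on my part, not measured numbers from our racks:

```python
# Back-of-envelope estimate of power savings from server consolidation.
# All figures below are illustrative assumptions, not real measurements.

HOURS_PER_YEAR = 24 * 365

def annual_kwh(num_servers, avg_watts_per_server):
    """Total energy drawn by a group of servers over one year, in kWh."""
    return num_servers * avg_watts_per_server * HOURS_PER_YEAR / 1000

# Assumed scenario: 20 mostly idle physical boxes at ~250 W each,
# consolidated onto 3 beefier virtualization hosts at ~500 W each.
before = annual_kwh(20, 250)   # the old physical servers
after = annual_kwh(3, 500)     # the consolidated hosts

print(f"Before: {before:.0f} kWh/year, after: {after:.0f} kWh/year")
print(f"Saved:  {before - after:.0f} kWh/year")
```

Even with generous assumptions about how power-hungry the hosting servers are, the consolidated setup draws a fraction of the energy — and that’s before counting the cooling you no longer need.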
You might think: “Isn’t this like putting all your eggs in one basket?” Well, yes and no. Of course, many of the servers are running on the same hardware, so if one physical host crashes we have a bigger problem than if a single physical server had crashed. But the quality of these hosting servers tends to be much better than that of cheaper servers, and we’re able to run RAID10 on all of them, so if one disk crashes, the server is still running. Also, Windows Server 2008 R2 has a new feature called “live migration” which lets you move virtual servers between the hosting servers.
So, what should run as a virtual server and what should still be on “bare metal” (a physical box)? SQL Servers with heavy load should not run as virtual servers, because of the heavy disk IO. SQL Server loves RAM; you should feed it like a hungry baby. AD controllers and Exchange servers should also run on physical boxes, but there’s no reason why, for example, web servers should run on physical servers. By the way, did you know that Omega runs its web servers only on virtual servers? TeamDoc, for example, has been running on a virtual server for several months.
I’ve had several discussions with people who are skeptical of virtual servers, and I do understand them, but remember one thing. Say you’ve ordered a 2-CPU server with 4 GB RAM, and then discover that it isn’t powerful enough. What do you do? Well, you probably have to order a NEW server, because 32-bit servers only support 4 GB RAM. You would have to go for a new motherboard, new CPUs, more RAM etc. What would you do if it was a virtual server? You’d call the administrator: “Could you add 2 GB more RAM to my server? And while you’re at it, add 2 more CPU cores.” The administrator would then go into his Hyper-V Manager, right-click the server and shut it down, right-click it again, open Settings, add more RAM and CPU, and start the server back up. It would take about 2 minutes, compared to ordering a new server from Dell, which would take about 2 weeks and cost A LOT more. When we create new virtual servers, we normally start with 1 CPU and 1 GB RAM. Then, if needed, we add more, and it only takes a couple of minutes. Shouldn’t that be the way to go?
By the way, did you know that most of Microsoft.com, TechNet and MSDN run on virtual servers?