For two unrelated projects I am looking into server virtualization. Both are for production systems and are not developer or consumer focused. Most of the conversations I’ve had about VMs so far have been in the context of software development and software testing, but I know there are many people out there who have successfully virtualized their production server environments. In talking this over with the people I’d be working with on these projects, here is a list of pros and cons we came up with based on what we’ve heard or read here and there. None of us are VM experts.
- Can set up the VMs so that data is on one drive and OS/apps are on another, with each virtual drive being a separate VHD (Virtual Hard Disk) file. With that, we can easily back up the data drive separately from the OS/app drive, and in the event of a major problem, we can restore one without the other. This can also be done with physical hardware, but we do not have access to the physical hardware.
- Can create multiple virtual servers. For example, we can put e-mail on its own server, SQL Server on its own, and web on its own. We can then run all three VMs on a single physical machine. If we tax the limits of the physical machine, moving one of the virtual machines to another physical machine is a simple file copy (for the most part).
- Backups and restores of entire servers or disks are file copies.
- If we have two physical servers, we can schedule regular backups from one to the other, and in the event of one physical server going bad, we can turn on all the VMs on the surviving physical server while repairing/replacing the first. Things would run more slowly, but at least they would be up.
- We can test in other environments, such as Linux/Apache/Mono in a virtual server without having to have new hardware.
- A problem with one virtual server will not affect the other servers.
- Adding more servers is easy. Make a copy of one and change a few settings.
- Takes up more disk space, as the OS and some apps are installed separately for each VM.
- I would guess that running all three VMs (from the example above) on a single real machine would be less performant than running the three services directly on the real server.
- Multiple licenses to OS/apps are needed. Multiple licenses = more $$$.
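The first pro in the list — separate VHD files for data and OS/apps so each can be backed up and restored independently — can be sketched as a small script. The paths and file contents here are temp-dir stand-ins (the real thing would point at something like `D:\VMs\MailServer`), just to show the shape of a data-only backup:

```python
# Sketch of the separate-VHD backup idea: data lives in one VHD file and
# OS/apps in another, so the data disk can be backed up (and restored) on
# its own. Paths are temp-dir stand-ins so this runs anywhere.
import shutil
import tempfile
from datetime import date
from pathlib import Path

vm_dir = Path(tempfile.mkdtemp())                 # stand-in for the VM folder
backup_dir = Path(tempfile.mkdtemp()) / str(date.today())
backup_dir.mkdir(parents=True)

# Stand-ins for the two virtual disks.
(vm_dir / "os.vhd").write_text("os image")
(vm_dir / "data.vhd").write_text("mailboxes")

# Back up only the data disk; the OS/app disk keeps its own, less frequent
# snapshot, so a data-only restore never touches the OS image.
shutil.copy2(vm_dir / "data.vhd", backup_dir / "data.vhd")
```

The same separation works in reverse: after a bad OS patch, you restore only `os.vhd` and the data disk is untouched.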
So based on what I have here so far, I have a few questions for my readers.
- What pros and cons have I missed? What pros and cons have I listed, but are incorrect, or have significant caveats?
- Can anyone provide any real world advice, info or data that would help us determine if, how and what we should virtualize?
- Are there some services that should not be virtualized? POP3 e-mail? Exchange Server? SQL Server? IIS? If so, why and under what conditions? Is it OK sometimes, but not in certain cases?
- How much does the load on one VM affect the host? What about the other VMs?
- What about the host server? Minimum hardware specs? Recommended hardware specs? How do I calculate what I need? Do I simply add the specs of the VMs to calculate the specs of the host?
- Which platform should we use: Microsoft Virtual Server or VMware Server?
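On the "do I simply add the specs of the VMs?" question, a straight sum is a reasonable starting point, though it ignores contention and peak-vs-average load. Here is a first-cut sizing sketch; the overhead and headroom figures are illustrative assumptions, not vendor guidance:

```python
# Naive host sizing: sum each VM's planned CPU and RAM, then add hypervisor
# overhead and headroom. The 1 GB overhead and 25% headroom are assumptions
# for illustration; real capacity planners also weigh disk and network I/O.
def host_specs(vms, host_overhead_gb=1.0, headroom=0.25):
    cpu_ghz = sum(v["cpu_ghz"] for v in vms)
    ram_gb = sum(v["ram_gb"] for v in vms) + host_overhead_gb
    return {
        "cpu_ghz": cpu_ghz * (1 + headroom),
        "ram_gb": ram_gb * (1 + headroom),
    }

# Hypothetical per-VM targets for the mail/SQL/web example above.
vms = [
    {"name": "mail", "cpu_ghz": 1.0, "ram_gb": 1.0},
    {"name": "sql",  "cpu_ghz": 2.0, "ram_gb": 2.0},
    {"name": "web",  "cpu_ghz": 1.0, "ram_gb": 1.0},
]
print(host_specs(vms))
```

Treat the result as a floor, not a spec sheet: the whole point of consolidation is that the VMs rarely peak at the same time, which is also why a simple sum tends to overestimate.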
If you have anything to add, please leave a comment here or contact me here or reply on Twitter. I know there is a ton of info out there, but since this is not my area of expertise, I’d prefer to hear from someone I know who knows — even if what they share is simply their approval/disapproval of another source of information.
First of all, you should move license cost from con to pro, because it can actually save you money. Win2003 Standard allows you to run 2 instances on one host at no extra cost, Enterprise allows 4, and Datacenter allows unlimited. So basically, if you buy one Enterprise license you can have 4 Win2003 servers running on one physical machine.
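Kent's licensing point turns into simple arithmetic. This sketch uses the per-edition instance counts quoted in his comment (Standard: 2, Enterprise: 4, Datacenter: unlimited per licensed host); verify them against your actual license terms before budgeting:

```python
# Licenses of a given edition needed to cover N guest VMs on one host,
# using the instance allowances quoted in the comment above (assumptions
# to verify against your license agreement, not legal advice).
import math

INSTANCES_PER_LICENSE = {"standard": 2, "enterprise": 4, "datacenter": math.inf}

def licenses_needed(edition, vm_count):
    per_license = INSTANCES_PER_LICENSE[edition]
    return 1 if per_license == math.inf else math.ceil(vm_count / per_license)

# The three-guest example from the post (mail, SQL, web on one machine):
for edition in INSTANCES_PER_LICENSE:
    print(edition, licenses_needed(edition, 3))
```

For three guests, that is two Standard licenses versus one Enterprise license, so the cheaper path depends on the price gap between editions.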
You can virtualize almost anything, except systems that require specific hardware that is not compatible with either Virtual Server or VMware.
The most beautiful thing about virtualization is the ability to prepare for disaster recovery. It’s just so easy to do. I know you’ve already touched on this a bit in your post, but I just want to emphasize it a little more.
The performance bottleneck in virtualization is the hard disk. Implementing a SAN, if you have the budget room, or attaching separate storage would be good too. SAS is preferable to SATA if you want to virtualize heavy-duty, disk-intensive servers like Exchange and SQL.
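The disk-bottleneck point can be put in rough numbers by comparing the VMs' combined random-I/O demand against what a single spindle delivers. The per-drive IOPS figures below are era-appropriate rules of thumb, not measurements, and the per-VM demand numbers are hypothetical:

```python
# Back-of-envelope spindle count: total the VMs' random IOPS demand and
# divide by one drive's capability. Rule-of-thumb figures (assumptions):
# roughly 80 IOPS for a 7200 rpm SATA drive, 180 for a 15k rpm SAS drive.
import math

DRIVE_IOPS = {"sata_7200": 80, "sas_15k": 180}

def spindles_needed(vm_iops, drive):
    total = sum(vm_iops.values())
    return math.ceil(total / DRIVE_IOPS[drive])

# Hypothetical steady-state demand for the disk-heavy guests.
demand = {"exchange": 150, "sql": 200, "web": 40}
for drive in DRIVE_IOPS:
    print(drive, spindles_needed(demand, drive))
```

Even with made-up demand numbers, the ratio shows why SAS (or a SAN with many spindles) wins for Exchange and SQL: the same workload needs nearly twice the SATA spindles.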
I am using VMware server for virtual machines that house my email server, my web server, and others. One of my customers is hosting multiple VMware server VMs for various applications I have built for them.
The disaster recovery benefits cannot be overstated. We have some apps that are a real pain to set up and configure. Virtualization has made it so much easier for us to have confidence that if we install a service pack or make some other changes, we can recover easily if things go wrong.
I can’t speak much to the capacity/performance side of things…I’m not supporting large quantities of simultaneous users. But for my purposes, it has been a great thing.
We talked the other day about this after the user group meeting. What I did not tell you was that I got some great tips from Scott Hanselman’s blog. I enabled hardware assistance for virtualization and it changed everything. I also suggest using Win2003 and WinXP for virtual machines. I ran Vista for a while, but it takes up so much more space; the lighter OSes leave me more room for backup copies of my environments.
@Kent, Thanks for commenting. I was not aware of the licensing benefit you mentioned. Now that you say it, I seem to remember hearing something like that in passing, but did not recall it. At the moment I don’t think any of our needs require special hardware — just GHz, HD and RAM — so it is encouraging that we can likely virtualize everything. I’ll definitely push for the fastest drives we can get.
@Avonelle, First off, good to hear from you again. It’s been a while. Backup and recovery definitely came up as a huge benefit in our discussion, as did the option to move an entire server to new hardware with just a network file copy.
@Brennan, I remember you (or someone) mentioning Scott Hanselman’s post the other day, but I haven’t re-read it since he originally posted it. In my mind I kind of separated that conversation from this topic, because at the time I was asking you with respect to development and testing machines (Virtual “PC”, workstations, interactive logins, etc.), and this post is more related to servers (Virtual “Server”, running unattended, Win2003/2008 Server, SQL, IIS, etc.). I suppose many of the tips are the same either way, though. I’ll definitely check it out again. Thanks for the reminder.
Potential performance degradation is an issue that should be considered, but we’ve found that VMs can actually help us take advantage of the physical resources in some instances. As an example, we have a wide variety of Windows services, web services, and legacy applications that use very little CPU, memory, and network resources by themselves. Rather than lumping all these services onto one physical box, we segregated them into categories of services on separate VMs.
I also agree that VMs can greatly simplify deployment. Most of our .NET code base is easily deployed by simply copying files, but some of them (notably the legacy applications) are a royal pain because of dependencies on third-party applications and COM components. It took us a few weeks to deploy and test everything on the VMs initially, but future deployments are as simple as copying the image files from a backup store. Hopefully (crossing my fingers) this means no more monkeying with InstallShield and Wise install scripts.
Deploying most of the stuff on VMs should theoretically make it significantly easier to relocate to new colocation facilities in the future as well. We’ve done this a couple of times and it has always been extremely painful. Now that we have most everything on VMs, it should be a simple matter of moving the images instead of installing everything from scratch.
A word about how to estimate physical host requirements and specs: VMware, along with several other virtualization tool providers like Uptime, offers capacity-planning tools that largely automate the process. That, combined with the ease of deploying virtual lab environments for testing, takes the guesswork out of what kind and how much hardware to use.