When virtual machines were initially introduced, the basic idea behind using them was fairly simple. Many servers, such as DHCP servers and DNS servers, don’t even come close to using the hardware’s full potential. In fact, oftentimes less than ten percent of a server’s processing power is actually used on an ongoing basis. This underutilization of resources has been further compounded as processors have grown more powerful over the years. Virtualization allows companies to save money on hardware costs by making better use of a server’s available resources, allowing multiple virtual machines to use capacity that would otherwise have gone to waste. When implemented properly, this is a great alternative to spending lots of money on multiple physical servers.
Of course the real trick is configuring the host operating system and the guest operating systems to use resources in an efficient manner. After all, you want to gain the maximum usage out of your hardware, but at the same time, you don’t want to end up in a situation in which the server’s resources are spread too thin and performance begins to suffer. In this article, I’m going to try to walk you through this rather delicate balancing act.
Before I Begin
Before I get started, I want to explain that resource allocation methods vary considerably depending on what virtualization software you are using. The techniques that I am going to be discussing are intended for use with Microsoft’s Hyper-V. If you are using VMware or Microsoft’s Virtual Server or Virtual PC, then the basic concepts that I will be discussing will still be relevant for the most part, but you are going to have to make some adjustments to account for the needs of the virtualization product that you are using.
Host Operating System Configuration
I want to start out by talking about the host operating system. All of the guest operating systems are dependent on the host operating system, so it is important for the host operating system to be configured properly.
The first thing that you need to know about the host operating system is that you must be running a 64-bit version of Windows Server 2008 in order to use Hyper-V. Your guest operating systems will be able to run either 64-bit or 32-bit operating systems, but 64-bit is an absolute requirement for the host operating system.
Another requirement that you need to be aware of is that your hardware must support hardware level virtualization. In Microsoft’s previous virtualization products, the virtual machines ran on top of the host operating system, and any calls to the hardware were passed through the host operating system. This approach proved to be very inefficient though. With Hyper-V, guest operating systems communicate directly with the server’s hardware. This results in much better performance than what was previously possible, but it also means that you won’t be able to run Hyper-V unless your server supports hardware level virtualization.
Hardware level virtualization is available on both Intel and AMD platforms. If you are using an Intel processor, it must support Intel VT (Intel Virtualization Technology). AMD’s equivalent virtualization support is called AMD-V.
OK, I’ve talked about some of the absolute requirements for the host operating system, but let’s talk about the resources that the host operating system will need.
When it comes to virtualization, one of the most important resources that you will have to manage is memory. I recommend leaving 2 GB of RAM for the host operating system. Keep in mind that you don’t have to perform any special procedure to allocate memory to the host operating system, but you do have to specify how much memory each guest operating system receives. Your memory should be allocated in such a way that when you total the amounts of memory used by all of the guest operating systems that run simultaneously, there is at least a 2 GB difference between that sum and the total amount of memory installed in the server.
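The rule above boils down to a quick arithmetic check. Here is a minimal sketch; the server size and VM allocations are hypothetical figures, not recommendations for any particular workload:

```python
# Sketch: verify that the planned VM memory allocations leave at
# least 2 GB of headroom for the Hyper-V host operating system.
# All figures are examples.

HOST_RESERVE_GB = 2

def headroom_ok(total_installed_gb, vm_allocations_gb):
    """Return True if the simultaneously running VMs leave at
    least HOST_RESERVE_GB of RAM for the host operating system."""
    return total_installed_gb - sum(vm_allocations_gb) >= HOST_RESERVE_GB

# A server with 8 GB of RAM running three 2 GB virtual machines:
print(headroom_ok(8, [2, 2, 2]))   # leaves exactly 2 GB -> True
print(headroom_ok(8, [2, 2, 3]))   # leaves only 1 GB  -> False
```

If the check fails, either trim a virtual machine's allocation or run fewer of them at the same time.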
Obviously, you may need more or less memory depending on how Windows Server 2008 is configured. This brings up another good point though. Your host operating system’s only job is to host guest operating systems. Therefore, Hyper-V should be the only server role that is installed. If you need to run additional roles, then those roles should be installed in a guest operating system. Your host operating system needs to be able to service the guest operating systems with maximum efficiency, and it can’t do that if additional roles are installed.
One thing that I want to quickly mention before I move on is that although it is considered to be a bad practice, it is possible for the host operating system to host additional server roles. However, I have run across several posts on the Internet that have said that installing the Hyper-V role on a domain controller essentially destroyed Windows. I have never tried this myself, so I can’t tell you exactly what happens if you do try to install Hyper-V on a domain controller. Even so, enough people have blogged about the problem that I thought that I should at least mention it to you.
It might sound a little strange, but it has been my experience that network cards can actually be one of the biggest bottlenecks in some situations. For example, if you are running several virtual machines, each of which is running one or more network intensive applications, then a network connection could quickly become saturated.
Hyper-V is designed in such a way that you can install multiple network cards in your server, and then assign a different NIC to each virtual server instance. In an ideal situation, you should have a NIC for the host operating system, and a dedicated NIC for each guest operating system. This isn’t always possible though.
For example, I have a server that I use to host several virtual machines. Although this is a fairly high end server, it only has three expansion slots. Two of those expansion slots are filled with RAID controllers, so that only left one expansion slot that I could use for a NIC. Fortunately, the server also features an integrated NIC, giving me a total of two gigabit connections, which meets my needs. I just assigned each virtual machine to use one NIC or the other, based on the amount of network traffic that I expected the virtual machine to generate.
Keep in mind that distributing virtual machine related network traffic across your available NICs does not always mean assigning half of the virtual machines to one NIC, and half of the virtual machines to the other. Some virtual machines are going to send or receive a lot more traffic than others, and this is an important consideration when assigning NICs to a virtual machine. The good news is that Hyper-V is flexible enough that you can reallocate NIC usage at a later time if necessary. Switching NICs does require you to shut down the virtual machine though.
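To illustrate distributing by expected traffic rather than by VM count, here is a sketch of a simple greedy assignment. The VM names and traffic estimates are made up, and real planning should also account for peak loads rather than averages:

```python
# Sketch: balance virtual machines across NICs by expected traffic
# rather than splitting them evenly by count. Traffic estimates
# (in Mbps) are hypothetical.

def assign_vms_to_nics(vm_traffic, nic_names):
    """Greedy assignment: each VM (heaviest first) is placed on the
    NIC with the least expected traffic assigned so far."""
    load = {nic: 0 for nic in nic_names}
    assignment = {}
    for vm, mbps in sorted(vm_traffic.items(), key=lambda kv: -kv[1]):
        nic = min(load, key=load.get)
        assignment[vm] = nic
        load[nic] += mbps
    return assignment, load

vms = {"FileServer": 600, "DC": 50, "WebServer": 400, "PrintServer": 30}
assignment, load = assign_vms_to_nics(vms, ["NIC1", "NIC2"])
print(assignment)  # the heaviest VM ends up alone on one NIC
print(load)
```

Notice that the result is one VM on the first NIC and three on the second, which is a better split than two-and-two would have been for these numbers.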
The first thing that I want to talk about is performance monitoring. It seems a little odd to me, but for some reason performance monitoring seems as though it has become the hot virtualization topic almost overnight. I think that part of the reason for this is that people are starting to realize that the Performance Monitor cannot be completely trusted in a Hyper-V environment. It isn't just Performance Monitor that becomes unreliable though. Many of the other available resource monitoring mechanisms can also no longer be trusted. For example, it is very common for the Hyper-V management console to report completely different levels of CPU usage than what is displayed under the Windows Task Manager. In fact, if you look at Figure A, you can see that the Hyper-V manager is reporting 5% CPU utilization, while the Windows Task Manager is reporting that the virtual machine is using 0% of the CPU resources.
Figure A: The Windows Task Manager and the Hyper-V Manager rarely agree as to how much CPU time a virtual machine is using
The reason for this discrepancy is a little bit complicated, but I will try to keep my explanation as simple as I can. The most important thing to remember when trying to understand the reason for this discrepancy is that Hyper-V allows virtual machines to communicate directly with the server's hardware. Of course this raises the question that if this is true then why does the Windows Task Manager even show a process for the virtual machine?
The reason why Windows Task Manager shows a process for the virtual machine is that the host operating system has to be able to interact with the guest machine to a minimal degree. Remember that Hyper-V allows you to do things like snapshotting or saving a virtual machine’s state. The host operating system uses a worker process to connect to virtual machines so that these types of tasks can be performed. The same process is also used for emulating hardware devices.
OK, so we have established that the Windows Task Manager is completely unaware of how much CPU time a virtual machine is actually using. This raises at least a couple of other questions though. One question is how much CPU time the virtual machines are actually using. Perhaps the more obvious question, though, is how we can track that usage.
As I mentioned earlier, Hyper-V allows virtual machines to communicate directly with hardware through the use of the Hypervisor. This is where the vast majority of the CPU usage actually occurs for the virtual machine. You can get a rough idea of how much CPU time a virtual machine is really using by looking at either the Performance Monitor or the Windows Task Manager from within the virtual machine.
If you look at Figure B, you can see that I have run the Task Manager inside of my virtual server. At the time that the screen capture was taken, the Windows Task Manager reports the CPU usage for the virtual machine at 13%, while the Hyper-V Manager only reports the CPU usage at 4% at that same moment.
Figure B: There is a big discrepancy between the Windows Task Manager and the Hyper-V Manager
As you can see, there is a big difference between the level of CPU utilization reported by the Windows Task Manager from within the virtual machine, and what is being reported by the Hyper-V Manager. So which one is right? Actually, this is a trick question; both reports are completely wrong, but the Windows Task Manager is a whole lot closer to being right than the Hyper-V Manager is.
The reason why you cannot completely trust the information given to you by the Windows Task Manager is that the performance data being reported is skewed. For example, the machine that I took the screen capture on has a single processor with four cores. As such, there is no dedicated processor servicing the virtual machine. Instead, the virtual machine receives a percentage of the physical hardware's total CPU time. The virtual machine is not aware of this. It thinks that it has full rein of the system CPU. The CPU usage report provided by the Windows Task Manager would only be accurate if the virtual machine had been allocated all of the system’s CPU time or if there was a dedicated processor servicing the virtual machine. As it is, the Windows Task Manager is actually reporting what percentage of the CPU resources that have been allocated to the virtual machine are actually being used.
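As a back-of-the-envelope illustration (this is not how either tool actually computes its figures), you can approximate the host-level usage by scaling the guest-reported percentage by the VM's share of the machine's logical processors:

```python
# Rough sketch of why the two readings differ: the guest's Task
# Manager reports usage as a percentage of the CPU share the VM
# was given, so scaling by that share approximates a host-level
# figure. The numbers mirror the example in the text.

def guest_to_host_percent(guest_percent, vm_logical_procs, host_logical_procs):
    """Approximate host-level CPU usage from a guest-reported figure."""
    return guest_percent * vm_logical_procs / host_logical_procs

# One virtual processor on a four-core host, guest reporting 13%:
print(guest_to_host_percent(13, 1, 4))  # ~3.25% of the whole machine
```

That 3.25% lands in the same ballpark as the 4% the Hyper-V Manager reported in Figure B, which is why both readings can be "wrong" and still be internally consistent.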
So how do you know how much CPU time a virtual machine is actually consuming? I do not think that there is really any way to know for sure exactly how much CPU time a virtual machine is consuming. However, Microsoft has integrated a number of Hyper-V specific counters into the host operating system’s copy of Performance Monitor. Since these counters are Hyper-V aware, you can use them to get a pretty good idea of how the virtual machines are performing.
There are way too many Hyper-V related counters to talk about them all, but there are two counters that you might be particularly interested in. One such counter is the Hyper-V Hypervisor Virtual Processor\% Guest Run Time counter. It is designed to show you how much CPU time a guest machine is really using. For example, if you look at Figure C, you can see that my virtual server was actually consuming about 31% of the server’s physical CPU resources when the screen shot was taken, but the Hyper-V Manager was only reporting a utilization of 7%.
Figure C: The Hyper-V Hypervisor Virtual Processor\% Guest Run Time counter is probably the most reliable way of seeing how much CPU time a virtual machine is using
Another counter that is interesting to look at is the Hyper-V Hypervisor Virtual Processor\% Hypervisor Run Time counter. It shows you how hard the server’s Hypervisor is working to manage the virtual machines. This counter won’t normally be anywhere near as high as the last counter that I showed you unless you have a lot of virtual machines running at the same time. Even so, this counter is worth paying attention to because it does reflect a level of processor usage that is otherwise unreported.
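Since the two counters report different slices of real CPU consumption, adding them together gives a rough picture of what a virtual machine actually costs the physical processor. The counter values in this sketch are illustrative, not measurements:

```python
# Sketch: % Guest Run Time covers CPU work done on the VM's behalf,
# while % Hypervisor Run Time covers the overhead of managing it.
# Their sum approximates the VM's total cost to the physical CPU.

def total_vm_cpu_cost(guest_run_time, hypervisor_run_time):
    """Combine the two Hyper-V counters into one rough total."""
    return guest_run_time + hypervisor_run_time

# A VM whose guest run time is 31% with 2.5% of hypervisor overhead:
print(total_vm_cpu_cost(31.0, 2.5))  # 33.5% of physical CPU time
```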
So Which One Do You Use?
In this article, I've shown you several different ways of obtaining performance benchmarks for virtual machines, so you may be wondering which method you should use. The answer really just depends on your goal. If you're trying to find out how much CPU time is actually being consumed, then I would recommend monitoring the Hyper-V Hypervisor Virtual Processor\% Guest Run Time counter. However, if your goal is to find out whether or not a virtual machine's performance is acceptable for a particular application, then you are best off running the Performance Monitor in the usual way, but from inside the virtual machine. The reason for this is that the resources that are displayed when you run the Performance Monitor inside a virtual machine are the same resources that are available to applications that are running on that virtual machine.
The Role of Metrics
In the previous part of this article series, I explained that the various metrics that are typically used for measuring the amount of CPU time being consumed by virtual machines are extremely misleading. This greatly complicates the task of allocating CPU resources to the virtual machines that are running on a server, because it is difficult to figure out how much CPU time each virtual machine is actually using. Even so, there are some ways of getting the job done.
The key to allocating CPU, or any other types of resources in Hyper-V, is to remember that everything is relative. For example, Microsoft has released some guidelines for virtualizing Exchange Server. One of the things that was listed was that the overall system requirements for Exchange Server are identical whether Exchange is being run on a virtual machine, or on a dedicated server.
Assuming that the same principle applies to other types of environments, we can use basic performance monitoring within a virtual machine as a way of helping us to allocate resources to a virtual machine. For example, most of the books that I have read on performance monitoring state that, on average, no more than 80% of the CPU's resources should be consumed. If we look at the %Processor Time counter in Performance Monitor (within a virtual machine), we can see what percentage of the processor time the virtual machine thinks is in use. Keep in mind that this is not the actual amount of CPU time that is being used by the virtual machine, but it does not really matter. What is important is perception. In other words, how much CPU time does the virtual machine think that it is using?
Virtual Machine CPU Resources
Obviously, running Performance Monitor within a virtual machine is easy. The real question is what do you do if you find out that a virtual machine’s CPU resources are being over or under utilized?
As you get a feel for how a virtual machine's CPU resources are being utilized, you can allocate more or less CPU time to the virtual machine. For example, if Performance Monitor tells you that a virtual machine's CPU is constantly running at 100%, then it means that not enough CPU time is being allocated to the virtual machine, or that the server simply does not have sufficient CPU resources to support the virtual machine's needs. In either case, you would need to check to see how much CPU time is being allocated to the virtual machine. If not all of the server's CPU time is being consumed, you could allocate additional CPU time to the virtual machine, which should bring down the value of the %Processor Time counter.
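The decision process just described can be summarized in a short sketch. The 80% threshold is the performance-monitoring rule of thumb mentioned earlier; treat this logic as a starting point, not a hard rule:

```python
# Sketch of the CPU reallocation decision: if a VM's %Processor Time
# is pegged, either grant it more CPU time (when the host has spare
# capacity) or conclude that the host itself is out of headroom.

def cpu_recommendation(vm_processor_time, host_cpu_used):
    """Both arguments are percentages (0-100)."""
    HIGH = 80  # common performance-monitoring rule of thumb
    if vm_processor_time < HIGH:
        return "no change needed"
    if host_cpu_used < 100:
        return "allocate more CPU time to this VM"
    return "host CPU is exhausted; reduce other VMs or add hardware"

print(cpu_recommendation(100, 60))   # host has headroom to give
print(cpu_recommendation(100, 100))  # host itself is the bottleneck
```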
Before I show you how to allocate CPU resources, it is important to keep in mind that a single host operating system running Hyper-V can host multiple guest operating systems, and that resources are allocated separately for each virtual machine. Therefore, if you want to provide additional CPU resources to a particular virtual machine, you may have to take some CPU resources away from another virtual machine, so that those resources can be freed up for the virtual machine that really needs them.
Adjusting the VM CPU Resources
With that in mind, open the Hyper-V Manager, and then right click on the virtual machine that you want to adjust the CPU resources on. Select the Settings command from the resulting shortcut menu, and Windows will display the settings for the virtual machine. Keep in mind that some of the settings will not be available unless the virtual machine is shut down.
When the Settings window opens, the left side of the screen will list various system components that you can adjust the settings for. Select the Processor from the list. When you do, the right side of the screen will display some processor specific settings, as shown in Figure A.
Figure A: You can allocate processor resources on a per virtual machine basis
As you can see in the figure above, there are several different options that you can set in regard to the CPU resources assigned to the virtual machine. The first setting in the list allows you to choose the number of logical processors to be assigned to the virtual machine. Logical processors mirror the number of physical cores installed in the machine. For example, the server that I used to capture the screenshot above contains four processor cores. That being the case, I have the option of assigning 1, 2, or 4 logical processors to the virtual machine. In this particular case I am only assigning a single logical processor to the virtual machine, even though the machine could benefit from having multiple logical processors. The reason why I am doing this is that this particular server hosts three separate virtual machines. By limiting each virtual machine to a single logical processor, I am effectively reserving a CPU core for the host operating system, and one for each of the virtual machines.
The next setting on the list is the Virtual Machine Reserve setting. This setting allows you to reserve a percentage of the machine's overall CPU resources for this particular virtual machine. This setting is handy if you have a virtual machine that is running CPU intensive applications, and you want to ensure that it always has at least a minimal level of CPU resources available to it. Notice in the figure that the virtual machine reserve is set to zero. This means that I am not specifically reserving any CPU resources for the virtual machines.
The next setting is the Virtual Machine Limit setting. This setting is basically the opposite of the Virtual Machine Reserve setting. Rather than guaranteeing a minimal level of CPU resources, this setting prevents the virtual machine from consuming an excessive amount of the available CPU resources. If you look at the figure, you can see that the virtual machine limit is set to 100. Just beneath that, the percentage of total system resources is set to 25. The reason for the seemingly contradictory settings is that the virtual machine is allowed to use up to 100% of the CPU resources associated with one logical processor. Since there are four logical processors in the machine, this constitutes 25% of the machine’s total CPU resources.
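The arithmetic behind those seemingly contradictory numbers is simple: a per-logical-processor limit scaled by the VM's share of logical processors yields the percentage of total system resources. A quick sketch:

```python
# Sketch: convert a per-logical-processor limit into the
# "percent of total system resources" figure the dialog shows.

def percent_of_total(vm_limit_percent, vm_logical_procs, host_logical_procs):
    """100% of 1 of 4 logical processors is 25% of the machine."""
    return vm_limit_percent * vm_logical_procs / host_logical_procs

print(percent_of_total(100, 1, 4))  # 25.0, matching the figure
print(percent_of_total(50, 2, 4))   # 25.0 again, via a different route
```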
The last setting on the list is the Relative Weight setting. You can use this setting as an alternative to the settings that I have already discussed. The basic idea is that virtual machines with higher relative weights receive more CPU time, and virtual machines with lower relative weights receive less CPU time. By default, all virtual machines are assigned a relative weight of 100 to prevent any one virtual machine from receiving preferential treatment.
One of the things that I find most interesting about memory allocation for Hyper-V is that Microsoft does not seem to make any specific recommendations as to the required amount of memory. If you check the hardware requirements section of the Hyper-V website, you will see that Microsoft really only lists two different requirements. First, the host operating system must be running a 64-bit version of Windows Server 2008. Granted, this is not exactly a hardware requirement, but it does imply the need for a 64-bit processor. The other requirement is that the server must support hardware assisted virtualization. Both Intel and AMD solutions are supported.
These are the only two hardware requirements that are listed beyond those required for running the host operating system. My own personal observation is that when you launch a virtual machine, it seems to consume a trivial amount of memory on the host operating system, but not enough to really worry about, as long as the host operating system is not deprived of resources. Of course this is in addition to the memory that you allocate to the virtual machine.
I have spent a lot of time working with Hyper-V, and I have experimented with a lot of different memory configurations. My own recommendation for allocating memory to virtual machines is to start out by figuring out how many virtual machines you want to run simultaneously. After doing so, plan the system requirements for those virtual machines just as you would if they were physical machines. Finally, add all of the required memory together, and then add 2 GB for the host operating system. Some people like to allocate a bit more memory to the host operating system, but I have found that this really is not necessary as long as the host operating system is not hosting any applications other than Hyper-V.
I recently virtualized some of my production servers using a physical machine that had 8 GB of RAM. I subtracted 2 GB off the top for the host operating system, which left me with 6 GB to play with. I split the remaining memory evenly into three 2 GB chunks that I used for three separate virtual machines.
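That planning process boils down to a simple sum. Here is the same 8 GB example expressed as a sketch; as before, the 2 GB host reservation reflects my own experience rather than a published Microsoft requirement:

```python
# Sketch of the memory-planning steps: size each VM as though it
# were a physical server, add 2 GB for the host operating system,
# and compare the total to the RAM actually installed.

HOST_OS_GB = 2

def plan_memory(vm_requirements_gb):
    """Return the total RAM (GB) the host server needs."""
    return sum(vm_requirements_gb) + HOST_OS_GB

# Three 2 GB virtual machines on the 8 GB server described above:
needed = plan_memory([2, 2, 2])
print(needed)       # 8 GB required
print(needed <= 8)  # fits exactly, with nothing to spare
```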
One important thing to keep in mind when you are allocating memory to virtual machines is that you can only allocate memory to virtual machines, not to the host operating system. The host operating system uses what is left over after all of your virtual machines have started. It is also important to remember that simply allocating memory to a virtual machine does not cause the memory to be consumed. The memory is only consumed once the virtual machine has been started. Until you start a virtual machine, the memory that has been allocated to that virtual machine is available to the host operating system.
The reason why I wanted to point this out is to show that it is important not to shortchange the host operating system. After all, you cannot reserve memory for the host operating system, and it gets whatever memory is left over after the virtual machines start, so it is easy to accidentally leave the host operating system with insufficient memory.
Disk Resource Allocation
Every virtual machine that you create is assigned one or more virtual hard drives. A virtual hard drive is simply a large file that acts as a repository for all of the files that are associated with the virtual machine. Like any file, you can tell Windows to put a virtual hard drive just about anywhere so long as there is sufficient disk space. Of course this leaves the question of where you should put your virtual hard drive files so that the server will perform optimally.
I tend to think that the answer to this question depends on how much money you want to spend. From a performance standpoint, the ideal solution would be to use an independent storage array for each virtual hard drive. Another optimal solution would be to store your virtual hard drives on a SAN.
The biggest problem with these two solutions is cost. The whole point of using virtualization is usually to help reduce hardware costs. You are not exactly reducing costs if you end up having to purchase a bunch of direct attached storage arrays, or if you end up having to build a Storage Area Network. On the other hand, if you happen to already have a SAN in place then you should definitely use it.
So, what if you do not have a huge budget and you do not have a SAN at your disposal? Hyper-V does not really limit you in terms of where you can put the virtual hard drives. From a software perspective, it makes no difference to Hyper-V whether you put the virtual hard drive files on the server’s system volume or put each virtual hard drive on a dedicated RAID array. Of course, if you did put all of your virtual hard drives onto the system volume, then the server's performance would suffer tremendously.
I have several servers in my own organization that are acting as host servers for virtual machines. I use two different techniques for placing virtual hard drives, depending upon how heavily I anticipate disk resources being used.
If the virtual machines being hosted on a server are not running disk intensive applications, then I usually end up just creating one large RAID 10 array and using it to store all of the virtual hard drives.
If I know that a virtual server is going to be more disk intensive, then I start with the same physical hardware that I would use to create a large RAID 10 array. Rather than creating one large array, however, I allocate the individual disks into several smaller arrays. By doing so I am able to ensure that no one virtual hard drive consumes all the available disk throughput. This technique also helps me avoid the costs associated with purchasing individual dedicated arrays for each virtual machine.
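To illustrate, the same pool of disks can be carved up either way. This sketch assumes RAID 10, which mirrors pairs of striped disks and therefore needs an even number of disks (at least four) per array; the disk counts are examples:

```python
# Sketch: divide one pool of physical disks into a chosen number
# of RAID 10 arrays, validating the basic RAID 10 constraints.

def split_into_arrays(total_disks, arrays):
    """Divide disks evenly among the requested number of arrays.
    Each RAID 10 array needs an even count of at least 4 disks."""
    per_array = total_disks // arrays
    if per_array < 4 or per_array % 2:
        raise ValueError("each RAID 10 array needs an even count of 4+ disks")
    return [per_array] * arrays

print(split_into_arrays(12, 1))  # one large array:        [12]
print(split_into_arrays(12, 3))  # three smaller arrays:   [4, 4, 4]
```

Three four-disk arrays give up some total capacity efficiency compared with one twelve-disk array, but no single virtual hard drive can then monopolize the whole disk subsystem.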
I have found that both of these techniques work really well for me. The biggest trick is just anticipating what type of load each individual virtual machine is going to place on the disk subsystem.