Monitoring and optimizing host server performance

When server virtualization first became popular among IT professionals, virtualization technology was marketed as a way to decrease costs by making better use of existing server hardware. The idea was that many servers used only 10% to 20% of their available resources, so it made sense to combine those servers onto a single box rather than purchasing dedicated hardware for each one.

But server virtualization has undergone a fundamental shift in recent years. Today, virtualization is more about achieving flexibility and resiliency within data centers. Rather than virtualizing only low-utilization servers, some organizations have begun virtualizing entire data centers. Doing so allows them to treat the host servers as a pool of resources that can be allocated on an as-needed basis.

Although this approach offers flexibility and a degree of protection against hardware failures, it also means virtualizing high-demand servers that were previously considered poor candidates for virtualization. Now is the time to optimize your virtualization hosts so that your servers can cope with the ever-increasing demands placed upon them.

Disk resources
A server’s hard drives are by far its slowest components. In virtual data centers, it’s important to design your storage subsystem in a way that prevents it from becoming a major bottleneck. The ideal solution is to use a SAN for virtual server storage. Even if a SAN isn’t in the budget, you can still take some steps to ensure that disk resource contention doesn’t bog down your virtual machines (VMs).

Rule No. 1 is that the host operating system should always be placed on a dedicated drive -- not just a dedicated volume. This keeps it from competing with the VMs for disk resources. If your host server can accommodate an extra drive, then consider moving the host operating system’s pagefile to a dedicated drive as well.

RAID arrays are essential to adequate virtual server performance. At a minimum, your disk arrays should use RAID 1, but RAID 1+0 is a better choice because it provides fault tolerance without the performance overhead of RAID 5. If possible, dedicate a disk array to each virtual server.
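To see why RAID 1+0 usually beats RAID 5 for write-heavy virtual machine storage, it helps to run the numbers. The sketch below applies the commonly cited rule-of-thumb write penalties (2 for RAID 1 and RAID 1+0, 4 for RAID 5) to estimate functional IOPS. The spindle count, the per-disk IOPS figure and the 70/30 read/write split are illustrative assumptions, not measurements from any particular array.

```python
# Back-of-envelope IOPS estimate for candidate RAID levels.
# Rule-of-thumb write penalties: RAID 1 and 1+0 write every block twice;
# RAID 5 turns each write into two reads plus two writes.
WRITE_PENALTY = {"RAID 1": 2, "RAID 1+0": 2, "RAID 5": 4}

def functional_iops(spindles, iops_per_disk, read_pct, level):
    raw = spindles * iops_per_disk
    write_pct = 1.0 - read_pct
    # Common sizing formula: reads pass straight through, while writes
    # are divided by the RAID level's write penalty.
    return raw * read_pct + (raw * write_pct) / WRITE_PENALTY[level]

# Illustrative assumptions: 4 spindles, 130 IOPS per disk, 70% reads.
# The same spindle count is used for every level purely for comparison.
for level in WRITE_PENALTY:
    print(f"{level:8s}: ~{functional_iops(4, 130, 0.70, level):.0f} IOPS")
```

Under these assumptions the RAID 5 array loses roughly 10% of its functional IOPS to the write penalty, and the gap widens as the workload becomes more write-heavy.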

Although the type of storage array that you use is important, the disks in the array matter just as much. If two or more virtual servers share an array, consider using 10,000 RPM drives. They cost a bit more than 7,200 RPM drives but will help your virtual servers perform better.
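The advantage of faster spindles is easy to quantify: a disk's average rotational latency is half the time of one full revolution, so it falls directly out of the RPM figure. This quick calculation uses spindle speed alone and ignores seek time:

```python
# Average rotational latency = half of one revolution, in milliseconds.
for rpm in (7_200, 10_000, 15_000):
    latency_ms = (60_000 / rpm) / 2
    print(f"{rpm:>6} RPM: {latency_ms:.2f} ms average rotational latency")
```

Stepping up from 7,200 RPM to 10,000 RPM shaves more than a millisecond off every rotational wait, which adds up quickly when several VMs share the same spindles.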

Don’t forget to use hot-swappable SCSI drives in your storage arrays. You don’t want to have to take an entire array offline to replace a failed hard drive, especially if multiple virtual servers share the array.

Regardless of which type of storage you are using, make sure that you have the appropriate driver installed. I have seen several real-life situations in which Windows automatically identified a storage device but did so incorrectly. Although the storage devices worked in each case, the performance was less than optimal.

You can gain an additional performance boost by configuring your virtual servers to use virtual hard drives of a fixed size. Although dynamically expanding virtual hard drives are handy, they can kill a server’s performance.
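If you want to verify which of your existing virtual hard drives are fixed and which are dynamically expanding, you can read the disk type straight from the file. Microsoft's VHD specification places a 512-byte footer (beginning with the cookie "conectix") at the end of the file, with a big-endian disk-type field at offset 60: 2 means fixed, 3 dynamic and 4 differencing. The sketch below assumes that layout, and the D:\VMs folder is a placeholder for your own VM storage path.

```python
import pathlib
import struct

# VHD disk types per Microsoft's VHD specification.
DISK_TYPES = {2: "fixed", 3: "dynamic", 4: "differencing"}

def vhd_type(path):
    """Read the 512-byte VHD footer and return the disk type."""
    with open(path, "rb") as f:
        f.seek(-512, 2)            # the footer lives in the last 512 bytes
        footer = f.read(512)
    if footer[:8] != b"conectix":  # footer cookie
        return "not a recognizable VHD footer"
    # Disk type is a big-endian uint32 at offset 60 of the footer.
    (disk_type,) = struct.unpack_from(">I", footer, 60)
    return DISK_TYPES.get(disk_type, f"unknown ({disk_type})")

# Placeholder VM storage folder -- adjust to your environment.
for vhd in pathlib.Path(r"D:\VMs").glob("*.vhd"):
    print(f"{vhd.name}: {vhd_type(vhd)}")
```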

Memory and CPU resources
Physical memory is the single biggest factor in the number of VMs that a server can host and in the efficiency of those VMs. Always install as much memory as your server’s system board will accommodate. Don’t be shy about allocating memory to your VMs but make sure to leave enough free memory for the host OS to function properly.
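As a sanity check before you carve up a host's memory, something like the following can help. It uses the third-party psutil package, and the 4 GB host reserve is an assumed figure rather than a vendor recommendation; size it for your own hypervisor and management agents.

```python
import psutil  # third-party: pip install psutil

# Assumed headroom to leave for the host OS -- tune for your environment.
HOST_RESERVE_GB = 4

mem = psutil.virtual_memory()
total_gb = mem.total / 2**30
allocatable_gb = total_gb - HOST_RESERVE_GB

print(f"Physical memory : {total_gb:.1f} GB")
print(f"Host reserve    : {HOST_RESERVE_GB} GB (assumed)")
print(f"Safe to allocate: {allocatable_gb:.1f} GB across all VMs")
```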

Some virtualization products do not prevent an administrator from overcommitting a server's CPU resources; they allow you to allocate more virtual CPUs than there are physical CPU cores in the system. For the best performance, reserve at least two CPU cores for the host operating system and make sure that every virtual CPU you allocate has a corresponding physical CPU core.

Keep in mind that this recommendation is aimed at optimal performance. You can sometimes allocate more virtual CPUs than the number of physical CPU cores installed in your server and still achieve a reasonable level of performance, but it won't be optimal.
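A quick sanity check along these lines can catch accidental overcommitment before you deploy. The planned vCPU total below is a hypothetical figure, and note that os.cpu_count() reports logical processors, so it will overstate the physical core count on hyper-threaded systems:

```python
import os

RESERVED_HOST_CORES = 2  # cores kept back for the host OS, per the rule above

physical_cores = os.cpu_count() or 1    # caution: counts logical processors
planned_vcpus = 12                      # hypothetical: sum of vCPUs across planned VMs

available = physical_cores - RESERVED_HOST_CORES
if planned_vcpus <= available:
    print(f"OK: {planned_vcpus} vCPUs fit within {available} available cores")
else:
    print(f"Overcommitted: {planned_vcpus} vCPUs vs {available} cores "
          f"({planned_vcpus / available:.1f}:1 ratio)")
```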

Host operating system
One often-overlooked aspect of virtual server optimization is that the host operating system places its own resource demands on the hardware. Not all virtualization products rely on a conventional Windows Server operating system. Hyper-V Server, for example, is a dedicated, standalone product with a much smaller footprint than a full-blown Windows Server OS.

If maximum performance is your primary goal, use a standalone virtualization product, whether that is Hyper-V Server or something else. Sometimes, though, system management requirements dictate running a traditional OS on the host server. In that case, there are steps you can take to minimize the host operating system's overhead.

Begin by taking an inventory of the processes that are running on your host OS and determining which are necessary and which are not. Under no circumstances should your host OS be running applications other than a few critical system management applications, such as backup agents or antivirus software. All other applications should be uninstalled, and unnecessary system services should be disabled.
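A sketch like the following (again using the third-party psutil package) can help with that inventory. The allowlist names are hypothetical placeholders, and the output is a list of candidates to review rather than processes to kill, since plenty of legitimate Windows system processes will show up alongside any stray applications.

```python
import psutil  # third-party: pip install psutil

# Hypothetical allowlist -- substitute the management agents you actually run.
ALLOWED = {"backupagent.exe", "antivirus.exe"}

for proc in psutil.process_iter(["name", "memory_info"]):
    name = (proc.info["name"] or "").lower()
    mem = proc.info["memory_info"]
    if name and name not in ALLOWED and mem is not None:
        print(f"Review: {name} ({mem.rss / 2**20:.0f} MB resident)")
```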

Make sure that the antivirus software that is running on the host OS is not configured to scan folders containing virtual hard drives or any other components related to your VMs. At best, scanning those folders will diminish your server’s performance. At worst, your antivirus software could potentially corrupt a virtual hard drive file.
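One simple way to audit this is to compare your antivirus exclusion list against the folders that actually hold virtual hard drives and VM configuration files. The paths below are assumed example values; PureWindowsPath comparisons are case-insensitive, which matches Windows path semantics.

```python
import pathlib

# Paths your antivirus is configured to exclude (assumed example values).
av_exclusions = [r"D:\VMs", r"E:\VHDs"]

# Folders that actually hold VHDs and VM configuration files (examples).
vm_storage = [r"D:\VMs", r"E:\VHDs", r"F:\ClusterStorage\Volume1"]

excluded = [pathlib.PureWindowsPath(p) for p in av_exclusions]

def is_covered(folder):
    # A folder is covered if it matches an exclusion or sits beneath one.
    f = pathlib.PureWindowsPath(folder)
    return any(f == e or e in f.parents for e in excluded)

for folder in vm_storage:
    status = "excluded" if is_covered(folder) else "NOT excluded -- fix this"
    print(f"{folder}: {status}")
```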

Another optimization technique that you can use is to change the host operating system’s processor scheduling method. Windows Server offers a setting that allows you to gear processor scheduling to favor either running programs or background services. Virtualization hosts should always favor background services.
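In the Windows GUI, this setting lives under the System Properties Advanced tab; under the hood it maps to the Win32PrioritySeparation registry value. The commonly documented values are 0x18 to favor background services and 0x26 to favor programs, which the Windows-only sketch below assumes:

```python
import winreg  # Windows-only standard library module

KEY = r"SYSTEM\CurrentControlSet\Control\PriorityControl"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    value, _ = winreg.QueryValueEx(key, "Win32PrioritySeparation")

# 0x18 favors background services (what a virtualization host wants);
# 0x26 favors foreground programs.
if value == 0x18:
    print("Processor scheduling favors background services -- OK")
else:
    print(f"Win32PrioritySeparation is {value:#x}; "
          "set it to 0x18 to favor background services")
```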

Finally, if your host server performs automatic defragmentation, schedule the defragmentation process to occur only during idle times. Likewise, if you perform automated defragmentation of your VMs, stagger it during off-peak hours so that no two VMs defragment simultaneously.
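Staggering those jobs is simple arithmetic: give each VM its own slot in the off-peak window. The VM names, the window start time and the 90-minute slot length below are all assumed values:

```python
from datetime import datetime, timedelta

vms = ["web01", "web02", "sql01", "file01"]  # hypothetical VM names

# Assumed off-peak window starting at 01:00, with a 90-minute slot per VM
# so that no two defragmentation jobs overlap.
window_start = datetime(2010, 10, 1, 1, 0)   # the date is just a placeholder
slot = timedelta(minutes=90)

for i, vm in enumerate(vms):
    start = window_start + i * slot
    print(f"{vm}: defragment at {start:%H:%M}")
```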

As virtualization hosts begin to handle more demanding workloads, it becomes more important than ever to optimize your host server’s performance. Optimization ensures that the available pool of hardware resources is used as efficiently as possible.

ABOUT THE AUTHOR: Brien M. Posey has received Microsoft’s Most Valuable Professional award six times for his work with Windows Server, IIS, file systems/storage and Exchange Server. He has served as CIO for a nationwide chain of hospitals and healthcare facilities and was once a network administrator for Fort Knox.

This was first published in October 2010
